Sample records for methods applied examples

  1. 26 CFR 1.482-8 - Examples of the best method rule.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... illustrate the comparative analysis required to apply this rule. As with all of the examples in these... case. Example 10. Cost of services plus method preferred to other methods. (i) FP designs and...

  2. 26 CFR 1.482-8 - Examples of the best method rule.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... illustrate the comparative analysis required to apply this rule. As with all of the examples in these... case. Example 10. Cost of services plus method preferred to other methods. (i) FP designs and...

  3. 26 CFR 1.482-8 - Examples of the best method rule.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... illustrate the comparative analysis required to apply this rule. As with all of the examples in these... case. Example 10. Cost of services plus method preferred to other methods. (i) FP designs and...

  4. 26 CFR 1.482-8 - Examples of the best method rule.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... illustrate the comparative analysis required to apply this rule. As with all of the examples in these... case. Example 10. Cost of services plus method preferred to other methods. (i) FP designs and...

  5. Negative Example Selection for Protein Function Prediction: The NoGO Database

    PubMed Central

    Youngs, Noah; Penfold-Brown, Duncan; Bonneau, Richard; Shasha, Dennis

    2014-01-01

    Negative examples – genes that are known not to carry out a given protein function – are rarely recorded in genome and proteome annotation databases, such as the Gene Ontology database. Negative examples are required, however, for several of the most powerful machine learning methods for integrative protein function prediction. Most protein function prediction efforts have relied on a variety of heuristics for the choice of negative examples. Determining the accuracy of methods for negative example prediction is itself a non-trivial task, given that the Open World Assumption as applied to gene annotations rules out many traditional validation metrics. We present a rigorous comparison of these heuristics, utilizing a temporal holdout, and a novel evaluation strategy for negative examples. We add to this comparison several algorithms adapted from Positive-Unlabeled learning scenarios in text-classification, which are the current state of the art methods for generating negative examples in low-density annotation contexts. Lastly, we present two novel algorithms of our own construction, one based on empirical conditional probability, and the other using topic modeling applied to genes and annotations. We demonstrate that our algorithms achieve significantly fewer incorrect negative example predictions than the current state of the art, using multiple benchmarks covering multiple organisms. Our methods may be applied to generate negative examples for any type of method that deals with protein function, and to this end we provide a database of negative examples in several well-studied organisms, for general use (The NoGO database, available at: bonneaulab.bio.nyu.edu/nogo.html). PMID:24922051
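    The negative-selection idea above can be illustrated with a minimal heuristic for positive-unlabeled (PU) data: propose as negatives the unlabeled items least similar to the known positives. This is a generic sketch only, with invented toy vectors; it is not one of the NoGO algorithms from the paper.

```python
# Minimal sketch of a distance-based negative-example heuristic for
# positive-unlabeled (PU) data: unlabeled items least similar to any
# known positive are proposed as candidate negatives.  Illustrative
# only; NOT the NoGO algorithms described in the abstract.

def cosine(u, v):
    """Cosine similarity of two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

def select_negatives(positives, unlabeled, k):
    """Rank unlabeled examples by their maximum similarity to any
    positive; return the k least similar as candidate negatives."""
    scored = [(max(cosine(u, p) for p in positives), u) for u in unlabeled]
    scored.sort(key=lambda t: t[0])          # least positive-like first
    return [u for _, u in scored[:k]]

# Toy feature vectors (e.g., annotation-profile counts per gene).
pos = [[1.0, 0.9, 0.0], [0.9, 1.0, 0.1]]
unl = [[1.0, 1.0, 0.0],    # looks like the positives
       [0.0, 0.1, 1.0],    # very unlike the positives
       [0.5, 0.5, 0.5]]

print(select_negatives(pos, unl, 1))   # most dissimilar vector
```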

  6. 40 CFR 1065.275 - N2O measurement devices.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... for interpretation of infrared spectra. For example, EPA Test Method 320 is considered a valid method... and length to achieve adequate resolution of the N2O peak for analysis. Examples of acceptable columns....550(b) that would otherwise apply. For example, you may perform a span gas measurement before and...

  7. 40 CFR 1065.275 - N2O measurement devices.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... for interpretation of infrared spectra. For example, EPA Test Method 320 is considered a valid method... and length to achieve adequate resolution of the N2O peak for analysis. Examples of acceptable columns....550(b) that would otherwise apply. For example, you may perform a span gas measurement before and...

  8. Factorization and reduction methods for optimal control of distributed parameter systems

    NASA Technical Reports Server (NTRS)

    Burns, J. A.; Powers, R. K.

    1985-01-01

    A Chandrasekhar-type factorization method is applied to the linear-quadratic optimal control problem for distributed parameter systems. An aeroelastic control problem is used as a model example to demonstrate that if computationally efficient algorithms, such as those of Chandrasekhar-type, are combined with the special structure often available to a particular problem, then an abstract approximation theory developed for distributed parameter control theory becomes a viable method of solution. A numerical scheme based on averaging approximations is applied to hereditary control problems. Numerical examples are given.

  9. Applied Behavior Analysis and Statistical Process Control?

    ERIC Educational Resources Information Center

    Hopkins, B. L.

    1995-01-01

    Incorporating statistical process control (SPC) methods into applied behavior analysis is discussed. It is claimed that SPC methods would likely reduce applied behavior analysts' intimate contacts with problems and would likely yield poor treatment and research decisions. Cases and data presented by Pfadt and Wheeler (1995) are cited as examples.…

  10. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  11. [Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (2)].

    PubMed

    Murase, Kenya

    2015-01-01

    In this issue, symbolic methods for solving differential equations are first introduced. Among these, the Laplace transform method is presented together with some examples in which it is applied to solving the differential equations derived from a two-compartment kinetic model and from an equivalent circuit model for membrane potential. Second, series expansion methods for solving differential equations are introduced together with some examples in which these methods are used to solve Bessel's and Legendre's differential equations. In the next issue, simultaneous differential equations and various methods for solving them will be introduced together with some examples in medical physics.
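    The symbolic approach described above can be sketched in one line of computer algebra. The example below solves a one-compartment kinetic model dC/dt = -k*C(t) with C(0) = C0 using SymPy as the symbolic engine; the article's own worked examples are two-compartment and use the Laplace transform directly, so this is only an analogous illustration.

```python
# Symbolic solution of a one-compartment kinetic model, as a minimal
# stand-in for the article's symbolic (Laplace-transform) examples.
import sympy as sp

t, k, C0 = sp.symbols('t k C0', positive=True)
C = sp.Function('C')

ode = sp.Eq(C(t).diff(t), -k * C(t))      # first-order elimination
sol = sp.dsolve(ode, C(t), ics={C(0): C0})
print(sol)                                 # C(t) = C0 * exp(-k*t)
```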

  12. Update 2016: Considerations for Using Agile in DoD Acquisition

    DTIC Science & Technology

    2016-12-01

    What Is Agile? 4 2.1 Agile Manifesto and Principles—A Brief History 4 2.2 A Practical Definition 6 2.3 Example Agile Method 6 2.4 Example Agile...5.8 Team Composition 45 5.9 Culture 46 6 Conclusion 48 Appendix A: Examples of Agile Methods 50 Appendix B: Common Objections to Agile 53...thank all those who have contributed to our knowledge of applying "other than traditional" methods for software system acquisition and management over

  13. PLURAL METALLIC COATINGS ON URANIUM AND METHOD OF APPLYING SAME

    DOEpatents

    Gray, A.G.

    1958-09-16

    A method is described of applying protective coatings to uranium articles. It consists in applying chromium plating to such uranium articles by electrolysis in a chromic acid bath and subsequently applying, to this chromium plating, an aluminum-containing alloy. This aluminum-containing alloy (for example, one of aluminum and silicon) may then be used as a bonding alloy between the chromized surface and an aluminum can.

  14. HYBRID NEURAL NETWORK AND SUPPORT VECTOR MACHINE METHOD FOR OPTIMIZATION

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2005-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  15. Hybrid Neural Network and Support Vector Machine Method for Optimization

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor)

    2007-01-01

    System and method for optimization of a design associated with a response function, using a hybrid neural net and support vector machine (NN/SVM) analysis to minimize or maximize an objective function, optionally subject to one or more constraints. As a first example, the NN/SVM analysis is applied iteratively to design of an aerodynamic component, such as an airfoil shape, where the objective function measures deviation from a target pressure distribution on the perimeter of the aerodynamic component. As a second example, the NN/SVM analysis is applied to data classification of a sequence of data points in a multidimensional space. The NN/SVM analysis is also applied to data regression.

  16. Generalized query-based active learning to identify differentially methylated regions in DNA.

    PubMed

    Haque, Md Muksitul; Holder, Lawrence B; Skinner, Michael K; Cook, Diane J

    2013-01-01

    Active learning is a supervised learning technique that reduces the number of examples required for building a successful classifier, because it can choose the data it learns from. This technique holds promise for many biological domains in which classified examples are expensive and time-consuming to obtain. Most traditional active learning methods ask very specific queries to the Oracle (e.g., a human expert) to label an unlabeled example. The example may consist of numerous features, many of which are irrelevant. Removing such features will create a shorter query with only relevant features, and it will be easier for the Oracle to answer. We propose a generalized query-based active learning (GQAL) approach that constructs generalized queries based on multiple instances. By constructing appropriately generalized queries, we can achieve higher accuracy compared to traditional active learning methods. We apply our active learning method to find differentially methylated DNA regions (DMRs). DMRs are DNA locations in the genome that are known to be involved in tissue differentiation, epigenetic regulation, and disease. We also apply our method on 13 other data sets and show that our method is better than another popular active learning technique.
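    The baseline the abstract refers to, pool-based active learning with specific queries, can be sketched in a few lines: in each round the learner asks the Oracle to label the pool example it is least certain about. The toy 1-D "classifier" and data below are invented for illustration; the paper's GQAL method generalizes the queries themselves and is not shown here.

```python
# Generic pool-based active learning with uncertainty sampling, the
# traditional baseline the paper improves on.  All data are toy values.

def train_threshold(labeled):
    """Toy 1-D classifier: threshold midway between the class means."""
    xs0 = [x for x, y in labeled if y == 0]
    xs1 = [x for x, y in labeled if y == 1]
    return (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2

def most_uncertain(pool, threshold):
    """The example nearest the decision boundary is least certain."""
    return min(pool, key=lambda x: abs(x - threshold))

oracle = lambda x: int(x > 5.0)          # hidden true concept
labeled = [(1.0, 0), (9.0, 1)]           # two seed labels
pool = [2.0, 4.8, 5.3, 8.0]

for _ in range(2):                       # two query rounds
    th = train_threshold(labeled)
    q = most_uncertain(pool, th)
    pool.remove(q)
    labeled.append((q, oracle(q)))       # ask the Oracle for a label

print(sorted(x for x, _ in labeled))     # boundary examples got labeled
```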

  17. "Hand in Glove": Using Qualitative Methods to Connect Research and Practice.

    PubMed

    Harper, Liam D; McCunn, Robert

    2017-08-01

    Recent work has espoused the idea that in applied sporting environments, "fast"-working practitioners should work together with "slow"-working researchers. However, due to economical and logistical constraints, such a coupling may not always be practical. Therefore, alternative means of combining research and applied practice are needed. A particular methodology that has been used in recent years is qualitative research. Examples of qualitative methods include online surveys, 1-on-1 interviews, and focus groups. This article discusses the merits of using qualitative methods to combine applied practice and research in sport science. This includes a discussion of recent examples of the use of such methods in published journal articles, a critique of the approaches employed, and future directions and recommendations. The authors encourage both practitioners and researchers to use and engage with qualitative research with the ultimate goal of benefiting athlete health and sporting performance.

  18. Actuation for simultaneous motions and constraining efforts: an open chain example

    NASA Astrophysics Data System (ADS)

    Perreira, N. Duke

    1997-06-01

    A brief discussion of systems in which simultaneous control of forces and velocities is desirable is given, and an example linkage with revolute and prismatic joints is selected for further analysis. The Newton-Euler approach for dynamic system analysis is applied to the example to provide a basis of comparison. Gauge invariant transformations are used to convert the dynamic equations into invariant form suitable for use in a new dynamic system analysis method known as the motion-effort approach. This approach uses constraint elimination techniques based on singular value decompositions to recast the invariant form of the dynamic system equations into orthogonal sets of motion and effort equations. Desired motions and constraining efforts are partitioned into ideally obtainable and unobtainable portions, which are then used to determine the required actuation. The method is applied to the example system and an analytic estimate of its success is made.

  19. Error-trellis Syndrome Decoding Techniques for Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1984-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  20. Error-trellis syndrome decoding techniques for convolutional codes

    NASA Technical Reports Server (NTRS)

    Reed, I. S.; Truong, T. K.

    1985-01-01

    An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.

  1. The numerical solution of linear multi-term fractional differential equations: systems of equations

    NASA Astrophysics Data System (ADS)

    Edwards, John T.; Ford, Neville J.; Simpson, A. Charles

    2002-11-01

    In this paper, we show how the numerical approximation of the solution of a linear multi-term fractional differential equation can be calculated by reduction of the problem to a system of ordinary and fractional differential equations, each of order at most unity. We begin by showing how our method applies to a simple class of problems and we give a convergence result. We solve the Bagley-Torvik equation as an example. We show how the method can be applied to a general linear multi-term equation and give two further examples.
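    The reduction idea is familiar from the integer-order case: a second-order equation such as y'' = -y is rewritten as the first-order system u' = v, v' = -u and stepped numerically. The paper performs the analogous reduction for multi-term fractional equations; the sketch below covers only the integer-order mechanics, with an explicit Euler stepper chosen for brevity.

```python
# Reduce y'' + y = 0 to the first-order system u' = v, v' = -u and
# integrate with explicit Euler.  A minimal integer-order analogue of
# the order-at-most-unity reduction described in the abstract.
import math

def solve_system(u0, v0, h, n):
    """Explicit Euler on u' = v, v' = -u; returns u after n steps."""
    u, v = u0, v0
    for _ in range(n):
        u, v = u + h * v, v - h * u      # simultaneous update
    return u

# y(0) = 1, y'(0) = 0  =>  exact solution y(t) = cos(t)
approx = solve_system(1.0, 0.0, 1e-4, 10_000)   # integrate to t = 1
print(abs(approx - math.cos(1.0)) < 1e-3)       # small global error
```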

  2. Advancing our thinking in presence-only and used-available analysis.

    PubMed

    Warton, David; Aarts, Geert

    2013-11-01

    1. The problems of analysing used-available data and presence-only data are equivalent, and this paper uses this equivalence as a platform for exploring opportunities for advancing analysis methodology. 2. We suggest some potential methodological advances in used-available analysis, made possible via lessons learnt in the presence-only literature, for example, using modern methods to improve predictive performance. We also consider the converse - potential advances in presence-only analysis inspired by used-available methodology. 3. Notwithstanding these potential advances in methodology, perhaps a greater opportunity is in advancing our thinking about how to apply a given method to a particular data set. 4. It is shown by example that strikingly different results can be achieved for a single data set by applying a given method of analysis in different ways - hence having chosen a method of analysis, the next step of working out how to apply it is critical to performance. 5. We review some key issues to consider in deciding how to apply an analysis method: apply the method in a manner that reflects the study design; consider data properties; and use diagnostic tools to assess how reasonable a given analysis is for the data at hand. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.

  3. Finite element method formulation in polar coordinates for transient heat conduction problems

    NASA Astrophysics Data System (ADS)

    Duda, Piotr

    2016-04-01

    The aim of this paper is the formulation of the finite element method in polar coordinates to solve transient heat conduction problems. It is hard to find in the literature a formulation of the finite element method (FEM) in polar or cylindrical coordinates for the solution of heat transfer problems. This document shows how to apply the most often used boundary conditions. The global equation system is solved by the Crank-Nicolson method. The proposed algorithm is verified in three numerical tests. In the first example, the obtained transient temperature distribution is compared with the temperature obtained from the presented analytical solution. In the second numerical example, a variable boundary condition is assumed. In the last numerical example, a component with a shape different from cylindrical is used. All examples show that the introduction of the polar coordinate system gives better results than the Cartesian coordinate system. The finite element method formulation in polar coordinates is valuable since it provides higher accuracy of the calculations without compacting the mesh in cylindrical or similar tubular components. The proposed method can be applied to circular elements such as boiler drums, outlet headers, and flux tubes. This algorithm can be useful in the solution of inverse problems, which do not allow for a high-density grid. This method can calculate the temperature distribution in bodies with different properties in the circumferential and radial directions. The presented algorithm can be developed for other coordinate systems. The examples demonstrate the good accuracy and stability of the proposed method.
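    The Crank-Nicolson time discretisation named above can be shown on the simplest possible grid. The sketch below advances the 1-D heat equation on a Cartesian mesh with fixed-zero boundaries; the paper applies the same time scheme to its polar-coordinate FEM system, so this is only the time-stepping mechanics, with invented grid values.

```python
# A minimal Crank-Nicolson step for u_t = a*u_xx on a 1-D grid with
# zero boundaries: average the implicit and explicit Laplacians.
import numpy as np

def crank_nicolson(u, a, dt, dx, steps):
    """Advance the interior nodes of u (boundaries held at 0)."""
    n = len(u) - 2                      # number of interior nodes
    r = a * dt / (2 * dx * dx)
    # Tridiagonal Laplacian on the interior nodes.
    L = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A = np.eye(n) - r * L               # implicit half-step
    B = np.eye(n) + r * L               # explicit half-step
    for _ in range(steps):
        u[1:-1] = np.linalg.solve(A, B @ u[1:-1])
    return u

u0 = np.array([0.0, 1.0, 2.0, 1.0, 0.0])    # initial temperature bump
u = crank_nicolson(u0.copy(), a=1.0, dt=0.01, dx=0.25, steps=200)
print(np.allclose(u, 0.0, atol=1e-3))       # heat diffuses away
```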

  4. Transmission overhaul and replacement predictions using Weibull and renewal theory

    NASA Technical Reports Server (NTRS)

    Savage, M.; Lewicki, D. G.

    1989-01-01

    A method to estimate the frequency of transmission overhauls is presented. This method is based on the two-parameter Weibull statistical distribution for component life. A second method is presented to estimate the number of replacement components needed to support the transmission overhaul pattern. The second method is based on renewal theory. Confidence statistics are applied with both methods to improve the statistical estimate of sample behavior. A transmission example is also presented to illustrate the use of the methods. Transmission overhaul frequency and component replacement calculations are included in the example.
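    The two-parameter Weibull idea above can be made concrete in a few lines: component reliability at time t is R(t) = exp(-(t/eta)**beta), and a common overhaul-scheduling quantity is the B10 life, the time by which 10% of components are expected to fail. The shape and scale values below are hypothetical, not data from the report.

```python
# Two-parameter Weibull sketch of the overhaul-interval question.
# beta (shape) and eta (scale) are illustrative values only.
import math

def weibull_reliability(t, beta, eta):
    """Probability a component survives to time t."""
    return math.exp(-(t / eta) ** beta)

def b10_life(beta, eta):
    """Time by which 10 % of components are expected to fail (B10),
    a common basis for scheduling overhauls."""
    return eta * (-math.log(0.9)) ** (1.0 / beta)

beta, eta = 2.5, 4000.0       # hypothetical gear-life parameters (hours)
print(weibull_reliability(2000.0, beta, eta))   # survival to 2000 h
print(b10_life(beta, eta))                      # candidate overhaul time
```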

  5. Cyclotron accelerated beams applied in wear and corrosion studies

    NASA Astrophysics Data System (ADS)

    Racolta, P. M.; Popa-Simil, L.; Ivanov, E. A.; Alexandreanu, B.

    1996-05-01

    Wear and corrosion processes are characterized by a loss of material that is, for machine parts and components, usually in the micrometer range. That is why, in the last two decades, many direct applications in machine construction and in the petrochemical and metallurgical industries based on the Thin Layer Activation (TLA) technique have been developed. In this paper, general working patterns are presented together with a few examples of TLA applications carried out using our laboratory's U-120 Cyclotron. The relation between the counting rate of the radiation originating from the component's irradiated zone and the loss of worn material can be determined mainly by two methods: the oil circulation method and the remnant radioactivity measuring method. The first method is illustrated with some typical examples, such as the optimization of the running-in program of a diesel engine and the certification of the anti-wear properties of lubricant oils. An example is also presented in which the second method was applied to corrosion-rate determinations for different kinds of stainless steels used in inert-gas generator construction.

  6. GI-13 Integration of Methods for Air Quality and Health Data, Remote Sensed and In-Situ with Disease Estimate Techniques

    EPA Science Inventory

    GI-13 – A brief review of the GEO Work Plan Description; Global map examples of PM2.5 satellite measures; US maps showing examples of fused in-situ and satellite data; New AQ monitoring approach with social value – Village Green example; Computing and Systems Applied in Energ...

  7. 2D-dynamic representation of DNA sequences as a graphical tool in bioinformatics

    NASA Astrophysics Data System (ADS)

    Bielińska-Wa̧Ż, D.; Wa̧Ż, P.

    2016-10-01

    2D-dynamic representation of DNA sequences is briefly reviewed. Some new examples of 2D-dynamic graphs which are the graphical tool of the method are shown. Using the examples of the complete genome sequences of the Zika virus it is shown that the present method can be applied for the study of the evolution of viral genomes.

  8. A note on the preconditioner Pm=(I+Sm)

    NASA Astrophysics Data System (ADS)

    Kohno, Toshiyuki; Niki, Hiroshi

    2009-03-01

    Kotakemori et al. [H. Kotakemori, K. Harada, M. Morimoto, H. Niki, A comparison theorem for the iterative method with the preconditioner (I+Smax), Journal of Computational and Applied Mathematics 145 (2002) 373-378] reported that the convergence rate of the iterative method with a preconditioner Pm=(I+Sm) was superior to that of the modified Gauss-Seidel method under certain conditions. These authors derived a theorem comparing the Gauss-Seidel method with the proposed method. However, by means of a counter example, Wen Li [Wen Li, A note on the preconditioned Gauss-Seidel (GS) method for linear systems, Journal of Computational and Applied Mathematics 182 (2005) 81-91] pointed out that there exists a special matrix that does not satisfy this comparison theorem. In this note, we analyze the reason why such a counter example may be produced, and propose a preconditioner to overcome this problem.

  9. Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.

    2002-01-01

    Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
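    Of the three propagation methods named above, Monte Carlo simulation is the simplest to sketch: sample the uncertain inputs, push each sample through the analysis code, and read statistics off the outputs. The "weight model" below is a made-up stand-in for the aircraft analysis program, and the input standard deviations are invented.

```python
# Monte Carlo propagation of input uncertainty through a design
# function (toy stand-in for the aircraft analysis code).
import random
import statistics

def weight_model(span, chord):
    """Hypothetical response function: structural weight vs geometry."""
    return 120.0 * span ** 1.5 + 80.0 * chord ** 2

def propagate(n, span0, chord0, sigma):
    """Sample normally distributed inputs; return output mean and stdev."""
    rng = random.Random(0)               # fixed seed: repeatable demo
    out = [weight_model(rng.gauss(span0, sigma), rng.gauss(chord0, sigma))
           for _ in range(n)]
    return statistics.mean(out), statistics.stdev(out)

mean_w, std_w = propagate(20_000, span0=30.0, chord0=4.0, sigma=0.1)
print(mean_w, std_w)    # mean stays near nominal; stdev quantifies risk
```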

  10. Method of Determining the Filtration Properties of oil-Bearing Crops in the Process of Their Pressing by the Example of Rape-oil Extrusion

    NASA Astrophysics Data System (ADS)

    Slavnov, E. V.; Petrov, I. A.

    2014-07-01

    A method of determining the change in the filtration properties of oil-bearing crops in the process of their pressing by repeated dynamic loading is proposed. The use of this method is demonstrated by the example of rape-oil extrusion. It was established that a change in the mass concentration of the oil in a rape mix from 0.45 to 0.23 leads to a decrease in the permeability of the mix by a factor of 10^1.5 to 10^2, depending on the pressure applied to it. It is shown that the dependence of the permeability of this mix on the pressure applied to it is nonmonotone in character.

  11. Handling Missing Data in Structural Equation Models in R: A Replication Study for Applied Researchers

    ERIC Educational Resources Information Center

    Wolgast, Anett; Schwinger, Malte; Hahnel, Carolin; Stiensmeier-Pelster, Joachim

    2017-01-01

    Introduction: Multiple imputation (MI) is one of the most highly recommended methods for replacing missing values in research data. The scope of this paper is to demonstrate missing data handling in SEM by analyzing two modified data examples from educational psychology, and to give practical recommendations for applied researchers. Method: We…

  12. The dimension split element-free Galerkin method for three-dimensional potential problems

    NASA Astrophysics Data System (ADS)

    Meng, Z. J.; Cheng, H.; Ma, L. D.; Cheng, Y. M.

    2018-06-01

    This paper presents the dimension split element-free Galerkin (DSEFG) method for three-dimensional potential problems, and the corresponding formulae are obtained. The main idea of the DSEFG method is that a three-dimensional potential problem can be transformed into a series of two-dimensional problems. For these two-dimensional problems, the improved moving least-squares (IMLS) approximation is applied to construct the shape function, which uses an orthogonal function system with a weight function as the basis functions. The Galerkin weak form is applied to obtain a discretized system equation, and the penalty method is employed to impose the essential boundary condition. The finite difference method is selected in the splitting direction. For the purposes of demonstration, some selected numerical examples are solved using the DSEFG method. The convergence study and error analysis of the DSEFG method are presented. The numerical examples show that the DSEFG method has greater computational precision and computational efficiency than the IEFG method.

  13. Age adjustment in ecological studies: using a study on arsenic ingestion and bladder cancer as an example.

    PubMed

    Guo, How-Ran

    2011-10-20

    Despite its limitations, ecological study design is widely applied in epidemiology. In most cases, adjustment for age is necessary, but different methods may lead to different conclusions. To compare three methods of age adjustment, a study on the associations between arsenic in drinking water and incidence of bladder cancer in 243 townships in Taiwan was used as an example. A total of 3068 cases of bladder cancer, including 2276 men and 792 women, were identified during a ten-year study period in the study townships. Three methods were applied to analyze the same data set on the ten-year study period. The first (Direct Method) applied direct standardization to obtain standardized incidence rate and then used it as the dependent variable in the regression analysis. The second (Indirect Method) applied indirect standardization to obtain standardized incidence ratio and then used it as the dependent variable in the regression analysis instead. The third (Variable Method) used proportions of residents in different age groups as a part of the independent variables in the multiple regression models. All three methods showed a statistically significant positive association between arsenic exposure above 0.64 mg/L and incidence of bladder cancer in men and women, but different results were observed for the other exposure categories. In addition, the risk estimates obtained by different methods for the same exposure category were all different. Using an empirical example, the current study confirmed the argument made by other researchers previously that whereas the three different methods of age adjustment may lead to different conclusions, only the third approach can obtain unbiased estimates of the risks. The third method can also generate estimates of the risk associated with each age group, but the other two are unable to evaluate the effects of age directly.
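    The first two methods compared above can be sketched directly: direct standardization weights a township's age-specific rates by a standard population, while indirect standardization divides observed cases by those expected under standard rates (the SIR). The age groups, counts, and reference rates below are invented for illustration, not the Taiwan data.

```python
# Direct vs indirect age standardization on invented numbers.

# Age-specific data for one hypothetical study township.
cases      = {'<40': 2, '40-64': 10, '65+': 30}
person_yrs = {'<40': 50_000, '40-64': 30_000, '65+': 10_000}

# Standard population weights and standard (reference) rates.
std_pop    = {'<40': 0.5, '40-64': 0.35, '65+': 0.15}
std_rates  = {'<40': 5e-5, '40-64': 3e-4, '65+': 2.5e-3}

def direct_standardized_rate():
    """Apply the township's age-specific rates to the standard population."""
    return sum(std_pop[a] * cases[a] / person_yrs[a] for a in cases)

def standardized_incidence_ratio():
    """Observed cases over cases expected at the standard rates (SIR)."""
    expected = sum(std_rates[a] * person_yrs[a] for a in cases)
    return sum(cases.values()) / expected

print(round(direct_standardized_rate() * 1e5, 1))  # rate per 100,000
print(round(standardized_incidence_ratio(), 2))    # SIR: >1 means excess
```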

  14. MONOMIALS AND BASIN CYLINDERS FOR NETWORK DYNAMICS.

    PubMed

    Austin, Daniel; Dinwoodie, Ian H

    We describe methods to identify cylinder sets inside a basin of attraction for Boolean dynamics of biological networks. Such sets are used for designing regulatory interventions that make the system evolve towards a chosen attractor, for example initiating apoptosis in a cancer cell. We describe two algebraic methods for identifying cylinders inside a basin of attraction, one based on the Groebner fan that finds monomials that define cylinders and the other on primary decomposition. Both methods are applied to current examples of gene networks.

  15. MONOMIALS AND BASIN CYLINDERS FOR NETWORK DYNAMICS

    PubMed Central

    AUSTIN, DANIEL; DINWOODIE, IAN H

    2014-01-01

    We describe methods to identify cylinder sets inside a basin of attraction for Boolean dynamics of biological networks. Such sets are used for designing regulatory interventions that make the system evolve towards a chosen attractor, for example initiating apoptosis in a cancer cell. We describe two algebraic methods for identifying cylinders inside a basin of attraction, one based on the Groebner fan that finds monomials that define cylinders and the other on primary decomposition. Both methods are applied to current examples of gene networks. PMID:25620893

  16. A method for the analysis of nonlinearities in aircraft dynamic response to atmospheric turbulence

    NASA Technical Reports Server (NTRS)

    Sidwell, K.

    1976-01-01

    An analytical method is developed which combines the equivalent linearization technique for the analysis of the response of nonlinear dynamic systems with the amplitude modulated random process (Press model) for atmospheric turbulence. The method is initially applied to a bilinear spring system. The analysis of the response shows good agreement with exact results obtained by the Fokker-Planck equation. The method is then applied to an example of control-surface displacement limiting in an aircraft with a pitch-hold autopilot.

  17. Methods of editing cloud and atmospheric layer affected pixels from satellite data

    NASA Technical Reports Server (NTRS)

    Nixon, P. R. (Principal Investigator); Wiegand, C. L.; Richardson, A. J.; Johnson, M. P.

    1982-01-01

    Practical methods of computer screening cloud-contaminated pixels from data of various satellite systems are proposed. Examples are given of the location of clouds and representative landscape features in HCMM spectral space of reflectance (VIS) vs emission (IR). Methods of screening out cloud-affected HCMM data are discussed. The character of subvisible absorbing-emitting atmospheric layers (subvisible cirrus, or SCi) in HCMM data is considered, and radiosonde soundings are examined in relation to the presence of SCi. The statistical characteristics of multispectral meteorological satellite data in clear and SCi-affected areas are discussed. Examples in TIROS-N and NOAA-7 data from several states and Mexico are presented. The VIS-IR cluster screening method for removing clouds is applied to a 262,144-pixel HCMM scene from south Texas and northeast Mexico. The SCi that remain after cluster screening are screened out by applying a statistically determined IR limit.
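    The VIS-IR screening idea above reduces to a joint threshold: flag pixels whose visible reflectance is high while thermal emission is low, the cloud signature in VIS-vs-IR spectral space. The thresholds and the 2x2 "scene" below are invented, not the HCMM values.

```python
# Toy VIS-IR cloud screen: bright (high VIS) and cold (low IR) pixels
# are flagged as cloud-contaminated.  Thresholds are illustrative only.
import numpy as np

def cloud_mask(vis, ir, vis_min=0.4, ir_max=280.0):
    """True where a pixel looks cloud-contaminated."""
    return (vis > vis_min) & (ir < ir_max)

vis = np.array([[0.1, 0.6], [0.5, 0.2]])          # reflectance (0-1)
ir  = np.array([[295.0, 260.0], [270.0, 300.0]])  # brightness temp (K)
print(cloud_mask(vis, ir))   # bright-and-cold pixels are flagged
```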

  18. A Teacher's Guide to Memory Techniques.

    ERIC Educational Resources Information Center

    Hodges, Daniel L.

    1982-01-01

    To aid instructors in teaching their students to use effective methods of memorization, this article outlines major memory methods, provides examples of their use, evaluates the methods, and discusses ways students can be taught to apply them. First, common, but less effective, memory methods are presented, including reading and re-reading…

  19. Focus Group Discussions: Three Examples from Family and Consumer Science Research.

    ERIC Educational Resources Information Center

    Garrison, M. E. Betsy; Pierce, Sarah H.; Monroe, Pamela A.; Sasser, Diane D.; Shaffer, Amy C.; Blalock, Lydia B.

    1999-01-01

    Gives examples of the focus group method in terms of question development, group composition and recruitment, interview protocols, and data analysis as applied to three family and consumer-sciences research projects: consumer behavior of working female adolescents, work readiness of adult males with low educational attainment, and definition of…

  20. Teaching Density with a Little Drama

    ERIC Educational Resources Information Center

    Karakas, Mehmet

    2012-01-01

    This article provides an example of an innovative science activity applied in a science methods course for future elementary teachers at a small university in northeastern Turkey. The aim of the activity is to help prospective elementary teachers understand the density concept in a simple way and see an innovative teaching example. The instructor…

  1. 26 CFR 1.482-8 - Examples of the best method rule.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... illustrate the comparative analysis required to apply this rule. As with all of the examples in these... retail market in the United States. The study concludes that this segment of the U.S. market, which is not exploited by USSub, may generate substantial profits. Based on this study, FP enters into a...

  2. Hydrogenation of passivated contacts

    DOEpatents

    Nemeth, William; Yuan, Hao-Chih; LaSalvia, Vincenzo; Stradins, Pauls; Page, Matthew R.

    2018-03-06

    Methods of hydrogenation of passivated contacts using materials having hydrogen impurities are provided. An example method includes applying, to a passivated contact, a layer of a material, the material containing hydrogen impurities. The method further includes subsequently annealing the material and subsequently removing the material from the passivated contact.

  3. Testing the Stability of 2-D Recursive QP, NSHP and General Digital Filters of Second Order

    NASA Astrophysics Data System (ADS)

    Rathinam, Ananthanarayanan; Ramesh, Rengaswamy; Reddy, P. Subbarami; Ramaswami, Ramaswamy

    Several methods for testing the stability of first-quadrant quarter-plane two-dimensional (2-D) recursive digital filters were suggested in the 1970s and 80s. Although Jury's row and column algorithms and the row and column concatenation stability tests have been considered highly efficient mapping methods, they still fall short of accuracy, since they need an infinite number of steps to decide the exact stability of a filter and the computational time required is enormous. In this paper, we present a procedurally very simple algebraic method requiring only two steps when applied to the second-order 2-D quarter-plane filter. We extend the same method to second-order non-symmetric half-plane (NSHP) filters. Examples are given for both these types of filters, as well as for some lower-order general recursive 2-D digital filters. We applied our method to barely stable or barely unstable filter examples available in the literature and obtained the same decisions, showing that our method is sufficiently accurate.

  4. Accounting for Uncertainty in Decision Analytic Models Using Rank Preserving Structural Failure Time Modeling: Application to Parametric Survival Models.

    PubMed

    Bennett, Iain; Paracha, Noman; Abrams, Keith; Ray, Joshua

    2018-01-01

    Rank Preserving Structural Failure Time models are one of the most commonly used statistical methods to adjust for treatment switching in oncology clinical trials. The method is often applied in a decision analytic model without appropriately accounting for additional uncertainty when determining the allocation of health care resources. The aim of the study is to describe novel approaches to adequately account for uncertainty when using a Rank Preserving Structural Failure Time model in a decision analytic model. Using two examples, we tested and compared the performance of the novel test-based method with the resampling bootstrap method and with the conventional approach of no adjustment. In the first example, we simulated life expectancy using a simple decision analytic model based on a hypothetical oncology trial with treatment switching. In the second example, we applied the adjustment method to published data when no individual patient data were available. Mean estimates of overall and incremental life expectancy were similar across methods. However, the bootstrapped and test-based estimates consistently produced greater estimates of uncertainty compared with the estimate without any adjustment applied. Similar results were observed when using the test-based approach on published data, showing that failing to adjust for uncertainty led to smaller confidence intervals. Both the bootstrapping and test-based approaches provide a solution to appropriately incorporate uncertainty, with the benefit that the latter can be implemented by researchers in the absence of individual patient data. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  5. Chebyshev polynomials in the spectral Tau method and applications to Eigenvalue problems

    NASA Technical Reports Server (NTRS)

    Johnson, Duane

    1996-01-01

    Chebyshev spectral methods have received much attention recently as a technique for the rapid solution of ordinary differential equations. This technique also works well for solving linear eigenvalue problems. Specific detail is given to the properties and algebra of Chebyshev polynomials, the use of Chebyshev polynomials in spectral methods, and the recurrence relationships that are developed. These formulas and equations are then applied to several examples, which are worked out in detail. The appendix contains an example FORTRAN program used in solving an eigenvalue problem.
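As a rough sketch of the technique the report describes (our own Python illustration, not the report's FORTRAN program), a Chebyshev collocation differentiation matrix converts the eigenvalue problem -u'' = λu with u(±1) = 0 into an ordinary matrix eigenvalue problem:

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points
    # x_j = cos(pi*j/N) (standard collocation construction).
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 24
D, x = cheb(N)
D2 = (D @ D)[1:N, 1:N]   # drop boundary rows/cols to impose u(-1) = u(1) = 0
lam = np.sort(np.linalg.eigvals(-D2).real)
print(lam[:3])           # exact values are (n*pi/2)**2: 2.467..., 9.869..., 22.206...
```

For this problem the exact eigenvalues are (nπ/2)², and even a modest N reproduces the lowest of them to near machine precision, which is the spectral-accuracy property the report exploits.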

  6. Monotonicity based imaging method for time-domain eddy current problems

    NASA Astrophysics Data System (ADS)

    Su, Z.; Ventre, S.; Udpa, L.; Tamburrino, A.

    2017-12-01

    Eddy current imaging is an example of inverse problem in nondestructive evaluation for detecting anomalies in conducting materials. This paper introduces the concept of time constants and associated natural modes in eddy current imaging. The monotonicity of time constants is then described and applied to develop a non-iterative imaging method. The proposed imaging method has a low computational cost which makes it suitable for real-time operations. Full 3D numerical examples prove the effectiveness of the method in realistic scenarios. This paper is dedicated to Professor Guglielmo Rubinacci on the occasion of his 65th Birthday.

  7. A Best-Fit Line Using the Method of Averages.

    ERIC Educational Resources Information Center

    Hoppe, Jack

    2002-01-01

    Describes a method for calculating lines of best fit that is easy to understand and apply. Presents an example using the Arrhenius plot of a first-order reaction from which the energy of activation is calculated. (MM)
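The method of averages is simple enough to sketch in a few lines of Python. The rate data below are invented for illustration (not taken from the article): the points are split into two halves, each half is averaged, and the best-fit line is the one through the two average points.

```python
import numpy as np

# Hypothetical first-order rate constants k (1/s) at temperatures T (K)
T = np.array([300.0, 310.0, 320.0, 330.0, 340.0, 350.0])
k = np.array([1.0e-4, 3.0e-4, 8.5e-4, 2.2e-3, 5.4e-3, 1.2e-2])

x = 1.0 / T            # Arrhenius plot: ln k = ln A - (Ea/R) * (1/T)
y = np.log(k)

# Method of averages: average each half of the data, then pass the
# line through the two average points.
h = len(x) // 2
x1, y1 = x[:h].mean(), y[:h].mean()
x2, y2 = x[h:].mean(), y[h:].mean()
slope = (y2 - y1) / (x2 - x1)
intercept = y1 - slope * x1

R = 8.314              # gas constant, J/(mol K)
Ea = -slope * R        # activation energy from the Arrhenius slope
print(f"Ea = {Ea / 1000:.1f} kJ/mol")
```

Unlike least squares, the calculation needs only two averages and one slope, which is what makes it easy to understand and apply by hand.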

  8. Fundamental solution of the problem of linear programming and method of its determination

    NASA Technical Reports Server (NTRS)

    Petrunin, S. V.

    1978-01-01

    The idea of a fundamental solution to a problem in linear programming is introduced. A method of determining the fundamental solution and of applying this method to the solution of a problem in linear programming is proposed. Numerical examples are cited.

  9. Source signature estimation from multimode surface waves via mode-separated virtual real source method

    NASA Astrophysics Data System (ADS)

    Gao, Lingli; Pan, Yudi

    2018-05-01

    The correct estimation of the seismic source signature is crucial to exploration geophysics. Based on seismic interferometry, the virtual real source (VRS) method provides a model-independent way for source signature estimation. However, when encountering multimode surface waves, which are commonly seen in the shallow seismic survey, strong spurious events appear in seismic interferometric results. These spurious events introduce errors in the virtual-source recordings and reduce the accuracy of the source signature estimated by the VRS method. In order to estimate a correct source signature from multimode surface waves, we propose a mode-separated VRS method. In this method, multimode surface waves are mode separated before seismic interferometry. Virtual-source recordings are then obtained by applying seismic interferometry to each mode individually. Therefore, artefacts caused by cross-mode correlation are excluded in the virtual-source recordings and the estimated source signatures. A synthetic example showed that a correct source signature can be estimated with the proposed method, while strong spurious oscillation occurs in the estimated source signature if we do not apply mode separation first. We also applied the proposed method to a field example, which verified its validity and effectiveness in estimating seismic source signature from shallow seismic shot gathers containing multimode surface waves.

  10. Introducing 3D U-statistic method for separating anomaly from background in exploration geochemical data with associated software development

    NASA Astrophysics Data System (ADS)

    Ghannadpour, Seyyed Saeed; Hezarkhani, Ardeshir

    2016-03-01

    The U-statistic method is one of the most important structural methods to separate anomaly from background. It considers the location of samples and carries out the statistical analysis of the data without judging from a geochemical point of view, trying to separate subpopulations and determine anomalous areas. In the present study, to use the U-statistic method in a three-dimensional (3D) setting, U-statistics are applied to the grades of two ideal test examples, taking sample Z values (elevation) into account. This is the first time that the method has been applied in a 3D setting. To evaluate the performance of the 3D U-statistic method, and to compare it with a non-structural method, threshold assessment based on median and standard deviation (the MSD method) is applied to the same two test examples. Results show that the samples indicated as anomalous by the U-statistic method are more regular and involve less dispersion than those indicated by the MSD method, so that, according to the location of anomalous samples, denser areas can be identified as promising zones. Moreover, results show that at a threshold of U = 0, the total error of misclassification for the U-statistic method is much smaller than the total error of the x̄ + n × s criterion. Finally, a 3D model of the two test examples for separating anomaly from background using the 3D U-statistic method is provided. The source code for a software program, developed in the MATLAB programming language to perform the calculations of the 3D U-spatial statistic method, is additionally provided. This software is compatible with all the geochemical varieties and can be used in similar exploration projects.
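The non-structural MSD criterion used for comparison is easy to sketch. The Python fragment below (with synthetic grades, not the paper's data) flags as anomalous every sample whose grade exceeds x̄ + n × s:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic grades: a background population plus a few anomalous samples
background = rng.normal(50.0, 5.0, 200)
anomalies = rng.normal(90.0, 5.0, 8)
grades = np.concatenate([background, anomalies])

n = 2  # number of standard deviations above the mean
threshold = grades.mean() + n * grades.std()
anomalous = grades > threshold
print(f"threshold = {threshold:.1f}, anomalous samples = {anomalous.sum()}")
```

The criterion ignores sample locations entirely, which is exactly the limitation the spatial U-statistic method addresses.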

  11. Effect of surface preparation on service life of top-coats applied to weathered primer paint

    Treesearch

    R. Sam Williams; Mark Knaebe; Peter Sotos

    2008-01-01

    Paint companies usually recommend that topcoats be applied to primer paint within two weeks. Unfortunately, this is not always possible. For example, onset of winter weather shortly after applying primer may delay topcoat application until spring. Scuff sanding or repriming are often recommended remedial methods for preparing a weathered primer for topcoats, but there...

  12. Assessing and grouping chemicals applying partial ordering: Alkyl anilines as an illustrative example.

    PubMed

    Carlsen, Lars; Bruggemann, Rainer

    2018-06-03

    In chemistry there is a long tradition of classification. Usually, methods are adopted from the wide field of cluster analysis. Here, using the example of 21 alkyl anilines, we show that concepts from the mathematical discipline of partially ordered sets may also be applied. The chemical compounds are described by a multi-indicator system. For the present study four indicators, mainly taken from the field of environmental chemistry, were applied and a Hasse diagram was constructed. A Hasse diagram is an acyclic, transitively reduced, triangle-free graph that may have several components. The crucial question is whether or not the Hasse diagram can be interpreted from a structural chemical point of view. This is indeed the case, but it must be clearly stated that a guarantee for meaningful results in general cannot be given; further theoretical work is needed for that. Two cluster analysis methods are applied (K-means and a hierarchical cluster method). In both cases the partitioning of the set of 21 compounds by the component structure of the Hasse diagram appears to be more interpretable. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
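The order relation underlying a Hasse diagram can be sketched in a few lines. In this illustrative Python fragment (the compounds and indicator values are made up, and all indicators are taken as "higher is worse"), one compound is placed above another only if it is at least as high on every indicator; otherwise the pair is incomparable:

```python
def dominates(a, b):
    # Product order: a is above b iff a >= b in every indicator and a != b
    return all(x >= y for x, y in zip(a, b)) and a != b

# Hypothetical compounds described by a 4-indicator system
compounds = {"c1": (1, 2, 3, 1), "c2": (2, 3, 3, 2), "c3": (1, 1, 4, 0)}

# Comparable pairs form the order; incomparable pairs stay unlinked
pairs = [(u, v) for u in compounds for v in compounds
         if dominates(compounds[u], compounds[v])]
print(pairs)
```

Here c2 dominates c1, while c3 is incomparable to both, so a Hasse diagram of these three compounds would have two components, which is the component structure the abstract uses for partitioning.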

  13. Applications of neural networks in training science.

    PubMed

    Pfeiffer, Mark; Hohmann, Andreas

    2012-04-01

    Training science views itself as an integrated and applied science, developing practical measures founded on scientific method. Therefore, it demands consideration of a wide spectrum of approaches and methods. Especially in the field of competitive sports, research questions are usually located in complex environments, so that mainly field studies are drawn upon to obtain broad external validity. Here, the interrelations between different variables or variable sets are mostly of a nonlinear character. In these cases, methods like neural networks, e.g., the pattern recognizing methods of Self-Organizing Kohonen Feature Maps or similar instruments to identify interactions might be successfully applied to analyze data. Following on from a classification of data analysis methods in training-science research, the aim of the contribution is to give examples of varied sports in which network approaches can be effectually used in training science. First, two examples are given in which neural networks are employed for pattern recognition. While one investigation deals with the detection of sporting talent in swimming, the other is located in game sports research, identifying tactical patterns in team handball. The third and last example shows how an artificial neural network can be used to predict competitive performance in swimming. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Insights: A New Method to Balance Chemical Equations.

    ERIC Educational Resources Information Center

    Garcia, Arcesio

    1987-01-01

    Describes a method designed to balance oxidation-reduction chemical equations. Outlines a method which is based on changes in the oxidation number that can be applied to both molecular reactions and ionic reactions. Provides examples and delineates the steps to follow for each type of equation balancing. (TW)

  15. A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method is given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record is given. The results indicate that low-frequency components, totally missed by the Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolution.

  16. Peak-Seeking Control Using Gradient and Hessian Estimates

    NASA Technical Reports Server (NTRS)

    Ryan, John J.; Speyer, Jason L.

    2010-01-01

    A peak-seeking control method is presented which utilizes a linear time-varying Kalman filter. Performance function coordinate and magnitude measurements are used by the Kalman filter to estimate the gradient and Hessian of the performance function. The gradient and Hessian are used to command the system toward a local extremum. The method is naturally applied to multiple-input multiple-output systems. Applications of this technique to a single-input single-output example and a two-input one-output example are presented.
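A minimal sketch of the peak-seeking idea, assuming a known quadratic performance function and a simple least-squares fit in place of the paper's time-varying Kalman filter: gradient and Hessian estimates are extracted from local measurements and used to command a Newton step toward the extremum.

```python
import numpy as np

def f(x):
    # Hypothetical two-input, one-output performance function (peak at (1, -0.5))
    return -(x[0] - 1.0) ** 2 - 2.0 * (x[1] + 0.5) ** 2

x = np.array([0.0, 0.0])
for _ in range(5):
    # Sample around x and fit a local quadratic y = a + g.d + 0.5 d'Hd
    D = np.array([[dx, dy] for dx in (-0.1, 0.0, 0.1) for dy in (-0.1, 0.0, 0.1)])
    y = np.array([f(x + d) for d in D])
    Phi = np.column_stack([np.ones(len(D)), D[:, 0], D[:, 1],
                           D[:, 0] ** 2 / 2, D[:, 0] * D[:, 1], D[:, 1] ** 2 / 2])
    coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    g = coef[1:3]                                   # gradient estimate
    H = np.array([[coef[3], coef[4]],
                  [coef[4], coef[5]]])              # Hessian estimate
    x = x - np.linalg.solve(H, g)                   # Newton step toward the peak
print(x)
```

Because the performance function here is exactly quadratic, a single Newton step lands on the extremum; the paper's Kalman-filter formulation serves the same role when measurements are noisy.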

  17. Teaching group theory using Rubik's cubes

    NASA Astrophysics Data System (ADS)

    Cornock, Claire

    2015-10-01

    Being situated within a course at the applied end of the spectrum of maths degrees, the pure mathematics modules at Sheffield Hallam University have an applied spin. Pure topics are taught through consideration of practical examples such as knots, cryptography and automata. Rubik's cubes are used to teach group theory within a final year pure elective based on physical examples. Abstract concepts, such as subgroups, homomorphisms and equivalence relations are explored with the cubes first. In addition to this, conclusions about the cubes can be made through the consideration of algebraic approaches through a process of discovery. The teaching, learning and assessment methods are explored in this paper, along with the challenges and limitations of the methods. The physical use of Rubik's cubes within the classroom and examination will be presented, along with the use of peer support groups in this process. The students generally respond positively to the teaching methods and the use of the cubes.

  18. The equivalent magnetizing method applied to the design of gradient coils for MRI.

    PubMed

    Lopez, Hector Sanchez; Liu, Feng; Crozier, Stuart

    2008-01-01

    This paper presents a new method for the design of gradient coils for Magnetic Resonance Imaging systems. The method is based on the equivalence between a magnetized volume surrounded by a conducting surface and its equivalent representation in surface current/charge density. We demonstrate that the curl of the vertical magnetization induces a surface current density whose stream lines define the coil current pattern. This method can be applied to coils wound on arbitrary surface shapes. A single-layer unshielded transverse gradient coil is designed and compared with designs obtained using two conventional methods. Through the presented example we demonstrate that the unconventional current patterns generated by the magnetizing current method produce superior gradient coil performance compared with coils designed using conventional methods.

  19. Computing the optimal path in stochastic dynamical systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bauver, Martha; Forgoston, Eric, E-mail: eric.forgoston@montclair.edu; Billings, Lora

    2016-08-15

    In stochastic systems, one is often interested in finding the optimal path that maximizes the probability of escape from a metastable state or of switching between metastable states. Even for simple systems, it may be impossible to find an analytic form of the optimal path, and in high-dimensional systems, this is almost always the case. In this article, we formulate a constructive methodology that is used to compute the optimal path numerically. The method utilizes finite-time Lyapunov exponents, statistical selection criteria, and a Newton-based iterative minimizing scheme. The method is applied to four examples. The first example is a two-dimensional system that describes a single population with internal noise. This model has an analytical solution for the optimal path. The numerical solution found using our computational method agrees well with the analytical result. The second example is a more complicated four-dimensional system where our numerical method must be used to find the optimal path. The third example, although a seemingly simple two-dimensional system, demonstrates the success of our method in finding the optimal path where other numerical methods are known to fail. In the fourth example, the optimal path lies in six-dimensional space and demonstrates the power of our method in computing paths in higher-dimensional spaces.

  20. Method for loading shape memory polymer gripper mechanisms

    DOEpatents

    Lee, Abraham P.; Benett, William J.; Schumann, Daniel L.; Krulevitch, Peter A.; Fitch, Joseph P.

    2002-01-01

    A method and apparatus for loading deposit material, such as an embolic coil, into a shape memory polymer (SMP) gripping/release mechanism. The apparatus enables the application of uniform pressure to secure a grip by the SMP mechanism on the deposit material via differential pressure between, for example, vacuum within the SMP mechanism and hydrostatic water pressure on the exterior of the SMP mechanism. The SMP tubing material of the mechanism is heated to above the glass transformation temperature (Tg) while reshaping, and subsequently cooled to below Tg to freeze the shape. The heating and/or cooling may be provided by the same water applied for pressurization, or the heating can be applied by optical fibers packaged with the SMP mechanism that direct a laser beam onto it. At the point of use, the deposit material is released from the SMP mechanism by reheating the SMP material to above the temperature Tg, whereby it returns to its initial shape. The reheating of the SMP material may be carried out, for example, by injecting heated fluid (water) through an associated catheter or by optical fibers and an associated beam of laser light.

  1. Developing an OD-Intervention Metric System with the Use of Applied Theory-Building Methodology: A Work/Life-Intervention Example

    ERIC Educational Resources Information Center

    Morris, Michael Lane; Storberg-Walker, Julia; McMillan, Heather S.

    2009-01-01

    This article presents a new model, generated through applied theory-building research methods, that helps human resource development (HRD) practitioners evaluate the return on investment (ROI) of organization development (OD) interventions. This model, called organization development human-capital accounting system (ODHCAS), identifies…

  2. Using the SCR Specification Technique in a High School Programming Course.

    ERIC Educational Resources Information Center

    Rosen, Edward; McKim, James C., Jr.

    1992-01-01

    Presents the underlying ideas of the Software Cost Reduction (SCR) approach to requirements specifications. Results of applying this approach to the teaching of programming to high school students indicate that students perform better in writing programs. An appendix provides two examples of how the method is applied to problem solving. (MDH)

  3. A Method for Establishing a Depreciated Monetary Value for Print Collections.

    ERIC Educational Resources Information Center

    Marman, Edward

    1995-01-01

    Outlines a method for establishing a depreciated value of a library collection and includes an example of applying the formula for calculating depreciation. The method is based on the useful life of books, other print, and audio visual materials; their original cost; and on sampling subsets or sections of the collection. (JKP)
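A minimal sketch of such a calculation, assuming straight-line depreciation over each item's useful life (the formula and sample values here are illustrative, not the article's):

```python
def depreciated_value(original_cost, useful_life_years, age_years):
    # Straight-line depreciation: value falls to zero over the useful life
    remaining = max(useful_life_years - age_years, 0)
    return original_cost * remaining / useful_life_years

# Hypothetical sample from one section of a collection: (cost, useful life, age)
sample = [(30.0, 10, 4), (45.0, 12, 9), (25.0, 8, 8)]
total = sum(depreciated_value(c, life, age) for c, life, age in sample)
print(f"${total:.2f}")
```

Applied to a representative sample of a collection section, the per-item values can then be scaled up to estimate the depreciated value of the whole section.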

  4. 48 CFR 1631.203-70 - Allocation techniques.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... shall consistently apply the methods and techniques established to classify direct and indirect costs... meant to be exhaustive, but rather are examples of allocation methods that may be acceptable under... 48 Federal Acquisition Regulations System 6 2010-10-01 2010-10-01 true Allocation techniques. 1631...

  5. Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (3).

    PubMed

    Murase, Kenya

    2016-01-01

    In this issue, simultaneous differential equations were introduced. These differential equations are often used in the field of medical physics. The methods for solving them were also introduced, which include Laplace transform and matrix methods. Some examples were also introduced, in which Laplace transform and matrix methods were applied to solving simultaneous differential equations derived from a three-compartment kinetic model for analyzing the glucose metabolism in tissues and Bloch equations for describing the behavior of the macroscopic magnetization in magnetic resonance imaging.In the next (final) issue, partial differential equations and various methods for solving them will be introduced together with some examples in medical physics.
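The matrix method for a linear compartment model can be sketched briefly. Assuming a hypothetical two-compartment system with made-up rate constants (not the article's three-compartment glucose model), the solution is x(t) = V exp(Λt) V⁻¹ x(0), obtained from the eigendecomposition of the rate matrix:

```python
import numpy as np

# Hypothetical two-compartment kinetic model:
#   dx1/dt = -k1*x1
#   dx2/dt =  k1*x1 - k2*x2
k1, k2 = 0.5, 0.2
A = np.array([[-k1, 0.0],
              [ k1, -k2]])
x0 = np.array([1.0, 0.0])

def solve(t):
    # Matrix method: x(t) = V exp(Lambda*t) V^-1 x0 from the eigendecomposition of A
    lam, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V) @ x0).real

print(solve(2.0))
```

This reproduces the closed-form result x1(t) = e^(-k1 t), x2(t) = k1/(k2-k1) (e^(-k1 t) - e^(-k2 t)) that the Laplace-transform route would give for the same system.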

  6. Conformal mapping for multiple terminals

    PubMed Central

    Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao

    2016-01-01

    Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods apply assumptions or approximations. A general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples of an electrostatic actuator with three electrodes and of a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method is helpful in promoting the application of conformal mapping in analysis of practical problems. PMID:27830746

  7. A method to align a bent crystal for channeling experiments by using quasichanneling oscillations

    NASA Astrophysics Data System (ADS)

    Sytov, A. I.; Guidi, V.; Tikhomirov, V. V.; Bandiera, L.; Bagli, E.; Germogli, G.; Mazzolari, A.; Romagnoni, M.

    2018-04-01

    A method to calculate both the bent crystal angle of alignment and the radius of curvature by using only one distribution of deflection angles has been developed. The method is based on measuring the angular position of recently predicted and observed quasichanneling oscillations in the deflection angle distribution and subsequently fitting both the radius and angular alignment by analytic formulae. In this paper the method is applied to simulated angular distributions for electrons over a wide range of values of both radius and alignment. It is carried out through the example of (111) nonequidistant planes, though the technique is general and could be applied to any kind of planes. In addition, the method's application constraints are discussed. It is shown by simulations that this method, being in fact a sort of beam diagnostics, allows one in certain cases to increase the crystal alignment accuracy as well as to control precisely the radius of curvature inside an accelerator tube without breaking vacuum. In addition, it speeds up the procedure of crystal alignment in channeling experiments, reducing beamtime consumption.

  8. Optimization under variability and uncertainty: a case study for NOx emissions control for a gasification system.

    PubMed

    Chen, Jianjun; Frey, H Christopher

    2004-12-15

    Methods for optimization of process technologies considering the distinction between variability and uncertainty are developed and applied to case studies of NOx control for Integrated Gasification Combined Cycle systems. Existing methods of stochastic optimization (SO) and stochastic programming (SP) are demonstrated. A comparison of SO and SP results provides the value of collecting additional information to reduce uncertainty. For example, an expected annual benefit of 240,000 dollars is estimated if uncertainty can be reduced before a final design is chosen. SO and SP are typically applied to uncertainty. However, when applied to variability, the benefit of dynamic process control is obtained. For example, an annual savings of 1 million dollars could be achieved if the system is adjusted to changes in process conditions. When variability and uncertainty are treated distinctively, a coupled stochastic optimization and programming method and a two-dimensional stochastic programming method are demonstrated via a case study. For the case study, the mean annual benefit of dynamic process control is estimated to be 700,000 dollars, with a 95% confidence range of 500,000 dollars to 940,000 dollars. These methods are expected to be of greatest utility for problems involving a large commitment of resources, for which small differences in designs can produce large cost savings.
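The "value of collecting additional information" quantified by comparing SO and SP has a compact textbook form (expected value of perfect information). The payoffs below are invented for illustration and are unrelated to the paper's gasification case study:

```python
import numpy as np

# Hypothetical payoffs (rows: designs A and B; columns: uncertainty scenarios)
payoff = np.array([[100.0, 20.0],
                   [60.0, 70.0]])
p = np.array([0.5, 0.5])   # scenario probabilities

commit_now = max(payoff @ p)            # best single design chosen under uncertainty
wait = (payoff.max(axis=0) * p).sum()   # best design chosen after uncertainty resolves
print(wait - commit_now)                # value of resolving uncertainty first
```

A positive difference, as in the paper's 240,000 dollar estimate, is the expected benefit of reducing uncertainty before a final design is chosen.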

  9. Reflection Coefficients.

    ERIC Educational Resources Information Center

    Greenslade, Thomas B., Jr.

    1994-01-01

    Discusses and provides an example of reflectivity approximation to determine whether reflection will occur. Provides a method to show thin-film interference on a projection screen. Also applies the reflectivity concepts to electromagnetic wave systems. (MVL)

  10. Applying integrals of motion to the numerical solution of differential equations

    NASA Technical Reports Server (NTRS)

    Vezewski, D. J.

    1980-01-01

    A method is developed for using the integrals of systems of nonlinear, ordinary, differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.
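A minimal sketch of the idea, using the energy integral of a simple harmonic oscillator (our own toy example, not the report's formulation): after each step of a crude integrator, the state is projected back onto the surface where the integral keeps its initial value, which removes the secular growth of the amplitude error.

```python
import math

def step_euler(x, v, dt):
    # one forward-Euler step for the oscillator x'' = -x
    return x + dt * v, v - dt * x

def integrate(T, dt, project=True):
    x, v = 1.0, 0.0
    E0 = 0.5 * (x * x + v * v)        # conserved energy integral of the motion
    t = 0.0
    while t < T - 1e-12:
        x, v = step_euler(x, v, dt)
        if project:
            # rescale the state so the energy integral keeps its initial value
            s = math.sqrt(2.0 * E0 / (x * x + v * v))
            x, v = s * x, s * v
        t += dt
    return x

T, dt = 2.0 * math.pi, 1.0e-3
err_plain = abs(integrate(T, dt, project=False) - 1.0)
err_proj = abs(integrate(T, dt, project=True) - 1.0)
print(err_plain, err_proj)   # the projected run has a much smaller global error
```

Forward Euler spirals outward in phase space (energy grows each step); enforcing the integral confines the solution to the correct energy surface, leaving only the much smaller phase error.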

  11. Applying integrals of motion to the numerical solution of differential equations

    NASA Technical Reports Server (NTRS)

    Jezewski, D. J.

    1979-01-01

    A method is developed for using the integrals of systems of nonlinear, ordinary differential equations in a numerical integration process to control the local errors in these integrals and reduce the global errors of the solution. The method is general and can be applied to either scalar or vector integrals. A number of example problems, with accompanying numerical results, are used to verify the analysis and support the conjecture of global error reduction.

  12. Estimating costs and performance of systems for machine processing of remotely sensed data

    NASA Technical Reports Server (NTRS)

    Ballard, R. J.; Eastwood, L. F., Jr.

    1977-01-01

    This paper outlines a method for estimating computer processing times and costs incurred in producing information products from digital remotely sensed data. The method accounts for both computation and overhead, and may be applied to any serial computer. The method is applied to estimate the cost and computer time involved in producing Level II Land Use and Vegetative Cover Maps for a five-state midwestern region. The results show that the amount of data to be processed overloads some example computer systems, but that the processing is feasible on others.

  13. Boundary element modelling of dynamic behavior of piecewise homogeneous anisotropic elastic solids

    NASA Astrophysics Data System (ADS)

    Igumnov, L. A.; Markov, I. P.; Litvinchuk, S. Yu

    2018-04-01

A traditional direct boundary integral equations method is applied to solve three-dimensional dynamic problems of piecewise homogeneous linear elastic solids. The materials of homogeneous parts are considered to be generally anisotropic. The technique used to solve the boundary integral equations is based on the boundary element method applied together with the Radau IIA convolution quadrature method. A numerical example of a suddenly loaded 3D prismatic rod consisting of two subdomains with different anisotropic elastic properties is presented to verify the accuracy of the proposed formulation.

  14. Using pyramids to define local thresholds for blob detection.

    PubMed

    Shneier, M

    1983-03-01

    A method of detecting blobs in images is described. The method involves building a succession of lower resolution images and looking for spots in these images. A spot in a low resolution image corresponds to a distinguished compact region in a known position in the original image. Further, it is possible to calculate thresholds in the low resolution image, using very simple methods, and to apply those thresholds to the region of the original image corresponding to the spot. Examples are shown in which variations of the technique are applied to several images.
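The pyramid idea described above can be sketched minimally as follows: coarser levels are built by 2x2 block averaging, spots are thresholded at the top level, and each spot is mapped back to a compact region of the original image. The fixed-threshold rule here is a placeholder for illustration, not Shneier's local-threshold computation.

```python
def reduce_level(img):
    # build the next pyramid level by 2x2 block averaging
    # (odd trailing rows/columns are dropped in this sketch)
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1] +
              img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4.0
             for c in range(w)] for r in range(h)]

def find_spots(img, levels=2, thresh=0.5):
    # a bright pixel at the top of the pyramid marks a compact
    # region at a known position in the original image
    pyr = [img]
    for _ in range(levels):
        pyr.append(reduce_level(pyr[-1]))
    top, scale = pyr[-1], 2 ** levels
    spots = []
    for r, row in enumerate(top):
        for c, v in enumerate(row):
            if v > thresh:  # threshold picked in the coarse image
                spots.append((r * scale, c * scale, scale))  # (row, col, size)
    return spots
```

For an 8x8 image with a bright 4x4 block in the upper-left corner and two pyramid levels, the single top-level spot maps back to the region starting at (0, 0) with side 4.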

  15. Applying Process Improvement Methods to Clinical and Translational Research: Conceptual Framework and Case Examples

    PubMed Central

    Selker, Harry P.; Leslie, Laurel K.

    2015-01-01

There is growing appreciation that process improvement holds promise for improving quality and efficiency across the translational research continuum, but frameworks for such programs are not often described. The purpose of this paper is to present a framework and case examples of a Research Process Improvement Program implemented at Tufts CTSI. To promote research process improvement, we developed online training seminars, workshops, and in-person consultation models to describe core process improvement principles and methods, demonstrate the use of improvement tools, and illustrate the application of these methods in case examples. We implemented these methods, as well as relational coordination theory, with junior researchers, pilot funding awardees, our CTRC, and CTSI resource and service providers. The program focuses on capacity building to address common process problems and quality gaps that threaten the efficient, timely, and successful completion of clinical and translational studies. PMID:26332869

  16. Series: Utilization of Differential Equations and Methods for Solving Them in Medical Physics (4).

    PubMed

    Murase, Kenya

    2016-01-01

Partial differential equations are often used in the field of medical physics. In this (final) issue, the methods for solving the partial differential equations were introduced, which include separation of variables, integral transform (Fourier and Fourier-sine transforms), Green's function, and series expansion methods. Some examples were also introduced, in which the integral transform and Green's function methods were applied to solving Pennes' bioheat transfer equation and the Fourier series expansion method was applied to the Navier-Stokes equation for analyzing the wall shear stress in blood vessels. Finally, the author hopes that this series will be helpful for people who engage in medical physics.

  17. Mixing Research Methods in Health Professional Degrees: Thoughts for Undergraduate Students and Supervisors

    ERIC Educational Resources Information Center

    Anaf, Sophie; Sheppard, Lorraine A.

    2007-01-01

This commentary considers some of the challenges of applying mixed methods research in undergraduate research degrees, especially in professions with a clinical health focus. Our experience in physiotherapy academia is used as an example. Mixed methods research is increasingly appreciated in its own right as a "third paradigm"; however, the success…

  18. Numerical approximations for fractional diffusion equations via a Chebyshev spectral-tau method

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Ezz-Eldien, Samer S.

    2013-10-01

In this paper, a class of fractional diffusion equations with variable coefficients is considered. An accurate and efficient spectral tau technique for solving the fractional diffusion equations numerically is proposed. This method is based upon Chebyshev tau approximation together with the Chebyshev operational matrix of Caputo fractional differentiation. Such an approach has the advantage of reducing the problem to the solution of a system of algebraic equations, which may then be solved by any standard numerical technique. We apply this general method to solve four specific examples. In each of the examples considered, the numerical results show that the proposed method is of high accuracy and is efficient for solving the time-dependent fractional diffusion equations.

  19. Optimizing efficiency of height modeling for extensive forest inventories.

    Treesearch

    T.M. Barrett

    2006-01-01

    Although critical to monitoring forest ecosystems, inventories are expensive. This paper presents a generalizable method for using an integer programming model to examine tradeoffs between cost and estimation error for alternative measurement strategies in forest inventories. The method is applied to an example problem of choosing alternative height-modeling strategies...

  20. An Approach to Teaching Applied GIS: Implementation for Local Organizations.

    ERIC Educational Resources Information Center

    Benhart, John, Jr.

    2000-01-01

    Describes the instructional method, Client-Life Cycle GIS Project Learning, used in a course at Indiana University of Pennsylvania that enables students to learn with and about geographic information system (GIS). Discusses the course technical issues in GIS and an example project using this method. (CMK)

  1. A study of delamination buckling of laminates

    NASA Technical Reports Server (NTRS)

    Mukherjee, Yu-Xie; Xie, Zhi-Cheng; Ingraffea, Anthony

    1990-01-01

    The subject of this paper is the buckling of laminated plates, with a preexisting delamination, subjected to in-plane loading. Each laminate is modelled as an orthotropic Mindlin plate. The analysis is carried out by a combination of the finite element and asymptotic expansion methods. By applying the finite element method, plates with general delamination regions can be studied. The asymptotic expansion method reduces the number of unknown variables of the eigenvalue equation to that of the equation for a single Kirchhoff plate. Numerical results are presented for several examples. The effects of the shape, size, and position of the delamination on the buckling load are studied through these examples.

  2. Mixed methods in gerontological research: Do the qualitative and quantitative data “touch”?

    PubMed Central

    Happ, Mary Beth

    2010-01-01

    This paper distinguishes between parallel and integrated mixed methods research approaches. Barriers to integrated mixed methods approaches in gerontological research are discussed and critiqued. The author presents examples of mixed methods gerontological research to illustrate approaches to data integration at the levels of data analysis, interpretation, and research reporting. As a summary of the methodological literature, four basic levels of mixed methods data combination are proposed. Opportunities for mixing qualitative and quantitative data are explored using contemporary examples from published studies. Data transformation and visual display, judiciously applied, are proposed as pathways to fuller mixed methods data integration and analysis. Finally, practical strategies for mixing qualitative and quantitative data types are explicated as gerontological research moves beyond parallel mixed methods approaches to achieve data integration. PMID:20077973

  3. Applying remote sensing to invasive species science—A tamarisk example

    USGS Publications Warehouse

    Morisette, Jeffrey T.

    2011-01-01

The Invasive Species Science Branch of the Fort Collins Science Center provides research and technical assistance relating to management concerns for invasive species, including understanding how these species are introduced, identifying areas vulnerable to invasion, forecasting invasions, and developing control methods. This fact sheet considers the invasive plant species tamarisk (Tamarix spp.), addressing three fundamental questions: Where is it now? What are the potential or realized ecological impacts of invasion? Where can it survive and thrive if introduced? It provides peer-reviewed examples of how the U.S. Geological Survey, working with other federal agencies and university partners, is applying remote-sensing technologies to address these key questions.

  4. Aerospace reliability applied to biomedicine.

    NASA Technical Reports Server (NTRS)

    Lalli, V. R.; Vargo, D. J.

    1972-01-01

    An analysis is presented that indicates that the reliability and quality assurance methodology selected by NASA to minimize failures in aerospace equipment can be applied directly to biomedical devices to improve hospital equipment reliability. The Space Electric Rocket Test project is used as an example of NASA application of reliability and quality assurance (R&QA) methods. By analogy a comparison is made to show how these same methods can be used in the development of transducers, instrumentation, and complex systems for use in medicine.

  5. Applying quantum principles to psychology

    NASA Astrophysics Data System (ADS)

    Busemeyer, Jerome R.; Wang, Zheng; Khrennikov, Andrei; Basieva, Irina

    2014-12-01

    This article starts out with a detailed example illustrating the utility of applying quantum probability to psychology. Then it describes several alternative mathematical methods for mapping fundamental quantum concepts (such as state preparation, measurement, state evolution) to fundamental psychological concepts (such as stimulus, response, information processing). For state preparation, we consider both pure states and densities with mixtures. For measurement, we consider projective measurements and positive operator valued measurements. The advantages and disadvantages of each method with respect to applications in psychology are discussed.

  6. Recognition Using Hybrid Classifiers.

    PubMed

    Osadchy, Margarita; Keren, Daniel; Raviv, Dolev

    2016-04-01

    A canonical problem in computer vision is category recognition (e.g., find all instances of human faces, cars etc., in an image). Typically, the input for training a binary classifier is a relatively small sample of positive examples, and a huge sample of negative examples, which can be very diverse, consisting of images from a large number of categories. The difficulty of the problem sharply increases with the dimension and size of the negative example set. We propose to alleviate this problem by applying a "hybrid" classifier, which replaces the negative samples by a prior, and then finds a hyperplane which separates the positive samples from this prior. The method is extended to kernel space and to an ensemble-based approach. The resulting binary classifiers achieve an identical or better classification rate than SVM, while requiring far smaller memory and lower computational complexity to train and apply.

  7. Homogeneous Biosensing Based on Magnetic Particle Labels

    PubMed Central

    Schrittwieser, Stefan; Pelaz, Beatriz; Parak, Wolfgang J.; Lentijo-Mozo, Sergio; Soulantica, Katerina; Dieckhoff, Jan; Ludwig, Frank; Guenther, Annegret; Tschöpe, Andreas; Schotter, Joerg

    2016-01-01

The growing availability of biomarker panels for molecular diagnostics is leading to an increasing need for fast and sensitive biosensing technologies that are applicable to point-of-care testing. In that regard, homogeneous measurement principles are especially relevant as they usually do not require extensive sample preparation procedures, thus reducing the total analysis time and maximizing ease of use. In this review, we focus on homogeneous biosensors for the in vitro detection of biomarkers. Within this broad range of biosensors, we concentrate on methods that apply magnetic particle labels. The advantage of such methods lies in the added possibility to manipulate the particle labels by applied magnetic fields, which can be exploited, for example, to decrease incubation times or to enhance the signal-to-noise ratio of the measurement signal by applying frequency-selective detection. In our review, we discriminate the corresponding methods based on the nature of the acquired measurement signal, which can either be based on magnetic or optical detection. The underlying measurement principles of the different techniques are discussed, and biosensing examples for all techniques are reported, thereby demonstrating the broad applicability of homogeneous in vitro biosensing based on magnetic particle label actuation. PMID:27275824

  8. Implementing Quality Criteria in Designing and Conducting a Sequential Quan [right arrow] Qual Mixed Methods Study of Student Engagement with Learning Applied Research Methods Online

    ERIC Educational Resources Information Center

    Ivankova, Nataliya V.

    2014-01-01

    In spite of recent methodological developments related to quality assurance in mixed methods research, practical examples of how to implement quality criteria in designing and conducting sequential QUAN [right arrow] QUAL mixed methods studies to ensure the process is systematic and rigorous remain scarce. This article discusses a three-step…

  9. The design of multirate digital control systems

    NASA Technical Reports Server (NTRS)

    Berg, M. C.

    1986-01-01

The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs is synthesized for each example problem. The successive loop closures, optimal control, and constrained optimization synthesis methods are compared in the context of the two design problems.

  10. Symmetric functions and wavefunctions of XXZ-type six-vertex models and elliptic Felderhof models by Izergin-Korepin analysis

    NASA Astrophysics Data System (ADS)

    Motegi, Kohei

    2018-05-01

    We present a method to analyze the wavefunctions of six-vertex models by extending the Izergin-Korepin analysis originally developed for domain wall boundary partition functions. First, we apply the method to the case of the basic wavefunctions of the XXZ-type six-vertex model. By giving the Izergin-Korepin characterization of the wavefunctions, we show that these wavefunctions can be expressed as multiparameter deformations of the quantum group deformed Grothendieck polynomials. As a second example, we show that the Izergin-Korepin analysis is effective for analysis of the wavefunctions for a triangular boundary and present the explicit forms of the symmetric functions representing these wavefunctions. As a third example, we apply the method to the elliptic Felderhof model which is a face-type version and an elliptic extension of the trigonometric Felderhof model. We show that the wavefunctions can be expressed as one-parameter deformations of an elliptic analog of the Vandermonde determinant and elliptic symmetric functions.

  11. Advanced Cell Classifier: User-Friendly Machine-Learning-Based Software for Discovering Phenotypes in High-Content Imaging Data.

    PubMed

    Piccinini, Filippo; Balassa, Tamas; Szkalisity, Abel; Molnar, Csaba; Paavolainen, Lassi; Kujala, Kaisa; Buzas, Krisztina; Sarazova, Marie; Pietiainen, Vilja; Kutay, Ulrike; Smith, Kevin; Horvath, Peter

    2017-06-28

    High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.

    PubMed

    Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan

    2013-01-01

In this chapter, an overview of experimental designs to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. In the method optimization part, two phases can often be distinguished, i.e., a screening and an optimization phase. In method validation, the method is evaluated on whether it is fit for purpose; robustness testing, which also applies experimental designs, is one validation item. In the screening phase and in robustness testing, screening designs are applied. During the optimization phase, response surface designs are used. The different design types and their application steps are discussed in this chapter and illustrated by examples of chiral CE and CEC methods.
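The screening designs mentioned above can be illustrated with a minimal generator for two-level factorial and half-fraction design matrices. This is a generic textbook construction, not tied to any specific chiral CE/CEC example in the chapter.

```python
from itertools import product
from math import prod

def full_factorial(k):
    # 2^k design: each row is one run; -1/+1 code the low/high levels
    # of the k factors (e.g., pH, chiral selector concentration, voltage)
    return [list(run) for run in product((-1, 1), repeat=k)]

def half_fraction(k):
    # 2^(k-1) fractional design: the last factor is aliased with the
    # product of the others (the standard generator for a half fraction),
    # halving the number of runs needed for screening
    return [run + [prod(run)] for run in full_factorial(k - 1)]
```

With three factors, the full factorial requires 8 runs while the half fraction screens the same factors in 4 runs at the cost of aliased interactions.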

  13. Quantitation of absorbed or deposited materials on a substrate that measures energy deposition

    DOEpatents

    Grant, Patrick G.; Bakajin, Olgica; Vogel, John S.; Bench, Graham

    2005-01-18

This invention provides a system and method for measuring an energy differential that correlates to a quantitative measurement of the mass of an applied, localized material. Such a system and method remains compatible with other methods of analysis, such as quantitating the elemental or isotopic content, identifying the material, or using the material in biochemical analysis.

  14. Modal analysis applied to circular, rectangular, and coaxial waveguides

    NASA Technical Reports Server (NTRS)

    Hoppe, D. J.

    1988-01-01

    Recent developments in the analysis of various waveguide components and feedhorns using Modal Analysis (Mode Matching Method) are summarized. A brief description of the theory is presented, and the important features of the method are pointed out. Specific examples in circular, rectangular, and coaxial waveguides are included, with comparisons between the theory and experimental measurements. Extensions to the methods are described.

  15. A Method for Measuring International Openness

    ERIC Educational Resources Information Center

    Ferrieri, Gaetano

    2006-01-01

    The author illustrates a method for measuring international openness by bringing forward some examples. The index proposed measures the capacity of countries for a given phenomenon, adjusted for their weight in the phenomena concerned. In this study, the Index is applied to measure the degree of openness to international migration in a number of…
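The abstract does not reproduce the index formula; one plausible, purely assumed form of a weight-adjusted capacity measure is a country's share of the phenomenon divided by its weight share. This is an illustration of the general idea only, not Ferrieri's actual index.

```python
def openness_index(phenomenon_share, weight_share):
    # assumed form for illustration: a country's share of the phenomenon
    # (e.g., of international migrants) divided by its weight (e.g., its
    # share of world population); > 1 means more open than its size implies
    return phenomenon_share / weight_share
```

For example, a country hosting 4% of international migrants while holding 2% of world population would score 2.0 under this assumed form.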

  16. Muscular Activities of an Athlete

    ERIC Educational Resources Information Center

    Wegner, Claas; Gröben, Bernd; Berning, Nane; Tönnesmann, Nora

    2017-01-01

    Interdisciplinary teaching is a teaching method that is not as easy as other teaching methods to integrate into the everyday school schedule. This paper serves as an example and gives explanations on how this didactic approach can be applied. The lessons described were conducted successfully with a Year 11 class at a secondary school in which the…

  17. Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method

    ERIC Educational Resources Information Center

Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Sülzle, Detlev

    2018-01-01

    The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…

  18. Using FTIR-ATR Spectroscopy to Teach the Internal Standard Method

    ERIC Educational Resources Information Center

    Bellamy, Michael K.

    2010-01-01

    The internal standard method is widely applied in quantitative analyses. However, most analytical chemistry textbooks either omit this topic or only provide examples of a single-point internal standardization. An experiment designed to teach students how to prepare an internal standard calibration curve is described. The experiment is a modified…

  19. Least-squares Minimization Approaches to Interpret Total Magnetic Anomalies Due to Spheres

    NASA Astrophysics Data System (ADS)

    Abdelrahman, E. M.; El-Araby, T. M.; Soliman, K. S.; Essa, K. S.; Abo-Ezz, E. R.

    2007-05-01

    We have developed three different least-squares approaches to determine successively: the depth, magnetic angle, and amplitude coefficient of a buried sphere from a total magnetic anomaly. By defining the anomaly value at the origin and the nearest zero-anomaly distance from the origin on the profile, the problem of depth determination is transformed into the problem of finding a solution of a nonlinear equation of the form f(z)=0. Knowing the depth and applying the least-squares method, the magnetic angle and amplitude coefficient are determined using two simple linear equations. In this way, the depth, magnetic angle, and amplitude coefficient are determined individually from all observed total magnetic data. The method is applied to synthetic examples with and without random errors and tested on a field example from Senegal, West Africa. In all cases, the depth solutions are in good agreement with the actual ones.
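The depth step above — finding a root of f(z) = 0 — can be illustrated with a simple bisection solver. The f used below is a hypothetical dipole-like decay chosen only so the bracketed root is known in advance; it is not the authors' actual shape function for the sphere anomaly.

```python
def bisect(f, lo, hi, tol=1e-10):
    # assumes f(lo) and f(hi) bracket a root (opposite signs)
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# hypothetical shape equation for illustration only: a dipole-like
# decay whose ratio equals 0.125 at depth z = 2 (not the paper's f)
f = lambda z: 1.0 / z ** 3 - 0.125
depth = bisect(f, 1.0, 5.0)
```

Once the depth is fixed this way, the remaining parameters reduce to linear least-squares problems, as the abstract describes.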

  20. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters. The model is updated over a sequence of trust regions. This model avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. Minimax solution leads to a suitable initial point to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  1. Method to estimate center of rigidity using vibration recordings

    USGS Publications Warehouse

    Safak, Erdal; Çelebi, Mehmet

    1990-01-01

A method to estimate the center of rigidity of buildings by using vibration recordings is presented. The method is based on the criterion that the coherence of translational motions with the rotational motion is minimum at the center of rigidity. Since the coherence is a function of frequency, a gross but frequency-independent measure of the coherency is defined as the integral of the coherence function over frequency. The center of rigidity is determined by minimizing this integral. The formulation is given for two-dimensional motions. Two examples are presented for the method: a rectangular building with ambient-vibration recordings, and a triangular building with earthquake-vibration recordings. Although the examples given are for buildings, the method can be applied to any structure with two-dimensional motions.
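As a rough time-domain analog of the coherence-integral criterion (a deliberate simplification: plain correlation replaces the frequency-integrated coherence, and rigid-body kinematics relate the records), one can grid-search for the position where the inferred translational motion is least correlated with the rotation. All signals and positions below are synthetic.

```python
import random

def corr(a, b):
    # Pearson correlation of two equal-length signals
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

def center_of_rigidity(v1, theta, x1, candidates):
    # rigid-body kinematics: the translation inferred at x from the
    # record at x1 is v(x) = v1 + (x - x1) * theta; the center is the
    # position where that motion is least correlated with the rotation
    def score(x):
        vx = [vi + (x - x1) * ti for vi, ti in zip(v1, theta)]
        return abs(corr(vx, theta))
    return min(candidates, key=score)

# synthetic demo: true center at x = 3.0, one sensor at x = 0
rng = random.Random(0)
base = [rng.gauss(0.0, 1.0) for _ in range(500)]
theta = [rng.gauss(0.0, 1.0) for _ in range(500)]
v1 = [b - 3.0 * t for b, t in zip(base, theta)]
xs = [i * 0.1 for i in range(61)]
est = center_of_rigidity(v1, theta, 0.0, xs)
```

With finite-length random signals the estimate lands near, not exactly on, the true center, mirroring the gross nature of the integrated-coherence measure in the paper.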

  2. Boundary-integral methods in elasticity and plasticity. [solutions of boundary value problems

    NASA Technical Reports Server (NTRS)

    Mendelson, A.

    1973-01-01

    Recently developed methods that use boundary-integral equations applied to elastic and elastoplastic boundary value problems are reviewed. Direct, indirect, and semidirect methods using potential functions, stress functions, and displacement functions are described. Examples of the use of these methods for torsion problems, plane problems, and three-dimensional problems are given. It is concluded that the boundary-integral methods represent a powerful tool for the solution of elastic and elastoplastic problems.

  3. Water supply management using an extended group fuzzy decision-making method: a case study in north-eastern Iran

    NASA Astrophysics Data System (ADS)

    Minatour, Yasser; Bonakdari, Hossein; Zarghami, Mahdi; Bakhshi, Maryam Ali

    2015-09-01

The purpose of this study was to develop a group fuzzy multi-criteria decision-making method to be applied in rating problems associated with water resources management. Here, Chen's group fuzzy TOPSIS method was extended by a difference technique to handle the uncertainties of group decision making, and the extended group fuzzy TOPSIS method was combined with a consistency check. In the presented method, linguistic judgments are first screened through a consistency-checking process, and these judgments are then used in the extended Chen's fuzzy TOPSIS method. Each expert's opinion is converted to precise mathematical numbers and, to capture the uncertainties, the opinions of the group are converted to fuzzy numbers using three mathematical operators. The proposed method is applied to select the optimal strategy for the rural water supply of Nohoor village in north-eastern Iran, as a case study and illustrative example. Sensitivity analyses of the results and a comparison of the results with the project's actual outcome showed that the proposed method offers good results for water resources projects.
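A crisp (non-fuzzy) TOPSIS ranking, the core procedure the authors extend, can be sketched as follows; the fuzzy-number arithmetic, group aggregation operators, and consistency check of the paper are omitted from this simplification.

```python
def topsis(matrix, weights, benefit):
    # matrix[i][j]: score of alternative i on criterion j;
    # benefit[j]: True if larger is better for criterion j
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each criterion column, then apply weights
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5
             for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    # positive-ideal and negative-ideal (anti-ideal) reference points
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*v))]
    def dist(row, ref):
        return sum((a - b) ** 2 for a, b in zip(row, ref)) ** 0.5
    # closeness coefficient: 1 = at the ideal, 0 = at the anti-ideal
    return [dist(r, anti) / (dist(r, anti) + dist(r, ideal)) for r in v]
```

Alternatives are then ranked by descending closeness coefficient; the fuzzy version replaces the crisp scores with fuzzy numbers and the distances with fuzzy distance measures.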

  4. Medical Image Analysis by Cognitive Information Systems - a Review.

    PubMed

    Ogiela, Lidia; Takizawa, Makoto

    2016-10-01

This publication presents a review of medical image analysis systems. The paradigms of cognitive information systems are presented through examples of medical image analysis systems, showing how semantic processes are applied to different types of medical images. Cognitive information systems were defined on the basis of methods for the semantic analysis and interpretation of information, applied here to extract the cognitive meaning of the medical images contained in the analyzed data sets. Semantic analysis was proposed to analyze the meaning of the data, since meaning is embedded in information such as medical images. Medical image analysis is presented and discussed as applied to various types of medical images showing selected human organs with different pathologies; these images were analyzed using different classes of cognitive information systems. Cognitive information systems dedicated to medical image analysis were also defined for decision-support tasks. Such support is important, for example, in diagnostic and therapeutic processes and in the selection of semantic aspects/features from analyzed data sets; those features allow a new way of analysis to be created.

  5. Applied Use Value of Scientific Information for Management of Ecosystem Services

    NASA Astrophysics Data System (ADS)

    Raunikar, R. P.; Forney, W.; Bernknopf, R.; Mishra, S.

    2012-12-01

The U.S. Geological Survey has developed and applied methods for quantifying the value of scientific information (VOI) that are based on the applied use value of the information. In particular, the applied use value of U.S. Geological Survey information often includes efficient management of ecosystem services. The economic nature of U.S. Geological Survey scientific information is largely equivalent to that of any information, but we focus the application of our VOI quantification methods on the information products provided freely to the public by the U.S. Geological Survey. We describe VOI economics in general and illustrate by referring to previous studies that use the evolving applied use value methods, which include examples of the siting of landfills in Louden County, the mineral exploration efficiencies of finer-resolution geologic maps in Canada, and improved agricultural production and groundwater protection in Eastern Iowa made possible with Landsat moderate-resolution satellite imagery. Finally, we describe the adaptation of the applied use value method to the case of streamgage information used to improve the efficiency of water markets in New Mexico.

  6. Twenty Years On!: Updating the IEA BESTEST Building Thermal Fabric Test Cases for ASHRAE Standard 140

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, R.; Neymark, J.

    2013-07-01

ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, applies the IEA BESTEST building thermal fabric test cases and example simulation results originally published in 1995. These software accuracy test cases and their example simulation results, which comprise the first test suite adapted for the initial 2001 version of Standard 140, are approaching their 20th anniversary. In response to the evolution of the state of the art in building thermal fabric modeling since the test cases and example simulation results were developed, work is commencing to update the normative test specification and the informative example results.

  7. Twenty Years On!: Updating the IEA BESTEST Building Thermal Fabric Test Cases for ASHRAE Standard 140: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Judkoff, R.; Neymark, J.

    2013-07-01

ANSI/ASHRAE Standard 140, Standard Method of Test for the Evaluation of Building Energy Analysis Computer Programs, applies the IEA BESTEST building thermal fabric test cases and example simulation results originally published in 1995. These software accuracy test cases and their example simulation results, which comprise the first test suite adapted for the initial 2001 version of Standard 140, are approaching their 20th anniversary. In response to the evolution of the state of the art in building thermal fabric modeling since the test cases and example simulation results were developed, work is commencing to update the normative test specification and the informative example results.

  8. The Use of Simulation and Cases to Teach Real World Decision Making: Applied Example for Health Care Management Graduate Programs

    ERIC Educational Resources Information Center

    Eisenhardt, Alyson; Ninassi, Susanne Bruno

    2016-01-01

    Many pedagogy experts suggest the use of real world scenarios and simulations as a means of teaching students to apply decision analysis concepts to their field of study. These methods allow students an opportunity to synthesize knowledge, skills, and abilities by presenting a field-based dilemma. The use of real world scenarios and simulations…

  9. Pulsed source ion implantation apparatus and method

    DOEpatents

    Leung, Ka-Ngo

    1996-01-01

    A new pulsed plasma-immersion ion-implantation apparatus implants ions in large, irregularly shaped objects to a controllable depth without overheating the target, while minimizing voltage breakdown and applying a constant electrical bias to the target. Instead of pulsing the voltage applied to the target, the plasma source, for example a tungsten filament or an RF antenna, is pulsed. Both electrically conducting and insulating targets can be implanted.

  10. A new method for finding the minimum free energy pathway of ions and small molecule transportation through protein based on 3D-RISM theory and the string method

    NASA Astrophysics Data System (ADS)

    Yoshida, Norio

    2018-05-01

    A new method for finding the minimum free energy pathway (MFEP) of ions and small molecule transportation through a protein based on the three-dimensional reference interaction site model (3D-RISM) theory combined with the string method has been proposed. The 3D-RISM theory produces the distribution function, or the potential of mean force (PMF), for transporting substances around the given protein structures. By applying the string method to the PMF surface, one can readily determine the MFEP on the PMF surface. The method has been applied to consider the Na+ conduction pathway of channelrhodopsin as an example.

  11. Applying Process Improvement Methods to Clinical and Translational Research: Conceptual Framework and Case Examples.

    PubMed

    Daudelin, Denise H; Selker, Harry P; Leslie, Laurel K

    2015-12-01

    There is growing appreciation that process improvement holds promise for improving quality and efficiency across the translational research continuum, but frameworks for such programs are not often described. The purpose of this paper is to present a framework and case examples of a Research Process Improvement Program implemented at Tufts CTSI. To promote research process improvement, we developed online training seminars, workshops, and in-person consultation models to describe core process improvement principles and methods, demonstrate the use of improvement tools, and illustrate the application of these methods in case examples. We implemented these methods, as well as relational coordination theory, with junior researchers, pilot funding awardees, our CTRC, and CTSI resource and service providers. The program focuses on capacity building to address common process problems and quality gaps that threaten the efficient, timely, and successful completion of clinical and translational studies. © 2015 The Authors. Clinical and Translational Science published by Wiley Periodicals, Inc.

  12. East Europe Report, Economic and Industrial Affairs.

    DTIC Science & Technology

    1984-06-07

    undeniable that even objectivized norms often lack the required quality. Many of them have not been determined by dependable analytical methods ...incentive methods, such as, for example, contract wages and their various modifications, are applied only to a limited extent. Many economic managers...discipline. Method of the Future—Work Team Khozrashchet. We have been testing and gradually expanding work team forms of labor organization and

  13. The Art of Teaching Jungian Analysis.

    ERIC Educational Resources Information Center

    Russell-Chapin, Lori A.; And Others

    1996-01-01

    Teaching Carl Jung's constructs such as individuation can serve as a blueprint for counselor development. Also discussed are mandalas, masks, active imagination, dreams and poetry. Suggestions and examples of teaching methods are described as they apply to counselor education. (KW)

  14. Microscopic Lagrangian description of warm plasmas. III - Nonlinear wave-particle interaction

    NASA Technical Reports Server (NTRS)

    Galloway, J. J.; Crawford, F. W.

    1977-01-01

    The averaged-Lagrangian method is applied to nonlinear wave-particle interactions in an infinite, homogeneous, magnetic-field-free plasma. The specific example of Langmuir waves is considered, and the combined effects of four-wave interactions and wave-particle interactions are treated. It is demonstrated how the latter lead to diffusion in velocity space, and the quasilinear diffusion equation is derived. The analysis is generalized to the random phase approximation. The paper concludes with a summary of the method as applied in Parts 1-3.

  15. Unfolding and unfoldability of digital pulses in the z-domain

    NASA Astrophysics Data System (ADS)

    Regadío, Alberto; Sánchez-Prieto, Sebastián

    2018-04-01

    The unfolding (or deconvolution) technique is used in the development of digital pulse processing systems applied to particle detection. This technique is applied to digital signals obtained by digitization of analog signals that represent the combined response of the particle detectors and the associated signal conditioning electronics. This work describes a technique to determine whether a signal is unfoldable. For unfoldable signals, the characteristics of the unfolding system (unfolder) are presented. Finally, examples of the method applied to a real experimental setup are discussed.

  16. 29 CFR 2530.200b-3 - Determination of service to be credited to employees.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the equivalencies permitted under this section, or the elapsed time method of crediting service permitted under § 2530.200b-9... applied. Thus, for example, a plan may provide that part-time employees are credited under the general...

  17. Highly Accurate Beam Torsion Solutions Using the p-Version Finite Element Method

    NASA Technical Reports Server (NTRS)

    Smith, James P.

    1996-01-01

    A new treatment of the classical beam torsion boundary value problem is applied. Using the p-version finite element method with shape functions based on Legendre polynomials, torsion solutions for generic cross-sections comprised of isotropic materials are developed. Element shape functions for quadrilateral and triangular elements are discussed, and numerical examples are provided.

  18. Paleohydrological methods and some examples from Swedish fluvial environments. II - River meanders.

    USGS Publications Warehouse

    Williams, G.P.

    1984-01-01

    Empirical relations are developed between river-meander features and water-discharge characteristics for 19 reaches along Swedish rivers. In these relations, either average channel width or average radius of curvature of meander arcs can be used to estimate average annual peak discharge and average daily discharge. By accepting certain assumptions, the relations can be applied to other meandering Swedish rivers, present or ancient. The Österdalälven River near Mora is used as an example. -Author

  19. Environmental Response Laboratory Network (ERLN) Basic Ordering Agreement Example

    EPA Pesticide Factsheets

    A BOA is a written instrument of understanding between EPA and a laboratory that contains terms and clauses applying to all future orders, a description of services to be provided, and methods for pricing, issuing, and delivering future orders.

  20. Solution of Fifth-order Korteweg and de Vries Equation by Homotopy perturbation Transform Method using He's Polynomial

    NASA Astrophysics Data System (ADS)

    Sharma, Dinkar; Singh, Prince; Chauhan, Shubha

    2017-06-01

    In this paper, a combined form of the Laplace transform method with the homotopy perturbation method is applied to solve nonlinear fifth-order Korteweg-de Vries (KdV) equations. The method is known as the homotopy perturbation transform method (HPTM). The nonlinear terms can be easily handled by the use of He's polynomials. Two test examples are considered to illustrate the present scheme. Further, the results are compared with the homotopy perturbation method (HPM).
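The fifth-order KdV equations treated by such schemes are commonly members of the family below, where the coefficients a, b, c select particular variants such as the Lax or Sawada-Kotera equations (the specific form used in the paper is not stated in this abstract):

```latex
u_t + a\,u^2 u_x + b\,u_x u_{xx} + c\,u\,u_{xxx} + u_{xxxxx} = 0
```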

  1. Information loss method to measure node similarity in networks

    NASA Astrophysics Data System (ADS)

    Li, Yongli; Luo, Peng; Wu, Chong

    2014-09-01

    Similarity measurement for the network node has been paid increasing attention in the field of statistical physics. In this paper, we propose an entropy-based information loss method to measure node similarity. The model is built on the idea that less information loss is incurred by treating two more similar nodes as the same. The proposed method has relatively low algorithmic complexity, making it less time-consuming and more efficient for dealing with large-scale real-world networks. To clarify its availability and accuracy, the new approach was compared with selected existing approaches on two artificial examples and on synthetic networks. Furthermore, the proposed method is also successfully applied to predict network evolution and to predict unknown nodes' attributes in two application examples.
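The abstract does not give the authors' exact entropy formulation. As an illustrative sketch only, under the assumption that "information loss" from treating two nodes as the same can be proxied by the Jensen-Shannon divergence between their neighbour distributions, a similarity score might look like:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base 2) between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def node_similarity(adj, i, j):
    """Similarity = 1 - information loss incurred by merging nodes i and j,
    with loss proxied by the JS divergence of their neighbour distributions."""
    pi = adj[i] / adj[i].sum()
    pj = adj[j] / adj[j].sum()
    return 1.0 - js_divergence(pi, pj)

# Toy graph: nodes 0 and 1 share identical neighbourhoods; node 2 does not.
adj = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [1, 1, 0, 0],
                [1, 1, 0, 0]], float)
print(node_similarity(adj, 0, 1))  # -> 1.0 (identical neighbourhoods)
print(node_similarity(adj, 0, 2))  # -> 0.0 (disjoint neighbourhoods)
```

The function names and the merging criterion here are assumptions for illustration, not the paper's algorithm.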

  2. Pulsed source ion implantation apparatus and method

    DOEpatents

    Leung, K.N.

    1996-09-24

    A new pulsed plasma-immersion ion-implantation apparatus implants ions in large, irregularly shaped objects to a controllable depth without overheating the target, while minimizing voltage breakdown and applying a constant electrical bias to the target. Instead of pulsing the voltage applied to the target, the plasma source, for example a tungsten filament or an RF antenna, is pulsed. Both electrically conducting and insulating targets can be implanted. 16 figs.

  3. Developing the Model of "Pedagogical Art Communication" Using Social Phenomenological Analysis: An Introduction to a Research Method and an Example for Its Outcome

    ERIC Educational Resources Information Center

    Hofmann, Fabian

    2016-01-01

    Social phenomenological analysis is presented as a research method for museum and art education. After explaining its methodological background, it is shown how this method has been applied in a study of gallery talks or guided tours in art museums: Analyzing the situation by description and interpretation, a model for understanding gallery talks…

  4. Integrals for IBS and beam cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burov, A.; /Fermilab

    Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in stochastic cooling, wake fields, and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied to the dispersion integral. Some methodical aspects of the IBS analysis are discussed.

  5. Integrals for IBS and Beam Cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burov, A.

    Simulation of beam cooling usually requires performing certain integral transformations every time step or so, which is a significant burden on the CPU. Examples are the dispersion integrals (Hilbert transforms) in stochastic cooling, wake fields, and IBS integrals. An original method is suggested for fast and sufficiently accurate computation of the integrals. This method is applied to the dispersion integral. Some methodical aspects of the IBS analysis are discussed.
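The abstract does not specify the fast scheme used, but dispersion integrals of this kind are Hilbert transforms, which can be evaluated in O(N log N) via the FFT rather than by direct quadrature each time step; a minimal sketch using `scipy.signal.hilbert`:

```python
import numpy as np
from scipy.signal import hilbert

# Sample a signal with an integer number of cycles over the window; the
# Hilbert transform shifts each frequency component by -90 degrees, so
# H[cos] = sin, which lets us check the FFT-based evaluation exactly.
t = np.linspace(0.0, 1.0, 4096, endpoint=False)
x = np.cos(2 * np.pi * 50 * t)

analytic = hilbert(x)        # analytic signal x + i*H[x], computed via FFT
hx = np.imag(analytic)       # the Hilbert transform of x
```

For a periodic signal sampled over a whole number of cycles, `hx` matches sin(2*pi*50*t) to machine precision.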

  6. Integrating Science and Engineering to Implement Evidence-Based Practices in Health Care Settings.

    PubMed

    Wu, Shinyi; Duan, Naihua; Wisdom, Jennifer P; Kravitz, Richard L; Owen, Richard R; Sullivan, J Greer; Wu, Albert W; Di Capua, Paul; Hoagwood, Kimberly Eaton

    2015-09-01

    Integrating two distinct and complementary paradigms, science and engineering, may produce more effective outcomes for the implementation of evidence-based practices in health care settings. Science formalizes and tests innovations, whereas engineering customizes and optimizes how the innovation is applied, tailoring it to accommodate local conditions. Together they may accelerate the creation of an evidence-based health care system that works effectively in specific health care settings. We give examples of applying engineering methods for better quality, more efficient, and safer implementation of clinical practices, medical devices, and health services systems. A specific example applied a systems engineering design that orchestrated people, process, data, decision-making, and communication through a technology application to implement evidence-based depression care among low-income patients with diabetes. We recommend that leading journals recognize the fundamental role of engineering in implementation research, to improve understanding of design elements that create a better fit between program elements and local context.

  7. Thermal Pyrolytic Graphite Enhanced Components

    NASA Technical Reports Server (NTRS)

    Hardesty, Robert E. (Inventor)

    2015-01-01

    A thermally conductive composite material, a thermal transfer device made of the material, and a method for making the material are disclosed. Apertures or depressions are formed in aluminum or aluminum alloy. Plugs are formed of thermal pyrolytic graphite. An amount of silicon sufficient for liquid interface diffusion bonding is applied, for example by vapor deposition or use of aluminum silicon alloy foil. The plugs are inserted in the apertures or depressions. Bonding energy is applied, for example by applying pressure and heat using a hot isostatic press. The thermal pyrolytic graphite, aluminum or aluminum alloy and silicon form a eutectic alloy. As a result, the plugs are bonded into the apertures or depressions. The composite material can be machined to produce finished devices such as the thermal transfer device. Thermally conductive planes of the thermal pyrolytic graphite plugs may be aligned in parallel to present a thermal conduction path.

  8. A Gentle Introduction to Bayesian Analysis: Applications to Developmental Research

    PubMed Central

    van de Schoot, Rens; Kaplan, David; Denissen, Jaap; Asendorpf, Jens B; Neyer, Franz J; van Aken, Marcel AG

    2014-01-01

    Bayesian statistical methods are becoming ever more popular in applied and fundamental research. In this study, a gentle introduction to Bayesian analysis is provided. It is shown under what circumstances it is attractive to use Bayesian estimation, and how to interpret the results properly. First, the ingredients underlying Bayesian methods are introduced using a simplified example. Thereafter, the advantages and pitfalls of specifying prior knowledge are discussed. To illustrate the Bayesian methods explained in this study, a second example considers a series of studies that examine the theoretical framework of dynamic interactionism. In the Discussion, the advantages and disadvantages of using Bayesian statistics are reviewed, and guidelines on how to report on Bayesian statistics are provided. PMID:24116396
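The conjugate beta-binomial model is a standard choice for the kind of simplified introductory example the paper describes; a minimal sketch with hypothetical counts:

```python
# Conjugate beta-binomial updating: with a Beta(a, b) prior on a success
# probability and s successes observed in n trials, the posterior is
# Beta(a + s, b + n - s). Prior and data values below are hypothetical.
a, b = 2.0, 2.0            # weakly informative prior
s, n = 7, 10               # observed data: 7 successes in 10 trials

post_a, post_b = a + s, b + n - s
post_mean = post_a / (post_a + post_b)   # posterior mean = 9/14 ~= 0.643
```

The posterior mean sits between the prior mean (0.5) and the sample proportion (0.7), illustrating how prior knowledge and data are combined.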

  9. Scaling Laws Applied to a Modal Formulation of the Aeroservoelastic Equations

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.

    2002-01-01

    A method of scaling is described that easily converts the aeroelastic equations of motion of a full-sized aircraft into those of a wind-tunnel model. To implement the method, a set of rules is provided for the conversion process involving matrix operations with scale factors. In addition, a technique for analytically incorporating a spring mounting system into the aeroelastic equations is also presented. As an example problem, a finite element model of a full-sized aircraft from the High Speed Research (HSR) program is introduced to exercise the scaling method. With a set of scale factor values, a brief outline is given of a procedure to generate the first-order aeroservoelastic analytical model representing the wind-tunnel model. To verify the scaling process as applied to the example problem, the root-locus patterns from the full-sized vehicle and the wind-tunnel model are compared to see whether the root magnitudes scale with the frequency scale factor value. Selected time-history results are given from a numerical simulation of an actively controlled wind-tunnel model to demonstrate the utility of the scaling process.

  10. A Note on Multigrid Theory for Non-nested Grids and/or Quadrature

    NASA Technical Reports Server (NTRS)

    Douglas, C. C.; Douglas, J., Jr.; Fyfe, D. E.

    1996-01-01

    We provide a unified theory for multilevel and multigrid methods when the usual assumptions are not present. For example, we do not assume that the solution spaces or the grids are nested. Further, we do not assume that there is an algebraic relationship between the linear algebra problems on different levels. What we provide is a computationally useful theory for adaptively changing levels. Theory is provided for multilevel correction schemes, nested iteration schemes, and one way (i.e., coarse to fine grid with no correction iterations) schemes. We include examples showing the applicability of this theory: finite element examples using quadrature in the matrix assembly and finite volume examples with non-nested grids. Our theory applies directly to other discretizations as well.

  11. Ground-based measurements of ionospheric dynamics

    NASA Astrophysics Data System (ADS)

    Kouba, Daniel; Chum, Jaroslav

    2018-05-01

    Different methods are used to research and monitor ionospheric dynamics from ground-based measurements: Digisonde Drift Measurements (DDM) and Continuous Doppler Sounding (CDS). For the first time, we present a comparison between the two methods on specific examples. Both methods provide information about the vertical drift velocity component. The DDM provides more information about the drift velocity vector and the detected reflection points; however, the method is limited by its relatively low time resolution. In contrast, the strength of CDS is its high time resolution. The discussed methods can be used for real-time monitoring of medium-scale travelling ionospheric disturbances. We conclude that it is advantageous to use both methods simultaneously if possible: CDS is then applied for disturbance detection and analysis, and DDM is applied for reflection height control.

  12. Assessing ConnDOT's portland cement concrete testing methods phase II : field trials and implementation.

    DOT National Transportation Integrated Search

    2012-04-01

    This paper presents a description of efforts to disseminate findings from the Phase I study (SPR-2244), provides examples of applied maturity testing and temperature monitoring in Connecticut, reviews several State Highway Agency protocols for using ...

  13. Solid State Kinetic Parameters and Chemical Mechanism of the Dehydration of CoCl2·6H2O.

    ERIC Educational Resources Information Center

    Ribas, Joan; And Others

    1988-01-01

    Presents an experimental example illustrating the most common methods for the determination of kinetic parameters. Discusses the different theories and equations to be applied and the mechanism derived from the kinetic results. (CW)
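The record does not name the equations used, but the most common route to kinetic parameters is an Arrhenius fit of ln k against 1/T; a sketch with synthetic rate constants (all values illustrative):

```python
import numpy as np

# Arrhenius law: k = A * exp(-Ea / (R * T)), so ln k is linear in 1/T with
# slope -Ea/R and intercept ln A. Fitting that line recovers Ea and A.
R = 8.314                                   # gas constant, J/(mol K)
T = np.array([300.0, 320.0, 340.0, 360.0])  # hypothetical temperatures (K)

Ea_true, A_true = 52_000.0, 1.0e9           # assumed "true" parameters
k = A_true * np.exp(-Ea_true / (R * T))     # synthetic rate constants

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea_est = -slope * R                         # estimated activation energy
A_est = np.exp(intercept)                   # estimated pre-exponential factor
```

With noise-free synthetic data the fit recovers the assumed parameters essentially exactly; real thermogravimetric data would scatter about the line.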

  14. Development of stable isotope mixing models in ecology - Dublin

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  15. Historical development of stable isotope mixing models in ecology

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  16. Development of stable isotope mixing models in ecology - Perth

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  17. Development of stable isotope mixing models in ecology - Fremantle

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...

  18. Development of stable isotope mixing models in ecology - Sydney

    EPA Science Inventory

    More than 40 years ago, stable isotope analysis methods used in geochemistry began to be applied to ecological studies. One common application is using mathematical mixing models to sort out the proportional contributions of various sources to a mixture. Examples include contri...
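The simplest such mixing model, with two sources and one isotope tracer, solves a single linear mixing equation for the proportional contributions; a sketch with hypothetical delta-13C values:

```python
def two_source_mixing(d_mix, d_src1, d_src2):
    """Proportional contributions of two sources to a mixture for a single
    isotope tracer, from the linear mixing equation
    d_mix = p1*d_src1 + (1 - p1)*d_src2."""
    p1 = (d_mix - d_src2) / (d_src1 - d_src2)
    return p1, 1.0 - p1

# Hypothetical d13C values (per mil): a consumer at -24, feeding on two
# plant sources at -28 (C3) and -13 (C4).
p1, p2 = two_source_mixing(-24.0, -28.0, -13.0)   # p1 = 11/15 ~= 0.733
```

Models with more sources or tracers generalize this to a (possibly underdetermined) linear system, which is where the statistical machinery of modern mixing models comes in.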

  19. Habitat suitability criteria via parametric distributions: estimation, model selection and uncertainty

    USGS Publications Warehouse

    Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.

    2016-01-01

    Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tshawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
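The authors provide their estimation code in R; as an independent sketch of the same idea in Python, candidate probability density functions can be fitted to raw habitat-use data by maximum likelihood and compared with an information criterion (the synthetic data and candidate set below are assumptions):

```python
import numpy as np
from scipy import stats

# Synthetic depth-use observations (m); a real analysis would use field data.
rng = np.random.default_rng(0)
depths = rng.gamma(shape=4.0, scale=0.3, size=200)

# Candidate parametric forms for the HSC curve.
candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm, "norm": stats.norm}

aic = {}
for name, dist in candidates.items():
    params = dist.fit(depths)                      # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(depths, *params))
    aic[name] = 2 * len(params) - 2 * loglik       # Akaike information criterion

best = min(aic, key=aic.get)                       # lowest AIC wins
```

The fitted density for the selected family, normalized to a 0-1 suitability scale, plays the role of the HSC curve; bootstrap refits of the same code give the estimation uncertainty the paper emphasizes.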

  20. A Comparison of Best Fit Lines for Data with Outliers

    ERIC Educational Resources Information Center

    Glaister, P.

    2005-01-01

    Three techniques for determining a straight line fit to data are compared. The methods are applied to a range of datasets containing one or more outliers, and to a specific example from the field of chemistry. For the method which is the most resistant to the presence of outliers, a Microsoft Excel spreadsheet, as well as two Matlab routines, are…
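The article's truncated abstract does not name the three techniques compared; one standard outlier-resistant alternative to ordinary least squares is the least-absolute-deviations (LAD) fit, sketched here via iteratively reweighted least squares on synthetic data with a single gross outlier (the data and IRLS scheme are illustrative assumptions, not the article's code):

```python
import numpy as np

def fit_line_ols(x, y):
    """Ordinary least-squares straight-line fit; returns (slope, intercept)."""
    return np.polyfit(x, y, 1)

def fit_line_lad(x, y, iters=100, eps=1e-8):
    """Least-absolute-deviations fit via iteratively reweighted least squares.
    Each pass solves a weighted LS problem with weights 1/|residual|, which
    converges toward the L1-optimal line and resists outliers."""
    m, c = fit_line_ols(x, y)
    A = np.vstack([x, np.ones_like(x)]).T
    for _ in range(iters):
        r = np.abs(y - (m * x + c))
        s = 1.0 / np.sqrt(np.maximum(r, eps))   # row scaling => weights 1/|r|
        m, c = np.linalg.lstsq(A * s[:, None], y * s, rcond=None)[0]
    return m, c

# Nine points exactly on y = 2x + 1, plus one gross outlier.
x = np.arange(10, dtype=float)
y = 2.0 * x + 1.0
y[9] += 50.0

m_ols, c_ols = fit_line_ols(x, y)   # slope dragged well above 2 by the outlier
m_lad, c_lad = fit_line_lad(x, y)   # slope stays essentially at 2
```

The LAD line passes through the nine collinear points and ignores the outlier, while the least-squares slope is pulled far from the truth.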

  1. Methodology for applying monitored natural attenuation to petroleum hydrocarbon-contaminated ground-water systems with examples from South Carolina

    USGS Publications Warehouse

    Chapelle, Frank H.; Robertson, John F.; Landmeyer, James E.; Bradley, Paul M.

    2000-01-01

    These two sites illustrate how the efficiency of natural attenuation processes acting on petroleum hydrocarbons can be systematically evaluated using hydrologic, geochemical, and microbiologic methods. These methods, in turn, can be used to assess the role that the natural attenuation of petroleum hydrocarbons can play in achieving overall site remediation.

  2. Study Unveils New Method for Universal Extraction and PCR Amplification of Fungal DNA

    DTIC Science & Technology

    2014-06-12

    Wickes noted that there are methods to extract fungi from soil, for example, and "once you get down to pure DNA, everything else is the same," he said...rare or hard-to-identify fungal infections. The new extraction and amplification method can be universally applied to fungi, according to the...best treatments. In addition, rare fungi, or species with phenotypic doppelgangers, can stump medical mycologists, so molecular methods are critical

  3. The dream interview method in addiction recovery. A treatment guide.

    PubMed

    Flowers, L K; Zweben, J E

    1996-01-01

    The Dream Interview Method is a recently developed tool for dream interpretation that can facilitate work on addiction issues at all stages of recovery. This paper describes the method in detail and discusses examples of its application in a group composed of individuals in varying stages of the recovery process. It permits the therapist to accelerate the development of insight, and once the method is learned, it can be applied in self-help formats.

  4. A refined method for multivariate meta-analysis and meta-regression.

    PubMed

    Jackson, Daniel; Riley, Richard D

    2014-02-20

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects' standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. Copyright © 2013 John Wiley & Sons, Ltd.
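The univariate refinement referred to, a scaling factor applied to the estimated effect's standard error, resembles the Hartung-Knapp adjustment; a minimal sketch under that assumption, using DerSimonian-Laird estimation of the between-study variance and toy data:

```python
import numpy as np

def random_effects_hk(effects, variances):
    """Univariate random-effects meta-analysis (DerSimonian-Laird tau^2) with a
    Hartung-Knapp-type scaling factor applied to the standard error.
    This is an assumed interpretation of the 'refined method', not the
    authors' exact multivariate estimator."""
    y, v = np.asarray(effects, float), np.asarray(variances, float)
    k = len(y)
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)              # between-study variance
    w_star = 1.0 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)        # pooled effect
    se_conv = np.sqrt(1.0 / np.sum(w_star))         # conventional standard error
    # Scaling factor: weighted residual mean square about the pooled effect.
    q_star = np.sum(w_star * (y - mu) ** 2) / (k - 1)
    se_hk = se_conv * np.sqrt(q_star)               # scaled standard error
    return mu, se_conv, se_hk

# Hypothetical study effects and within-study variances.
effects = [0.28, 0.45, 0.12, 0.60, 0.33]
variances = [0.04, 0.09, 0.05, 0.12, 0.07]
mu, se_conv, se_hk = random_effects_hk(effects, variances)
```

With few studies the scaled standard error (combined with a t rather than normal reference distribution) gives intervals with closer to nominal coverage, which is the motivation the abstract describes.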

  5. Time-Parallel Solutions to Ordinary Differential Equations on GPUs with a New Functional Optimization Approach Related to the Sobolev Gradient Method

    DTIC Science & Technology

    2012-10-01

    black and approximations in cyan and magenta. The second ODE is the pendulum equation, given by: This ODE was also implemented using Crank...The drawback of approaches like the one proposed can be observed with a very simple example. Suppose vector is found by applying 4 linear...public release; distribution unlimited Figure 2. A phase space plot of the Pendulum example. Fine solution (black) contains 32768 time steps

  6. Simulation methods to estimate design power: an overview for applied research

    PubMed Central

    2011-01-01

    Background Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447
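The basic recipe the paper reviews, simulate the design many times, test each replicate, and report the rejection fraction, can be sketched for a simple two-arm comparison of means (a z-test with known standard deviation is assumed for brevity; the paper's own examples use R and Stata):

```python
import numpy as np

def simulated_power(n_per_arm, effect, sd=1.0, n_sims=2000, seed=1):
    """Estimate the power of a two-arm comparison of means by simulation:
    generate data under the assumed effect, apply a two-sided 5% z-test
    (known sd) to each replicate, and return the rejection fraction."""
    rng = np.random.default_rng(seed)
    se = sd * np.sqrt(2.0 / n_per_arm)          # SE of the difference in means
    crit = 1.959963984540054                    # two-sided 5% normal critical value
    rejections = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, sd, n_per_arm)
        b = rng.normal(effect, sd, n_per_arm)
        z = (b.mean() - a.mean()) / se
        rejections += abs(z) > crit
    return rejections / n_sims

# For n = 64 per arm and a 0.5-SD effect, analytical power is about 0.81;
# the simulation should land close to that.
power = simulated_power(64, 0.5)
```

Complex designs (clustering, attrition, non-normal outcomes) are handled by changing only the data-generation step, which is exactly the flexibility the paper emphasizes.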

  7. Quantification of Microbial Phenotypes

    PubMed Central

    Martínez, Verónica S.; Krömer, Jens O.

    2016-01-01

    Metabolite profiling technologies have improved to generate close to quantitative metabolomics data, which can be employed to quantitatively describe the metabolic phenotype of an organism. Here, we review the current technologies available for quantitative metabolomics, present their advantages and drawbacks, and the current challenges to generate fully quantitative metabolomics data. Metabolomics data can be integrated into metabolic networks using thermodynamic principles to constrain the directionality of reactions. Here we explain how to estimate Gibbs energy under physiological conditions, including examples of the estimations, and the different methods for thermodynamics-based network analysis. The fundamentals of the methods and how to perform the analyses are described. Finally, an example applying quantitative metabolomics to a yeast model by 13C fluxomics and thermodynamics-based network analysis is presented. The example shows that (1) these two methods are complementary to each other; and (2) there is a need to take into account Gibbs energy errors. Better estimations of metabolic phenotypes will be obtained when further constraints are included in the analysis. PMID:27941694
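The Gibbs energy estimation under physiological conditions that the review describes reduces to dG = dG0' + RT ln Q, with the mass-action ratio Q built from measured metabolite concentrations; a sketch with illustrative (not measured) values:

```python
import math

# Transformed reaction Gibbs energy under physiological conditions:
#   dG = dG0' + R*T*ln(Q)
# where Q is the mass-action ratio from metabolite concentrations.
# All numeric values below are illustrative assumptions.
R = 8.314e-3          # kJ/(mol K)
T = 310.15            # 37 C in kelvin
dG0 = 1.7             # kJ/mol, e.g. a near-equilibrium isomerization

product, substrate = 8.0e-5, 8.3e-4   # molar concentrations
Q = product / substrate
dG = dG0 + R * T * math.log(Q)        # negative => forward direction feasible
```

Even with a slightly positive standard Gibbs energy, the low product-to-substrate ratio makes dG negative, which is how metabolomics data constrain reaction directionality in thermodynamics-based network analysis.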

  8. 1/f noise application in reflexotherapy

    NASA Astrophysics Data System (ADS)

    Ostrova, S. O.; Bulgakov, A. E.; Klyushkin, I. V.; Abdullina, A. M.

    1993-08-01

    The instruments and methods that make it possible to apply 1/f noise in reflexotherapy, by the action of physical factors on biologically active points, are described. The efficiency of their application is shown using the example of treating gastric and duodenal ulcers without medication.
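The abstract does not describe how the instruments generate 1/f noise; a common software approach, given here as a sketch and not the instrument's actual method, shapes white noise's spectrum with a 1/sqrt(f) filter in the frequency domain so that the power spectrum falls off as 1/f:

```python
import numpy as np

def pink_noise(n, seed=0):
    """Generate approximately 1/f ('pink') noise: transform white noise to
    the frequency domain, scale each bin's amplitude by 1/sqrt(f) so power
    goes as 1/f, and transform back."""
    rng = np.random.default_rng(seed)
    white = rng.normal(size=n)
    spec = np.fft.rfft(white)
    f = np.fft.rfftfreq(n)
    f[0] = f[1]                 # avoid division by zero at the DC bin
    spec /= np.sqrt(f)          # amplitude ~ 1/sqrt(f)  =>  power ~ 1/f
    x = np.fft.irfft(spec, n)
    return x / np.std(x)        # normalize to unit variance
```

A spectral check of the output shows far more power at low frequencies than high, the defining 1/f signature.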

  9. Network analysis of a financial market based on genuine correlation and threshold method

    NASA Astrophysics Data System (ADS)

    Namaki, A.; Shirazi, A. H.; Raei, R.; Jafari, G. R.

    2011-10-01

    A financial market is an example of an adaptive complex network consisting of many interacting units. This network reflects the market's behavior. In this paper, we use the Random Matrix Theory (RMT) notion of identifying the largest eigenvector of the correlation matrix as the market mode of the stock network. For better risk management, we clean the correlation matrix by removing the market mode from the data and then reconstruct the matrix from the residuals. We show that this technique has an important effect on the correlation coefficient distribution by applying it to the Dow Jones Industrial Average (DJIA). To study the topological structure of a network, we apply the market-mode-removal technique and the threshold method to the Tehran Stock Exchange (TSE) as an example. We show that this network follows a power-law model in certain intervals. We also show the behavior of clustering coefficients and component numbers of this network for different thresholds. These outputs are useful for both theoretical and practical purposes, such as asset allocation and risk management.
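The market-mode-removal step can be sketched on synthetic one-factor returns: take the largest eigenvector of the correlation matrix as the market mode, regress each return series on its projection, and recompute correlations from the residuals (the synthetic data and regression-based removal are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic returns: a common "market mode" plus idiosyncratic noise.
n_stocks, n_days = 20, 500
market = rng.normal(0.0, 1.0, n_days)
returns = 0.8 * market + rng.normal(0.0, 1.0, (n_stocks, n_days))

corr = np.corrcoef(returns)
evals, evecs = np.linalg.eigh(corr)     # eigh returns ascending eigenvalues
u = evecs[:, -1]                        # market mode = largest eigenvector

# Remove the market mode: regress each series on the mode's time series
# and keep the residuals, then rebuild the correlation matrix from them.
mode_ts = u @ returns
beta = (returns @ mode_ts) / (mode_ts @ mode_ts)
residuals = returns - np.outer(beta, mode_ts)
corr_clean = np.corrcoef(residuals)
```

Off-diagonal correlations collapse toward zero after cleaning, so a threshold network built from `corr_clean` reflects genuine pairwise structure rather than the common market factor.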

  10. Development of a probabilistic analysis methodology for structural reliability estimation

    NASA Technical Reports Server (NTRS)

    Torng, T. Y.; Wu, Y.-T.

    1991-01-01

    A novel probabilistic analysis method for assessing structural reliability is presented, combining fast convolution with an efficient structural reliability analysis. After identifying the most important point of a limit state, the method establishes a quadratic performance function, transforms it into a linear one, and applies fast convolution. The method is applicable to problems requiring computer-intensive structural analysis. Five illustrative examples of the method's application are given.

  11. Automated Analysis of Renewable Energy Datasets ('EE/RE Data Mining')

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian; Elmore, Ryan; Getman, Dan

    This poster illustrates methods to substantially improve the understanding of renewable energy data sets and the depth and efficiency of their analysis through the application of statistical learning methods ('data mining') in the intelligent processing of these often large and messy information sources. The six examples apply methods for anomaly detection, data cleansing, and pattern mining to time-series data (measurements from metering points in buildings) and spatiotemporal data (renewable energy resource datasets).

  12. Advances and future directions of research on spectral methods

    NASA Technical Reports Server (NTRS)

    Patera, A. T.

    1986-01-01

    Recent advances in spectral methods are briefly reviewed and characterized with respect to their convergence and computational complexity. Classical finite element and spectral approaches are then compared, and spectral element (or p-type finite element) approximations are introduced. The method is applied to the full Navier-Stokes equations, and examples are given of the application of the technique to several transitional flows. Future directions of research in the field are outlined.

  13. Methods for resistive switching of memristors

    DOEpatents

    Mickel, Patrick R.; James, Conrad D.; Lohn, Andrew; Marinella, Matthew; Hsia, Alexander H.

    2016-05-10

    The present invention is directed generally to resistive random-access memory (RRAM or ReRAM) devices and systems, as well as methods of employing a thermal resistive model to understand and determine switching of such devices. In a particular example, the method includes generating a power-resistance measurement for the memristor device and applying an isothermal model to the power-resistance measurement in order to determine one or more parameters of the device (e.g., filament state).

  14. Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems

    PubMed Central

    Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J.

    2017-01-01

    Motivation: Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences between individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. Results: In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach, to describe extrinsic and external variability, and the τ-leaping algorithm, to account for the stochasticity due to probabilistic reactions. Comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage at the cost of an acceptable loss of accuracy. Additionally, we show an application to parameter optimization problems in stochastic biochemical reaction networks, which is rarely attempted because of its huge computational burden. To give further insight, a MATLAB script is provided that applies the proposed method to a simple toy example of gene expression. Availability and implementation: MATLAB code is available at Bioinformatics online. Contact: flassig@mpi-magdeburg.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28881987
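
    The τ-leaping half of the proposed hybrid can be illustrated on a gene-expression toy model of the kind the paper's MATLAB script covers. This is an editor's sketch, not the authors' code; the rate constants are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)

def tau_leap_gene_expression(k=10.0, gamma=0.1, tau=0.1, t_end=200.0):
    """Tau-leaping for a birth-death model of mRNA copy number m:
    production at constant rate k, degradation at rate gamma * m.
    Per leap of length tau, event counts are Poisson-distributed."""
    m, t, traj = 0, 0.0, []
    while t < t_end:
        births = rng.poisson(k * tau)
        deaths = rng.poisson(gamma * m * tau)
        m = max(m + births - deaths, 0)  # guard against negative copy numbers
        t += tau
        traj.append(m)
    return np.array(traj)

traj = tau_leap_gene_expression()
# Steady-state mean copy number should approach k/gamma = 100
print(traj[len(traj) // 2:].mean())
```

In the full method of the paper, extrinsic and external parameter variability would additionally be propagated through sigma points around runs like this one.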

  15. Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin

    2017-01-02

    In cases where field or experimental measurements are not available, computer models can model real physical or engineering systems to reproduce their outcomes. They are usually calibrated in light of experimental data to create a better representation of the real system. Statistical methods, based on Gaussian processes, for calibration and prediction have been especially important when the computer models are expensive and experimental data limited. In this paper, we develop the Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationary computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input space, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The estimation of the parameters in the proposed model is carried out using Markov chain Monte Carlo (MCMC) computational techniques. Different strategies have been applied to improve mixing. We illustrate our method in two artificial examples and a real application that concerns the capture of carbon dioxide with AX amine-based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.

  16. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap-transform method of QR-decomposition are given, the algorithms are described in detail, and MATLAB-based codes are included.
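
    The classical Givens-rotation route to QR-decomposition, which the paper's heap-transform method is compared against, can be sketched in NumPy. This is an editor's illustration of the classical method for real matrices, not the heap transform, and the paper's own codes are MATLAB-based.

```python
import numpy as np

def givens_qr(a):
    """QR-decomposition of a real m x n matrix by Givens rotations:
    each rotation zeroes one subdiagonal entry of the working copy."""
    m, n = a.shape
    q, r = np.eye(m), a.astype(float).copy()
    for j in range(n):
        for i in range(m - 1, j, -1):
            x, y = r[i - 1, j], r[i, j]
            norm = np.hypot(x, y)
            if norm == 0.0:
                continue
            c, s = x / norm, y / norm
            g = np.array([[c, s], [-s, c]])       # 2x2 Givens rotation
            r[[i - 1, i], :] = g @ r[[i - 1, i], :]
            q[:, [i - 1, i]] = q[:, [i - 1, i]] @ g.T  # accumulate Q = G1^T...Gk^T
    return q, r

a = np.array([[4.0, 1.0], [3.0, 2.0], [0.0, 5.0]])
q, r = givens_qr(a)
print(np.allclose(q @ r, a), np.allclose(q.T @ q, np.eye(3)))
```

Each rotation touches only two rows, which is what makes Givens-type schemes attractive for sparse and structured matrices.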

  17. Acquiring 3-D information about thick objects from differential interference contrast images using texture extraction

    NASA Astrophysics Data System (ADS)

    Sierra, Heidy; Brooks, Dana; Dimarzio, Charles

    2010-07-01

    The extraction of 3-D morphological information about thick objects is explored in this work. We extract this information from 3-D differential interference contrast (DIC) images by applying a texture detection method. Texture extraction methods have been successfully used in different applications to study biological samples. A 3-D texture image is obtained by applying a local entropy-based texture extraction method. The use of this method to detect regions of blastocyst mouse embryos that are used in assisted reproduction techniques such as in vitro fertilization is presented as an example. Results demonstrate the potential of using texture detection methods to improve morphological analysis of thick samples, which is relevant to many biomedical and biological studies. Fluorescence and optical quadrature microscope phase images are used for validation.

  18. Methods for the Analysis of Protein Phosphorylation-Mediated Cellular Signaling Networks

    NASA Astrophysics Data System (ADS)

    White, Forest M.; Wolf-Yadlin, Alejandro

    2016-06-01

    Protein phosphorylation-mediated cellular signaling networks regulate almost all aspects of cell biology, including the responses to cellular stimulation and environmental alterations. These networks are highly complex and comprise hundreds of proteins and potentially thousands of phosphorylation sites. Multiple analytical methods have been developed over the past several decades to identify proteins and protein phosphorylation sites regulating cellular signaling, and to quantify the dynamic response of these sites to different cellular stimuli. Here we provide an overview of these methods, including the fundamental principles governing each method, their relative strengths and weaknesses, and some examples of how each method has been applied to the analysis of complex signaling networks. When applied correctly, each of these techniques can provide insight into the topology, dynamics, and regulation of protein phosphorylation signaling networks.

  19. Theoretical and numerical difficulties in 3-D vector potential methods in finite element magnetostatic computations

    NASA Technical Reports Server (NTRS)

    Demerdash, N. A.; Wang, R.

    1990-01-01

    This paper describes the results of applying three well-known 3D magnetic vector potential (MVP) based finite element formulations to the computation of magnetostatic fields in electrical devices. The three methods were identically applied to three practical examples, the first of which contains only one medium (free space), while the second and third contain a mix of free space and iron. The first of these methods is based on the unconstrained curl-curl of the MVP, while the second and third are predicated upon constraining the divergence of the MVP to zero (the Coulomb gauge). It was found that the two latter methods cease to give useful and meaningful results when the global solution region contains a mix of media of high and low permeabilities. Furthermore, it was found that their results do not achieve the intended zero constraint on the divergence of the MVP.

  20. [Application of Stata software to test heterogeneity in meta-analysis method].

    PubMed

    Wang, Dan; Mou, Zhen-yun; Zhai, Jun-xia; Zong, Hong-xia; Zhao, Xiao-dong

    2008-07-01

    To introduce the application of Stata software to heterogeneity test in meta-analysis. A data set was set up according to the example in the study, and the corresponding commands of the methods in Stata 9 software were applied to test the example. The methods used were Q-test and I2 statistic attached to the fixed effect model forest plot, H statistic and Galbraith plot. The existence of the heterogeneity among studies could be detected by Q-test and H statistic and the degree of the heterogeneity could be detected by I2 statistic. The outliers which were the sources of the heterogeneity could be spotted from the Galbraith plot. Heterogeneity test in meta-analysis can be completed by the four methods in Stata software simply and quickly. H and I2 statistics are more robust, and the outliers of the heterogeneity can be clearly seen in the Galbraith plot among the four methods.
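
    The statistics named in this record (Q, H and I2) are simple to compute directly; a minimal sketch follows, with hypothetical study effect sizes and variances in place of the study's example data set.

```python
import numpy as np

def heterogeneity(effects, variances):
    """Cochran's Q, the H statistic and I^2 for k study effect sizes
    with within-study variances, using fixed-effect inverse-variance weights."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances
    pooled = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - pooled) ** 2)   # Cochran's Q
    k = len(effects)
    h = np.sqrt(q / (k - 1))                  # H: ratio of Q to its expectation
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0  # percent
    return q, h, i2

# Hypothetical log odds ratios and variances from five studies
q, h, i2 = heterogeneity([0.10, 0.30, 0.35, 0.65, 1.10],
                         [0.04, 0.05, 0.03, 0.06, 0.08])
print(f"Q={q:.2f}, H={h:.2f}, I2={i2:.1f}%")
```

Values of H above 1 and I2 above roughly 50% signal substantial heterogeneity, which in the Stata workflow would prompt a look at the Galbraith plot for outlying studies.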

  1. Effectiveness of project ACORDE materials: applied evaluative research in a preclinical technique course.

    PubMed

    Shugars, D A; Trent, P J; Heymann, H O

    1979-08-01

    Two instructional strategies, the traditional lecture method and a standardized self-instructional (ACORDE) format, were compared for efficiency and perceived usefulness in a preclinical restorative dentistry technique course through the use of a posttest-only control group research design. Control and experimental groups were compared on (a) technique grades, (b) didactic grades, (c) amount of time spent, (d) student and faculty perceptions, and (e) observation of social dynamics. The results of this study demonstrated the effectiveness of Project ACORDE materials in teaching dental students, provided an example of applied research designed to test contemplated instructional innovations prior to use and used a method which highlighted qualitative, as well as quantitative, techniques for data gathering in applied research.

  2. Formal Methods for Life-Critical Software

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Johnson, Sally C.

    1993-01-01

    The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited.

  3. A new Lagrangian random choice method for steady two-dimensional supersonic/hypersonic flow

    NASA Technical Reports Server (NTRS)

    Loh, C. Y.; Hui, W. H.

    1991-01-01

    Glimm's (1965) random choice method has been successfully applied to compute steady two-dimensional supersonic/hypersonic flow using a new Lagrangian formulation. The method is easy to program, fast to execute, yet it is very accurate and robust. It requires no grid generation, resolves slipline and shock discontinuities crisply, can handle boundary conditions most easily, and is applicable to hypersonic as well as supersonic flow. It represents an accurate and fast alternative to the existing Eulerian methods. Many computed examples are given.

  4. Planar dielectric waveguides in rotation are optical fibers: comparison with the classical model.

    PubMed

    Peña García, Antonio; Pérez-Ocón, Francisco; Jiménez, José Ramón

    2008-01-21

    A novel and simpler method to calculate the main parameters in fiber optics is presented. The method is based on a planar dielectric waveguide in rotation and, as an example, is applied to calculate the turning points and the inner caustic in an optical fiber with a parabolic refractive index. It is shown that the solution found using this method agrees with the standard (and more complex) method, whose solutions for these points are also summarized in this paper.

  5. A method of using cluster analysis to study statistical dependence in multivariate data

    NASA Technical Reports Server (NTRS)

    Borucki, W. J.; Card, D. H.; Lyle, G. C.

    1975-01-01

    A technique is presented that uses both cluster analysis and a Monte Carlo significance test of clusters to discover associations between variables in multidimensional data. The method is applied to an example of a noisy function in three-dimensional space, to a sample from a mixture of three bivariate normal distributions, and to the well-known Fisher's Iris data.

  6. A sensitivity equation approach to shape optimization in fluid flows

    NASA Technical Reports Server (NTRS)

    Borggaard, Jeff; Burns, John

    1994-01-01

    A sensitivity equation method is applied to shape optimization problems. An algorithm is developed and tested on the problem of designing optimal forebody simulators for a 2D, inviscid supersonic flow. The algorithm uses a BFGS/trust-region optimization scheme with sensitivities computed by numerically approximating the linear partial differential equations that determine the flow sensitivities. Numerical examples are presented to illustrate the method.

  7. Disentangling giant component and finite cluster contributions in sparse random matrix spectra.

    PubMed

    Kühn, Reimer

    2016-04-01

    We describe a method for disentangling giant component and finite cluster contributions to sparse random matrix spectra, using sparse symmetric random matrices defined on Erdős-Rényi graphs as an example and test bed. Our methods apply to sparse matrices defined in terms of arbitrary graphs in the configuration model class, as long as they have finite mean degree.

  8. A simple plug-in bagging ensemble based on threshold-moving for classifying binary and multiclass imbalanced data.

    PubMed

    Collell, Guillem; Prelec, Drazen; Patil, Kaustubh R

    2018-01-31

    Class imbalance presents a major hurdle in the application of classification methods. A commonly taken approach is to learn ensembles of classifiers using rebalanced data. Examples include bootstrap averaging (bagging) combined with either undersampling or oversampling of the minority class examples. However, rebalancing methods entail asymmetric changes to the examples of different classes, which in turn can introduce their own biases. Furthermore, these methods often require specifying the performance measure of interest a priori, i.e., before learning. An alternative is to employ the threshold-moving technique, which applies a threshold to the continuous output of a model, offering the possibility to adapt to a performance measure a posteriori, i.e., as a plug-in method. Surprisingly, little attention has been paid to this combination of a bagging ensemble and threshold-moving. In this paper, we study this combination and demonstrate its competitiveness. Contrary to the resampling methods, we preserve the natural class distribution of the data, resulting in well-calibrated posterior probabilities. Additionally, we extend the proposed method to handle multiclass data. We validated our method on binary and multiclass benchmark data sets using both decision trees and neural networks as base classifiers. We perform analyses that provide insights into the proposed method.
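
    The core combination, bagging on plain bootstrap samples (so the natural class distribution is preserved) plus a posteriori threshold-moving, can be sketched with a toy base learner. The nearest-centroid scorer and the F1-maximizing threshold search below are the editor's simplifications; the paper uses decision trees and neural networks as base classifiers.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_centroid_scorer(x, y):
    """Toy base learner: score = distance to class-0 mean minus distance
    to class-1 mean (larger score => more likely class 1)."""
    mu0, mu1 = x[y == 0].mean(axis=0), x[y == 1].mean(axis=0)
    return lambda z: np.linalg.norm(z - mu0, axis=1) - np.linalg.norm(z - mu1, axis=1)

def bagged_scores(x, y, x_test, n_bags=25):
    """Bagging over plain bootstrap samples (class distribution preserved)."""
    scores = np.zeros(len(x_test))
    for _ in range(n_bags):
        idx = rng.integers(0, len(x), len(x))
        scores += fit_centroid_scorer(x[idx], y[idx])(x_test)
    return scores / n_bags

def best_f1_threshold(scores, y):
    """Threshold-moving: pick the cutoff maximizing F1 a posteriori."""
    best = (0.0, -np.inf)
    for t in np.unique(scores):
        pred = scores >= t
        tp = np.sum(pred & (y == 1))
        f1 = 2 * tp / (pred.sum() + (y == 1).sum()) if tp else 0.0
        if f1 > best[0]:
            best = (f1, t)
    return best

# Imbalanced toy data: roughly 5% positives, shifted in feature space
n = 1000
y = (rng.random(n) < 0.05).astype(int)
x = rng.normal(size=(n, 2)) + 2.5 * y[:, None]
scores = bagged_scores(x, y, x)
f1, thr = best_f1_threshold(scores, y)
print(f1, thr)
```

Because the threshold is chosen after learning, the same bagged scores could equally be thresholded for a different target measure, which is the "plug-in" property the abstract emphasizes.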

  9. 3D Modeling as Method for Construction and Analysis of Graphic Objects

    NASA Astrophysics Data System (ADS)

    Kheyfets, A. L.; Vasilieva, V. N.

    2017-11-01

    The use of 3D modeling when constructing and analyzing perspective projections and shadows is considered, and the creation of photorealistic images is shown. The perspective of a construction project and the characterization of its image are given as an example. The authors consider the construction of a dynamic block as a means of graphic information storage and automation of geometric constructions, demonstrated on the example of creating a truss node. The constructions are considered as applied to the AutoCAD software. The paper is aimed at improving the graphic methods of architectural design and improving the educational process when training Bachelor's degree students majoring in construction.

  10. Modified surface testing method for large convex aspheric surfaces based on diffraction optics.

    PubMed

    Zhang, Haidong; Wang, Xiaokun; Xue, Donglin; Zhang, Xuejun

    2017-12-01

    Large convex aspheric optical elements have been widely applied in advanced optical systems, which have presented a challenging metrology problem. Conventional testing methods cannot satisfy the demand gradually with the change of definition of "large." A modified method is proposed in this paper, which utilizes a relatively small computer-generated hologram and an illumination lens with certain feasibility to measure the large convex aspherics. Two example systems are designed to demonstrate the applicability, and also, the sensitivity of this configuration is analyzed, which proves the accuracy of the configuration can be better than 6 nm with careful alignment and calibration of the illumination lens in advance. Design examples and analysis show that this configuration is applicable to measure the large convex aspheric surfaces.

  11. Recyclable organic solar cells on substrates comprising cellulose nanocrystals (CNC)

    DOEpatents

    Kippelen, Bernard; Fuentes-Hernandez, Canek; Zhou, Yinhua; Moon, Robert; Youngblood, Jeffrey P

    2015-12-01

    Recyclable organic solar cells are disclosed herein. Systems and methods are further disclosed for producing, improving performance, and for recycling the solar cells. In certain example embodiments, the recyclable organic solar cells disclosed herein include: a first electrode; a second electrode; a photoactive layer disposed between the first electrode and the second electrode; an interlayer comprising a Lewis basic oligomer or polymer disposed between the photoactive layer and at least a portion of the first electrode or the second electrode; and a substrate disposed adjacent to the first electrode or the second electrode. The interlayer reduces the work function associated with the first or second electrode. In certain example embodiments, the substrate comprises cellulose nanocrystals that can be recycled. In certain example embodiments, one or more of the first electrode, the photoactive layer, and the second electrode may be applied by a film transfer lamination method.

  12. Examples of measurement uncertainty evaluations in accordance with the revised GUM

    NASA Astrophysics Data System (ADS)

    Runje, B.; Horvatic, A.; Alar, V.; Medic, S.; Bosnjakovic, A.

    2016-11-01

    The paper presents examples of the evaluation of uncertainty components in accordance with the current and revised Guide to the expression of uncertainty in measurement (GUM). In accordance with the proposed revision of the GUM, a Bayesian approach was conducted for both type A and type B evaluations. The law of propagation of uncertainty (LPU) and the law of propagation of distributions, applied through the Monte Carlo method (MCM), were used to evaluate the associated standard uncertainties, expanded uncertainties and coverage intervals. Furthermore, the influence of a non-Gaussian dominant input quantity and an asymmetric distribution of the output quantity y on the evaluation of measurement uncertainty was analyzed. When the coverage interval is not probabilistically symmetric, the coverage interval for the probability P is estimated from the experimental probability density function using the Monte Carlo method. Key highlights of the proposed revision of the GUM were analyzed through a set of examples.
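
    The MCM evaluation described in this record can be sketched for a toy measurement model Y = X1·X2 with one Gaussian and one rectangular (non-Gaussian) input; the model and its values are the editor's assumptions, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 200_000  # Monte Carlo trials

# Input quantities: X1 Gaussian, X2 rectangular (non-Gaussian)
x1 = rng.normal(10.0, 0.1, m)
x2 = rng.uniform(1.9, 2.1, m)
y = x1 * x2  # measurement model Y = X1 * X2

y_mean = y.mean()
u_y = y.std(ddof=1)                      # standard uncertainty from MCM
lo, hi = np.percentile(y, [2.5, 97.5])   # probabilistically symmetric 95% interval

# LPU comparison: u^2 = (x2*u1)^2 + (x1*u2)^2, with u2 = 0.2/sqrt(12)
u_lpu = np.hypot(2.0 * 0.1, 10.0 * 0.2 / np.sqrt(12))
print(y_mean, u_y, u_lpu, (lo, hi))
```

For this nearly linear model the MCM and LPU standard uncertainties agree closely; the MCM coverage interval, read off the empirical distribution, is the estimate the record describes for asymmetric cases.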

  13. Recent archaeomagnetic studies in Slovakia: Comparison of methodological approaches

    NASA Astrophysics Data System (ADS)

    Kubišová, Lenka

    2016-03-01

    We review the recent archaeomagnetic studies carried out on the territory of Slovakia, focusing on a comparison of methodological approaches and discussing the pros and cons of the individual applied methods from the perspective of our experience. The most widely used methods for determining the intensity and direction of the archaeomagnetic field by demagnetisation of the sample material are alternating field (AF) demagnetisation and the Thellier double-heating method. These methods are used not only for archaeomagnetic studies but also help to solve some geological problems. The two methods were applied to samples collected recently at several sites in Slovakia, where archaeological prospection prompted by earthwork or reconstruction work of development projects demanded archaeomagnetic dating. We then discuss the advantages and weaknesses of the investigated methods from different perspectives, based on several examples and our recent experience.

  14. Mathematical modeling and simulation of aquatic and aerial animal locomotion

    NASA Astrophysics Data System (ADS)

    Hou, T. Y.; Stredie, V. G.; Wu, T. Y.

    2007-08-01

    In this paper, we investigate the locomotion of fish and birds by applying a new unsteady, flexible wing theory that takes into account the strong nonlinear dynamics semi-analytically. We also make an extensive comparative study between the new approach and a modified vortex blob method inspired by Chorin's and Krasny's work. We first implement the modified vortex blob method for two examples and then discuss the numerical implementation of Wu's nonlinear analytical mathematical model. We demonstrate that Wu's method can capture the nonlinear effects very well by applying it to specific cases and comparing with the available experiments. In particular, we apply Wu's method to analyze Wagner's result for a wing abruptly undergoing an increase in incidence angle. Moreover, we study the vorticity generated by a wing in heaving, pitching and bending motion. In both cases, we show that the new method can accurately represent the vortex structure behind a flying wing and its influence on the bound vortex sheet on the wing.

  15. [Algorithm for estimating chlorophyll-a concentration in case II water body based on bio-optical model].

    PubMed

    Yang, Wei; Chen, Jin; Mausushita, Bunki

    2009-01-01

    In the present study, a novel retrieval method for estimating chlorophyll-a concentration in case II waters based on a bio-optical model was proposed and tested with data measured in the laboratory. A series of reflectance spectra, for which the concentration of each sample constituent (for example chlorophyll-a and NPSS) was obtained from accurate experiments, were used to calculate the absorption and backscattering coefficients of the constituents of the case II waters. A non-negative least squares method was then applied to calculate the concentrations of chlorophyll-a and non-phytoplankton suspended sediments (NPSS). Green algae were first collected from Lake Kasumigaura in Japan and then cultured in the laboratory. The reflectance spectra of waters with different amounts of phytoplankton and NPSS were measured in a dark room using a FieldSpec Pro VNIR (Analytical Spectral Devices Inc., Boulder, CO, USA). To validate whether this method can be applied to multispectral data (for example Landsat TM), the spectra measured in the laboratory were resampled to Landsat TM bands 1, 2, 3 and 4. Different combinations of TM bands were compared to derive the most appropriate wavelengths for detecting chlorophyll-a in case II water for green algae. The results indicated that the combination of TM bands 2, 3 and 4 achieved much better accuracy than other combinations, and the estimated concentration of chlorophyll-a was significantly more accurate than that from empirical methods. This method is expected to be directly applicable to real remotely sensed images because it is based on a bio-optical model.
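
    The non-negative least squares unmixing step can be sketched with multiplicative updates, a simple NNLS solver chosen by the editor (the record does not specify which solver was used). The two-constituent "spectra" below are invented for illustration only.

```python
import numpy as np

def nnls_multiplicative(a, b, iters=2000):
    """Non-negative least squares min ||a x - b||, x >= 0, via
    Lee-Seung-style multiplicative updates (a, b assumed non-negative)."""
    x = np.ones(a.shape[1])
    atb, ata = a.T @ b, a.T @ a
    for _ in range(iters):
        x *= atb / (ata @ x + 1e-12)  # multiplicative update keeps x >= 0
    return x

# Hypothetical optical signatures of two constituents (e.g. chl-a and NPSS)
# sampled at four bands, mixed with known concentrations
a = np.array([[0.9, 0.1],
              [0.7, 0.3],
              [0.2, 0.8],
              [0.1, 0.6]])
x_true = np.array([2.0, 5.0])
b = a @ x_true
x_est = nnls_multiplicative(a, b)
print(x_est)
```

With more bands than constituents, as in the TM-band combinations the study compares, the system is overdetermined and the non-negativity constraint rules out physically meaningless negative concentrations.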

  16. A comparative study of Conroy and Monte Carlo methods applied to multiple quadratures and multiple scattering

    NASA Technical Reports Server (NTRS)

    Deepak, A.; Fluellen, A.

    1978-01-01

    An efficient numerical method of multiple quadratures, the Conroy method, is applied to the problem of computing multiple scattering contributions in the radiative transfer through realistic planetary atmospheres. A brief error analysis of the method is given and comparisons are drawn with the more familiar Monte Carlo method. Both methods are stochastic problem-solving models of a physical or mathematical process and utilize the sampling scheme for points distributed over a definite region. In the Monte Carlo scheme the sample points are distributed randomly over the integration region. In the Conroy method, the sample points are distributed systematically, such that the point distribution forms a unique, closed, symmetrical pattern which effectively fills the region of the multidimensional integration. The methods are illustrated by two simple examples: one, of multidimensional integration involving two independent variables, and the other, of computing the second order scattering contribution to the sky radiance.
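
    The contrast between random and systematic sample points can be sketched on a two-variable integral with a known answer. This is an editor's illustration: the Conroy point pattern itself is not reproduced here, and a midpoint grid stands in for a systematic scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

def mc_integrate_2d(f, n):
    """Monte Carlo estimate of the integral of f over the unit square,
    using randomly distributed sample points."""
    pts = rng.random((n, 2))
    return f(pts[:, 0], pts[:, 1]).mean()

def grid_integrate_2d(f, n_side):
    """Systematic alternative: midpoint rule on a regular grid."""
    u = (np.arange(n_side) + 0.5) / n_side
    gx, gy = np.meshgrid(u, u)
    return f(gx, gy).mean()

# Example integrand with known integral: int_0^1 int_0^1 x*y dx dy = 1/4
est = mc_integrate_2d(lambda x, y: x * y, 100_000)
grid_est = grid_integrate_2d(lambda x, y: x * y, 100)
print(est, grid_est)
```

The random estimate carries O(1/sqrt(n)) statistical noise, whereas the systematic points give a deterministic answer, mirroring the Monte Carlo versus Conroy comparison in the record.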

  17. Fracture control methods for space vehicles. Volume 1: Fracture control design methods. [for space shuttle configuration planning

    NASA Technical Reports Server (NTRS)

    Liu, A. F.

    1974-01-01

    A systematic approach for applying methods for fracture control in the structural components of space vehicles consists of four major steps. The first step is to define the primary load-carrying structural elements and the type of load, environment, and design stress levels acting upon them. The second step is to identify the potential fracture-critical parts by means of a selection logic flow diagram. The third step is to evaluate the safe-life and fail-safe capabilities of the specified part. The last step in the sequence is to apply the control procedures that will prevent damage to the fracture-critical parts. The fracture control methods discussed include fatigue design and analysis methods, methods for preventing crack-like defects, fracture mechanics analysis methods, and nondestructive evaluation methods. An example problem is presented for evaluation of the safe-crack-growth capability of the space shuttle crew compartment skin structure.

  18. Simulation methods to estimate design power: an overview for applied research.

    PubMed

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
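
    The simulation approach the authors describe, simulating many trials under the design, testing each, and reporting the rejection fraction, can be sketched for the simple two-arm case. This is an editor's sketch using a large-sample z-test approximation; the paper itself provides R and Stata code.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulated_power(n_per_arm, effect, sd, n_sims=2000, z_crit=1.96):
    """Power of a two-arm trial estimated by simulation: the fraction of
    simulated trials whose two-sample z-statistic exceeds z_crit."""
    hits = 0
    for _ in range(n_sims):
        ctrl = rng.normal(0.0, sd, n_per_arm)
        trt = rng.normal(effect, sd, n_per_arm)
        se = np.sqrt(ctrl.var(ddof=1) / n_per_arm + trt.var(ddof=1) / n_per_arm)
        if abs((trt.mean() - ctrl.mean()) / se) > z_crit:
            hits += 1
    return hits / n_sims

# Effect of 0.5 SD with 64 per arm: the analytic power is about 80%,
# so the simulated estimate should land close to that value
power = simulated_power(n_per_arm=64, effect=0.5, sd=1.0)
print(power)
```

The advantage the abstract stresses is that the data-generating loop can be made arbitrarily complex (clustering, missingness, non-normal outcomes) while the surrounding power calculation stays unchanged.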

  19. Testing for association with multiple traits in generalized estimation equations, with application to neuroimaging data.

    PubMed

    Zhang, Yiwei; Xu, Zhiyuan; Shen, Xiaotong; Pan, Wei

    2014-08-01

    There is an increasing need to develop and apply powerful statistical tests to detect associations between multiple traits and a single locus, as arise in neuroimaging genetics and other studies. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI), in addition to genome-wide single nucleotide polymorphisms (SNPs), thousands of neuroimaging and neuropsychological phenotypes have been collected as intermediate phenotypes for Alzheimer's disease. Although some classic methods like MANOVA and some newly proposed methods may be applied, they have their own limitations. For example, MANOVA cannot be applied to binary and other discrete traits. In addition, the relationships among these methods are not well understood. Importantly, since these tests are not data-adaptive, they may or may not be powerful, depending on the unknown association patterns among multiple traits and between multiple traits and a locus. In this paper we propose a class of data-adaptive weights and the corresponding weighted tests in the general framework of generalized estimating equations (GEE). A highly adaptive test is proposed to select the most powerful one from this class of weighted tests so that it can maintain high power across a wide range of situations. Our proposed tests are applicable to various types of traits with or without covariates. Importantly, we also analytically show relationships among some existing tests and our proposed tests, indicating that many existing tests are special cases of our proposed tests. Extensive simulation studies were conducted to compare and contrast the power properties of various existing and new methods. Finally, we applied the methods to an ADNI dataset to illustrate their performance. We conclude with a recommendation for the use of the GEE-based score test and our proposed adaptive test for their high and complementary performance. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Apparatus for loading shape memory gripper mechanisms

    DOEpatents

    Lee, Abraham P.; Benett, William J.; Schumann, Daniel L.; Krulevitch, Peter A.; Fitch, Joseph P.

    2001-01-01

    A method and apparatus for loading deposit material, such as an embolic coil, into a shape memory polymer (SMP) gripping/release mechanism. The apparatus enables the application of uniform pressure to secure a grip by the SMP mechanism on the deposit material via differential pressure between, for example, vacuum within the SMP mechanism and hydrostatic water pressure on the exterior of the SMP mechanism. The SMP tubing material of the mechanism is heated to above the glass transition temperature (Tg) while reshaping, and subsequently cooled to below Tg to freeze the shape. The heating and/or cooling may, for example, be provided by the same water applied for pressurization, or the heating can be applied by optical fibers packaged with the SMP mechanism for directing a laser beam thereunto. At a point of use, the deposit material is released from the SMP mechanism by reheating the SMP material to above the temperature Tg, whereby it returns to its initial shape. The reheating of the SMP material may be carried out by injecting heated fluid (water) through an associated catheter or by optical fibers and an associated beam of laser light, for example.

  1. Symplectic discretization for spectral element solution of Maxwell's equations

    NASA Astrophysics Data System (ADS)

    Zhao, Yanmin; Dai, Guidong; Tang, Yifa; Liu, Qinghuo

    2009-08-01

    Applying the spectral element method (SEM) based on the Gauss-Lobatto-Legendre (GLL) polynomial to discretize Maxwell's equations, we obtain a Poisson system or a Poisson system with at most a perturbation. For the system, we prove that any symplectic partitioned Runge-Kutta (PRK) method preserves the Poisson structure and its implied symplectic structure. Numerical examples show the high accuracy of SEM and the benefit of conserving energy due to the use of symplectic methods.
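    The energy benefit of symplectic integration can be seen on a minimal stand-in system, the harmonic oscillator with H = (p² + q²)/2: semi-implicit (symplectic) Euler keeps the energy error bounded, while explicit Euler drifts without limit. This is a generic one-degree-of-freedom illustration, not the paper's PRK/spectral-element discretization of Maxwell's equations.

```python
# Harmonic oscillator: dq/dt = p, dp/dt = -q, energy H = (p^2 + q^2)/2.
def explicit_euler(q, p, h, n):
    for _ in range(n):
        q, p = q + h * p, p - h * q   # both updates use the old state
    return q, p

def symplectic_euler(q, p, h, n):
    for _ in range(n):
        p = p - h * q   # kick with the current q
        q = q + h * p   # then drift with the *updated* p
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)

h, n = 0.01, 10_000   # integrate to t = 100
e0 = energy(1.0, 0.0)
qe, pe = explicit_euler(1.0, 0.0, h, n)
qs, ps = symplectic_euler(1.0, 0.0, h, n)
print(f"explicit Euler energy drift:   {energy(qe, pe) - e0:.3f}")
print(f"symplectic Euler energy drift: {energy(qs, ps) - e0:.5f}")
```

    The explicit scheme multiplies the energy by (1 + h²) every step, so the drift grows exponentially; the symplectic scheme's energy error stays of order h for all time.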

  2. Comparing Methods for UAV-Based Autonomous Surveillance

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Harris, Robert; Shafto, Michael

    2004-01-01

    We describe an approach to evaluating algorithmic and human performance in directing UAV-based surveillance. Its key elements are a decision-theoretic framework for measuring the utility of a surveillance schedule and an evaluation testbed consisting of 243 scenarios covering a well-defined space of possible missions. We apply this approach to two example UAV-based surveillance methods, a TSP-based algorithm and a human-directed approach, then compare them to identify general strengths and weaknesses of each method.

  3. Statistical Validation of Image Segmentation Quality Based on a Spatial Overlap Index1

    PubMed Central

    Zou, Kelly H.; Warfield, Simon K.; Bharatha, Aditya; Tempany, Clare M.C.; Kaus, Michael R.; Haker, Steven J.; Wells, William M.; Jolesz, Ferenc A.; Kikinis, Ron

    2005-01-01

    Rationale and Objectives To examine a statistical validation method based on the spatial overlap between two sets of segmentations of the same anatomy. Materials and Methods The Dice similarity coefficient (DSC) was used as a statistical validation metric to evaluate the performance of both the reproducibility of manual segmentations and the spatial overlap accuracy of automated probabilistic fractional segmentation of MR images, illustrated on two clinical examples. Example 1: 10 consecutive cases of prostate brachytherapy patients underwent both preoperative 1.5T and intraoperative 0.5T MR imaging. For each case, 5 repeated manual segmentations of the prostate peripheral zone were performed separately on preoperative and on intraoperative images. Example 2: A semi-automated probabilistic fractional segmentation algorithm was applied to MR imaging of 9 cases with 3 types of brain tumors. DSC values were computed and logit-transformed values were compared in the mean with the analysis of variance (ANOVA). Results Example 1: The mean DSCs of 0.883 (range, 0.876–0.893) with 1.5T preoperative MRI and 0.838 (range, 0.819–0.852) with 0.5T intraoperative MRI (P < .001) were within and at the margin of the range of good reproducibility, respectively. Example 2: Wide ranges of DSC were observed in brain tumor segmentations: Meningiomas (0.519–0.893), astrocytomas (0.487–0.972), and other mixed gliomas (0.490–0.899). Conclusion The DSC value is a simple and useful summary measure of spatial overlap, which can be applied to studies of reproducibility and accuracy in image segmentation. We observed generally satisfactory but variable validation results in two clinical applications. This metric may be adapted for similar validation tasks. PMID:14974593
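    The DSC itself is straightforward to compute: DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary segmentations represented as sets of voxel indices (toy data, not the study's MR images):

```python
def dice(a: set, b: set) -> float:
    """Dice similarity coefficient between two binary masks given as
    sets of voxel coordinates: 2|A&B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0  # two empty segmentations overlap trivially
    return 2.0 * len(a & b) / (len(a) + len(b))

# Toy 1-D "segmentations" sharing 3 voxels: DSC = 2*3 / (4 + 5).
manual = {3, 4, 5, 6}
auto = {4, 5, 6, 7, 8}
print(round(dice(manual, auto), 3))  # 0.667
```

    Values near 1 indicate strong spatial overlap; the paper treats DSC > 0.7 regions as good agreement for this kind of validation.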

  4. Topology optimization of thermal fluid flows with an adjoint Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Dugast, Florian; Favennec, Yann; Josset, Christophe; Fan, Yilin; Luo, Lingai

    2018-07-01

    This paper presents an adjoint Lattice Boltzmann Method (LBM) coupled with the Level-Set Method (LSM) for topology optimization of thermal fluid flows. The adjoint-state formulation implies discrete velocity directions in order to take into account the LBM boundary conditions. These boundary conditions are introduced at the beginning of the adjoint-state method as the LBM residuals, so that the adjoint-state boundary conditions can appear directly during the adjoint-state equation formulation. The proposed method is tested with 3 numerical examples concerning thermal fluid flows, but with different objectives: minimization of the mean temperature in the domain, maximization of the heat evacuated by the fluid, and maximization of the heat exchange with heated solid parts. This latter example, treated in several articles, is used to validate our method. In these optimization problems, a limitation of the maximal pressure drop and of the porosity (number of fluid elements) is also applied. The obtained results demonstrate that the method is robust and effective for solving topology optimization of thermal fluid flows.

  5. On Functional Module Detection in Metabolic Networks

    PubMed Central

    Koch, Ina; Ackermann, Jörg

    2013-01-01

    Functional modules of metabolic networks are essential for understanding the metabolism of an organism as a whole. With the vast amount of experimental data and the construction of complex and large-scale, often genome-wide, models, the computer-aided identification of functional modules becomes more and more important. Since steady states play a key role in biology, many methods have been developed in that context, for example, elementary flux modes, extreme pathways, transition invariants and place invariants. Metabolic networks can be studied also from the point of view of graph theory, and algorithms for graph decomposition have been applied for the identification of functional modules. A prominent and currently intensively discussed field of methods in graph theory addresses the Q-modularity. In this paper, we recall known concepts of module detection based on the steady-state assumption, focusing on transition invariants (elementary modes) and their computation as minimal solutions of systems of Diophantine equations. We present the Fourier-Motzkin algorithm in detail. Afterwards, we introduce the Q-modularity as an example for a useful non-steady-state method and its application to metabolic networks. To illustrate and discuss the concepts of invariants and Q-modularity, we use part of the central carbon metabolism in potato tubers (Solanum tuberosum) as a running example. The intention of the paper is to give a compact presentation of known steady-state concepts from a graph-theoretical viewpoint in the context of network decomposition and reduction and to introduce the application of Q-modularity to metabolic Petri net models. PMID:24958145
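    The Q-modularity mentioned above has a compact per-community form: Q = Σ_c [e_c/m − (d_c/2m)²], where e_c is the number of intra-community edges, d_c the total degree of community c, and m the edge count. A minimal sketch on a toy graph (two triangles joined by a bridge, not the potato-tuber network):

```python
def modularity(edges, communities):
    """Newman-Girvan modularity Q = sum_c [ e_c/m - (d_c / 2m)^2 ]."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for comm in communities:
        inside = sum(1 for u, v in edges if u in comm and v in comm)
        d = sum(deg[node] for node in comm)
        q += inside / m - (d / (2 * m)) ** 2
    return q

# Two triangles joined by a single edge; the natural split recovers them.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(round(modularity(edges, [{0, 1, 2}, {3, 4, 5}]), 3))  # 0.357
```

    Putting all nodes in one community gives Q = 0, so the positive score for the two-triangle split reflects genuine modular structure.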

  6. Classicalization by phase space measurements

    NASA Astrophysics Data System (ADS)

    Bolaños, Marduk

    2018-05-01

    This article provides an illustration of the measurement approach to the quantum–classical transition suitable for beginning graduate students. As an example, we apply this framework to a quantum system with a general quadratic Hamiltonian, and obtain the exact solution of the dynamics for an arbitrary measurement strength using phase space methods.

  7. Eyetracking Methodology in SCMC: A Tool for Empowering Learning and Teaching

    ERIC Educational Resources Information Center

    Stickler, Ursula; Shi, Lijing

    2017-01-01

    Computer-assisted language learning, or CALL, is an interdisciplinary area of research, positioned between science and social science, computing and education, linguistics and applied linguistics. This paper argues that by appropriating methods originating in some areas of CALL-related research, for example human-computer interaction (HCI) or…

  8. Perspectives on Linguistic Documentation from Sociolinguistic Research on Dialects

    ERIC Educational Resources Information Center

    Tagliamonte, Sali A.

    2017-01-01

    The goal of the paper is to demonstrate how sociolinguistic research can be applied to endangered language documentation field linguistics. It first provides an overview of the techniques and practices of sociolinguistic fieldwork and the ensuing corpus compilation methods. The discussion is framed with examples from research projects focused on…

  9. The Value of Teaching Preparation during Doctoral Studies: An Example of a Teaching Practicum

    ERIC Educational Resources Information Center

    Edwards, Jeffrey D.; Powers, Joelle; Thompson, Aaron M.; Rutten-Turner, Elizabeth

    2014-01-01

    For doctoral students who seek faculty appointments in academic settings upon graduation, it is imperative those students have access to quality mentoring, direct instruction, and experiential opportunities to apply effective teaching methods during their training. Currently, some doctoral programs are beginning to develop teaching practicums…

  10. The Vroom and Yetton Normative Leadership Model Applied to Public School Case Examples.

    ERIC Educational Resources Information Center

    Sample, John

    This paper seeks to familiarize school administrators with the Vroom and Yetton Normative Leadership model by presenting its essential components and providing original case studies for its application to school settings. The five decision-making methods of the Vroom and Yetton model, including two "autocratic," two…

  11. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework

    PubMed Central

    Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-01-01

    Aim Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location Eastern North America (as an example). Methods Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. 
The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698

  12. A strategy to apply quantitative epistasis analysis on developmental traits.

    PubMed

    Labocha, Marta K; Yuan, Wang; Aleman-Meza, Boanerges; Zhong, Weiwei

    2017-05-15

    Genetic interactions are keys to understanding complex traits and evolution. Epistasis analysis is an effective method to map genetic interactions. Large-scale quantitative epistasis analysis has been well established for single cells. However, there is a substantial lack of such studies in multicellular organisms and their complex phenotypes such as development. Here we present a method to extend quantitative epistasis analysis to developmental traits. In the nematode Caenorhabditis elegans, we applied RNA interference on mutants to inactivate two genes, used an imaging system to quantitatively measure phenotypes, and developed a set of statistical methods to extract genetic interactions from the phenotypic measurements. Using two different C. elegans developmental phenotypes, body length and sex ratio, as examples, we showed that this method could accommodate various metazoan phenotypes with performance comparable to that of methods used in single-cell growth studies. Compared with qualitative observations, this quantitative epistasis method enabled detection of new interactions involving subtle phenotypes. For example, several sex-ratio genes were found to interact with brc-1 and brd-1, the orthologs of the human breast cancer genes BRCA1 and BARD1, respectively. We confirmed the brc-1 interactions with the following genes in DNA damage response: C34F6.1, him-3 (ortholog of HORMAD1, HORMAD2), sdc-1, and set-2 (ortholog of SETD1A, SETD1B, KMT2C, KMT2D), validating the effectiveness of our method in detecting genetic interactions. We developed a reliable, high-throughput method for quantitative epistasis analysis of developmental phenotypes.
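    Under the commonly used multiplicative model, an epistasis score is the deviation of the observed double-perturbation phenotype from the product of the single-perturbation phenotypes. A minimal sketch with hypothetical normalized body-length values (the paper's statistical pipeline is more involved):

```python
def epistasis(w_a, w_b, w_ab, w_wt=1.0):
    """Multiplicative-model epistasis score: how far the observed
    double-perturbation phenotype w_ab falls from the product of the
    single-perturbation phenotypes, all normalized to wild type w_wt."""
    expected = (w_a / w_wt) * (w_b / w_wt) * w_wt
    return w_ab - expected

# Hypothetical body-length measurements (fraction of wild type):
# singles reduce length to 0.8 and 0.9; the double comes out at 0.60,
# below the multiplicative expectation of 0.72 -> negative epistasis.
print(round(epistasis(0.8, 0.9, 0.60), 3))  # -0.12
```

    A score near zero means the two genes act independently under this model; a strongly negative (or positive) score flags a candidate genetic interaction.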

  13. Calculation of load distribution in stiffened cylindrical shells

    NASA Technical Reports Server (NTRS)

    Ebner, H; Koller, H

    1938-01-01

    Thin-walled shells with strong longitudinal and transverse stiffening (for example, stressed-skin fuselages and wings) may, under certain simplifying assumptions, be treated as static systems with finite redundancies. In this report the underlying basis for this method of treatment of the problem is presented and a computation procedure for stiffened cylindrical shells with curved sheet panels indicated. A detailed discussion of the force distribution due to applied concentrated forces is given, and the discussion illustrated by numerical examples which refer to an experimentally determined circular cylindrical shell.

  14. APT drug R&D: the right active ingredient in the right presentation for the right therapeutic use.

    PubMed

    Cavalla, David

    2009-11-01

    Drug repurposing, in which an established active pharmaceutical ingredient is applied in a new way - for example, for a new indication, and often combined with an alternative method of presentation, such as a novel delivery route - is an evolving strategy for pharmaceutical R&D. This article discusses examples of the success of this strategy, and presents an analysis of sales of US pharmaceutical products that suggests that this low-risk approach to new product development retains substantial commercial value.

  15. Quality assurance and management in microelectronics companies: ISO 9000 versus Six Sigma

    NASA Astrophysics Data System (ADS)

    Lupan, Razvan; Kobi, Abdessamad; Robledo, Christian; Bacivarov, Ioan; Bacivarov, Angelica

    2009-01-01

    A strategy for the implementation of the Six Sigma method as an improvement solution for the ISO 9000:2000 Quality Standard is proposed. Our approach is focused on integrating the DMAIC cycle of the Six Sigma method with the PDCA process approach, highly recommended by the standard ISO 9000:2000. The Six Sigma steps applied to each part of the PDCA cycle are presented in detail, giving some tools and training examples. Based on this analysis, the authors conclude that applying the Six Sigma philosophy to the Quality Standard implementation process is the best way to achieve optimal results in quality progress and therefore in customer satisfaction.

  16. APPLICATION OF FLOW SIMULATION FOR EVALUATION OF FILLING-ABILITY OF SELF-COMPACTING CONCRETE

    NASA Astrophysics Data System (ADS)

    Urano, Shinji; Nemoto, Hiroshi; Sakihara, Kohei

    In this paper, the MPS method was applied to fluid analysis of self-compacting concrete. The MPS method is one of the particle methods, and it is suitable for the simulation of moving-boundary or free-surface problems and large-deformation problems. The constitutive equation of self-compacting concrete is assumed to be a Bingham model. In order to investigate flow stoppage and flow speed of self-compacting concrete, numerical analysis examples of slump-flow and L-flow tests were performed. In addition, to verify the compactability of self-compacting concrete, numerical analysis examples of compaction at the CFT diaphragm were performed. As a result, it was found that the MPS method was suitable for the simulation of compaction of self-compacting concrete, and a sound appraisal was obtained by setting the flow-limit shear strain rate πc and the segregation limit point.

  17. Design of two-dimensional channels with prescribed velocity distributions along the channel walls

    NASA Technical Reports Server (NTRS)

    Stanitz, John D

    1953-01-01

    A general method of design is developed for two-dimensional unbranched channels with prescribed velocities as a function of arc length along the channel walls. The method is developed for both compressible and incompressible, irrotational, nonviscous flow and applies to the design of elbows, diffusers, nozzles, and so forth. In part I solutions are obtained by relaxation methods; in part II solutions are obtained by a Green's function. Five numerical examples are given in part I including three elbow designs with the same prescribed velocity as a function of arc length along the channel walls but with incompressible, linearized compressible, and compressible flow. One numerical example is presented in part II for an accelerating elbow with linearized compressible flow, and the time required for the solution by a Green's function in part II was considerably less than the time required for the same solution by relaxation methods in part I.

  18. On dynamical systems approaches and methods in f(R) cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alho, Artur; Carloni, Sante; Uggla, Claes, E-mail: aalho@math.ist.utl.pt, E-mail: sante.carloni@tecnico.ulisboa.pt, E-mail: claes.uggla@kau.se

    We discuss dynamical systems approaches and methods applied to flat Robertson-Walker models in f(R)-gravity. We argue that a complete description of the solution space of a model requires a global state space analysis that motivates globally covering state space adapted variables. This is shown explicitly by an illustrative example, f(R) = R + αR², α > 0, for which we introduce new regular dynamical systems on global compactly extended state spaces for the Jordan and Einstein frames. This example also allows us to illustrate several local and global dynamical systems techniques involving, e.g., blow-ups of nilpotent fixed points, center manifold analysis, averaging, and use of monotone functions. As a result of applying dynamical systems methods to globally state space adapted dynamical systems formulations, we obtain pictures of the entire solution spaces in both the Jordan and the Einstein frames. This shows, e.g., that due to the domain of the conformal transformation between the Jordan and Einstein frames, not all the solutions in the Jordan frame are completely contained in the Einstein frame. We also make comparisons with previous dynamical systems approaches to f(R) cosmology and discuss their advantages and disadvantages.

  19. A systematic and critical review on bioanalytical method validation using the example of simultaneous quantitation of antidiabetic agents in blood.

    PubMed

    Fachi, Mariana Millan; Leonart, Letícia Paula; Cerqueira, Letícia Bonancio; Pontes, Flavia Lada Degaut; de Campos, Michel Leandro; Pontarolo, Roberto

    2017-06-15

    A systematic and critical review was conducted on bioanalytical methods validated to quantify combinations of antidiabetic agents in human blood. The aim of this article was to verify how the validation process of bioanalytical methods is performed and the quality of the published records. The validation assays were evaluated according to international guidelines. The main problems in the validation process are pointed out and discussed to help researchers to choose methods that are truly reliable and can be successfully applied for their intended use. The combination of oral antidiabetic agents was chosen as these are some of the most studied drugs and several methods are present in the literature. Moreover, this article may be applied to the validation process of all bioanalytical methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Symplectic Quantization of a Vector-Tensor Gauge Theory with Topological Coupling

    NASA Astrophysics Data System (ADS)

    Barcelos-Neto, J.; Silva, M. B. D.

    We use the symplectic formalism to quantize a gauge theory where vector and tensor fields are coupled in a topological way. This is an example of a reducible theory, and a procedure akin to the ghosts-of-ghosts of the BFV method is applied, but in terms of Lagrange multipliers. Our final results are in agreement with the ones found in the literature by using the Dirac method.

  1. A simple method to predict regional fish abundance: an example in the McKenzie River Basin, Oregon

    Treesearch

    D.J. McGarvey; J.M. Johnston

    2011-01-01

    Regional assessments of fisheries resources are increasingly called for, but tools with which to perform them are limited. We present a simple method that can be used to estimate regional carrying capacity and apply it to the McKenzie River Basin, Oregon. First, we use a macroecological model to predict trout densities within small, medium, and large streams in the...

  2. Integrating Science and Engineering to Implement Evidence-Based Practices in Health Care Settings

    PubMed Central

    Wu, Shinyi; Duan, Naihua; Wisdom, Jennifer P.; Kravitz, Richard L.; Owen, Richard R.; Sullivan, Greer; Wu, Albert W.; Di Capua, Paul; Hoagwood, Kimberly Eaton

    2015-01-01

    Integrating two distinct and complementary paradigms, science and engineering, may produce more effective outcomes for the implementation of evidence-based practices in health care settings. Science formalizes and tests innovations, whereas engineering customizes and optimizes how the innovation is applied, tailoring it to accommodate local conditions. Together they may accelerate the creation of an evidence-based healthcare system that works effectively in specific health care settings. We give examples of applying engineering methods for better quality, more efficient, and safer implementation of clinical practices, medical devices, and health services systems. A specific example was applying systems engineering design that orchestrated people, process, data, decision-making, and communication through a technology application to implement evidence-based depression care among low-income patients with diabetes. We recommend that leading journals recognize the fundamental role of engineering in implementation research, to improve understanding of design elements that create a better fit between program elements and local context. PMID:25217100

  3. Reliability analysis of composite structures

    NASA Technical Reports Server (NTRS)

    Kan, Han-Pin

    1992-01-01

    A probabilistic static stress analysis methodology has been developed to estimate the reliability of a composite structure. Closed form stress analysis methods are the primary analytical tools used in this methodology. These structural mechanics methods are used to identify independent variables whose variations significantly affect the performance of the structure. Once these variables are identified, scatter in their values is evaluated and statistically characterized. The scatter in applied loads and the structural parameters are then fitted to appropriate probabilistic distribution functions. Numerical integration techniques are applied to compute the structural reliability. The predicted reliability accounts for scatter due to variability in material strength, applied load, fabrication and assembly processes. The influence of structural geometry and mode of failure are also considerations in the evaluation. Example problems are given to illustrate various levels of analytical complexity.
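    The numerical-integration step can be illustrated with the classic stress-strength interference model: when the applied load L and material strength S are both normal, the reliability P(S > L) has a closed form that a Monte Carlo check confirms. This is a generic sketch, not the report's composite-specific methodology, and all numbers are hypothetical.

```python
import math
import random

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def reliability_normal(mu_s, sd_s, mu_l, sd_l):
    """Stress-strength interference for normal strength S and load L:
    R = P(S > L) = Phi((mu_s - mu_l) / sqrt(sd_s^2 + sd_l^2))."""
    beta = (mu_s - mu_l) / math.sqrt(sd_s ** 2 + sd_l ** 2)
    return phi(beta)

# Hypothetical strength/load statistics (e.g. in MPa).
mu_s, sd_s, mu_l, sd_l = 100.0, 10.0, 60.0, 15.0
r_closed = reliability_normal(mu_s, sd_s, mu_l, sd_l)

# Monte Carlo cross-check of the closed form.
random.seed(0)
n = 200_000
hits = sum(random.gauss(mu_s, sd_s) > random.gauss(mu_l, sd_l) for _ in range(n))
print(round(r_closed, 4), round(hits / n, 3))
```

    For non-normal scatter (e.g. Weibull strength, as is common for composites) the closed form no longer applies, which is where the numerical integration described in the abstract comes in.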

  4. Reliability verification of vehicle speed estimate method in forensic videos.

    PubMed

    Kim, Jong-Hyuk; Oh, Won-Taek; Choi, Ji-Hun; Park, Jong-Chan

    2018-06-01

    In various types of traffic accidents, including car-to-car crashes, vehicle-pedestrian collisions, and hit-and-run accidents, driver overspeed is one of the critical issues of traffic accident analysis. Hence, analysis of vehicle speed at the moment of an accident is necessary. The present article proposes a vehicle speed estimate method (VSEM) that applies a virtual plane and a virtual reference line to a forensic video. The reliability of the VSEM was verified by comparing the results obtained by applying it to videos of a test vehicle against the vehicle's global positioning system (GPS)-based Vbox speed. The VSEM verified by these procedures was then applied to real traffic accident examples to evaluate its usability. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Parameter identification for structural dynamics based on interval analysis algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Chen; Lu, Zixing; Yang, Zhenyu; Liang, Ke

    2018-04-01

    A parameter identification method using an interval analysis algorithm for structural dynamics is presented in this paper. The proposed uncertain identification method is investigated by using the central difference method and an ARMA system. With the help of the fixed-memory least-squares method and the matrix inverse lemma, a set-membership identification technology is applied to obtain the best estimation of the identified parameters in a tight and accurate region. To cope with the lack of sufficient statistical description of the uncertain parameters, this paper treats uncertainties as non-probabilistic intervals. As long as we know the bounds of the uncertainties, this algorithm can obtain not only the center estimations of the parameters, but also the bounds of errors. To improve the efficiency of the proposed method, a time-saving algorithm is presented using a recursive formula. At last, to verify the accuracy of the proposed method, two numerical examples are presented and evaluated by three identification criteria.
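    The interval-arithmetic core of such set-membership methods can be sketched with a small class whose operations bound every combination of endpoint values. The stiffness and displacement numbers below are hypothetical:

```python
class Interval:
    """Closed interval [lo, hi] with the basic arithmetic of interval
    analysis: each result encloses every combination of operand values."""

    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtraction pairs opposite endpoints to stay conservative.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A parameter known only to its bounds propagates to a bounded estimate.
stiffness = Interval(9.5, 10.5)    # uncertain stiffness
displacement = Interval(1.9, 2.1)  # uncertain displacement
print(stiffness * displacement)    # bounded force estimate
```

    This is how the method reports both a center estimate (the interval midpoint) and guaranteed error bounds without any probabilistic assumption.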

  6. Resistance foil strain gage technology as applied to composite materials

    NASA Technical Reports Server (NTRS)

    Tuttle, M. E.; Brinson, H. F.

    1985-01-01

    Existing strain gage technologies as applied to orthotropic composite materials are reviewed. The bonding procedures, transverse sensitivity effects, errors due to gage misalignment, and temperature compensation methods are addressed. Numerical examples are included where appropriate. It is shown that the orthotropic behavior of composites can result in experimental error which would not be expected based on practical experience with isotropic materials. In certain cases, the transverse sensitivity of strain gages and/or slight gage misalignment can result in strain measurement errors.
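    The transverse-sensitivity correction mentioned above has a standard textbook form for a 0/90-degree gage pair. The sketch below uses the common simplified equations; the constants involved (e.g. the calibration-material Poisson ratio nu0) depend on the gage manufacturer's calibration data, so treat the values as illustrative assumptions.

```python
def correct_transverse(eps_x_meas, eps_y_meas, kt, nu0=0.285):
    """Correct 0/90-degree strain-gage readings for transverse
    sensitivity kt (simplified textbook form).  nu0 is the Poisson
    ratio of the calibration material (approximately 0.285 for the
    steel beam commonly used in gage calibration)."""
    c = (1.0 - nu0 * kt) / (1.0 - kt * kt)
    eps_x = c * (eps_x_meas - kt * eps_y_meas)
    eps_y = c * (eps_y_meas - kt * eps_x_meas)
    return eps_x, eps_y

# On a highly orthotropic laminate the transverse strain can dwarf the
# axial strain, so even a small kt shifts the axial reading noticeably.
ex, ey = correct_transverse(1000e-6, -3000e-6, kt=0.03)
print(round(ex * 1e6, 1), "corrected axial microstrain")
```

    With isotropic materials the ratio of transverse to axial strain is modest and this correction is often negligible, which is exactly the intuition the abstract warns against carrying over to composites.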

  7. Amphibian molecular ecology and how it has informed conservation.

    PubMed

    McCartney-Melstad, Evan; Shaffer, H Bradley

    2015-10-01

    Molecular ecology has become one of the key tools in the modern conservationist's kit. Here we review three areas where molecular ecology has been applied to amphibian conservation: genes on landscapes, within-population processes, and genes that matter. We summarize relevant analytical methods, recent important studies from the amphibian literature, and conservation implications for each section. Finally, we include five in-depth examples of how molecular ecology has been successfully applied to specific amphibian systems. © 2015 John Wiley & Sons Ltd.

  8. Approximated Stable Inversion for Nonlinear Systems with Nonhyperbolic Internal Dynamics. Revised

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1999-01-01

    A technique to achieve output tracking for nonminimum phase nonlinear systems with nonhyperbolic internal dynamics is presented. The present paper integrates stable inversion techniques (that achieve exact tracking) with approximation techniques (that modify the internal dynamics) to circumvent the nonhyperbolicity of the internal dynamics, which is an obstruction to applying presently available stable inversion techniques. The theory is developed for nonlinear systems and the method is applied to a two-cart with inverted-pendulum example.

  9. Applications of rule-induction in the derivation of quantitative structure-activity relationships.

    PubMed

    A-Razzak, M; Glen, R C

    1992-08-01

    Recently, methods have been developed in the field of Artificial Intelligence (AI), specifically in the expert systems area using rule-induction, designed to extract rules from data. We have applied these methods to the analysis of molecular series with the objective of generating rules which are predictive and reliable. The input to rule-induction consists of a number of examples with known outcomes (a training set) and the output is a tree-structured series of rules. Unlike most other analysis methods, the results of the analysis are in the form of simple statements which can be easily interpreted. These are readily applied to new data giving both a classification and a probability of correctness. Rule-induction has been applied to in-house generated and published QSAR datasets and the methodology, application and results of these analyses are discussed. The results imply that in some cases it would be advantageous to use rule-induction as a complementary technique in addition to conventional statistical and pattern-recognition methods.
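    The core of rule induction, choosing the attribute whose split best separates the outcomes, can be sketched as a single ID3-style information-gain step. The molecular descriptors and activity labels below are hypothetical, not the paper's QSAR data:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def induce_rule(examples, outcomes):
    """Pick the attribute whose split yields the largest information
    gain: the first step of building an ID3-style tree of rules."""
    base = entropy(outcomes)
    best = None
    for attr in examples[0]:
        values = {ex[attr] for ex in examples}
        remainder = 0.0
        for v in values:
            sub = [o for ex, o in zip(examples, outcomes) if ex[attr] == v]
            remainder += len(sub) / len(outcomes) * entropy(sub)
        gain = base - remainder
        if best is None or gain > best[1]:
            best = (attr, gain)
    return best

# Hypothetical training set: two descriptors vs. an activity class.
mols = [{"logP": "high", "rings": 2}, {"logP": "high", "rings": 3},
        {"logP": "low", "rings": 2}, {"logP": "low", "rings": 3}]
activity = ["active", "active", "inactive", "inactive"]
print(induce_rule(mols, activity))  # ('logP', 1.0): logP alone separates the classes
```

    Recursing on each branch of the chosen split, until the leaves are pure, produces exactly the tree-structured series of rules the abstract describes.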

  10. Enhanced Molecular Dynamics Methods Applied to Drug Design Projects.

    PubMed

    Ziada, Sonia; Braka, Abdennour; Diharce, Julien; Aci-Sèche, Samia; Bonnet, Pascal

    2018-01-01

    Nobel Laureate Richard P. Feynman stated: "[…] everything that living things do can be understood in terms of jiggling and wiggling of atoms […]." The importance of computer simulations of macromolecules, which use classical mechanics principles to describe atom behavior, is widely acknowledged, and nowadays they are applied in many fields such as material sciences and drug discovery. With the increase of computing power, molecular dynamics simulations can be applied to understand biological mechanisms at realistic timescales. In this chapter, we share our computational experience, providing a global view of two widely used enhanced molecular dynamics methods to study protein structure and dynamics through a description of their characteristics and limits, and we provide some examples of their applications in drug design. We also discuss the appropriate choice of software and hardware. In a detailed practical procedure, we describe how to set up, run, and analyze two main molecular dynamics methods, the umbrella sampling (US) and accelerated molecular dynamics (aMD) methods.

  11. A refined method for multivariate meta-analysis and meta-regression

    PubMed Central

    Jackson, Daniel; Riley, Richard D

    2014-01-01

    Making inferences about the average treatment effect using the random effects model for meta-analysis is problematic in the common situation where there is a small number of studies. This is because estimates of the between-study variance are not precise enough to accurately apply the conventional methods for testing and deriving a confidence interval for the average effect. We have found that a refined method for univariate meta-analysis, which applies a scaling factor to the estimated effects’ standard error, provides more accurate inference. We explain how to extend this method to the multivariate scenario and show that our proposal for refined multivariate meta-analysis and meta-regression can provide more accurate inferences than the more conventional approach. We explain how our proposed approach can be implemented using standard output from multivariate meta-analysis software packages and apply our methodology to two real examples. © 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd. PMID:23996351

  12. Applications of rule-induction in the derivation of quantitative structure-activity relationships

    NASA Astrophysics Data System (ADS)

    A-Razzak, Mohammed; Glen, Robert C.

    1992-08-01

    Recently, methods have been developed in the field of Artificial Intelligence (AI), specifically in the expert systems area using rule-induction, designed to extract rules from data. We have applied these methods to the analysis of molecular series with the objective of generating rules which are predictive and reliable. The input to rule-induction consists of a number of examples with known outcomes (a training set) and the output is a tree-structured series of rules. Unlike most other analysis methods, the results of the analysis are in the form of simple statements which can be easily interpreted. These are readily applied to new data giving both a classification and a probability of correctness. Rule-induction has been applied to in-house generated and published QSAR datasets and the methodology, application and results of these analyses are discussed. The results imply that in some cases it would be advantageous to use rule-induction as a complementary technique in addition to conventional statistical and pattern-recognition methods.

  13. Harmony search method: theory and applications.

    PubMed

    Gao, X Z; Govindasamy, V; Xu, H; Wang, X; Zenger, K

    2015-01-01

    The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As a case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem.
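Since the record above only names the algorithm, here is a minimal sketch of the *basic* Harmony Search loop (memory consideration, pitch adjustment, random selection), not the paper's Pareto-ranking variant; the test function and all parameter values are illustrative assumptions.

```python
# Minimal sketch of basic Harmony Search minimizing the 2-D sphere function.
# Parameters (hms, hmcr, par, bw) are common textbook defaults, not the paper's.
import random

def harmony_search(f, dim, lo, hi, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=42):
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    fitness = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:              # memory consideration
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:           # pitch adjustment
                    x += rng.uniform(-bw, bw)
            else:                                # random selection
                x = rng.uniform(lo, hi)
            new.append(min(max(x, lo), hi))      # keep within bounds
        fx = f(new)
        worst = max(range(hms), key=lambda i: fitness[i])
        if fx < fitness[worst]:                  # replace worst harmony
            memory[worst], fitness[worst] = new, fx
    best = min(range(hms), key=lambda i: fitness[i])
    return memory[best], fitness[best]

sphere = lambda x: sum(v * v for v in x)
best_x, best_f = harmony_search(sphere, dim=2, lo=-5.0, hi=5.0)
```

With the fixed seed, the best harmony converges close to the optimum at the origin.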

  14. Multifractal detrended cross-correlation analysis for two nonstationary signals.

    PubMed

    Zhou, Wei-Xing

    2008-06-01

    We propose a method called multifractal detrended cross-correlation analysis to investigate the multifractal behaviors in the power-law cross-correlations between two time series or higher-dimensional quantities recorded simultaneously, which can be applied to diverse complex systems such as turbulence, finance, ecology, physiology, geophysics, and so on. The method is validated with cross-correlated one- and two-dimensional binomial measures and multifractal random walks. As an example, we illustrate the method by analyzing two financial time series.
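For readers unfamiliar with the technique, here is a minimal sketch of the q = 2 (non-multifractal) detrended cross-correlation building block that MF-DCCA generalizes; the synthetic data and all parameter choices are illustrative assumptions, not from the paper.

```python
# Sketch of detrended cross-correlation analysis (DCCA, the q = 2 case):
# integrate both series into profiles, linearly detrend them in windows of
# size s, and average the covariance of the residuals.
import random

def _profile(x):
    mean = sum(x) / len(x)
    out, acc = [], 0.0
    for v in x:
        acc += v - mean
        out.append(acc)
    return out

def _detrend_resid(seg):
    # residuals of a least-squares linear fit over indices 0..len(seg)-1
    n = len(seg)
    mx = (n - 1) / 2.0
    my = sum(seg) / n
    sxx = sum((i - mx) ** 2 for i in range(n))
    b = sum((i - mx) * (y - my) for i, y in enumerate(seg)) / sxx
    a = my - b * mx
    return [y - (a + b * i) for i, y in enumerate(seg)]

def dcca_fluctuation(x, y, s):
    X, Y = _profile(x), _profile(y)
    covs = []
    for start in range(0, len(X) - s + 1, s):
        rx = _detrend_resid(X[start:start + s])
        ry = _detrend_resid(Y[start:start + s])
        covs.append(sum(a * b for a, b in zip(rx, ry)) / s)
    return (sum(abs(c) for c in covs) / len(covs)) ** 0.5

rng = random.Random(0)
shocks = [rng.gauss(0, 1) for _ in range(4000)]
x = shocks                                  # two series sharing the same shocks
y = [v + 0.5 * rng.gauss(0, 1) for v in shocks]
f_small = dcca_fluctuation(x, y, 8)
f_large = dcca_fluctuation(x, y, 64)
```

The slope of log F(s) versus log s estimates the cross-correlation scaling exponent; MF-DCCA repeats this with a family of q-th order averages to probe multifractality.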

  15. Centrifuge-operated specimen staining method and apparatus

    NASA Technical Reports Server (NTRS)

    Feeback, Daniel L. (Inventor); Clarke, Mark S. F. (Inventor)

    1999-01-01

    A method of staining preselected, mounted specimens of either biological or nonbiological material enclosed within a staining chamber, where the liquid staining reagents are applied to and removed from the staining chamber using hypergravity as the propelling force. In the preferred embodiment, a spacecraft-operated centrifuge and a method of diagnosing biological specimens while in orbit are characterized by a hermetically sealed shell assembly. The assembly contains slide-staining apparatus with computer control therefor, the operative effect of which is to overcome microgravity, for example on board the International Space Station.

  16. Method for calculating the rolling and yawing moments due to rolling for unswept wings with or without flaps or ailerons by use of nonlinear section lift data

    NASA Technical Reports Server (NTRS)

    Martina, Albert P

    1953-01-01

    The methods of NACA Reports 865 and 1090 have been applied to the calculation of the rolling- and yawing-moment coefficients due to rolling for unswept wings with or without flaps or ailerons. The methods allow the use of nonlinear section lift data together with lifting-line theory. Two calculated examples are presented in simplified computing forms in order to illustrate the procedures involved.

  17. The most precise computations using Euler's method in standard floating-point arithmetic applied to modelling of biological systems.

    PubMed

    Kalinina, Elizabeth A

    2013-08-01

    The explicit Euler's method is known to be very easy and effective in implementation for many applications. This article extends results previously obtained for the systems of linear differential equations with constant coefficients to arbitrary systems of ordinary differential equations. Optimal (providing minimum total error) step size is calculated at each step of Euler's method. Several examples of solving stiff systems are included. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
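As context for the record above, a minimal sketch of the plain explicit Euler scheme it builds on (the paper's contribution, the per-step optimal step size, is not reproduced here):

```python
# Explicit Euler method: y_{n+1} = y_n + h * f(t_n, y_n).
import math

def euler(f, y0, t0, t1, n_steps):
    """Integrate y' = f(t, y) from t0 to t1 with n_steps explicit Euler steps."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t += h
    return y

# Test problem y' = -y, y(0) = 1; exact solution y(1) = e^{-1}.
f = lambda t, y: -y
exact = math.exp(-1.0)
err_coarse = abs(euler(f, 1.0, 0.0, 1.0, 10) - exact)
err_fine = abs(euler(f, 1.0, 0.0, 1.0, 1000) - exact)
# Euler is first order: halving the step size roughly halves the global error.
```

The paper's point is that choosing h at each step to balance truncation against floating-point rounding error minimizes the total error, which matters for stiff systems like the biological models it targets.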

  18. Implementing a method of screening one-well hydraulic barrier design alternatives.

    PubMed

    Rubin, Hillel; Shoemaker, Christine A; Köngeter, Jürgen

    2009-01-01

    This article provides details of applying the method developed by the authors (Rubin et al. 2008b) for screening one-well hydraulic barrier design alternatives. The present article, with its supporting information (a manual and electronic spreadsheets with a case history example), provides the reader with complete details and examples of solving the set of nonlinear equations developed by Rubin et al. (2008b). It allows proper use of the analytical solutions and also reproduction of the various charts given by Rubin et al. (2008b). The final outputs of the calculations are the required position and discharge of the pumping well. If the contaminant source is nonaqueous phase liquid (NAPL) entrapped within the aquifer, then the method also provides, as a by-product, an estimate of the aquifer remediation progress due to operating the hydraulic barrier.

  19. A modified multi-objective particle swarm optimization approach and its application to the design of a deepwater composite riser

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Chen, J.

    2017-09-01

    A modified multi-objective particle swarm optimization method is proposed for obtaining Pareto-optimal solutions effectively. Different from traditional multi-objective particle swarm optimization methods, Kriging meta-models and the trapezoid index are introduced and integrated with the traditional one. Kriging meta-models are built to match expensive or black-box functions. By applying Kriging meta-models, function evaluation numbers are decreased and the boundary Pareto-optimal solutions are identified rapidly. For bi-objective optimization problems, the trapezoid index is calculated as the sum of the trapezoid's area formed by the Pareto-optimal solutions and one objective axis. It can serve as a measure of whether the Pareto-optimal solutions converge to the Pareto front. Illustrative examples indicate that to obtain Pareto-optimal solutions, the method proposed needs fewer function evaluations than the traditional multi-objective particle swarm optimization method and the non-dominated sorting genetic algorithm II method, and both the accuracy and the computational efficiency are improved. The proposed method is also applied to the design of a deepwater composite riser example in which the structural performances are calculated by numerical analysis. The design aim was to enhance the tensile strength and minimize the cost. Under the buckling constraint, the optimal trade-off of tensile strength and material volume is obtained. The results demonstrated that the proposed method can effectively deal with multi-objective optimizations with black-box functions.

  20. Direct detection of metal-insulator phase transitions using the modified Backus-Gilbert method

    NASA Astrophysics Data System (ADS)

    Ulybyshev, Maksim; Winterowd, Christopher; Zafeiropoulos, Savvas

    2018-03-01

    The detection of the (semi)metal-insulator phase transition can be extremely difficult if the local order parameter which characterizes the ordered phase is unknown. In some cases, it is even impossible to define a local order parameter: the most prominent example of such a system is the spin liquid state. This state was proposed to exist in the Hubbard model on the hexagonal lattice in a region between the semimetal phase and the antiferromagnetic insulator phase. The existence of this phase has been the subject of a long debate. In order to detect these exotic phases we must use alternative methods to those used for more familiar examples of spontaneous symmetry breaking. We have modified the Backus-Gilbert method of analytic continuation which was previously used in the calculation of the pion quasiparticle mass in lattice QCD. The modification of the method consists of the introduction of the Tikhonov regularization scheme which was used to treat the ill-conditioned kernel. This modified Backus-Gilbert method is applied to the Euclidean propagators in momentum space calculated using the hybrid Monte Carlo algorithm. In this way, it is possible to reconstruct the full dispersion relation and to estimate the mass gap, which is a direct signal of the transition to the insulating state. We demonstrate the utility of this method in our calculations for the Hubbard model on the hexagonal lattice. We also apply the method to the metal-insulator phase transition in the Hubbard-Coulomb model on the square lattice.

  1. Landslide early warning based on failure forecast models: the example of Mt. de La Saxe rockslide, northern Italy

    NASA Astrophysics Data System (ADS)

    Manconi, A.; Giordan, D.

    2015-02-01

    We investigate the use of landslide failure forecast models by exploiting near-real-time monitoring data. Starting from the inverse velocity theory, we analyze landslide surface displacements on different temporal windows, and apply straightforward statistical methods to obtain confidence intervals on the estimated time of failure. Here we describe the main concepts of our method, and show an example of application to a real emergency scenario, the La Saxe rockslide, Aosta Valley region, northern Italy. Based on the herein presented case study, we identify operational thresholds based on the reliability of the forecast models, in order to support the management of early warning systems in the most critical phases of the landslide emergency.
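The inverse-velocity idea mentioned above can be sketched as follows; this toy (an ordinary least-squares fit to 1/v with zero-crossing extrapolation) is a deliberately simplified stand-in for the authors' procedure, which also derives confidence intervals on the forecast.

```python
# Inverse-velocity failure forecasting (Fukuzono-style sketch): as a slope
# accelerates toward failure, 1/v(t) tends to fall linearly; the line's zero
# crossing estimates the time of failure t_f.

def predict_failure_time(times, velocities):
    inv_v = [1.0 / v for v in velocities]
    n = len(times)
    mean_t = sum(times) / n
    mean_y = sum(inv_v) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(times, inv_v))
             / sum((t - mean_t) ** 2 for t in times))
    intercept = mean_y - slope * mean_t
    return -intercept / slope  # time at which the fitted 1/v reaches zero

# Synthetic accelerating displacement: v(t) = 1/(a*(t_f - t)) with t_f = 100.
a, t_f = 0.01, 100.0
times = [float(t) for t in range(0, 90, 10)]
velocities = [1.0 / (a * (t_f - t)) for t in times]
t_pred = predict_failure_time(times, velocities)  # ≈ 100.0
```

On noisy monitoring data the fit would be repeated over different temporal windows, which is how the paper obtains a spread of forecasts and operational thresholds.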

  2. A method for diagnosing time dependent faults using model-based reasoning systems

    NASA Technical Reports Server (NTRS)

    Goodrich, Charles H.

    1995-01-01

    This paper explores techniques to apply model-based reasoning to equipment and systems which exhibit dynamic behavior (that which changes as a function of time). The model-based system of interest is KATE-C (Knowledge based Autonomous Test Engineer) which is a C++ based system designed to perform monitoring and diagnosis of Space Shuttle electro-mechanical systems. Methods of model-based monitoring and diagnosis are well known and have been thoroughly explored by others. A short example is given which illustrates the principle of model-based reasoning and reveals some limitations of static, non-time-dependent simulation. This example is then extended to demonstrate representation of time-dependent behavior and testing of fault hypotheses in that environment.

  3. Measurement methods of building structures deflections

    NASA Astrophysics Data System (ADS)

    Wróblewska, Magdalena

    2018-04-01

    Underground mining leads to surface deformations manifested, in particular, by sloping terrain. Structures situated on the deforming subsoil are subject to uneven subsidence, which in consequence leads to their deflection. Before a building rectification process takes place, e.g. by uneven raising, the structure's deflection direction and value are determined so that the remedial measures restore the structure to its vertical position. Deflection can be determined by applying classical as well as modern measurement techniques. The article presents examples of measurement methods used, considering the measured elements of building structures and field measurements. Moreover, for a given example of a mining area, the existing deflections of buildings were compared with the sloping of the mining terrain.

  4. Efficient simulation of intrinsic, extrinsic and external noise in biochemical systems.

    PubMed

    Pischel, Dennis; Sundmacher, Kai; Flassig, Robert J

    2017-07-15

    Biological cells operate in a noisy regime influenced by intrinsic, extrinsic and external noise, which leads to large differences between individual cell states. Stochastic effects must be taken into account to characterize biochemical kinetics accurately. Since the exact solution of the chemical master equation, which governs the underlying stochastic process, cannot be derived for most biochemical systems, approximate methods are used to obtain a solution. In this study, a method to efficiently simulate the various sources of noise simultaneously is proposed and benchmarked on several examples. The method relies on the combination of the sigma point approach to describe extrinsic and external variability and the τ-leaping algorithm to account for the stochasticity due to probabilistic reactions. The comparison of our method to extensive Monte Carlo calculations demonstrates an immense computational advantage while losing an acceptable amount of accuracy. Additionally, we show an application to parameter optimization problems in stochastic biochemical reaction networks, a task rarely attempted due to its huge computational burden. To give further insight, a MATLAB script is provided including the proposed method applied to a simple toy example of gene expression. MATLAB code is available at Bioinformatics online. flassig@mpi-magdeburg.mpg.de. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
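Of the two ingredients named above, only the τ-leaping part is easy to sketch in isolation; the following hedged toy applies it to a hypothetical birth-death gene-expression model (the sigma-point treatment of extrinsic/external noise is omitted, and all rate constants are invented for illustration).

```python
# τ-leaping sketch for a birth-death process: production at rate k,
# degradation at rate g*n. Each leap fires Poisson-distributed reaction counts.
import math
import random

def poisson(rng, lam):
    """Knuth's algorithm; adequate for the small means used here."""
    if lam <= 0.0:
        return 0
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def tau_leap_birth_death(k, g, n0, tau, steps, rng):
    n = n0
    for _ in range(steps):
        births = poisson(rng, k * tau)
        deaths = poisson(rng, g * n * tau)
        n = max(n + births - deaths, 0)   # molecule counts stay non-negative
    return n

rng = random.Random(1)
k, g = 10.0, 0.1                          # invented production/degradation rates
samples = [tau_leap_birth_death(k, g, n0=0, tau=0.05, steps=2000, rng=rng)
           for _ in range(200)]
mean_n = sum(samples) / len(samples)      # steady-state mean ≈ k/g = 100
```

The computational advantage over exact (Gillespie) simulation comes from firing many reactions per leap instead of one event at a time.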

  5. Scientific rigour in qualitative research--examples from a study of women's health in family practice.

    PubMed

    Hamberg, K; Johansson, E; Lindgren, G; Westman, G

    1994-06-01

    The increase in qualitative research in family medicine raises a demand for critical discussions about design, methods and conclusions. This article shows how scientific claims for truthful findings and neutrality can be assessed. Established concepts such as validity, reliability, objectivity and generalization cannot be used in qualitative research. Alternative criteria for scientific rigour, initially introduced by Lincoln and Guba, are presented: credibility, dependability, confirmability and transferability. These criteria have been applied to a research project, a qualitative study with in-depth interviews with female patients suffering from chronic pain in the locomotor system. The interview data were analysed on the basis of grounded theory. The proposed indicators for scientific rigour were shown to be useful when applied to the research project. Several examples are given. Difficulties in the use of the alternative criteria are also discussed.

  6. Free-energy landscapes from adaptively biased methods: Application to quantum systems

    NASA Astrophysics Data System (ADS)

    Calvo, F.

    2010-10-01

    Several parallel adaptive biasing methods are applied to the calculation of free-energy pathways along reaction coordinates, choosing as a difficult example the double-funnel landscape of the 38-atom Lennard-Jones cluster. In the case of classical statistics, the Wang-Landau and adaptively biased molecular-dynamics (ABMD) methods are both found efficient if multiple walkers and replication and deletion schemes are used. An extension of the ABMD technique to quantum systems, implemented through the path-integral MD framework, is presented and tested on Ne38 against the quantum superposition method.

  7. EMC analysis of MOS-1

    NASA Astrophysics Data System (ADS)

    Ishizawa, Y.; Abe, K.; Shirako, G.; Takai, T.; Kato, H.

    The electromagnetic compatibility (EMC) control method, system EMC analysis method, and system test method which have been applied to test the components of the MOS-1 satellite are described. The merits and demerits of the problem-solving, specification, and system approaches to EMC control are summarized, as are the data requirements of the SEMCAP (specification and electromagnetic compatibility analysis program) computer program for verifying the EMI safety margin of the components. Examples of EMC design are mentioned, and the EMC design process and selection method for EMC critical points are shown along with sample EMC test results.

  8. The Evaluation of Classroom Social Structure by Three-Way Multidimensional Scaling of Sociometric Data.

    ERIC Educational Resources Information Center

    Langeheine, Rolf

    1978-01-01

    A three-way multidimensional scaling model is presented as a method for identifying classroom cliques, by simultaneous analysis of three variables (for example, chooser/chosen/criteria). Two scaling models--Carroll and Chang's INDSCAL and Lingoes' PINDIS--are presented and applied to two sets of empirical data. (CP)

  9. Change Detection in Rough Time Series

    DTIC Science & Technology

    2014-09-01

    …distribution that can present significant challenges to conventional statistical tracking techniques. To address this problem the proposed method applies hybrid fuzzy statistical techniques to series granules instead of to individual measures. Three examples demonstrated the robust nature of the …

  10. Inside the Black Box: Revealing the Process in Applying a Grounded Theory Analysis

    ERIC Educational Resources Information Center

    Rich, Peter

    2012-01-01

    Qualitative research methods have long set an example of rich description, in which data and researchers' hermeneutics work together to inform readers of findings in specific contexts. Among published works, insight into the analytical process is most often represented in the form of methodological propositions or research results. This paper…

  11. Evaluation Methodology. The Evaluation Exchange. Volume 11, Number 2, Summer 2005

    ERIC Educational Resources Information Center

    Coffman, Julia, Ed.

    2005-01-01

    This is the third issue of "The Evaluation Exchange" devoted entirely to the theme of methodology, though every issue tries to identify new methodological choices, the instructive ways in which people have applied or combined different methods, and emerging methodological trends. For example, lately "theories of change" have gained almost…

  12. Order, topology and preference

    NASA Technical Reports Server (NTRS)

    Sertel, M. R.

    1971-01-01

    Some standard order-related and topological notions, facts, and methods are brought to bear on central topics in the theory of preference and the theory of optimization. Consequences of connectivity are considered, especially from the viewpoint of normally preordered spaces. Examples are given showing how the theory of preference, or utility theory, can be applied to social analysis.

  13. Evaluating the Use of Metaphor in Online Learning Environments

    ERIC Educational Resources Information Center

    Falconer, Liz

    2008-01-01

    Metaphor appears to be an innate tendency in human communication and can be shown to have significant potential when applied to the design of online learning environments. This paper describes and discusses an example of an online research methods learning resource that employs metaphoric navigation. Feedback from the tutors who design and…

  14. Dynamic Programming Method for Impulsive Control Problems

    ERIC Educational Resources Information Center

    Balkew, Teshome Mogessie

    2015-01-01

    In many control systems changes in the dynamics occur unexpectedly or are applied by a controller as needed. The time at which a controller implements changes is not necessarily known a priori. For example, many manufacturing systems and flight operations have complicated control systems, and changes in the control systems may be automatically…

  15. Estimation of Latent Group Effects: Psychometric Technical Report No. 2.

    ERIC Educational Resources Information Center

    Mislevy, Robert J.

    Conventional methods of multivariate normal analysis do not apply when the variables of interest are not observed directly, but must be inferred from fallible or incomplete data. For example, responses to mental test items may depend upon latent aptitude variables, which are modeled in turn as functions of demographic effects in the population. A…

  16. A Materials Index--Its Storage, Retrieval, and Display

    ERIC Educational Resources Information Center

    Rosen, Carol Z.

    1973-01-01

    An experimental procedure for indexing physical materials based on simple syntactical rules was tested by encoding the materials in the journal "Applied Physics Letters" to produce a materials index. The syntax and numerous examples, together with an indication of the method by which retrieval can be effected, are presented. (5 references)…

  17. A Guide to Computer Adaptive Testing Systems

    ERIC Educational Resources Information Center

    Davey, Tim

    2011-01-01

    Some brand names are used generically to describe an entire class of products that perform the same function. "Kleenex," "Xerox," "Thermos," and "Band-Aid" are good examples. The term "computerized adaptive testing" (CAT) is similar in that it is often applied uniformly across a diverse family of testing methods. Although the various members of…

  18. Applying Longitudinal Mean and Covariance Structures (LMACS) Analysis to Assess Construct Stability Over Two Time Points: An Example Using Psychological Entitlement

    ERIC Educational Resources Information Center

    Bashkov, Bozhidar M.; Finney, Sara J.

    2013-01-01

    Traditional methods of assessing construct stability are reviewed and longitudinal mean and covariance structures (LMACS) analysis, a modern approach, is didactically illustrated using psychological entitlement data. Measurement invariance and latent variable stability results are interpreted, emphasizing substantive implications for educators and…

  19. 20 CFR 404.1256 - Limitation on State's liability for contributions for multiple employment situations-for wages...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950... class or classes of employees to whose wages this method of computing contributions applies. For example... any class or classes of employees identified in an agreement or modification. In its notification, the...

  20. Applying Bayesian statistics to the study of psychological trauma: A suggestion for future research.

    PubMed

    Yalch, Matthew M

    2016-03-01

    Several contemporary researchers have noted the virtues of Bayesian methods of data analysis. Although debates continue about whether conventional or Bayesian statistics is the "better" approach for researchers in general, there are reasons why Bayesian methods may be well suited to the study of psychological trauma in particular. This article describes how Bayesian statistics offers practical solutions to the problems of data non-normality, small sample size, and missing data common in research on psychological trauma. After a discussion of these problems and the effects they have on trauma research, this article explains the basic philosophical and statistical foundations of Bayesian statistics and how it provides solutions to these problems using an applied example. Results of the literature review and the accompanying example indicate the utility of Bayesian statistics in addressing problems common in trauma research. Bayesian statistics provides a set of methodological tools and a broader philosophical framework that is useful for trauma researchers. Methodological resources are also provided so that interested readers can learn more. (c) 2016 APA, all rights reserved.
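As a concrete illustration of the kind of Bayesian updating the article advocates for small samples, here is a minimal conjugate beta-binomial sketch; the prior and the data are invented for illustration and do not come from the article.

```python
# Beta-binomial updating: with a Beta(a, b) prior on a success probability,
# observing s successes and f failures gives a Beta(a + s, b + f) posterior.

def beta_posterior(prior_a, prior_b, successes, failures):
    return prior_a + successes, prior_b + failures

# Hypothetical small sample (common in trauma research): 7 of 10 improve.
a, b = beta_posterior(1.0, 1.0, successes=7, failures=3)  # uniform prior
posterior_mean = a / (a + b)                              # 8/12 ≈ 0.667
```

Unlike a frequentist point estimate, the full Beta(8, 4) posterior directly expresses the remaining uncertainty, which is the practical advantage the article emphasizes for small, non-normal samples.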

  1. Standardizing the atomic description, axis and centre of biological ion channels.

    PubMed

    Kaats, Adrian J; Galiana, Henrietta L; Nadeau, Jay L

    2007-09-15

    A general representation of the atomic co-ordinates of a biological ion channel is obtained from a definition of channel axis and centre. Through rotation and translation of the channel, its centre becomes the origin of the standard co-ordinate system, and the channel axis becomes the system's z-axis. A method for determining the channel axis and centre based on the concepts of mass centre and mass moment of inertia is presented. The method for determining the channel axis can be directly applied to channels that adhere to two specific conditions regarding their geometry and mass distribution. Specific examples are given for Gramicidin A (GA), and the mammalian potassium channel Kv 1.2. For channels that do not adhere to these conditions, minor modifications of these procedures can be applied in determining the channel axis. Specific examples are given for the outer membrane bacterial porin OmpF, and for the staphylococcal pore-forming toxin alpha-hemolysin (alpha HL). The definitions and procedures presented are made in an effort to establish a standard basis for performing, sharing, and comparing computations in a consistent manner.

  2. Optimization of Immobilization of Nanodiamonds on Graphene

    NASA Astrophysics Data System (ADS)

    Pille, A.; Lange, S.; Utt, K.; Eltermann, M.

    2015-04-01

    We report using a simple dip-coating method to cover the surface of graphene with nanodiamonds for future optical detection of defects on graphene. The most important part of the immobilization process is the pre-functionalization of both the nanodiamond and graphene surfaces to obtain the selectivity of the method. This work focuses on an example of using electrostatic attraction to confine nanodiamonds to graphene. Raman spectroscopy, microluminescence imaging and scanning electron microscopy were applied to characterize the obtained samples.

  3. Seeking instructional specificity: An example from analogical instruction

    NASA Astrophysics Data System (ADS)

    Kuo, Eric; Wieman, Carl E.

    2015-12-01

    Broad instructional methods like "interactive engagement" have been shown to be effective, but such general characterization provides little guidance on the details of how to structure instructional materials. In this study, we seek instructional specificity by comparing two ways of using an analogy to learn a target physical principle: (i) applying the analogy to the target physical domain on a case-by-case basis and (ii) using the analogy to create a general rule in the target physical domain. In the discussion sections of a large, introductory physics course (N = 231), students who sought a general rule were better able to discover and apply a correct physics principle than students who analyzed the examples case by case. The difference persisted at a reduced level after subsequent direct instruction. We argue that students who performed case-by-case analyses were more likely to focus on idiosyncratic problem-specific features rather than the deep structural features. This study provides an example of investigations into how the specific structure of instructional materials can be consequential for what is learned.

  4. IMPLEMENTATION OF THE IMPROVED QUASI-STATIC METHOD IN RATTLESNAKE/MOOSE FOR TIME-DEPENDENT RADIATION TRANSPORT MODELLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zachary M. Prince; Jean C. Ragusa; Yaqi Wang

    Because of the recent interest in reactor transient modeling and the restart of the Transient Reactor Test (TREAT) Facility, there has been a need for more efficient, robust methods in computation frameworks. This is the impetus for implementing the Improved Quasi-Static method (IQS) in the RATTLESNAKE/MOOSE framework. IQS has been implemented with CFEM diffusion by factorizing the flux into a time-dependent amplitude and a spatially dependent, weakly time-dependent shape. The shape evaluation is very similar to a flux diffusion solve and is computed at large (macro) time steps, while the amplitude evaluation is a PRKE solve, whose parameters depend on the shape, computed at small (micro) time steps. IQS has been tested with a custom one-dimensional example and the TWIGL ramp benchmark. These examples prove it to be a viable and effective method for highly transient cases. More complex cases are intended to be applied to further test the method and its implementation.

  5. Formation and Control of Fluidic Species

    NASA Technical Reports Server (NTRS)

    Link, Darren Roy (Inventor); Marquez-Sanchez, Manuel (Inventor); Cheng, Zhengdong (Inventor); Weitz, David A. (Inventor)

    2015-01-01

    This invention generally relates to systems and methods for the formation and/or control of fluidic species, and articles produced by such systems and methods. In some cases, the invention involves unique fluid channels, systems, controls, and/or restrictions, and combinations thereof. In certain embodiments, the invention allows fluidic streams (which can be continuous or discontinuous, i.e., droplets) to be formed and/or combined, at a variety of scales, including microfluidic scales. In one set of embodiments, a fluidic stream may be produced from a channel, where a cross-sectional dimension of the fluidic stream is smaller than that of the channel, for example, through the use of structural elements, other fluids, and/or applied external fields, etc. In some cases, a Taylor cone may be produced. In another set of embodiments, a fluidic stream may be manipulated in some fashion, for example, to create tubes (which may be hollow or solid), droplets, nested tubes or droplets, arrays of tubes or droplets, meshes of tubes, etc. In some cases, droplets produced using certain embodiments of the invention may be charged or substantially charged, which may allow their further manipulation, for instance, using applied external fields. Non-limiting examples of such manipulations include producing charged droplets, coalescing droplets (especially at the microscale), synchronizing droplet formation, aligning molecules within the droplet, etc. In some cases, the droplets and/or the fluidic streams may include colloids, cells, therapeutic agents, and the like.

  6. Multi-ball and one-ball geolocation

    NASA Astrophysics Data System (ADS)

    Nelson, D. J.; Townsend, J. L.

    2017-05-01

    We present analysis methods that may be used to geolocate emitters using one or more moving receivers. While some of the methods we present may apply to a broader class of signals, our primary interest is locating and tracking ships from short pulsed transmissions, such as the maritime Automatic Identification System (AIS). The AIS signal is difficult to process and track since the pulse duration is only 25 milliseconds and the pulses may only be transmitted every six to ten seconds. In this article, we address several problems, including accurate TDOA and FDOA estimation methods that do not require searching a two-dimensional surface such as the cross-ambiguity surface. As an example, we apply these methods to identify and process AIS pulses from a single emitter, making it possible to geolocate the AIS signal using a single moving receiver.
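
    A basic TDOA estimate of the kind underlying such geolocation can be obtained from the peak of a cross-correlation. A minimal sketch (synthetic pulse and an invented sample rate, not the authors' AIS processing chain):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 10_000                        # sample rate in Hz (invented for illustration)
pulse = rng.standard_normal(250)   # a short wideband pulse

delay = 37                         # true inter-receiver offset, in samples
x = np.concatenate([pulse, np.zeros(500)])
y = np.concatenate([np.zeros(delay), pulse, np.zeros(500 - delay)])

# Cross-correlate the two receiver records; the peak lag is the TDOA estimate
corr = np.correlate(y, x, mode="full")
lag = corr.argmax() - (len(x) - 1)
tdoa = lag / fs                    # time difference of arrival in seconds
```

    This one-dimensional search over lags is what replaces scanning a full two-dimensional cross-ambiguity surface when the Doppler dimension can be handled separately.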

  7. NegGOA: negative GO annotations selection using ontology structure.

    PubMed

    Fu, Guangyuan; Wang, Jun; Yang, Bo; Yu, Guoxian

    2016-10-01

    Predicting the biological functions of proteins is one of the key challenges in the post-genomic era. Computational models have demonstrated the utility of applying machine learning methods to predict protein function. Most prediction methods explicitly require a set of negative examples - proteins that are known not to carry out a particular function. However, the Gene Ontology (GO) almost always only provides the knowledge that proteins do carry out a particular function, and functional annotations of proteins are incomplete. GO structurally organizes tens of thousands of GO terms, and a protein is annotated with several (or dozens) of these terms. For these reasons, negative examples of a protein can greatly help in distinguishing its true positive examples from such a large candidate GO space. In this paper, we present a novel approach (called NegGOA) to select negative examples. Specifically, NegGOA takes advantage of the ontology structure, the available annotations and the potentiality of additional annotations of a protein to choose negative examples of the protein. We compare NegGOA with other negative example selection algorithms and find that NegGOA produces far fewer false negatives. We incorporate the selected negative examples into an efficient function prediction model to predict the functions of proteins in Yeast, Human, Mouse and Fly. NegGOA also demonstrates higher accuracy than the competing algorithms across various evaluation metrics, and it is less affected by incomplete protein annotations than these methods. The Matlab and R code is available at https://sites.google.com/site/guoxian85/neggoa. Contact: gxyu@swu.edu.cn. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Estimating groundwater recharge

    USGS Publications Warehouse

    Healy, Richard W.; Scanlon, Bridget R.

    2010-01-01

    Understanding groundwater recharge is essential for successful management of water resources and modeling fluid and contaminant transport within the subsurface. This book provides a critical evaluation of the theory and assumptions that underlie methods for estimating rates of groundwater recharge. Detailed explanations of the methods are provided - allowing readers to apply many of the techniques themselves without needing to consult additional references. Numerous practical examples highlight benefits and limitations of each method. Approximately 900 references allow advanced practitioners to pursue additional information on any method. For the first time, theoretical and practical considerations for selecting and applying methods for estimating groundwater recharge are covered in a single volume with uniform presentation. Hydrogeologists, water-resource specialists, civil and agricultural engineers, earth and environmental scientists and agronomists will benefit from this informative and practical book. It can serve as the primary text for a graduate-level course on groundwater recharge or as an adjunct text for courses on groundwater hydrology or hydrogeology.
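
    To give a flavor of the techniques such a book covers, here is a toy illustration of one widely used approach, the water-table fluctuation (WTF) method, which estimates recharge as specific yield times the sum of water-level rises (all numbers below are invented):

```python
# Water-table fluctuation (WTF) method: R = Sy * (sum of water-level rises).
# Toy numbers, for illustration only.
specific_yield = 0.15                           # Sy, assumed for a sandy aquifer
levels_m = [10.00, 10.02, 10.30, 10.28, 10.25]  # daily water-table elevations (m)

# Sum only the rises; declines are attributed to drainage, not recharge
rises = [b - a for a, b in zip(levels_m, levels_m[1:]) if b > a]
recharge_m = specific_yield * sum(rises)
```

    Real applications must also account for the recession that would have occurred without recharge, one of the practical caveats a text like this discusses.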

  9. GPS/DR Error Estimation for Autonomous Vehicle Localization.

    PubMed

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-08-21

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied at an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning accuracy achieved was at the sub-meter level.

  10. GPS/DR Error Estimation for Autonomous Vehicle Localization

    PubMed Central

    Lee, Byung-Hyun; Song, Jong-Hwa; Im, Jun-Hyuck; Im, Sung-Hyuck; Heo, Moon-Beom; Jee, Gyu-In

    2015-01-01

    Autonomous vehicles require highly reliable navigation capabilities. For example, a lane-following method cannot be applied at an intersection without lanes, and since typical lane detection is performed using a straight-line model, errors can occur when the lateral distance is estimated in curved sections due to a model mismatch. Therefore, this paper proposes a localization method that uses GPS/DR error estimation based on a lane detection method with curved lane models, stop line detection, and curve matching in order to improve the performance during waypoint following procedures. The advantage of using the proposed method is that position information can be provided for autonomous driving through intersections, in sections with sharp curves, and in curved sections following a straight section. The proposed method was applied in autonomous vehicles at an experimental site to evaluate its performance, and the results indicate that the positioning accuracy achieved was at the sub-meter level. PMID:26307997

  11. [Archaeology and criminology--Strengths and weaknesses of interdisciplinary cooperation].

    PubMed

    Bachhiesl, Christian

    2015-01-01

    Interdisciplinary cooperation of archaeology and criminology is often focussed on the scientific methods applied in both fields of knowledge. In combination with the humanistic methods traditionally used in archaeology, the finding of facts can be enormously increased and the subsequent hermeneutic deduction of human behaviour in the past can take place on a more solid basis. Thus, interdisciplinary cooperation offers direct and indirect advantages. But it can also cause epistemological problems, if the weaknesses and limits of one method are to be corrected by applying methods used in other disciplines. This may result in the application of methods unsuitable for the problem to be investigated so that, in a way, the methodological and epistemological weaknesses of two disciplines potentiate each other. An example of this effect is the quantification of qualia. These epistemological reflections are compared with the interdisciplinary approach using the concrete case of the "Eulau Crime Scene".

  12. A glossary for big data in population and public health: discussion and commentary on terminology and research methods.

    PubMed

    Fuller, Daniel; Buote, Richard; Stanley, Kevin

    2017-11-01

    The volume and velocity of data are growing rapidly and big data analytics are being applied to these data in many fields. Population and public health researchers may be unfamiliar with the terminology and statistical methods used in big data. This creates a barrier to the application of big data analytics. The purpose of this glossary is to define terms used in big data and big data analytics and to contextualise these terms. We define the five Vs of big data and provide definitions and distinctions for data mining, machine learning and deep learning, among other terms. We provide key distinctions between big data and statistical analysis methods applied to big data. We contextualise the glossary by providing examples where big data analysis methods have been applied to population and public health research problems and provide brief guidance on how to learn big data analysis methods. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. Filaments from the galaxy distribution and from the velocity field in the local universe

    NASA Astrophysics Data System (ADS)

    Libeskind, Noam I.; Tempel, Elmo; Hoffman, Yehuda; Tully, R. Brent; Courtois, Hélène

    2015-10-01

    The cosmic web that characterizes the large-scale structure of the Universe can be quantified by a variety of methods. For example, large redshift surveys can be used in combination with point process algorithms to extract long curvilinear filaments in the galaxy distribution. Alternatively, given a full 3D reconstruction of the velocity field, kinematic techniques can be used to decompose the web into voids, sheets, filaments and knots. In this Letter, we look at how two such algorithms - the Bisous model and the velocity shear web - compare with each other in the local Universe (within 100 Mpc), finding good agreement. This is both remarkable and comforting, given that the two methods are radically different in ideology and applied to completely independent and different data sets. Unsurprisingly, the methods are in better agreement when applied to unbiased and complete data sets, like cosmological simulations, than when applied to observational samples. We conclude that more observational data is needed to improve on these methods, but that both methods are most likely properly tracing the underlying distribution of matter in the Universe.
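
    The velocity shear web mentioned above classifies each point by counting eigenvalues of a symmetrized velocity deformation tensor above a threshold. A toy sketch on an analytic collapse flow (this omits the Hubble normalization and smoothing used in the actual method, and the threshold is simply taken to be zero):

```python
import numpy as np

# Toy 3D velocity field on a grid: pure radial collapse v = -r,
# whose (sign-flipped) shear tensor is the identity everywhere,
# i.e. a "knot" in the eigenvalue-counting classification.
n = 8
ax = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
v = np.stack([-X, -Y, -Z])                 # v[i] is the i-th velocity component

d = ax[1] - ax[0]
grad = np.stack([np.stack(np.gradient(v[i], d)) for i in range(3)])
# grad[i, j] = d v_i / d x_j ; symmetrize with a minus sign (collapse positive)
sigma = -0.5 * (grad + grad.transpose(1, 0, 2, 3, 4))

# Eigenvalues at the central cell, then count how many exceed the threshold:
# 3 -> knot, 2 -> filament, 1 -> sheet, 0 -> void
mat = sigma[:, :, n // 2, n // 2, n // 2]
eigvals = np.linalg.eigvalsh(mat)
n_above = int((eigvals > 0.0).sum())
```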

  14. Cost efficient CFD simulations: Proper selection of domain partitioning strategies

    NASA Astrophysics Data System (ADS)

    Haddadi, Bahram; Jordan, Christian; Harasek, Michael

    2017-10-01

    Computational Fluid Dynamics (CFD) is one of the most powerful simulation methods, used for temporally and spatially resolved solutions of fluid flow, heat transfer, mass transfer, etc. One of the challenges of CFD is its extreme hardware demand. Nowadays, supercomputers (e.g. High Performance Computing, HPC) featuring multiple CPU cores are applied for solving; the simulation domain is split into one partition per core. Some of the different methods for partitioning are investigated in this paper. As a practical example, a new open-source-based solver was utilized for simulating packed bed adsorption, a common separation method within the field of thermal process engineering. Adsorption can, for example, be applied for the removal of trace gases from a gas stream or for the production of pure gases such as hydrogen. For comparing the performance of the partitioning methods, a 60 million cell mesh for a packed bed of spherical adsorbents was created, and one second of the adsorption process was simulated. Different partitioning methods available in OpenFOAM® (Scotch, Simple, and Hierarchical) were used with different numbers of sub-domains. The effect of the different methods and of the number of processor cores on simulation speedup and energy consumption was investigated for two different hardware infrastructures (Vienna Scientific Clusters VSC 2 and VSC 3). As a general recommendation, an optimum number of cells per processor core was calculated. The resulting optimized simulation speed, lower energy consumption and consequent cost effects are reported here.

  15. An integrated lean-methods approach to hospital facilities redesign.

    PubMed

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  16. Comparing floral and isotopic paleoelevation estimates: Examples from the western United States

    NASA Astrophysics Data System (ADS)

    Hyland, E. G.; Huntington, K. W.; Sheldon, N. D.; Smith, S. Y.; Strömberg, C. A. E.

    2016-12-01

    Describing paleoelevations is crucial to understanding tectonic processes and deconvolving the effects of uplift and climate on environmental change in the past. Decades of work have gone into estimating past elevation from various proxy archives, particularly using modern relationships between elevation and temperature, floral assemblage compositions, or oxygen isotope values. While these methods have been used widely and refined through time, they are rarely applied in tandem; here we provide two examples from the western United States using new multiproxy methods: 1) combining clumped isotopes and macrofloral assemblages to estimate paleoelevations along the Colorado Plateau, and 2) combining oxygen isotopes and phytolith methods to estimate paleoelevations within the greater Yellowstone region. Clumped isotope measurements and refined floral coexistence methods from sites on the northern Colorado Plateau like Florissant and Creede (CO) consistently estimate low (<2 km) elevations through the Eocene/Oligocene, suggesting slower uplift and a south-north propagation of the plateau. Oxygen isotope measurements and C4 phytolith estimates from sites surrounding the Yellowstone hotspot consistently estimate moderate uplift (0.2-0.7 km) propagating along the hotspot track, suggesting migrating dynamic topography associated with the region. These examples provide support for the emerging practice of using multiproxy methods to estimate paleoelevations for important time periods, and can help integrate environmental and tectonic records of the past.
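
    As a back-of-the-envelope illustration of the temperature-elevation relationship underlying such proxies (far simpler than the multiproxy methods described above, and with all numbers invented), elevation can be estimated from a proxy temperature difference and an assumed lapse rate:

```python
# Toy lapse-rate paleoelevation estimate, for illustration only.
LAPSE_RATE_C_PER_KM = 5.9    # assumed terrestrial lapse rate (degrees C per km)
sea_level_T_C = 25.0         # coeval sea-level proxy temperature (invented)
site_T_C = 13.2              # proxy temperature at the site (invented)

paleoelevation_km = (sea_level_T_C - site_T_C) / LAPSE_RATE_C_PER_KM
```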

  17. Product Recommendation System Based on Personal Preference Model Using CAM

    NASA Astrophysics Data System (ADS)

    Murakami, Tomoko; Yoshioka, Nobukazu; Orihara, Ryohei; Furukawa, Koichi

    A product recommendation system can be realized by applying business rules acquired through data mining techniques. Business rules, such as demographic patterns of purchase, can cover groups of users who tend to purchase products, but it is difficult to recommend products adapted to varied personal preferences using such rules alone. In addition, it is very costly to gather the large volume of high-quality survey data necessary for good recommendations based on a personal preference model. A method for collecting kansei information automatically, without a questionnaire survey, is therefore required. Constructing a personal preference model from sparse preference data is also necessary, since it is costly for users to input their preferences. In this paper, we propose a product recommendation system based on kansei information extracted by text mining and a user preference model constructed by Category-guided Adaptive Modeling (CAM). CAM is a feature construction method that, starting from some labeled examples, generates new features defining a space in which same-labeled examples are close together and differently labeled examples are far apart. CAM thus makes it possible to construct a personal preference model despite limited information on liked and disliked categories. In the system, a retrieval agent gathers product specifications, and a user agent manages the preference model and the user's likes and dislikes. Kansei information about the products is obtained by applying text mining to reputation documents about the products on web sites. We carry out experimental studies to confirm that the preference model obtained by our method performs effectively.

  18. 26 CFR 1.1014-7 - Example applying rules of §§ 1.1014-4 through 1.1014-6 to case involving multiple interests.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 11 2010-04-01 2010-04-01 true Example applying rules of §§ 1.1014-4 through... REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Basis Rules of General Application § 1.1014-7 Example applying rules of §§ 1.1014-4 through 1.1014-6 to case involving...

  19. Photometric redshift estimation via deep learning. Generalized and pre-classification-less, image based, fully probabilistic redshifts

    NASA Astrophysics Data System (ADS)

    D'Isanto, A.; Polsterer, K. L.

    2018-01-01

    Context. The need to analyze the available large synoptic multi-band surveys drives the development of new data-analysis methods. Photometric redshift estimation is one field of application where such new methods have improved the results substantially. Up to now, the vast majority of applied redshift estimation methods have utilized photometric features. Aims: We aim to develop a method to derive probabilistic photometric redshifts directly from multi-band imaging data, rendering pre-classification of objects and feature extraction obsolete. Methods: A modified version of a deep convolutional network was combined with a mixture density network. The estimates are expressed as Gaussian mixture models representing the probability density functions (PDFs) in redshift space. In addition to the traditional scores, the continuous ranked probability score (CRPS) and the probability integral transform (PIT) were applied as performance criteria. We adopted a feature-based random forest and a plain mixture density network to compare performances in experiments with data from SDSS (DR9). Results: We show that the proposed method is able to predict redshift PDFs independently of the type of source, for example galaxies, quasars or stars. Its prediction performance is better than that of both reference methods and is comparable to results from the literature. Conclusions: The presented method is extremely general and allows us to solve any kind of probabilistic regression problem based on imaging data, for example estimating the metallicity or star formation rate of galaxies. This kind of methodology is tremendously important for the next generation of surveys.
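
    The CRPS used as a performance criterion above has a well-known closed form for a single Gaussian predictive PDF (the mixture-model case combines components; this sketch covers only the one-Gaussian case):

```python
import math

def crps_gaussian(mu, sigma, x):
    """Closed-form CRPS of a Gaussian predictive distribution N(mu, sigma^2)
    evaluated against the observed value x:
    CRPS = sigma * [ z(2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi) ],  z = (x-mu)/sigma."""
    z = (x - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))
```

    Unlike a point-error score, the CRPS rewards both calibration and sharpness of the whole predictive PDF, which is why it suits probabilistic redshift estimates.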

  20. A three operator split-step method covering a larger set of non-linear partial differential equations

    NASA Astrophysics Data System (ADS)

    Zia, Haider

    2017-06-01

    This paper describes an updated exponential Fourier based split-step method that can be applied to a greater class of partial differential equations than previous methods would allow. These equations arise in physics and engineering, a notable example being the generalized derivative non-linear Schrödinger equation that arises in non-linear optics with self-steepening terms. These differential equations feature terms that were previously inaccessible to model accurately with low computational resources. The new method maintains 3rd-order error even with these additional terms and models the equation in all three spatial dimensions and time. We characterize the class of non-linear differential equations to which the method applies, derive the method in full, and show its implementation in the split-step architecture. This paper lays the mathematical groundwork for an upcoming paper employing this method in white-light generation simulations in bulk material.
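
    For context, the classic two-operator symmetric split-step Fourier scheme that such methods generalize can be sketched for the cubic non-linear Schrödinger equation i u_t + (1/2) u_xx + |u|^2 u = 0 (a minimal 1D illustration, not the paper's three-operator method):

```python
import numpy as np

# Symmetric (Strang) split-step: half a linear step in Fourier space,
# a full nonlinear step in real space, then another half linear step.
n, L = 256, 40.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
dt = 1e-3

u = 1.0 / np.cosh(x)                         # fundamental soliton initial condition
half_linear = np.exp(-0.5j * k**2 * (dt / 2.0))

norm0 = np.sum(np.abs(u) ** 2)
for _ in range(1000):                        # evolve to t = 1
    u = np.fft.ifft(half_linear * np.fft.fft(u))   # half linear step
    u = u * np.exp(1j * np.abs(u) ** 2 * dt)       # full nonlinear step
    u = np.fft.ifft(half_linear * np.fft.fft(u))   # half linear step
norm1 = np.sum(np.abs(u) ** 2)
```

    Both sub-steps are norm-preserving, and for the soliton the amplitude profile should remain essentially unchanged, which makes a convenient correctness check.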

  1. An Investment Level Decision Method to Secure Long-term Reliability

    NASA Astrophysics Data System (ADS)

    Bamba, Satoshi; Yabe, Kuniaki; Seki, Tomomichi; Shibaya, Tetsuji

    The slowdown in power demand growth and in facility replacement is causing aging and declining reliability in power facilities, and this aging will be followed by a rapid increase in repairs and replacements when many facilities reach the end of their lifetime in the future. This paper describes a method to estimate future repair and replacement costs by applying a life-cycle cost model and renewal theory to historical data. It also describes a method to decide the optimum investment plan, which replaces facilities in order of cost-effectiveness by setting a replacement priority formula, and the minimum investment level needed to maintain reliability. Estimation examples applied to substation facilities show that a reasonable and leveled future cash-out can maintain reliability by lowering the percentage of replacements caused by fatal failures.
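
    The renewal-theory step can be illustrated with a discrete renewal equation that turns a lifetime distribution into an expected cumulative replacement count (a toy sketch with exponential lifetimes, chosen because the exact answer m(t) = t / mean_life is known; real studies would use an aging distribution such as a Weibull):

```python
import math

# Discrete renewal equation: m[n] = sum_k f[k] * (1 + m[n-k]), where f[k] is
# the probability a facility's first failure lands in time bin k.
dt, horizon, mean_life = 0.05, 100.0, 10.0
nsteps = int(horizon / dt)
F = lambda t: 1.0 - math.exp(-t / mean_life)      # lifetime CDF

f = [F(k * dt) - F((k - 1) * dt) for k in range(1, nsteps + 1)]

m = [0.0] * (nsteps + 1)           # m[n] = expected replacements by time n*dt
for n in range(1, nsteps + 1):
    m[n] = sum(f[k - 1] * (1.0 + m[n - k]) for k in range(1, n + 1))
```

    Each replacement restarts the clock, which is exactly what the convolution term m[n-k] encodes; summing the resulting counts against unit costs gives a future cash-out estimate.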

  2. Analytical method to estimate waterflood performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cremonini, A.S.

    A method to predict oil production resulting from the injection of immiscible fluids is described. The method is based on two models: one considers the vertical and displacement efficiencies, assuming unit areal efficiency and, therefore, linear flow. It is a layered model without crossflow in which Buckley-Leverett's displacement theory is used for each layer. The results obtained in the linear model are applied to a stream-channel model similar to the one used by Higgins and Leighton; in this way, areal efficiency is taken into account. The principal innovation is the possibility of applying different relative permeability curves to each layer. A numerical example in a five-spot pattern using relative permeability data obtained from reservoir core samples is presented.
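
    The per-layer linear part of such a model rests on the fractional-flow function at the heart of Buckley-Leverett theory. A minimal sketch with invented Corey-type relative permeability curves and viscosities (real work would use the measured core-sample curves):

```python
# Buckley-Leverett fractional flow of water, f_w = (krw/mu_w) / (krw/mu_w + kro/mu_o),
# with simple quadratic (Corey-type) relative permeabilities.
def frac_flow(sw, mu_w=1.0, mu_o=5.0):
    krw = sw ** 2                  # water relative permeability (assumed form)
    kro = (1.0 - sw) ** 2          # oil relative permeability (assumed form)
    return (krw / mu_w) / (krw / mu_w + kro / mu_o)

# Tabulate the curve over the full saturation range
curve = [frac_flow(i / 10.0) for i in range(11)]
```

    Applying a different pair of relative permeability curves per layer, as the abstract describes, just means giving each layer its own frac_flow.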

  3. The Workings of a Multicultural Research Team

    PubMed Central

    Friedemann, Marie-Luise; Pagan-Coss, Harald; Mayorga, Carlos

    2013-01-01

    Purpose Transcultural nurse researchers are exposed to the challenges of developing and maintaining a multiethnic team. With the example of a multicultural research study of family caregivers conducted in the Miami-Dade area, the authors guide the readers through steps of developing a culturally competent and effective team. Design Pointing out challenges and successes, the authors illustrate team processes and successful strategies relative to recruitment of qualified members, training and team maintenance, and evaluation of team effectiveness. Method With relevant concepts from the literature applied to practical examples, the authors demonstrate how cultural team competence grows in a supportive work environment. PMID:18390824

  4. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, Tikhonov regularization recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
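
    The split-Bregman iteration mentioned above alternates a linear solve with a soft-thresholding ("shrink") step. A one-dimensional toy analogue of the L2-TV subproblem, denoising a sharp edge (this is not the authors' 3D tomography code, and the parameters are invented):

```python
import numpy as np

def tv_denoise_1d(f, mu=10.0, lam=2.0, iters=50):
    """1D total-variation denoising, min_u (mu/2)||u-f||^2 + ||Du||_1,
    via split-Bregman: a linear solve for u, then shrink, then a Bregman update."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)               # forward-difference operator
    A = mu * np.eye(n) + lam * D.T @ D
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(iters):
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - 1.0 / lam, 0.0)
        b = b + Du - d
    return u

rng = np.random.default_rng(1)
truth = np.concatenate([np.zeros(50), np.ones(50)])    # sharp velocity-like contrast
noisy = truth + 0.1 * rng.standard_normal(100)
clean = tv_denoise_1d(noisy)
```

    The shrink step is what lets the TV term keep the jump sharp while flattening the noise, the behavior that distinguishes MTV from plain Tikhonov smoothing.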

  5. Design of a large-scale femtoliter droplet array for single-cell analysis of drug-tolerant and drug-resistant bacteria.

    PubMed

    Iino, Ryota; Matsumoto, Yoshimi; Nishino, Kunihiko; Yamaguchi, Akihito; Noji, Hiroyuki

    2013-01-01

    Single-cell analysis is a powerful method to assess the heterogeneity among individual cells, enabling the identification of very rare cells with properties that differ from those of the majority. In this Methods Article, we describe the use of a large-scale femtoliter droplet array to enclose, isolate, and analyze individual bacterial cells. As a first example, we describe the single-cell detection of drug-tolerant persisters of Pseudomonas aeruginosa treated with the antibiotic carbenicillin. As a second example, the method was applied to the single-cell evaluation of drug efflux activity, which causes acquired antibiotic resistance in bacteria. The MexAB-OprM multidrug efflux pump system from Pseudomonas aeruginosa was expressed in Escherichia coli, and the effect of the inhibitor D13-9001 was assessed at the single-cell level.

  6. Optimum runway orientation relative to crosswinds

    NASA Technical Reports Server (NTRS)

    Falls, L. W.; Brown, S. C.

    1972-01-01

    Specific magnitudes of crosswind may exist that could constrain the success of an aircraft mission, such as the landing of the proposed space shuttle. A method is therefore required to determine the orientation, or azimuth, of the proposed runway that will minimize the probability of certain critical crosswinds. Two procedures for obtaining the optimum runway orientation relative to minimizing a specified crosswind speed are described and illustrated with examples. The empirical procedure requires only hand calculations on an ordinary wind rose. The theoretical method utilizes wind statistics computed after the bivariate normal elliptical distribution is fitted to a data sample of component winds. This method requires only the assumption that the wind components are bivariate normally distributed, which seems reasonable. Studies are currently in progress to test wind components for bivariate normality at various stations. The close agreement between the theoretical and empirical results for the example chosen substantiates the bivariate normal assumption.
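
    The theoretical procedure reduces to a one-dimensional normal calculation: for a given runway azimuth, the crosswind is a linear combination of the bivariate-normal wind components and is therefore itself normally distributed. A sketch with invented wind statistics (means, standard deviations and the critical crosswind are illustrative only):

```python
import math

def prob_crosswind_exceeds(theta, ccrit, mu=(2.0, 0.0), sigma=(5.0, 1.5), rho=0.0):
    """P(|crosswind| > ccrit) for runway azimuth theta, with wind components
    (u, v) bivariate normal. The crosswind c = v*cos(theta) - u*sin(theta)
    is normal with easily computed mean and variance."""
    s, c = math.sin(theta), math.cos(theta)
    m = mu[1] * c - mu[0] * s
    var = (sigma[0] * s) ** 2 + (sigma[1] * c) ** 2 \
          - 2.0 * rho * sigma[0] * sigma[1] * s * c
    sd = math.sqrt(var)
    Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 1.0 - (Phi((ccrit - m) / sd) - Phi((-ccrit - m) / sd))

# Scan candidate azimuths and keep the one minimizing the exceedance probability
angles = [i * math.pi / 180.0 for i in range(180)]
best = min(angles, key=lambda th: prob_crosswind_exceeds(th, ccrit=7.0))
```

    With the wind variance concentrated along one axis, as above, the optimum runway aligns with that axis, matching the intuition behind the wind-rose procedure.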

  7. Rtop - an R package for interpolation of data with a variable spatial support - examples from river networks

    NASA Astrophysics Data System (ADS)

    Olav Skøien, Jon; Laaha, Gregor; Koffler, Daniel; Blöschl, Günter; Pebesma, Edzer; Parajka, Juraj; Viglione, Alberto

    2013-04-01

    Geostatistical methods have been applied only to a limited extent for spatial interpolation in applications where the observations have an irregular support, such as runoff characteristics or population health data. Several studies have shown the potential of such methods (Gottschalk 1993, Sauquet et al. 2000, Gottschalk et al. 2006, Skøien et al. 2006, Goovaerts 2008), but these developments have so far not led to easily accessible, versatile, easy to apply and open source software. Based on the top-kriging approach suggested by Skøien et al. (2006), we will here present the package rtop, which has been implemented in the statistical environment R (R Core Team 2012). Taking advantage of the existing methods in R for analysis of spatial objects (Bivand et al. 2008), and the extensive possibilities for visualizing the results, rtop makes it easy to apply geostatistical interpolation methods when observations have a non-point spatial support. Although the package is flexible regarding data input, the main application so far has been for interpolation along river networks. We will present some examples showing how the package can easily be used for such interpolation. The model will soon be uploaded to CRAN, but is in the meantime also available from R-forge and can be installed by: > install.packages("rtop", repos="http://R-Forge.R-project.org") Bivand, R.S., Pebesma, E.J. & Gómez-Rubio, V., 2008. Applied spatial data analysis with R: Springer. Goovaerts, P., 2008. Kriging and semivariogram deconvolution in the presence of irregular geographical units. Mathematical Geosciences, 40 (1), 101-128. Gottschalk, L., 1993. Interpolation of runoff applying objective methods. Stochastic Hydrology and Hydraulics, 7, 269-281. Gottschalk, L., Krasovskaia, I., Leblois, E. & Sauquet, E., 2006. Mapping mean and variance of runoff in a river basin. Hydrology and Earth System Sciences, 10, 469-484. R Core Team, 2012. R: A language and environment for statistical computing. Vienna, Austria, ISBN 3-900051-07-0. Sauquet, E., Gottschalk, L. & Leblois, E., 2000. Mapping average annual runoff: A hierarchical approach applying a stochastic interpolation scheme. Hydrological Sciences Journal, 45 (6), 799-815. Skøien, J.O., Merz, R. & Blöschl, G., 2006. Top-kriging - geostatistics on stream networks. Hydrology and Earth System Sciences, 10, 277-287.

  8. Ultra-high resolution computed tomography imaging

    DOEpatents

    Paulus, Michael J.; Sari-Sarraf, Hamed; Tobin, Jr., Kenneth William; Gleason, Shaun S.; Thomas, Jr., Clarence E.

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high-energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180°, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
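
    The projection-correction step can be illustrated in one dimension with a regularized (Wiener-style) inverse filter for a known transfer function (a toy analogue for intuition, not the patented procedure; signal, kernel and regularization are invented):

```python
import numpy as np

n = 256
x = np.zeros(n)
x[100:110] = 1.0                          # "true" projection feature

# Gaussian blur kernel standing in for the measured transfer function
t = np.arange(n) - n // 2
h = np.exp(-0.5 * (t / 3.0) ** 2)
h /= h.sum()
H = np.fft.fft(np.fft.ifftshift(h))       # transfer function in frequency domain

y = np.real(np.fft.ifft(np.fft.fft(x) * H))      # blurred "measured" projection

# Regularized deconvolution: divide by H where it is large, damp where tiny
eps = 1e-6
X_hat = np.fft.fft(y) * np.conj(H) / (np.abs(H) ** 2 + eps)
x_hat = np.real(np.fft.ifft(X_hat))
```

    The small eps keeps the division stable where the transfer function vanishes, at the cost of leaving the very highest frequencies unrecovered.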

  9. System and Method for Providing Model-Based Alerting of Spatial Disorientation to a Pilot

    NASA Technical Reports Server (NTRS)

    Johnson, Steve (Inventor); Conner, Kevin J (Inventor); Mathan, Santosh (Inventor)

    2015-01-01

    A system and method monitor aircraft state parameters, for example, aircraft movement and flight parameters, apply those inputs to a spatial disorientation model, and make a prediction of when the pilot may become spatially disoriented. Once the system predicts a potentially disoriented pilot, the sensitivity for alerting the pilot to conditions exceeding a threshold can be increased, allowing an earlier alert to mitigate the possibility of an incorrect control input.

  10. An integrand reconstruction method for three-loop amplitudes

    NASA Astrophysics Data System (ADS)

    Badger, Simon; Frellesvig, Hjalte; Zhang, Yang

    2012-08-01

    We consider the maximal cut of a three-loop four-point function with massless kinematics. By applying Gröbner bases and primary decomposition we develop a method which extracts all ten-propagator master integral coefficients for an arbitrary triple-box configuration via generalized unitarity cuts. As an example we present analytic results for the three-loop triple-box contribution to gluon-gluon scattering in Yang-Mills theory with adjoint fermions and scalars in terms of three master integrals.

  11. Optimization of spent fuel pool weir gate driving mechanism

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Du, Lin; Tao, Xinlei; Wang, Shijie; Shang, Ertao; Yu, Jianjiang

    2018-04-01

    The spent fuel pool is a crucial facility for fuel storage and nuclear safety, and the pool's weir gate is a key piece of related equipment. To achieve more efficient transfer of driving force, the loading during the opening/closing process is analyzed and an optimized calculation method for the dimensions of the driving mechanism is proposed. An optimization example shows that the method can be applied to the design of weir gates with similar driving mechanisms.

  12. Analysis of off-axis solenoid fields using the magnetic scalar potential: An application to a Zeeman-slower for cold atoms

    NASA Astrophysics Data System (ADS)

    Muniz, Sérgio R.; Bagnato, Vanderlei S.; Bhattacharya, M.

    2015-06-01

    In a region free of currents, magnetostatics can be described by the Laplace equation of a scalar magnetic potential, and one can apply the same methods commonly used in electrostatics. Here, we show how to calculate the general vector field inside a real (finite) solenoid, using only the magnitude of the field along the symmetry axis. Our method does not require integration or knowledge of the current distribution and is presented through practical examples, including a nonuniform finite solenoid used to produce cold atomic beams via laser cooling. These examples allow educators to discuss the nontrivial calculation of fields off-axis using concepts familiar to most students, while offering the opportunity to introduce themes of current modern research.
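
    The near-axis expansion behind this kind of calculation follows directly from Laplace's equation: knowing only the on-axis profile B0(z), the field at small radius r is approximately B_z(r,z) = B0(z) - (r^2/4)·B0''(z) and B_r(r,z) = -(r/2)·B0'(z). Below is a minimal numerical sketch of this standard expansion, not the authors' code; evaluating the derivatives by finite differences is an assumption:

```python
import numpy as np

def off_axis_field(B0, z, r):
    """Near-axis expansion of a solenoid field from its on-axis profile.

    In a current-free region the scalar magnetic potential obeys
    Laplace's equation, which gives, to second order in r:
        B_z(r, z) ~ B0(z) - (r^2 / 4) * B0''(z)
        B_r(r, z) ~ -(r / 2) * B0'(z)

    B0 : on-axis axial field sampled on the uniform grid z
    r  : radial distance (scalar), assumed small vs. the coil radius
    """
    dz = z[1] - z[0]
    dB0 = np.gradient(B0, dz)        # B0'(z) by central differences
    d2B0 = np.gradient(dB0, dz)      # B0''(z)
    Bz = B0 - (r ** 2 / 4.0) * d2B0
    Br = -(r / 2.0) * dB0
    return Bz, Br
```

    For example, feeding in the on-axis field of a single current loop, B0(z) = (1 + z^2)^(-3/2) in normalized units, reproduces the analytic radial component -(r/2)·B0'(z) to finite-difference accuracy.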

  13. A new method of search design of refrigerating systems containing a liquid and gaseous working medium based on the graph model of the physical operating principle

    NASA Astrophysics Data System (ADS)

    Yakovlev, A. A.; Sorokin, V. S.; Mishustina, S. N.; Proidakova, N. V.; Postupaeva, S. G.

    2017-01-01

    The article describes a new method for the search design of refrigerating systems, based on a graph model of the physical operating principle derived from a thermodynamic description of the physical processes. The mathematical model of the physical operating principle is substantiated, and the basic abstract theorems concerning the semantic load assigned to the nodes and edges of the graph are presented. The necessity and sufficiency of the physical operating principle for the given model and the considered device class are demonstrated using the example of a vapour-compression refrigerating plant, and the derivation of a set of engineering solutions for such a plant is considered.

  14. The Data-to-Action Framework: A Rapid Program Improvement Process.

    PubMed

    Zakocs, Ronda; Hill, Jessica A; Brown, Pamela; Wheaton, Jocelyn; Freire, Kimberley E

    2015-08-01

    Although health education programs may benefit from quality improvement methods, scant resources exist to help practitioners apply these methods for program improvement. The purpose of this article is to describe the Data-to-Action framework, a process that guides practitioners through rapid-feedback cycles in order to generate actionable data to improve implementation of ongoing programs. The framework was designed while implementing DELTA PREP, a 3-year project aimed at building the primary prevention capacities of statewide domestic violence coalitions. The authors describe the framework's main steps and provide a case example of a rapid-feedback cycle and several examples of rapid-feedback memos produced during the project period. The authors also discuss implications for health education evaluation and practice. © 2015 Society for Public Health Education.

  15. Inverting Monotonic Nonlinearities by Entropy Maximization

    PubMed Central

    López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables arise, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it decouples the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear part (source separation matrix or deconvolution filter), which can then be solved by any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of the algorithm, based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt successfully compensates monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results. PMID:27780261

  16. Inverting Monotonic Nonlinearities by Entropy Maximization.

    PubMed

    Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables arise, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it decouples the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear part (source separation matrix or deconvolution filter), which can then be solved by any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of the algorithm, based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt successfully compensates monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results.
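
    The Gaussianization idea that MaxEnt generalizes can be illustrated with a simple rank-based sketch: mapping the distorted observations through their empirical CDF and then through the inverse standard-normal CDF undoes any monotonic distortion up to an affine map when the underlying mixture is approximately Gaussian. This is a minimal illustration of the baseline idea only, not the authors' polynomial or neural-network MaxEnt parameterizations:

```python
import numpy as np
from statistics import NormalDist

def gaussianize(x):
    """Rank-based Gaussianization: map each sample through the empirical
    CDF, then through the inverse standard-normal CDF.  If x = f(s) with
    f monotonic and s approximately Gaussian, this recovers s up to an
    affine transformation."""
    n = len(x)
    ranks = np.argsort(np.argsort(x)) + 1          # ranks 1..n
    u = ranks / (n + 1.0)                          # empirical CDF values
    inv = NormalDist().inv_cdf                     # Phi^{-1}, stdlib
    return np.array([inv(p) for p in u])
```

    For example, if x = s**3 with Gaussian s, `gaussianize(x)` correlates almost perfectly with s even though x itself does not, which is exactly the decoupling that lets a linear algorithm finish the job.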

  17. Creating Learning Organizations: The Deming Management Method Applied to Instruction (Quality Teaching & Quality Learning). A Paradigm Application.

    ERIC Educational Resources Information Center

    Loehr, Peter

    This paper presents W. Edwards Deming's 14 management points, 7 deadly diseases, and 4 obstacles that thwart productivity, and discusses how these principles relate to teaching and learning. Application of these principles is expected to increase the quality of learning in classrooms from kindergarten through graduate level. Examples of the…

  18. Cost-Effectiveness Analysis of Early Reading Programs: A Demonstration with Recommendations for Future Research

    ERIC Educational Resources Information Center

    Hollands, Fiona M.; Kieffer, Michael J.; Shand, Robert; Pan, Yilin; Cheng, Henan; Levin, Henry M.

    2016-01-01

    We review the value of cost-effectiveness analysis for evaluation and decision making with respect to educational programs and discuss its application to early reading interventions. We describe the conditions for a rigorous cost-effectiveness analysis and illustrate the challenges of applying the method in practice, providing examples of programs…

  19. Physical Applications of a Simple Approximation of Bessel Functions of Integer Order

    ERIC Educational Resources Information Center

    Barsan, V.; Cojocaru, S.

    2007-01-01

    Applications of a simple approximation of Bessel functions of integer order, in terms of trigonometric functions, are discussed for several examples from electromagnetism and optics. The method may be applied in the intermediate regime, bridging the "small values regime" and the "asymptotic" one, and covering, in this way, an area of great…

  20. A Robust New Method for Analyzing Community Change and an Example Using 83 Years of Avian Response to Forest Succession

    EPA Science Inventory

    This manuscript describes a novel statistical analysis technique developed by the authors for use in combining survey data carried out under different field protocols. We apply the technique to 83 years of survey data on avian songbird populations in northern lower Michigan to de...

  1. Using Images, Metaphor, and Hypnosis in Integrating Multiple Personality and Dissociative States: A Review of the Literature.

    ERIC Educational Resources Information Center

    Crawford, Carrie L.

    1990-01-01

    Reviews literature on hypnosis, imagery, and metaphor as applied to the treatment and integration of those with multiple personality disorder (MPD) and dissociative states. Considers diagnostic criteria of MPD; explores current theories of etiology and treatment; and suggests specific examples of various clinical methods of treatment using…

  2. Statistics for Time-Series Spatial Data: Applying Survival Analysis to Study Land-Use Change

    ERIC Educational Resources Information Center

    Wang, Ninghua Nathan

    2013-01-01

    Traditional spatial analysis and data mining methods fall short of extracting temporal information from data. This inability makes their use difficult to study changes and the associated mechanisms of many geographic phenomena of interest, for example, land-use. On the other hand, the growing availability of land-change data over multiple time…

  3. An Empirical Investigation of Methods for Assessing Item Fit for Mixed Format Tests

    ERIC Educational Resources Information Center

    Chon, Kyong Hee; Lee, Won-Chan; Ansley, Timothy N.

    2013-01-01

    Empirical information regarding performance of model-fit procedures has been a persistent need in measurement practice. Statistical procedures for evaluating item fit were applied to real test examples that consist of both dichotomously and polytomously scored items. The item fit statistics used in this study included the PARSCALE's G[squared],…

  4. A 3-D enlarged cell technique (ECT) for elastic wave modelling of a curved free surface

    NASA Astrophysics Data System (ADS)

    Wei, Songlin; Zhou, Jianyang; Zhuang, Mingwei; Liu, Qing Huo

    2016-09-01

    The conventional finite-difference time-domain (FDTD) method for elastic waves suffers from staircasing error when applied to model a curved free surface because of its structured grid. In this work, an improved, stable and accurate 3-D FDTD method for elastic wave modelling on a curved free surface is developed based on the finite volume method and the enlarged cell technique (ECT). To achieve a sufficiently accurate implementation, a finite volume scheme is applied to the curved free surface to remove the staircasing error; in the meantime, to achieve the same stability as the FDTD method without reducing the time step increment, the ECT is introduced to preserve solution stability by enlarging small irregular cells into adjacent cells under the condition of conservation of force. The method is verified by several 3-D numerical examples. Results show that it is stable at the Courant stability limit for a regular FDTD grid and has much higher accuracy than the conventional FDTD method.

  5. Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework.

    PubMed

    Talluto, Matthew V; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique

    2016-02-01

    Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods, with eastern North America as an example. Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making.

  6. Methods of Transposition of Nurses between Wards

    NASA Astrophysics Data System (ADS)

    Miyazaki, Shigeji; Masuda, Masakazu

    In this paper, a computer-implemented method for automating the transposition of a hospital’s nursing staff is proposed. The model is applied to a real case, ‘O’ hospital, which performs a transposition of its nursing staff once a year. Results are compared with real data obtained from this hospital’s current manual transposition system. The proposed method not only significantly reduces the time taken to construct the transposition, thereby reducing management labor costs, but is also demonstrated to increase nurses’ levels of satisfaction with the process.

  7. Method for fusing bone

    DOEpatents

    Mourant, J.R.; Anderson, G.D.; Bigio, I.J.; Johnson, T.M.

    1996-03-12

    The present invention is a method for joining hard tissue which includes chemically removing the mineral matrix from a thin layer of the surfaces to be joined, placing the two bones together, and heating the joint using electromagnetic radiation. The goal of the method is not to produce a full-strength weld of, for example, a cortical bone of the tibia, but rather to produce a weld of sufficient strength to hold the bone halves in registration while either external fixative devices are applied to stabilize the bone segments, or normal healing processes restore full strength to the tibia.

  8. Anti-reflective and anti-soiling coatings for self-cleaning properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brophy, Brenor L.; Nair, Vinod; Dave, Bakul Champaklal

    The disclosure describes abrasion-resistant, persistently hydrophobic and oleophobic, anti-reflective and anti-soiling coatings for glass. The coatings described herein have wide application, including, for example, the front cover glass of solar modules. Methods of applying the coatings using various apparatus are disclosed, as are methods for using the coatings in solar energy generation plants to achieve greater energy yield and reduced operations costs. The coating materials are formed by combining hydrolyzed silane-based precursors through sol-gel processes. Several methods of synthesis and formulation of the coating materials are disclosed.

  9. Flat-topped broadband rugate filters.

    PubMed

    Imenes, Anne G; McKenzie, David R

    2006-10-20

    A method of creating rugate interference filters that have flat-topped reflectance across an extended spectral region is presented. The method applies known relations from classical coupled-wave theory to develop a set of equations that gives the spatial frequency distribution of rugate cycles needed to achieve constant reflectance across a given spectral region. Two example applications of the method are discussed: a highly reflective coating for eye protection against harmful laser radiation incident from normal to 45°, and a spectral beam splitter for efficient solar power conversion.

  10. Harmony Search Method: Theory and Applications

    PubMed Central

    Gao, X. Z.; Govindasamy, V.; Xu, H.; Wang, X.; Zenger, K.

    2015-01-01

    The Harmony Search (HS) method is an emerging metaheuristic optimization algorithm, which has been employed to cope with numerous challenging tasks during the past decade. In this paper, the essential theory and applications of the HS algorithm are first described and reviewed. Several typical variants of the original HS are next briefly explained. As a case study, a modified HS method inspired by the idea of Pareto-dominance-based ranking is also presented. It is further applied to handle a practical wind generator optimal design problem. PMID:25945083
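
    The core HS loop is compact enough to sketch. The following minimal implementation is a generic textbook version with assumed parameter values, not the paper's modified variant; it minimizes a function over a box by improvising new harmonies from memory consideration, pitch adjustment, and random consideration:

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    """Minimal Harmony Search sketch for minimizing f over a box.

    hms  : harmony memory size
    hmcr : harmony memory considering rate
    par  : pitch adjusting rate
    bw   : pitch-adjustment bandwidth (fraction of the variable range)
    """
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                # take a value from memory
                v = memory[rng.randrange(hms)][d]
                if rng.random() < par:             # pitch adjustment
                    v += (rng.random() * 2 - 1) * bw * (hi - lo)
            else:                                  # random consideration
                v = rng.uniform(lo, hi)
            new.append(min(max(v, lo), hi))        # keep inside the box
        s = f(new)
        worst = max(range(hms), key=scores.__getitem__)
        if s < scores[worst]:                      # replace the worst harmony
            memory[worst], scores[worst] = new, s
    best = min(range(hms), key=scores.__getitem__)
    return memory[best], scores[best]
```

    On a 2-D sphere function, for example, this loop steadily drives the harmony memory toward the origin; the HMCR/PAR/bw values above are common defaults in the HS literature, not tuned settings.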

  11. Unconditionally stable WLP-FDTD method for the modeling of electromagnetic wave propagation in gyrotropic materials.

    PubMed

    Li, Zheng-Wei; Xi, Xiao-Li; Zhang, Jin-Sheng; Liu, Jiang-fan

    2015-12-14

    The unconditional stable finite-difference time-domain (FDTD) method based on field expansion with weighted Laguerre polynomials (WLPs) is applied to model electromagnetic wave propagation in gyrotropic materials. The conventional Yee cell is modified to have the tightly coupled current density components located at the same spatial position. The perfectly matched layer (PML) is formulated in a stretched-coordinate (SC) system with the complex-frequency-shifted (CFS) factor to achieve good absorption performance. Numerical examples are shown to validate the accuracy and efficiency of the proposed method.

  12. The Expanding Role of Applications in the Development and Validation of CFD at NASA

    NASA Technical Reports Server (NTRS)

    Schuster, David M.

    2010-01-01

    This paper focuses on the recent escalation in the application of CFD to manned and unmanned flight projects at NASA and the need to often apply these methods to problems for which little or no previous validation data directly applies. The paper discusses the evolution of NASA's CFD development from a strict Develop, Validate, Apply strategy to sometimes allowing for a Develop, Apply, Validate approach. The risks of this approach and some of its unforeseen benefits are discussed and tied to specific operational examples. There are distinct advantages for the CFD developer who is able to operate in this paradigm, and recommendations are provided for those inclined and willing to work in this environment.

  13. An Application of Gröbner Basis in Differential Equations of Physics

    NASA Astrophysics Data System (ADS)

    Chaharbashloo, Mohammad Saleh; Basiri, Abdolali; Rahmany, Sajjad; Zarrinkamar, Saber

    2013-11-01

    We apply the Gröbner basis to the ansatz method in quantum mechanics to obtain the energy eigenvalues and the wave functions in a very simple manner. There are important physical potentials such as the Cornell interaction which play significant roles in particle physics and can be treated via this technique. As a typical example, the algorithm is applied to the semi-relativistic spinless Salpeter equation under the Cornell interaction. Many other applications of the idea in a wide range of physical fields are listed as well.

  14. Extended Aperture Photometry of K2 RR Lyrae stars

    NASA Astrophysics Data System (ADS)

    Plachy, Emese; Klagyivik, Péter; Molnár, László; Sódor, Ádám; Szabó, Róbert

    2017-10-01

    We present the Extended Aperture Photometry (EAP) method that we applied to K2 RR Lyrae stars. Our aim is to minimize the instrumental variations caused by attitude-control maneuvers by using apertures that cover the positional changes in the field of view and thus contain the stars during the whole observation. We present example light curves that we compared to light curves from the K2 Systematics Correction (K2SC) pipeline applied to the automated Single Aperture Photometry (SAP) and to the Pre-search Data Conditioning Simple Aperture Photometry (PDCSAP) data.

  15. High-order ENO schemes applied to two- and three-dimensional compressible flow

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang; Erlebacher, Gordon; Zang, Thomas A.; Whitaker, David; Osher, Stanley

    1991-01-01

    High order essentially non-oscillatory (ENO) finite difference schemes are applied to the 2-D and 3-D compressible Euler and Navier-Stokes equations. Practical issues, such as vectorization, efficiency of coding, cost comparison with other numerical methods, and accuracy degeneracy effects, are discussed. Numerical examples are provided which are representative of computational problems of current interest in transition and turbulence physics. These require both nonoscillatory shock capturing and high resolution for detailed structures in the smooth regions and demonstrate the advantage of ENO schemes.
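
    The essentially non-oscillatory idea, selecting the smoother of the candidate stencils so that interpolation never crosses a steep gradient, can be sketched in one dimension. The following is a minimal second-order ENO scheme for linear advection, an illustration of the principle only and far simpler than the high-order compressible solvers discussed above:

```python
import numpy as np

def eno2_step(u, c):
    """One TVD-RK2 step of u_t + u_x = 0 on a periodic grid with a
    second-order ENO reconstruction: at each cell interface the scheme
    picks, of the two candidate upwind-biased stencils, the one with
    the smaller divided difference.  c = dt/dx (CFL number, speed +1).
    """
    def rhs(u):
        um1, up1 = np.roll(u, 1), np.roll(u, -1)
        left = u + 0.5 * (u - um1)             # stencil {i-1, i}
        right = u + 0.5 * (up1 - u)            # stencil {i, i+1}
        smooth = np.abs(u - um1) <= np.abs(up1 - u)
        uface = np.where(smooth, left, right)  # value at interface i+1/2
        return -(uface - np.roll(uface, 1))    # flux difference, speed +1
    u1 = u + c * rhs(u)                        # Heun / TVD-RK2 stages
    return 0.5 * (u + u1 + c * rhs(u1))
```

    Advecting a smooth profile around a periodic domain with this step keeps the solution second-order accurate while avoiding the new extrema a fixed central stencil can create at sharp fronts.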

  16. The Art and Science of Learning, Teaching, and Delivering Feedback in Psychosomatic Medicine.

    PubMed

    Lokko, Hermioni N; Gatchel, Jennifer R; Becker, Madeleine A; Stern, Theodore A

    2016-01-01

    The teaching and learning of psychosomatic medicine has evolved with the better understanding of effective teaching methods and feedback delivery in medicine and psychiatry. We sought to review the variety of teaching methods used in psychosomatic medicine, to present principles of adult learning (and how these theories can be applied to students of psychosomatic medicine), and to discuss the role of effective feedback delivery in the process of teaching and learning psychosomatic medicine. In addition to drawing on the clinical and teaching experiences of the authors of the paper, we reviewed the literature on teaching methods, adult learning theories, and effective feedback delivery methods in medicine to draw parallels for psychosomatic medicine education. We provide a review of teaching methods that have been employed to teach psychosomatic medicine over the past few decades. We outline examples of educational methods using the affective, behavioral, and cognitive domains. We provide examples of learning styles together with the principles of adult learning theory and how they can be applied to psychosomatic medicine learners. We discuss barriers to feedback delivery and offer suggestions as to how to give feedback to trainees on a psychosomatic medicine service. The art of teaching psychosomatic medicine is dynamic and will continue to evolve with advances in the field. Psychosomatic medicine educators must familiarize themselves with learning domains, learning styles, and principles of adult learning in order to be impactful. Effective feedback delivery methods are critical to fostering a robust learning environment for psychosomatic medicine. Copyright © 2016 The Academy of Psychosomatic Medicine. Published by Elsevier Inc. All rights reserved.

  17. An Autonomous Sensor Tasking Approach for Large Scale Space Object Cataloging

    NASA Astrophysics Data System (ADS)

    Linares, R.; Furfaro, R.

    The field of Space Situational Awareness (SSA) has progressed over the last few decades, with new sensors coming online, the development of new approaches for making observations, and new algorithms for processing them. Although there has been success in the development of new approaches, a missing piece is the translation of SSA goals into sensor and resource allocation, otherwise known as the Sensor Management Problem (SMP). This work solves the SMP using an artificial intelligence approach called Deep Reinforcement Learning (DRL). Stable methods for training DRL approaches based on neural networks exist, but most of these approaches are not suitable for high dimensional systems. The Asynchronous Advantage Actor-Critic (A3C) method is a recently developed and effective approach for high dimensional systems, and this work leverages these results and applies the approach to decision making in SSA. The decision space for SSA problems can be high dimensional, even for the tasking of a single telescope: since the number of SOs in space is relatively high, each sensor has a large number of possible actions at any given time. Therefore, efficient DRL approaches are required when solving the SMP for SSA, and this work develops an A3C-based DRL method for SSA sensor tasking. One of the key benefits of DRL approaches is the ability to handle high dimensional data; DRL methods have, for example, been applied to image processing for autonomous cars, where a 256x256 RGB image has 196,608 input values (256*256*3) and deep learning approaches routinely take such images as inputs. When applied to the whole catalog, the DRL approach therefore offers the ability to solve this high dimensional problem. This work has the potential to solve, for the first time, the non-myopic sensor tasking problem for the whole SO catalog (over 22,000 objects), providing a truly revolutionary result.

  18. High-coverage quantitative proteomics using amine-specific isotopic labeling.

    PubMed

    Melanson, Jeremy E; Avery, Steven L; Pinto, Devanand M

    2006-08-01

    Peptide dimethylation with isotopically coded formaldehydes was evaluated as a potential alternative to techniques such as the iTRAQ method for comparative proteomics. The isotopic labeling strategy and custom-designed protein quantitation software were tested using protein standards and then applied to measure protein levels associated with Alzheimer's disease (AD). The method provided high accuracy (10% error), precision (14% RSD) and coverage (70%) when applied to the analysis of a standard solution of BSA by LC-MS/MS. The technique was then applied to measure protein abundance levels in brain tissue afflicted with AD relative to normal brain tissue. 2-D LC-MS analysis identified 548 unique proteins (p<0.05). Of these, 349 were quantified with two or more peptides that met the statistical criteria used in this study. Several classes of proteins exhibited significant changes in abundance; for example, elevated levels of antioxidant proteins and decreased levels of mitochondrial electron transport proteins were observed. The results demonstrate the utility of the labeling method for high-throughput quantitative analysis.

  19. Probabilistic Exposure Analysis for Chemical Risk Characterization

    PubMed Central

    Bogen, Kenneth T.; Cullen, Alison C.; Frey, H. Christopher; Price, Paul S.

    2009-01-01

    This paper summarizes the state of the science of probabilistic exposure assessment (PEA) as applied to chemical risk characterization. Current probabilistic risk analysis methods applied to PEA are reviewed. PEA within the context of risk-based decision making is discussed, including probabilistic treatment of related uncertainty, interindividual heterogeneity, and other sources of variability. Key examples of recent experience gained in assessing human exposures to chemicals in the environment, and other applications to chemical risk characterization and assessment, are presented. It is concluded that, although improvements continue to be made, existing methods suffice for effective application of PEA to support quantitative analyses of the risk of chemically induced toxicity that play an increasing role in key decision-making objectives involving health protection, triage, civil justice, and criminal justice. Different types of information required to apply PEA to these different decision contexts are identified, and specific PEA methods are highlighted that are best suited to exposure assessment in these separate contexts. PMID:19223660
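
    A minimal Monte Carlo sketch of PEA: propagate assumed lognormal variability in concentration, intake rate, and body weight through a standard average-daily-dose formula and read off percentiles. All distributions and parameter values below are illustrative assumptions, not taken from any actual assessment:

```python
import numpy as np

def simulate_dose(n=100_000, seed=1):
    """Monte Carlo sketch of a probabilistic exposure assessment:
    average daily dose ADD = C * IR * EF / (BW * 365), with lognormal
    variability assumed for concentration, intake rate, and body weight.
    All parameter values below are illustrative only.
    """
    rng = np.random.default_rng(seed)
    C = rng.lognormal(mean=np.log(2.0), sigma=0.5, size=n)    # mg/L in water
    IR = rng.lognormal(mean=np.log(1.5), sigma=0.3, size=n)   # L/day intake
    EF = 350.0                                                # days/year exposed
    BW = rng.lognormal(mean=np.log(70.0), sigma=0.2, size=n)  # kg body weight
    return C * IR * EF / (BW * 365.0)                         # mg/kg-day

add = simulate_dose()
p50, p95 = np.percentile(add, [50, 95])
```

    Reporting the full percentile spread rather than a single point estimate is what distinguishes the probabilistic treatment of variability described above from a deterministic exposure calculation.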

  20. A Practical Guide to Interpretation of Large Collections of Incident Narratives Using the QUORUM Method

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W.

    1997-01-01

    Analysis of incident reports plays an important role in aviation safety. Typically, a narrative description, written by a participant, is a central part of an incident report. Because there are so many reports, and the narratives contain so much detail, it can be difficult to efficiently and effectively recognize patterns among them. Recognizing and addressing recurring problems, however, is vital to continuing safety in commercial aviation operations. A practical way to interpret large collections of incident narratives is to apply the QUORUM method of text analysis, modeling, and relevance ranking. In this paper, QUORUM text analysis and modeling are surveyed, and QUORUM relevance ranking is described in detail with many examples. The examples are based on several large collections of reports from the Aviation Safety Reporting System (ASRS) database, and a collection of news stories describing the disaster of TWA Flight 800, the Boeing 747 which exploded in mid-air and crashed near Long Island, New York, on July 17, 1996. Reader familiarity with this disaster should make the relevance-ranking examples more understandable. The ASRS examples illustrate the practical application of QUORUM relevance ranking.

  1. Two and three dimensional grid generation by an algebraic homotopy procedure

    NASA Technical Reports Server (NTRS)

    Moitra, Anutosh

    1990-01-01

    An algebraic method for generating two- and three-dimensional grid systems for aerospace vehicles is presented. The method is based on algebraic procedures derived from homotopic relations for blending between inner and outer boundaries of any given configuration. Stable properties of homotopic maps have been exploited to provide near-orthogonality and specified constant spacing at the inner boundary. The method has been successfully applied to analytically generated blended wing-body configurations as well as discretely defined geometries such as the High-Speed Civil Transport Aircraft. Grid examples representative of the capabilities of the method are presented.

  2. Method for fusing bone

    DOEpatents

    Mourant, Judith R.; Anderson, Gerhard D.; Bigio, Irving J.; Johnson, Tamara M.

    1996-01-01

    Method for fusing bone. The present invention is a method for joining hard tissue which includes chemically removing the mineral matrix from a thin layer of the surfaces to be joined, placing the two bones together, and heating the joint using electromagnetic radiation. The goal of the method is not to produce a full-strength weld of, for example, a cortical bone of the tibia, but rather to produce a weld of sufficient strength to hold the bone halves in registration while either external fixative devices are applied to stabilize the bone segments, or normal healing processes restore full strength to the tibia.

  3. The ratio method: A new tool to study one-neutron halo nuclei

    DOE PAGES

    Capel, Pierre; Johnson, R. C.; Nunes, F. M.

    2013-10-02

    Recently a new observable to study halo nuclei was introduced, based on the ratio between breakup and elastic angular cross sections. Analysis of specific reactions shows this new observable to be independent of the reaction mechanism and to provide nuclear-structure information about the projectile. Here we explore the details of this ratio method, including its sensitivity to the binding energy and angular momentum of the projectile. We also study the reliability of the method as a function of breakup energy. Lastly, we provide guidelines and specific examples for experimentalists who wish to apply this method.

  4. A unified convergence theory of a numerical method, and applications to the replenishment policies.

    PubMed

    Mi, Xiang-jiang; Wang, Xing-hua

    2004-01-01

    In determining the replenishment policy for an inventory system, some researchers have advocated applying Newton's iterative method to the derivative of the total cost function in order to obtain the optimal solution. This approach, however, requires calculating the second derivative of the function. To avoid this complex computation, we use another iterative method presented by the second author. One goal of this paper is to present a unified convergence theory for this method. We then give a numerical example to show the application of our theory.
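
    The abstract does not reproduce the iteration itself, but the idea of avoiding second derivatives can be sketched with a secant iteration applied to the first derivative of a hypothetical EOQ-style cost function (the cost model, parameter values, and function names below are illustrative assumptions, not taken from the paper):

```python
def secant_root(g, x0, x1, tol=1e-10, max_iter=100):
    """Find a root of g by the secant method; no derivative of g is needed,
    so applying it to a cost function's first derivative avoids the second
    derivative that Newton's method would require."""
    for _ in range(max_iter):
        g0, g1 = g(x0), g(x1)
        if g1 == g0:
            break
        x2 = x1 - g1 * (x1 - x0) / (g1 - g0)
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

# Hypothetical EOQ-style total cost: ordering cost K, demand D, holding cost h.
K, D, h = 100.0, 1000.0, 5.0
cost_derivative = lambda Q: -K * D / Q**2 + h / 2.0   # d/dQ of K*D/Q + h*Q/2

Q_opt = secant_root(cost_derivative, 100.0, 300.0)
# Closed-form optimum for this cost model is sqrt(2*K*D/h) = 200.
```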

  5. Applications of asynoptic space-time Fourier transform methods to scanning satellite measurements

    NASA Technical Reports Server (NTRS)

    Lait, Leslie R.; Stanford, John L.

    1988-01-01

    A method proposed by Salby (1982) for computing the zonal space-time Fourier transform of asynoptically acquired satellite data is discussed. The method and its relationship to other techniques are briefly described, and possible problems in applying it to real data are outlined. Examples of results obtained using this technique are given which demonstrate its sensitivity to small-amplitude signals. A number of waves are found which have previously been observed as well as two not heretofore reported. A possible extension of the method which could increase temporal and longitudinal resolution is described.

  6. Search automation of the generalized method of device operational characteristics improvement

    NASA Astrophysics Data System (ADS)

    Petrova, I. Yu; Puchkova, A. A.; Zaripova, V. M.

    2017-01-01

    The article presents brief results of an analysis of existing search methods for the closest patents, which can be applied to determine generalized methods of improving device operational characteristics. The most widespread clustering algorithms and metrics for determining the degree of proximity between two documents are reviewed. The article proposes a technique for determining generalized methods; it has two implementation variants and consists of seven steps. This technique has been implemented in the “Patents search” subsystem of the “Intellect” system. The article also gives an example of the use of the proposed technique.

  7. Numerical solution of distributed order fractional differential equations

    NASA Astrophysics Data System (ADS)

    Katsikadelis, John T.

    2014-02-01

    In this paper a method for the numerical solution of distributed order FDEs (fractional differential equations) of a general form is presented. The method applies to both linear and nonlinear equations and employs the Caputo-type fractional derivative. The distributed order FDE is approximated with a multi-term FDE, which is then solved by appropriately adjusting the numerical method developed for multi-term FDEs by Katsikadelis. Several example equations are solved, and the response of mechanical systems described by such equations is studied. The convergence and accuracy of the method for linear and nonlinear equations are demonstrated through well-corroborated numerical results.

  8. Aligned and Unaligned Coherence: A New Diagnostic Tool

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2006-01-01

    The study of combustion noise from turbofan engines has become important again as the noise from other sources, such as the fan and jet, is reduced. A method has been developed to help identify combustion noise spectra using an aligned and unaligned coherence technique. When used with the well-known three-signal coherent power method and the coherent power method, it provides new information by separating tonal information from random-process information. Examples are presented showing the underlying tonal structure that is buried under broadband noise and jet noise. The method is applied to data from a Pratt & Whitney PW4098 turbofan engine.

  9. A generalized least squares regression approach for computing effect sizes in single-case research: application examples.

    PubMed

    Maggin, Daniel M; Swaminathan, Hariharan; Rogers, Helen J; O'Keeffe, Breda V; Sugai, George; Horner, Robert H

    2011-06-01

    A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of treatment effect from baseline to treatment phases in standard deviation units. In this paper, the method is applied to two published examples using common single case designs (i.e., withdrawal and multiple-baseline). The results from these studies are described, and the method is compared to ten desirable criteria for single-case effect sizes. Based on the results of this application, we conclude with observations about the use of GLS as a support to visual analysis, provide recommendations for future research, and describe implications for practice.
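
    As a rough sketch of the GLS idea described above (not the authors' implementation; the series length, AR(1) error structure, and estimator details below are illustrative assumptions), one can estimate the lag-1 autocorrelation from OLS residuals, whiten with the implied AR(1) covariance, and express the baseline-to-treatment shift in standard deviation units:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical single-case series: 10 baseline + 10 treatment points with
# AR(1) errors (rho = 0.3) and a true level shift of 2.0 (illustrative values).
n_a, n_b, rho, shift = 10, 10, 0.3, 2.0
e = np.zeros(n_a + n_b)
for t in range(1, len(e)):
    e[t] = rho * e[t - 1] + rng.normal()
phase = np.r_[np.zeros(n_a), np.ones(n_b)]
y = 5.0 + shift * phase + e

# Step 1: OLS fit, then estimate the lag-1 autocorrelation of the residuals.
X = np.column_stack([np.ones_like(phase), phase])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_ols
rho_hat = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)

# Step 2: GLS with the AR(1) covariance Sigma_ij = rho^|i-j|.
idx = np.arange(len(y))
Sigma = rho_hat ** np.abs(idx[:, None] - idx[None, :])
Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

# Step 3: effect size = estimated phase shift in residual-SD units.
r = y - X @ beta_gls
sigma_hat = np.sqrt((r @ Si @ r) / (len(y) - X.shape[1]))
effect_size = beta_gls[1] / sigma_hat
```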

  10. Modern Focused-Ion-Beam-Based Site-Specific Specimen Preparation for Atom Probe Tomography.

    PubMed

    Prosa, Ty J; Larson, David J

    2017-04-01

    Approximately 30 years after the first use of focused ion beam (FIB) instruments to prepare atom probe tomography specimens, this technique has grown to be used by hundreds of researchers around the world. This past decade has seen tremendous advances in atom probe applications, enabled by the continued development of FIB-based specimen preparation methodologies. In this work, we provide a short review of the origin of the FIB method and the standard methods used today for lift-out and sharpening, using the annular milling method as applied to atom probe tomography specimens. Key steps for enabling correlative analysis with transmission electron-beam backscatter diffraction, transmission electron microscopy, and atom probe tomography are presented, and strategies for preparing specimens for modern microelectronic device structures are reviewed and discussed in detail. Examples are used for discussion of the steps for each of these methods. We conclude with examples of the challenges presented by complex topologies such as nanowires, nanoparticles, and organic materials.

  11. Improving the local wavenumber method by automatic DEXP transformation

    NASA Astrophysics Data System (ADS)

    Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni

    2014-12-01

    In this paper we present a new method for source parameter estimation based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly well suited to dealing with local wavenumbers of high order, since it is able to overcome their known instability caused by the use of high-order derivatives. The DEXP transformation has a notable feature when applied to the local wavenumber function: the scaling law is independent of the structural index. So, unlike the DEXP transformation applied directly to potential fields, the local wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different homogeneity degrees can also be easily and correctly treated. The method was applied to synthetic and real examples from Bulgaria and Italy, and the results agree well with known information about the causative sources.
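
    The basic DEXP idea can be illustrated with a tiny synthetic example: for a monopole-like field with structural index N = 2, scaling the field continued to altitude z by z^(N/2) produces a maximum exactly at the source depth (the field model and numbers below are illustrative, not the paper's data or its local-wavenumber variant):

```python
import numpy as np

h, k = 5.0, 100.0                 # hypothetical source depth and strength
z = np.linspace(0.1, 20.0, 2000)  # continuation altitudes
f = k / (z + h) ** 2              # monopole-like field, structural index N = 2
omega = z ** (2 / 2) * f          # DEXP scaling z^(N/2)
z_est = z[np.argmax(omega)]       # extreme point of the scaled field -> depth
# Analytically, d/dz [z/(z+h)^2] = (h - z)/(z+h)^3, so the maximum is at z = h.
```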

  12. Reaction schemes visualized in network form: the syntheses of strychnine as an example.

    PubMed

    Proudfoot, John R

    2013-05-24

    Representation of synthesis sequences in network form provides an effective method for comparing multiple reaction schemes and an opportunity to emphasize features, such as reaction scale, that are often relegated to experimental sections. An example of data formatting that allows construction of network maps in Cytoscape is presented, along with maps that illustrate the comparison of multiple reaction sequences, the comparison of scaffold changes within sequences, and consolidation to highlight common key intermediates used across sequences. The 17 different synthetic routes reported for strychnine are used as an example basis set. The reaction maps presented required significant data extraction and curation; a standardized tabular format for reporting reaction information, if applied in a consistent way, could allow the automated combination of reaction information across different sources.

  13. A Mathematical Model of the Color Preference Scale Construction in Quality Management at the Machine-Building Enterprise

    NASA Astrophysics Data System (ADS)

    Averchenkov, V. I.; Kondratenko, S. V.; Potapov, L. A.; Spasennikov, V. V.

    2017-01-01

    In this article, the authors consider the basic features of color preferences. Well-known studies confirm that such preferences are consistent and independent of subjective factors. The article examines a method of constructing a respondent's individual color preference scale on the basis of L. Thurstone's method of paired comparisons. A practical example of applying this technique to construct a respondent's individual color preference scale is given. The result of applying the method is an individual color preference scale with a weight value for each color. The authors also developed and present an algorithm for applying this method within a software package to determine respondents' attitudes toward the issues under investigation based on their color preferences. The article also considers the possibility of using the software at industrial enterprises to improve consumer product quality.
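
    The paired-comparison scaling step can be illustrated with a minimal Thurstone Case V computation: convert pairwise preference proportions to standard-normal deviates and average them into scale values (the colors and proportions below are made-up illustration data, not the respondents' data from the article):

```python
from statistics import NormalDist

# Hypothetical pairwise-preference proportions among three colors:
# p[i][j] = share of respondents preferring color i over color j.
colors = ["red", "green", "blue"]
p = [
    [0.5, 0.7, 0.9],
    [0.3, 0.5, 0.8],
    [0.1, 0.2, 0.5],
]

# Thurstone Case V: z_ij = Phi^{-1}(p_ij); scale value s_i = mean over j of z_ij.
nd = NormalDist()
n = len(colors)
z = [[nd.inv_cdf(p[i][j]) for j in range(n)] for i in range(n)]
scale = {c: sum(row) / n for c, row in zip(colors, z)}

# Shift so the least-preferred color sits at zero.
low = min(scale.values())
scale = {c: v - low for c, v in scale.items()}
```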

  14. Effective quadrature formula in solving linear integro-differential equations of order two

    NASA Astrophysics Data System (ADS)

    Eshkuvatov, Z. K.; Kammuji, M.; Long, N. M. A. Nik; Yunus, Arif A. M.

    2017-08-01

    In this note, we approximately solve a general form of second-order Fredholm-Volterra integro-differential equations (IDEs) with boundary conditions and show that the proposed method is effective and reliable. Initially, the IDE is reduced to an integral equation of the third kind using standard integration techniques and an identity between multiple and single integrals; truncated Legendre series are then used to estimate the unknown function. For the kernel integrals, we apply the Gauss-Legendre quadrature formula, and the collocation points are chosen as the roots of the Legendre polynomials. Finally, the integral equation of the third kind is reduced to a system of algebraic equations, and Gaussian elimination is applied to obtain approximate solutions. Numerical examples and comparisons with other methods reveal that the proposed method is very effective and dominates the others in many cases. A general theory of the existence of the solution is also discussed.
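
    Two ingredients of the scheme, Gauss-Legendre quadrature for the kernel integrals and nodes at the roots of the Legendre polynomials, can be sketched in isolation (a generic quadrature demo, not the authors' full solver):

```python
import numpy as np

def gauss_legendre(f, a, b, n=5):
    """Integrate f over [a, b] with an n-point Gauss-Legendre rule."""
    x, w = np.polynomial.legendre.leggauss(n)  # nodes = roots of P_n, plus weights
    t = 0.5 * (b - a) * x + 0.5 * (b + a)      # affine map from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(w * f(t))

# An n-point rule is exact for polynomials up to degree 2n - 1,
# and converges rapidly for smooth integrands:
poly_val = gauss_legendre(lambda t: t**6, 0.0, 1.0)  # exact value is 1/7
sine_val = gauss_legendre(np.sin, 0.0, np.pi)        # exact value is 2
```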

  15. Ghost artifact cancellation using phased array processing.

    PubMed

    Kellman, P; McVeigh, E R

    2001-08-01

    In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples.
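
    The constrained combiner can be sketched in a few lines of linear algebra: choose coil weights that pass the true-pixel signal with unit gain while nulling the ghost location, in the SENSE-like form w = R⁻¹C(CᴴR⁻¹C)⁻¹e₁ (the coil count, random sensitivities, and white-noise covariance below are illustrative assumptions, not measured data):

```python
import numpy as np

rng = np.random.default_rng(0)
nc = 4                                               # number of coils (assumed)
s = rng.normal(size=nc) + 1j * rng.normal(size=nc)   # sensitivities at true pixel
g = rng.normal(size=nc) + 1j * rng.normal(size=nc)   # sensitivities at ghost pixel
R = np.eye(nc)                                       # noise covariance (white noise)

# Constrained optimum: unit gain on the signal, exact null on the ghost.
C = np.column_stack([s, g])
Ri = np.linalg.inv(R)
U = Ri @ C @ np.linalg.inv(C.conj().T @ Ri @ C)
w = U[:, 0]                                          # w^H s = 1, w^H g = 0
```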

  16. Ghost Artifact Cancellation Using Phased Array Processing

    PubMed Central

    Kellman, Peter; McVeigh, Elliot R.

    2007-01-01

    In this article, a method for phased array combining is formulated which may be used to cancel ghosts caused by a variety of distortion mechanisms, including space variant distortions such as local flow or off-resonance. This method is based on a constrained optimization, which optimizes SNR subject to the constraint of nulling ghost artifacts at known locations. The resultant technique is similar to the method known as sensitivity encoding (SENSE) used for accelerated imaging; however, in this formulation it is applied to full field-of-view (FOV) images. The method is applied to multishot EPI with noninterleaved phase encode acquisition. A number of benefits, as compared to the conventional interleaved approach, are reduced distortion due to off-resonance, in-plane flow, and EPI delay misalignment, as well as eliminating the need for echo-shifting. Experimental results demonstrate the cancellation for both phantom as well as cardiac imaging examples. PMID:11477638

  17. Asynchronous multilevel adaptive methods for solving partial differential equations on multiprocessors - Performance results

    NASA Technical Reports Server (NTRS)

    Mccormick, S.; Quinlan, D.

    1989-01-01

    The fast adaptive composite grid method (FAC) is an algorithm that uses various levels of uniform grids (global and local) to provide adaptive resolution and fast solution of PDEs. Like all such methods, it offers parallelism by using possibly many disconnected patches per level, but is hindered by the need to handle these levels sequentially. The finest levels must therefore wait for processing to be essentially completed on all the coarser ones. A recently developed asynchronous version of FAC, called AFAC, completely eliminates this bottleneck to parallelism. This paper describes timing results for AFAC, coupled with a simple load balancing scheme, applied to the solution of elliptic PDEs on an Intel iPSC hypercube. These tests include performance of certain processes necessary in adaptive methods, including moving grids and changing refinement. A companion paper reports on numerical and analytical results for estimating convergence factors of AFAC applied to very large scale examples.

  18. Computer-based objective quantitative assessment of pulmonary parenchyma via x-ray CT

    NASA Astrophysics Data System (ADS)

    Uppaluri, Renuka; McLennan, Geoffrey; Sonka, Milan; Hoffman, Eric A.

    1998-07-01

    This paper is a review of our recent studies using a texture-based tissue characterization method called the Adaptive Multiple Feature Method (AMFM). This computerized method is automated and performs tissue classification based upon training acquired on a set of representative examples. The AMFM has been applied to several different discrimination tasks involving normal subjects, subjects with interstitial lung disease, smokers, asbestos-exposed subjects, and subjects with cystic fibrosis. The AMFM has also been applied to data acquired using different scanners and scanning protocols. The AMFM has been shown to be successful, and better than other existing techniques, in discriminating the tissues under consideration. We demonstrate that the AMFM is considerably more sensitive and specific in characterizing the lung, especially in the presence of mixed pathology, than more commonly used methods. Evidence is presented suggesting that the AMFM is highly sensitive to some of the earliest disease processes.

  19. Multigrid methods for bifurcation problems: The self adjoint case

    NASA Technical Reports Server (NTRS)

    Taasan, Shlomo

    1987-01-01

    This paper deals with multigrid methods for computational problems that arise in the theory of bifurcation and is restricted to the self adjoint case. The basic problem is to solve for arcs of solutions, a task that is done successfully with an arc length continuation method. Other important issues are, for example, detecting and locating singular points as part of the continuation process, switching branches at bifurcation points, etc. Multigrid methods have been applied to continuation problems. These methods work well at regular points and at limit points, while they may encounter difficulties in the vicinity of bifurcation points. A new continuation method that is very efficient also near bifurcation points is presented here. The other issues mentioned above are also treated very efficiently with appropriate multigrid algorithms. For example, it is shown that limit points and bifurcation points can be solved for directly by a multigrid algorithm. Moreover, the algorithms presented here solve the corresponding problems in just a few work units (about 10 or less), where a work unit is the work involved in one local relaxation on the finest grid.

  20. BAYESIAN ESTIMATION OF THERMONUCLEAR REACTION RATES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iliadis, C.; Anderson, K. S.; Coc, A.

    The problem of estimating non-resonant astrophysical S-factors and thermonuclear reaction rates, based on measured nuclear cross sections, is of major interest for nuclear energy generation, neutrino physics, and element synthesis. Many different methods have been applied to this problem in the past, almost all of them based on traditional statistics. Bayesian methods, on the other hand, are now in widespread use in the physical sciences. In astronomy, for example, Bayesian statistics is applied to the observation of extrasolar planets, gravitational waves, and Type Ia supernovae. However, nuclear physics in particular has been slow to adopt Bayesian methods. We present astrophysical S-factors and reaction rates based on Bayesian statistics. We develop a framework that incorporates robust parameter estimation, systematic effects, and non-Gaussian uncertainties in a consistent manner. The method is applied to the reactions d(p,γ)³He, ³He(³He,2p)⁴He, and ³He(α,γ)⁷Be, important for deuterium burning, solar neutrinos, and Big Bang nucleosynthesis.

  1. Military applications and examples of near-surface seismic surface wave methods (Invited)

    NASA Astrophysics Data System (ADS)

    sloan, S.; Stevens, R.

    2013-12-01

    Although not always widely known or publicized, the military uses a variety of geophysical methods for a wide range of applications--some that are already common practice in the industry while others are truly novel. Some of those applications include unexploded ordnance detection, general site characterization, anomaly detection, countering improvised explosive devices (IEDs), and security monitoring, to name a few. Techniques used may include, but are not limited to, ground penetrating radar, seismic, electrical, gravity, and electromagnetic methods. Seismic methods employed include surface wave analysis, refraction tomography, and high-resolution reflection methods. Although the military employs geophysical methods, that does not necessarily mean that those methods enable or support combat operations--often times they are being used for humanitarian applications within the military's area of operations to support local populations. The work presented here will focus on the applied use of seismic surface wave methods, including multichannel analysis of surface waves (MASW) and backscattered surface waves, often in conjunction with other methods such as refraction tomography or body-wave diffraction analysis. Multiple field examples will be shown, including explosives testing, tunnel detection, pre-construction site characterization, and cavity detection.

  2. Phylogenetic framework for coevolutionary studies: a compass for exploring jungles of tangled trees.

    PubMed

    Martínez-Aquino, Andrés

    2016-08-01

    Phylogenetics is used to detect past evolutionary events, from how species originated to how their ecological interactions with other species arose, which can mirror cophylogenetic patterns. Cophylogenetic reconstructions uncover past ecological relationships between taxa through inferred coevolutionary events on trees, for example, codivergence, duplication, host-switching, and loss. These events can be detected by cophylogenetic analyses based on nodes and the length and branching pattern of the phylogenetic trees of symbiotic associations, for example, host-parasite. In the past 2 decades, algorithms have been developed for cophylogenetic analyses and implemented in different software, for example, statistical congruence index and event-based methods. Based on the combination of these approaches, it is possible to integrate temporal information into cophylogenetic inference, such as estimates of lineage divergence times between 2 taxa, for example, hosts and parasites. Additionally, the advances in phylogenetic biogeography applying methods based on parametric process models and combined Bayesian approaches can be useful for interpreting coevolutionary histories in a scenario of biogeographical area connectivity through time. This article briefly reviews the basics of parasitology and provides an overview of software packages in cophylogenetic methods. Thus, the objective here is to present a phylogenetic framework for coevolutionary studies, with special emphasis on groups of parasitic organisms. Researchers wishing to undertake phylogeny-based coevolutionary studies can use this review as a "compass" when "walking" through jungles of tangled phylogenetic trees.

  3. Lyapunov exponents from Chua's circuit time series using artificial neural networks

    NASA Technical Reports Server (NTRS)

    Gonzalez, J. Jesus; Espinosa, Ismael E.; Fuentes, Alberto M.

    1995-01-01

    In this paper we present the general problem of identifying whether a nonlinear dynamic system exhibits chaotic behavior. If the answer is positive, the system will be sensitive to small perturbations in the initial conditions, implying that there is a chaotic attractor in its state space. A particular problem would be that of identifying a chaotic oscillator. We present an example of three different well-known chaotic oscillators for which we know the equations that govern the dynamical systems, and from these we can obtain the corresponding time series. In a similar example we assume that we only know the time series and, finally, in another example we take measurements in Chua's circuit to obtain sample points of the time series. With knowledge of the time series, the phase-plane portraits are plotted, and from them, by visual inspection, it is concluded whether or not the system is chaotic. This method suffers from uncertainty and subjectivity, and for that reason a different approach is needed. A quantitative approach is the computation of the Lyapunov exponents. We describe several methods for obtaining them and apply a little-known method based on artificial neural networks to the different examples mentioned above. We end the paper by discussing the importance of the Lyapunov exponents in the interpretation of the dynamic behavior of biological neurons and biological neural networks.
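
    As a minimal quantitative illustration of the Lyapunov-exponent criterion (using the one-dimensional logistic map rather than Chua's circuit, and the direct averaged log-derivative formula rather than the paper's neural-network method), a positive exponent signals chaos and a negative one a periodic orbit:

```python
import math

def logistic_lyapunov(r, x0=0.2, n=100_000, transient=1000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the orbit average of ln|f'(x)| = ln|r*(1-2x)|.
    For r = 4 the exact value is ln 2."""
    x = x0
    for _ in range(transient):       # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n):
        s += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return s / n
```

At r = 4 the map is fully chaotic (exponent near ln 2 ≈ 0.693), while at r = 3.2 the orbit settles onto a stable period-2 cycle and the exponent is negative.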

  4. Phylogenetic framework for coevolutionary studies: a compass for exploring jungles of tangled trees

    PubMed Central

    2016-01-01

    Abstract Phylogenetics is used to detect past evolutionary events, from how species originated to how their ecological interactions with other species arose, which can mirror cophylogenetic patterns. Cophylogenetic reconstructions uncover past ecological relationships between taxa through inferred coevolutionary events on trees, for example, codivergence, duplication, host-switching, and loss. These events can be detected by cophylogenetic analyses based on nodes and the length and branching pattern of the phylogenetic trees of symbiotic associations, for example, host–parasite. In the past 2 decades, algorithms have been developed for cophylogetenic analyses and implemented in different software, for example, statistical congruence index and event-based methods. Based on the combination of these approaches, it is possible to integrate temporal information into cophylogenetical inference, such as estimates of lineage divergence times between 2 taxa, for example, hosts and parasites. Additionally, the advances in phylogenetic biogeography applying methods based on parametric process models and combined Bayesian approaches, can be useful for interpreting coevolutionary histories in a scenario of biogeographical area connectivity through time. This article briefly reviews the basics of parasitology and provides an overview of software packages in cophylogenetic methods. Thus, the objective here is to present a phylogenetic framework for coevolutionary studies, with special emphasis on groups of parasitic organisms. Researchers wishing to undertake phylogeny-based coevolutionary studies can use this review as a “compass” when “walking” through jungles of tangled phylogenetic trees. PMID:29491928

  5. Method and apparatus for testing surface characteristics of a material

    NASA Technical Reports Server (NTRS)

    Johnson, David L. (Inventor); Kersker, Karl D. (Inventor); Stratton, Troy C. (Inventor); Richardson, David E. (Inventor)

    2006-01-01

    A method, apparatus and system for testing characteristics of a material sample is provided. The system includes an apparatus configured to house the material test sample while defining a sealed volume against a surface of the material test sample. A source of pressurized fluid is in communication with, and configured to pressurize, the sealed volume. A load applying apparatus is configured to apply a defined load to the material sample while the sealed volume is monitored for leakage of the pressurized fluid. Thus, the inducement of surface defects such as microcracking and crazing may be detected and their effects analyzed for a given material. The material test samples may include laminar structures formed of, for example, carbon cloth phenolic, glass cloth phenolic, silica cloth phenolic materials or carbon-carbon materials. In one embodiment the system may be configured to analyze the material test sample while an across-ply loading is applied thereto.

  6. Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation

    NASA Astrophysics Data System (ADS)

    Hatten, Noble; Russell, Ryan P.

    2017-12-01

    A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.

  7. Detection of Unknown Crypts under the Floor in the Holy Trinity Church (Dominican Monastery) in Krakow, Poland

    NASA Astrophysics Data System (ADS)

    Strzępowicz, Anna; Łyskowski, Mikołaj; Ziętek, Jerzy; Tomecka-Suchoń, Sylwia

    2018-03-01

    GPR surveying is a quick, non-invasive geophysical method that is also applied in archaeological prospection. It allows archaeological artefacts buried under historical layers to be detected, including those found within buildings of historical value. Most commonly, just as in this particular case, it is used in churches, where other non-invasive localisation methods cannot be applied. In most cases, surveys yield highly positive results, enabling the site and size of a specific object to be indicated. A good example is the set of results obtained from the measurements carried out in the Basilica of Holy Trinity, belonging to the Dominican Monastery in Krakow. They confirmed the location of the already known crypts and indicated so-far unidentified objects.

  8. Hesitant fuzzy linguistic multicriteria decision-making method based on generalized prioritized aggregation operator.

    PubMed

    Wu, Jia-ting; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong

    2014-01-01

    Based on linguistic term sets and hesitant fuzzy sets, the concept of hesitant fuzzy linguistic sets was introduced. The focus of this paper is multicriteria decision-making (MCDM) problems in which the criteria are in different priority levels and the criteria values take the form of hesitant fuzzy linguistic numbers (HFLNs). A new approach to solving these problems is proposed, based on the generalized prioritized aggregation operator of HFLNs. Firstly, new operations and a comparison method for HFLNs are provided, and some linguistic scale functions are applied. Subsequently, two prioritized aggregation operators and a generalized prioritized aggregation operator of HFLNs are developed and applied to MCDM problems. Finally, an illustrative example is given to demonstrate the effectiveness and feasibility of the proposed method, which is then compared to an existing approach.

  9. Impulsive control of stochastic systems with applications in chaos control, chaos synchronization, and neural networks.

    PubMed

    Li, Chunguang; Chen, Luonan; Aihara, Kazuyuki

    2008-06-01

    Real systems are often subject to both noise perturbations and impulsive effects. In this paper, we study the stability and stabilization of systems with both noise perturbations and impulsive effects. In other words, we generalize the impulsive control theory from the deterministic case to the stochastic case. The method is based on extending the comparison method to the stochastic case. The method presented in this paper is general and easy to apply. Theoretical results on both stability in the pth mean and stability with disturbance attenuation are derived. To show the effectiveness of the basic theory, we apply it to the impulsive control and synchronization of chaotic systems with noise perturbations, and to the stability of impulsive stochastic neural networks. Several numerical examples are also presented to verify the theoretical results.
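
    The flavor of the result can be illustrated numerically (an illustrative simulation under assumed parameters, not the paper's analysis): a scalar linear SDE that is unstable in the second mean is stabilized by periodically applying a contracting impulse x → b·x.

```python
import numpy as np

def second_moment(a=0.5, sigma=0.2, b=0.5, tau=0.5, T=10.0, dt=1e-3,
                  n_paths=1000, impulses=True, seed=0):
    """Euler-Maruyama estimate of E|x(T)|^2 for dx = a*x dt + sigma*x dW,
    with the impulsive control x -> b*x applied every tau time units."""
    rng = np.random.default_rng(seed)
    x = np.ones(n_paths)
    steps = int(round(T / dt))
    period = int(round(tau / dt))
    for k in range(1, steps + 1):
        dW = np.sqrt(dt) * rng.normal(size=n_paths)
        x = x + a * x * dt + sigma * x * dW
        if impulses and k % period == 0:
            x = b * x                      # contracting impulse
    return float(np.mean(x**2))

m_ctrl = second_moment(impulses=True)    # decays toward zero
m_free = second_moment(impulses=False)   # grows like exp((2a + sigma^2) T)
```

Per period the second moment is multiplied by roughly b²·exp((2a + σ²)τ) ≈ 0.42 < 1 here, so the impulses stabilize a system that is otherwise exponentially unstable in the mean square.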

  10. Computation of subsonic flow around airfoil systems with multiple separation

    NASA Technical Reports Server (NTRS)

    Jacob, K.

    1982-01-01

    A numerical method for computing the subsonic flow around multi-element airfoil systems was developed, allowing for flow separation at one or more elements. Besides multiple rear separation, short bubbles on the upper surface and cove bubbles can also be approximately taken into account, and compressibility effects for purely subsonic flow are approximately accounted for. After its presentation, the method is applied to several examples and improved in some details. Finally, the present limitations and desirable extensions are discussed.

  11. Least Squares Best Fit Method for the Three Parameter Weibull Distribution: Analysis of Tensile and Bend Specimens with Volume or Surface Flaw Failure

    NASA Technical Reports Server (NTRS)

    Gross, Bernard

    1996-01-01

    Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
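
    A minimal sketch of the least-squares approach described above (not the NASA code; the plotting positions and grid-search range are assumptions): the three-parameter Weibull CDF is linearized with median-rank plotting positions, and the location parameter is found by grid search.

```python
import numpy as np

def fit_weibull3_lsq(x):
    """Least-squares fit of a three-parameter Weibull to failure data x.

    Linearizes ln(-ln(1 - F)) = beta*ln(x - gamma) - beta*ln(eta) using
    Benard's median-rank plotting positions, and grid-searches the
    location parameter gamma for the best linear fit."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    F = (np.arange(1, n + 1) - 0.375) / (n + 0.25)   # median ranks
    y = np.log(-np.log(1.0 - F))
    best = None
    for gamma in np.linspace(0.0, 0.99 * x[0], 100):
        t = np.log(x - gamma)
        beta, intercept = np.polyfit(t, y, 1)        # slope, intercept
        resid = y - (beta * t + intercept)
        sse = float(resid @ resid)
        if best is None or sse < best[0]:
            eta = np.exp(-intercept / beta)
            best = (sse, beta, eta, gamma)
    return best[1:]   # (shape beta, scale eta, location gamma)

# Synthetic "strength" data drawn from a known three-parameter Weibull.
rng = np.random.default_rng(0)
beta_true, eta_true, gamma_true = 5.0, 100.0, 50.0
data = gamma_true + eta_true * rng.weibull(beta_true, size=200)
beta, eta, gamma = fit_weibull3_lsq(data)
```

The recovered parameters approximate the generating values; in practice the fit would be repeated for volume- and surface-flaw populations separately, as the abstract indicates.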

  12. Graph-theoretic strengths of contextuality

    NASA Astrophysics Data System (ADS)

    de Silva, Nadish

    2017-03-01

    Cabello-Severini-Winter and Abramsky-Hardy (building on the framework of Abramsky-Brandenburger) both provide classes of Bell and contextuality inequalities for very general experimental scenarios using vastly different mathematical techniques. We review both approaches, carefully detail the links between them, and give simple, graph-theoretic methods for finding inequality-free proofs of nonlocality and contextuality and for finding states exhibiting strong nonlocality and/or contextuality. Finally, we apply these methods to concrete examples in stabilizer quantum mechanics relevant to understanding contextuality as a resource in quantum computation.

  13. Design of spur gears for improved efficiency

    NASA Technical Reports Server (NTRS)

    Anderson, N. E.; Loewenthal, S. H.

    1981-01-01

    A method to calculate spur gear system power loss for a wide range of gear geometries and operating conditions is used to determine design requirements for an efficient gearset. The effects of spur gear size, pitch, ratio, pitch-line-velocity and load on efficiency are shown. A design example is given to illustrate how the method is to be applied. In general, peak efficiencies were found to be greater for larger diameter and fine pitched gears and tare (no-load) losses were found to be significant.

  14. Measures and models for angular correlation and angular-linear correlation. [correlation of random variables

    NASA Technical Reports Server (NTRS)

    Johnson, R. A.; Wehrly, T.

    1976-01-01

    Population models for dependence between two angular measurements and for dependence between an angular and a linear observation are proposed. The method of canonical correlations first leads to new population and sample measures of dependence in this latter situation. An example relating wind direction to the level of a pollutant is given. Next, applied to pairs of angular measurements, the method yields previously proposed sample measures in some special cases and a new sample measure in general.

  15. Deferred discrimination algorithm (nibbling) for target filter management

    NASA Astrophysics Data System (ADS)

    Caulfield, H. John; Johnson, John L.

    1999-07-01

    A new method of classifying objects is presented. Rather than trying to form the classifier in one step or with one training algorithm, it is built in a series of small steps, or nibbles. This leads to an efficient and versatile system that is trained serially with single one-shot examples but applied in parallel; it is implemented with single-layer perceptrons, yet maintains a fully sequential hierarchical structure. Based on the nibbling algorithm, a basic new method of target reference filter management is described.

  16. On Solutions for the Transient Response of Beams

    NASA Technical Reports Server (NTRS)

    Leonard, Robert W.

    1959-01-01

    Williams type modal solutions of the elementary and Timoshenko beam equations are presented for the response of several uniform beams to a general applied load. Example computations are shown for a free-free beam subject to various concentrated loads at its center. Discussion includes factors influencing the convergence of modal solutions and factors to be considered in a choice of beam theory. Results obtained by two numerical procedures, the traveling-wave method and Houbolt's method, are also presented and discussed.

  17. Stability analysis of piecewise non-linear systems and its application to chaotic synchronisation with intermittent control

    NASA Astrophysics Data System (ADS)

    Wang, Qingzhi; Tan, Guanzheng; He, Yong; Wu, Min

    2017-10-01

    This paper considers a stability analysis issue of piecewise non-linear systems and applies it to intermittent synchronisation of chaotic systems. First, based on piecewise Lyapunov function methods, more general and less conservative stability criteria of piecewise non-linear systems in periodic and aperiodic cases are presented, respectively. Next, intermittent synchronisation conditions of chaotic systems are derived which extend existing results. Finally, Chua's circuit is taken as an example to verify the validity of our methods.

  18. Distinguishing time-delayed causal interactions using convergent cross mapping

    PubMed Central

    Ye, Hao; Deyle, Ethan R.; Gilarranz, Luis J.; Sugihara, George

    2015-01-01

    An important problem across many scientific fields is the identification of causal effects from observational data alone. Recent methods (convergent cross mapping, CCM) have made substantial progress on this problem by applying the idea of nonlinear attractor reconstruction to time series data. Here, we expand upon the technique of CCM by explicitly considering time lags. Applying this extended method to representative examples (model simulations, a laboratory predator-prey experiment, temperature and greenhouse gas reconstructions from the Vostok ice core, and long-term ecological time series collected in the Southern California Bight), we demonstrate the ability to identify different time-delayed interactions, distinguish between synchrony induced by strong unidirectional-forcing and true bidirectional causality, and resolve transitive causal chains. PMID:26435402

  19. BOREHOLE NEUTRON ACTIVATION: THE RARE EARTHS.

    USGS Publications Warehouse

    Mikesell, J.L.; Senftle, F.E.

    1987-01-01

    Neutron-induced borehole gamma-ray spectroscopy has been widely used as a geophysical exploration technique by the petroleum industry, but its use for mineral exploration is not as common. Nuclear methods can be applied to mineral exploration, for determining stratigraphy and bed correlations, for mapping ore deposits, and for studying mineral concentration gradients. High-resolution detectors are essential for mineral exploration, and by using them an analysis of the major element concentrations in a borehole can usually be made. A number of economically important elements can be detected at typical ore-grade concentrations using this method. Because of the application of the rare-earth elements to high-temperature superconductors, these elements are examined in detail as an example of how nuclear techniques can be applied to mineral exploration.

  20. Acoustics outreach program for the deaf

    NASA Astrophysics Data System (ADS)

    Vongsawad, Cameron T.; Berardi, Mark L.; Whiting, Jennifer K.; Lawler, M. Jeannette; Gee, Kent L.; Neilsen, Tracianne B.

    2016-03-01

    The Hear and See methodology has often been used as a means of enhancing pedagogy by focusing on the two strongest learning senses, but this naturally does not apply to deaf or hard-of-hearing students. Because deaf students' prior nonaural experiences with sound will vary significantly from those of students with typical hearing, different methods must be used to build understanding. However, the sensory-focused pedagogical principle can be applied in a different way for the Deaf by utilizing the senses of touch and sight, called here the "See and Feel" method. This presentation will provide several examples of how acoustics demonstrations have been adapted to create an outreach program for a group of junior high students from a school for the Deaf and discuss challenges encountered.

  1. Estimating conditional proportion curves by regression residuals.

    PubMed

    Han, Bing; Lim, Nelson

    2010-06-15

    Researchers often derive a categorical outcome from an observed continuous measurement y. For example, human obesity status can be defined by the body mass index. They proceed to estimate the conditional proportion curve p(x) = P(y

  2. Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrodinger equations

    DOE PAGES

    Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong

    2015-01-23

    In this paper, we study a local discontinuous Galerkin method combined with fourth-order exponential time differencing Runge-Kutta time discretization and a fourth-order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and prove error estimates for the semi-discrete methods applied to the linear Schrödinger equation. The numerical methods are shown to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
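
    The exponential time differencing idea can be shown in miniature (a first-order ETD scheme on a scalar ODE; the paper's fourth-order ETDRK4 for coupled NLS is substantially more involved): the stiff linear part is integrated exactly, and only the nonlinear part is approximated over each step.

```python
import numpy as np

def etd1(lam, N, u0, h, steps):
    """First-order exponential time differencing for u' = lam*u + N(u).

    The stiff linear term lam*u is integrated exactly via exp(lam*h);
    the nonlinear term N(u) is held constant over each step (ETD1,
    the building block of higher-order ETD Runge-Kutta schemes)."""
    E = np.exp(lam * h)
    phi = (E - 1.0) / lam          # = h * phi_1(lam*h)
    u = u0
    for _ in range(steps):
        u = E * u + phi * N(u)
    return u

# Stiff linear test problem u' = -50*u + 1, u(0) = 0.
# ETD1 reproduces the exact solution here because N(u) = 1 is constant.
lam, h, steps = -50.0, 0.1, 20
u = etd1(lam, lambda u: 1.0, 0.0, h, steps)
exact = (1.0 / 50.0) * (1.0 - np.exp(lam * h * steps))
```

Note that an explicit Euler step with h = 0.1 would be wildly unstable for this problem (|1 + lam*h| = 4), while the exponential integrator is exact on the linear part at any step size.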

  3. Research on filter’s parameter selection based on PROMETHEE method

    NASA Astrophysics Data System (ADS)

    Zhu, Hui-min; Wang, Hang-yu; Sun, Shi-yan

    2018-03-01

    The selection of filter parameters in target recognition was studied in this paper. The PROMETHEE method was applied to the decision problem of optimizing Gabor filter parameters, and a correspondence model of the relation between the two methods was established. Taking the identification of a military target as an example, the filter-parameter decision problem was simulated and calculated with PROMETHEE. The results showed that using the PROMETHEE method for the selection of filter parameters is more scientific, as the human disturbance introduced by the expert method and the empirical method can be avoided. The method can provide a reference for deciding the parameter configuration scheme of the filter.
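
    A minimal PROMETHEE II sketch (with hypothetical candidate scores and weights; the paper's Gabor-filter criteria are not reproduced here): pairwise preferences are aggregated into leaving, entering, and net outranking flows, and the candidate with the highest net flow is selected.

```python
import numpy as np

def promethee2(X, weights):
    """Minimal PROMETHEE II with the 'usual' (strict-dominance) preference
    function. X is an (alternatives x criteria) table; every criterion is
    assumed to be maximised. Returns net outranking flows (higher = better)."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    pref = np.zeros((n, n))
    for a in range(n):
        for b in range(n):
            if a != b:
                d = X[a] - X[b]
                pref[a, b] = w[d > 0].sum()  # weighted share of criteria where a beats b
    leaving = pref.sum(axis=1) / (n - 1)     # how strongly a dominates others
    entering = pref.sum(axis=0) / (n - 1)    # how strongly a is dominated
    return leaving - entering                # net flow

# Three hypothetical filter-parameter candidates scored on three criteria.
scores = promethee2([[0.8, 0.6, 0.7],
                     [0.7, 0.9, 0.6],
                     [0.9, 0.5, 0.8]],
                    weights=[0.5, 0.3, 0.2])
best = int(np.argmax(scores))   # index of the preferred candidate
```

A full PROMETHEE study would also choose per-criterion preference functions (linear, Gaussian, etc.) and indifference/preference thresholds; the usual criterion above is the simplest member of that family.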

  4. Multichannel-Hadamard calibration of high-order adaptive optics systems.

    PubMed

    Guo, Youming; Rao, Changhui; Bao, Hua; Zhang, Ang; Zhang, Xuejun; Wei, Kai

    2014-06-02

    We present a novel technique for calibrating the interaction matrix of high-order adaptive optics systems, called the multichannel-Hadamard method. In this method, the deformable-mirror actuators are first divided into a series of channels according to their coupling relationship, and the voltage-oriented Hadamard method is then applied to these channels. Taking the 595-element adaptive optics system as an example, the procedure is described in detail. The optimal channel division is discussed and tested by numerical simulation. The proposed method is also compared experimentally with the voltage-oriented-Hadamard-only method and the multichannel-only method. Results show that the multichannel-Hadamard method produces a significant improvement in interaction matrix measurement.
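
    The noise advantage of Hadamard actuation over single-actuator pokes can be sketched as follows (a simulation under simplified assumptions; the paper's channel-dividing step and real wavefront-sensor model are omitted):

```python
import numpy as np

def sylvester_hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

rng = np.random.default_rng(1)
n_act, n_meas, sigma = 16, 64, 0.1
D = rng.normal(size=(n_meas, n_act))          # "true" interaction matrix

# Poke calibration: one actuator at a time; each column is one noisy measurement.
D_poke = D + sigma * rng.normal(size=D.shape)

# Hadamard calibration: drive all actuators at once with the columns of H,
# then invert; measurement noise is averaged down by roughly sqrt(n_act).
H = sylvester_hadamard(n_act)
S = D @ H + sigma * rng.normal(size=(n_meas, n_act))
D_had = S @ H.T / n_act                        # uses H @ H.T = n_act * I

err_poke = np.linalg.norm(D_poke - D)
err_had = np.linalg.norm(D_had - D)
```

Because every Hadamard pattern exercises all actuators at unit amplitude, each entry of the recovered matrix is an average over n_act measurements, which is the statistical benefit the multichannel-Hadamard method builds on.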

  5. Laser Pencil Beam Based Techniques for Visualization and Analysis of Interfaces Between Media

    NASA Technical Reports Server (NTRS)

    Adamovsky, Grigory; Giles, Sammie, Jr.

    1998-01-01

    Traditional optical methods, including interferometry, Schlieren, and shadowgraphy, have been used successfully for the visualization and evaluation of various media, with aerodynamics and hydrodynamics as major fields of application. However, these methods have major drawbacks, such as relatively low power density and suppression of second-order phenomena. A novel method introduced at NASA Lewis Research Center minimizes the disadvantages of the 'classical' methods. The method involves a narrow pencil-like beam that penetrates a medium of interest. The paper describes the laser pencil beam flow visualization methods in detail and presents various system configurations. It also discusses interfaces between media in general terms and provides examples of interfaces.

  6. Goal-based angular adaptivity applied to a wavelet-based discretisation of the neutral particle transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goffin, Mark A., E-mail: mark.a.goffin@gmail.com; Buchan, Andrew G.; Dargaville, Steven

    2015-01-15

    A method for applying goal-based adaptive methods to the angular resolution of the neutral particle transport equation is presented. The methods are applied to an octahedral wavelet discretisation of the spherical angular domain which allows for anisotropic resolution. The angular resolution is adapted across both the spatial and energy dimensions. The spatial domain is discretised using an inner-element sub-grid scale finite element method. The goal-based adaptive methods optimise the angular discretisation to minimise the error in a specific functional of the solution. The goal-based error estimators require the solution of an adjoint system to determine the importance to the specified functional. The error estimators and the novel methods to calculate them are described. Several examples are presented to demonstrate the effectiveness of the methods. It is shown that the methods can significantly reduce the number of unknowns and computational time required to obtain a given error. The novelty of the work is the use of goal-based adaptive methods to obtain anisotropic resolution in the angular domain for solving the transport equation. Highlights: •Wavelet angular discretisation used to solve transport equation. •Adaptive method developed for the wavelet discretisation. •Anisotropic angular resolution demonstrated through the adaptive method. •Adaptive method provides improvements in computational efficiency.

  7. Electromagnetic Inverse Methods and Applications for Inhomogeneous Media Probing and Synthesis.

    NASA Astrophysics Data System (ADS)

    Xia, Jake Jiqing

    The electromagnetic inverse scattering problems concerned in this thesis are to find unknown inhomogeneous permittivity and conductivity profiles in a medium from the scattering data. Both analytical and numerical methods are studied in the thesis. The inverse methods can be applied to geophysical medium probing, non-destructive testing, medical imaging, optical waveguide synthesis and material characterization. An introduction is given in Chapter 1. The first part of the thesis presents inhomogeneous media probing. The Riccati equation approach is discussed in Chapter 2 for a one-dimensional planar profile inversion problem. Two types of the Riccati equations are derived and distinguished. New renormalized formulae based on inverting one specific type of the Riccati equation are derived. Relations between the inverse methods of Green's function, the Riccati equation and the Gel'fand-Levitan-Marchenko (GLM) theory are studied. In Chapter 3, the renormalized source-type integral equation (STIE) approach is formulated for inversion of cylindrically inhomogeneous permittivity and conductivity profiles. The advantages of the renormalized STIE approach are demonstrated in numerical examples. The cylindrical profile inversion problem has an application to borehole inversion. In Chapter 4 the renormalized STIE approach is extended to a planar case where the two background media are different. Numerical results have shown fast convergence. This formulation is applied to inversion of underground soil moisture profiles in remote sensing. The second part of the thesis presents the synthesis problem of inhomogeneous dielectric waveguides using the electromagnetic inverse methods. As a particular example, the rational function representation of reflection coefficients in the GLM theory is used. The GLM method is reviewed in Chapter 5. Relations between modal structures and transverse reflection coefficients of an inhomogeneous medium are established in Chapter 6. A stratified medium model is used to derive the guidance condition and the reflection coefficient. Results obtained in Chapter 6 provide the physical foundation for applying the inverse methods to the waveguide design problem. In Chapter 7, a global guidance condition for a continuously varying medium is derived using the Riccati equation. It is further shown that the discrete modes in an inhomogeneous medium have the same wave vectors as the poles of the transverse reflection coefficient. An example of synthesizing an inhomogeneous dielectric waveguide using a rational reflection coefficient is presented. A summary of the thesis is given in Chapter 8.

  8. Dynamic target ionization using an ultrashort pulse of a laser field

    NASA Astrophysics Data System (ADS)

    Makarov, D. N.; Matveev, V. I.; Makarova, K. A.

    2014-09-01

    Ionization processes under the interaction of an ultrashort pulse of an electromagnetic field with atoms in nonstationary states are considered. As an example, the ionization probability of the hydrogen-like atom upon the decay of quasi-stationary state is calculated. The method developed can be applied to complex systems, including targets in collisional states and various chemical reactions.

  9. Electrical Connector for Graphite Heating Elements

    NASA Technical Reports Server (NTRS)

    Mackintosh, B. H.

    1982-01-01

    The connection method applies force to two interfaces: that between the heating element proper and the heating-element support members, and that between the support members and the metal conductor. The inner rod of the new connector system is maintained in tension by a spring (for example, Belleville washers). The connection is sufficiently compliant that tension remains within the desired range, regardless of thermal expansion and contraction of the various elements.

  10. Material Development to Raise Awareness of Using Smart Boards: An Example Design and Development Research

    ERIC Educational Resources Information Center

    Günaydin, Serpil; Karamete, Aysen

    2016-01-01

    This study aims to develop training material that will help raise awareness in prospective teachers regarding the benefits of using smart boards in the classroom. In this study, a Type 2 design and development research method (DDR) was used. The material was developed by applying phases of ADDIE--an instructional systems design model. The…

  11. Examples as Method? My Attempts to Understand Assessment and Fairness (in the Spirit of the Later Wittgenstein)

    ERIC Educational Resources Information Center

    Davis, Andrew

    2009-01-01

    What is "fairness" in the context of educational assessment? I apply this question to a number of contemporary educational assessment practices and policies. My approach to philosophy of education owes much to Wittgenstein. A commentary set apart from the main body of the paper focuses on my style of philosophising. Wittgenstein teaches us to…

  12. Mobile Microblogging: Using Twitter and Mobile Devices in an Online Course to Promote Learning in Authentic Contexts

    ERIC Educational Resources Information Center

    Hsu, Yu-Chang; Ching, Yu-Hui

    2012-01-01

    This research applied a mixed-method design to explore how best to promote learning in authentic contexts in an online graduate course in instructional message design. The students used Twitter apps on their mobile devices to collect, share, and comment on authentic design examples found in their daily lives. The data sources included tweets…

  13. An Example of Prepared-Planned Creative Drama in Second Grade Mathematics Education

    ERIC Educational Resources Information Center

    Özsoy, Nesrin; Özyer, Sinan; Akdeniz, Nesibe; Alkoç, Aysenur

    2017-01-01

    The aim of this research is teaching addition with natural numbers and the concept of large and small natural numbers in the second grade mathematics course, through creative drama method. The study has been applied to 31 elementary school second grade students studying at a public school in the province of Aydin. In this research, case study…

  14. Measurement of the Space Thermoacoustic Refrigerator Performance

    DTIC Science & Technology

    1990-09-01

    …the refrigerator was a requisite towards simplifying the process of selecting the operating frequency. The simplest method allowing for the most… …qualitative manner as did Rayleigh. The first example of an acoustic heat pump was the pulse-tube refrigerator, in which Gifford and Longsworth, by applying…

  15. Comparing The Effectiveness of a90/95 Calculations (Preprint)

    DTIC Science & Technology

    2006-09-01

    …Nachtsheim, John Neter, William Li, Applied Linear Statistical Models, 5th ed., McGraw-Hill/Irwin, 2005. 5. Mood, Graybill and Boes, Introduction… …curves is based on methods that are only valid for ordinary linear regression. Requirements for a valid Ordinary Least-Squares Regression Model: There… …linear. For example, … is a linear model; … is not. 2. Uniform variance (homoscedasticity…

  16. Economic analysis of the gypsy moth problem in the northeast: I. applied to commercial forest stands

    Treesearch

    Roger E. McCay; William B. White

    1973-01-01

    A method of calculating immediate and future losses caused by the gypsy moth is presented, using examples of pulpwood and sawtimber stands. Discounting of future losses to evaluate their cost in terms of current expenditure is explained. The effect of infestation on forest management is discussed and a format is given for considering control decisions.

  17. Validation of Satellite-Based Objective Overshooting Cloud-Top Detection Methods Using CloudSat Cloud Profiling Radar Observations

    NASA Technical Reports Server (NTRS)

    Bedka, Kristopher M.; Dworak, Richard; Brunner, Jason; Feltz, Wayne

    2012-01-01

    Two satellite infrared-based overshooting convective cloud-top (OT) detection methods have recently been described in the literature: 1) the 11-μm infrared window channel texture (IRW-texture) method, which uses IRW channel brightness temperature (BT) spatial gradients and thresholds, and 2) the water vapor minus IRW BT difference (WV-IRW BTD). While both methods show good performance in published case study examples, it is important to quantitatively validate these methods relative to overshooting top events across the globe. Unfortunately, no overshooting top database currently exists that could be used in such a study. This study examines National Aeronautics and Space Administration CloudSat Cloud Profiling Radar data to develop an OT detection validation database that is used to evaluate the IRW-texture and WV-IRW BTD OT detection methods. CloudSat data were manually examined over a 1.5-yr period to identify cases in which the cloud top penetrates above the tropopause height defined by a numerical weather prediction model and the surrounding cirrus anvil cloud top, producing 111 confirmed overshooting top events. When applied to Moderate Resolution Imaging Spectroradiometer (MODIS)-based Geostationary Operational Environmental Satellite-R Series (GOES-R) Advanced Baseline Imager proxy data, the IRW-texture (WV-IRW BTD) method offered a 76% (96%) probability of OT detection (POD) and a 16% (81%) false-alarm ratio. Case study examples show that WV-IRW BTD > 0 K identifies much of the deep convective cloud top, while the IRW-texture method focuses only on regions with a spatial scale near that of commonly observed OTs. The POD decreases by 20% when IRW-texture is applied to current geostationary imager data, highlighting the importance of imager spatial resolution for observing and detecting OT regions.
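
    The IRW-texture idea, a locally cold pixel embedded in a warmer anvil found via BT spatial gradients, can be caricatured in a few lines (the thresholds and the synthetic scene below are purely illustrative, not the published algorithm):

```python
import numpy as np

def detect_overshooting_tops(bt, trop_temp=215.0, anvil_delta=6.0, radius=2):
    """Toy IRW-texture-style detector on a brightness-temperature grid (K).

    Flags pixels that are (1) colder than an approximate tropopause
    temperature and (2) colder than the mean of the surrounding pixels by
    at least anvil_delta. All thresholds here are illustrative only."""
    ny, nx = bt.shape
    hits = np.zeros(bt.shape, dtype=bool)
    for i in range(radius, ny - radius):
        for j in range(radius, nx - radius):
            if bt[i, j] >= trop_temp:
                continue
            neigh = bt[i - radius:i + radius + 1, j - radius:j + radius + 1].copy()
            neigh[radius, radius] = np.nan     # exclude the candidate pixel
            if bt[i, j] <= np.nanmean(neigh) - anvil_delta:
                hits[i, j] = True
    return hits

# Synthetic scene: a 220 K cirrus anvil with one 205 K overshooting core.
bt = np.full((16, 16), 220.0)
bt[8, 8] = 205.0
ots = detect_overshooting_tops(bt)
```

The spatial-gradient criterion is what restricts detections to OT-scale features; a plain BTD-style threshold on the cold pixel alone would also flag broad regions of the anvil.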

  18. The Contribution of Particle Swarm Optimization to Three-Dimensional Slope Stability Analysis

    PubMed Central

    A Rashid, Ahmad Safuan; Ali, Nazri

    2014-01-01

    Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the parameters of PSO. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXIS 3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes. PMID:24991652

  19. The contribution of particle swarm optimization to three-dimensional slope stability analysis.

    PubMed

    Kalatehjari, Roohollah; Rashid, Ahmad Safuan A; Ali, Nazri; Hajihassani, Mohsen

    2014-01-01

    Over the last few years, particle swarm optimization (PSO) has been extensively applied to various geotechnical engineering problems, including slope stability analysis. However, this contribution was limited to two-dimensional (2D) slope stability analysis. This paper applied PSO to the three-dimensional (3D) slope stability problem to determine the critical slip surface (CSS) of soil slopes. A detailed description of the adopted PSO was presented to provide a good basis for further contributions of this technique to the field of 3D slope stability problems. A general rotating ellipsoid shape was introduced as the specific particle for 3D slope stability analysis. A detailed sensitivity analysis was designed and performed to find the optimum values of the parameters of PSO. Example problems were used to evaluate the applicability of PSO in determining the CSS of 3D slopes. The first example presented a comparison between the results of PSO and the PLAXIS 3D finite element software, and the second example compared the ability of PSO to determine the CSS of 3D slopes with other optimization methods from the literature. The results demonstrated the efficiency and effectiveness of PSO in determining the CSS of 3D soil slopes.

  20. Bioinformatics by Example: From Sequence to Target

    NASA Astrophysics Data System (ADS)

    Kossida, Sophia; Tahri, Nadia; Daizadeh, Iraj

    2002-12-01

    With the completion of the human genome, and the imminent completion of other large-scale sequencing and structure-determination projects, computer-assisted bioscience is poised to become the new paradigm for conducting basic and applied research. The presence of these additional bioinformatics tools stirs great anxiety among experimental researchers (as well as pedagogues), who are now faced with a wider and deeper body of knowledge spanning differing disciplines (biology, chemistry, physics, mathematics, and computer science). This review targets those individuals who are interested in using computational methods in their teaching or research. By analyzing a real-life, pharmaceutical, multicomponent, target-based example, the reader will experience this fascinating new discipline.

  1. Nonpolarizing beam splitter designed by frustrated total internal reflection inside a glass cube.

    PubMed

    Xu, Xueke; Shao, Jianda; Fan, Zhengxiu

    2006-06-20

    A method for the design of an all-dielectric nonpolarizing prism beam splitter utilizing the principle of frustrated total internal reflection is reported. The nonpolarizing condition for a prism beam splitter is discussed, and some single-layer design examples are elaborated. The concept can be applied to a wide range of wavelengths and arbitrary transmittance values; with the help of a computer design program, examples for 400-700 nm, T(p)=T(s)=0.5+/-0.01, with incident angles of 45 degrees and 62 degrees are given. In addition, the sensitivity and application of the design are also discussed.

  2. Patient Dose In Diagnostic Radiology: When & How?

    NASA Astrophysics Data System (ADS)

    Lassen, Margit; Gorson, Robert O.

    1980-08-01

    Different situations are discussed in which it is of value to know radiation dose to the patient in diagnostic radiology. Radiation dose to specific organs is determined using the Handbook on Organ Doses published by the Bureau of Radiological Health of the Food and Drug Administration; the method is applied to a specific case. In this example dose to an embryo is calculated in examinations involving both fluoroscopy and radiography. In another example dose is determined to a fetus in late pregnancy using tissue air ratios. Patient inquiries about radiation dose are discussed, and some answers are suggested. The reliability of dose calculations is examined.

  3. Tolerance allocation for an electronic system using neural network/Monte Carlo approach

    NASA Astrophysics Data System (ADS)

    Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque

    2001-12-01

    The intense global competition to produce quality products at low cost has led many industrial nations to consider tolerances as a key factor in reducing cost while remaining competitive. At present, tolerance allocation is still applied mostly to mechanical systems. To study tolerances in the electronic domain, the Monte Carlo method is typically used, but it is time-consuming. This paper reviews several methods (worst case, statistical method, least-cost allocation by optimization methods) that can be used to treat the tolerancing problem for an electronic system and explains their advantages and limitations. It then proposes an efficient method based on neural networks, with the Monte Carlo method providing the basis data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, minimizing the total cost of the system by an optimization method. The proposed approach has been applied to a small-signal amplifier circuit as an example and can easily be extended to a complex system of n components.
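
    The Monte Carlo baseline the authors start from can be sketched for a toy circuit (a hypothetical resistive divider, not the paper's small-signal amplifier): sample component values within their tolerance bands and estimate the fraction of circuits meeting the performance specification.

```python
import numpy as np

def monte_carlo_yield(n_trials=100_000, tol=0.05, seed=0):
    """Monte Carlo tolerance analysis of a hypothetical resistive divider.

    R1 and R2 vary uniformly within +/-tol of nominal; the yield is the
    fraction of trials whose gain Vout/Vin = R2/(R1+R2) stays within
    +/-2% of the nominal gain (an assumed specification)."""
    rng = np.random.default_rng(seed)
    R1_nom, R2_nom = 10e3, 10e3
    R1 = R1_nom * (1 + rng.uniform(-tol, tol, n_trials))
    R2 = R2_nom * (1 + rng.uniform(-tol, tol, n_trials))
    gain = R2 / (R1 + R2)
    gain_nom = R2_nom / (R1_nom + R2_nom)          # 0.5 for equal resistors
    ok = np.abs(gain - gain_nom) <= 0.02 * gain_nom
    return float(ok.mean())

yield_5pct = monte_carlo_yield(tol=0.05)   # loose, cheap parts
yield_1pct = monte_carlo_yield(tol=0.01)   # tight, expensive parts
```

Repeating such simulations for every candidate tolerance assignment is what makes pure Monte Carlo allocation slow; the paper's neural network is trained on these samples so the yield-cost trade-off can be explored without rerunning the full simulation each time.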

  4. General Method for Constructing Local Hidden Variable Models for Entangled Quantum States

    NASA Astrophysics Data System (ADS)

    Cavalcanti, D.; Guerini, L.; Rabelo, R.; Skrzypczyk, P.

    2016-11-01

    Entanglement allows for the nonlocality of quantum theory, which is the resource behind device-independent quantum information protocols. However, not all entangled quantum states display nonlocality. A central question is to determine the precise relation between entanglement and nonlocality. Here we present the first general test to decide whether a quantum state is local, and show that the test can be implemented by semidefinite programming. The method can be applied to any given state and used to construct new examples of states with local hidden variable models for both projective and general measurements. As applications, we provide a lower-bound estimate of the fraction of two-qubit local entangled states and present new explicit examples of such states, including those that arise from physical noise models, Bell-diagonal states, and noisy Greenberger-Horne-Zeilinger and W states.

  5. Multiple Frequency Contrast Source Inversion Method for Vertical Electromagnetic Profiling: 2D Simulation Results and Analyses

    NASA Astrophysics Data System (ADS)

    Li, Jinghe; Song, Linping; Liu, Qing Huo

    2016-02-01

    A simultaneous multiple frequency contrast source inversion (CSI) method is applied to reconstructing hydrocarbon reservoir targets in a complex multilayered medium in two dimensions. It simulates the effects of a salt dome sedimentary formation in the context of reservoir monitoring. In this method, the stabilized biconjugate-gradient fast Fourier transform (BCGS-FFT) algorithm is applied as a fast solver for the 2D volume integral equation in the forward computation. The CSI inversion technique combines the efficient FFT algorithm, which speeds up the matrix-vector multiplication, with the stable convergence of the simultaneous multiple frequency CSI iteration. As a result, the method is capable of effectively reconstructing quantitative conductivity images for large-scale electromagnetic oil exploration problems, including the vertical electromagnetic profiling (VEP) survey investigated here. A number of numerical examples are presented to validate the effectiveness and capacity of the simultaneous multiple frequency CSI method for a limited array view in VEP.

  6. Applying Standard Interfaces to a Process-Control Language

    NASA Technical Reports Server (NTRS)

    Berthold, Richard T.

    2005-01-01

    A method of applying open-operating-system standard interfaces to the NASA User Interface Language (UIL) has been devised. UIL is a computing language that can be used in monitoring and controlling automated processes: for example, the Timeliner computer program, written in UIL, is a general-purpose software system for monitoring and controlling sequences of automated tasks in a target system. In providing the major elements of connectivity between UIL and the target system, the present method offers advantages over the prior method. Most notably, unlike in the prior method, the software description of the target system can be made independent of the applicable compiler software and need not be linked to the applicable executable compiler image. Also unlike in the prior method, it is not necessary to recompile the source code and relink the source code to a new executable compiler image. Abstraction of the description of the target system to a data file can be defined easily, with intuitive syntax, and knowledge of the source-code language is not needed for the definition.

  7. Learn from every mistake! Hierarchical information combination in astronomy

    NASA Astrophysics Data System (ADS)

    Süveges, Maria; Fotopoulou, Sotiria; Coupon, Jean; Paltani, Stéphane; Eyer, Laurent; Rimoldini, Lorenzo

    2017-06-01

    Throughout the processing and analysis of survey data, a ubiquitous issue nowadays is that we are spoilt for choice when we need to select a methodology for some of its steps. The alternative methods usually fail and excel in different data regions, and have various advantages and drawbacks, so a combination that unites the strengths of all while suppressing the weaknesses is desirable. We propose to use a two-level hierarchy of learners. Its first level consists of training and applying the possible base methods on the first part of a known set. At the second level, we feed the output probability distributions from all base methods to a second learner trained on the remaining known objects. Using classification of variable stars and photometric redshift estimation as examples, we show that the hierarchical combination is capable of achieving general improvement over averaging-type combination methods, correcting systematics present in all base methods, is easy to train and apply, and thus, it is a promising tool in the astronomical ``Big Data'' era.
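
    The two-level hierarchy can be illustrated with a toy sketch. The data, the deliberately weak single-feature base methods, and the logistic combiner below are all illustrative assumptions, not the authors' classifiers:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two features, label 1 when their sum is positive.
X = rng.normal(size=(600, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Second half of the known set, reserved for training the combiner
# (here the base methods are fixed rules, so the first half goes unused).
X2, y2 = X[300:], y[300:]

def base_prob(x, col):
    """A deliberately weak base method: a sigmoid of one feature only."""
    return 1.0 / (1.0 + np.exp(-3.0 * x[:, col]))

# Level 1: each base method emits a class probability on the held-out half.
P2 = np.column_stack([base_prob(X2, 0), base_prob(X2, 1)])

# Level 2: a logistic-regression combiner trained on the base outputs.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(P2 @ w + b)))
    w -= 0.5 * P2.T @ (p - y2) / len(y2)
    b -= 0.5 * np.mean(p - y2)

acc_combined = np.mean(((P2 @ w + b) > 0) == y2)
acc_base0 = np.mean((P2[:, 0] > 0.5) == y2)
acc_base1 = np.mean((P2[:, 1] > 0.5) == y2)
```

    Each base method sees only one feature and tops out near 75% accuracy; the second-level learner, fed both probability outputs, recovers most of what either base method alone misses.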

  8. Epistemic uncertainty propagation in energy flows between structural vibrating systems

    NASA Astrophysics Data System (ADS)

    Xu, Menghui; Du, Xiaoping; Qiu, Zhiping; Wang, Chong

    2016-03-01

    A dimension-wise method for predicting fuzzy energy flows between structural vibrating systems coupled by joints with epistemic uncertainties is established. Based on a Legendre polynomial approximation at α=0, both the minimum and maximum point vectors of the energy flow of interest are calculated dimension by dimension within the space spanned by the interval parameters determined by the fuzzy parameters at α=0, and the resulting interval bounds are used to assemble the fuzzy energy flows of interest. Besides the proposed method, the vertex method and two current methods are also applied. Results from the different methods are compared using two numerical examples, and the accuracy of all methods is verified against Monte Carlo simulation.
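
    The vertex method used in the comparison can be sketched for a generic response; the response function and the roughly 10% intervals below are illustrative assumptions, not the paper's energy-flow model:

```python
import itertools

def vertex_method(f, intervals):
    """Vertex method: evaluate f at every corner of the interval box and take
    the extremes.  Exact when f is monotonic in each uncertain parameter."""
    values = [f(c) for c in itertools.product(*intervals)]
    return min(values), max(values)

# Hypothetical response, monotonic in both interval parameters, with roughly
# 10% epistemic intervals around the nominal values (1.0, 2.0).
f = lambda p: p[0] ** 2 / (p[0] + p[1])
lo, hi = vertex_method(f, [(0.9, 1.1), (1.8, 2.2)])
```

    The cost grows as 2^n in the number of interval parameters, which is one reason dimension-wise schemes are attractive for higher-dimensional problems.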

  9. Two-Level Hierarchical FEM Method for Modeling Passive Microwave Devices

    NASA Astrophysics Data System (ADS)

    Polstyanko, Sergey V.; Lee, Jin-Fa

    1998-03-01

    In recent years multigrid methods have been proven to be very efficient for solving large systems of linear equations resulting from the discretization of positive definite differential equations by either the finite difference method or the h-version of the finite element method. In this paper an iterative method of the multiple level type is proposed for solving systems of algebraic equations which arise from the p-version of the finite element analysis applied to indefinite problems. A two-level V-cycle algorithm has been implemented and studied with a Gauss-Seidel iterative scheme used as a smoother. The convergence of the method has been investigated, and numerical results for a number of numerical examples are presented.

  10. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
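
    The final linear step, solving for the curve coefficients once the knots and the data parameterization are fixed, can be sketched with an SVD-based least-squares solve. The truncated-power basis, knot locations, and test curve below are illustrative assumptions, not the paper's B-spline setup:

```python
import numpy as np

# Noisy samples of a curve to be fitted.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)                  # data parameterization (fixed here)
y = np.sin(2 * np.pi * t) + 0.01 * rng.normal(size=t.size)

# With knots and parameterization fixed, the fit is linear in the
# coefficients.  A cubic truncated-power spline basis with two illustrative
# interior knots stands in for the B-spline collocation matrix.
knots = [1 / 3, 2 / 3]
B = np.column_stack([t**k for k in range(4)] +
                    [np.maximum(t - k0, 0.0)**3 for k0 in knots])

# Least-squares solve via the SVD-based pseudoinverse.
coeffs = np.linalg.pinv(B) @ y
residual = np.linalg.norm(B @ coeffs - y) / np.linalg.norm(y)
```

    In the paper's scheme, the metaheuristic search supplies the parameterization and refined knots; every candidate is then scored by exactly this kind of linear solve.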

  11. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  12. Simultaneous Gaussian and exponential inversion for improved analysis of shales by NMR relaxometry

    USGS Publications Warehouse

    Washburn, Kathryn E.; Anderssen, Endre; Vogt, Sarah J.; Seymour, Joseph D.; Birdwell, Justin E.; Kirkland, Catherine M.; Codd, Sarah L.

    2014-01-01

    Nuclear magnetic resonance (NMR) relaxometry is commonly used to provide lithology-independent porosity and pore-size estimates for petroleum resource evaluation based on fluid-phase signals. However in shales, substantial hydrogen content is associated with solid and fluid signals and both may be detected. Depending on the motional regime, the signal from the solids may be best described using either exponential or Gaussian decay functions. When the inverse Laplace transform, the standard method for analysis of NMR relaxometry results, is applied to data containing Gaussian decays, this can lead to physically unrealistic responses such as signal or porosity overcall and relaxation times that are too short to be determined using the applied instrument settings. We apply a new simultaneous Gaussian-Exponential (SGE) inversion method to simulated data and measured results obtained on a variety of oil shale samples. The SGE inversion produces more physically realistic results than the inverse Laplace transform and displays more consistent relaxation behavior at high magnetic field strengths. Residuals for the SGE inversion are consistently lower than for the inverse Laplace method and signal overcall at short T2 times is mitigated. Beyond geological samples, the method can also be applied in other fields where the sample relaxation consists of both Gaussian and exponential decays, for example in material, medical and food sciences.
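
    The core idea, a dictionary containing both exponential and Gaussian decay columns solved as one linear inversion, can be sketched as follows. The discretized relaxation times and the noise-free synthetic decay are illustrative; the SGE method itself involves regularization details not reproduced here:

```python
import numpy as np

# Synthetic decay: one exponential (fluid) plus one Gaussian (solid) component.
t = np.linspace(0.01, 10.0, 200)
signal = 0.7 * np.exp(-t / 2.0) + 0.3 * np.exp(-(t / 0.5) ** 2)

# Discretize candidate relaxation times for both decay shapes and solve
# the (here unregularized) linear inversion for the amplitudes.
T_exp = np.array([0.5, 1.0, 2.0, 4.0])
T_gauss = np.array([0.25, 0.5, 1.0])
K = np.column_stack([np.exp(-t[:, None] / T_exp),
                     np.exp(-(t[:, None] / T_gauss) ** 2)])
amps, *_ = np.linalg.lstsq(K, signal, rcond=None)
misfit = np.linalg.norm(K @ amps - signal)
```

    Because the true components lie in the dictionary (exponential T2 = 2.0, Gaussian T2 = 0.5), the inversion recovers the 0.7/0.3 amplitude split; a fluid-only exponential dictionary would instead misattribute the Gaussian signal.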

  13. Analysis of pilot control strategy

    NASA Technical Reports Server (NTRS)

    Heffley, R. K.; Hanson, G. D.; Jewell, W. F.; Clement, W. F.

    1983-01-01

    Methods for nonintrusive identification of pilot control strategy and task execution dynamics are presented along with examples based on flight data. The specific analysis technique is the Nonintrusive Parameter Identification Procedure (NIPIP), which is described in a companion user's guide (NASA CR-170398). Quantification of pilot control strategy and task execution dynamics is discussed in general terms, followed by a more detailed description of how NIPIP can be applied. The examples are based on flight data obtained from the NASA F-8 digital fly-by-wire airplane. These examples involve various piloting tasks and control axes as well as a demonstration of how the dynamics of the aircraft itself are identified using NIPIP. Application of NIPIP to the AFTI/F-16 flight test program is discussed. Recommendations are made for flight test applications in general and refinement of NIPIP to include interactive computer graphics.

  14. Ensemble Grouping Strategies for Embedded Stochastic Collocation Methods Applied to Anisotropic Diffusion Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Elia, M.; Edwards, H. C.; Hu, J.

    Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162--C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen--Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.

  15. Ensemble Grouping Strategies for Embedded Stochastic Collocation Methods Applied to Anisotropic Diffusion Problems

    DOE PAGES

    D'Elia, M.; Edwards, H. C.; Hu, J.; ...

    2018-01-18

    Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162--C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble when applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen--Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.
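
    The grouping heuristic implied by that result, sorting samples by the anisotropy surrogate and forming ensembles of neighbors in that ordering so each ensemble mixes samples of similar solver cost, can be sketched as follows (the anisotropy values and ensemble size are illustrative assumptions):

```python
import numpy as np

# Hypothetical per-sample anisotropy measures for 12 stochastic samples.
rng = np.random.default_rng(7)
anisotropy = rng.uniform(0.0, 5.0, size=12)

def group_by_surrogate(measure, ensemble_size):
    """Sort samples by the surrogate (anisotropy ~ expected solver iterations)
    and cut the sorted list into equal-size ensembles, so each ensemble
    groups samples of comparable cost."""
    order = np.argsort(measure)
    return [order[i:i + ensemble_size]
            for i in range(0, len(order), ensemble_size)]

ensembles = group_by_surrogate(anisotropy, ensemble_size=4)
# Within-ensemble spread never exceeds the spread of the whole sample set.
spreads = [np.ptp(anisotropy[e]) for e in ensembles]
```
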

  16. Experimental analysis and modeling of melt growth processes

    NASA Astrophysics Data System (ADS)

    Müller, Georg

    2002-04-01

    Melt growth processes provide the basic crystalline materials for many applications. The research and development of crystal growth processes is therefore driven by the demands which arise from these specific applications; however, common goals include an increased uniformity of the relevant crystal properties at the micro- and macro-scale, a decrease of deleterious crystal defects, and an increase of crystal dimensions. As melt growth equipment and experimentation becomes more and more expensive, little room remains for improvements by trial and error procedures. A more successful strategy is to optimize the crystal growth process by a combined use of experimental process analysis and computer modeling. This will be demonstrated in this paper by several examples from the bulk growth of silicon, gallium arsenide, indium phosphide, and calcium fluoride. These examples also involve the most important melt growth techniques, crystal pulling (Czochralski methods) and vertical gradient freeze (Bridgman-type methods). The power and success of the above optimization strategy, however, is not limited only to the given examples but can be generalized and applied to many types of bulk crystal growth.

  17. Analytical electron microscopy as a powerful tool in plant cell biology: examples using electron energy loss spectroscopy and X-ray microanalysis.

    PubMed

    Lichtenberger, O; Neumann, D

    1997-08-01

    Energy filtering transmission electron microscopy in combination with energy dispersive X-ray analysis (EDX) and quantum-chemical calculations opens new possibilities for elemental and bond analysis at the ultrastructural level. The possibilities and limitations of these methods, applied to botanical samples, are discussed and some examples are given. Ca-oxalate crystals in plant cell vacuoles show a specific C K-edge in the electron energy loss spectrum (EELS), which allows a more reliable identification than light microscopical or cytochemical methods. In some dicots crystalline inclusions can be observed in different cell compartments, which are identified as silicon dioxide or calcium silicate by the fine structure of the Si L2,3-edge. Their formation is discussed on the basis of EEL spectra and quantum-chemical calculations. Examples concerning heavy metal detoxification are given for some tolerant plants. In Minuartia, Zn is bound as Zn-silicate in cell walls; Armeria accumulates Cu in leaf idioblasts by chelation with phenolic compounds; and in tomato, Cd is precipitated as CdS/phytochelatin complexes.

  18. Lagrangian simulation of mixing and reactions in complex geochemical systems

    NASA Astrophysics Data System (ADS)

    Engdahl, Nicholas B.; Benson, David A.; Bolster, Diogo

    2017-04-01

    Simulations of detailed geochemical systems have traditionally been restricted to Eulerian reactive transport algorithms. This note introduces a Lagrangian method for modeling multicomponent reaction systems. The approach uses standard random walk-based methods for the particle motion steps but allows the particles to interact with each other by exchanging mass of their various chemical species. The colocation density of each particle pair is used to calculate the mass transfer rate, which creates a local disequilibrium that is then relaxed back toward equilibrium using the reaction engine PhreeqcRM. The mass exchange is the only step where the particles interact and the remaining transport and reaction steps are entirely independent for each particle. Several validation examples are presented, which reproduce well-known analytical solutions. These are followed by two demonstration examples of a competitive decay chain and an acid-mine drainage system. The source code, entitled Complex Reaction on Particles (CRP), and files needed to run these examples are hosted openly on GitHub (https://github.com/nbengdahl/CRP), so as to enable interested readers to readily apply this approach with minimal modifications.
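
    The particle interaction step can be sketched in one dimension. The co-location kernel normalization and parameter values below are illustrative assumptions, and the chemistry that CRP delegates to PhreeqcRM is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

def mass_transfer(pos, mass, dt, D):
    """Pairwise mass exchange weighted by the particle-pair co-location
    density; the antisymmetric form conserves total mass exactly."""
    n = len(pos)
    dx = pos[:, None] - pos[None, :]
    # Co-location probability density of each particle pair after time dt.
    rho = np.exp(-dx**2 / (8 * D * dt)) / np.sqrt(8 * np.pi * D * dt)
    w = rho / n                                  # crude normalization (sketch)
    flux = w * (mass[None, :] - mass[:, None])   # antisymmetric exchange
    return mass + 0.5 * flux.sum(axis=1)

# Step initial condition: the left half of the particles carry all the mass.
pos = rng.uniform(0.0, 1.0, 50)
mass = np.where(pos < 0.5, 1.0, 0.0)
total0, spread0 = mass.sum(), mass.std()

D, dt = 0.01, 0.1
for _ in range(20):
    pos += np.sqrt(2 * D * dt) * rng.normal(size=pos.size)  # random walk step
    mass = mass_transfer(pos, mass, dt, D)                  # mixing step
```

    Transport and mixing are the only steps where particles interact; the sharp step in mass smooths out diffusively while the total mass stays fixed.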

  19. Optimal atlas construction through hierarchical image registration

    NASA Astrophysics Data System (ADS)

    Grevera, George J.; Udupa, Jayaram K.; Odhner, Dewey; Torigian, Drew A.

    2016-03-01

    Atlases (digital or otherwise) are common in medicine. However, there is no standard framework for creating them from medical images. One traditional approach is to pick a representative subject and then proceed to label structures/regions of interest in this image. Another is to create a "mean" or average subject. Atlases may also contain more than a single representative (e.g., the Visible Human contains both a male and a female data set). Other criteria besides gender may be used as well, and the atlas may contain many examples for a given criterion. In this work, we propose that atlases be created in an optimal manner using a well-established graph-theoretic approach based on a minimum spanning tree (or, more generally, a collection of them). The resulting atlases may contain many examples for a given criterion. In fact, our framework allows for the addition of new subjects to the atlas to allow it to evolve over time. Furthermore, one can apply segmentation methods to the graph (e.g., graph-cut, fuzzy connectedness, or cluster analysis) which allow it to be separated into "sub-atlases" as it evolves. We demonstrate our method by applying it to 50 3D CT data sets of the chest region, and by comparing it to a number of traditional methods using measures such as Mean Squared Difference, Mattes Mutual Information, and Correlation for rigid registration. Our results demonstrate that optimal atlases can be constructed in this manner and outperform other methods of construction using freely available software.
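
    The graph-theoretic backbone, a minimum spanning tree over pairwise image dissimilarities, can be sketched with Prim's algorithm. The 4-subject distance matrix is an illustrative stand-in for registration-based measures such as 1 minus correlation:

```python
import numpy as np

def min_spanning_tree(dist):
    """Prim's algorithm on a symmetric distance matrix.
    Returns the MST edges as (i, j) pairs and the total weight."""
    n = dist.shape[0]
    in_tree = {0}
    edges, total = [], 0.0
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree
                    for j in range(n) if j not in in_tree),
                   key=lambda e: dist[e])
        in_tree.add(j)
        edges.append((i, j))
        total += dist[i, j]
    return edges, total

# Hypothetical pairwise dissimilarities between 4 subject images: two
# natural clusters {0, 1} and {2, 3} joined by one cheap cross edge.
D = np.array([[0.0, 0.2, 0.9, 0.8],
              [0.2, 0.0, 0.7, 0.9],
              [0.9, 0.7, 0.0, 0.1],
              [0.8, 0.9, 0.1, 0.0]])
edges, total = min_spanning_tree(D)
```

    Cutting the heaviest MST edges is one simple way to split such a tree into the "sub-atlases" described above.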

  20. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2006-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.
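
    The underlying signal-processing idea, Taubin's lambda|mu filter [Taubin, CG v/29 1995] applied to a grid coordinate treated as a 1D signal, can be sketched as follows. The arclength-like signal, noise level, and filter parameters are illustrative assumptions, not the Volume Grid Manipulator implementation:

```python
import numpy as np

def taubin_smooth(x, lam=0.33, mu=-0.34, passes=100):
    """Taubin's lambda|mu smoother: alternate a shrinking Laplacian step
    (lam > 0) with an inflating one (mu < -lam), damping high-frequency
    wiggles without the net shrinkage of plain Laplacian smoothing."""
    x = x.astype(float).copy()
    for _ in range(passes):
        for step in (lam, mu):
            lap = np.zeros_like(x)
            lap[1:-1] = 0.5 * (x[:-2] + x[2:]) - x[1:-1]  # umbrella operator
            x += step * lap                               # endpoints held fixed
    return x

# A grid-line coordinate with high-frequency stretching noise.
rng = np.random.default_rng(3)
s = np.linspace(0.0, 1.0, 41)
noisy = s + 0.02 * rng.normal(size=s.size)
noisy[0], noisy[-1] = 0.0, 1.0            # keep the boundary points
smooth = taubin_smooth(noisy)

roughness = lambda v: np.sum(np.diff(v, 2) ** 2)  # second-difference energy
```
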

  1. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2004-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reduction in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.

  2. Utility of Computational Methods to Identify the Apoptosis Machinery in Unicellular Eukaryotes

    PubMed Central

    Durand, Pierre Marcel; Coetzer, Theresa Louise

    2008-01-01

    Apoptosis is the phenotypic result of an active, regulated process of self-destruction. Following various cellular insults, apoptosis has been demonstrated in numerous unicellular eukaryotes, but very little is known about the genes and proteins that initiate and execute this process in this group of organisms. A bioinformatic approach presents an array of powerful methods to direct investigators in the identification of the apoptosis machinery in protozoans. In this review, we discuss some of the available computational methods and illustrate how they may be applied using the identification of a Plasmodium falciparum metacaspase gene as an example. PMID:19812769

  3. Evidence flow graph methods for validation and verification of expert systems

    NASA Technical Reports Server (NTRS)

    Becker, Lee A.; Green, Peter G.; Bhatnagar, Jayant

    1989-01-01

    The results of an investigation into the use of evidence flow graph techniques for performing validation and verification of expert systems are given. A translator to convert horn-clause rule bases into evidence flow graphs, a simulation program, and methods of analysis were developed. These tools were then applied to a simple rule base which contained errors. It was found that the method was capable of identifying a variety of problems, for example that the order of presentation of input data or small changes in critical parameters could affect the output from a set of rules.

  4. An axisymmetric PFEM formulation for bottle forming simulation

    NASA Astrophysics Data System (ADS)

    Ryzhakov, Pavel B.

    2017-01-01

    A numerical model for bottle forming simulation is proposed. It is based upon the Particle Finite Element Method (PFEM) and is developed for the simulation of bottles characterized by rotational symmetry. The PFEM strategy is adapted to suit the problem of interest. Axisymmetric version of the formulation is developed and a modified contact algorithm is applied. This results in a method characterized by excellent computational efficiency and volume conservation characteristics. The model is validated. An example modelling the final blow process is solved. Bottle wall thickness is estimated and the mass conservation of the method is analysed.

  5. Spline-based Rayleigh-Ritz methods for the approximation of the natural modes of vibration for flexible beams with tip bodies

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1985-01-01

    Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator-theoretic formulation of the eigenvalue problem is derived and its spectral properties are investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators, together with the approximation properties of interpolatory splines, is used to argue convergence and establish rates of convergence. An example and numerical results are discussed.
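
    The flavor of a spline-based Rayleigh-Ritz computation can be sketched on the simplest vibration eigenproblem, a uniform string rather than a beam with tip bodies, using linear rather than higher-order splines. Everything below is an illustrative simplification of the paper's setting:

```python
import numpy as np

# Rayleigh-Ritz with hat functions (linear splines) for -u'' = lambda*u on
# (0, 1), u(0) = u(1) = 0: assemble stiffness K and consistent mass M and
# solve the generalized eigenproblem K v = lambda M v.
n = 40                                  # interior nodes
h = 1.0 / (n + 1)
K = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h
M = (np.diag(4 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * h / 6

evals = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
rel_err = abs(evals[0] - np.pi**2) / np.pi**2   # exact value is pi**2
```

    As the Rayleigh-Ritz theory predicts, the computed fundamental eigenvalue bounds the exact one from above and converges as the mesh (or spline order) is refined.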

  6. In vivo fluorescence lifetime optical projection tomography

    PubMed Central

    McGinty, James; Taylor, Harriet B.; Chen, Lingling; Bugeon, Laurence; Lamb, Jonathan R.; Dallman, Margaret J.; French, Paul M. W.

    2011-01-01

    We demonstrate the application of fluorescence lifetime optical projection tomography (FLIM-OPT) to in vivo imaging of lysC:GFP transgenic zebrafish embryos (Danio rerio). This method has been applied to unambiguously distinguish between the fluorescent protein (GFP) signal in myeloid cells from background autofluorescence based on the fluorescence lifetime. The combination of FLIM, an inherently ratiometric method, in conjunction with OPT results in a quantitative 3-D tomographic technique that could be used as a robust method for in vivo biological and pharmaceutical research, for example as a readout of Förster resonance energy transfer based interactions. PMID:21559145

  7. A comparative review of optical surface contamination assessment techniques

    NASA Technical Reports Server (NTRS)

    Heaney, James B.

    1987-01-01

    This paper will review the relative sensitivities and practicalities of the common surface analytical methods that are used to detect and identify unwelcome adsorbants on optical surfaces. The compared methods include visual inspection, simple reflectometry and transmissometry, ellipsometry, infrared absorption and attenuated total reflectance spectroscopy (ATR), Auger electron spectroscopy (AES), scanning electron microscopy (SEM), secondary ion mass spectrometry (SIMS), and mass accretion determined by quartz crystal microbalance (QCM). The discussion is biased toward those methods that apply optical thin film analytical techniques to spacecraft optical contamination problems. Examples are cited from both ground based and in-orbit experiments.

  8. Wentzel-Kramers-Brillouin method in the Bargmann representation. [of quantum mechanics

    NASA Technical Reports Server (NTRS)

    Voros, A.

    1989-01-01

    It is demonstrated that the Bargmann representation of quantum mechanics is ideally suited for semiclassical analysis, using as an example the WKB method applied to the bound-state problem in a single well of one degree of freedom. For the harmonic oscillator, this WKB method trivially gives the exact eigenfunctions in addition to the exact eigenvalues. For an anharmonic well, a self-consistent variational choice of the representation greatly improves the accuracy of the semiclassical ground state. Also, a simple change of scale illuminates the relationship of semiclassical versus linear perturbative expansions, allowing a variety of multidimensional extensions.

  9. Input reconstruction of chaos sensors.

    PubMed

    Yu, Dongchuan; Liu, Fang; Lai, Pik-Yin

    2008-06-01

    Although the sensitivity of sensors can be significantly enhanced using chaotic dynamics, owing to its extremely sensitive dependence on initial conditions and parameters, reconstructing the measured signal from the distorted sensor response becomes challenging. In this paper we suggest an effective method to reconstruct the measured signal from the distorted (chaotic) response of chaos sensors. This measurement signal reconstruction method applies neural network techniques for system structure identification and therefore does not require precise information about the sensor's dynamics. We also discuss how to improve the robustness of the reconstruction. Some examples are presented to illustrate the suggested method.

  10. Error Estimates for Approximate Solutions of the Riccati Equation with Real or Complex Potentials

    NASA Astrophysics Data System (ADS)

    Finster, Felix; Smoller, Joel

    2010-09-01

    A method is presented for obtaining rigorous error estimates for approximate solutions of the Riccati equation, with real or complex potentials. Our main tool is to derive invariant region estimates for complex solutions of the Riccati equation. We explain the general strategy for applying these estimates and illustrate the method in typical examples, where the approximate solutions are obtained by gluing together WKB and Airy solutions of corresponding one-dimensional Schrödinger equations. Our method is motivated by, and has applications to, the analysis of linear wave equations in the geometry of a rotating black hole.
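    For readers unfamiliar with the connection the abstract relies on, the standard substitution linking the one-dimensional Schrödinger equation to a Riccati equation (a textbook identity, not specific to this paper) is:

```latex
% Logarithmic-derivative substitution: if \psi solves the Schr\"odinger equation
\psi''(x) \;=\; \bigl(V(x)-E\bigr)\,\psi(x),
% then y = \psi'/\psi satisfies the Riccati equation
y'(x) + y(x)^{2} \;=\; V(x) - E ,
% since y' = \psi''/\psi - (\psi'/\psi)^{2} = (V-E) - y^{2}.
```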

  11. Asymptotic approximation method of force reconstruction: Application and analysis of stationary random forces

    NASA Astrophysics Data System (ADS)

    Sanchez, J.

    2018-06-01

    In this paper, the recently introduced asymptotic approximation method of force reconstruction is applied to, and analyzed for, single degree-of-freedom systems. The original concepts are summarized, and the necessary probabilistic concepts are developed and applied to single degree-of-freedom systems. These concepts are then united, and the theoretical and computational models are developed. To determine the viability of the proposed method in a probabilistic context, numerical experiments are conducted, consisting of a frequency analysis, an analysis of the effects of measurement noise, and a statistical analysis. In addition, two examples are presented and discussed.

  12. The solitary wave solution of coupled Klein-Gordon-Zakharov equations via two different numerical methods

    NASA Astrophysics Data System (ADS)

    Dehghan, Mehdi; Nikpour, Ahmad

    2013-09-01

    In this research, we propose two different methods to solve the coupled Klein-Gordon-Zakharov (KGZ) equations: the Differential Quadrature (DQ) and Globally Radial Basis Functions (GRBFs) methods. In the DQ method, the derivative value of a function at a point is directly approximated by a linear combination of all functional values in the global domain. The principal work in this method is the determination of the weight coefficients. We use two ways of obtaining these coefficients: cosine expansion (CDQ) and radial basis functions (RBFs-DQ); the former is a mesh-based method and the latter belongs to the set of meshless methods. Unlike the DQ method, the GRBF method directly substitutes the RBF expression of the function approximation into the partial differential equation. The main problem in the GRBFs method is ill-conditioning of the interpolation matrix. To avoid this problem, we study the bases introduced in Pazouki and Schaback (2011) [44]. Some examples are presented to compare the accuracy and ease of implementation of the proposed methods. In the numerical examples, we concentrate on the Inverse Multiquadric (IMQ) and second-order Thin Plate Spline (TPS) radial basis functions. Variable shape parameter strategies (exponential and random) are applied to the IMQ function and the results are compared with those for a constant shape parameter.

  13. Green extraction of natural products: concept and principles.

    PubMed

    Chemat, Farid; Vian, Maryline Abert; Cravotto, Giancarlo

    2012-01-01

    The design of green and sustainable extraction methods for natural products is currently a hot research topic in the multidisciplinary area of applied chemistry, biology and technology. Herein we introduce the six principles of green extraction, describing a multifaceted strategy for applying this concept at the research and industrial levels. The mainstays of this working protocol are new and innovative technologies, process intensification, agro-solvents and energy saving. The concept, principles and examples of green extraction discussed here offer an updated glimpse of the huge technological effort being made and the diverse applications being developed.

  14. Basic principles of Hasse diagram technique in chemistry.

    PubMed

    Brüggemann, Rainer; Voigt, Kristina

    2008-11-01

    Principles of partial order applied to ranking are explained. The Hasse diagram technique (HDT) is the application of partial order theory based on a data matrix. In this paper, HDT is introduced in a stepwise procedure, and some elementary theorems are exemplified. The focus is to show how the multivariate character of a data matrix is realized by HDT and in which cases one should apply other mathematical or statistical methods. Many simple examples illustrate the basic theoretical ideas. Finally, it is shown that HDT is a useful alternative for the evaluation of antifouling agents, which was originally performed by amoeba diagrams.

  15. Animations, games, and virtual reality for the Jing-Hang Grand Canal.

    PubMed

    Chen, Wenzhi; Zhang, Mingmin; Pan, Zhigeng; Liu, Gengdai; Shen, Huaqing; Chen, Shengnan; Liu, Yong

    2010-01-01

    Digital heritage, an effective method to preserve and present natural and cultural heritage, is engaging many heritage preservation specialists and computer scientists. In particular, computer graphics researchers have become involved, and digital heritage has employed many CG techniques. For example, Daniel Pletinckx and his colleagues employed VR in a real museum at Ename, Belgium, and Zhigeng Pan and his colleagues applied it to construct a virtual Olympics museum. Soo-Chang Pei and his colleagues focused on restoring ancient Chinese paintings. Here, we describe how we've applied animations, computer games, and VR to China's famous Jing-Hang Grand Canal.

  16. Matrix form of Legendre polynomials for solving linear integro-differential equations of high order

    NASA Astrophysics Data System (ADS)

    Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.

    2017-04-01

    This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on Legendre polynomials are used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transform the matrix equation into a system of linear algebraic equations, which is then solved by Gaussian elimination. The accuracy and validity of this method are discussed by solving two numerical examples and through comparisons with wavelet and other methods.

  17. Action methods in the classroom: creative strategies for nursing education.

    PubMed

    McLaughlin, Dorcas E; Freed, Patricia E; Tadych, Rita A

    2006-01-01

    Nursing education recognizes the need for a framework of experiential learning that supports the development of professional roles. Action methods, originated by Jacob L. Moreno (1953), can be readily adapted to any nursing classroom to create the conditions under which students learn and practice professional nursing roles. While nurse faculty can learn to use action methods, they may not fully comprehend their theoretical underpinnings or may believe they are only used in therapy. This article explores Moreno's ideas related to psychodrama and sociodrama applied in classroom settings, and presents many examples and tips for classroom teachers who wish to incorporate action methods into their classes.

  18. First-order design of geodetic networks using the simulated annealing method

    NASA Astrophysics Data System (ADS)

    Berné, J. L.; Baselga, S.

    2004-09-01

    The general problem of the optimal design for a geodetic network subject to any extrinsic factors, namely the first-order design problem, can be dealt with as a numeric optimization problem. The classic theory of this problem and the optimization methods are revised. Then the innovative use of the simulated annealing method, which has been successfully applied in other fields, is presented for this classical geodetic problem. This method, belonging to iterative heuristic techniques in operational research, uses a thermodynamical analogy to crystalline networks to offer a solution that converges probabilistically to the global optimum. Basic formulation and some examples are studied.
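    The probabilistic acceptance rule behind simulated annealing can be illustrated with a generic sketch on a toy cost function (this is the standard algorithm in schematic form, not the geodetic first-order design criterion used in the paper; all parameter values are illustrative):

```python
import math, random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    """Minimize `cost` with a thermodynamical acceptance rule:
    worse candidates are accepted with probability exp(-increase/T),
    and the temperature T is lowered geometrically."""
    rng = random.Random(seed)
    x, e = x0, cost(x0)
    best_x, best_e = x, e
    t = t0
    for _ in range(iters):
        cand = x + rng.gauss(0.0, step)      # random perturbation of the design
        e_cand = cost(cand)
        if e_cand < e or rng.random() < math.exp(-(e_cand - e) / t):
            x, e = cand, e_cand              # accept (possibly uphill) move
            if e < best_e:
                best_x, best_e = x, e        # remember best design seen so far
        t *= cooling                         # geometric cooling schedule
    return best_x, best_e

# Toy cost with a single minimum at x = 3
x_opt, e_opt = simulated_annealing(lambda x: (x - 3.0) ** 2, x0=-5.0)
```

    Accepting occasional uphill moves at high temperature is what lets the method escape local optima and converge probabilistically toward the global one.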

  19. An information-based network approach for protein classification

    PubMed Central

    Wan, Xiaogeng; Zhao, Xin; Yau, Stephen S. T.

    2017-01-01

    Protein classification is one of the critical problems in bioinformatics. Early studies used geometric distances and phylogenetic trees to classify proteins; these methods use binary trees to present protein classifications. In this paper, we propose a new protein classification method in which theories of information and networks are used to classify the multivariate relationships of proteins. In this study, the protein universe is modeled as an undirected network, where proteins are classified according to their connections. Our method is unsupervised, multivariate, and alignment-free. It can be applied to the classification of both protein sequences and structures. Nine examples are used to demonstrate the efficiency of our new method. PMID:28350835

  20. Measurement of absorption and dispersion from check shot surveys

    NASA Astrophysics Data System (ADS)

    Ganley, D. C.; Kanasewich, E. R.

    1980-10-01

    The spectral ratio method for measuring absorption and also dispersion from seismic data has been examined. Corrections for frequency-dependent losses due to reflections and transmissions have been shown to be an important step in the method. Synthetic examples have been used to illustrate the method, and the method has been applied to one real data case from a sedimentary basin in the Beaufort Sea. Measured Q values were 43±2 for a depth interval of 549-1193 m and 67±6 for a depth interval of 945-1311 m. Dispersion was also measured in the data and is consistent with Futterman's model.
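    The spectral ratio method can be sketched as follows: for a constant-Q attenuation model, the log spectral ratio between two depth levels is linear in frequency with slope -pi*dt/Q, so Q follows from a least-squares line fit. A minimal synthetic illustration (the frequency band, travel time, and Q value below are invented for the sketch, not the Beaufort Sea measurements):

```python
import numpy as np

def estimate_q(freqs, spec_upper, spec_lower, delta_t):
    """Spectral-ratio Q estimate.

    Constant-Q attenuation over a travel-time interval delta_t gives
    ln(A_lower/A_upper) = const - pi*f*delta_t/Q, so Q follows from
    the slope of a least-squares line fit over frequency."""
    slope = np.polyfit(freqs, np.log(spec_lower / spec_upper), 1)[0]
    return -np.pi * delta_t / slope

# Synthetic check: build spectra with a known Q and recover it
f = np.linspace(10.0, 80.0, 30)      # frequency band, Hz
dt = 0.4                             # interval travel time, s
q_true = 50.0
upper = 1.0 / f                      # arbitrary smooth source spectrum
lower = upper * np.exp(-np.pi * f * dt / q_true)
q_est = estimate_q(f, upper, lower, dt)
```

    On real data, the frequency-dependent reflection and transmission corrections stressed in the abstract must be applied before the line fit, or they bias the recovered slope.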

  1. Lessons from comparative effectiveness research methods development projects funded under the Recovery Act.

    PubMed

    Zurovac, Jelena; Esposito, Dominick

    2014-11-01

    The American Recovery and Reinvestment Act of 2009 (ARRA) directed nearly US$29.2 million to comparative effectiveness research (CER) methods development. To help inform future CER methods investments, we describe the ARRA CER methods projects, identify barriers to this research and discuss the alignment of topics with published methods development priorities. We used several existing resources and held discussions with ARRA CER methods investigators. Although funded projects explored many identified priority topics, investigators noted that much work remains. For example, given the considerable investments in CER data infrastructure, the methods development field can benefit from additional efforts to educate researchers about the availability of new data sources and about how best to apply methods to match their research questions and data.

  2. The vibroacoustic response and sound absorption performance of multilayer, microperforated rib-stiffened plates

    NASA Astrophysics Data System (ADS)

    Zhou, Haian; Wang, Xiaoming; Wu, Huayong; Meng, Jianbing

    2017-10-01

    The vibroacoustic response and sound absorption performance of a structure composed of multilayer plates and one rigid back wall are theoretically analyzed. In this structure, all plates are two-dimensional, microperforated, and periodically rib-stiffened. To investigate such a structural system, semianalytical models of one-layer and multilayer plate structures considering the vibration effects are first developed. Then approaches of the space harmonic method and Fourier transforms are applied to a one-layer plate, and finally the cascade connection method is utilized for a multilayer plate structure. Based on fundamental acoustic formulas, the vibroacoustic responses of microperforated stiffened plates are expressed as functions of a series of harmonic amplitudes of plate displacement, which are then solved by employing the numerical truncation method. Applying the inverse Fourier transform, wave propagation, and linear addition properties, the equations of the sound pressures and absorption coefficients for the one-layer and multilayer stiffened plates in physical space are finally derived. Using numerical examples, the effects of the most important physical parameters—for example, the perforation ratio of the plate, sound incident angles, and periodical rib spacing—on sound absorption performance are examined. Numerical results indicate that the sound absorption performance of the studied structure is effectively enhanced by the flexural vibration of the plate in water. Finally, the proposed approaches are validated by comparing the results of stiffened plates of the present work with solutions from previous studies.

  3. Small-aperture seismic array data processing using a representation of seismograms at zero-crossing points

    NASA Astrophysics Data System (ADS)

    Brokešová, Johana; Málek, Jiří

    2018-07-01

    A new method for representing seismograms by using zero-crossing points is described. This method is based on decomposing a seismogram into a set of quasi-harmonic components and, subsequently, on determining the precise zero-crossing times of these components. An analogous approach can be applied to determine extreme points that represent the zero-crossings of the first time derivative of the quasi-harmonics. Such zero-crossing and/or extreme point seismogram representation can be used successfully to reconstruct single-station seismograms, but the main application is to small-aperture array data analysis to which standard methods cannot be applied. The precise times of the zero-crossing and/or extreme points make it possible to determine precise time differences across the array used to retrieve the parameters of a plane wave propagating across the array, namely, its backazimuth and apparent phase velocity along the Earth's surface. The applicability of this method is demonstrated using two synthetic examples. In the real-data example from the Příbram-Háje array in central Bohemia (Czech Republic) for the Mw 6.4 Crete earthquake of October 12, 2013, this method is used to determine the phase velocity dispersion of both Rayleigh and Love waves. The resulting phase velocities are compared with those obtained by employing the seismic plane-wave rotation-to-translation relations. In this approach, the phase velocity is calculated by obtaining the amplitude ratios between the rotation and translation components. Seismic rotations are derived from the array data, for which the small aperture is not only an advantage but also an applicability condition.
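    The core operation, locating precise zero-crossing times of a quasi-harmonic component sampled at discrete instants, can be sketched with linear interpolation between samples of opposite sign (an illustrative sketch only; the paper's decomposition into quasi-harmonic components and the array processing are not reproduced here):

```python
import numpy as np

def zero_crossing_times(t, x):
    """Zero-crossing times of a sampled signal, refined by linear
    interpolation between adjacent samples of opposite sign."""
    s = np.sign(x)
    idx = np.where(s[:-1] * s[1:] < 0)[0]      # sign changes between samples
    # linear interpolation: t0 = t_i - x_i * (t_{i+1} - t_i) / (x_{i+1} - x_i)
    return t[idx] - x[idx] * (t[idx + 1] - t[idx]) / (x[idx + 1] - x[idx])

t = np.linspace(0.0, 1.0, 2001)                # 2 kHz sampling
x = np.sin(2.0 * np.pi * 5.0 * t)              # a 5 Hz quasi-harmonic component
tz = zero_crossing_times(t, x)                 # true interior zeros at k/10
```

    For a narrow-band component the signal is nearly linear between samples, so the interpolated crossing times are far more precise than the raw sampling interval, which is what makes sub-sample time differences across a small-aperture array measurable.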

  4. Laser beam heat method reported

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Hachiro; Goto, Hidekazu

    1988-07-01

    An outline of research on a processing method utilizing laser-induced thermochemistry is presented, with the CO2 laser processing of ceramics in CF4 gas used as a practical example. It has become clear that high-efficiency, high-precision laser processing of ceramics will be possible by exploiting thermochemical processes, although the present method is not necessarily the best one, and it is not yet clear that it can be applied to commercial processing. The processing characteristics of this method are expected to change greatly with the combination of atmospheric gas and material, so it is important to conduct tests on various combinations. Improvement and development should become possible by theoretically confirming the basic process, especially the thermochemical reaction between the solid surface and the atmospheric gas molecules. In practice, the thermochemical process on the solid surface is quite complicated: for example, it was confirmed that when thermochemically processing a Si monocrystal in CF4 gas, the processing speed changed by at least a factor of 10 when the gas pressure and the concentration of admixed O2 gas were varied. Conversely, the very complexity of this method, with its many unexplained points and room for research, conceals the possibility of application to various fields; in this sense, quantitative confirmation of its basic process is an important problem to be solved in the future.

  5. Formal Methods Specification and Analysis Guidebook for the Verification of Software and Computer Systems. Volume 2; A Practitioner's Companion

    NASA Technical Reports Server (NTRS)

    1995-01-01

    This guidebook, the second of a two-volume series, is intended to facilitate the transfer of formal methods to the avionics and aerospace community. The first volume concentrates on administrative and planning issues [NASA-95a], and the second volume focuses on the technical issues involved in applying formal methods to avionics and aerospace software systems. Hereafter, the term "guidebook" refers exclusively to the second volume of the series. The title of this second volume, A Practitioner's Companion, conveys its intent. The guidebook is written primarily for the nonexpert and requires little or no prior experience with formal methods techniques and tools. However, it does attempt to distill some of the more subtle ingredients in the productive application of formal methods. To the extent that it succeeds, those conversant with formal methods will also find the guidebook useful. The discussion is illustrated through the development of a realistic example, relevant fragments of which appear in each chapter. The guidebook focuses primarily on the use of formal methods for analysis of requirements and high-level design, the stages at which formal methods have been most productively applied. Although much of the discussion applies to low-level design and implementation, the guidebook does not discuss issues involved in the later life cycle application of formal methods.

  6. VizieR Online Data Catalog: Bayesian method for detecting stellar flares (Pitkin+, 2014)

    NASA Astrophysics Data System (ADS)

    Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.

    2015-05-01

    We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N. (1 data file).

  7. A Bayesian method for detecting stellar flares

    NASA Astrophysics Data System (ADS)

    Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.

    2014-12-01

    We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of `quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
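    The flare shape assumed in the signal model, a half-Gaussian rise followed by an exponential decay, can be sketched directly (parameter names and values below are illustrative assumptions, and the polynomial background component is omitted):

```python
import numpy as np

def flare_model(t, t0, amp, tau_g, tau_e):
    """Flare profile: half-Gaussian rise up to the peak time t0,
    exponential decay afterwards (polynomial background omitted)."""
    rise = amp * np.exp(-0.5 * ((t - t0) / tau_g) ** 2)
    decay = amp * np.exp(-(t - t0) / tau_e)
    return np.where(t < t0, rise, decay)

t = np.linspace(0.0, 10.0, 501)                       # time samples (arbitrary units)
flux = flare_model(t, t0=3.0, amp=1.0, tau_g=0.3, tau_e=1.5)
```

    Both branches equal `amp` at t = t0, so the profile is continuous at the peak; the rise and decay timescales are free parameters of the signal model.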

  8. Applying fluvial geomorphology to river channel management: Background for progress towards a palaeohydrology protocol

    NASA Astrophysics Data System (ADS)

    Gregory, K. J.; Benito, G.; Downs, P. W.

    2008-06-01

    Significant developments have been achieved in applicable and applied fluvial geomorphology over the last three decades, as shown in publications that are analyzed here as a basis for applying the results of studies of environmental change to management. The range of publications and activities has become more pertinent to river channel management as a result of concern with sustainability, global climate change, environmental ethics, ecosystem health concepts and public participation. Possible applications, with particular reference to river channel changes, include those concerned with form and process, assessment of channel change, urbanization, channelization, extractive industries, the impact of engineering works, historical changes in land use, and restoration, with specific examples illustrated in Table 1. In order to achieve general significance for fluvial geomorphology, more theory and extension through modelling methods are needed; examples related to morphology and process characteristics, integrated approaches, and changes of the fluvial system are collected in Table 2. The ways in which potential applications are communicated to decision-makers range from applicable outputs, including publications such as review papers, book chapters, and books, to applied outputs, which include interdisciplinary problem solving, educational outreach, and direct involvement, with examples summarized in Table 3. On the basis of results gained from investigations covering periods longer than continuous records, a protocol embracing palaeohydrological inputs for application to river channel management is illustrated and developed as a synopsis version (Table 4), demonstrating how conclusions from geomorphological research can be expressed in a format that can be considered by managers.

  9. On the convergence of an iterative formulation of the electromagnetic scattering from an infinite grating of thin wires

    NASA Technical Reports Server (NTRS)

    Brand, J. C.

    1985-01-01

    Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of the spectral (or k-space) formulation is presented, with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single-variable examples, including an extension to vector spaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems, including an infinite grating of thin wires, with the solution data compared to previous works.

  10. One-way ANOVA based on interval information

    NASA Astrophysics Data System (ADS)

    Hesamian, Gholamreza

    2016-08-01

    This paper deals with extending one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, a notion of interval random variable is first introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the homogeneity-of-interval-variances assumption. Moreover, the least significant difference (LSD) method for multiple comparisons of interval means is developed for cases where the null hypothesis of equal means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion for accepting or rejecting the null interval hypothesis of interest. Finally, this method of decision-making yields degrees of acceptance or rejection of the interval hypotheses. An applied example is used to show the performance of the method.

  11. A Lyapunov method for stability analysis of piecewise-affine systems over non-invariant domains

    NASA Astrophysics Data System (ADS)

    Rubagotti, Matteo; Zaccarian, Luca; Bemporad, Alberto

    2016-05-01

    This paper analyses stability of discrete-time piecewise-affine systems, defined on possibly non-invariant domains, taking into account the possible presence of multiple dynamics in each of the polytopic regions of the system. An algorithm based on linear programming is proposed, in order to prove exponential stability of the origin and to find a positively invariant estimate of its region of attraction. The results are based on the definition of a piecewise-affine Lyapunov function, which is in general discontinuous on the boundaries of the regions. The proposed method is proven to lead to feasible solutions in a broader range of cases as compared to a previously proposed approach. Two numerical examples are shown, among which a case where the proposed method is applied to a closed-loop system, to which model predictive control was applied without a-priori guarantee of stability.

  12. Human-computer interface including haptically controlled interactions

    DOEpatents

    Anderson, Thomas G.

    2005-10-11

    The present invention provides a method of human-computer interfacing that provides haptic feedback to control interface interactions such as scrolling or zooming within an application. Haptic feedback in the present method allows the user more intuitive control of the interface interactions, and allows the user's visual focus to remain on the application. The method comprises providing a control domain within which the user can control interactions. For example, a haptic boundary can be provided corresponding to scrollable or scalable portions of the application domain. The user can position a cursor near such a boundary, feeling its presence haptically (reducing the requirement for visual attention for control of scrolling of the display). The user can then apply force relative to the boundary, causing the interface to scroll the domain. The rate of scrolling can be related to the magnitude of applied force, providing the user with additional intuitive, non-visual control of scrolling.

  13. Extrapolation techniques applied to matrix methods in neutron diffusion problems

    NASA Technical Reports Server (NTRS)

    Mccready, Robert R

    1956-01-01

    A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel, with appropriate modifications to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates of the answer, and extrapolation techniques, based upon the previous behavior of the iterants, are utilized to speed convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
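    The idea of accelerating an iterative characteristic-value estimate by extrapolating from the previous behavior of the iterants can be illustrated with Aitken's delta-squared process applied to a simple power iteration (a toy sketch on an invented 2x2 matrix, not the report's Gauss-Seidel scheme for the two-group diffusion equations):

```python
import numpy as np

def aitken(seq):
    """Aitken delta-squared extrapolation of the last three iterates,
    which cancels the leading geometric error term of the sequence."""
    x0, x1, x2 = seq[-3:]
    denom = (x2 - x1) - (x1 - x0)
    return x2 if denom == 0 else x2 - (x2 - x1) ** 2 / denom

# Power iteration for the dominant eigenvalue of a small test matrix
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # eigenvalues 5 and 2
v = np.array([1.0, 0.0])
estimates = []
for _ in range(12):
    w = A @ v
    estimates.append(v @ w)           # Rayleigh-quotient estimate (v is unit)
    v = w / np.linalg.norm(w)
accelerated = aitken(estimates)       # extrapolated dominant eigenvalue
```

    Because the iteration error shrinks roughly geometrically (here by the eigenvalue ratio 2/5 per step), the extrapolated value is markedly closer to the true eigenvalue 5 than the last raw iterate.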

  14. Reconstructing signals from noisy data with unknown signal and noise covariance.

    PubMed

    Oppermann, Niels; Robbers, Georg; Ensslin, Torsten A

    2011-10-01

    We derive a method to reconstruct Gaussian signals from linear measurements with Gaussian noise. This new algorithm is intended for applications in astrophysics and other sciences. The starting point of our considerations is the principle of minimum Gibbs free energy, which was previously used to derive a signal reconstruction algorithm handling uncertainties in the signal covariance. We extend this algorithm to simultaneously uncertain noise and signal covariances using the same principles in the derivation. The resulting equations are general enough to be applied in many different contexts. We demonstrate the performance of the algorithm by applying it to specific example situations and compare it to algorithms not allowing for uncertainties in the noise covariance. The results show that the method we suggest performs very well under a variety of circumstances and is indeed qualitatively superior to the other methods in cases where uncertainty in the noise covariance is present.
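    For orientation, the known-covariance baseline that such algorithms generalize is the Wiener filter, m = (S^-1 + R^T N^-1 R)^-1 R^T N^-1 d. A minimal sketch with diagonal covariances chosen purely for illustration (the paper's actual contribution, handling uncertain signal and noise covariances, is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50
S = 4.0 * np.eye(n)        # prior signal covariance (assumed known in this sketch)
N = 4.0 * np.eye(n)        # noise covariance (assumed known in this sketch)
R = np.eye(n)              # linear response: direct noisy measurement

s = rng.multivariate_normal(np.zeros(n), S)            # a signal realization
d = R @ s + rng.multivariate_normal(np.zeros(n), N)    # data = response + noise

# Wiener filter reconstruction: m = (S^-1 + R^T N^-1 R)^-1 R^T N^-1 d
Sinv, Ninv = np.linalg.inv(S), np.linalg.inv(N)
m = np.linalg.solve(Sinv + R.T @ Ninv @ R, R.T @ Ninv @ d)
```

    With S = N and R = I this reduces to m = d/2: the reconstruction shrinks the data halfway toward the zero prior mean, the optimal trade-off when signal and noise power are equal.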

  15. The method of abstraction in the design of databases and the interoperability

    NASA Astrophysics Data System (ADS)

    Yakovlev, Nikolay

    2018-03-01

    The method of abstraction can be applied when designing a database structure oriented to the contents of the indicators presented in the documents and communications of a subject area. First, the method of abstraction is applied by extending the indicators with new, artificially constructed abstract concepts. The use of abstract concepts makes it possible to avoid registering many-to-many relations; for this reason, structures built using abstract concepts demonstrate greater stability as processes change. An example of such an abstract concept for representing addresses is a unique house number. Second, the method of abstraction can be used to transform concepts by omitting attributes that are unnecessary for solving certain classes of problems. Data processing associated with the modified concepts is simpler, without losing the ability to solve the considered classes of problems. For example, the concept "street" loses its binding to the land: the content of the modified concept "street" is only the relation of houses to the declared name, which is sufficient for most accounting and communication tasks.

  16. Characterizing crustal and uppermost mantle anisotropy with a depth-dependent tilted hexagonally symmetric elastic tensor: theory and examples

    NASA Astrophysics Data System (ADS)

    Feng, L.; Xie, J.; Ritzwoller, M. H.

    2017-12-01

    Two major types of surface wave anisotropy are commonly observed by seismologists but are only rarely interpreted jointly: apparent radial anisotropy, which is the difference in propagation speed between horizontally and vertically polarized waves inferred from Love and Rayleigh waves, and apparent azimuthal anisotropy, which is the directional dependence of surface wave speeds (usually Rayleigh waves). We describe a method of inversion that interprets simultaneous observations of radial and azimuthal anisotropy under the assumption of a hexagonally symmetric elastic tensor with a tilted symmetry axis defined by dip and strike angles. With a full-waveform numerical solver based on the spectral element method (SEM), we verify the validity of the forward theory used for the inversion. We also present two examples, in the US and Tibet, in which we have successfully applied the tomographic method to demonstrate that the two types of apparent anisotropy can be interpreted jointly as a tilted hexagonally symmetric medium.

  17. Memory sparing, fast scattering formalism for rigorous diffraction modeling

    NASA Astrophysics Data System (ADS)

    Iff, W.; Kämpfe, T.; Jourlin, Y.; Tishchenko, A. V.

    2017-07-01

    The basics and algorithmic steps of a novel scattering formalism suited for memory sparing and fast electromagnetic calculations are presented. The formalism, called ‘S-vector algorithm’ (by analogy with the known scattering-matrix algorithm), allows the calculation of the collective scattering spectra of individual layered micro-structured scattering objects. A rigorous method of linear complexity is applied to model the scattering at individual layers; here the generalized source method (GSM) resorting to Fourier harmonics as basis functions is used as one possible method of linear complexity. The concatenation of the individual scattering events can be achieved sequentially or in parallel, both having pros and cons. The present development will largely concentrate on a consecutive approach based on the multiple reflection series. The latter will be reformulated into an implicit formalism which will be associated with an iterative solver, resulting in improved convergence. The examples will first refer to 1D grating diffraction for the sake of simplicity and intelligibility, with a final 2D application example.

  18. The use of Spark Plasma Sintering method for high-rate diffusion welding of high-strength UFG titanium alloys

    NASA Astrophysics Data System (ADS)

    Nokhrin, A. V.; Chuvil'deev, V. N.; Boldin, M. S.; Piskunov, A. V.; Kozlova, N. A.; Chegurov, M. K.; Popov, A. A.; Lantcev, E. A.; Kopylov, V. I.; Tabachkova, N. Yu

    2017-07-01

    The article provides an example of applying the technology of spark plasma sintering (SPS) to ensure high-rate diffusion welding of high-strength ultra-fine-grained UFG titanium alloys. Weld seams produced from Ti-5Al-2V UFG titanium alloy and obtained through SPS are characterized by high density, hardness and corrosion resistance.

  19. A Bootstrap Algorithm for Mixture Models and Interval Data in Inter-Comparisons

    DTIC Science & Technology

    2001-07-01

    parametric bootstrap. The present algorithm will be applied to a thermometric inter-comparison, where data cannot be assumed to be normally distributed. 2 Data...experimental methods, used in each laboratory) often imply that the statistical assumptions are not satisfied, as for example in several thermometric ...triangular). Indeed, in thermometric experiments these three probabilistic models can represent several common stochastic variabilities for

  20. Transformation of gram positive bacteria by sonoporation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yunfeng; Li, Yongchao

    The present invention provides a sonoporation-based method that can be universally applied for delivery of compounds into Gram positive bacteria. Gram positive bacteria which can be transformed by sonoporation include, for example, Bacillus, Streptococcus, Acetobacterium, and Clostridium. Compounds which can be delivered into Gram positive bacteria via sonoporation include nucleic acids (DNA or RNA), proteins, lipids, carbohydrates, viruses, small organic and inorganic molecules, and nano-particles.

  1. Computer analysis of potentiometric data of complexes formation in the solution

    NASA Astrophysics Data System (ADS)

    Jastrzab, Renata; Kaczmarek, Małgorzata T.; Tylkowski, Bartosz; Odani, Akira

    2018-02-01

    The determination of equilibrium constants is an important process for many branches of chemistry. In this review we provide the reader with a discussion of computer methods that have been applied to the analysis of potentiometric experimental data generated during complex formation in solution. The review describes both the general basis of the modeling tools and examples of the use of calculated stability constants.

  2. Tuning of PID controllers for boiler-turbine units.

    PubMed

    Tan, Wen; Liu, Jizhen; Fang, Fang; Chen, Yanqiao

    2004-10-01

    A simple two-by-two model for a boiler-turbine unit is demonstrated in this paper. The model can capture the essential dynamics of a unit. The design of a coordinated controller is discussed based on this model. A PID control structure is derived, and a tuning procedure is proposed. The examples show that the method is easy to apply and can achieve acceptable performance.
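
    The abstract does not spell out the derived PID structure or the tuning procedure, but the controller being tuned can be sketched generically. Below is a minimal discrete PID loop; the gains and the toy first-order plant are illustrative assumptions, not the paper's boiler-turbine model.

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch only)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt            # accumulate integral term
        deriv = (err - self.prev_err) / self.dt   # finite-difference derivative
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Hypothetical closed loop: first-order plant x' = -x + u, Euler-stepped at dt = 0.01.
pid = PID(kp=2.0, ki=1.0, kd=0.0, dt=0.01)
x = 0.0
for _ in range(2000):
    u = pid.update(1.0, x)
    x += 0.01 * (-x + u)  # Euler step of the plant
```

    With these arbitrary gains the loop settles at the setpoint; the paper's contribution is a tuning procedure for the coupled multivariable boiler-turbine case, which the abstract does not detail.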

  3. On the Assessment of Psychometric Adequacy in Correlation Matrices.

    ERIC Educational Resources Information Center

    Dziuban, Charles D.; Shirkey, Edwin C.

    Three techniques for assessing the adequacy of correlation matrices for factor analysis were applied to four examples from the literature. The methods compared were: (1) inspection of the off-diagonal elements of the anti-image covariance matrix S²R⁻¹S²; (2) the Measure of Sampling Adequacy (M.S.A.), and (3)…

  4. Smoothing and Equating Methods Applied to Different Types of Test Score Distributions and Evaluated with Respect to Multiple Equating Criteria. Research Report. ETS RR-11-20

    ERIC Educational Resources Information Center

    Moses, Tim; Liu, Jinghua

    2011-01-01

    In equating research and practice, equating functions that are smooth are typically assumed to be more accurate than equating functions with irregularities. This assumption presumes that population test score distributions are relatively smooth. In this study, two examples were used to reconsider common beliefs about smoothing and equating. The…

  5. Computer-generated formulas for three-center nuclear-attraction integrals (electrostatic potential) for Slater-type orbitals

    NASA Technical Reports Server (NTRS)

    Jones, H. W.

    1984-01-01

    The computer-assisted C-matrix, Loewdin-alpha-function, single-center expansion method in spherical harmonics has been applied to the three-center nuclear-attraction integral (potential due to the product of separated Slater-type orbitals). Exact formulas are produced for 13 terms of an infinite series that permits evaluation to ten decimal digits of an example using 1s orbitals.

  6. Mobile micro-colorimeter and micro-spectrometer sensor modules as enablers for the replacement of subjective inspections by objective measurements for optically clear colored liquids in-field

    NASA Astrophysics Data System (ADS)

    Dittrich, Paul-Gerald; Grunert, Fred; Ehehalt, Jörg; Hofmann, Dietrich

    2015-03-01

    The aim of the paper is to show that the colorimetric characterization of optically clear colored liquids can be performed with different measurement methods and their application-specific multichannel spectral sensors. The possible measurement methods are differentiated by the applied types of multichannel spectral sensors and therefore by their spectral resolution, measurement speed, measurement accuracy and measurement costs. The paper describes how different types of multichannel spectral sensors are calibrated with different types of calibration methods and how the measurement values can be used for further colorimetric calculations. The different measurement methods and the different application-specific calibration methods are explained methodically and theoretically. The paper demonstrates how different multichannel spectral sensor modules with different calibration methods can be applied with smartpads for the calculation of measurement results both in the laboratory and in the field. A practical example given is the application of different multichannel spectral sensors for the colorimetric characterization of petroleum oils and fuels by the Saybolt color scale.

  7. Shape optimization of self-avoiding curves

    NASA Astrophysics Data System (ADS)

    Walker, Shawn W.

    2016-04-01

    This paper presents a softened notion of proximity (or self-avoidance) for curves. We then derive a sensitivity result, based on shape differential calculus, for the proximity. This is combined with a gradient-based optimization approach to compute three-dimensional, parameterized curves that minimize the sum of an elastic (bending) energy and a proximity energy that maintains self-avoidance by a penalization technique. Minimizers are computed by a sequential-quadratic-programming (SQP) method where the bending energy and proximity energy are approximated by a finite element method. We then apply this method to two problems. First, we simulate adsorbed polymer strands that are constrained to be bound to a surface and be (locally) inextensible. This is a basic model of semi-flexible polymers adsorbed onto a surface (a current topic in material science). Several examples of minimizing curve shapes on a variety of surfaces are shown. An advantage of the method is that it can be much faster than using molecular dynamics for simulating polymer strands on surfaces. Second, we apply our proximity penalization to the computation of ideal knots. We present a heuristic scheme, utilizing the SQP method above, for minimizing rope-length and apply it in the case of the trefoil knot. Applications of this method could be for generating good initial guesses to a more accurate (but expensive) knot-tightening algorithm.

  8. Identifying outliers of non-Gaussian groundwater state data based on ensemble estimation for long-term trends

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kueyoung; Choung, Sungwook; Chung, Il Moon

    2017-05-01

    A hydrogeological dataset often includes substantial deviations that need to be inspected. In the present study, three outlier identification methods - the three sigma rule (3σ), inter-quantile range (IQR), and median absolute deviation (MAD) - that take advantage of the ensemble regression method are proposed, considering the non-Gaussian characteristics of groundwater data. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data under a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level, whereas only the IQR method performs well at a 30% outlier level. When the methods are applied to real groundwater data, the outlier identification performance of the IQR method is found to be superior to that of the other two methods. However, the IQR method shows a limitation in that it identifies excessive false outliers, which may be overcome by its joint application with other methods (for example, the 3σ rule and MAD methods). The proposed methods can also be applied as potential tools for the detection of future anomalies by model training based on currently available data.
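
    As a minimal sketch of the three identification rules on a one-dimensional sample (the thresholds are the conventional defaults, and the ensemble-regression step of the paper is omitted):

```python
import numpy as np

def outliers_3sigma(x):
    # Flag points more than 3 standard deviations from the mean.
    return np.abs(x - np.mean(x)) > 3 * np.std(x)

def outliers_iqr(x, k=1.5):
    # Flag points outside [Q1 - k*IQR, Q3 + k*IQR].
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return (x < q1 - k * iqr) | (x > q3 + k * iqr)

def outliers_mad(x, threshold=3.5):
    # Flag points whose modified z-score (based on the median
    # absolute deviation) exceeds the threshold.
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return np.abs(0.6745 * (x - med) / mad) > threshold

x = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 10.0])
# A single large deviation inflates the standard deviation, so on small
# samples the 3σ rule can miss the outlier while IQR and MAD flag it.
```

    This masking effect of extreme values on the mean and standard deviation is one reason robust rules such as IQR and MAD are preferred for skewed, non-Gaussian groundwater data.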

  9. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that “requirement is the origin, design is the basis”. So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail, with the hope of providing this experience to other civil jet product designs.

  10. Conformal coating of highly structured surfaces

    DOEpatents

    Ginley, David S.; Perkins, John; Berry, Joseph; Gennett, Thomas

    2012-12-11

    Method of applying a conformal coating to a highly structured substrate and devices made by the disclosed methods are disclosed. An example method includes the deposition of a substantially contiguous layer of a material upon a highly structured surface within a deposition process chamber. The highly structured surface may be associated with a substrate or another layer deposited on a substrate. The method includes depositing a material having an amorphous structure on the highly structured surface at a deposition pressure of equal to or less than about 3 mTorr. The method may also include removing a portion of the amorphous material deposited on selected surfaces and depositing additional amorphous material on the highly structured surface.

  11. Solution of second order quasi-linear boundary value problems by a wavelet method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lei; Zhou, Youhe; Wang, Jizeng, E-mail: jzwang@lzu.edu.cn

    2015-03-10

    A wavelet Galerkin method based on expansions of Coiflet-like scaling function bases is applied to solve second order quasi-linear boundary value problems which represent a class of typical nonlinear differential equations. Two types of typical engineering problems are selected as test examples: one is about nonlinear heat conduction and the other is on bending of elastic beams. Numerical results are obtained by the proposed wavelet method. Through comparing to relevant analytical solutions as well as solutions obtained by other methods, we find that the method shows better efficiency and accuracy than several others, and the rate of convergence can even reach orders of 5.8.

  12. Numerical method of applying shadow theory to all regions of multilayered dielectric gratings in conical mounting.

    PubMed

    Wakabayashi, Hideaki; Asai, Masamitsu; Matsumoto, Keiji; Yamakita, Jiro

    2016-11-01

    Nakayama's shadow theory first discussed the diffraction by a perfectly conducting grating in a planar mounting. In the theory, a new formulation by use of a scattering factor was proposed. This paper focuses on the middle regions of a multilayered dielectric grating placed in conical mounting. Applying the shadow theory to the matrix eigenvalues method, we compose new transformation and improved propagation matrices of the shadow theory for conical mounting. Using these matrices and scattering factors, being the basic quantity of diffraction amplitudes, we formulate a new description of three-dimensional scattering fields which is available even for cases where the eigenvalues are degenerate in any region. Some numerical examples are given for cases where the eigenvalues are degenerate in the middle regions.

  13. Inverse problems in quantum chemistry

    NASA Astrophysics Data System (ADS)

    Karwowski, Jacek

    Inverse problems constitute a branch of applied mathematics with well-developed methodology and formalism. A broad family of tasks met in theoretical physics, in civil and mechanical engineering, as well as in various branches of medical and biological sciences has been formulated as specific implementations of the general theory of inverse problems. In this article, it is pointed out that a number of approaches met in quantum chemistry can (and should) be classified as inverse problems. Consequently, the methodology used in these approaches may be enriched by applying ideas and theorems developed within the general field of inverse problems. Several examples, including the RKR method for the construction of potential energy curves, determining parameter values in semiempirical methods, and finding external potentials for which the pertinent Schrödinger equation is exactly solvable, are discussed in detail.

  14. Kinetics analysis and quantitative calculations for the successive radioactive decay process

    NASA Astrophysics Data System (ADS)

    Zhou, Zhiping; Yan, Deyue; Zhao, Yuliang; Chai, Zhifang

    2015-01-01

    The general radioactive decay kinetics equations with branching were developed and the analytical solutions were derived by the Laplace transform method. The time dependence of all the nuclide concentrations can be easily obtained by applying the equations to any known radioactive decay series. Taking the thorium radioactive decay series as an example, the concentration evolution over time of the various member nuclides in the family is given by quantitative numerical calculation. The method can be applied to the quantitative prediction and analysis of the daughter nuclides in successive decay with branching in complicated radioactive processes, such as the natural radioactive decay series, nuclear reactors, nuclear waste disposal, nuclear spallation, synthesis and identification of superheavy nuclides, radioactive ion beam physics and chemistry, etc.
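
    For the simplest special case, the Laplace-transform solution of a two-member chain A → B can be written in closed form; the abstract's general branched equations reduce to this when there is a single decay path. The decay constants below are arbitrary illustrative values, not thorium-series data.

```python
import numpy as np

def two_member_chain(n0, lam_a, lam_b, t):
    """Closed-form (Bateman) solution for A -> B -> ..., starting from pure A.

    n0      initial amount of parent A
    lam_a   decay constant of A
    lam_b   decay constant of B
    """
    n_a = n0 * np.exp(-lam_a * t)
    n_b = n0 * lam_a / (lam_b - lam_a) * (np.exp(-lam_a * t) - np.exp(-lam_b * t))
    return n_a, n_b

t = np.linspace(0.0, 100.0, 1001)
n_a, n_b = two_member_chain(1.0, lam_a=0.1, lam_b=0.05, t=t)
```

    The daughter concentration starts at zero, rises while it is fed by the parent, peaks, and then decays; longer chains stack further exponential terms with the usual Bateman coefficients.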

  15. Scale invariance in chaotic time series: Classical and quantum examples

    NASA Astrophysics Data System (ADS)

    Landa, Emmanuel; Morales, Irving O.; Stránský, Pavel; Fossion, Rubén; Velázquez, Victor; López Vieyra, J. C.; Frank, Alejandro

    Important aspects of chaotic behavior appear in systems of low dimension, as illustrated by the map module 1. It is a remarkable fact that all systems that make a transition from order to disorder display common properties, irrespective of their exact functional form. We discuss evidence for 1/f power spectra in the chaotic time series associated with classical and quantum examples: the one-dimensional map module 1 and the spectrum of 48Ca. A Detrended Fluctuation Analysis (DFA) method is applied to investigate the scaling properties of the energy fluctuations in the spectrum of 48Ca obtained with a large realistic shell model calculation (ANTOINE code) and with a random shell model (TBRE) calculation, as well as in the time series obtained with the map module 1. We compare the scale-invariant properties of the 48Ca nuclear spectrum with similar analyses applied to the RMT ensembles GOE and GDE. A comparison with the corresponding power spectra is made in both cases. The possible consequences of the results are discussed.
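
    A compact sketch of first-order DFA as commonly defined (integrate the series, detrend linearly per window, measure the RMS fluctuation versus window size; the scaling exponent is the log-log slope). The white-noise input here is only a stand-in for the nuclear or map time series.

```python
import numpy as np

def dfa(x, scales):
    """First-order Detrended Fluctuation Analysis: returns F(s) per scale s."""
    y = np.cumsum(x - np.mean(x))  # integrated profile
    fluct = []
    for s in scales:
        n_seg = len(y) // s
        rms = []
        for i in range(n_seg):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)  # linear detrend
            rms.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    return np.array(fluct)

rng = np.random.default_rng(1)
noise = rng.normal(size=4096)
scales = np.array([16, 32, 64, 128, 256])
alpha = np.polyfit(np.log(scales), np.log(dfa(noise, scales)), 1)[0]
# Uncorrelated noise gives alpha near 0.5; 1/f noise gives alpha near 1.
```

    The exponent alpha is the quantity such studies compare across ordered, chaotic, and random-matrix spectra.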

  16. How to apply SHA 2011 at a subnational level in China’s practical situation: take children health expenditure as an example

    PubMed Central

    Li, Mingyang; Zheng, Ang; Duan, Wenjuan; Mu, Xin; Liu, Chunli; Yang, Yang; Wang, Xin

    2018-01-01

    Background System of Health Accounts 2011 (SHA 2011) is a new health care accounts system, revised from SHA 1.0 by the Organisation for Economic Co-operation and Development (OECD), the World Health Organization (WHO) and Eurostat. It keeps the former tri-axial relationship and develops three analytical interfaces, in order to fix the existing shortcomings and make it more convenient for analysis and comparison across countries. SHA 2011 was introduced in China in 2014, and little about its application in China has been reported. This study takes children as an example to study how to apply SHA 2011 at the subnational level in the practical situation of China’s health system. Methods Multistage random sampling method was applied and 3 532 517 samples from 252 institutions were included in the study. Official yearbooks and account reports helped the estimation of provincial data. The formula to calculate Current Health Expenditure (CHE) was introduced step-by-step. STATA 10.0 was used for statistics. Results Under the frame of SHA 2011, the CHE for children in Liaoning was calculated as US$ 0.74 billion in 2014; 98.56% of the expenditure was spent in hospital and the allocation to primary health care institutions was insufficient. Infection, maternal and prenatal diseases cost the most in terms of Global Burden of Disease (GBD), and respiratory system diseases took the leading place in terms of International Classification of Disease Tenth Revision (ICD-10). In addition, medical income contributed most to the health financing. Conclusions The method to apply SHA 2011 at the subnational level is feasible in China. It makes health accounts more adaptable to rapidly developing health systems and makes the financing data more readily available for analytical use. SHA 2011 is a better health expenditure accounts system to reveal the actual burden on residents and deserves further promotion in China as well as around the world. PMID:29862027

  17. Building proteins from C alpha coordinates using the dihedral probability grid Monte Carlo method.

    PubMed Central

    Mathiowetz, A. M.; Goddard, W. A.

    1995-01-01

    Dihedral probability grid Monte Carlo (DPG-MC) is a general-purpose method of conformational sampling that can be applied to many problems in peptide and protein modeling. Here we present the DPG-MC method and apply it to predicting complete protein structures from C alpha coordinates. This is useful in such endeavors as homology modeling, protein structure prediction from lattice simulations, or fitting protein structures to X-ray crystallographic data. It also serves as an example of how DPG-MC can be applied to systems with geometric constraints. The conformational propensities for individual residues are used to guide conformational searches as the protein is built from the amino-terminus to the carboxyl-terminus. Results for a number of proteins show that both the backbone and side chain can be accurately modeled using DPG-MC. Backbone atoms are generally predicted with RMS errors of about 0.5 A (compared to X-ray crystal structure coordinates) and all atoms are predicted to an RMS error of 1.7 A or better. PMID:7549885

  18. Strain gage selection in loads equations using a genetic algorithm

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Traditionally, structural loads are measured using strain gages. A loads calibration test must be done before loads can be accurately measured. In one measurement method, a series of point loads is applied to the structure, and loads equations are derived via the least squares curve fitting algorithm using the strain gage responses to the applied point loads. However, many research structures are highly instrumented with strain gages, and the number and selection of gages used in a loads equation can be problematic. This paper presents an improved technique using a genetic algorithm to choose the strain gages used in the loads equations. Also presented are a comparison of the genetic algorithm performance with the current T-value technique and a variant known as the Best Step-down technique. Examples are shown using aerospace vehicle wings of high and low aspect ratio. In addition, a significant limitation in the current methods is revealed. The genetic algorithm arrived at a comparable or superior set of gages with significantly less human effort, and could be applied in instances when the current methods could not.
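
    The least-squares step the calibration relies on can be sketched with synthetic data; the gage count, load cases, and coefficients below are hypothetical. The paper's contribution, choosing which gages enter the equation, would wrap a search such as a genetic algorithm around this fit.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_gages = 40, 6
# Strain-gage responses recorded for each applied calibration point load.
strains = rng.normal(size=(n_cases, n_gages))
true_coeffs = np.array([2.0, -1.0, 0.5, 0.0, 0.0, 3.0])
loads = strains @ true_coeffs  # known applied loads (noiseless here)

# Derive the loads equation: loads ≈ strains @ coeffs.
coeffs, *_ = np.linalg.lstsq(strains, loads, rcond=None)
```

    Gages 4 and 5 contribute nothing in this synthetic setup, so a selection scheme (T-value, Best Step-down, or a genetic algorithm) would try to drop them; the fitness of a candidate gage subset is typically the residual of this same least-squares fit.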

  19. A New Method for Setting Calculation Sequence of Directional Relay Protection in Multi-Loop Networks

    NASA Astrophysics Data System (ADS)

    Haijun, Xiong; Qi, Zhang

    2016-08-01

    The workload of relay protection setting calculation in multi-loop networks may be reduced effectively by optimizing setting calculation sequences. A new method for the setting calculation sequence of directional distance relay protection in multi-loop networks, based on the minimum broken nodes cost vector (MBNCV), is proposed to solve problems experienced with current methods. Existing methods based on the minimum breakpoint set (MBPS) break more edges when untying the loops in the dependency relationships of relays, possibly leading to larger iterative calculation workloads in setting calculations. A model-driven approach based on behavior trees (BT) is presented to improve adaptability to similar problems. After extending the BT model with real-time system characteristics, a timed BT is derived and the dependency relationship in multi-loop networks is modeled. The model is translated into communicating sequential processes (CSP) models and an optimized setting calculation sequence in multi-loop networks is finally calculated by tools. A 5-node multi-loop network is used as an example to demonstrate the effectiveness of the modeling and calculation method. Several examples were then calculated, with results indicating that the method effectively reduces the number of forced broken edges for protection setting calculation in multi-loop networks.

  20. A critical review of published methods for analysis of red cell antigen-antibody reactions by flow cytometry, and approaches for resolving problems with red cell agglutination.

    PubMed

    Arndt, Patricia A; Garratty, George

    2010-07-01

    Flow cytometry operators often apply familiar white blood cell (WBC) methods when studying red blood cell (RBC) antigens and antibodies. Some WBC methods are not appropriate for RBCs, as the analysis of RBCs requires special considerations, for example, avoidance of agglutination. One hundred seventy-six published articles from 88 groups studying RBC interactions were reviewed. Three fourths of the groups used at least one unnecessary WBC procedure for RBCs, and about one fourth did not use any method to prevent/disperse RBC agglutination. Flow cytometric studies were performed to determine the effect of RBC agglutination on results and to compare different methods of preventing and/or dispersing agglutination. The presence of RBC agglutinates has been shown to be affected by the type of pipette tip used for mixing RBC suspensions, the number of antigen sites/RBC, the type and concentration of primary antibody, and the type of secondary antibody. For quantitation methods, for example, fetal maternal hemorrhage, the presence of agglutinates has been shown to adversely affect results (fewer fetal D+ RBCs detected). Copyright 2010 Elsevier Inc. All rights reserved.

  1. Combining Modeling and Monitoring to Produce a New Paradigm of an Integrated Approach to Providing Long-Term Control of Contaminants

    NASA Astrophysics Data System (ADS)

    Fogwell, T. W.

    2009-12-01

    Sir David King, Chief Science Advisor to the British government and Cambridge University Professor, stated in October 2005, "The scientific community is considerably more capable than it has been in the past to assist governments to avoid and reduce risk to their own populations. Prime ministers and presidents ignore the advice from the science community at the peril of their own populations." Some of these greater capabilities can be found in better monitoring techniques applied to better modeling methods. These modeling methods can be combined with the information derived from monitoring data in order to decrease the risk of population exposure to dangerous substances and to promote efficient control or cleanup of the contaminants. An introduction is presented to the types of problems that exist for long-term control of radionuclides at DOE sites. A breakdown of the distributions at specific sites is given, together with the associated difficulties. A paradigm for remediation showing the integration of monitoring with modeling is presented. It is based on a feedback system in which the monitoring instruments act as the principal sensors of a control system, and the resulting system can be optimized to improve performance. Optimizing monitoring automatically entails linking it with modeling: as soon as monitoring designs are required to be more efficient, the monitoring becomes linked to models. Records of decision could be written to accommodate revisions in monitoring as better modeling evolves. Currently, the establishment of a very prescriptive monitoring program lacks a mechanism for improving models and improving control of the contaminants. The technical pieces of the required paradigm are already available; they need only to be implemented and applied to the long-term control of the contaminants. An integration of the various parts of the system is presented.
Each part is described, and examples are given. References are given to other projects which bring together similar elements in systems for the control of contaminants. Trends are given for the development of the technical features of a robust system. Examples of monitoring methods for specific sites are given. The examples are used to illustrate how such a system would work. Examples of technology needs are presented. Finally, other examples of integrated modeling-monitoring approaches are presented.

  2. Artificial intelligence in sports on the example of weight training.

    PubMed

    Novatchkov, Hristo; Baca, Arnold

    2013-01-01

    The overall goal of the present study was to illustrate the potential of artificial intelligence (AI) techniques in sports on the example of weight training. The research focused in particular on the implementation of pattern recognition methods for the evaluation of performed exercises on training machines. The data acquisition was carried out using way and cable force sensors attached to various weight machines, thereby enabling the measurement of essential displacement and force determinants during training. On the basis of the gathered data, it was consequently possible to deduce other significant characteristics like time periods or movement velocities. These parameters were applied for the development of intelligent methods adapted from conventional machine learning concepts, allowing an automatic assessment of the exercise technique and providing individuals with appropriate feedback. In practice, the implementation of such techniques could be crucial for the investigation of the quality of the execution, the assistance of athletes but also coaches, the training optimization and for prevention purposes. For the current study, the data was based on measurements from 15 rather inexperienced participants, performing 3-5 sets of 10-12 repetitions on a leg press machine. The initially preprocessed data was used for the extraction of significant features, on which supervised modeling methods were applied. Professional trainers were involved in the assessment and classification processes by analyzing the video recorded executions. The so far obtained modeling results showed good performance and prediction outcomes, indicating the feasibility and potency of AI techniques in assessing performances on weight training equipment automatically and providing sportsmen with prompt advice. 
Key points: Artificial intelligence is a promising field for sport-related analysis. Implementations integrating pattern recognition techniques enable the automatic evaluation of data measurements. Artificial neural networks applied for the analysis of weight training data show good performance and high classification rates.

  3. Artificial Intelligence in Sports on the Example of Weight Training

    PubMed Central

    Novatchkov, Hristo; Baca, Arnold

    2013-01-01

    The overall goal of the present study was to illustrate the potential of artificial intelligence (AI) techniques in sports, using weight training as an example. The research focused in particular on the implementation of pattern recognition methods for the evaluation of exercises performed on training machines. Data acquisition was carried out using displacement ("way") and cable-force sensors attached to various weight machines, enabling the measurement of essential displacement and force determinants during training. On the basis of the gathered data, it was possible to deduce other significant characteristics such as time periods or movement velocities. These parameters were used to develop intelligent methods adapted from conventional machine learning concepts, allowing an automatic assessment of exercise technique and providing individuals with appropriate feedback. In practice, the implementation of such techniques could be crucial for investigating the quality of execution, for assisting both athletes and coaches, for training optimization, and for prevention purposes. For the current study, the data were based on measurements from 15 rather inexperienced participants, each performing 3-5 sets of 10-12 repetitions on a leg press machine. The preprocessed data were used for the extraction of significant features, to which supervised modeling methods were applied. Professional trainers were involved in the assessment and classification processes by analyzing the video-recorded executions. The modeling results obtained so far show good performance and prediction outcomes, indicating the feasibility and potential of AI techniques for automatically assessing performance on weight training equipment and providing athletes with prompt advice. Key points: Artificial intelligence is a promising field for sport-related analysis. Implementations integrating pattern recognition techniques enable the automatic evaluation of data measurements. Artificial neural networks applied to the analysis of weight training data show good performance and high classification rates. PMID:24149722
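The pipeline the abstract describes (per-repetition feature extraction → supervised model → technique feedback) can be sketched with a deliberately simple stand-in classifier. This is a minimal sketch under invented assumptions: the feature values, labels, and the nearest-centroid rule below are all illustrative, not the study's neural-network models or data.

```python
# Sketch of the pipeline: per-repetition features -> supervised classifier
# -> feedback.  Features and numbers are invented for illustration; the
# study itself used artificial neural networks, not a nearest-centroid rule.

def centroid(rows):
    """Component-wise mean of a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def classify(x, centroids):
    """Nearest-centroid rule: assign x to the closest class centroid."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(x, centroids[label]))

# Hypothetical training data: (repetition duration [s], peak velocity [m/s]),
# labelled by a trainer as 'good' or 'poor' technique.
train = {
    "good": [[2.1, 0.35], [2.3, 0.32], [2.0, 0.36]],   # slow, controlled reps
    "poor": [[0.9, 0.80], [1.1, 0.75], [0.8, 0.85]],   # rushed, jerky reps
}
centroids = {label: centroid(rows) for label, rows in train.items()}

print(classify([2.2, 0.33], centroids))   # -> good
print(classify([1.0, 0.78], centroids))   # -> poor
```

In the study, the features came from the displacement and force signals and the labels from the trainers' video assessments; the classifier above merely stands in for the trained networks.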

  4. Output Tracking for Systems with Non-Hyperbolic and Near Non-Hyperbolic Internal Dynamics: Helicopter Hover Control

    NASA Technical Reports Server (NTRS)

    Devasia, Santosh

    1996-01-01

    A technique to achieve output tracking for nonminimum phase linear systems with non-hyperbolic and near non-hyperbolic internal dynamics is presented. This approach integrates stable inversion techniques, which achieve exact tracking, with approximation techniques, which modify the internal dynamics to achieve desirable performance. Such modification of the internal dynamics is used (1) to remove non-hyperbolicity, which is an obstruction to applying stable inversion techniques, and (2) to reduce the large pre-actuation time needed to apply stable inversion in near non-hyperbolic cases. The method is applied to an example helicopter hover control problem with near non-hyperbolic internal dynamics, illustrating the trade-off between exact tracking and reduction of pre-actuation time.

  5. Dissipative Prototyping Methods: A Manifesto

    NASA Astrophysics Data System (ADS)

    Beesley, P.

    Taking a designer's unique perspective and using examples of practice in experimental installation and digital prototyping, this manifesto acts as a provocation for change, seeking to unlock new potential by encouraging changes of perspective about the material realm. Diffusive form-language is proposed as a paradigm for architectural design. This method of design is applied through 3D printing and related digital fabrication methods, offering new qualities that can be implemented in the design of realms including the present earth and future interplanetary environments. A paradigm shift is encouraged by questioning conventional notions of geometry that minimize interfaces and by proposing the alternatives of maximized interfaces formed by effusive kinds of formal composition. A series of projects from the Canadian research studio of the Hylozoic Architecture group are described, providing examples of component design methods employing diffusive forms within combinations of tension-integrity structural systems integrated with hybrid metabolisms employing synthetic biology. Cultural implications are also discussed, drawing from architectural theory and natural philosophy. The paper concludes by suggesting that the practice of diffusive prototyping can offer formative strategies contributing to the design of future living systems.

  6. A Wavelet-based Fast Discrimination of Transformer Magnetizing Inrush Current

    NASA Astrophysics Data System (ADS)

    Kitayama, Masashi

    Recently, customers who need electricity of higher quality have been installing co-generation facilities. By supplying electricity to important loads from their own generators, they can ride through voltage sags and other distribution-system disturbances. Another example is FRIENDS, a highly reliable distribution system that uses semiconductor switches and storage devices based on power electronics technology. These examples illustrate that the demand for high reliability in distribution systems is increasing. In order to realize such systems, fast relaying algorithms are indispensable. The author proposes a new method of detecting magnetizing inrush current using the discrete wavelet transform (DWT), which provides a means of detecting discontinuities in the current waveform. Inrush current occurs when the transformer core becomes saturated. The proposed method detects spikes in the DWT components caused by the discontinuity of the current waveform at both the beginning and the end of inrush current. Wavelet thresholding, a wavelet-based statistical modeling technique, was applied to detect the DWT component spikes. The proposed method is verified using experimental data from a single-phase transformer and is shown to be effective.
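The core idea — a discontinuity in the current waveform produces an outlying detail coefficient in the DWT — can be sketched with a single-level Haar transform and a robust threshold. The wavelet choice, the median-based threshold rule, and the synthetic waveform below are our assumptions, not the paper's exact implementation.

```python
import math

def haar_detail(x):
    """Level-1 Haar DWT detail coefficients: d[k] = (x[2k] - x[2k+1]) / sqrt(2)."""
    return [(x[2 * k] - x[2 * k + 1]) / math.sqrt(2.0)
            for k in range(len(x) // 2)]

def detect_discontinuities(x, mult=5.0):
    """Flag detail coefficients far above a robust (median-based) noise level."""
    d = haar_detail(x)
    mad = sorted(abs(v) for v in d)[len(d) // 2]   # median absolute coefficient
    return [k for k, v in enumerate(d) if abs(v) > mult * (mad + 1e-12)]

# Synthetic "current": a smooth sine with a sudden offset from sample 129 on,
# standing in for the waveform discontinuity at the onset of inrush.
n = 256
x = [math.sin(2 * math.pi * 2 * t / n) for t in range(n)]
for t in range(129, n):
    x[t] += 2.0

spikes = detect_discontinuities(x)
print(spikes)  # -> [64]: the pair (x[128], x[129]) straddles the discontinuity
```

The smooth part of the waveform yields small detail coefficients everywhere, so only the pair straddling the jump crosses the threshold; in the paper the same effect marks both the beginning and the end of the inrush interval.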

  7. Systems and methods for the combinatorial synthesis of novel materials

    DOEpatents

    Wu, Xin Di; Wang, Youqi; Goldwasser, Isy

    2000-01-01

    Methods and apparatus for the preparation of a substrate having an array of diverse materials in predefined regions thereon. A substrate having an array of diverse materials thereon is generally prepared by depositing components of target materials to predefined regions on the substrate, and, in some embodiments, simultaneously reacting the components to form at least two resulting materials. In particular, the present invention provides novel masking systems and methods for applying components of target materials onto a substrate in a combinatorial fashion, thus creating arrays of resulting materials that differ slightly in composition, stoichiometry, and/or thickness. Using the novel masking systems of the present invention, components can be delivered to each site in a uniform distribution, or in a gradient of stoichiometries, thicknesses, compositions, etc. Resulting materials which can be prepared using the methods and apparatus of the present invention include, for example, covalent network solids, ionic solids and molecular solids. Once prepared, these resulting materials can be screened sequentially, or in parallel, for useful properties including, for example, electrical, thermal, mechanical, morphological, optical, magnetic, chemical and other properties.

  8. Computing Earthquake Probabilities on Global Scales

    NASA Astrophysics Data System (ADS)

    Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.

    2016-03-01

    Large events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be both unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in such systems. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes with elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
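The counting step can be sketched directly: the number of small events since the last large one is converted into a probability through a Weibull law. The scale and shape values below are invented for illustration; in the method they would be fitted to the regional catalogue.

```python
import math

def large_event_probability(n_small, beta, alpha):
    """Weibull CDF applied to the count of small events since the last
    large one:  P = 1 - exp(-(n/beta)**alpha).
    beta (scale) and alpha (shape) must be fitted to a region's
    catalogue; the values used below are invented for illustration."""
    return 1.0 - math.exp(-((n_small / beta) ** alpha))

# Example: suppose a fitted scale of 300 small events and shape 1.5.
# The probability of the next large event rises as small events accumulate.
for n in (50, 150, 300, 600):
    print(n, round(large_event_probability(n, beta=300.0, alpha=1.5), 3))
```

At the scale count (n = beta) the probability is exactly 1 - e^(-1) ≈ 0.632, and the shape parameter controls how sharply it rises on either side.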

  9. Philosophy of science and the diagnostic process.

    PubMed

    Willis, Brian H; Beebee, Helen; Lasserson, Daniel S

    2013-10-01

    This is an overview of the principles that underpin the philosophy of science and how they may provide a framework for the diagnostic process. Although philosophy dates back to antiquity, it is only more recently that philosophers have begun to enunciate the scientific method. Since Aristotle formulated deduction, other modes of reasoning have emerged, including induction, inference to the best explanation, falsificationism, theory-laden observation and Bayesian inference. Thus, rather than representing a single overriding dogma, the scientific method is a toolkit of ideas and principles of reasoning. Here we demonstrate that the diagnostic process is an example of science in action and is therefore subject to the principles encompassed by the scientific method. Although clinicians readily use a number of these forms of reasoning in practice, without a clear understanding of their pitfalls and the assumptions on which they are based, doctors are left open to diagnostic error. We conclude with a case example from the medico-legal literature in which diagnostic errors were made, to illustrate how applying the scientific method may reduce the chance of diagnostic error.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Won, Yoo Jai; Ki, Hyungson

    A novel picosecond-laser pulsed laser deposition method has been developed for fabricating functionally graded films with pre-designed gradient profiles. Theoretically, the developed method is capable of precisely fabricating films with any thicknesses and any gradient profiles by controlling the laser beam powers for the two different targets based on the film composition profiles. As an implementation example, we have successfully constructed functionally graded diamond-like carbon films with six different gradient profiles: linear, quadratic, cubic, square root, cubic root, and sinusoidal. Energy dispersive X-ray spectroscopy is employed for investigating the chemical composition along the thickness of the film, and the deposition profile and thickness errors are found to be less than 3% and 1.04%, respectively. To the best of the authors' knowledge, this is the first method for fabricating films with designed gradient profiles and has huge potential in many areas of coatings and films, including multifunctional optical films. We believe that this method is not only limited to the example considered in this study, but also can be applied to all material combinations as long as they can be deposited using the pulsed laser deposition technique.
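The control idea — steer the two beam powers so that composition tracks a designed profile c(z) — can be sketched as follows. The six profile shapes are the ones named in the abstract, but the assumption that the power fraction simply equals the desired composition fraction is ours, for illustration only; the real mapping would be calibrated against the deposition rates of the two targets.

```python
import math

# Power fraction for target B at normalized depth z (0 = substrate side,
# 1 = film surface), tracking the designed composition profile c(z).
# These are the six profile shapes named in the abstract; the direct
# power == composition assumption is illustrative, not the paper's model.
profiles = {
    "linear":      lambda z: z,
    "quadratic":   lambda z: z ** 2,
    "cubic":       lambda z: z ** 3,
    "square root": lambda z: math.sqrt(z),
    "cubic root":  lambda z: z ** (1.0 / 3.0),
    "sinusoidal":  lambda z: 0.5 * (1.0 - math.cos(math.pi * z)),
}

def power_schedule(profile, steps=5):
    """Power fraction for target B at each of `steps` deposition steps."""
    return [profile(k / (steps - 1)) for k in range(steps)]

for name, f in profiles.items():
    print(name, [round(p, 2) for p in power_schedule(f)])
```

Every schedule runs from 0 (pure target-A material) to 1 (pure target-B material); only the path between the endpoints differs.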

  11. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    PubMed

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two-stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two-stage computations and then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates a number of variations in the two-stage computation and in the formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
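As a concrete instance of a coarse-then-fine two-stage computation, here is a minimal sketch of the Parareal iteration for the scalar test problem y' = -y; the propagator choices and step counts are illustrative, and no a posteriori error estimate is computed.

```python
import math

# Minimal Parareal sketch for y' = -y, y(0) = 1 on [0, T]: a coarse
# propagator G (one Euler step per time slice) and a fine propagator F
# (many Euler substeps), combined by the standard correction
#   y_{n+1}^{k+1} = G(y_n^{k+1}) + F(y_n^k) - G(y_n^k).
lam = -1.0

def euler(y, dt, substeps):
    h = dt / substeps
    for _ in range(substeps):
        y = y + h * lam * y
    return y

def parareal(T=1.0, slices=10, iters=4):
    dt = T / slices
    G = lambda y: euler(y, dt, 1)      # coarse: one Euler step per slice
    F = lambda y: euler(y, dt, 100)    # fine: 100 substeps per slice
    y = [1.0]
    for n in range(slices):            # iteration 0: serial coarse sweep
        y.append(G(y[n]))
    for _ in range(iters):             # Parareal corrections
        Fy = [F(y[n]) for n in range(slices)]   # parallelizable in principle
        new = [1.0]
        for n in range(slices):
            new.append(G(new[n]) + Fy[n] - G(y[n]))
        y = new
    return y[-1]

print(abs(parareal() - math.exp(-1.0)))  # error drops to the fine-solver level
```

The fine sweeps are independent across slices, which is where the parallel-in-time speedup comes from; after a few iterations the result matches the serial fine solution, so the remaining error is the fine discretization error that the paper's adjoint-based analysis would estimate.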

  12. Total decay and transition rates from LQCD

    NASA Astrophysics Data System (ADS)

    Hansen, Maxwell T.; Meyer, Harvey B.; Robaina, Daniel

    2018-03-01

    We present a new technique for extracting total transition rates into final states with any number of hadrons from lattice QCD. The method involves constructing a finite-volume Euclidean four-point function whose corresponding infinite-volume spectral function gives access to the decay and transition rates into all allowed final states. The inverse problem of calculating the spectral function is solved via the Backus-Gilbert method, which automatically includes a smoothing procedure. This smoothing is in fact required so that an infinite-volume limit of the spectral function exists. Using a numerical toy example we find that reasonable precision can be achieved with realistic lattice data. In addition, we discuss possible extensions of our approach and, as an example application, prospects for applying the formalism to study the onset of deep-inelastic scattering. More details are given in the published version of this work, Ref. [1].

  13. Entanglement branching operator

    NASA Astrophysics Data System (ADS)

    Harada, Kenji

    2018-01-01

    We introduce an entanglement branching operator to split a composite entanglement flow in a tensor network, a promising theoretical tool for many-body systems. An entanglement branching operator can be optimized by solving a minimization problem based on squeezing operators. Entanglement branching is a useful new operation for manipulating a tensor network. For example, by finding a particular entanglement structure with an entanglement branching operator, we can improve a higher-order tensor renormalization group method so that it captures a proper renormalization flow in tensor network space. This new method yields a new type of tensor network state. A second example is a many-body decomposition of a tensor by means of an entanglement branching operator, which can be used for perfect disentangling among tensors. Applying the many-body decomposition recursively, we conceptually derive projected entangled pair states from quantum states that satisfy the area law of entanglement entropy.

  14. Regioselective, borinic acid-catalyzed monoacylation, sulfonylation and alkylation of diols and carbohydrates: expansion of substrate scope and mechanistic studies.

    PubMed

    Lee, Doris; Williamson, Caitlin L; Chan, Lina; Taylor, Mark S

    2012-05-16

    Synthetic and mechanistic aspects of the diarylborinic acid-catalyzed regioselective monofunctionalization of 1,2- and 1,3-diols are presented. Diarylborinic acid catalysis is shown to be an efficient and general method for monotosylation of pyranoside derivatives bearing three secondary hydroxyl groups (7 examples, 88% average yield). In addition, the scope of the selective acylation, sulfonylation, and alkylation is extended to 1,2- and 1,3-diols not derived from carbohydrates (28 examples); the efficiency, generality, and operational simplicity of this method are competitive with those of state-of-the-art protocols including the broadly applied organotin-catalyzed or -mediated reactions. Mechanistic details of the organoboron-catalyzed processes are explored using competition experiments, kinetics, and catalyst structure-activity relationships. These experiments are consistent with a mechanism in which a tetracoordinate borinate complex reacts with the electrophilic species in the turnover-limiting step of the catalytic cycle.

  15. Rheological Principles for Food Analysis

    NASA Astrophysics Data System (ADS)

    Daubert, Christopher R.; Foegeding, E. Allen

    Food scientists are routinely confronted with the need to measure physical properties related to sensory texture and processing needs. These properties are determined by rheological methods, where rheology is a science devoted to the deformation and flow of all materials. Rheological properties should be considered a subset of the textural properties of foods, because the sensory detection of texture encompasses factors beyond rheological properties. Specifically, rheological methods accurately measure "force," "deformation," and "flow," and food scientists and engineers must determine how best to apply this information. For example, the flow of salad dressing from a bottle, the snapping of a candy bar, or the pumping of cream through a homogenizer are each related to the rheological properties of these materials. In this chapter, we describe fundamental concepts pertinent to the understanding of the subject and discuss typical examples of rheological tests for common foods. A glossary is included as Sect. 30.6 to clarify and summarize rheological definitions throughout the chapter.

  16. A relational metric, its application to domain analysis, and an example analysis and model of a remote sensing domain

    NASA Technical Reports Server (NTRS)

    Mcgreevy, Michael W.

    1995-01-01

    An objective and quantitative method has been developed for deriving models of complex and specialized spheres of activity (domains) from domain-generated verbal data. The method was developed for analysis of interview transcripts, incident reports, and other text documents whose original source is people who are knowledgeable about, and participate in, the domain in question. To test the method, it is applied here to a report describing a remote sensing project within the scope of the Earth Observing System (EOS). The method has the potential to improve the designs of domain-related computer systems and software by quickly providing developers with explicit and objective models of the domain in a form which is useful for design. Results of the analysis include a network model of the domain, and an object-oriented relational analysis report which describes the nodes and relationships in the network model. Other products include a database of relationships in the domain, and an interactive concordance. The analysis method utilizes a newly developed relational metric, a proximity-weighted frequency of co-occurrence. The metric is applied to relations between the most frequently occurring terms (words or multiword entities) in the domain text, and the terms found within the contexts of these terms. Contextual scope is selectable. Because of the discriminating power of the metric, data reduction from the association matrix to the network is simple. In addition to their value for design, the models produced by the method are also useful for understanding the domains themselves. They can, for example, be interpreted as models of presence in the domain.
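A proximity-weighted frequency of co-occurrence can be sketched in a few lines. The abstract does not give the exact weighting function, so the 1/distance decay, the window size, and the toy text below are illustrative assumptions.

```python
from collections import defaultdict

def relational_metric(tokens, window=4):
    """Proximity-weighted frequency of co-occurrence: every time two terms
    appear within `window` tokens of each other, add a weight that decays
    with their distance (1/distance here -- the report's exact weighting
    is not specified, so this choice is illustrative)."""
    assoc = defaultdict(float)
    for i, a in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            b = tokens[j]
            if a != b:
                assoc[(a, b)] += 1.0 / (j - i)
    return dict(assoc)

# Tiny invented "domain text"; repeated nearby pairs accumulate weight.
text = ("sensor data feed the model the model maps sensor data "
        "to terrain class").split()
assoc = relational_metric(text)

# Ordered pairs with the largest association weights:
top = sorted(assoc, key=assoc.get, reverse=True)[:3]
print(top)
```

In the method itself the metric relates the most frequent domain terms to the terms in their contexts, so in practice one would first select those terms (and typically filter function words such as "the", which otherwise dominate, as they do in this toy run); thresholding the resulting association matrix then yields the network model directly.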

  17. A Numerical Method for Obtaining Monoenergetic Neutron Flux Distributions and Transmissions in Multiple-Region Slabs

    NASA Technical Reports Server (NTRS)

    Schneider, Harold

    1959-01-01

    This method is investigated for semi-infinite multiple-slab configurations of arbitrary width, composition, and source distribution. Isotropic scattering in the laboratory system is assumed. Isotropic scattering implies that the fraction of neutrons scattered in the i(sup th) volume element or subregion that will make their next collision in the j(sup th) volume element or subregion is the same for all collisions. These so-called "transfer probabilities" between subregions are calculated and used to obtain successive-collision densities from which the flux and transmission probabilities directly follow. For a thick slab with little or no absorption, a successive-collisions technique proves impractical because an unreasonably large number of collisions must be followed in order to obtain the flux. Here the appropriate integral equation is converted into a set of linear simultaneous algebraic equations that are solved for the average total flux in each subregion. When ordinary diffusion theory applies with satisfactory precision in a portion of the multiple-slab configuration, the problem is solved by ordinary diffusion theory, but the flux is plotted only in the region of validity. The angular distribution of neutrons entering the remaining portion is determined from the known diffusion flux and the remaining region is solved by higher order theory. Several procedures for applying the numerical method are presented and discussed. To illustrate the calculational procedure, a symmetrical slab in a vacuum is worked by the numerical, Monte Carlo, and P(sub 3) spherical harmonics methods. In addition, an unsymmetrical double-slab problem is solved by the numerical and Monte Carlo methods. The numerical approach proved faster and more accurate in these examples. Adaptation of the method to anisotropic scattering in slabs is indicated, although no example is included in this paper.
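The two solution routes in the abstract — following successive collisions versus solving the equivalent set of linear simultaneous equations — can be illustrated on a toy problem. The 3-subregion transfer probabilities and source below are invented numbers, not derived from any slab geometry or cross sections.

```python
# Toy illustration: "transfer probabilities" T[i][j] give the chance that a
# neutron scattered in subregion i makes its next collision in subregion j;
# with scattering probability c per collision and first-collision density s,
# the total collision density phi satisfies the linear system
#     phi[j] = s[j] + c * sum_i T[i][j] * phi[i].
# All numbers here are invented for illustration.
c = 0.9
T = [[0.50, 0.20, 0.05],
     [0.20, 0.50, 0.20],
     [0.05, 0.20, 0.50]]   # rows sum to < 1: the remainder leaks out
s = [1.0, 0.5, 0.25]
n = len(s)

# Route 1: successive collisions (Neumann series) -- impractical as c -> 1,
# which is exactly the thick, weakly absorbing case the abstract mentions.
phi_iter = [0.0] * n
for _ in range(300):
    phi_iter = [s[j] + c * sum(T[i][j] * phi_iter[i] for i in range(n))
                for j in range(n)]

# Route 2: solve (I - c*T^t) phi = s directly by Gaussian elimination.
A = [[(1.0 if i == j else 0.0) - c * T[j][i] for j in range(n)]
     for i in range(n)]
M = [A[i][:] + [s[i]] for i in range(n)]
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    for r in range(col + 1, n):
        f = M[r][col] / M[col][col]
        for cc in range(col, n + 1):
            M[r][cc] -= f * M[col][cc]
phi = [0.0] * n
for r in range(n - 1, -1, -1):
    phi[r] = (M[r][n] - sum(M[r][cc] * phi[cc]
                            for cc in range(r + 1, n))) / M[r][r]

print([round(v, 6) for v in phi_iter])
print([round(v, 6) for v in phi])   # the two routes agree
```

With c well below 1 the collision series converges quickly and both routes match; as c approaches 1 the series stalls and the direct linear-system route of the paper becomes the practical one.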

  18. Differential-Integral method in polymer processing: Taking melt electrospinning technique for example

    NASA Astrophysics Data System (ADS)

    Haoyi, Li; Weimin, Yang; Hongbo, Chen; Jing, Tan; Pengcheng, Xie

    2016-03-01

    A concept of the Differential-Integral (DI) method applied in polymer processing and molding is proposed, encompassing melt DI injection molding, DI nano-composites extrusion molding, and melt differential electrospinning principles and equipment. Melt differential electrospinning is taken as the example to introduce recent innovations. Two methods of preparing polymer ultrafine fibers have been developed: solution electrospinning and melt electrospinning, of which solution electrospinning is much simpler to realize in the laboratory. More than 100 institutions have conducted research on it and more than 30 thousand papers have been published. However, its industrialization has been restricted to some extent by the toxic solvents present during the spinning process and by the poor mechanical strength of the resultant fibers caused by small pores on the fiber surface. Solvent-free melt electrospinning is environmentally friendly and highly productive. However, problems such as high melt viscosity, thick fiber diameters and complex equipment have left it relatively under-researched compared with solution electrospinning. To overcome the shortcomings of traditional electrospinning equipment with needles or capillaries, a melt differential electrospinning method without needles or capillaries was first proposed. Nearly 50 related patents have been applied for since 2005, and systematic method innovations and experimental studies have been conducted. Fibers prepared by this method exhibit small diameters and smooth surfaces: the average fiber diameter can reach 200-800 nm, and a single nozzle can yield two orders of magnitude more than capillary-based designs. Based on the above principle, complete commercial techniques and equipment have been developed to produce ultra-fine non-woven fabrics for applications in air filtration, oil-spill recovery and water treatment, among others.

  19. Multiscale global identification of porous structures

    NASA Astrophysics Data System (ADS)

    Hatłas, Marcin; Beluch, Witold

    2018-01-01

    The paper is devoted to the evolutionary identification of the material constants of porous structures on the basis of measurements conducted at the macro scale. Numerical homogenization with the RVE concept is used to determine the equivalent properties of a macroscopically homogeneous material. Finite element method software is applied to solve the boundary-value problem in both scales. A global optimization method in the form of an evolutionary algorithm is employed to solve the identification task. Modal analysis is performed to collect the data necessary for the identification. A numerical example demonstrating the effectiveness of the proposed approach is included.

  20. The method of Ritz applied to the equation of Hamilton. [for pendulum systems

    NASA Technical Reports Server (NTRS)

    Bailey, C. D.

    1976-01-01

    Without any reference to the theory of differential equations, the initial value problem of the nonlinear, nonconservative double pendulum system is solved by the application of the method of Ritz to the equation of Hamilton. Also shown is an example of the reduction of the traditional eigenvalue problem of linear, homogeneous, differential equations of motion to the solution of a set of nonhomogeneous algebraic equations. No theory of differential equations is used. Solution of the time-space path of the linear oscillator is demonstrated and compared to the exact solution.
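The flavor of the approach — a trial solution with free coefficients substituted into Hamilton's law of varying action, yielding algebraic equations with no recourse to the theory of differential equations — can be sketched for the linear oscillator, whose time-space path the abstract says is compared to the exact solution. The polynomial basis and assembly below are our reconstruction for the conservative linear case, not Bailey's exact formulation (which also handles the nonconservative double pendulum).

```python
import math

# Ritz solution of x'' + w^2 x = 0, x(0)=x0, x'(0)=v0, via Hamilton's law
# of varying action.  Trial path x(t) = x0 + v0*t + sum_j c_j t^(j+1)
# (variations vanish at t = 0), substituted into, for each basis p_k = t^(k+1),
#     int_0^T (x' p_k' - w^2 x p_k) dt - x'(T) p_k(T) = 0.
# All integrals are of monomials, so the c_j satisfy a small linear system.
# This reconstruction is ours, for the conservative linear case only.

def gauss_solve(A, b):
    """Tiny Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for cc in range(col, n + 1):
                M[r][cc] -= f * M[col][cc]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][cc] * x[cc]
                              for cc in range(r + 1, n))) / M[r][r]
    return x

def ritz_oscillator(x0, v0, w, T, N=6):
    A = [[0.0] * N for _ in range(N)]
    b = [0.0] * N
    for k in range(1, N + 1):
        for j in range(1, N + 1):
            A[k-1][j-1] = ((j+1) * (k+1) * T**(j+k+1) / (j+k+1)
                           - w*w * T**(j+k+3) / (j+k+3)
                           - (j+1) * T**(j+k+1))          # boundary term at T
        b[k-1] = w*w * (x0 * T**(k+2) / (k+2) + v0 * T**(k+3) / (k+3))
    c = gauss_solve(A, b)
    return lambda t: x0 + v0*t + sum(c[j-1] * t**(j+1) for j in range(1, N+1))

x = ritz_oscillator(x0=1.0, v0=0.0, w=1.0, T=1.0)
print(x(1.0), math.cos(1.0))   # the Ritz value tracks the exact cos(t)
```

Note that only algebra enters: the initial value problem is reduced to a set of nonhomogeneous algebraic equations, exactly the reduction the abstract describes for the eigenvalue case.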

  1. 3D modeling of underground objects with the use of SLAM technology on the example of historical mine in Ciechanowice (Ołowiane Range, The Sudetes)

    NASA Astrophysics Data System (ADS)

    Wajs, Jaroslaw; Kasza, Damian; Zagożdżon, Paweł P.; Zagożdżon, Katarzyna D.

    2018-01-01

    Terrestrial laser scanning is currently one of the most popular methods for producing representations of 3D objects. This paper presents the potential of applying the mobile laser scanning method to the inventory of underground objects. The examined location was a historic crystalline limestone mine situated in the vicinity of Ciechanowice village (Kaczawa Mts., SW Poland). The authors present a methodology for performing the measurements and for processing the obtained results, whose accuracy is additionally verified.

  2. Method and apparatus for imparting strength to a material using sliding loads

    DOEpatents

    Hughes, Darcy Anne; Dawson, Daniel B.; Korellis, John S.

    1999-01-01

    A method of enhancing the strength of metals by affecting subsurface zones developed during the application of large sliding loads. Stresses which develop locally within the near surface zone can be many times larger than those predicted from the applied load and the friction coefficient. These stress concentrations arise from two sources: 1) asperity interactions and 2) local and momentary bonding between the two surfaces. By controlling these parameters more desirable strength characteristics can be developed in weaker metals to provide much greater strength to rival that of steel, for example.

  3. Method And Apparatus For Imparting Strength To Materials Using Sliding Loads

    DOEpatents

    Hughes, Darcy Anne; Dawson, Daniel B.; Korellis, John S.

    1999-03-16

    A method of enhancing the strength of metals by affecting subsurface zones developed during the application of large sliding loads. Stresses which develop locally within the near surface zone can be many times larger than those predicted from the applied load and the friction coefficient. These stress concentrations arise from two sources: 1) asperity interactions and 2) local and momentary bonding between the two surfaces. By controlling these parameters more desirable strength characteristics can be developed in weaker metals to provide much greater strength to rival that of steel, for example.

  4. Research on large equipment maintenance system in life cycle

    NASA Astrophysics Data System (ADS)

    Xu, Xiaowei; Wang, Hongxia; Liu, Zhenxing; Zhang, Nan

    2017-06-01

    In order to move beyond the disadvantages of the traditional approach to large equipment maintenance, this article applies the technical methods of prognostics and health management to optimize equipment maintenance strategy and to develop a large equipment maintenance system. Combining the maintenance procedures of the various phases of the life cycle, it presents methods for formulating maintenance programs and implementation plans for maintenance work. It also uses the example of the dredger power system of the Waterway Bureau to establish an auxiliary platform for a ship maintenance system covering the life cycle.

  5. Method for preparing a thick film conductor

    DOEpatents

    Nagesh, Voddarahalli K.; Fulrath, deceased, Richard M.

    1978-01-01

    A method for preparing a thick film conductor which comprises providing surface active glass particles, mixing the surface active glass particles with a thermally decomposable organometallic compound, for example, a silver resinate, and then decomposing the organometallic compound by heating, thereby chemically depositing metal on the glass particles. The glass particle mixture is applied to a suitable substrate either before or after the organometallic compound is thermally decomposed. The resulting system is then fired in an oxidizing atmosphere, providing a microstructure of glass particles substantially uniformly coated with metal.

  6. Quasi-cylindrical theory of wing-body interference at supersonic speeds and comparison with experiment

    NASA Technical Reports Server (NTRS)

    Nielsen, Jack N

    1955-01-01

    A theoretical method is presented for calculating the flow field about wing-body combinations employing bodies deviating only slightly in shape from a circular cylinder. The method is applied to the calculation of the pressure field acting between a circular cylindrical body and a rectangular wing. The case of zero body angle of attack and variable wing incidence is considered as well as the case of zero wing incidence and variable body angle of attack. An experiment was performed especially for the purpose of checking the calculative examples.

  7. Overview of chemical imaging methods to address biological questions.

    PubMed

    da Cunha, Marcel Menezes Lyra; Trepout, Sylvain; Messaoudi, Cédric; Wu, Ting-Di; Ortega, Richard; Guerquin-Kern, Jean-Luc; Marco, Sergio

    2016-05-01

    Chemical imaging offers extensive possibilities for better understanding of biological systems by allowing the identification of chemical components at the tissue, cellular, and subcellular levels. In this review, we introduce modern methods for chemical imaging that can be applied to biological samples. This work is mainly addressed to the biological sciences community and includes the bases of different technologies, some examples of its application, as well as an introduction to approaches on combining multimodal data. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Application of geologic-mathematical 3D modeling for complex structure deposits by the example of Lower- Cretaceous period depositions in Western Ust - Balykh oil field (Khanty-Mansiysk Autonomous District)

    NASA Astrophysics Data System (ADS)

    Perevertailo, T.; Nedolivko, N.; Prisyazhnyuk, O.; Dolgaya, T.

    2015-11-01

    The complex structure of the Lower-Cretaceous formation has been studied using the example of the reservoir BC101 in the Western Ust-Balykh oil field (Khanty-Mansiysk Autonomous District). Reservoir range relationships have been identified. A 3D geologic-mathematical modeling technique that accounts for the heterogeneity and variability of the natural reservoir structure has been suggested. To improve the integrity of the model of the deposit's geological structure, methods of mathematical statistics were applied, which in turn made it possible to obtain equal-probability models from the same input data and to take into account the formation conditions of the reservoir rocks and cap rocks.

  9. Role of CFD in propulsion design - Government perspective

    NASA Technical Reports Server (NTRS)

    Schutzenhofer, L. A.; Mcconnaughey, H. V.; Mcconnaughey, P. K.

    1990-01-01

    Various aspects of computational fluid dynamics (CFD), as it relates to design applications in rocket propulsion activities, are discussed from the government perspective. Specific examples demonstrate the application of CFD in support of hardware development activities, such as Space Shuttle Main Engine flight issues, and the associated teaming strategy used for solving such problems. In addition, selected examples delineate the motivation, methods of approach, goals and key milestones for several space flight programs. An approach toward applying CFD in the design environment is described from the government perspective, together with a discussion of benchmark validation, advanced technology hardware concepts, accomplishments, needs, future applications, and near-term expectations from the flight-center perspective.

  10. Bayesian posterior distributions without Markov chains.

    PubMed

    Cole, Stephen R; Chu, Haitao; Greenland, Sander; Hamra, Ghassan; Richardson, David B

    2012-03-01

    Bayesian posterior parameter distributions are often simulated using Markov chain Monte Carlo (MCMC) methods. However, MCMC methods are not always necessary and do not help the uninitiated understand Bayesian inference. As a bridge to understanding Bayesian inference, the authors illustrate a transparent rejection sampling method. In example 1, they illustrate rejection sampling using 36 cases and 198 controls from a case-control study (1976-1983) assessing the relation between residential exposure to magnetic fields and the development of childhood cancer. Results from rejection sampling (odds ratio (OR) = 1.69, 95% posterior interval (PI): 0.57, 5.00) were similar to MCMC results (OR = 1.69, 95% PI: 0.58, 4.95) and approximations from data-augmentation priors (OR = 1.74, 95% PI: 0.60, 5.06). In example 2, the authors apply rejection sampling to a cohort study of 315 human immunodeficiency virus seroconverters (1984-1998) to assess the relation between viral load after infection and 5-year incidence of acquired immunodeficiency syndrome, adjusting for (continuous) age at seroconversion and race. In this more complex example, rejection sampling required a notably longer run time than MCMC sampling but remained feasible and again yielded similar results. The transparency of the proposed approach comes at a price of being less broadly applicable than MCMC.
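The transparent rejection-sampling recipe is easy to reproduce on a simpler problem than the paper's case-control example: draw the parameter from the prior and accept it with probability equal to the likelihood divided by its maximum. The binomial data below are invented for illustration.

```python
import math
import random

# Rejection sampling in the paper's transparent spirit, on a toy problem:
# posterior for a binomial proportion theta with a Uniform(0, 1) prior,
# after observing y = 7 events in n = 20 trials (invented data).  Draw
# theta from the prior, accept with probability L(theta) / L(theta_max);
# the accepted draws are samples from the posterior, here Beta(8, 14).
random.seed(1)
y, n = 7, 20

def loglik(theta):
    return y * math.log(theta) + (n - y) * math.log(1.0 - theta)

theta_max = y / n                    # the MLE maximizes the likelihood
log_lmax = loglik(theta_max)

samples = []
while len(samples) < 20000:
    theta = random.random()          # a draw from the Uniform(0, 1) prior
    if theta == 0.0:                 # guard against log(0)
        continue
    if random.random() < math.exp(loglik(theta) - log_lmax):
        samples.append(theta)

post_mean = sum(samples) / len(samples)
print(round(post_mean, 3))   # close to the exact Beta(8, 14) mean, 8/22
```

No Markov chain is involved: every accepted draw is an independent posterior sample, which is exactly the pedagogical point; the price, as the abstract notes, is that the acceptance rate collapses in higher-dimensional problems where MCMC remains practical.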

  11. Simpson's paradox - aggregating and partitioning populations in health disparities of lung cancer patients.

    PubMed

    Fu, P; Panneerselvam, A; Clifford, B; Dowlati, A; Ma, P C; Zeng, G; Halmos, B; Leidner, R S

    2015-12-01

    It is well known that non-small cell lung cancer (NSCLC) is a heterogeneous group of diseases. Previous studies have demonstrated genetic variation among different ethnic groups in the epidermal growth factor receptor (EGFR) in NSCLC. Research by our group and others has recently shown a lower frequency of EGFR mutations in African Americans with NSCLC, as compared to their White counterparts. In this study, we use our original study data of EGFR pathway genetics in African American NSCLC as an example to illustrate that univariate analyses based on aggregation versus partition of the data lead to contradictory results, in order to emphasize the importance of controlling for statistical confounding. We further investigate analytic approaches in logistic regression for data with separation, as is the case in our example data set, and apply appropriate methods to identify predictors of EGFR mutation. Our simulation shows that with separated or nearly separated data, penalized maximum likelihood (PML) produces estimates with the smallest bias and approximately maintains the nominal type I error level, with statistical power equal to or better than that of maximum likelihood and exact conditional likelihood methods. Application of the PML method to our example data set shows that race and EGFR-FISH are independently significant predictors of EGFR mutation. © The Author(s) 2011.
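
    Firth-type penalized maximum likelihood adds half the log-determinant of the Fisher information (a Jeffreys-prior penalty) to the log-likelihood, which keeps the estimate finite under complete separation. The sketch below uses a hypothetical one-parameter logistic model without intercept, not the paper's analysis: ordinary maximum likelihood diverges to the edge of the search grid, while the penalized criterion has a finite interior maximum.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Completely separated data: y = 1 exactly when x = 1 (hypothetical).
xs = [1] * 5 + [0] * 5
ys = [1] * 5 + [0] * 5

def log_lik(beta):
    return sum(y * x * beta - math.log(1 + math.exp(x * beta))
               for x, y in zip(xs, ys))

def firth_penalized(beta):
    # Jeffreys-prior penalty: + 0.5 * log I(beta), with scalar Fisher
    # information I(beta) = sum x_i^2 p_i (1 - p_i).
    info = sum(x * x * sigmoid(x * beta) * (1 - sigmoid(x * beta))
               for x in xs)
    return log_lik(beta) + 0.5 * math.log(info)

grid = [i * 0.01 for i in range(-500, 1501)]   # beta in [-5, 15]
beta_ml = max(grid, key=log_lik)               # runs to the grid edge
beta_pml = max(grid, key=firth_penalized)      # finite interior maximum
print(beta_ml, beta_pml)
```

    For this toy data set the penalized maximum sits near log(11), about 2.4, while the unpenalized likelihood increases monotonically in beta.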

  12. A rapid method for creating qualitative images indicative of thick oil emulsion on the ocean's surface from imaging spectrometer data

    USGS Publications Warehouse

    Kokaly, Raymond F.; Hoefen, Todd M.; Livo, K. Eric; Swayze, Gregg A.; Leifer, Ira; McCubbin, Ian B.; Eastwood, Michael L.; Green, Robert O.; Lundeen, Sarah R.; Sarture, Charles M.; Steele, Denis; Ryan, Thomas; Bradley, Eliza S.; Roberts, Dar A.

    2010-01-01

    This report describes a method to create color-composite images indicative of thick oil:water emulsions on the surface of clear, deep ocean water by using normalized difference ratios derived from remotely sensed data collected by an imaging spectrometer. The spectral bands used in the normalized difference ratios are located in wavelength regions where the spectra of thick oil:water emulsions on the ocean's surface have a distinct shape compared to clear water and clouds. In contrast to quantitative analyses, which require rigorous conversion to reflectance, the method described is easily computed and can be applied rapidly to radiance data or data that have been atmospherically corrected or ground-calibrated to reflectance. Examples are shown of the method applied to Airborne Visible/Infrared Imaging Spectrometer data collected May 17 and May 19, 2010, over the oil spill from the Deepwater Horizon offshore oil drilling platform in the Gulf of Mexico.
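
    The core computation is the normalized difference ratio, (band_a - band_b) / (band_a + band_b), evaluated per pixel for band pairs that straddle the emulsion's spectral features. A minimal sketch with hypothetical radiance values (the actual band wavelengths come from the report, not from this code):

```python
def normalized_difference(band_a, band_b):
    """Per-pixel normalized difference of two radiance bands.
    Values fall in [-1, 1] wherever band_a + band_b > 0."""
    out = []
    for a, b in zip(band_a, band_b):
        out.append((a - b) / (a + b) if (a + b) != 0 else 0.0)
    return out

# Hypothetical radiance samples for two bands chosen so that a thick
# oil:water emulsion has a distinct spectral shape.
band_a = [0.50, 0.30, 0.42]
band_b = [0.30, 0.30, 0.18]
nd = normalized_difference(band_a, band_b)
print([round(v, 3) for v in nd])  # [0.25, 0.0, 0.4]
```

    Because the ratio cancels the overall brightness scale, it can be applied directly to radiance data, which is what makes the method rapid compared with full reflectance retrieval.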

  13. High-throughput screening of high Monascus pigment-producing strain based on digital image processing.

    PubMed

    Xia, Meng-lei; Wang, Lan; Yang, Zhi-xia; Chen, Hong-zhang

    2016-04-01

    This work proposes a new method that applies image processing and a support vector machine (SVM) to the screening of mold strains. Taking Monascus as an example, morphological characteristics of Monascus colonies were quantified by image processing, and the association between these characteristics and pigment production capability was determined by SVM. On this basis, a highly automated screening strategy was achieved. The accuracy of the proposed strategy is 80.6 %, which is comparable to that of existing methods (81.1 % for microplate and 85.4 % for flask). Meanwhile, screening 500 colonies takes only 20-30 min, the highest rate among all published results. By applying this automated method, 13 strains with high predicted production were obtained, and the best one produced 2.8-fold the pigment (226 U/mL) and 1.9-fold the lovastatin (51 mg/L) of the parent strain. The current study provides an effective and promising method for strain improvement.

  14. Improving acoustic beamforming maps in a reverberant environment by modifying the cross-correlation matrix

    NASA Astrophysics Data System (ADS)

    Fischer, J.; Doolan, C.

    2017-12-01

    A method to improve the quality of acoustic beamforming in reverberant environments is proposed in this paper. The processing is based on a filtering of the cross-correlation matrix of the microphone signals obtained using a microphone array. The main advantage of the proposed method is that it does not require information about the geometry of the reverberant environment and thus it can be applied to any configuration. The method is applied to the particular example of aeroacoustic testing in a hard-walled low-speed wind tunnel; however, the technique can be used in any reverberant environment. Two test cases demonstrate the technique. The first uses a speaker placed in the hard-walled working section with no wind tunnel flow. In the second test case, an airfoil is placed in a flow and acoustic beamforming maps are obtained. The acoustic maps have been improved, as the reflections observed in the conventional maps have been removed after application of the proposed method.
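
    The paper's filter targets reflection terms in the cross-correlation (cross-spectral) matrix; as a minimal related illustration, and not the authors' algorithm, the sketch below shows the simplest such modification, diagonal removal, which eliminates incoherent self-noise from the conventional beamformer output at the cost of a known bias of (M-1)/M on the source power.

```python
import cmath

M = 4                                    # number of microphones
phases = [0.0, 0.7, 1.9, 2.6]            # hypothetical steering phases
g = [cmath.exp(1j * p) for p in phases]  # steering vector, |g_i| = 1
s, noise = 2.0, 1.0                      # source power, incoherent noise power

# Cross-spectral matrix: coherent source term plus noise on the diagonal.
C = [[s * g[i] * g[j].conjugate() + (noise if i == j else 0.0)
      for j in range(M)] for i in range(M)]

def beamform(C, g):
    """Conventional beamformer output w^H C w with w = g / M."""
    w = [gi / M for gi in g]
    total = sum(w[i].conjugate() * C[i][j] * w[j]
                for i in range(M) for j in range(M))
    return total.real

p_full = beamform(C, g)
C_dr = [[0.0 if i == j else C[i][j] for j in range(M)] for i in range(M)]
p_dr = beamform(C_dr, g)
print(p_full, p_dr)   # s + noise/M = 2.25  vs  s*(M-1)/M = 1.5
```

    The modified matrix keeps the coherent cross-terms that carry source location information while discarding the diagonal, where the incoherent contamination sits; the paper generalizes this idea to reflection terms without needing the room geometry.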

  15. A review of machine learning in obesity.

    PubMed

    DeGregory, K W; Kuiper, P; DeSilvio, T; Pleuss, J D; Miller, R; Roginski, J W; Fisher, C B; Harness, D; Viswanath, S; Heymsfield, S B; Dungan, I; Thomas, D M

    2018-05-01

    Rich sources of obesity-related data arising from sensors, smartphone apps, electronic medical health records and insurance data can bring new insights for understanding, preventing and treating obesity. For such large datasets, machine learning provides sophisticated and elegant tools to describe, classify and predict obesity-related risks and outcomes. Here, we review machine learning methods that predict and/or classify, such as linear and logistic regression, artificial neural networks, deep learning and decision tree analysis. We also review methods that describe and characterize data, such as cluster analysis, principal component analysis, network science and topological data analysis. We introduce each method with a high-level overview followed by examples of successful applications. The algorithms were then applied to data from the National Health and Nutrition Examination Survey to demonstrate methodology, utility and outcomes. The strengths and limitations of each method were also evaluated. This summary of machine learning algorithms provides a unique overview of the state of data analysis applied specifically to obesity. © 2018 World Obesity Federation.

  16. Forward calculation of gravity and its gradient using polyhedral representation of density interfaces: an application of spherical or ellipsoidal topographic gravity effect

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Chen, Chao

    2018-02-01

    A density interface modeling method using polyhedral representation is proposed to construct 3-D models of spherical or ellipsoidal interfaces, such as the terrain surface of the Earth, and is applied to forward calculation of the gravity effect of topography and bathymetry for regional or global applications. The method utilizes triangular facets to fit the undulation of the target interface. The model maintains almost equal accuracy and resolution at different locations on the globe. Meanwhile, the exterior gravitational field of the model, including its gravity and gravity gradients, is obtained simultaneously using analytic solutions. Additionally, considering the effect of distant relief, an adaptive computation process is introduced to reduce the computational burden. The features and errors of the method are then analyzed. Subsequently, the method is applied to an area as an example of the ellipsoidal Bouguer shell correction, and the result is compared to existing methods, showing that our method provides high accuracy and great computational efficiency. Finally, suggestions for further development are given and conclusions are drawn.
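
    The paper evaluates analytic polyhedral solutions for each triangular facet; as a much cruder stand-in that only shows the forward-summation structure, each terrain cell can be condensed to a point mass and its vertical attraction G * m * dz / r^3 summed over all cells. The grid, density, and geometry below are hypothetical.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gz_point_masses(masses, obs):
    """Downward vertical gravity at obs from (x, y, z, mass) point
    masses: g_z = G * m * dz / r^3, with dz the depth of each mass
    below the observation point, summed over all masses."""
    ox, oy, oz = obs
    total = 0.0
    for x, y, z, m in masses:
        dx, dy, dz = x - ox, y - oy, oz - z
        r = math.sqrt(dx * dx + dy * dy + dz * dz)
        total += G * m * dz / r ** 3
    return total

# Hypothetical 3x3 grid of terrain columns, 1 km spacing, 100 m thick,
# rho = 2670 kg/m^3, each condensed to a point mass 2 km below the station.
rho, area, thick = 2670.0, 1000.0 * 1000.0, 100.0
cells = [(i * 1000.0, j * 1000.0, -2000.0, rho * area * thick)
         for i in range(-1, 2) for j in range(-1, 2)]
gz = gz_point_masses(cells, (0.0, 0.0, 0.0))
print(f"{gz * 1e5:.3f} mGal")
```

    The polyhedral approach in the paper replaces this point-mass condensation with closed-form integrals over each facet, which is what preserves accuracy for nearby relief.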

  17. A Review of Auditing Methods Applied to the Content of Controlled Biomedical Terminologies

    PubMed Central

    Zhu, Xinxin; Fan, Jung-Wei; Baorto, David M.; Weng, Chunhua; Cimino, James J.

    2012-01-01

    Although controlled biomedical terminologies have been with us for centuries, it is only in the last couple of decades that close attention has been paid to the quality of these terminologies. The result of this attention has been the development of auditing methods that apply formal methods to assessing whether terminologies are complete and accurate. We have performed an extensive literature review to identify published descriptions of these methods and have created a framework for characterizing them. The framework considers manual, systematic and heuristic methods that use knowledge (within or external to the terminology) to measure quality factors of different aspects of the terminology content (terms, semantic classification, and semantic relationships). The quality factors examined included concept orientation, consistency, non-redundancy, soundness and comprehensive coverage. We reviewed 130 studies that were retrieved based on keyword search on publications in PubMed, and present our assessment of how they fit into our framework. We also identify which terminologies have been audited with the methods and provide examples to illustrate each part of the framework. PMID:19285571

  18. Nanoscale device architectures derived from biological assemblies: The case of tobacco mosaic virus and (apo)ferritin

    NASA Astrophysics Data System (ADS)

    Calò, Annalisa; Eiben, Sabine; Okuda, Mitsuhiro; Bittner, Alexander M.

    2016-03-01

    Virus particles and proteins are excellent examples of naturally occurring structures with well-defined nanoscale architectures, for example, cages and tubes. These structures can be employed in a bottom-up assembly strategy to fabricate repetitive patterns of hybrid organic-inorganic materials. In this paper, we review methods of assembly that make use of protein and virus scaffolds to fabricate patterned nanostructures with very high spatial control. We chose (apo)ferritin and tobacco mosaic virus (TMV) as model examples that have already been applied successfully in nanobiotechnology. Their interior space and their exterior surfaces can be mineralized with inorganic layers or nanoparticles. Furthermore, their native assembly abilities can be exploited to generate periodic architectures for integration in electrical and magnetic devices. We introduce the state of the art and describe recent advances in biomineralization techniques, patterning and device production with (apo)ferritin and TMV.

  19. 29 CFR 4022.95 - Examples.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 9 2010-07-01 2010-07-01 false Examples. 4022.95 Section 4022.95 Labor Regulations... IN TERMINATED SINGLE-EMPLOYER PLANS Certain Payments Owed Upon Death § 4022.95 Examples. The following examples show how the rules in §§ 4022.91 through 4022.94 apply. For examples on how these rules...

  20. Dictionary learning based noisy image super-resolution via distance penalty weight model

    PubMed Central

    Han, Yulan; Zhao, Yongping; Wang, Qisong

    2017-01-01

    In this study, we address the problem of noisy image super-resolution. In applications, the low resolution (LR) image is usually noisy, while most existing algorithms assume that the LR image is noise-free. For this situation, we present an algorithm for noisy image super-resolution that achieves image super-resolution and denoising simultaneously. In the training stage of our method, the LR example images are noise-free, and the dictionary pair does not need to be retrained for different input LR images, even if the noise variance varies. For each input LR image patch, the corresponding high resolution (HR) image patch is reconstructed through a weighted average of similar HR example patches. To reduce computational cost, we use the atoms of a learned sparse dictionary as the examples instead of the original example patches. We propose a distance penalty model for calculating the weights, which also performs a second selection among similar atoms. Moreover, LR example patches with the mean pixel value removed are used to learn the dictionary, rather than just their gradient features. Based on this, we reconstruct an initial estimated HR image and a denoised LR image. Combined with iterative back projection, the two reconstructed images are used to obtain the final estimated HR image. We validate our algorithm on natural images and compare it with previously reported algorithms. Experimental results show that our proposed method achieves better noise robustness. PMID:28759633
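
    The weighted-average reconstruction can be sketched as follows. The Gaussian form of the penalty here is an assumption for illustration (the paper defines its own distance penalty weight model): atoms whose LR part lies far from the input patch receive exponentially small weight, which simultaneously acts as a soft second selection among similar atoms.

```python
import math

def reconstruct_hr(lr_patch, lr_atoms, hr_atoms, h=0.5):
    """Weighted average of HR dictionary atoms, with weights from a
    Gaussian distance penalty on the matching LR atoms (hypothetical
    penalty form, bandwidth h)."""
    dists = [math.sqrt(sum((a - b) ** 2 for a, b in zip(lr_patch, atom)))
             for atom in lr_atoms]
    weights = [math.exp(-(d * d) / (2 * h * h)) for d in dists]
    wsum = sum(weights)
    n = len(hr_atoms[0])
    return [sum(w * atom[k] for w, atom in zip(weights, hr_atoms)) / wsum
            for k in range(n)]

# Toy dictionary: the first LR atom matches the input exactly, so its
# HR atom should dominate the reconstruction.
lr_atoms = [[0.1, 0.2], [0.9, 0.8], [0.5, 0.5]]
hr_atoms = [[1.0, 1.0, 1.0, 1.0], [0.0, 0.0, 0.0, 0.0],
            [0.5, 0.5, 0.5, 0.5]]
hr = reconstruct_hr([0.1, 0.2], lr_atoms, hr_atoms)
print([round(v, 2) for v in hr])
```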

  1. Target identification for small bioactive molecules: finding the needle in the haystack.

    PubMed

    Ziegler, Slava; Pries, Verena; Hedberg, Christian; Waldmann, Herbert

    2013-03-04

    Identification and confirmation of bioactive small-molecule targets is a crucial, often decisive step both in academic and pharmaceutical research. Through the development and availability of several new experimental techniques, target identification is, in principle, feasible, and the number of successful examples steadily grows. However, a generic methodology that can successfully be applied in the majority of the cases has not yet been established. Herein we summarize current methods for target identification of small molecules, primarily for a chemistry audience but also the biological community, for example, the chemist or biologist attempting to identify the target of a given bioactive compound. We describe the most frequently employed experimental approaches for target identification and provide several representative examples illustrating the state-of-the-art. Among the techniques currently available, protein affinity isolation using suitable small-molecule probes (pulldown) and subsequent mass spectrometric analysis of the isolated proteins appears to be most powerful and most frequently applied. To provide guidance for rapid entry into the field and based on our own experience we propose a typical workflow for target identification, which centers on the application of chemical proteomics as the key step to generate hypotheses for potential target proteins. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Quantum models with energy-dependent potentials solvable in terms of exceptional orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze-Halberg, Axel, E-mail: axgeschu@iun.edu; Department of Physics, Indiana University Northwest, 3400 Broadway, Gary IN 46408; Roy, Pinaki, E-mail: pinaki@isical.ac.in

    We construct energy-dependent potentials for which the Schrödinger equations admit solutions in terms of exceptional orthogonal polynomials. Our method of construction is based on certain point transformations, applied to the equations of exceptional Hermite, Jacobi and Laguerre polynomials. We present several examples of boundary-value problems with energy-dependent potentials that admit a discrete spectrum and the corresponding normalizable solutions in closed form.

  3. Dynamics and control for Constrained Multibody Systems modeled with Maggi's equation: Application to Differential Mobile Robots Partll

    NASA Astrophysics Data System (ADS)

    Amengonu, Yawo H.; Kakad, Yogendra P.

    2014-07-01

    Quasivelocity techniques were applied to derive the dynamics of a Differential Wheeled Mobile Robot (DWMR) in the companion paper. The present paper formulates a control system design for trajectory tracking of this class of robots. The method develops a feedback linearization technique for the nonlinear system using a dynamic extension algorithm. The effectiveness of the nonlinear controller is illustrated with a simulation example.

  4. Applied Warfighter Ergonomics: A Research Method for Evaluating Military Individual Equipment

    DTIC Science & Technology

    2005-09-01

    innovations, as well. 6 Subsequent studies have established that the top official, head of household, or other nominal leader of the organization...alternative products have no meaningful differentiation between them (such as shampoo and instant coffee), consumers preferences can be significantly...example, with his weapon slung over his shoulder . Admin The conventional segment of the scenario was identical for each RPDA. The RPDA segment was

  5. Existence of topological multi-string solutions in Abelian gauge field theories

    NASA Astrophysics Data System (ADS)

    Han, Jongmin; Sohn, Juhee

    2017-11-01

    In this paper, we consider a general form of self-dual equations arising from Abelian gauge field theories coupled with the Einstein equations. By applying the super/subsolution method, we prove that topological multi-string solutions exist for any coupling constant, which improves previously known results. We provide two examples for application: the self-dual Einstein-Maxwell-Higgs model and the gravitational Maxwell gauged O(3) sigma model.

  6. A Journey through Time: From the Present Value to the Future Value and Back Or: Retirement Planning: A Comprehensible Application of the Time Value of Money Concept

    ERIC Educational Resources Information Center

    Schmidt, Carolin E.

    2016-01-01

    Real-life applications of financial concepts are a valuable method to get students engaged in financial topics. While especially non-finance majors often struggle to understand the importance of financial topics for their personal lives, applying these theories to real-life examples can significantly improve their learning experience and increase…

  7. Dynamic analysis of flexible mechanical systems using LATDYN

    NASA Technical Reports Server (NTRS)

    Wu, Shih-Chin; Chang, Che-Wei; Housner, Jerrold M.

    1989-01-01

    A 3-D, finite element based simulation tool for flexible multibody systems is presented. Hinge degrees-of-freedom are built into the equations of motion to reduce geometric constraints. The finite element approach avoids the difficulty of selecting deformation modes for flexible components that arises with the assumed mode method. The tool is applied to simulate a practical space structure deployment problem. Results of the examples demonstrate the capability of the code and approach.

  8. Ensemble Clustering Classification compete SVM and One-Class classifiers applied on plant microRNAs Data.

    PubMed

    Yousef, Malik; Khalifa, Waleed; AbedAllah, Loai

    2016-12-22

    The performance of many learning and data mining algorithms depends critically on suitable metrics to assess efficiency over the input space. Learning a suitable metric from examples may, therefore, be the key to successful application of these algorithms. We have demonstrated that the k-nearest neighbor (kNN) classification can be significantly improved by learning a distance metric from labeled examples. The clustering ensemble is used to define the distance between points in respect to how they co-cluster. This distance is then used within the framework of the kNN algorithm to define a classifier named ensemble clustering kNN classifier (EC-kNN). In many instances in our experiments we achieved highest accuracy while SVM failed to perform as well. In this study, we compare the performance of a two-class classifier using EC-kNN with different one-class and two-class classifiers. The comparison was applied to seven different plant microRNA species considering eight feature selection methods. In this study, the averaged results show that EC-kNN outperforms all other methods employed here and previously published results for the same data. In conclusion, this study shows that the chosen classifier shows high performance when the distance metric is carefully chosen.

  9. Ensemble Clustering Classification Applied to Competing SVM and One-Class Classifiers Exemplified by Plant MicroRNAs Data.

    PubMed

    Yousef, Malik; Khalifa, Waleed; AbdAllah, Loai

    2016-12-01

    The performance of many learning and data mining algorithms depends critically on suitable metrics to assess efficiency over the input space. Learning a suitable metric from examples may, therefore, be the key to successful application of these algorithms. We have demonstrated that the k-nearest neighbor (kNN) classification can be significantly improved by learning a distance metric from labeled examples. The clustering ensemble is used to define the distance between points in respect to how they co-cluster. This distance is then used within the framework of the kNN algorithm to define a classifier named ensemble clustering kNN classifier (EC-kNN). In many instances in our experiments we achieved highest accuracy while SVM failed to perform as well. In this study, we compare the performance of a two-class classifier using EC-kNN with different one-class and two-class classifiers. The comparison was applied to seven different plant microRNA species considering eight feature selection methods. In this study, the averaged results show that EC-kNN outperforms all other methods employed here and previously published results for the same data. In conclusion, this study shows that the chosen classifier shows high performance when the distance metric is carefully chosen.
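
    The co-clustering distance at the heart of EC-kNN can be sketched directly: run a clustering ensemble, define the distance between two points as one minus the fraction of runs in which they co-cluster, then classify with kNN under that distance. The ensemble below is a hypothetical set of precomputed labelings rather than an actual clustering run.

```python
from collections import Counter

def co_cluster_distance(i, j, clusterings):
    """Distance = 1 - fraction of ensemble runs in which points i and
    j land in the same cluster."""
    same = sum(1 for labels in clusterings if labels[i] == labels[j])
    return 1.0 - same / len(clusterings)

def ec_knn(query, labeled, classes, clusterings, k=3):
    """Classify `query` by majority vote of its k nearest labeled
    points under the co-clustering distance."""
    ranked = sorted(labeled,
                    key=lambda i: co_cluster_distance(query, i, clusterings))
    votes = Counter(classes[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical ensemble of 4 clusterings over 7 points (index 6 is the
# unlabeled query); points 0-2 are class 'a', points 3-5 class 'b'.
clusterings = [
    [0, 0, 0, 1, 1, 1, 0],
    [2, 2, 2, 3, 3, 3, 2],
    [0, 0, 1, 1, 1, 1, 0],
    [5, 5, 5, 6, 6, 6, 5],
]
classes = {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b'}
print(ec_knn(6, list(classes), classes, clusterings))  # 'a'
```

    Because the query consistently co-clusters with the class 'a' points, all of its nearest neighbors under this metric are labeled 'a'.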

  10. Time differentiated nuclear resonance spectroscopy coupled with pulsed laser heating in diamond anvil cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kupenko, I., E-mail: kupenko@esrf.fr; Strohm, C.; ESRF-The European Synchrotron, CS 40220, 38043 Grenoble Cedex 9

    2015-11-15

    Developments in pulsed laser heating applied to nuclear resonance techniques are presented together with their applications to studies of geophysically relevant materials. Continuous laser heating in diamond anvil cells is a widely used method to generate extreme temperatures at static high pressure conditions in order to study the structure and properties of materials found in deep planetary interiors. The pulsed laser heating technique has advantages over continuous heating, including prevention of the spreading of the heated sample and/or the pressure medium and, thus, a better stability of the heating process. Time differentiated data acquisition coupled with pulsed laser heating in diamond anvil cells was successfully tested at the Nuclear Resonance beamline (ID18) of the European Synchrotron Radiation Facility. We show examples applying the method to investigation of an assemblage containing ε-Fe, FeO, and Fe{sub 3}C using synchrotron Mössbauer source spectroscopy, FeCO{sub 3} using nuclear inelastic scattering, and Fe{sub 2}O{sub 3} using nuclear forward scattering. These examples demonstrate the applicability of pulsed laser heating in diamond anvil cells to spectroscopic techniques with long data acquisition times, because it enables stable pulsed heating with data collection at specific time intervals that are synchronized with laser pulses.

  11. An application of model-fitting procedures for marginal structural models.

    PubMed

    Mortimer, Kathleen M; Neugebauer, Romain; van der Laan, Mark; Tager, Ira B

    2005-08-15

    Marginal structural models (MSMs) are being used more frequently to obtain causal effect estimates in observational studies. Although the principal estimator of MSM coefficients has been the inverse probability of treatment weight (IPTW) estimator, there are few published examples that illustrate how to apply IPTW or discuss the impact of model selection on effect estimates. The authors applied IPTW estimation of an MSM to observational data from the Fresno Asthmatic Children's Environment Study (2000-2002) to evaluate the effect of asthma rescue medication use on pulmonary function and compared their results with those obtained through traditional regression methods. Akaike's Information Criterion and cross-validation methods were used to fit the MSM. In this paper, the influence of model selection and evaluation of key assumptions such as the experimental treatment assignment assumption are discussed in detail. Traditional analyses suggested that medication use was not associated with an improvement in pulmonary function, a finding that is counterintuitive and probably due to confounding by symptoms and asthma severity. The final MSM estimated that medication use was causally related to a 7% improvement in pulmonary function. The authors present examples that should encourage investigators who use IPTW estimation to undertake and discuss the impact of model-fitting procedures to justify the choice of the final weights.
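
    The mechanics of IPTW are simple once the treatment model is in hand: each subject is weighted by the inverse of their probability of receiving the treatment they actually received, given confounders. The synthetic strata below are a hypothetical illustration with known propensities and a true null effect, not the study's data; confounding drives the naive contrast to 0.6 while the weighted contrast recovers zero.

```python
# Strata counts for a synthetic cohort: confounder L, treatment A,
# outcome Y = L (so the true causal effect of A on Y is zero).
# Treatment assignment depends strongly on L.
strata = [
    (0, 1, 0, 20), (0, 0, 0, 80),   # P(A=1 | L=0) = 0.2
    (1, 1, 1, 80), (1, 0, 1, 20),   # P(A=1 | L=1) = 0.8
]
p_a_given_l = {(1, 0): 0.2, (0, 0): 0.8, (1, 1): 0.8, (0, 1): 0.2}

def mean_y(a, weighted):
    """Mean outcome among those with A = a, optionally weighted by
    the inverse probability of the treatment actually received."""
    num = den = 0.0
    for l, trt, y, n in strata:
        if trt != a:
            continue
        w = n / p_a_given_l[(a, l)] if weighted else n
        num += w * y
        den += w
    return num / den

naive = mean_y(1, False) - mean_y(0, False)
iptw = mean_y(1, True) - mean_y(0, True)
print(naive, iptw)   # confounded contrast vs weighted (null) contrast
```

    In practice the propensities are estimated, typically by logistic regression, which is exactly where the model-fitting choices the authors emphasize enter the analysis.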

  12. Inference of Ancestral Recombination Graphs through Topological Data Analysis

    PubMed Central

    Cámara, Pablo G.; Levine, Arnold J.; Rabadán, Raúl

    2016-01-01

    The recent explosion of genomic data has underscored the need for interpretable and comprehensive analyses that can capture complex phylogenetic relationships within and across species. Recombination, reassortment and horizontal gene transfer constitute examples of pervasive biological phenomena that cannot be captured by tree-like representations. Starting from hundreds of genomes, we are interested in the reconstruction of potential evolutionary histories leading to the observed data. Ancestral recombination graphs represent potential histories that explicitly accommodate recombination and mutation events across orthologous genomes. However, they are computationally costly to reconstruct, usually being infeasible for more than few tens of genomes. Recently, Topological Data Analysis (TDA) methods have been proposed as robust and scalable methods that can capture the genetic scale and frequency of recombination. We build upon previous TDA developments for detecting and quantifying recombination, and present a novel framework that can be applied to hundreds of genomes and can be interpreted in terms of minimal histories of mutation and recombination events, quantifying the scales and identifying the genomic locations of recombinations. We implement this framework in a software package, called TARGet, and apply it to several examples, including small migration between different populations, human recombination, and horizontal evolution in finches inhabiting the Galápagos Islands. PMID:27532298

  13. Two Reconfigurable Flight-Control Design Methods: Robust Servomechanism and Control Allocation

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Lu, Ping; Wu, Zheng-Lu; Bahm, Cathy

    2001-01-01

    Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.
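
    A fixed-point iteration for bounded control allocation can be sketched as a projected gradient step: move against the gradient of the allocation error ||B u - d||^2 and clip each effector to its position limits. This is a generic sketch of the idea, not the paper's algorithm, and the effectiveness matrix below is hypothetical rather than X-33 data.

```python
def allocate(B, d, lo, hi, steps=2000, eta=None):
    """Fixed-point iteration u <- clip(u - eta * B^T (B u - d)) for
    the bounded least-squares allocation problem min ||B u - d||^2
    subject to lo <= u <= hi."""
    m, n = len(B), len(B[0])
    if eta is None:
        # Step size below the inverse Lipschitz constant of B^T B.
        eta = 1.0 / sum(B[i][j] ** 2 for i in range(m) for j in range(n))
    u = [0.0] * n
    for _ in range(steps):
        r = [sum(B[i][j] * u[j] for j in range(n)) - d[i] for i in range(m)]
        for j in range(n):
            grad = sum(B[i][j] * r[i] for i in range(m))
            u[j] = min(hi[j], max(lo[j], u[j] - eta * grad))
    return u

# One generalized moment demand, three position-limited effectors.
B = [[2.0, 1.0, 1.0]]
d = [3.0]
u = allocate(B, d, lo=[-1.0] * 3, hi=[1.0] * 3)
residual = abs(sum(b * x for b, x in zip(B[0], u)) - d[0])
print(u, residual)
```

    The clipping step is what makes actuator limits explicit, which is the practical advantage of allocation formulations over pseudo-inverse mixing when effectors fail or saturate.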

  14. Modeling, Analyzing, and Mitigating Dissonance Between Alerting Systems

    NASA Technical Reports Server (NTRS)

    Song, Lixia; Kuchar, James K.

    2003-01-01

    Alerting systems are becoming pervasive in process operations, which may result in the potential for dissonance or conflict in information from different alerting systems that suggests different threat levels and/or actions to resolve hazards. Little is currently available to help in predicting or solving the dissonance problem. This thesis presents a methodology to model and analyze dissonance between alerting systems, providing both a theoretical foundation for understanding dissonance and a practical basis from which specific problems can be addressed. A state-space representation of multiple alerting system operation is generalized that can be tailored across a variety of applications. Based on the representation, two major causes of dissonance are identified: logic differences and sensor error. Additionally, several possible types of dissonance are identified. A mathematical analysis method is developed to identify the conditions for dissonance originating from logic differences. A probabilistic analysis methodology is developed to estimate the probability of dissonance originating from sensor error, and to compare the relative contribution to dissonance of sensor error against the contribution from logic differences. A hybrid model, which describes the dynamic behavior of the process with multiple alerting systems, is developed to identify dangerous dissonance space, from which the process can lead to disaster. Methodologies to avoid or mitigate dissonance are outlined. Two examples are used to demonstrate the application of the methodology. First, a conceptual In-Trail Spacing example is presented. The methodology is applied to identify the conditions for possible dissonance, to identify relative contribution of logic difference and sensor error, and to identify dangerous dissonance space. Several proposed mitigation methods are demonstrated in this example. 
In the second example, the methodology is applied to address the dissonance problem between two air traffic alert and avoidance systems: the existing Traffic Alert and Collision Avoidance System (TCAS) vs. the proposed Airborne Conflict Management system (ACM). Conditions on ACM resolution maneuvers are identified to avoid dynamic dissonance between TCAS and ACM. Also included in this report is an Appendix written by Lee Winder about recent and continuing work on alerting systems design. The application of Markov Decision Process (MDP) theory to complex alerting problems is discussed and illustrated with an abstract example system.

  15. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems in regularizing inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of regularized minimization problems to be nonnegative and sparse, and then we apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. Then, we use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise from the experiment is present.
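
    A generic gradient-type iteration for nonnegative sparse regularization (a sketch of the idea, not the authors' penalty functional) takes a gradient step on the data misfit plus the l1 term and then projects onto the nonnegative orthant, where sum(x) equals the l1 norm:

```python
def nonneg_sparse_gradient(A, b, lam=0.1, tau=0.5, steps=200):
    """Projected gradient iteration for
        min 0.5 * ||A x - b||^2 + lam * sum(x)   subject to x >= 0,
    i.e. the l1-penalized problem restricted to the nonnegative
    orthant."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        for j in range(n):
            grad = sum(A[i][j] * r[i] for i in range(m)) + lam
            x[j] = max(0.0, x[j] - tau * grad)
    return x

# Identity operator keeps the example transparent: the minimizer is
# max(0, b - lam) componentwise.
A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 0.05]
x = nonneg_sparse_gradient(A, b)
print(x)   # approaches [0.9, 0.0]
```

    The projection and the constant penalty gradient together zero out small components, which is what produces sparse nonnegative iterates; the semismooth Newton method in the paper accelerates this by exploiting the piecewise-smooth structure of the same optimality system.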

  16. A semiparametric separation curve approach for comparing correlated ROC data from multiple markers

    PubMed Central

    Tang, Liansheng Larry; Zhou, Xiao-Hua

    2012-01-01

    In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real-life examples. PMID:23074360
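
    The idea of a separation curve can be illustrated with empirical ROC curves: evaluate each marker's true positive rate over a grid of false positive rates and record where the difference is positive. The sketch below uses hypothetical scores and simple empirical ROC evaluation, not the paper's semiparametric model.

```python
def tpr_at_fpr(pos, neg, fpr):
    """Empirical TPR of a marker at a target FPR: threshold at the
    score that lets round(fpr * len(neg)) negatives through."""
    k = round(fpr * len(neg))          # allowed false positives
    thr = sorted(neg, reverse=True)[k]
    return sum(1 for s in pos if s > thr) / len(pos)

# Hypothetical scores on the same subjects for two markers.
neg = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]        # controls (both markers)
pos_a = [11, 12, 13, 14, 15]                 # marker A: separates fully
pos_b = [2.5, 4.5, 6.5, 8.5, 10.5]           # marker B: overlaps controls

grid = [0.1, 0.2, 0.3, 0.4, 0.5]
separation = [tpr_at_fpr(pos_a, neg, f) - tpr_at_fpr(pos_b, neg, f)
              for f in grid]
superior = [f for f, d in zip(grid, separation) if d > 0]
print(separation, superior)
```

    Here marker A is superior over the whole grid; in general the separation curve localizes superiority to a subrange of false positive rates, which a single summary such as the AUC cannot do.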

  17. On convergence and convergence rates for Ivanov and Morozov regularization and application to some parameter identification problems in elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Kaltenbacher, Barbara; Klassen, Andrej

    2018-05-01

    In this paper we provide a convergence analysis of some variational methods alternative to classical Tikhonov regularization, namely Ivanov regularization (also called the method of quasi-solutions) with some versions of the discrepancy principle for choosing the regularization parameter, and Morozov regularization (also called the method of residuals). After motivating nonequivalence with Tikhonov regularization by means of an example, we prove well-definedness of the Ivanov and Morozov methods, convergence in the sense of regularization, and convergence rates under variational source conditions. Finally, we apply these results to some linear and nonlinear parameter identification problems in elliptic boundary value problems.
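
    Morozov's method of residuals seeks the smallest-norm element whose residual does not exceed the noise level δ. For quadratic penalties on a linear problem it can be realized as a Tikhonov solution whose parameter is chosen by the discrepancy principle — a minimal sketch under that assumption, not the paper's general variational setting:

```python
import numpy as np

def tikhonov(A, y, lam):
    """Tikhonov solution x(lam) = argmin ||A x - y||^2 + lam*||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

def morozov(A, y, delta, lam_lo=1e-8, lam_hi=1e8, tol=1e-6):
    """Method of residuals: smallest-norm x with ||A x - y|| <= delta,
    realized by bisecting (in log scale) the Tikhonov parameter until the
    residual matches delta. The residual is monotone increasing in lam,
    so bisection applies whenever delta lies between the least-squares
    residual and ||y||."""
    for _ in range(200):
        lam = np.sqrt(lam_lo * lam_hi)
        r = np.linalg.norm(A @ tikhonov(A, y, lam) - y)
        if r > delta:
            lam_hi = lam
        else:
            lam_lo = lam
        if lam_hi / lam_lo < 1 + tol:
            break
    return tikhonov(A, y, np.sqrt(lam_lo * lam_hi))
```

    Ivanov regularization is the mirror image: minimize the residual subject to a norm bound ||x|| ≤ ρ, and for the quadratic case it too can be realized by tuning the same Tikhonov parameter until the constraint is active.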

  18. Quantum Approximate Methods for the Atomistic Modeling of Multicomponent Alloys. Chapter 7

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Garces, Jorge; Mosca, Hugo; Gargano, Pablo; Noebe, Ronald D.; Abel, Phillip

    2007-01-01

    This chapter describes the role of quantum approximate methods in the understanding of complex multicomponent alloys at the atomic level. The need to accelerate materials design programs based on economical and efficient modeling techniques provides the framework for the introduction of approximations and simplifications in otherwise rigorous theoretical schemes. As a promising example of the role that such approximate methods might have in the development of complex systems, the BFS method for alloys is presented and applied to Ru-rich Ni-base superalloys and to the NiAl(Ti,Cu) system, highlighting the benefits of introducing simple modeling techniques to the investigation of such complex systems.

  19. Determination of lateral-stability derivatives and transfer-function coefficients from frequency-response data for lateral motions

    NASA Technical Reports Server (NTRS)

    Donegan, James J.; Robinson, Samuel W., Jr.; Gates, Ordway B., Jr.

    1955-01-01

    A method is presented for determining the lateral-stability derivatives, transfer-function coefficients, and the modes for lateral motion from frequency-response data for a rigid aircraft. The method is based on the application of the vector technique to the equations of lateral motion, so that the three equations of lateral motion can be separated into six equations. The method of least squares is then applied to the data for each of these equations to yield the coefficients of the equations of lateral motion from which the lateral-stability derivatives and lateral transfer-function coefficients are computed. Two numerical examples are given to demonstrate the use of the method.
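
    The vector technique splits each complex frequency-response equation into real and imaginary parts, doubling the number of real equations before least squares is applied. A toy single-degree-of-freedom sketch of the same idea (the report itself treats the three coupled lateral equations; the oscillator model and symbols here are illustrative):

```python
import numpy as np

# Recover damping c and stiffness k of a unit-mass oscillator with
#   H(i*w) = 1 / (-w**2 + i*w*c + k)
# from frequency-response samples. Rearranging gives an equation linear
# in (c, k):  H*(i*w*c + k) = 1 + w**2 * H.
# Splitting into real and imaginary parts (the "vector technique")
# doubles the equations; least squares then yields the coefficients.
w = np.linspace(0.2, 5.0, 40)                 # excitation frequencies
c_true, k_true = 0.8, 4.0
H = 1.0 / (-w**2 + 1j * w * c_true + k_true)  # "measured" frequency response

M = np.column_stack([1j * w * H, H])          # columns multiply c and k
rhs = 1.0 + w**2 * H
A = np.vstack([M.real, M.imag])               # stack Re and Im equations
b = np.concatenate([rhs.real, rhs.imag])
c_est, k_est = np.linalg.lstsq(A, b, rcond=None)[0]
```

    With noiseless data the fit is exact; with measured data the same overdetermined system is solved in the least-squares sense, which is what makes the approach robust to scatter in the frequency-response points.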

  20. Gimbal-Angle Vectors of the Nonredundant CMG Cluster

    NASA Astrophysics Data System (ADS)

    Lee, Donghun; Bang, Hyochoong

    2018-05-01

    This paper deals with a method that uses the preferred gimbal angles of a control moment gyro (CMG) cluster for controlling spacecraft attitude. To apply the method to a nonredundant CMG cluster, analytical gimbal-angle solutions for the zero-angular-momentum state are derived, and the gimbal-angle vectors for nonzero angular momentum states are studied numerically. It is shown that the number of gimbal-angle vectors is determined by the given skew angle and the angular momentum state of the CMG cluster. Numerical examples show that the preferred-gimbal-angle method is an efficient approach to avoiding internal singularities for the nonredundant CMG cluster.
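
    The basic quantities involved — the total angular momentum as a function of the gimbal angles, and a singularity measure built from its Jacobian — can be sketched numerically. The example below uses the common redundant four-CMG pyramid parameterization as an illustration (an assumption: the paper concerns the nonredundant cluster, whose geometry differs), showing that the all-zero gimbal state carries zero momentum and is far from singular.

```python
import numpy as np

def cmg_momentum(delta, beta):
    """Total angular momentum (unit wheel momenta) of a four-CMG pyramid
    cluster with skew angle beta, standard pyramid parameterization."""
    cb, sb = np.cos(beta), np.sin(beta)
    d1, d2, d3, d4 = delta
    h = np.array([
        [-cb * np.sin(d1),  np.cos(d1),      sb * np.sin(d1)],
        [-np.cos(d2),      -cb * np.sin(d2), sb * np.sin(d2)],
        [ cb * np.sin(d3), -np.cos(d3),      sb * np.sin(d3)],
        [ np.cos(d4),       cb * np.sin(d4), sb * np.sin(d4)],
    ])
    return h.sum(axis=0)

def singularity_measure(delta, beta, eps=1e-6):
    """m = sqrt(det(J J^T)) with J the finite-difference Jacobian of
    H(delta); m -> 0 flags an internal singularity of the cluster."""
    H0 = cmg_momentum(delta, beta)
    J = np.column_stack([
        (cmg_momentum(np.add(delta, eps * e), beta) - H0) / eps
        for e in np.eye(4)])
    return np.sqrt(np.linalg.det(J @ J.T))
```

    Steering laws that favor preferred gimbal angles try to keep this measure bounded away from zero along the commanded momentum trajectory; for a nonredundant (three-CMG) cluster the Jacobian is square, so the measure is simply |det J|.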
