NASA Technical Reports Server (NTRS)
Chesler, L.; Pierce, S.
1971-01-01
Generalized, cyclic, and modified multistep numerical integration methods are developed and evaluated for application to problems of satellite orbit computation. Generalized methods are compared with the presently utilized Cowell methods; new cyclic methods are developed for special second-order differential equations; and several modified methods are developed and applied to orbit computation problems. Special computer programs were written to generate coefficients for these methods, and subroutines were written which allow use of these methods with NASA's GEOSTAR computer program.
Generalized trajectory surface hopping method based on the Zhu-Nakamura theory
NASA Astrophysics Data System (ADS)
Oloyede, Ponmile; Mil'nikov, Gennady; Nakamura, Hiroki
2006-04-01
We present a generalized formulation of the trajectory surface hopping method applicable to a general multidimensional system. The method is based on the Zhu-Nakamura theory of nonadiabatic transitions and therefore includes the treatment of classically forbidden hops. The method uses a generalized recipe for the conservation of angular momentum after forbidden hops and an approximation for determining the nonadiabatic transition direction, which is crucial when the coupling vector is unavailable. This method also eliminates the need for a rigorous location of the seam surface, thereby ensuring its applicability to a wide class of chemical systems. In a test calculation, we implement the method for the DH2+ system, and it shows remarkable agreement with the previous results of C. Zhu, H. Kamisaka, and H. Nakamura [J. Chem. Phys. 116, 3234 (2002)]. We then apply it to a diatomics-in-molecules model system with a conical intersection, and the results compare well with exact quantum calculations. The successful application to the conical intersection system confirms the possibility of directly extending the present method to an arbitrary potential of general topology.
Teaching General Principles and Applications of Dendrogeomorphology.
ERIC Educational Resources Information Center
Butler, David R.
1987-01-01
Tree-ring analysis in geomorphology can be incorporated into a number of undergraduate methods in order to reconstruct the history of a variety of geomorphic processes. Discusses dendrochronology, general principles of dendrogeomorphology, field sampling methods, laboratory techniques, and examples of applications. (TW)
Seventh NASTRAN User's Colloquium
NASA Technical Reports Server (NTRS)
1978-01-01
The general application of finite element methodology and the specific application of NASTRAN to a wide variety of static and dynamic structural problems are described. Topics include: fluids and thermal applications, NASTRAN programming, substructuring methods, unique new applications, general auxiliary programs, specific applications, and new capabilities.
ERIC Educational Resources Information Center
White, Brian
2004-01-01
This paper presents a generally applicable method for characterizing subjects' hypothesis-testing behaviour, based on a synthesis that extends previous work. Beginning with a transcript of subjects' speech and a videotape of their actions, a Reasoning Map is created that depicts the flow of their hypotheses, tests, predictions, results, and…
NASA Astrophysics Data System (ADS)
Nagai, Tetsuro
2017-01-01
Replica-exchange molecular dynamics (REMD) has demonstrated its efficiency by combining trajectories of a wide range of temperatures. As an extension of the method, the author formalizes the mass-manipulating replica-exchange molecular dynamics (MMREMD) method that allows for arbitrary mass scaling with respect to temperature and individual particles. The formalism enables the versatile application of mass-scaling approaches to the REMD method. The key change introduced in the novel formalism is the generalized rules for the velocity and momentum scaling after accepted replica-exchange attempts. As an application of this general formalism, the refinement of the viscosity-REMD (V-REMD) method [P. H. Nguyen,
A generalized simplest equation method and its application to the Boussinesq-Burgers equation.
Sudao, Bilige; Wang, Xiaomin
2015-01-01
In this paper, a generalized simplest equation method is proposed to seek exact solutions of nonlinear evolution equations (NLEEs). In the method, we choose a solution expression with a variable coefficient and a variable-coefficient ordinary differential auxiliary equation. This method can yield a Bäcklund transformation between NLEEs and a related constraint equation. By dealing with the constraint equation, we can derive an infinite number of exact solutions for NLEEs. These solutions include traveling wave solutions, non-traveling wave solutions, multi-soliton solutions, rational solutions, and other types of solutions. As applications, we obtained wide classes of exact solutions for the Boussinesq-Burgers equation by using the generalized simplest equation method.
Eleventh NASTRAN User's Colloquium
NASA Technical Reports Server (NTRS)
1983-01-01
NASTRAN (NASA STRUCTURAL ANALYSIS) is a large, comprehensive, nonproprietary, general purpose finite element computer code for structural analysis which was developed under NASA sponsorship. The Eleventh Colloquium provides some comprehensive general papers on the application of finite element methods in engineering, comparisons with other approaches, unique applications, pre- and post-processing or auxiliary programs, and new methods of analysis with NASTRAN.
A general method for decomposing the causes of socioeconomic inequality in health.
Heckley, Gawain; Gerdtham, Ulf-G; Kjellsson, Gustav
2016-07-01
We introduce a general decomposition method applicable to all forms of bivariate rank dependent indices of socioeconomic inequality in health, including the concentration index. The technique is based on recentered influence function regression and requires only the application of OLS to a transformed variable with similar interpretation. Our method requires few identifying assumptions to yield valid estimates in most common empirical applications, unlike current methods favoured in the literature. Using the Swedish Twin Registry and a within twin pair fixed effects identification strategy, our new method finds no evidence of a causal effect of education on income-related health inequality. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
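As background for the decomposition, the bivariate rank-dependent index the authors generalize can be illustrated with the familiar concentration index. The sketch below is an illustration only, not the authors' RIF-regression estimator; the function name is hypothetical. It computes the standard (uncorrected) concentration index as twice the covariance between health and the fractional socioeconomic rank, scaled by mean health:

```python
import numpy as np

def concentration_index(health, ses):
    """Standard concentration index: 2 * cov(health, fractional SES rank)
    divided by mean health. Positive values indicate health concentrated
    among the better-off."""
    health = np.asarray(health, dtype=float)
    order = np.argsort(np.asarray(ses))          # rank individuals by SES
    ranks = np.empty(len(health))
    ranks[order] = (np.arange(len(health)) + 0.5) / len(health)  # fractional rank
    return 2.0 * np.cov(health, ranks, bias=True)[0, 1] / health.mean()
```

For health perfectly proportional to SES rank, the index approaches 1/3; for equally distributed health it is zero, which matches the interpretation of the index as a rank-weighted inequality measure.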
Measurement of residual stresses by the moire method
NASA Astrophysics Data System (ADS)
Sciammarella, C. A.; Albertazzi, A., Jr.
Three different applications of the moire method to the determination of residual stresses and strains are presented. The three applications take advantage of the property of gratings to record the changes of the surface they are printed on. One application deals with thermal residual stresses, another with contact residual stress, and the third is a generalization of the blind-hole technique. This last application is based on a computer-assisted moire technique and on a generalization of the quasi-heterodyne techniques of fringe pattern analysis.
Iterative computation of generalized inverses, with an application to CMG steering laws
NASA Technical Reports Server (NTRS)
Steincamp, J. W.
1971-01-01
A cubically convergent iterative method for computing the generalized inverse of an arbitrary M X N matrix A is developed and a FORTRAN subroutine by which the method was implemented for real matrices on a CDC 3200 is given, with a numerical example to illustrate accuracy. Application to a redundant single-gimbal CMG assembly steering law is discussed.
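A cubically convergent iteration of this kind can be sketched with the order-3 hyper-power method, which refines an approximation X of the generalized inverse via X ← X(I + R + R²) with residual R = I − AX. This is an illustrative Python sketch, not the paper's FORTRAN subroutine, and the function name is hypothetical:

```python
import numpy as np

def pinv_hyperpower(A, tol=1e-12, max_iter=100):
    """Order-3 (cubically convergent) hyper-power iteration for the
    Moore-Penrose inverse of an arbitrary m x n real matrix."""
    A = np.asarray(A, dtype=float)
    # Safe starting guess: alpha * A^T with alpha <= 1/sigma_max^2,
    # guaranteed because sigma_max^2 <= ||A||_1 * ||A||_inf.
    alpha = 1.0 / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    X = alpha * A.T
    I = np.eye(A.shape[0])
    for _ in range(max_iter):
        R = I - A @ X                    # residual of the current estimate
        X_next = X @ (I + R @ (I + R))   # X(I + R + R^2): cubic convergence
        if np.linalg.norm(X_next - X) <= tol * np.linalg.norm(X):
            return X_next
        X = X_next
    return X
```

Because the starting guess is a scaled transpose, the iteration converges to the Moore-Penrose inverse even for rank-deficient matrices.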
ERIC Educational Resources Information Center
Maggin, Daniel M.; Swaminathan, Hariharan; Rogers, Helen J.; O'Keeffe, Breda V.; Sugai, George; Horner, Robert H.
2011-01-01
A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of…
A Generalized Method of Image Analysis from an Intercorrelation Matrix which May Be Singular.
ERIC Educational Resources Information Center
Yanai, Haruo; Mukherjee, Bishwa Nath
1987-01-01
This generalized image analysis method is applicable to singular and non-singular correlation matrices (CMs). Using the orthogonal projector and a weaker generalized inverse matrix, image and anti-image covariance matrices can be derived from a singular CM. (SLD)
40 CFR 63.90 - Program overview.
Code of Federal Regulations, 2011 CFR
2011-07-01
... “proven technology” (generally accepted by the scientific community as equivalent or better) that is... enforceable test method involving “proven technology” (generally accepted by the scientific community as... interest; and (3) “Combining” a federally required method with another proven method for application to...
40 CFR 63.90 - Program overview.
Code of Federal Regulations, 2010 CFR
2010-07-01
... “proven technology” (generally accepted by the scientific community as equivalent or better) that is... enforceable test method involving “proven technology” (generally accepted by the scientific community as... interest; and (3) “Combining” a federally required method with another proven method for application to...
A connectionist model for dynamic control
NASA Technical Reports Server (NTRS)
Whitfield, Kevin C.; Goodall, Sharon M.; Reggia, James A.
1989-01-01
The application of a connectionist modeling method known as competition-based spreading activation to a camera tracking task is described. The potential for automating control and planning applications using connectionist technology is explored, with emphasis on applications suitable for use in the NASA Space Station and in related space activities. The results are quite general and could be applicable to other control systems.
40 CFR 18.6 - Method of Application.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 1 2014-07-01 2014-07-01 false Method of Application. 18.6 Section 18.6 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL ENVIRONMENTAL PROTECTION RESEARCH FELLOWSHIPS AND SPECIAL RESEARCH CONSULTANTS FOR ENVIRONMENTAL PROTECTION § 18.6 Method of...
NASA Astrophysics Data System (ADS)
Silenko, Alexander J.
2016-02-01
General properties of the Foldy-Wouthuysen transformation, which is widely used in quantum mechanics and quantum chemistry, are considered. Merits and demerits of the original Foldy-Wouthuysen transformation method are analyzed. While this method does not satisfy the Eriksen condition of the Foldy-Wouthuysen transformation, it can be corrected with the use of the Baker-Campbell-Hausdorff formula. We show the possibility of such a correction and propose an appropriate algorithm of calculations. The applicability of the corrected Foldy-Wouthuysen method is restricted by the condition of convergence of the series of relativistic corrections.
Generalized Ordinary Differential Equation Models
Miao, Hongyu; Wu, Hulin; Xue, Hongqi
2014-01-01
Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787
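The coupling of numerical ODE solution and parameter estimation that such methods must handle can be illustrated with a toy least-squares fit. This is a deliberately simple sketch, not the authors' GODE likelihood estimator; the decay model, the grid search, and all names are illustrative assumptions:

```python
import numpy as np

def rk4(f, y0, ts, theta):
    """Fixed-step 4th-order Runge-Kutta integration of dy/dt = f(t, y, theta)."""
    ys = [np.asarray(y0, dtype=float)]
    for t0, t1 in zip(ts[:-1], ts[1:]):
        h, y = t1 - t0, ys[-1]
        k1 = f(t0, y, theta)
        k2 = f(t0 + h / 2, y + h * k1 / 2, theta)
        k3 = f(t0 + h / 2, y + h * k2 / 2, theta)
        k4 = f(t1, y + h * k3, theta)
        ys.append(y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6)
    return np.array(ys)

# Toy decay model dy/dt = -theta * y, observed with measurement noise.
f = lambda t, y, th: -th * y
ts = np.linspace(0.0, 5.0, 51)
rng = np.random.default_rng(3)
obs = rk4(f, [1.0], ts, 0.7)[:, 0] + rng.normal(0.0, 0.01, len(ts))

# Grid-search least squares over the rate parameter: each candidate
# requires a full numerical solve, so numerical error enters the fit.
grid = np.linspace(0.1, 1.5, 141)
sse = [((rk4(f, [1.0], ts, th)[:, 0] - obs) ** 2).sum() for th in grid]
theta_hat = grid[int(np.argmin(sse))]
```

The point of the sketch is that both measurement error (the added noise) and numerical error (the RK4 discretization) propagate into the estimate, which is exactly the asymptotic issue the paper analyzes rigorously.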
The Need for a Contemporary Theory of Job Design.
ERIC Educational Resources Information Center
Martelli, Joseph T.
1982-01-01
Presents a critique of Taylor's scientific management theory and the negative consequences of work simplification. Compares this method with Maslow's, Herzberg's, and Thorsrud's theories of motivation, and contrasts the experiences of General Motors' application of Taylor's model and General Foods' application of Thorsrud's. (SK)
19 CFR 201.9 - Methods employed in obtaining information.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 19 Customs Duties 3 2014-04-01 2014-04-01 false Methods employed in obtaining information. 201.9 Section 201.9 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Initiation and Conduct of Investigations § 201.9 Methods employed in obtaining information. In...
19 CFR 201.9 - Methods employed in obtaining information.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 19 Customs Duties 3 2013-04-01 2013-04-01 false Methods employed in obtaining information. 201.9 Section 201.9 Customs Duties UNITED STATES INTERNATIONAL TRADE COMMISSION GENERAL RULES OF GENERAL APPLICATION Initiation and Conduct of Investigations § 201.9 Methods employed in obtaining information. In...
Code of Federal Regulations, 2010 CFR
2010-04-01
...) generally constitutes the use of an impermissible method of accounting, requiring a change to a permissible...)(i). (ii) Change in method of accounting; adoption of method of accounting—(A) In general. The annual... change to or from either of these methods is a change in method of accounting that requires the consent...
Tsou, Tsung-Shan
2007-03-30
This paper introduces an exploratory way to determine how variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method could easily reveal a mean-variance relationship that would generally be left unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
Sgammato, Adrienne N.
2009-01-01
This study examined the applicability of a relatively new unidimensional, unfolding item response theory (IRT) model called the generalized graded unfolding model (GGUM; Roberts, Donoghue, & Laughlin, 2000). A total of four scaling methods were applied. Two commonly used cumulative IRT models for polytomous data, the Partial Credit Model and…
NASA Technical Reports Server (NTRS)
Zimmerle, D.; Bernhard, R. J.
1985-01-01
An alternative method for performing singular boundary element integrals for applications in linear acoustics is discussed. The method separates the integral of the characteristic solution into a singular and a nonsingular part. The singular portion is integrated with a combination of analytic and numerical techniques, while the nonsingular portion is integrated with standard Gaussian quadrature. The method may be generalized to many types of subparametric elements. The integrals over elements containing the root node are considered, and the characteristic solutions for linear acoustic problems are examined. The method may be generalized to most characteristic solutions.
NASA Technical Reports Server (NTRS)
Schmidt, H.; Tango, G. J.; Werby, M. F.
1985-01-01
A new matrix method for rapid wave propagation modeling in generalized stratified media, which has recently been applied to numerical simulations in diverse areas of underwater acoustics, solid earth seismology, and nondestructive ultrasonic scattering, is explained and illustrated. A portion of recent efforts jointly undertaken by the NATO SACLANT and NORDA numerical modeling groups in developing, implementing, and testing a new fast general-applications wave propagation algorithm, SAFARI, formulated at SACLANT, is summarized. The present general-applications SAFARI program uses a Direct Global Matrix Approach to multilayer Green's function calculation. A rapid and unconditionally stable solution is readily obtained via simple Gaussian elimination on the resulting sparsely banded block system, precisely analogous to that arising in the Finite Element Method. The resulting gains in accuracy and computational speed allow consideration of much larger multilayered air/ocean/Earth/engineering material media models, for many more source-receiver configurations than previously possible. The validity and versatility of the SAFARI-DGM method is demonstrated by reviewing three practical examples of engineering interest, drawn from ocean acoustics, engineering seismology, and ultrasonic scattering.
NASA Technical Reports Server (NTRS)
Walton, William C., Jr.
1960-01-01
This paper reports the findings of an investigation of a finite-difference method directly applicable to calculating static or simple harmonic flexures of solid plates and potentially useful in other problems of structural analysis. The method, which was proposed in a doctoral thesis by John C. Houbolt, is based on linear theory and incorporates the principle of minimum potential energy. Full realization of its advantages requires use of high-speed computing equipment. After a review of Houbolt's method, results of some applications are presented and discussed. The applications consisted of calculations of the natural modes and frequencies of several uniform-thickness cantilever plates and, as a special case of interest, calculations of the modes and frequencies of the uniform free-free beam. Computed frequencies and nodal patterns for the first five or six modes of each plate are compared with existing experiments, and those for one plate are compared with another approximate theory. Beam computations are compared with exact theory. On the basis of the comparisons it is concluded that the method is accurate and general in predicting plate flexures, and additional applications are suggested. An appendix is devoted to computing procedures which evolved in the course of the applications and which facilitate use of the method in conjunction with high-speed computing equipment.
Mathematical foundations of hybrid data assimilation from a synchronization perspective
NASA Astrophysics Data System (ADS)
Penny, Stephen G.
2017-12-01
The state-of-the-art data assimilation methods used today in operational weather prediction centers around the world can be classified as generalized one-way coupled impulsive synchronization. This classification permits the investigation of hybrid data assimilation methods, which combine dynamic error estimates of the system state with long time-averaged (climatological) error estimates, from a synchronization perspective. Illustrative results show how dynamically informed formulations of the coupling matrix (via an Ensemble Kalman Filter, EnKF) can lead to synchronization when observing networks are sparse and how hybrid methods can lead to synchronization when those dynamic formulations are inadequate (due to small ensemble sizes). A large-scale application with a global ocean general circulation model is also presented. Results indicate that the hybrid methods also have useful applications in generalized synchronization, in particular, for correcting systematic model errors.
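The "dynamically informed coupling matrix" described above can be made concrete with a minimal stochastic Ensemble Kalman Filter analysis step, in which the Kalman gain built from the ensemble covariance plays the coupling role. This is an illustrative sketch under simplifying assumptions (full-rank direct inversion, no localization or inflation), not the paper's operational implementation:

```python
import numpy as np

def enkf_update(ensemble, H, y, obs_var, rng):
    """One stochastic EnKF analysis step.
    ensemble: (n_state, n_members) forecast ensemble,
    H: (n_obs, n_state) linear observation operator,
    y: (n_obs,) observations, obs_var: observation error variance."""
    n = ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (n - 1)                          # ensemble covariance
    R = obs_var * np.eye(len(y))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain = coupling matrix
    # Perturbed observations keep the analysis spread statistically consistent.
    Y = y[:, None] + rng.normal(0.0, np.sqrt(obs_var), (len(y), n))
    return ensemble + K @ (Y - H @ ensemble)
```

With an accurate observation, the analysis ensemble mean is pulled strongly toward the observed value, which is the synchronizing effect the classification above formalizes.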
Computation of viscous incompressible flows
NASA Technical Reports Server (NTRS)
Kwak, Dochan
1989-01-01
Incompressible Navier-Stokes solution methods and their applications to three-dimensional flows are discussed. A brief review of existing methods is given followed by a detailed description of recent progress on development of three-dimensional generalized flow solvers. Emphasis is placed on primitive variable formulations which are most promising and flexible for general three-dimensional computations of viscous incompressible flows. Both steady- and unsteady-solution algorithms and their salient features are discussed. Finally, examples of real world applications of these flow solvers are given.
A quantum–quantum Metropolis algorithm
Yung, Man-Hong; Aspuru-Guzik, Alán
2012-01-01
The classical Metropolis sampling method is a cornerstone of many statistical modeling applications that range from physics, chemistry, and biology to economics. This method is particularly suitable for sampling the thermal distributions of classical systems. The challenge of extending this method to the simulation of arbitrary quantum systems is that, in general, eigenstates of quantum Hamiltonians cannot be obtained efficiently with a classical computer. However, this challenge can be overcome by quantum computers. Here, we present a quantum algorithm which fully generalizes the classical Metropolis algorithm to the quantum domain. The meaning of quantum generalization is twofold: The proposed algorithm is not only applicable to both classical and quantum systems, but also offers a quantum speedup relative to the classical counterpart. Furthermore, unlike the classical method of quantum Monte Carlo, this quantum algorithm does not suffer from the negative-sign problem associated with fermionic systems. Applications of this algorithm include the study of low-temperature properties of quantum systems, such as the Hubbard model, and preparing the thermal states of sizable molecules to simulate, for example, chemical reactions at an arbitrary temperature. PMID:22215584
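The classical Metropolis method that the quantum algorithm generalizes can be sketched in a few lines for a one-dimensional system sampling the Boltzmann distribution p(x) ∝ exp(−βE(x)). This is an illustrative classical sketch only, unrelated to the quantum algorithm itself; the function name and harmonic test potential are assumptions:

```python
import math
import random

def metropolis_sample(energy, beta, n_steps, x0=0.0, step=0.5, seed=1):
    """Classical Metropolis sampling of p(x) ~ exp(-beta * energy(x))."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)           # symmetric proposal
        dE = energy(x_new) - energy(x)
        # Accept with probability min(1, exp(-beta * dE)).
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            x = x_new
        samples.append(x)
    return samples

# Harmonic oscillator E(x) = x^2 / 2: the stationary distribution is a
# Gaussian with mean 0 and variance 1/beta.
samples = metropolis_sample(lambda x: 0.5 * x * x, beta=1.0, n_steps=50000)
```

For quantum Hamiltonians this recipe breaks down because the energy of an arbitrary eigenstate cannot be evaluated efficiently on a classical computer, which is precisely the gap the quantum generalization closes.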
A second-order shock-expansion method applicable to bodies of revolution near zero lift
NASA Technical Reports Server (NTRS)
1957-01-01
A second-order shock-expansion method applicable to bodies of revolution is developed by the use of the predictions of the generalized shock-expansion method in combination with characteristics theory. Equations defining the zero-lift pressure distributions and the normal-force and pitching-moment derivatives are derived. Comparisons with experimental results show that the method is applicable at values of the similarity parameter, the ratio of free-stream Mach number to nose fineness ratio, from about 0.4 to 2.
29 CFR 4281.17 - Asset valuation methods-in general.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Asset valuation methods-in general. 4281.17 Section 4281.17 Labor Regulations Relating to Labor (Continued) PENSION BENEFIT GUARANTY CORPORATION INSOLVENCY, REORGANIZATION, TERMINATION, AND OTHER RULES APPLICABLE TO MULTIEMPLOYER PLANS DUTIES OF PLAN SPONSOR FOLLOWING...
29 CFR 4281.13 - Benefit valuation methods-in general.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 9 2010-07-01 2010-07-01 false Benefit valuation methods-in general. 4281.13 Section 4281.13 Labor Regulations Relating to Labor (Continued) PENSION BENEFIT GUARANTY CORPORATION INSOLVENCY, REORGANIZATION, TERMINATION, AND OTHER RULES APPLICABLE TO MULTIEMPLOYER PLANS DUTIES OF PLAN SPONSOR FOLLOWING...
Eric H. Wharton; Douglas M. Griffith
1993-01-01
Presents methods for synthesizing information from existing biomass literature when making biomass assessments over extensive geographic areas, such as for a state or region. Described are general applications to the northeastern United States, and specific applications to Ohio. Tables of appropriate regression equations and the tree and shrub species to which these...
A General Symbolic Method with Physical Applications
NASA Astrophysics Data System (ADS)
Smith, Gregory M.
2000-06-01
A solution to the problem of unifying the General Relativistic and Quantum Theoretical formalisms is given which introduces a new non-axiomatic symbolic method and an algebraic generalization of the Calculus to non-finite symbolisms without reference to the concept of a limit. An essential feature of the non-axiomatic method is the inadequacy of any (finite) statements: Identifying this aspect of the theory with the "existence of an external physical reality" both allows for the consistency of the method with the results of experiments and avoids the so-called "measurement problem" of quantum theory.
26 CFR 1.199-0 - Table of contents.
Code of Federal Regulations, 2010 CFR
2010-04-01
... receipts. (1) In general. (2) Reasonable method of allocation. (3) De minimis rules. (i) DPGR. (ii) Non... of completion method. (3) Examples. § 1.199-2Wage limitation. (a) Rules of application. (1) In... reported on return filed with the Social Security Administration. (i) In general. (ii) Corrected return...
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas
2015-04-01
One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: (a) non-parametric methods that locate the changing point between the extreme and non-extreme regions of the data, (b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and (c) Goodness-of-Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds.
Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and is co-financed by the European Social Fund (ESF) and the Greek State.
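The peaks-over-threshold setup discussed above can be sketched with a minimal method-of-moments GPD fit to excesses over a fixed threshold. This is an illustrative toy on synthetic data, not one of the threshold-detection methods the study reviews; the function name and the exponential "rainfall" model are assumptions:

```python
import numpy as np

def fit_gpd_mom(excesses):
    """Method-of-moments fit of a Generalized Pareto Distribution to
    threshold excesses (valid for shape xi < 1/2): for the GPD,
    mean = sigma/(1-xi) and var = sigma^2/((1-xi)^2 (1-2 xi))."""
    m, v = excesses.mean(), excesses.var(ddof=1)
    xi = 0.5 * (1.0 - m * m / v)     # shape parameter
    sigma = m * (1.0 - xi)           # scale parameter
    return xi, sigma

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=20000)   # synthetic daily "rainfall"
u = np.quantile(x, 0.90)                     # threshold at the 90th percentile
xi, sigma = fit_gpd_mom(x[x > u] - u)
# By memorylessness, excesses of exponential data are again exponential,
# i.e. a GPD with shape xi = 0 and scale 2.
```

The toy also shows why threshold choice matters in practice: the fitted shape and scale are only meaningful if a GPD actually describes the excesses above the chosen u.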
Advances in satellite oceanography
NASA Technical Reports Server (NTRS)
Brown, O. B.; Cheney, R. E.
1983-01-01
Technical advances and recent applications of active and passive satellite remote sensing techniques to the study of oceanic processes are summarized. The general themes include infrared and visible radiometry, active and passive microwave sensors, and buoy location systems. The surface parameters of sea surface temperature, wind stress, sea state, altimetry, color, and ice are treated as applicable under each of the general methods.
NASA Technical Reports Server (NTRS)
Rosenfeld, Moshe
1990-01-01
The main goals are the development, validation, and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems. A solution method that combines a finite volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.
40 CFR 49.131 - General rule for open burning.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 1 2013-07-01 2013-07-01 false General rule for open burning. 49.131... General Rules for Application to Indian Reservations in Epa Region 10 § 49.131 General rule for open... eliminate open burning disposal practices where alternative methods are feasible and practicable, to...
40 CFR 49.131 - General rule for open burning.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 1 2012-07-01 2012-07-01 false General rule for open burning. 49.131... General Rules for Application to Indian Reservations in Epa Region 10 § 49.131 General rule for open... eliminate open burning disposal practices where alternative methods are feasible and practicable, to...
40 CFR 49.131 - General rule for open burning.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 1 2014-07-01 2014-07-01 false General rule for open burning. 49.131... General Rules for Application to Indian Reservations in Epa Region 10 § 49.131 General rule for open... eliminate open burning disposal practices where alternative methods are feasible and practicable, to...
Twelfth NASTRAN (R) Users' Colloquium
NASA Technical Reports Server (NTRS)
1984-01-01
NASTRAN is a large, comprehensive, nonproprietary, general-purpose finite element computer code for structural analysis. The Twelfth Users' Colloquium provides comprehensive papers on the application of finite element methods in engineering, comparisons with other approaches, unique applications, pre- and post-processing or auxiliary programs, and new methods of analysis with NASTRAN.
Development of the general interpolants method for the CYBER 200 series of supercomputers
NASA Technical Reports Server (NTRS)
Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.
1988-01-01
The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.
NASA Astrophysics Data System (ADS)
Jia, Xiaodong; Zhao, Ming; Di, Yuan; Li, Pin; Lee, Jay
2018-03-01
Sparsity has recently become an increasingly important topic in machine learning and signal processing. One large family of sparse measures in the current literature is the generalized lp/lq norm, which is scale invariant and is widely regarded as a normalized lp norm. However, the characteristics of the generalized lp/lq norm remain little discussed, and its application to the condition monitoring of rotating machinery has been unexplored. In this study, we first discuss the characteristics of the generalized lp/lq norm for sparse optimization and then propose a sparse filtering method based on the generalized lp/lq norm for the purpose of impulsive signature enhancement. Further driven by the trend of industrial big data and the need to reduce maintenance costs for industrial equipment, the proposed sparse filter is customized for vibration signal processing and implemented on bearings and gearboxes for condition monitoring. Based on the results of the industrial implementations in this paper, the proposed method proves to be a promising tool for impulsive feature enhancement, and its superiority over previous methods is also demonstrated.
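As an illustration of the measure discussed above, a minimal sketch of the generalized lp/lq norm as a sparsity score. The function name, test signals, and the choice p=1, q=2 are ours for illustration, not taken from the paper:

```python
import numpy as np

def lp_lq_norm(x, p=1.0, q=2.0):
    """Generalized lp/lq norm of a signal x: ||x||_p / ||x||_q.

    For p < q the ratio is scale invariant and decreases as the
    signal's energy concentrates in fewer samples (greater sparsity).
    """
    x = np.abs(np.asarray(x, dtype=float))
    return (x**p).sum() ** (1.0 / p) / (x**q).sum() ** (1.0 / q)

# A sparse (impulsive) signal scores lower than a dense one of equal l2 energy.
dense = np.ones(100)
sparse = np.zeros(100)
sparse[0] = 10.0          # same l2 norm as `dense`
print(lp_lq_norm(dense), lp_lq_norm(sparse))   # dense scores higher
```

Scale invariance is immediate from the definition: multiplying `x` by a constant rescales numerator and denominator identically.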
Sixth NASTRAN (R) Users' Colloquium
NASA Technical Reports Server (NTRS)
1977-01-01
Papers are presented on NASTRAN programming, and substructuring methods, as well as on fluids and thermal applications. Specific applications and capabilities of NASTRAN were also delineated along with general auxiliary programs.
Foo, Lee Kien; McGree, James; Duffull, Stephen
2012-01-01
Optimal design methods have been proposed to determine the best sampling times when sparse blood sampling is required in clinical pharmacokinetic studies. However, the optimal blood sampling time points may not be feasible in clinical practice. Sampling windows, a time interval for blood sample collection, have been proposed to provide flexibility in blood sampling times while preserving efficient parameter estimation. Because of the complexity of the population pharmacokinetic models, which are generally nonlinear mixed effects models, there is no analytical solution available to determine sampling windows. We propose a method for determination of sampling windows based on MCMC sampling techniques. The proposed method attains a stationary distribution rapidly and provides time-sensitive windows around the optimal design points. The proposed method is applicable to determine sampling windows for any nonlinear mixed effects model although our work focuses on an application to population pharmacokinetic models. Copyright © 2012 John Wiley & Sons, Ltd.
General Purpose Data-Driven Online System Health Monitoring with Applications to Space Operations
NASA Technical Reports Server (NTRS)
Iverson, David L.; Spirkovska, Lilly; Schwabacher, Mark
2010-01-01
Modern space transportation and ground support system designs are becoming increasingly sophisticated and complex. Determining the health state of these systems using traditional parameter limit checking, or model-based or rule-based methods is becoming more difficult as the number of sensors and component interactions grows. Data-driven monitoring techniques have been developed to address these issues by analyzing system operations data to automatically characterize normal system behavior. System health can be monitored by comparing real-time operating data with these nominal characterizations, providing detection of anomalous data signatures indicative of system faults, failures, or precursors of significant failures. The Inductive Monitoring System (IMS) is a general purpose, data-driven system health monitoring software tool that has been successfully applied to several aerospace applications and is under evaluation for anomaly detection in vehicle and ground equipment for next generation launch systems. After an introduction to IMS application development, we discuss these NASA online monitoring applications, including the integration of IMS with complementary model-based and rule-based methods. Although the examples presented in this paper are from space operations applications, IMS is a general-purpose health-monitoring tool that is also applicable to power generation and transmission system monitoring.
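To make the data-driven idea concrete, a toy sketch of IMS-style monitoring: characterize nominal behavior by clustering historical data, then score live samples by distance to the nearest nominal cluster. The crude k-means, synthetic data, and scoring are our assumptions, not NASA's IMS implementation:

```python
import numpy as np

def learn_nominal(data, n_clusters=4, iters=20, seed=0):
    """Crude k-means over nominal operations data -- a stand-in for the
    clustering a data-driven monitor uses to characterize normal behavior."""
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), n_clusters, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((data[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = data[labels == k].mean(axis=0)
    return centers

def anomaly_score(x, centers):
    """Distance from a live sample to the nearest nominal cluster center."""
    return np.sqrt(((centers - x) ** 2).sum(-1)).min()

rng = np.random.default_rng(1)
nominal = rng.normal(0.0, 1.0, size=(500, 3))    # synthetic training data
centers = learn_nominal(nominal, n_clusters=4)
print(anomaly_score(np.zeros(3), centers))        # near nominal: small score
print(anomaly_score(np.full(3, 8.0), centers))    # fault-like signature: large
```

In practice a monitor would compare the score against a threshold calibrated on held-out nominal data before raising an anomaly flag.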
Structural Embeddings: Mechanization with Method
NASA Technical Reports Server (NTRS)
Munoz, Cesar; Rushby, John
1999-01-01
The most powerful tools for analysis of formal specifications are general-purpose theorem provers and model checkers, but these tools provide scant methodological support. Conversely, those approaches that do provide a well-developed method generally have less powerful automation. It is natural, therefore, to try to combine the better-developed methods with the more powerful general-purpose tools. An obstacle is that the methods and the tools often employ very different logics. We argue that methods are separable from their logics and are largely concerned with the structure and organization of specifications. We propose a technique called structural embedding that allows the structural elements of a method to be supported by a general-purpose tool, while substituting the logic of the tool for that of the method. We have found this technique quite effective, and we provide some examples of its application. We also suggest how general-purpose systems could be restructured to support this activity better.
A Research Context for Diagnostic and Prescriptive Mathematics.
ERIC Educational Resources Information Center
Engelhardt, Jon; Uprichard, A. Edward
1998-01-01
Argues that future research initiatives on learning and instruction will be most worthy if they are grounded in general systems theory and employ multiple research methods. Presents an application of general systems theory to research on learning and instruction, including a system of research methods and…
Analytical approximate solutions for a general class of nonlinear delay differential equations.
Căruntu, Bogdan; Bota, Constantin
2014-01-01
We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.
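The pantograph application mentioned above can be sketched directly: the residual of y'(t) = a·y(t) + b·y(qt) with y(0) = 1 is linear in the polynomial coefficients, so the least-squares minimization reduces to a single linear solve. Parameter values, degree, and discretization below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def plsm_pantograph(a=-1.0, b=0.5, q=0.5, deg=8, T=1.0, npts=50):
    """Polynomial least-squares approximation for the pantograph equation
    y'(t) = a*y(t) + b*y(q*t), y(0) = 1, on [0, T].

    With y(t) = 1 + sum_{i>=1} c_i t^i, the residual at each collocation
    point is linear in the c_i, so ordinary least squares applies.
    """
    t = np.linspace(0.0, T, npts)
    # Residual operator applied to each basis monomial t**i, i = 1..deg
    A = np.stack([i * t**(i - 1) - a * t**i - b * (q * t)**i
                  for i in range(1, deg + 1)], axis=1)
    rhs = np.full(npts, a + b)       # moves the fixed c0 = 1 term to the RHS
    c, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    coeffs = np.concatenate(([1.0], c))          # ascending powers
    return lambda s: np.polyval(coeffs[::-1], s)  # polyval wants descending

y = plsm_pantograph()
# Sanity check: y'(t) - a*y(t) - b*y(q*t) should be near zero on [0, T]
```

Accuracy can be judged, as in the paper, by the size of the residual or by comparison with solutions from other methods.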
Dynamics of local grid manipulations for internal flow problems
NASA Technical Reports Server (NTRS)
Eiseman, Peter R.; Snyder, Aaron; Choo, Yung K.
1991-01-01
The control point method of algebraic grid generation is briefly reviewed. The review proceeds from a general statement of the method in 2-D, unencumbered by detailed mathematical formulation. The method is supported by an introspective discussion which provides the basis for confidence in the approach. The more complex 3-D formulation is then presented as a natural generalization. Application of the method is carried out through 2-D examples that demonstrate the technique.
NASA Technical Reports Server (NTRS)
1993-01-01
This plan provides the framework for selection based on merit from among the best qualified candidates available. Selections will be made without regard to political, religious, or labor organization affiliation or nonaffiliation, marital status, race, color, sex, national origin, nondisqualifying disability, or age. This plan does not guarantee promotion but rather ensures that all qualified available candidates receive fair and equitable consideration for positions filled under these competitive procedures. Announcing a vacancy under this plan is only one method of locating applicants for a position and can be used in conjunction with other methods. Subject to applicable law and regulation, selection of an individual to fill a position is the decision of management, as is the decision as to the method(s) to be used in identifying candidates. This plan is applicable to all NASA Installations. It covers all positions in the competitive service at (and below) the GS/GM-15 level (including all trades and labor positions), except positions in the Office of the Inspector General. The requirements herein are not intended to, nor should they be construed to limit in any way, the independent personnel authority of the Inspector General under the Inspector General Act, as Amended.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-05
... determination method (AEDM) for small electric motors, including the statistical requirements to substantiate... restriction to a particular application or type of application; or (2) Standard operating characteristics or... application, and which can be used in most general purpose applications. [[Page 652
Application of volume rendering technique (VRT) for musculoskeletal imaging.
Darecki, Rafał
2002-10-30
A review of applications of the volume rendering technique in musculoskeletal three-dimensional imaging from CT data. General features, potential, and indications for applying the method are presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Construction of RFIF using VVSFs with application
NASA Astrophysics Data System (ADS)
Katiyar, Kuldip; Prasad, Bhagwati
2017-10-01
A method of variable vertical scaling factors (VVSFs) is proposed to define the recurrent fractal interpolation function (RFIF) for fitting data sets. A generalization of one of the recent methods, using an analytic approach, is presented for finding variable vertical scaling factors. An application to the reconstruction of an EEG signal is also given.
A substructure coupling procedure applicable to general linear time-invariant dynamic systems
NASA Technical Reports Server (NTRS)
Howsman, T. G.; Craig, R. R., Jr.
1984-01-01
A substructure synthesis procedure applicable to structural systems containing general nonconservative terms is presented. In their final form, the nonself-adjoint substructure equations of motion are cast in state vector form through the use of a variational principle. A reduced-order model for each substructure is implemented by representing the substructure as a combination of a small number of Ritz vectors. For the method presented, the substructure Ritz vectors are identified as a truncated set of substructure eigenmodes, which are typically complex, along with a set of generalized real attachment modes. The formation of the generalized attachment modes does not require any knowledge of the substructure flexible modes; hence, only the eigenmodes used explicitly as Ritz vectors need to be extracted from the substructure eigenproblem. An example problem is presented to illustrate the method.
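A minimal sketch of the Ritz-reduction step described above, using a real symmetric spring-mass chain as a stand-in for the complex-mode, nonconservative case the paper treats. The toy system and the choice to retain two modes are ours:

```python
import numpy as np

def ritz_reduce(M, K, T):
    """Reduced-order substructure matrices from a Ritz basis T
    (columns = retained eigenmodes / attachment modes)."""
    return T.T @ M @ T, T.T @ K @ T

# 4-DOF spring-mass chain, unit masses and springs, fixed at one end
n = 4
M = np.eye(n)
K = 2 * np.eye(n) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
K[-1, -1] = 1.0                     # free end
w2, V = np.linalg.eigh(K)           # with M = I this is the full eigenproblem
Mr, Kr = ritz_reduce(M, K, V[:, :2])  # keep the two lowest modes
```

Because the retained basis here consists of exact eigenmodes, the reduced model reproduces the two lowest eigenvalues of the full system; with attachment modes instead, the reduced model would approximate them.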
General image method in a plane-layered elastostatic medium
NASA Technical Reports Server (NTRS)
Fares, N.; Li, V. C.
1988-01-01
The general-image method presently used to obtain the elastostatic fields in plane-layered media relies on the use of potentials to represent elastic fields. For the case of a single interface, this method yields the displacement field in closed form, and is applicable to antiplane, plane, and three-dimensional problems. In the case of multiple plane interfaces, the image method generates the displacement fields as infinite series whose convergence can be accelerated to improve the method's efficiency.
NASA Astrophysics Data System (ADS)
Choi, Chu Hwan
2002-09-01
Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily interpreted. The application of midbond orbitals is used to determine a general method for use in calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy, and efficiency, we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Moller-Plesset (MP) perturbation theory as an efficient electron-correlative method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets, we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies reported in previous theoretical studies. The added efficiency of extending the basis sets by conventional means is compared with the performance of our midbond-extended basis sets. The improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. The interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular, alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) in speed, while being more stable than B3LYP, a widely used density functional theory (DFT) method. Application of our general method yields excellent results for the midbond basis sets. Again, they prove superior to conventional extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.
Generalizing DTW to the multi-dimensional case requires an adaptive approach
Hu, Bing; Jin, Hongxia; Wang, Jun; Keogh, Eamonn
2017-01-01
In recent years Dynamic Time Warping (DTW) has emerged as the distance measure of choice for virtually all time series data mining applications. For example, virtually all applications that process data from wearable devices use DTW as a core sub-routine. This is the result of significant progress in improving DTW’s efficiency, together with multiple empirical studies showing that DTW-based classifiers at least equal (and generally surpass) the accuracy of all their rivals across dozens of datasets. Thus far, most of the research has considered only the one-dimensional case, with practitioners generalizing to the multi-dimensional case in one of two ways, dependent or independent warping. In general, it appears the community believes either that the two ways are equivalent, or that the choice is irrelevant. In this work, we show that this is not the case. The two most commonly used multi-dimensional DTW methods can produce different classifications, and neither one dominates over the other. This seems to suggest that one should learn the best method for a particular application. However, we will show that this is not necessary; a simple, principled rule can be used on a case-by-case basis to predict which of the two methods we should trust at the time of classification. Our method allows us to ensure that classification results are at least as accurate as the better of the two rival methods, and, in many cases, our method is significantly more accurate. We demonstrate our ideas with the most extensive set of multi-dimensional time series classification experiments ever attempted. PMID:29104448
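The dependent/independent distinction discussed above can be stated in a few lines of code: DTW_D warps all dimensions along one shared path, while DTW_I warps each dimension separately and sums the costs. A plain O(nm) sketch; the paper's adaptive case-by-case selection rule is not reproduced here:

```python
import numpy as np

def dtw(x, y, dist):
    """Classic O(n*m) dynamic-programming DTW with a pluggable
    per-sample distance function."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist(x[i - 1], y[j - 1]) + min(D[i - 1, j],
                                                     D[i, j - 1],
                                                     D[i - 1, j - 1])
    return D[n, m]

def dtw_dependent(X, Y):
    """DTW_D: one warping path shared by all dimensions."""
    return dtw(X, Y, lambda a, b: np.sum((a - b) ** 2))

def dtw_independent(X, Y):
    """DTW_I: each dimension warped separately, per-dimension costs summed."""
    return sum(dtw(X[:, d], Y[:, d], lambda a, b: (a - b) ** 2)
               for d in range(X.shape[1]))

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(10, 2)), rng.normal(size=(12, 2))
```

A useful sanity check: independent warping is at least as flexible as a shared path, so DTW_I never exceeds DTW_D for the same pair of series, yet the two can rank neighbors differently, which is exactly why neither dominates in classification.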
Numerical methods for large-scale, time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Turkel, E.
1979-01-01
A survey of numerical methods for time dependent partial differential equations is presented. The emphasis is on practical applications to large scale problems. A discussion of new developments in high order methods and moving grids is given. The importance of boundary conditions is stressed for both internal and external flows. A description of implicit methods is presented including generalizations to multidimensions. Shocks, aerodynamics, meteorology, plasma physics and combustion applications are also briefly described.
Plasticity - Theory and finite element applications.
NASA Technical Reports Server (NTRS)
Armen, H., Jr.; Levine, H. S.
1972-01-01
A unified presentation is given of the development and distinctions associated with various incremental solution procedures used to solve the equations governing the nonlinear behavior of structures, and this is discussed within the framework of the finite-element method. Although the primary emphasis here is on material nonlinearities, consideration is also given to geometric nonlinearities acting separately or in combination with nonlinear material behavior. The methods discussed here are applicable to a broad spectrum of structures, ranging from simple beams to general three-dimensional bodies. The finite-element analysis methods for material nonlinearity are general in the sense that any of the available plasticity theories can be incorporated to treat strain hardening or ideally plastic behavior.
Automatic Extraction of Planetary Image Features
NASA Technical Reports Server (NTRS)
Troglio, G.; LeMoigne, J.; Moser, G.; Serpico, S. B.; Benediktsson, J. A.
2009-01-01
With the launch of several Lunar missions such as the Lunar Reconnaissance Orbiter (LRO) and Chandrayaan-1, a large amount of Lunar imagery will be acquired and will need to be analyzed. Although many automatic feature extraction methods have been proposed and utilized for Earth remote sensing images, these methods are not always applicable to Lunar data, which often present low contrast and uneven illumination. In this paper, we propose a new method for the extraction of Lunar features (one that can be generalized to other planetary images), based on the combination of several image processing techniques, a watershed segmentation, and the generalized Hough transform. This feature extraction has many applications, among which is image registration.
Illustrated structural application of universal first-order reliability method
NASA Technical Reports Server (NTRS)
Verderaime, V.
1994-01-01
The general application of the proposed first-order reliability method was achieved through the universal normalization of engineering probability distribution data. The method superimposes prevailing deterministic techniques and practices on the first-order reliability method to surmount deficiencies of the deterministic method and provide benefits of reliability techniques and predictions. A reliability design factor is derived from the reliability criterion to satisfy a specified reliability and is analogous to the deterministic safety factor. Its application is numerically illustrated on several practical structural design and verification cases with interesting results and insights. Two concepts of reliability selection criteria are suggested. Though the method was developed to support affordable structures for access to space, the method should also be applicable for most high-performance air and surface transportation systems.
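The first-order idea underlying such methods can be illustrated on the simplest stress-strength case. The normal model, parameter values, and function names below are ours for illustration; the report's universal normalization of probability distribution data is not reproduced:

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order reliability index for independent normal resistance R
    and stress S: beta = (mu_R - mu_S) / sqrt(var_R + var_S)."""
    return (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)

def reliability(beta):
    """P(R > S) under the normal assumption, via the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(beta / math.sqrt(2.0)))

# Illustrative numbers: mean strength 60, mean load 40, both with sigma = 5
beta = reliability_index(mu_r=60.0, sigma_r=5.0, mu_s=40.0, sigma_s=5.0)
print(beta, reliability(beta))
```

The index beta plays the role the report assigns to the reliability design factor: specifying a target reliability fixes beta, which in turn fixes the required margin between strength and load, analogously to a deterministic safety factor.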
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messud, J.; Dinh, P. M.; Suraud, Eric
2009-10-15
We propose a simplification of the time-dependent self-interaction correction (TD-SIC) method using two sets of orbitals, applying the optimized effective potential (OEP) method. The resulting scheme is called time-dependent 'generalized SIC-OEP'. A straightforward approximation, using the spatial localization of one set of orbitals, leads to the 'generalized SIC-Slater' formalism. We show that it represents a great improvement compared to the traditional SIC-Slater and Krieger-Li-Iafrate formalisms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao Yajun
A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the method straightforward and effective to apply. As an application, a concrete family of soliton double solutions for the considered theory is obtained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutmacher, R.; Crawford, R.
This comprehensive guide to the analytical capabilities of Lawrence Livermore Laboratory's General Chemistry Division describes each analytical method in terms of its principle, field of application, and qualitative and quantitative uses. Also described are the state and quantity of sample required for analysis, processing time, available instrumentation, and responsible personnel.
Applications to car bodies - Generalized layout design of three-dimensional shells
NASA Technical Reports Server (NTRS)
Fukushima, Junichi; Suzuki, Katsuyuki; Kikuchi, Noboru
1993-01-01
We describe applications of the homogenization method, formulated in Part 1, to the layout design of car bodies represented by three-dimensional shell structures, based on multi-loading optimization.
A generalized transmultiplexer and its application to mobile satellite communications
NASA Technical Reports Server (NTRS)
Ichiyoshi, Osamu
1990-01-01
A generalization of digital transmultiplexer technology is presented. The proposed method can realize transmultiplexer (TMUX) and transdemultiplexer (TDUX) filter banks whose element filters have bandwidths greater than the channel spacing frequency. This feature is useful in many communications applications. As an example, a satellite switched (SS) Frequency Division Multiple Access (FDMA) system is proposed for spot beam satellite communications, particularly for mobile satellite communications.
Developing a multimodal biometric authentication system using soft computing methods.
Malcangi, Mario
2015-01-01
Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision making.
General method for designing wave shape transformers.
Ma, Hua; Qu, Shaobo; Xu, Zhuo; Wang, Jiafu
2008-12-22
An effective method for designing wave shape transformers (WSTs) is investigated using coordinate transformation theory. With this method, devices that transform electromagnetic (EM) wave fronts from one arbitrary shape and size to another can be designed. To verify the method, three examples in 2D space are presented. Compared with methods proposed in other literature, this method offers a general procedure for designing WSTs and is thus of great importance for the potential practical applications of such devices.
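The coordinate-transformation recipe such designs rest on is the standard transformation-optics rule (stated here as general background, not as the paper's specific procedure): under a mapping $x' = x'(x)$ with Jacobian $\Lambda$, the material tensors transform as

```latex
\varepsilon' = \frac{\Lambda\,\varepsilon\,\Lambda^{\mathrm{T}}}{\det\Lambda},
\qquad
\mu' = \frac{\Lambda\,\mu\,\Lambda^{\mathrm{T}}}{\det\Lambda},
\qquad
\Lambda_{ij} = \frac{\partial x'_i}{\partial x_j}.
```

A designer therefore picks the geometric mapping that reshapes the wave front as desired and reads off the required anisotropic $\varepsilon'$ and $\mu'$ from these relations.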
Maggin, Daniel M; Swaminathan, Hariharan; Rogers, Helen J; O'Keeffe, Breda V; Sugai, George; Horner, Robert H
2011-06-01
A new method for deriving effect sizes from single-case designs is proposed. The strategy is applicable to small-sample time-series data with autoregressive errors. The method uses Generalized Least Squares (GLS) to model the autocorrelation of the data and estimate regression parameters to produce an effect size that represents the magnitude of treatment effect from baseline to treatment phases in standard deviation units. In this paper, the method is applied to two published examples using common single case designs (i.e., withdrawal and multiple-baseline). The results from these studies are described, and the method is compared to ten desirable criteria for single-case effect sizes. Based on the results of this application, we conclude with observations about the use of GLS as a support to visual analysis, provide recommendations for future research, and describe implications for practice. Copyright © 2011 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
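A sketch of the GLS step described above, assuming a known AR(1) error correlation rather than estimating it from the data as the full method does. The data, seed, and function name are illustrative assumptions:

```python
import numpy as np

def gls_effect_size(y, phase, rho):
    """GLS estimate of a baseline-to-treatment shift with AR(1) errors.

    y     : outcome series
    phase : 0/1 indicator (0 = baseline, 1 = treatment)
    rho   : assumed lag-1 autocorrelation of the errors

    Returns the treatment coefficient divided by the GLS residual SD,
    i.e., the shift expressed in standard-deviation units.
    """
    n = len(y)
    X = np.column_stack([np.ones(n), phase])
    idx = np.arange(n)
    Sigma = rho ** np.abs(idx[:, None] - idx[None, :])  # AR(1), up to scale
    Si = np.linalg.inv(Sigma)
    beta = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
    resid = y - X @ beta
    sigma2 = (resid @ Si @ resid) / (n - X.shape[1])
    return beta[1] / np.sqrt(sigma2)

# Synthetic withdrawal-style data: true shift of 2 units, noise SD 0.5
phase = np.r_[np.zeros(10), np.ones(10)]
rng = np.random.default_rng(0)
y = 1.0 + 2.0 * phase + 0.5 * rng.normal(size=20)
es = gls_effect_size(y, phase, rho=0.0)
print(es)   # roughly 4 = true shift / noise SD
```

With `rho = 0` the estimator reduces to ordinary least squares; a positive `rho` downweights the effectively redundant adjacent observations.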
Efficient Analysis of Complex Structures
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.
2000-01-01
The various accomplishments achieved during this project are: (1) a survey of neural network (NN) applications using the MATLAB NN Toolbox in structural engineering, especially on equivalent continuum models (Appendix A); (2) application of NNs and GAs to simulate and synthesize substructures: 1-D and 2-D beam problems (Appendix B); (3) development of an equivalent plate-model analysis method (EPA) for static and vibration analysis of general trapezoidal built-up wing structures composed of skins, spars, and ribs, with calculation of a range of test cases and comparison with measurements or FEA results (Appendix C); (4) basic work on using second-order sensitivities to simulate wing modal response, a discussion of sensitivity evaluation approaches, and some results (Appendix D); (5) establishment of a general methodology for simulating modal responses by direct application of NNs and by sensitivity techniques, in a design space composed of a number of design points, with comparison through examples using these two methods (Appendix E); (6) establishment of a general methodology for efficient analysis of complex wing structures by indirect application of NNs: the NN-aided equivalent plate analysis, with training of the neural networks in several cases of design spaces, applicable to actual design of complex wings (Appendix F).
NASA Technical Reports Server (NTRS)
Chan, William M.
1992-01-01
The following papers are presented: (1) numerical methods for the simulation of complex multi-body flows with applications for the Integrated Space Shuttle vehicle; (2) a generalized scheme for 3-D hyperbolic grid generation; (3) collar grids for intersecting geometric components within the Chimera overlapped grid scheme; and (4) application of the Chimera overlapped grid scheme to simulation of Space Shuttle ascent flows.
Application of software technology to automatic test data analysis
NASA Technical Reports Server (NTRS)
Stagner, J. R.
1991-01-01
The verification process for a major software subsystem was partially automated as part of a feasibility demonstration. The methods employed are generally useful and applicable to other types of subsystems. The effort resulted in substantial savings in test engineer analysis time and offers a method for inclusion of automatic verification as a part of regression testing.
Assessment of Automated Measurement and Verification (M&V) Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granderson, Jessica; Touzani, Samir; Custodio, Claudine
This report documents the application of a general statistical methodology to assess the accuracy of baseline energy models, focusing on its application to Measurement and Verification (M&V) of whole-building energy savings.
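The core of such baseline-model M&V can be sketched in a few lines: fit a model on pre-retrofit data, project it onto post-retrofit conditions, and take the gap as the savings. The linear temperature model, synthetic data, and function name are our assumptions, not the report's methodology:

```python
import numpy as np

def mv_savings(temp_pre, use_pre, temp_post, use_post):
    """Simple M&V sketch: fit a linear baseline model of energy use vs.
    outdoor temperature on the pre-retrofit period, project it onto
    post-retrofit conditions, and report the avoided energy use."""
    slope, intercept = np.polyfit(temp_pre, use_pre, 1)
    predicted_post = slope * np.asarray(temp_post) + intercept
    return float(np.sum(predicted_post - np.asarray(use_post)))

# Synthetic example: the retrofit shaves 10 units off baseline use every day
temp_pre = np.linspace(0.0, 30.0, 31)
use_pre = 100.0 + 2.0 * temp_pre
use_post = 90.0 + 2.0 * temp_pre           # same weather, lower intercept
savings = mv_savings(temp_pre, use_pre, temp_pre, use_post)
print(savings)   # 31 days * 10 units = 310
```

Assessing whether such a baseline model is accurate enough to certify the savings, across many buildings and model types, is precisely the question the report's statistical methodology addresses.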
Ninth NASTRAN (R) Users' Colloquium
NASA Technical Reports Server (NTRS)
1980-01-01
The general application of finite element methodology and the specific application of NASTRAN to a wide variety of static and dynamic structural problems are addressed. Comparisons with other approaches and new methods of analysis with NASTRAN are included.
40 CFR 21.4 - Review of application.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 1 2011-07-01 2011-07-01 false Review of application. 21.4 Section 21.4 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GENERAL SMALL BUSINESS § 21.4 Review of..., efficiency, or technological standpoint. (c) An application which proposes additions, alterations, or methods...
ERIC Educational Resources Information Center
Jung, Kwanghee; Takane, Yoshio; Hwang, Heungsun; Woodward, Todd S.
2012-01-01
We propose a new method of structural equation modeling (SEM) for longitudinal and time series data, named Dynamic GSCA (Generalized Structured Component Analysis). The proposed method extends the original GSCA by incorporating a multivariate autoregressive model to account for the dynamic nature of data taken over time. Dynamic GSCA also…
Computing generalized Langevin equations and generalized Fokker-Planck equations.
Darve, Eric; Solomon, Jose; Kia, Amirali
2009-07-07
The Mori-Zwanzig formalism is an effective tool for deriving differential equations that describe the evolution of a small number of resolved variables. In this paper we present its application to the derivation of generalized Langevin equations and generalized non-Markovian Fokker-Planck equations. We show how long time scales, rates, and metastable basins can be extracted from these equations. Numerical algorithms are proposed to discretize these equations. An important aspect is the numerical solution of the orthogonal dynamics equation, which is a partial differential equation in a high-dimensional space; we propose efficient numerical methods to solve it. In addition, we present a projection formalism of the Mori-Zwanzig type that is applicable to discrete maps. Numerical applications from the field of Hamiltonian systems are presented.
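A generalized Langevin equation of the kind discussed above can be simulated cheaply when the memory kernel is exponential, because the non-Markovian equation is then equivalent to a Markovian system with one auxiliary variable. The sketch below uses that standard embedding (not the paper's Mori-Zwanzig derivation); the harmonic potential, parameters, and units are arbitrary choices for illustration.

```python
import random

# 1D GLE:  dv/dt = -U'(x) - \int K(t-s) v(s) ds + F(t),  K(t) = (g/tau) exp(-t/tau).
# With an exponential kernel this is equivalent to adding one auxiliary
# Ornstein-Uhlenbeck variable z that carries both memory friction and noise.

def simulate_gle(steps=20000, dt=1e-3, g=1.0, tau=0.5, kT=1.0, seed=1):
    rng = random.Random(seed)
    x, v, z = 1.0, 0.0, 0.0                  # position, velocity, memory force
    c = (2.0 * kT * g) ** 0.5 / tau          # noise scale from fluctuation-dissipation
    xs = []
    for _ in range(steps):                   # Euler-Maruyama time stepping
        v += dt * (-x + z)                   # harmonic force U(x) = x^2/2, plus memory force
        x += dt * v
        z += dt * (-(z + g * v) / tau) + c * dt ** 0.5 * rng.gauss(0.0, 1.0)
        xs.append(x)
    return xs

xs = simulate_gle()
print(len(xs), max(abs(u) for u in xs) < 50.0)   # → 20000 True
```

Integrating out z recovers exactly the friction term -∫ (g/τ)e^{-(t-s)/τ} v(s) ds, which is why this trick works only for exponential (or sums of exponential) kernels; general Mori-Zwanzig kernels require the numerical machinery the abstract describes.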
40 CFR 53.16 - Supersession of reference methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Supersession of reference methods. 53... (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.16 Supersession of reference methods. (a) This section prescribes procedures and criteria applicable to requests that...
General theory of conical flows and its application to supersonic aerodynamics
NASA Technical Reports Server (NTRS)
Germain, Paul
1955-01-01
Points treated in this report are: homogeneous flows, the general study of conical flows with infinitesimal cone angles, numerical or analog methods for the study of flows flattened in one direction, and a certain number of results. The applications of conical flows are considered thoroughly, demonstrating how one may solve, within the scope of linear theory and by combinations of conical flows, the general problems of the supersonic wing, taking into account dihedral and sweepback as well as fuselage and control-surface effects.
NASA Astrophysics Data System (ADS)
Tewary, Vinod K.; Fortunko, Christopher M.
The present time-dependent 3D Green's function method for studying the propagation of elastic waves in a general, anisotropic half-space resembles that used in the lattice dynamics of crystals. The method is used to calculate the scattering amplitude of elastic waves from a discontinuity in the half-space; exact results are obtained for 3D pulse propagation in a general, anisotropic half-space that contains either an interior point or a planar scatterer. The results are applicable to the design of ultrasonic scattering experiments, especially as an aid in defining the spatial and time-domain transducer responses that can maximize detection reliability for specific categories of flaws in highly anisotropic materials.
Nonlinear analysis of structures. [within framework of finite element method
NASA Technical Reports Server (NTRS)
Armen, H., Jr.; Levine, H.; Pifko, A.; Levy, A.
1974-01-01
The development of nonlinear analysis techniques within the framework of the finite-element method is reported. Although the emphasis is on nonlinearities associated with material behavior, a general treatment of geometric nonlinearity, alone or in combination with plasticity, is included, and applications are presented for a class of problems categorized as axisymmetric shells of revolution. The scope of the nonlinear analysis capabilities includes: (1) membrane stress analysis, (2) bending and membrane stress analysis, (3) analysis of thick and thin axisymmetric bodies of revolution, (4) general three-dimensional analysis, and (5) analysis of laminated composites. The methods are applied to a number of sample structures. Correlation with available analytic or experimental data ranges from good to excellent.
Exploiting the User: Adapting Personas for Use in Security Visualization Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoll, Jennifer C.; McColgin, David W.; Gregory, Michelle L.
It has long been noted that visual representations of complex information can facilitate rapid understanding of data [citation], even with respect to ComSec applications [citation]. Recognizing that visualizations can increase usability in ComSec applications, [Zurko, Sasse] have argued that there is a need to create more usable security visualizations (VisSec). However, usability of applications generally falls into the domain of Human-Computer Interaction (HCI), which typically relies on heavy-weight user-centered design (UCD) processes. For example, the UCD process can involve many prototype iterations, or an ethnographic field study that can take months to complete. The problem is that VisSec projects generally do not have the resources to perform ethnographic field studies or to employ complex UCD methods; they often run on tight deadlines and budgets that cannot afford standard UCD methods. To help resolve the conflict of needing more usable designs in ComSec without the resources for complex UCD methods, in this paper we offer a stripped-down, lighter-weight version of a UCD process that can help with capturing user requirements. The approach we use is personas, a user-requirements capture method arising out of the Participatory Design philosophy [Grudin02].
CRISPR-Cas in Medicinal Chemistry: Applications and Regulatory Concerns.
Duardo-Sanchez, Aliuska
2017-01-01
A rapid search of scientific publication databases shows how considerably the use of the CRISPR-Cas genome-editing technique has expanded, and its growing importance in modern molecular biology. In the PubMed platform alone, a search for the term gives more than 3000 results. In drug discovery, medicinal chemistry, and chemical biology in general, the CRISPR method may have multiple applications, including resistance-selection studies of antimalarial lead compounds, investigation of druggability, and development of animal models for chemical compound testing. In this paper, we offer a review of the most relevant scientific literature, illustrated with specific examples of the application of the CRISPR technique to medicinal chemistry and chemical biology. We also present a general overview of the main legal and ethical trends regarding this method of genome editing.
A general system for automatic biomedical image segmentation using intensity neighborhoods.
Chen, Cheng; Ozolek, John A; Wang, Wei; Rohde, Gustavo K
2011-01-01
Image segmentation is important, with applications to several problems in biology and medicine. While extensively researched, current segmentation methods generally perform adequately in the applications for which they were designed, but often require extensive modifications or calibrations before being used in a different application. We describe an approach that, with few modifications, can be used in a variety of image segmentation problems. The approach is based on a supervised learning strategy that utilizes intensity neighborhoods to assign each pixel in a test image its correct class based on training data. We describe methods for modeling rotations and variations in scale, as well as subset selection for training the classifiers. We show that the performance of our approach in tissue segmentation tasks in magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than that of several algorithms specifically designed for each of these applications.
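The core idea of classifying pixels by their intensity neighborhoods can be shown in miniature. This sketch omits the rotation/scale modeling and classifier training the abstract describes, reducing the method to its simplest form: represent each pixel by its flattened 3x3 patch and assign the label of the nearest training patch. The toy images are invented.

```python
# Sketch of segmentation by intensity neighborhoods (illustrative only):
# each pixel is represented by the vector of intensities in its 3x3
# neighborhood, and a test pixel takes the label of its nearest
# training neighborhood in that feature space.

def neighborhood(img, r, c):
    """Flattened 3x3 patch around (r, c), replicating edge pixels."""
    h, w = len(img), len(img[0])
    return [img[min(max(r + dr, 0), h - 1)][min(max(c + dc, 0), w - 1)]
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)]

def nn_segment(train_img, train_lab, test_img):
    feats = [(neighborhood(train_img, r, c), train_lab[r][c])
             for r in range(len(train_img)) for c in range(len(train_img[0]))]
    out = []
    for r in range(len(test_img)):
        row = []
        for c in range(len(test_img[0])):
            f = neighborhood(test_img, r, c)
            _, lab = min(feats,
                         key=lambda fl: sum((a - b) ** 2 for a, b in zip(f, fl[0])))
            row.append(lab)
        out.append(row)
    return out

# Toy "tissue vs. background": left half dark (label 0), right half bright (label 1)
train  = [[10, 12, 200, 210]] * 4
labels = [[0, 0, 1, 1]] * 4
test   = [[11, 13, 195, 205]] * 4
print(nn_segment(train, labels, test))   # → [[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]]
```

The brute-force nearest-neighbor search is O(pixels^2); practical versions use subset selection of training patches (as the abstract notes) and spatial index structures.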
Improved Net-Level Filling And Finishing Of Large Castings
NASA Technical Reports Server (NTRS)
Johnson, Erik P.; Brown, Richard F.
1995-01-01
An improved method of vacuum casting of large, generally cylindrical objects to net sizes and shapes reduces the amount of direct manual labor performed by workers in proximity to the cast material. The original application for which the method was devised is the fabrication of solid rocket-motor segments containing solid propellant, where exposure of workers to the propellant being cast must be minimized. The improved method is adaptable to other applications involving large castings of toxic, flammable, or otherwise hazardous materials.
Artificial intelligence in radiology.
Hosny, Ahmed; Parmar, Chintan; Quackenbush, John; Schwartz, Lawrence H; Aerts, Hugo J W L
2018-05-17
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.
Multilayer Extreme Learning Machine With Subnetwork Nodes for Representation Learning.
Yang, Yimin; Wu, Q M Jonathan
2016-11-01
The extreme learning machine (ELM), which was originally proposed for "generalized" single-hidden-layer feedforward neural networks, provides efficient unified learning solutions for clustering, regression, and classification applications. It presents competitive accuracy with superb efficiency in many applications. However, the ELM with a subnetwork-nodes architecture has not attracted much research attention. Recently, many methods have been proposed for supervised/unsupervised dimension reduction or representation learning, but these methods normally work for only one type of problem. This paper studies the general architecture of the multilayer ELM (ML-ELM) with subnetwork nodes, showing that: 1) the proposed method provides a representation learning platform with unsupervised/supervised and compressed/sparse representation learning and 2) experimental results on ten image datasets and 16 classification datasets show that the proposed ML-ELM with subnetwork nodes performs competitively with or much better than other conventional feature learning methods.
ERIC Educational Resources Information Center
Tisdell, C. C.
2017-01-01
Solution methods to exact differential equations via integrating factors have a rich history dating back to Euler (1740) and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem…
Applications of Small Area Estimation to Generalization with Subclassification by Propensity Scores
ERIC Educational Resources Information Center
Chan, Wendy
2018-01-01
Policymakers have grown increasingly interested in how experimental results may generalize to a larger population. However, recently developed propensity score-based methods are limited by small sample sizes, where the experimental study is generalized to a population that is at least 20 times larger. This is particularly problematic for methods…
This paper summarizes and discusses recently available U.S. and European information on ammonia (NH3) emissions from swine farms and assesses its applicability for general use in the United States. The emission rates for the swine barns calculated by various methods show g...
Applying Nyquist's method for stability determination to solar wind observations
NASA Astrophysics Data System (ADS)
Klein, Kristopher G.; Kasper, Justin C.; Korreck, K. E.; Stevens, Michael L.
2017-10-01
The role instabilities play in governing the evolution of solar and astrophysical plasmas is a matter of considerable scientific interest. The large number of sources of free energy accessible to such nearly collisionless plasmas makes general modeling of unstable behavior, accounting for the temperatures, densities, anisotropies, and relative drifts of a large number of populations, analytically difficult. We therefore seek a general method of stability determination that may be automated for future analysis of solar wind observations. This work describes an efficient application of the Nyquist instability method to the Vlasov dispersion relation appropriate for hot, collisionless, magnetized plasmas, including the solar wind. The algorithm recovers the familiar proton temperature anisotropy instabilities, as well as instabilities that had been previously identified using fits extracted from in situ observations in Gary et al. (2016). Future proposed applications of this method are discussed.
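The automatable stability count at the heart of the Nyquist method rests on the argument principle: the number of zeros of an analytic dispersion function D(ω) in the upper half-plane (growing modes, for an exp(-iωt) convention) equals the winding number of D about the origin as ω traverses the boundary of that half-plane. The sketch below applies this to a toy polynomial, not a Vlasov dispersion relation; the contour radius and sampling density are arbitrary choices.

```python
import cmath, math

def winding_number(D, R=10.0, n=20000):
    # Contour: real axis from -R to R, then a semicircle back through the
    # upper half-plane; accumulate the change in arg D(omega) and divide
    # by 2*pi. Steps are small enough that each phase increment is < pi.
    pts = [complex(-R + 2 * R * k / n, 0.0) for k in range(n)]
    pts += [R * cmath.exp(1j * math.pi * k / n) for k in range(n + 1)]
    total = 0.0
    for a, b in zip(pts, pts[1:]):
        total += cmath.phase(D(b) / D(a))   # principal-branch increment
    return round(total / (2 * math.pi))

D = lambda w: w * w + 1           # zeros at +i and -i: exactly one unstable root
print(winding_number(D))          # → 1
```

The same count applied to a function with both zeros in the lower half-plane, e.g. `lambda w: (w + 1j) ** 2`, returns 0. For a real plasma dispersion relation, D is evaluated numerically along the contour, which is what makes the method automatable for survey analysis of solar wind intervals.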
Code of Federal Regulations, 2010 CFR
2010-07-01
... will consider a sample obtained using any of the applicable sampling methods specified in appendix I to... appendix I sampling methods are not being formally adopted by the Administrator, a person who desires to employ an alternative sampling method is not required to demonstrate the equivalency of his method under...
Methods for Estimating Payload/Vehicle Design Loads
NASA Technical Reports Server (NTRS)
Chen, J. C.; Garba, J. A.; Salama, M. A.; Trubert, M. R.
1983-01-01
Several methods are compared with respect to accuracy, design conservatism, and cost. The objective of the survey is to reduce the time and expense of load calculation by selecting an approximate method with sufficient accuracy for the problem at hand. The methods are generally applicable to dynamic load analysis in other aerospace and vehicle/payload systems.
NASA Technical Reports Server (NTRS)
Reichelt, Mark
1993-01-01
In this paper we describe a novel generalized SOR (successive overrelaxation) algorithm for accelerating the convergence of the dynamic iteration method known as waveform relaxation. A new convolution SOR algorithm is presented, along with a theorem for determining the optimal convolution SOR parameter. Both analytic and experimental results are given to demonstrate that the convergence of the convolution SOR algorithm is substantially faster than that of the more obvious frequency-independent waveform SOR algorithm. Finally, to demonstrate the general applicability of this new method, it is used to solve the differential-algebraic system generated by spatial discretization of the time-dependent semiconductor device equations.
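For reference, ordinary SOR on a linear system is the scalar special case underlying the waveform and convolution variants described above: waveform relaxation applies the same update to functions of time, and convolution SOR replaces the scalar parameter by a convolution kernel. This is only the textbook case, shown to fix notation; the matrix and right-hand side are arbitrary.

```python
# Plain successive overrelaxation (SOR) for A x = b.

def sor(A, b, omega=1.25, iters=200):
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_gs = (b[i] - s) / A[i][i]                 # Gauss-Seidel value
            x[i] = (1 - omega) * x[i] + omega * x_gs    # overrelax toward it
    return x

A = [[4.0, -1.0,  0.0],
     [-1.0, 4.0, -1.0],
     [ 0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
x = sor(A, b)
# residual check: A x should match b to rounding error
print(max(abs(sum(A[i][j] * x[j] for j in range(3)) - b[i]) for i in range(3)) < 1e-8)   # → True
```

For this diagonally dominant system the exact solution is (1, 2, 3); SOR converges for any relaxation parameter in (0, 2), and choosing it well (or, in the convolution variant, choosing the kernel well) is what controls the convergence rate.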
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Ambra, P.; Vassilevski, P. S.
2014-05-30
Adaptive Algebraic Multigrid (or Multilevel) Methods (αAMG) are introduced to improve the robustness and efficiency of classical algebraic multigrid methods on problems where no a-priori knowledge of, or assumptions on, the near-null kernel of the underlying matrix are available. Recently we proposed an adaptive (bootstrap) AMG method, αAMG, aimed at obtaining a composite solver with a desired convergence rate. Each new multigrid component relies on a current (general) smooth vector and exploits pairwise aggregation based on weighted matching in a matrix graph to define a new automatic, general-purpose coarsening process, which we refer to as "the compatible weighted matching". In this work, we present results that broaden the applicability of our method to different finite element discretizations of elliptic PDEs. In particular, we consider systems arising from displacement methods in linear elasticity problems and saddle-point systems that appear in the application of the mixed method to Darcy problems.
Twentieth NASTRAN (R) Users' Colloquium
NASA Technical Reports Server (NTRS)
1992-01-01
The proceedings of the conference are presented. Some comprehensive general papers are presented on applications of finite elements in engineering, comparisons with other approaches, unique applications, pre and post processing with other auxiliary programs, and new methods of analysis with NASTRAN.
45 CFR 149.345 - Use of information provided.
Code of Federal Regulations, 2012 CFR
2012-10-01
... REQUIREMENTS FOR THE EARLY RETIREE REINSURANCE PROGRAM Reimbursement Methods § 149.345 Use of information... law. Nothing in this section limits the Office of the Inspector General's authority to fulfill the Inspector General's responsibilities in accordance with applicable Federal law. ...
45 CFR 149.345 - Use of information provided.
Code of Federal Regulations, 2013 CFR
2013-10-01
... REQUIREMENTS FOR THE EARLY RETIREE REINSURANCE PROGRAM Reimbursement Methods § 149.345 Use of information... law. Nothing in this section limits the Office of the Inspector General's authority to fulfill the Inspector General's responsibilities in accordance with applicable Federal law. ...
45 CFR 149.345 - Use of information provided.
Code of Federal Regulations, 2014 CFR
2014-10-01
... REQUIREMENTS FOR THE EARLY RETIREE REINSURANCE PROGRAM Reimbursement Methods § 149.345 Use of information... law. Nothing in this section limits the Office of the Inspector General's authority to fulfill the Inspector General's responsibilities in accordance with applicable Federal law. ...
Application of surface geophysics to ground-water investigations
Zohdy, Adel A.R.; Eaton, Gordon P.; Mabey, Don R.
1974-01-01
This manual reviews the standard methods of surface geophysics applicable to ground-water investigations. It covers electrical methods, seismic and gravity methods, and magnetic methods. The general physical principles underlying each method and its capabilities and limitations are described. Possibilities for non-uniqueness in the interpretation of geophysical results are noted. Examples of actual use of the methods are given to illustrate applications and interpretation in selected geohydrologic environments. The objective of the manual is to provide the hydrogeologist with a sufficient understanding of the capabilities, limitations, and relative cost of geophysical methods to make sound decisions as to when their use is desirable. The manual also provides enough information for the hydrogeologist to work with a geophysicist in designing geophysical surveys that differentiate significant hydrogeologic changes.
Thomas B. Lynch; Jeffrey H. Gove
2014-01-01
The typical "double counting" application of the mirage method of boundary correction cannot be applied to sampling systems such as critical height sampling (CHS) that are based on a Monte Carlo sample of a tree (or debris) attribute because the critical height (or other random attribute) sampled from a mirage point is generally not equal to the critical...
A hybrid method for transient wave propagation in a multilayered solid
NASA Astrophysics Data System (ADS)
Tian, Jiayong; Xie, Zhoumin
2009-08-01
We present a hybrid method for the evaluation of transient elastic-wave propagation in a multilayered solid, integrating the reverberation matrix method with the theory of generalized rays. Adopting the reverberation matrix formulation, Laplace-Fourier domain solutions of elastic waves in the multilayered solid are expanded into the sum of a series of generalized-ray group integrals. Each generalized-ray group integral, containing the Kth power of the reverberation matrix R, represents the set of K-times reflections and refractions of source waves arriving at receivers in the multilayered solid, and was previously computed by fast inverse Laplace transform (FILT) and fast Fourier transform (FFT) algorithms. However, the computational burden and low precision of the FILT-FFT algorithm limit the application of the reverberation matrix method. In this paper, we expand each generalized-ray group integral into the sum of a series of generalized-ray integrals, each of which is accurately evaluated by the Cagniard-De Hoop method from the theory of generalized rays. Numerical examples demonstrate that the proposed method makes it possible to calculate the early-time transient response in complex multilayered-solid configurations efficiently.
Czakó, Gábor; Szalay, Viktor; Császár, Attila G
2006-01-07
The currently most efficient finite basis representation (FBR) method [Corey et al., in Numerical Grid Methods and Their Applications to Schrodinger Equation, NATO ASI Series C, edited by C. Cerjan (Kluwer Academic, New York, 1993), Vol. 412, p. 1; Bramley et al., J. Chem. Phys. 100, 6175 (1994)], designed specifically to deal with nondirect product bases of structures phi_n^l(s) f^l(u), chi_m^l(t) phi_n^l(s) f^l(u), etc., employs very special l-independent grids and results in a symmetric FBR. While highly efficient, this method is not general enough. For instance, it cannot deal efficiently with nondirect product bases of the above structure if the functions phi_n^l(s) [and/or chi_m^l(t)] are discrete variable representation (DVR) functions of the infinite type. The optimal-generalized FBR(DVR) method [V. Szalay, J. Chem. Phys. 105, 6940 (1996)] is designed to deal with general, i.e., direct and/or nondirect product, bases and grids. This robust method, however, is too general, and its direct application can result in inefficient computer codes [Czako et al., J. Chem. Phys. 122, 024101 (2005)]. It is shown here how the optimal-generalized FBR method can be simplified in the case of nondirect product bases of structures phi_n^l(s) f^l(u), chi_m^l(t) phi_n^l(s) f^l(u), etc. As a result the commonly used symmetric FBR is recovered, and simplified nonsymmetric FBRs utilizing very special l-dependent grids are obtained. The nonsymmetric FBRs are more general than the symmetric FBR in that they can be employed efficiently even when the functions phi_n^l(s) [and/or chi_m^l(t)] are DVR functions of the infinite type. Arithmetic operation counts and a simple numerical example show unambiguously that setting up the Hamiltonian matrix requires significantly less computer time when using one of the proposed nonsymmetric FBRs than in the symmetric FBR.
Therefore, application of this nonsymmetric FBR is more efficient than that of the symmetric FBR when one wants to diagonalize the Hamiltonian matrix either by a direct or via a basis-set contraction method. Enormous decrease of computer time can be achieved, with respect to a direct application of the optimal-generalized FBR, by employing one of the simplified nonsymmetric FBRs as is demonstrated in noniterative calculations of the low-lying vibrational energy levels of the H3+ molecular ion. The arithmetic operation counts of the Hamiltonian matrix vector products and the properties of a recently developed diagonalization method [Andreozzi et al., J. Phys. A Math. Gen. 35, L61 (2002)] suggest that the nonsymmetric FBR applied along with this particular diagonalization method is suitable to large scale iterative calculations. Whether or not the nonsymmetric FBR is competitive with the symmetric FBR in large-scale iterative calculations still has to be investigated numerically.
46 CFR 111.60-2 - Specialty cable for communication and RF applications.
Code of Federal Regulations, 2012 CFR
2012-10-01
... ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Wiring Materials and Methods § 111.60-2 Specialty cable for communication and RF applications. Specialty cable such as certain coaxial cable that cannot pass the... 46 Shipping 4 2012-10-01 2012-10-01 false Specialty cable for communication and RF applications...
46 CFR 111.60-2 - Specialty cable for communication and RF applications.
Code of Federal Regulations, 2013 CFR
2013-10-01
... ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Wiring Materials and Methods § 111.60-2 Specialty cable for communication and RF applications. Specialty cable such as certain coaxial cable that cannot pass the... 46 Shipping 4 2013-10-01 2013-10-01 false Specialty cable for communication and RF applications...
46 CFR 111.60-2 - Specialty cable for communication and RF applications.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Wiring Materials and Methods § 111.60-2 Specialty cable for communication and RF applications. Specialty cable such as certain coaxial cable that cannot pass the... 46 Shipping 4 2014-10-01 2014-10-01 false Specialty cable for communication and RF applications...
46 CFR 111.60-2 - Specialty cable for communication and RF applications.
Code of Federal Regulations, 2010 CFR
2010-10-01
... ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Wiring Materials and Methods § 111.60-2 Specialty cable for communication and RF applications. Specialty cable such as certain coaxial cable that cannot pass the... 46 Shipping 4 2010-10-01 2010-10-01 false Specialty cable for communication and RF applications...
46 CFR 111.60-2 - Specialty cable for communication and RF applications.
Code of Federal Regulations, 2011 CFR
2011-10-01
... ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Wiring Materials and Methods § 111.60-2 Specialty cable for communication and RF applications. Specialty cable such as certain coaxial cable that cannot pass the... 46 Shipping 4 2011-10-01 2011-10-01 false Specialty cable for communication and RF applications...
NASA Technical Reports Server (NTRS)
Craig, R. R., Jr.
1985-01-01
A component mode synthesis method for damped structures was developed and modal test methods were explored which could be employed to determine the relevant parameters required by the component mode synthesis method. Research was conducted on the following topics: (1) Development of a generalized time-domain component mode synthesis technique for damped systems; (2) Development of a frequency-domain component mode synthesis method for damped systems; and (3) Development of a system identification algorithm applicable to general damped systems. Abstracts are presented of the major publications which have been previously issued on these topics.
A New Method with General Diagnostic Utility for the Calculation of Immunoglobulin G Avidity
Korhonen, Maria H.; Brunstein, John; Haario, Heikki; Katnikov, Alexei; Rescaldani, Roberto; Hedman, Klaus
1999-01-01
The reference method for immunoglobulin G (IgG) avidity determination includes reagent-consuming serum titration. Aiming at better IgG avidity diagnostics, we applied a logistic model for the reproduction of antibody titration curves. This method was tested with well-characterized serum panels for cytomegalovirus, Epstein-Barr virus, rubella virus, parvovirus B19, and Toxoplasma gondii. This approach for IgG avidity calculation is generally applicable and attains the diagnostic performance of the reference method while being less laborious and twice as cost-effective. PMID:10473525
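The titration-curve modeling described above can be illustrated with a closed-form sketch. A four-parameter logistic reproduces absorbance versus log dilution; the titer is the dilution at which the curve crosses a cutoff, and the avidity index is the titer ratio of denaturant-treated to untreated runs. The curve parameters and cutoff below are invented for illustration, not taken from the paper.

```python
import math

def logistic(logd, top, bottom, mid, slope):
    """Four-parameter logistic: absorbance as a function of log10 dilution."""
    return bottom + (top - bottom) / (1 + math.exp(slope * (logd - mid)))

def titer(top, bottom, mid, slope, cutoff):
    """Invert the logistic: the dilution at which absorbance equals the cutoff."""
    logd = mid + math.log((top - bottom) / (cutoff - bottom) - 1) / slope
    return 10 ** logd

# untreated vs. urea-treated wells (treated curve shifted to lower dilutions)
t_ref  = titer(top=2.0, bottom=0.05, mid=3.0, slope=2.3, cutoff=0.3)
t_urea = titer(top=2.0, bottom=0.05, mid=2.2, slope=2.3, cutoff=0.3)
avidity_index = t_urea / t_ref
print(round(avidity_index, 3))   # → 0.158
```

Because both curves share the same shape parameters here, the index reduces to 10 raised to the midpoint shift; in practice the parameters are fitted to a handful of dilutions per serum, which is what removes the need for a full reagent-consuming titration series.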
47 CFR 1.958 - Distance computation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Distance computation. 1.958 Section 1.958 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.958 Distance computation. The method...
47 CFR 1.958 - Distance computation.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Distance computation. 1.958 Section 1.958 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Wireless Radio Services Applications and Proceedings Application Requirements and Procedures § 1.958 Distance computation. The method...
A self-describing data transfer methodology for ITS applications : executive summary
DOT National Transportation Integrated Search
2000-12-01
A wide variety of remote sensors used in Intelligent Transportation Systems (ITS) applications (loops, probe vehicles, radar, cameras) has created a need for general methods by which data can be shared among agencies and users who own disparate computer ...
A self-describing data transfer methodology for ITS applications
DOT National Transportation Integrated Search
1999-01-01
The wide variety of remote sensors used in Intelligent Transportation Systems (ITS) applications (loops, probe vehicles, radar, cameras, etc.) has created a need for general methods by which data can be shared among agencies and users who own dis...
Rapid computation of chemical equilibrium composition - An application to hydrocarbon combustion
NASA Technical Reports Server (NTRS)
Erickson, W. D.; Prabhu, R. K.
1986-01-01
A scheme for rapidly computing the chemical equilibrium composition of hydrocarbon combustion products is derived. A set of ten governing equations is reduced to a single equation that is solved by the Newton iteration method. Computation speeds are approximately 80 times faster than the often used free-energy minimization method. The general approach also has application to many other chemical systems.
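The reduction-to-one-equation idea can be shown on a toy equilibrium. For a single reaction with constant K and extent x satisfying K = x^2 / (1 - x), Newton iteration on f(x) = x^2 - K(1 - x) converges in a few steps. The combustion paper reduces ten coupled governing equations to one scalar equation of this kind; the reaction and K value here are invented for illustration.

```python
# Newton iteration on a scalar equilibrium equation.

def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

K = 1.0
f  = lambda x: x * x - K * (1 - x)   # f(x) = 0  <=>  K = x^2 / (1 - x)
df = lambda x: 2 * x + K
x = newton(f, df, x0=0.5)
print(round(x, 6))   # → 0.618034  (equilibrium extent, 0 < x < 1)
```

The quadratic convergence of Newton's method, combined with having only one unknown to iterate on, is what makes this approach so much faster than minimizing free energy over all species mole numbers.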
40 CFR Table 8 to Subpart Zzzz of... - Applicability of General Provisions to Subpart ZZZZ.
Code of Federal Regulations, 2010 CFR
2010-07-01
...) Conduct of performance tests and reduction of data Yes Subpart ZZZZ specifies test methods at § 63.6620... 114 of the CAA Yes. § 63.7(f) Alternative test method provisions Yes. § 63.7(g) Performance test data analysis, recordkeeping, and reporting Yes. § 63.7(h) Waiver of tests Yes. § 63.8(a)(1) Applicability of...
Progress in Application of Generalized Wigner Distribution to Growth and Other Problems
NASA Astrophysics Data System (ADS)
Einstein, T. L.; Morales-Cifuentes, Josue; Pimpinelli, Alberto; Gonzalez, Diego Luis
We recap the use of the (single-parameter) Generalized Wigner Distribution (GWD) to analyze capture-zone distributions associated with submonolayer epitaxial growth. We discuss recent applications to physical systems, as well as key simulations. We pay particular attention to how this method compares with other methods to assess the critical nucleus size characterizing growth. The following talk discusses a particular case when special insight is needed to reconcile the various methods. We discuss improvements that can be achieved by going to a 2-parameter fragmentation approach. At a much larger scale we have applied this approach to various distributions in socio-political phenomena (areas of secondary administrative units [e.g., counties] and distributions of subway stations). Work at UMD supported by NSF CHE 13-05892.
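The single-parameter GWD named above has the published functional form P(s) = a_b * s^b * exp(-c_b * s^2), where s is the capture-zone area scaled to unit mean and the constants a_b and c_b follow from normalization and the unit-mean condition. The snippet below evaluates that form and verifies both conditions numerically (this is only the distribution itself; fitting b to capture-zone data is a separate step).

```python
import math

def gwd(s, beta):
    """Generalized Wigner Distribution, normalized with unit mean."""
    c = (math.gamma((beta + 2) / 2) / math.gamma((beta + 1) / 2)) ** 2
    a = 2 * c ** ((beta + 1) / 2) / math.gamma((beta + 1) / 2)
    return a * s ** beta * math.exp(-c * s * s)

# sanity check by Riemann sum: unit norm and unit mean (beta = 3 as an example)
ds = 1e-4
grid = [k * ds for k in range(1, 60001)]
norm = sum(gwd(s, 3) for s in grid) * ds
mean = sum(s * gwd(s, 3) for s in grid) * ds
print(round(norm, 3), round(mean, 3))   # → 1.0 1.0
```

Both gamma-function constants come from the two moment conditions; the single fit parameter beta is what the capture-zone analysis extracts and relates to the critical nucleus size.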
Bucchi, L; Pierri, C; Caprara, L; Cortecchia, S; De Lillo, M; Bondi, A
2003-02-01
This paper presents a computerised system for the monitoring of integrated cervical screening, i.e. the integration of spontaneous Pap smear practice into organised screening. The general characteristics of the system are described, including background and rationale (integrated cervical screening in European countries, impact of integration on monitoring, decentralised organisation of screening, and levels of monitoring), general methods (definitions, sections, software description, and setting of application), and indicators of participation (distribution by time interval since the previous Pap smear, distribution by screening sector, i.e. organised screening centres vs. public and private clinical settings, distribution by time interval between the last two Pap smears, and movement of women between the two screening sectors). The paper also reports the results of the application of these indicators in the general database of the Pathology Department of the Imola Health District in northern Italy.
Langoju, Rajesh; Patil, Abhijit; Rastogi, Pramod
2007-11-20
Signal processing methods based on maximum-likelihood theory, the discrete chirp Fourier transform, and spectral estimation methods have enabled accurate measurement of phase in phase-shifting interferometry in the presence of a nonlinear response of the piezoelectric transducer to the applied voltage. We present a statistical study of these generalized nonlinear phase step estimation methods to identify the best method by deriving the Cramér-Rao bound. We also address important aspects of these methods for implementation in practical applications and compare the performance of the best-identified method with other benchmarking algorithms in the presence of harmonics and noise.
Gel integration for microfluidic applications.
Zhang, Xuanqi; Li, Lingjun; Luo, Chunxiong
2016-05-21
Molecular diffusive membranes or materials are important for biological applications in microfluidic systems. Hydrogels are typical materials that offer several advantages, such as free diffusion for small molecules, biocompatibility with most cells, temperature sensitivity, relatively low cost, and ease of production. With the development of microfluidic applications, hydrogels can be integrated into microfluidic systems by soft lithography, flow-solid processes, or UV-cure methods. Due to their special properties, hydrogels are widely used as fluid control modules, biochemical reaction modules, or biological application modules in different applications. Although hydrogels have been used in microfluidic systems for more than ten years, many hydrogel properties and integration techniques have not been carefully elaborated. Here, we systematically review the physical properties of hydrogels, general methods for gel-microfluidics integration, and applications of this field. Advanced topics and the outlook of hydrogel fabrication and applications are also discussed. We hope this review can help researchers choose suitable methods for their applications using hydrogels.
Extreme learning machine for ranking: generalization analysis and applications.
Chen, Hong; Peng, Jiangtao; Zhou, Yicong; Li, Luoqing; Pan, Zhibin
2014-05-01
The extreme learning machine (ELM) has attracted increasing attention recently with its successful applications in classification and regression. In this paper, we investigate the generalization performance of ELM-based ranking. A new regularized ranking algorithm is proposed based on the combinations of activation functions in ELM. The generalization analysis is established for the ELM-based ranking (ELMRank) in terms of the covering numbers of hypothesis space. Empirical results on the benchmark datasets show the competitive performance of the ELMRank over the state-of-the-art ranking methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
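The core ELM construction behind such methods (a randomly weighted hidden layer followed by a regularized linear solve for the output weights) can be sketched as follows. This is a regression-style illustration, not the ELMRank algorithm itself; the sizes, the tanh activation, and the synthetic data are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: a noisy linear target (illustrative)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.normal(size=200)

L = 50                                   # number of hidden nodes (illustrative)
W = rng.normal(size=(3, L))              # random input weights, never trained
b = rng.normal(size=L)                   # random biases, never trained
H = np.tanh(X @ W + b)                   # hidden-layer output matrix

# Only the output weights are learned, via a regularized least-squares solve
lam = 1e-3                               # ridge regularization strength
beta = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ y)
pred = H @ beta
```

Because the hidden layer is fixed at random, training reduces to one linear solve, which is the source of ELM's speed relative to iteratively trained networks.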
A fixed mass method for the Kramers-Moyal expansion--application to time series with outliers.
Petelczyc, M; Żebrowski, J J; Orłowska-Baranowska, E
2015-03-01
Extraction of stochastic and deterministic components from empirical data, necessary for the reconstruction of the dynamics of the system, is discussed. We determine both components using the Kramers-Moyal expansion. In our earlier papers, we obtained large fluctuations in the magnitude of both terms for rare or extreme valued events in the data. Calculations for such events are burdened by an unsatisfactory quality of the statistics. In general, the method is sensitive to the binning procedure applied for the construction of histograms. Instead of the commonly used constant width of bins, we use here a constant number of counts for each bin. This approach, the fixed mass method, allows events that do not yield satisfactory statistics in the fixed bin width method to be included in the calculation. The method developed is general. To demonstrate its properties, we present the modified Kramers-Moyal expansion method and discuss its properties by applying the fixed mass method to four representative heart rate variability recordings with different numbers of ectopic beats. These beats may be rare events as well as outlying ones, i.e., very small or very large heart cycle lengths. The properties of ectopic beats are important not only for medical diagnostic purposes; the occurrence of ectopic beats is also a general example of the kind of variability that occurs in a signal with outliers. To show that the method is general, we also present results for two examples of data from very different areas of science: daily temperatures in a large European city and recordings of traffic on a highway. Using the fixed mass method, we studied the occurrence of higher order terms of the Kramers-Moyal expansion in the recordings to assess the dynamics leading to the outlying events. We found that the higher order terms of the Kramers-Moyal expansion are negligible for heart rate variability.
This finding opens the possibility of applying the Langevin equation to the whole range of empirical signals containing rare or outlying events. Note, however, that the higher order terms are non-negligible for the other data studied here, and for those data the Langevin equation is not applicable as a model.
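The fixed mass idea, constant counts per bin rather than constant bin width, amounts to placing bin edges at sample quantiles. A minimal sketch follows; the exponential data and the choice of 100 counts per bin are only for illustration.

```python
import numpy as np

def fixed_mass_edges(data, counts_per_bin):
    """Bin edges placed at sample quantiles, so every bin holds roughly the
    same number of counts (fixed mass) instead of the same width."""
    data = np.asarray(data)
    n_bins = len(data) // counts_per_bin
    qs = np.linspace(0.0, 1.0, n_bins + 1)  # equal probability mass per bin
    return np.quantile(data, qs)

rng = np.random.default_rng(1)
x = rng.exponential(size=1000)        # heavy-tailed data with rare large events
edges = fixed_mass_edges(x, 100)      # 10 bins of ~100 counts each
counts, _ = np.histogram(x, bins=edges)
```

In the tail, where a fixed-width histogram would leave bins nearly empty, the quantile-based edges widen automatically so every bin retains the same statistical weight.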
NASA Technical Reports Server (NTRS)
Chen, L. T.
1975-01-01
A general method for analyzing aerodynamic flows around complex configurations is presented. By applying the Green function method, a linear integral equation relating the unknown small-perturbation potential on the surface of the body to the known downwash is obtained. The surfaces of the aircraft, wake, and diaphragm (if necessary) are divided into small quadrilateral elements which are approximated with hyperboloidal surfaces. The potential and its normal derivative are assumed to be constant within each element. This yields a set of linear algebraic equations whose coefficients are evaluated analytically. Using the Gaussian elimination method, the equations are solved for the potentials at the centroids of the elements. The pressure coefficient is evaluated by the finite difference method; the lift and moment coefficients are evaluated by numerical integration. Numerical results are presented, and applications to flutter are also included.
Rotor dynamic simulation and system identification methods for application to vacuum whirl data
NASA Technical Reports Server (NTRS)
Berman, A.; Giansante, N.; Flannelly, W. G.
1980-01-01
Methods of using rotor vacuum whirl data to improve the ability to model helicopter rotors were developed. The work consisted of the formulation of the equations of motion of elastic blades on a hub using a Galerkin method; the development of a general computer program for simulation of these equations; the study and implementation of a procedure for determining physical parameters based on measured data; and the application of a method for computing the normal modes and natural frequencies based on test data.
Generalization of uncertainty relation for quantum and stochastic systems
NASA Astrophysics Data System (ADS)
Koide, T.; Kodama, T.
2018-06-01
The generalized uncertainty relation applicable to quantum and stochastic systems is derived within the stochastic variational method. This relation not only reproduces the well-known inequality in quantum mechanics but also is applicable to the Gross-Pitaevskii equation and the Navier-Stokes-Fourier equation, showing that the finite minimum uncertainty between the position and the momentum is not an inherent property of quantum mechanics but a common feature of stochastic systems. We further discuss the possible implication of the present study in discussing the application of the hydrodynamic picture to microscopic systems, like relativistic heavy-ion collisions.
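The well-known quantum-mechanical inequality that the generalized relation reproduces is the standard position-momentum uncertainty relation:

```latex
\sigma_x \, \sigma_p \;\ge\; \frac{\hbar}{2}
```

In the stochastic variational framework described above, a bound of the same form is argued to arise for other stochastic systems as well, which is the sense in which the finite minimum uncertainty is presented as a common feature of stochastic dynamics rather than a property peculiar to quantum mechanics.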
40 CFR 53.5 - Processing of applications.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Processing of applications. 53.5 Section 53.5 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.5 Processing of...
Fundamentals of dielectric properties measurements and agricultural applications.
Nelson, Stuart O
2010-01-01
Dielectrics and dielectric properties are defined generally, and dielectric measurement methods and equipment are described for various frequency ranges from audio frequencies through microwave frequencies. These include impedance and admittance bridges, resonant-frequency, transmission-line, and free-space methods in the frequency domain, as well as time-domain and broadband techniques. Many references are cited describing methods in detail and giving sources of dielectric properties data. Finally, a few applications for such data are presented, and sources of tabulated dielectric properties databases are identified.
Radiation Transport Tools for Space Applications: A Review
NASA Technical Reports Server (NTRS)
Jun, Insoo; Evans, Robin; Cherng, Michael; Kang, Shawn
2008-01-01
This slide presentation contains a brief discussion of nuclear transport codes widely used in the space radiation community for shielding and scientific analyses. Seven radiation transport codes that are addressed. The two general methods (i.e., Monte Carlo Method, and the Deterministic Method) are briefly reviewed.
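The distinction between the two general methods can be illustrated with a toy one-dimensional shielding problem, comparing a Monte Carlo estimate of uncollided transmission through a slab against the deterministic (analytic) exponential-attenuation result. The attenuation coefficient and thickness are arbitrary illustrative values, not taken from the presentation.

```python
import numpy as np

rng = np.random.default_rng(5)
mu = 0.5         # total attenuation coefficient, 1/cm (illustrative value)
thickness = 2.0  # slab thickness, cm (illustrative value)
n = 200_000      # number of Monte Carlo histories

# Monte Carlo method: sample exponential free-path lengths and count the
# fraction of particles that cross the slab without a collision
paths = rng.exponential(scale=1.0 / mu, size=n)
mc_transmission = float(np.mean(paths > thickness))

# Deterministic method: the analytic uncollided-transmission solution
det_transmission = float(np.exp(-mu * thickness))
```

The Monte Carlo answer carries statistical noise that shrinks as 1/sqrt(n), while the deterministic answer is exact for this idealized problem; real transport codes trade these properties against geometric and physical fidelity.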
26 CFR 1.168(i)-5 - Table of contents.
Code of Federal Regulations, 2010 CFR
2010-04-01
... period. (ii) Shorter recovery period. (iii) Less accelerated depreciation method. (iv) More accelerated... in the year of replacement. (i) In general. (ii) Applicable recovery period, depreciation method, and... acquired property. (3) Recovery period and/or depreciation method of the properties are the same, or both...
26 CFR 1.168(i)-5 - Table of contents.
Code of Federal Regulations, 2012 CFR
2012-04-01
... period. (ii) Shorter recovery period. (iii) Less accelerated depreciation method. (iv) More accelerated... in the year of replacement. (i) In general. (ii) Applicable recovery period, depreciation method, and... acquired property. (3) Recovery period and/or depreciation method of the properties are the same, or both...
26 CFR 1.168(i)-5 - Table of contents.
Code of Federal Regulations, 2011 CFR
2011-04-01
... period. (ii) Shorter recovery period. (iii) Less accelerated depreciation method. (iv) More accelerated... in the year of replacement. (i) In general. (ii) Applicable recovery period, depreciation method, and... acquired property. (3) Recovery period and/or depreciation method of the properties are the same, or both...
26 CFR 1.168(i)-5 - Table of contents.
Code of Federal Regulations, 2013 CFR
2013-04-01
... period. (ii) Shorter recovery period. (iii) Less accelerated depreciation method. (iv) More accelerated... in the year of replacement. (i) In general. (ii) Applicable recovery period, depreciation method, and... acquired property. (3) Recovery period and/or depreciation method of the properties are the same, or both...
26 CFR 1.168(i)-5 - Table of contents.
Code of Federal Regulations, 2014 CFR
2014-04-01
... period. (ii) Shorter recovery period. (iii) Less accelerated depreciation method. (iv) More accelerated... in the year of replacement. (i) In general. (ii) Applicable recovery period, depreciation method, and... acquired property. (3) Recovery period and/or depreciation method of the properties are the same, or both...
Supercomputer optimizations for stochastic optimal control applications
NASA Technical Reports Server (NTRS)
Chung, Siu-Leung; Hanson, Floyd B.; Xu, Huihuang
1991-01-01
Supercomputer optimizations for a computational method of solving stochastic, multibody, dynamic programming problems are presented. The computational method is valid for a general class of optimal control problems that are nonlinear, multibody dynamical systems, perturbed by general Markov noise in continuous time, i.e., nonsmooth Gaussian as well as jump Poisson random white noise. Optimization techniques for vector multiprocessors or vectorizing supercomputers include advanced data structures, loop restructuring, loop collapsing, blocking, and compiler directives. These advanced computing techniques and supercomputing hardware help alleviate Bellman's curse of dimensionality in dynamic programming computations by permitting the solution of large multibody problems. Possible applications include lumped flight dynamics models for uncertain environments, such as large scale and background random aerospace fluctuations.
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this class of ill-posed inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
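Generalized cross-validation chooses a regularization parameter by minimizing a predictive score without held-out data. A minimal ridge-regression sketch follows; it forms the influence matrix explicitly, unlike the authors' Lanczos/Gauss-quadrature accelerated version, and the data are synthetic stand-ins.

```python
import numpy as np

def gcv_score(X, y, lam):
    """GCV score for ridge regression: n * ||(I - A)y||^2 / (trace(I - A))^2,
    where A(lam) = X (X^T X + lam I)^{-1} X^T is the influence (hat) matrix."""
    n, p = X.shape
    A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    resid = y - A @ y
    return n * float(resid @ resid) / (n - np.trace(A)) ** 2

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 0.0, 2.0, 0.0, -1.0]) + 0.1 * rng.normal(size=100)

lams = [1e-4, 1e-2, 1.0, 100.0]                      # candidate parameters
best = min(lams, key=lambda lam: gcv_score(X, y, lam))
```

The explicit hat matrix costs O(n^2 p) to build, which is exactly what the Lanczos and Gauss-quadrature approximations in the paper are designed to avoid for large images.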
40 CFR 63.642 - General standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... reduction. (4) Data shall be reduced in accordance with the EPA-approved methods specified in the applicable section or, if other test methods are used, the data and methods shall be validated according to the protocol in Method 301 of appendix A of this part. (e) Each owner or operator of a source subject to this...
SPIRAL-SPRITE: a rapid single point MRI technique for application to porous media.
Szomolanyi, P; Goodyear, D; Balcom, B; Matheson, D
2001-01-01
This study presents the application of a new, rapid, single point MRI technique which samples k space with spiral trajectories. The general principles of the technique are outlined along with application to porous concrete samples, solid pharmaceutical tablets and gas phase imaging. Each sample was chosen to highlight specific features of the method.
Second Conference on Artificial Intelligence for Space Applications
NASA Technical Reports Server (NTRS)
Dollman, Thomas (Compiler)
1988-01-01
The proceedings of the conference are presented. This second conference on Artificial Intelligence for Space Applications brings together a diversity of scientific and engineering work and is intended to provide an opportunity for those who employ AI methods in space applications to identify common goals and to discuss issues of general interest in the AI community.
NASA Astrophysics Data System (ADS)
Huang, Wen-Min; Mou, Chung-Yu; Chang, Cheng-Hung
2010-02-01
While the scattering phase for several one-dimensional potentials can be exactly derived, less is known in multi-dimensional quantum systems. This work provides a method to extend the one-dimensional phase knowledge to multi-dimensional quantization rules. The extension is illustrated in the example of Bogomolny's transfer operator method applied in two quantum wells bounded by step potentials of different heights. This generalized semiclassical method accurately determines the energy spectrum of the systems, which indicates the substantial role of the proposed phase correction. Theoretically, the result can be extended to other semiclassical methods, such as Gutzwiller trace formula, dynamical zeta functions, and semiclassical Landauer-Büttiker formula. In practice, this recipe enhances the applicability of semiclassical methods to multi-dimensional quantum systems bounded by general soft potentials.
Multipolar Ewald Methods, 2: Applications Using a Quantum Mechanical Force Field
2015-01-01
A fully quantum mechanical force field (QMFF) based on a modified “divide-and-conquer” (mDC) framework is applied to a series of molecular simulation applications, using a generalized Particle Mesh Ewald method extended to multipolar charge densities. Simulation results are presented for three example applications: liquid water, p-nitrophenylphosphate reactivity in solution, and crystalline N,N-dimethylglycine. Simulations of liquid water using a parametrized mDC model are compared to the TIP3P and TIP4P/Ew water models and to experiment. The mDC model is shown to be superior for cluster binding energies and generally comparable for bulk properties. Examination of the dissociative pathway for dephosphorylation of p-nitrophenylphosphate shows that the mDC method evaluated with the DFTB3/3OB and DFTB3/OPhyd semiempirical models bracket the experimental barrier, whereas DFTB2 and AM1/d-PhoT QM/MM simulations exhibit deficiencies in the barriers, the latter of which is related, in part, to the anomalous underestimation of the p-nitrophenylate leaving group pKa. Simulations of crystalline N,N-dimethylglycine are performed, and the overall structure and atomic fluctuations are compared with experiment and with the general AMBER force field (GAFF). The QMFF, which was not parametrized for this application, was shown to be in better agreement with crystallographic data than GAFF. Our simulations highlight some of the application areas that may benefit from using new QMFFs, and they demonstrate progress toward the development of accurate QMFFs using the recently developed mDC framework. PMID:25691830
A Fully Customized Baseline Removal Framework for Spectroscopic Applications.
Giguere, Stephen; Boucher, Thomas; Carey, C J; Mahadevan, Sridhar; Dyar, M Darby
2017-07-01
The task of proper baseline or continuum removal is common to nearly all types of spectroscopy. Its goal is to remove any portion of a signal that is irrelevant to features of interest while preserving any predictive information. Despite the importance of baseline removal, median or guessed default parameters are commonly employed, often using commercially available software supplied with instruments. Several published baseline removal algorithms have been shown to be useful for particular spectroscopic applications but their generalizability is ambiguous. The new Custom Baseline Removal (Custom BLR) method presented here generalizes the problem of baseline removal by combining operations from previously proposed methods to synthesize new correction algorithms. It creates novel methods for each technique, application, and training set, discovering new algorithms that maximize the predictive accuracy of the resulting spectroscopic models. In most cases, these learned methods either match or improve on the performance of the best alternative. Examples of these advantages are shown for three different scenarios: quantification of components in near-infrared spectra of corn and laser-induced breakdown spectroscopy data of rocks, and classification/matching of minerals using Raman spectroscopy. Software to implement this optimization is available from the authors. By removing subjectivity from this commonly encountered task, Custom BLR is a significant step toward completely automatic and general baseline removal in spectroscopic and other applications.
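As a point of reference for what baseline removal does, a basic polynomial continuum fit, one of the simple operations such frameworks build upon and not the Custom BLR method itself, can be sketched as follows. The synthetic drift and peak are illustrative.

```python
import numpy as np

def polynomial_baseline(x, y, degree=3):
    """Fit a low-order polynomial baseline to a spectrum and subtract it."""
    coeffs = np.polyfit(x, y, degree)
    baseline = np.polyval(coeffs, x)
    return y - baseline, baseline

x = np.linspace(0.0, 10.0, 500)
drift = 0.5 + 0.3 * x - 0.02 * x**2            # slow continuum drift (synthetic)
peak = np.exp(-0.5 * ((x - 5.0) / 0.1) ** 2)   # narrow spectral feature (synthetic)

corrected, baseline = polynomial_baseline(x, drift + peak, degree=2)
```

Even this simple fit leaks a little of the peak into the baseline, which is one motivation for the more careful, application-tuned correction pipelines that the Custom BLR framework searches over.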
Code of Federal Regulations, 2010 CFR
2010-01-01
... methods and principles of accounting prescribed by the state regulatory body having jurisdiction over the... telecommunications companies (47 CFR part 32), as those methods and principles of accounting are supplemented from... instruments by prescribing accounting principles, methodologies, and procedures applicable to all...
Alvarez, Isaac; de la Torre, Angel; Sainz, Manuel; Roldan, Cristina; Schoesser, Hansjoerg; Spitzer, Philipp
2007-09-15
Stimulus artifact is one of the main limitations when considering the electrically evoked compound action potential for clinical applications. Alternating stimulation (averaging recordings obtained with anodic-cathodic and cathodic-anodic bipolar stimulation pulses) is an effective method to reduce stimulus artifact when evoked potentials are recorded. In this paper we extend the concept of alternating stimulation by combining anodic-cathodic and cathodic-anodic recordings with a weight that is, in general, different from 0.5. We also provide an automatic method to estimate the optimal weights. Comparison with the conventional alternating, triphasic stimulation, and masker-probe paradigms shows that the generalized alternating method improves the quality of electrically evoked compound action potential responses.
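The generalization from a fixed 0.5/0.5 average to an arbitrary weight can be sketched as follows. The artifact model and the polarity-asymmetry factor k are hypothetical, chosen so that a closed-form optimal weight cancels the artifact exactly; the paper estimates the weight automatically from data instead.

```python
import numpy as np

def generalized_alternating(r_ac, r_ca, w):
    """Weighted combination of anodic-cathodic and cathodic-anodic recordings."""
    return w * np.asarray(r_ac) + (1.0 - w) * np.asarray(r_ca)

t = np.linspace(0.0, 1.0, 100)
response = np.sin(2 * np.pi * t)   # true evoked response, identical in both polarities
artifact = np.exp(-10 * t)         # stimulus artifact template (hypothetical)
k = 0.8                            # polarity asymmetry of the artifact (hypothetical)

r_ac = response + artifact         # anodic-cathodic recording
r_ca = response - k * artifact     # cathodic-anodic recording, scaled artifact

w = k / (1.0 + k)                  # weight that cancels this artifact model exactly
clean = generalized_alternating(r_ac, r_ca, w)
```

With w = 0.5 the residual artifact is proportional to (1 - k)/2, so whenever the two polarities are not perfectly symmetric (k != 1), an unequal weight recovers a cleaner response.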
40 CFR 53.10 - Appeal from rejection of application.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 5 2010-07-01 2010-07-01 false Appeal from rejection of application. 53.10 Section 53.10 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.10...
12 CFR 324.201 - Purpose, applicability, and reservation of authority.
Code of Federal Regulations, 2014 CFR
2014-01-01
... STATEMENTS OF GENERAL POLICY CAPITAL ADEQUACY OF FDIC-SUPERVISED INSTITUTIONS Risk-Weighted Assets-Market... market risk, provides methods for these FDIC-supervised institutions to calculate their standardized measure for market risk and, if applicable, advanced measure for market risk, and establishes public...
Practical sliced configuration spaces for curved planar pairs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sacks, E.
1999-01-01
In this article, the author presents a practical configuration-space computation algorithm for pairs of curved planar parts, based on the general algorithm developed by Bajaj and the author. The general algorithm advances the theoretical understanding of configuration-space computation, but is too slow and fragile for some applications. The new algorithm solves these problems by restricting the analysis to parts bounded by line segments and circular arcs, whereas the general algorithm handles rational parametric curves. The trade-off is worthwhile, because the restricted class handles most robotics and mechanical engineering applications. The algorithm reduces run time by a factor of 60 on nine representative engineering pairs, and by a factor of 9 on two human-knee pairs. It also handles common special pairs by specialized methods. A survey of 2,500 mechanisms shows that these methods cover 90% of pairs and yield an additional factor of 10 reduction in average run time. The theme of this article is that application requirements, as well as intrinsic theoretical interest, should drive configuration-space research.
NASA Technical Reports Server (NTRS)
Marko, H.
1978-01-01
A general spectral transformation is proposed and described. Its spectrum can be interpreted as a Fourier spectrum or a Laplace spectrum. The laws and functions of the method are discussed in comparison with the known transformations, and a sample application is shown.
An improved design method for EPC middleware
NASA Astrophysics Data System (ADS)
Lou, Guohuan; Xu, Ran; Yang, Chunming
2014-04-01
To address the problems and difficulties that small and medium-sized enterprises currently face when using the EPC (Electronic Product Code) ALE (Application Level Events) specification to implement middleware, an improved design method for EPC middleware is presented, based on an analysis of the principles of EPC middleware. The method exploits the MySQL database, using the database to connect reader-writers with the upper application system instead of developing an ALE application program interface, to achieve middleware with general functionality. This structure is simple and easy to implement and maintain. Under this structure, newly added reader-writers of different types can be configured conveniently, and the expandability of the system is improved.
SCALE 6.2 Continuous-Energy TSUNAMI-3D Capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perfetti, Christopher M; Rearden, Bradley T
2015-01-01
The TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation) capabilities within the SCALE code system make use of sensitivity coefficients for an extensive number of criticality safety applications, such as quantifying the data-induced uncertainty in the eigenvalue of critical systems, assessing the neutronic similarity between different systems, quantifying computational biases, and guiding nuclear data adjustment studies. The need to model geometrically complex systems with improved ease of use and fidelity and the desire to extend TSUNAMI analysis to advanced applications have motivated the development of a SCALE 6.2 module for calculating sensitivity coefficients using three-dimensional (3D) continuous-energy (CE) Monte Carlo methods: CE TSUNAMI-3D. This paper provides an overview of the theory, implementation, and capabilities of the CE TSUNAMI-3D sensitivity analysis methods. CE TSUNAMI contains two methods for calculating sensitivity coefficients in eigenvalue sensitivity applications: (1) the Iterated Fission Probability (IFP) method and (2) the Contributon-Linked eigenvalue sensitivity/Uncertainty estimation via Track length importance CHaracterization (CLUTCH) method. This work also presents the GEneralized Adjoint Response in Monte Carlo method (GEAR-MC), a first-of-its-kind approach for calculating adjoint-weighted, generalized response sensitivity coefficients, such as flux responses or reaction rate ratios, in CE Monte Carlo applications. The accuracy and efficiency of the CE TSUNAMI-3D eigenvalue sensitivity methods are assessed from a user perspective in a companion publication, and the accuracy and features of the CE TSUNAMI-3D GEAR-MC methods are detailed in this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bass, B.R.; Bryan, R.H.; Bryson, J.W.
This paper summarizes the capabilities and applications of the general-purpose and special-purpose computer programs that have been developed for use in fracture mechanics analyses of HSST pressure vessel experiments. Emphasis is placed on the OCA/USA code, which is designed for analysis of pressurized-thermal-shock (PTS) conditions, and on the ORMGEN/ADINA/ORVIRT system which is used for more general analysis. Fundamental features of these programs are discussed, along with applications to pressure vessel experiments.
NASA Technical Reports Server (NTRS)
Chan, William M.
1995-01-01
Algorithms and computer code developments were performed for the overset grid approach to solving computational fluid dynamics problems. The techniques developed are applicable to compressible Navier-Stokes flow for any general complex configurations. The computer codes developed were tested on different complex configurations with the Space Shuttle launch vehicle configuration as the primary test bed. General, efficient and user-friendly codes were produced for grid generation, flow solution and force and moment computation.
Development and application of a unified balancing approach with multiple constraints
NASA Technical Reports Server (NTRS)
Zorzi, E. S.; Lee, C. C.; Giordano, J. C.
1985-01-01
The development of a general analytic approach to constrained balancing that is consistent with past influence coefficient methods is described. The approach uses Lagrange multipliers to impose orbit and/or weight constraints; these constraints are combined with the least squares minimization process to provide a set of coupled equations that result in a single solution form for determining correction weights. Proper selection of constraints results in the capability to: (1) balance higher speeds without disturbing previously balanced modes, through the use of modal trial weight sets; (2) balance off-critical speeds; and (3) balance decoupled modes by use of a single balance plane. If no constraints are imposed, this solution form reduces to the general weighted least squares influence coefficient method. A test facility used to examine the use of the general constrained balancing procedure and the application of modal trial weight ratios is also described.
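The Lagrange-multiplier formulation described above, least-squares minimization of residual vibration subject to linear constraints on the correction weights, reduces to a single KKT linear system. A minimal numerical sketch follows; the influence coefficients, measurements, and the sum-to-zero constraint are hypothetical illustrations, not data from the paper.

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Minimize ||A w - b||^2 subject to C w = d, via the Lagrange-multiplier
    (KKT) system:  [A^T A  C^T; C  0] [w; mu] = [A^T b; d]."""
    n = A.shape[1]
    m = C.shape[0]
    KKT = np.block([[A.T @ A, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(KKT, rhs)
    return sol[:n]  # correction weights; sol[n:] are the multipliers

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 3))        # influence coefficients (hypothetical)
b = rng.normal(size=8)             # measured vibration to be cancelled (hypothetical)
C = np.array([[1.0, 1.0, 1.0]])    # example constraint: weights sum to zero
d = np.array([0.0])

w = constrained_lsq(A, b, C, d)
```

With C empty this collapses to ordinary weighted least squares, mirroring the paper's observation that the unconstrained influence coefficient method is the special case of no imposed constraints.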
A large-grain mapping approach for multiprocessor systems through data flow model. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Kim, Hwa-Soo
1991-01-01
A large-grain level mapping method is presented for numerically oriented applications onto multiprocessor systems. The method is based on the large-grain data flow representation of the input application, and it assumes a general interconnection topology of the multiprocessor system. The large-grain data flow model was used because such a representation best exhibits the inherent parallelism in many important applications; e.g., CFD models based on partial differential equations can be represented in large-grain data flow format very effectively. A generalized interconnection topology of the multiprocessor architecture is considered, including such architectural issues as interprocessor communication cost, with the aim of identifying the 'best matching' between the application and the multiprocessor structure. The objective is to minimize the total execution time of the input algorithm running on the target system. The mapping strategy consists of the following: (1) large-grain data flow graph generation from the input application using compilation techniques; (2) data flow graph partitioning into basic computation blocks; and (3) physical mapping onto the target multiprocessor using a priority allocation scheme for the computation blocks.
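A priority allocation scheme of the kind described in step (3) can be sketched as greedy list scheduling of a task graph onto processors. The priority rule (longest task first), the zero communication cost, and the toy graph are illustrative assumptions, not the thesis's actual algorithm.

```python
import heapq

def list_schedule(durations, deps, n_proc):
    """Greedy list scheduling of a task DAG: each ready task (longest first)
    is placed on the earliest-free processor, respecting precedence edges.
    Returns the finish time of every task."""
    indeg = {t: 0 for t in durations}
    children = {t: [] for t in durations}
    for task, preds in deps.items():
        for p in preds:
            indeg[task] += 1
            children[p].append(task)
    earliest = {t: 0.0 for t in durations}   # earliest start due to predecessors
    finish = {}
    proc_free = [(0.0, p) for p in range(n_proc)]  # heap of (free_time, processor)
    heapq.heapify(proc_free)
    ready = [t for t in durations if indeg[t] == 0]
    while ready:
        ready.sort(key=lambda t: -durations[t])    # priority rule: longest first
        task = ready.pop(0)
        free_at, proc = heapq.heappop(proc_free)
        start = max(free_at, earliest[task])
        finish[task] = start + durations[task]
        heapq.heappush(proc_free, (finish[task], proc))
        for c in children[task]:
            earliest[c] = max(earliest[c], finish[task])
            indeg[c] -= 1
            if indeg[c] == 0:
                ready.append(c)
    return finish

# Toy graph: C depends on A and B; two processors can run A and B in parallel
times = list_schedule({"A": 2.0, "B": 3.0, "C": 1.0}, {"C": ["A", "B"]}, n_proc=2)
```

A real mapper for the architecture above would fold interprocessor communication cost into the start-time computation, penalizing placements that separate heavily communicating blocks.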
Applying the scientific method to small catchment studies: A review of the Panola Mountain experience
Hooper, R.P.
2001-01-01
A hallmark of the scientific method is its iterative application to a problem to increase and refine understanding of the underlying processes controlling it. Successful iterative application of the scientific method to catchment science (including the fields of hillslope hydrology and biogeochemistry) has been hindered by two factors. First, the scale at which controlled experiments can be performed is much smaller than the scale of the phenomenon of interest. Second, computer simulation models generally have not been used as hypothesis-testing tools as rigorously as they might have been. Model evaluation often has gone only as far as evaluation of goodness of fit, rather than a full structural analysis, which is more useful when treating the model as a hypothesis. An iterative application of a simple mixing model to the Panola Mountain Research Watershed is reviewed to illustrate the increase in understanding gained by this approach and to discern general principles that may be applicable to other studies. The lessons learned include the need for an explicitly stated conceptual model of the catchment, the definition of objective measures of its applicability, and a clear linkage between the scale of observations and the scale of predictions. Published in 2001 by John Wiley & Sons, Ltd.
Sun, Wenchao; Ishidaira, Hiroshi; Bastola, Satish; Yu, Jingshan
2015-05-01
The lack of observation data for calibration constrains the application of hydrological models to estimating daily time series of streamflow. Recent improvements in remote sensing enable detection of river water-surface width from satellite observations, making it possible to track streamflow from space. In this study, a method for calibrating hydrological models using river width derived from remote sensing is demonstrated through application to the ungauged Irrawaddy Basin in Myanmar. Generalized likelihood uncertainty estimation (GLUE) is selected as a tool for automatic calibration and uncertainty analysis. Of 50,000 randomly generated parameter sets, 997 are identified as behavioral, based on comparing model simulations with satellite observations. The uncertainty band of the streamflow simulation spans most of the 10-year average monthly observed streamflow for moderate and high flow conditions. The Nash-Sutcliffe efficiency is 95.7% for the simulated streamflow at the 50% quantile. These results indicate that the application to the target basin is generally successful. Beyond evaluating the method in a basin lacking streamflow data, difficulties and possible solutions for real-world applications are addressed to promote future use of the proposed method in more ungauged basins. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
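The GLUE procedure described above can be sketched in a few lines: sample the parameter space, score each parameter set against the satellite-observed widths, and keep the "behavioral" sets. This is a toy illustration under assumed forms (a width-discharge power law, a Nash-Sutcliffe likelihood, a 0.5 behavioral threshold, and a grid search in place of random sampling), not the authors' actual model of the Irrawaddy Basin.

```python
# GLUE-style behavioral parameter selection (illustrative sketch only).

def simulate_width(a, b):
    """Stand-in for the hydrological + hydraulic-geometry model,
    using an assumed width-discharge power law w = a * Q**b."""
    flows = [100.0, 150.0, 220.0, 190.0, 115.0]   # hypothetical discharges
    return [a * q ** b for q in flows]

def nse(sim, obs):
    """Nash-Sutcliffe efficiency, used here as the GLUE likelihood measure."""
    mean_obs = sum(obs) / len(obs)
    num = sum((s - o) ** 2 for s, o in zip(sim, obs))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# "observed" satellite widths: a known parameter set plus small errors
truth = simulate_width(7.0, 0.45)
obs_width = [w + d for w, d in zip(truth, [1.0, -1.5, 2.0, -1.0, 0.5])]

# sweep the parameter space; keep behavioral sets (threshold assumed)
behavioral = []
for i in range(39):                  # a: 1.0 .. 20.0 in steps of 0.5
    for j in range(19):              # b: 0.10 .. 1.00 in steps of 0.05
        a, b = 1.0 + 0.5 * i, 0.10 + 0.05 * j
        score = nse(simulate_width(a, b), obs_width)
        if score > 0.5:
            behavioral.append((score, a, b))

print(len(behavioral), "behavioral parameter sets")
```

In the real study the behavioral sets are then weighted by their likelihoods to form quantile bounds on the simulated streamflow.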
Composing Models of Geographic Physical Processes
NASA Astrophysics Data System (ADS)
Hofer, Barbara; Frank, Andrew U.
Processes are central to geographic information science, yet geographic information systems (GIS) lack capabilities to represent process-related information. A prerequisite to including processes in GIS software is a general method for describing geographic processes independently of application disciplines. This paper presents such a method, namely a process description language. The vocabulary of the process description language is derived formally from mathematical models. Physical processes in geography can be described in two equivalent languages: partial differential equations or partial difference equations, where the latter can be shown graphically and used as a method for application specialists to enter their process models. The vocabulary of the process description language comprises components for describing the general behavior of prototypical geographic physical processes. These process components can be composed into basic models of geographic physical processes, which is shown by means of an example.
Rule Extracting based on MCG with its Application in Helicopter Power Train Fault Diagnosis
NASA Astrophysics Data System (ADS)
Wang, M.; Hu, N. Q.; Qin, G. J.
2011-07-01
To extract decision rules for fault diagnosis from incomplete historical test records, in support of knowledge-based damage assessment of helicopter power train structures, a method was proposed that directly extracts optimal generalized decision rules from incomplete information based on granular computing (GrC). Based on a semantic analysis of unknown attribute values, the granule concept was extended to handle incomplete information. The maximum characteristic granule (MCG) was defined based on the characteristic relation, and MCGs were used to construct the resolution function matrix. The optimal general decision rule was introduced, and with the basic equivalent forms of propositional logic, the rules were extracted and reduced from the incomplete information table. Combined with a fault diagnosis example for a power train, the application of the method was presented, and its validity for knowledge acquisition was demonstrated.
NASA Technical Reports Server (NTRS)
Krishnamurthy, T.; Romero, V. J.
2002-01-01
The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another on a two-design-variable problem with a known theoretical response function. Next, the methods are tested on a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
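A one-dimensional moving least squares fit conveys the core idea: at each query point, a low-order polynomial is fitted by distance-weighted least squares, and the fit's value at that point is the prediction. The Gaussian weight, bandwidth, and test function below are illustrative assumptions, not the paper's implementation.

```python
import math

# One-dimensional MLS sketch: local linear fit y ~ a + b*(xi - x) with
# Gaussian weights centered on the query point x. Data and bandwidth h
# are illustrative; a known smooth response (sin) serves as the test.

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.sin(x) for x in xs]

def mls(x, h=0.5):
    """Return the weighted local-linear prediction at x (the intercept a)."""
    w = [math.exp(-((xi - x) / h) ** 2) for xi in xs]
    s0 = sum(w)
    s1 = sum(wi * (xi - x) for wi, xi in zip(w, xs))
    s2 = sum(wi * (xi - x) ** 2 for wi, xi in zip(w, xs))
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * (xi - x) * yi for wi, xi, yi in zip(w, xs, ys))
    det = s0 * s2 - s1 * s1              # 2x2 normal-equation determinant
    return (s2 * t0 - s1 * t1) / det     # intercept = fitted value at x

print(mls(1.2), math.sin(1.2))
```

Because the weights move with the query point, the surface is smooth everywhere, which is the property the paper exploits for response surface construction.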
NASA Technical Reports Server (NTRS)
Oconnell, R. F.; Hassig, H. J.; Radovcich, N. A.
1975-01-01
Computational aspects of (1) flutter optimization (minimization of structural mass subject to specified flutter requirements), (2) methods for solving the flutter equation, and (3) efficient methods for computing generalized aerodynamic force coefficients in the repetitive analysis environment of computer-aided structural design are discussed. Specific areas included: a two-dimensional Regula Falsi approach to solving the generalized flutter equation; method of incremented flutter analysis and its applications; the use of velocity potential influence coefficients in a five-matrix product formulation of the generalized aerodynamic force coefficients; options for computational operations required to generate generalized aerodynamic force coefficients; theoretical considerations related to optimization with one or more flutter constraints; and expressions for derivatives of flutter-related quantities with respect to design variables.
NASA Astrophysics Data System (ADS)
Fan, Fan; Yu, Yueyang; Amiri, Seyed Ebrahim Hashemi; Quandt, David; Bimberg, Dieter; Ning, C. Z.
2017-04-01
Semiconductor nanolasers are potentially important for many applications. Their design and fabrication are still in the early stages of research and face many challenges. In this paper, we demonstrate a generally applicable membrane transfer method to release and transfer a strain-balanced InGaAs quantum-well nanomembrane of 260 nm in thickness onto various substrates with a high yield. As an initial device demonstration, nano-ring lasers of 1.5 μm in outer diameter and 500 nm in radial thickness are fabricated on MgF2 substrates. Room temperature single mode operation is achieved under optical pumping with a cavity volume of only 0.43λ0^3 (λ0 being the vacuum wavelength). Our nanomembrane-based approach represents an advantageous alternative to other design and fabrication approaches and could lead to integration of nanolasers on silicon substrates or with metallic cavities.
Corn response and soil nutrient concentration from subsurface application of poultry litter
USDA-ARS?s Scientific Manuscript database
Nitrogen fertilizer management is vital to corn (Zea mays L.) production from financial and environmental perspectives. Poultry litter as a nutrient source in this cropping system is generally surface broadcast, potentially causing volatilization of NH3. Recently a new application method was devel...
A class of fractional differential hemivariational inequalities with application to contact problem
NASA Astrophysics Data System (ADS)
Zeng, Shengda; Liu, Zhenhai; Migorski, Stanislaw
2018-04-01
In this paper, we study a class of generalized differential hemivariational inequalities of parabolic type involving the time fractional order derivative operator in Banach spaces. We use the Rothe method combined with surjectivity of multivalued pseudomonotone operators and properties of the Clarke generalized gradient to establish existence of solution to the abstract inequality. As an illustrative application, a frictional quasistatic contact problem for viscoelastic materials with adhesion is investigated, in which the friction and contact conditions are described by the Clarke generalized gradient of nonconvex and nonsmooth functionals, and the constitutive relation is modeled by the fractional Kelvin-Voigt law.
Degree of Approximation by a General Cλ-Summability Method
NASA Astrophysics Data System (ADS)
Sonker, S.; Munjal, A.
2018-03-01
In the present study, two theorems describing the degree of approximation of signals belonging to the class Lip(α, p, w) by a more general Cλ-method (summability method) are formulated. Improved estimations are obtained in terms of λ(n), where (λ(n))^(-α) ≤ n^(-α) for 0 < α ≤ 1, as compared to previous studies whose estimates are given in terms of n. These estimations for infinite matrices are very applicable in solid state physics, which further motivates the investigation of perturbations of matrix-valued functions.
A Generalized Approach for Measuring Relationships Among Genes.
Wang, Lijun; Ahsan, Md Asif; Chen, Ming
2017-07-21
Several methods for identifying relationships among pairs of genes have been developed. In this article, we present a generalized approach for measuring relationships between any pair of genes, based on statistical prediction. We derive two particular versions of the generalized approach, least squares estimation (LSE) and nearest neighbors prediction (NNP). According to mathematical proof, LSE is equivalent to the methods based on correlation, and NNP approximates one popular method, the maximal information coefficient (MIC), based on performance in simulations and on a real dataset. Moreover, the approach based on statistical prediction can be extended from two-gene relationships to multi-gene relationships. This extension would help to identify relationships among multiple genes.
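The LSE version can be sketched directly: score how well one gene's expression predicts another's through a least squares line. The resulting score equals the squared Pearson correlation, consistent with the equivalence the article states; the data and function names below are illustrative, not the article's code.

```python
# Prediction-based relationship score, LSE version (illustrative sketch):
# 1 - (residual sum of squares) / (total sum of squares) for a least
# squares line of y on x. This equals the squared Pearson correlation.

def lse_score(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    slope = sxy / sxx
    pred = [my + slope * (a - mx) for a in x]
    ss_res = sum((b - p) ** 2 for b, p in zip(y, pred))
    return 1.0 - ss_res / syy            # = Pearson r squared

# hypothetical expression levels for two genes, roughly y = 2x
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]
print(lse_score(x, y))
```

Replacing the linear predictor with a nearest-neighbor predictor, and scoring residuals the same way, gives the NNP variant.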
NASA Astrophysics Data System (ADS)
He, Xianjin; Zhang, Xinchang; Xin, Qinchuan
2018-02-01
Recognition of building group patterns (i.e., the arrangement and form exhibited by a collection of buildings at a given mapping scale) is important to the understanding and modeling of geographic space and is hence essential to a wide range of downstream applications such as map generalization. Most existing methods develop rigid rules based on the topographic relationships between building pairs to identify building group patterns, and thus their applications are often limited. This study proposes a method to identify a variety of building group patterns that allow for map generalization. The method first identifies building group patterns from potential building clusters based on a machine-learning algorithm and further partitions the building clusters with no recognized patterns based on a graph partitioning method. The proposed method is applied to the datasets of three cities that are representative of the complex urban environment in Southern China. Assessment of the results against reference data suggests that the proposed method is able to recognize both regular (e.g., collinear, curvilinear, and rectangular) and irregular (e.g., L-shaped, H-shaped, and high-density) building group patterns well, given that the correctness values are consistently nearly 90% and the completeness values are all above 91% for the three study areas. The proposed method shows promise for automated recognition of building group patterns in support of map generalization.
Emerging Applications for High K Materials in VLSI Technology
Clark, Robert D.
2014-01-01
The current status of High K dielectrics in Very Large Scale Integrated circuit (VLSI) manufacturing for leading edge Dynamic Random Access Memory (DRAM) and Complementary Metal Oxide Semiconductor (CMOS) applications is summarized along with the deposition methods and general equipment types employed. Emerging applications for High K dielectrics in future CMOS are described as well for implementations in 10 nm and beyond nodes. Additional emerging applications for High K dielectrics include Resistive RAM memories, Metal-Insulator-Metal (MIM) diodes, Ferroelectric logic and memory devices, and as mask layers for patterning. Atomic Layer Deposition (ALD) is a common and proven deposition method for all of the applications discussed for use in future VLSI manufacturing. PMID:28788599
Seitz, Max W; Haux, Christian; Knaup, Petra; Schubert, Ingrid; Listl, Stefan
2018-01-01
Associations between dental and chronic-systemic diseases have been observed frequently in medical research; however, the findings of this research have so far found little relevance in everyday clinical treatment. Major problems are the assessment of evidence for correlations between such diseases and the integration of current medical knowledge into the intersectoral care provided by dentists and general practitioners. Using the example of dental and chronic-systemic diseases, the Dent@Prevent project is developing an interdisciplinary decision support system (DSS) that provides specialists with information relevant to the treatment of such cases. To provide physicians with relevant medical knowledge, a mixed-methods approach is developed to acquire the knowledge in an evidence-oriented way. This procedure includes a literature review, routine data analyses, focus groups of dentists and general practitioners, as well as the identification and integration of applicable guidelines and Patient Reported Measures (PRMs) into the treatment process. The developed mixed-methods approach for evidence-oriented knowledge acquisition appears to be applicable and supportive for interdisciplinary projects. It can raise the systematic quality of the knowledge-acquisition process and can be applicable for evidence-based system development. Further research is necessary to assess the impact on patient care and to evaluate possible applicability in other interdisciplinary areas.
Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.
2014-01-01
Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indices are employed for solving third- and fifth-order two-point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov-Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The linear systems resulting from the application of our method are specially structured and can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358
Manufacturing Complicated Shells And Liners
NASA Technical Reports Server (NTRS)
Sobol, Paul J.; Faucher, Joseph E.
1993-01-01
Explosive forming, wax filling, and any one of welding, diffusion bonding, or brazing are used in a method of manufacturing large, complicated shell-and-liner vessels or structures. The method was conceived for the manufacture of film-cooled rocket nozzles but is applicable to joining large coaxial shells and liners in general.
Application of Complex Adaptive Systems in Portfolio Management
ERIC Educational Resources Information Center
Su, Zheyuan
2017-01-01
Simulation-based methods are becoming a promising research tool in financial markets. A general Complex Adaptive System can be tailored to different application scenarios. Based on the current research, we built two models that would benefit portfolio management by utilizing Complex Adaptive Systems (CAS) in Agent-based Modeling (ABM) approach.…
Eighteenth NASTRAN (R) Users' Colloquium
NASA Technical Reports Server (NTRS)
1990-01-01
This publication is the proceedings of the Eighteenth NASTRAN Users' Colloquium held in Portland, Oregon, April 23-27, 1990. It provides some comprehensive general papers on the application of finite elements in engineering, comparisons with other approaches, unique applications, pre- and post-processing or auxiliary programs, and new methods of analysis with NASTRAN.
NASA Astrophysics Data System (ADS)
Kruglova, T. V.
2004-01-01
Detailed spectroscopic information about highly excited molecules and radicals such as H3+, H2, HI, H2O, and CH2 is needed for a number of applications in laser physics, astrophysics, and chemistry. Studies of highly excited molecular vibration-rotation states face several problems connected with slow convergence or even divergence of perturbation expansions. The physical reason for the divergence of a perturbation expansion is large-amplitude motion and strong vibration-rotation coupling. In this case one needs a special method of series summation. A number of papers have been devoted to this problem; papers 1-10 in the reference list are only examples of studies on this topic. The present report is aimed at the application of the GET method (Generalized Euler Transformation) to the diatomic molecule. Within the perturbation approach, the energy levels of a diatomic molecule are usually represented as a Dunham series in the rotational J(J+1) and vibrational (V+1/2) quantum numbers. However, perturbation theory is not applicable to highly excited vibration-rotation states because the perturbation expansion becomes divergent, so a special summation method is needed. The Generalized Euler Transformation (GET) is known to be an efficient method for summing slowly convergent series and has already been used for solving several quantum problems (Refs. 13 and 14). In this report the results of the Euler transformation of the diatomic-molecule Dunham series are presented. It is shown that the Dunham power series can be represented as a functional series, which is equivalent to its partial summation. It is also shown that the transformed series has better convergence properties than the initial series.
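The flavor of Euler summation can be seen on a classical example: accelerating the slowly convergent alternating series for ln 2. This is a generic textbook sketch of the ordinary Euler transformation, not the report's actual generalized transformation of the Dunham series.

```python
import math

# Euler's transformation of an alternating series sum (-1)^n a_n:
# resum it as sum_k (Delta^k a)(0) / 2**(k+1), where Delta is the
# forward difference Delta a_n = a_n - a_{n+1}.

def euler_transform(a, terms):
    """Accelerated sum of sum_n (-1)**n * a[n] using `terms` transformed terms."""
    total = 0.0
    diffs = list(a)
    for k in range(terms):
        total += diffs[0] / 2 ** (k + 1)
        diffs = [diffs[i] - diffs[i + 1] for i in range(len(diffs) - 1)]
    return total

# ln 2 = 1 - 1/2 + 1/3 - 1/4 + ... (converges very slowly)
a = [1.0 / (n + 1) for n in range(40)]
direct = sum((-1) ** n * a[n] for n in range(20))      # 20 direct terms
accelerated = euler_transform(a, 20)                   # 20 transformed terms
print(abs(direct - math.log(2)), abs(accelerated - math.log(2)))
```

Twenty direct terms leave an error of a few percent, while twenty transformed terms are accurate to roughly eight decimal places, which is the kind of improvement the report seeks for the Dunham expansion.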
On the Determination of the Orbits of Comets
NASA Astrophysics Data System (ADS)
Englefield, Henry
2013-06-01
Preface; 1. General view of the method; 2. On the motion of the point of intersection of the radius vector and cord; 3. On the comparison of the parabolic cord with the space which answers to the mean velocity of the earth in the same time; 4. Of the reduction of the second longitude of the comet; 5. On the proportion of the three curtate distances of the comet from the earth; 6. Of the graphical declination of the orbit of the earth; 7. Of the numerical quantities to be prepared for the construction or computation of the comet's orbit; 8. Determination of the distances of the comet from the earth and the sun; 9. Determination of the elements of the orbit from the determined distances; 10. Determination of the place of the comet from the earth and sun; 11. Determination of the distances of the comet from the earth and sun; 12. Determination of the comet's orbit; 13. Determination of the place of the comet; 14. Application of the graphical method to the comet of 1769; 15. Application of the distances found; 16. Determination of the place of the comet, for another given time; 17. Application of the trigonometrical method to the comet of 1769; 18. Determination of the elements of the orbit of the comet of 1769; Example of the graphical operation for the orbit of the comet of 1769; Example of the trigonometrical operation for the orbit of the comet of 1769; Conclusion; La Place's general method for determining the orbits of comets; Determination of the two elements of the orbit; Application of La Place's method of finding the approximate perihelion distance; Application of La Place's method for correcting the orbit of a comet, to the comet of 1769; Explanation and use of the tables; Tables; Appendix; Plates.
Application of a boundary element method to the study of dynamical torsion of beams
NASA Technical Reports Server (NTRS)
Czekajski, C.; Laroze, S.; Gay, D.
1982-01-01
During dynamic torsion of beam elements, consideration of nonuniform warping effects involves a more general technical formulation than that of Saint-Venant. Nonclassical torsion constants appear in addition to the well-known torsional rigidity. The adaptation of the boundary integral element method to the calculation of these constants for general section shapes is described. The suitability of the formulation is investigated with some examples of thick as well as thin-walled cross sections.
A generalized theory for the design of contraction cones and other low speed ducts
NASA Technical Reports Server (NTRS)
Barger, R. L.; Bowen, J. T.
1972-01-01
A generalization of the Tsien method of contraction cone design is described. The design velocity distribution is expressed in such a form that the required high order derivatives can be obtained by recursion rather than by numerical or analytic differentiation. The method is applicable to the design of diffusers and converging-diverging ducts as well as contraction cones. The computer program is described and a FORTRAN listing of the program is provided.
Conformal mapping for multiple terminals
Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao
2016-01-01
Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods apply assumptions or approximations. A general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples of an electrostatic actuator with three electrodes and of a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method is helpful in promoting the application of conformal mapping in analysis of practical problems. PMID:27830746
Theory, Methods, and Applications of Nonlinear Control
2012-08-29
an application to Lotka-Volterra systems," in Proceedings of the American Control Conference (St. Louis, MO, 10-12 June 2009), pp. 96-101. [MM10a...Mazenc, F., and M. Malisoff, "Strict Lyapunov function constructions under LaSalle conditions with an application to Lotka-Volterra systems," IEEE...the tracking dynamics, (d) the applicability of the theory to a very general class of reference trajectories, and (e) the use of input-to-state
Viallon, Vivian; Banerjee, Onureena; Jougla, Eric; Rey, Grégoire; Coste, Joel
2014-03-01
Looking for associations among multiple variables is a topical issue in statistics due to the increasing amount of data encountered in biology, medicine, and many other domains involving statistical applications. Graphical models have recently gained popularity for this purpose in the statistical literature. In the binary case, however, exact inference is generally very slow or even intractable because of the form of the so-called log-partition function. In this paper, we review various approximate methods for structure selection in binary graphical models that have recently been proposed in the literature and compare them through an extensive simulation study. We also propose a modification of one existing method, that is shown to achieve good performance and to be generally very fast. We conclude with an application in which we search for associations among causes of death recorded on French death certificates. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Fuzzy set methods for object recognition in space applications
NASA Technical Reports Server (NTRS)
Keller, James M.
1991-01-01
During the reporting period, the development of the theory and application of methodologies for decision making under uncertainty was addressed. Two subreports are included: the first on properties of general hybrid operators, and the second on some new research on generalized threshold logic units. In the first part, the properties of the additive gamma-model are explored, where the intersection part is first taken to be the product of the input values and the union part is obtained by an extension of De Morgan's law to fuzzy sets. Then Yager's class of union and intersection operators is used in the additive gamma-model. The inputs are weighted to some power that represents their importance and thus their contribution to the compensation process. In the second part, the extension of binary logic synthesis methods to multiple-valued logic synthesis methods, to enable the synthesis of decision networks when the input/output variables are not binary, is discussed.
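The additive gamma-model described above has a direct reading: a convex combination of a product intersection and its De Morgan-dual union, with gamma controlling the degree of compensation. A minimal sketch (the input membership values are illustrative, and the importance-weighting exponents are omitted for brevity):

```python
# Additive gamma-model hybrid operator (illustrative sketch):
# output = (1 - gamma) * intersection + gamma * union, with the product
# as intersection and its De Morgan dual as union.

def additive_gamma(memberships, gamma):
    inter = 1.0
    for m in memberships:
        inter *= m                        # product intersection
    comp = 1.0
    for m in memberships:
        comp *= (1.0 - m)
    union = 1.0 - comp                    # De Morgan dual of the product
    return (1.0 - gamma) * inter + gamma * union

print(additive_gamma([0.8, 0.6], 0.0))    # pure intersection (~0.48)
print(additive_gamma([0.8, 0.6], 1.0))    # pure union (~0.92)
print(additive_gamma([0.8, 0.6], 0.5))    # compensatory middle ground
```

Sliding gamma from 0 to 1 interpolates between strict conjunction and full disjunction, which is the compensation behavior the report studies.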
SDSL-ESR-based protein structure characterization.
Strancar, Janez; Kavalenka, Aleh; Urbancic, Iztok; Ljubetic, Ajasja; Hemminga, Marcus A
2010-03-01
As proteins are key molecules in living cells, knowledge about their structure can provide important insights and applications in science, biotechnology, and medicine. However, many protein structures are still a big challenge for existing high-resolution structure-determination methods, as can be seen in the number of protein structures published in the Protein Data Bank. This is especially the case for less-ordered, more hydrophobic and more flexible protein systems. The lack of efficient methods for structure determination calls for urgent development of a new class of biophysical techniques. This work attempts to address this problem with a novel combination of site-directed spin labelling electron spin resonance spectroscopy (SDSL-ESR) and protein structure modelling, which is coupled by restriction of the conformational spaces of the amino acid side chains. Comparison of the application to four different protein systems enables us to generalize the new method and to establish a general procedure for determination of protein structure.
Fundamentals of bipolar high-frequency surgery.
Reidenbach, H D
1993-04-01
In endoscopic surgery a very precise surgical dissection technique and an efficient hemostasis are of decisive importance. The bipolar technique may be regarded as a method which satisfies both requirements, especially regarding a high safety standard in application. In this context the biophysical and technical fundamentals of this method, which have been known in principle for a long time, are described with regard to the special demands of a newly developed field of modern surgery. After classification of this method into a general and a quasi-bipolar mode, various technological solutions of specific bipolar probes, in a strict and in a generalized sense, are characterized in terms of indication. Experimental results obtained with different bipolar instruments and probes are given. The application of modern microprocessor-controlled high-frequency surgery equipment and, wherever necessary, the integration of additional ancillary technology into the specialized bipolar instruments may result in most useful and efficient tools of a key technology in endoscopic surgery.
Riemann Solvers in Relativistic Hydrodynamics: Basics and Astrophysical Applications
NASA Astrophysics Data System (ADS)
Ibanez, Jose M.
2001-12-01
My contribution to these proceedings presents a general overview of High Resolution Shock Capturing (HRSC) methods in the field of relativistic hydrodynamics, with special emphasis on Riemann solvers. HRSC techniques achieve highly accurate numerical approximations (formally second order or better) in smooth regions of the flow and capture the motion of unresolved steep gradients without creating spurious oscillations. In the first part I will show how these techniques have been extended to relativistic hydrodynamics, making it possible to explore some challenging astrophysical scenarios. I will review recent literature concerning the main properties of different special relativistic Riemann solvers, and discuss several 1D and 2D test problems which are commonly used to evaluate the performance of numerical methods in relativistic hydrodynamics. In the second part I will illustrate the use of HRSC methods in several astrophysical applications where special and general relativistic hydrodynamical processes play a crucial role.
Comprehensive rotorcraft analysis methods
NASA Technical Reports Server (NTRS)
Stephens, Wendell B.; Austin, Edward E.
1988-01-01
The development and application of comprehensive rotorcraft analysis methods in the field of rotorcraft technology are described. These large scale analyses and the resulting computer programs are intended to treat the complex aeromechanical phenomena that describe the behavior of rotorcraft. They may be used to predict rotor aerodynamics, acoustic, performance, stability and control, handling qualities, loads and vibrations, structures, dynamics, and aeroelastic stability characteristics for a variety of applications including research, preliminary and detail design, and evaluation and treatment of field problems. The principal comprehensive methods developed or under development in recent years and generally available to the rotorcraft community because of US Army Aviation Research and Technology Activity (ARTA) sponsorship of all or part of the software systems are the Rotorcraft Flight Simulation (C81), Dynamic System Coupler (DYSCO), Coupled Rotor/Airframe Vibration Analysis Program (SIMVIB), Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics (CAMRAD), General Rotorcraft Aeromechanical Stability Program (GRASP), and Second Generation Comprehensive Helicopter Analysis System (2GCHAS).
Radar studies of the atmosphere using spatial and frequency diversity
NASA Astrophysics Data System (ADS)
Yu, Tian-You
This work provides results from a thorough investigation of atmospheric radar imaging including theory, numerical simulations, observational verification, and applications. The theory is generalized to include the existing imaging techniques of coherent radar imaging (CRI) and range imaging (RIM), which are shown to be special cases of three-dimensional imaging (3D Imaging). Mathematically, the problem of atmospheric radar imaging is posed as an inverse problem. In this study, the Fourier, Capon, and maximum entropy (MaxEnt) methods are proposed to solve the inverse problem. After the introduction of the theory, numerical simulations are used to test, validate, and exercise these techniques. Statistical comparisons of the three methods of atmospheric radar imaging are presented for various signal-to-noise ratio (SNR), receiver configuration, and frequency sampling. The MaxEnt method is shown to generally possess the best performance for low SNR. The performance of the Capon method approaches the performance of the MaxEnt method for high SNR. In limited cases, the Capon method actually outperforms the MaxEnt method. The Fourier method generally tends to distort the model structure due to its limited resolution. Experimental justification of CRI and RIM is accomplished using the Middle and Upper (MU) Atmosphere Radar in Japan and the SOUnding SYstem (SOUSY) in Germany, respectively. A special application of CRI to the observation of polar mesosphere summer echoes (PMSE) is used to show direct evidence of wave steepening and possibly explain gravity wave variations associated with PMSE.
Adding Temporal Characteristics to Geographical Schemata and Instances: A General Framework
NASA Astrophysics Data System (ADS)
Ota, Morishige
2018-05-01
This paper proposes the temporal general feature model (TGFM) as a meta-model for application schemata representing changes of real-world phenomena. It is difficult to determine the history of changes directly from current application schemata, even if revision notes are attached to the specification. To solve this problem, rules for describing the succession between previous and posterior components are added to the general feature model, resulting in TGFM. After discussing the concepts associated with the new model, simple examples of application schemata are presented as instances of TGFM. Descriptors for changing properties, the succession of changing properties in moving features, and the succession of features and associations are introduced. The modeling methods proposed in this paper will contribute to the acquisition of consistent and reliable temporal geospatial data.
A generalized vortex lattice method for subsonic and supersonic flow applications
NASA Technical Reports Server (NTRS)
Miranda, L. R.; Elliot, R. D.; Baker, W. M.
1977-01-01
If the discrete vortex lattice is considered as an approximation to the surface-distributed vorticity, then the concept of the generalized principal part of an integral yields a residual term to the vorticity-induced velocity field. The proper incorporation of this term into the velocity field generated by the discrete vortex lines renders the present vortex lattice method valid for supersonic flow. Special techniques for simulating nonzero thickness lifting surfaces and fusiform bodies with vortex lattice elements are included. Thickness effects of wing-like components are simulated by a double (biplanar) vortex lattice layer, and fusiform bodies are represented by a vortex grid arranged on a series of concentric cylindrical surfaces. The analysis of sideslip effects by the subject method is described. Numerical considerations peculiar to the application of these techniques are also discussed. The method has been implemented in a digital computer code. A user's manual is included along with a complete FORTRAN compilation, an executed case, and conversion programs for transforming input for the NASA wave drag program.
Tallman, Sean D; Winburn, Allysha P
2015-09-01
Ancestry assessment from the postcranial skeleton presents a significant challenge to forensic anthropologists. However, metric dimensions of the femur subtrochanteric region are believed to distinguish between individuals of Asian and non-Asian descent. This study tests the discriminatory power of subtrochanteric shape using modern samples of 128 Thai and 77 White American males. Results indicate that the samples' platymeric index distributions are significantly different (p≤0.001), with the Thai platymeric index range generally lower and the White American range generally higher. While the application of ancestry assessment methods developed from Native American subtrochanteric data results in low correct classification rates for the Thai sample (50.8-57.8%), adapting these methods to the current samples leads to better classification. The Thai data may be more useful in forensic analysis than previously published subtrochanteric data derived from Native American samples. Adapting methods to include appropriate geographic and contemporaneous populations increases the accuracy of femur subtrochanteric ancestry methods. © 2015 American Academy of Forensic Sciences.
Huang, Jian; Zhang, Cun-Hui
2013-01-01
The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
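As a toy illustration of the adaptive Lasso viewed as a reweighted ℓ1 problem (a sketch only, not the authors' multistage estimator), the code below implements plain coordinate-descent Lasso and one adaptive reweighting step. The simulated data, penalty level, and weight formula w_j = 1/(|β̂_j| + ε) are assumptions made for the example.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent Lasso via soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
    return beta

def adaptive_lasso(X, y, lam, eps=1e-3):
    """One reweighting step: w_j = 1/(|beta_init_j| + eps), applied by
    rescaling columns (a standard computational trick)."""
    beta_init = lasso_cd(X, y, lam)
    w = 1.0 / (np.abs(beta_init) + eps)
    beta_s = lasso_cd(X / w, y, lam)   # column j scaled by 1/w_j
    return beta_s / w                  # undo the scaling

rng = np.random.default_rng(1)
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]            # sparse truth: 2 active predictors
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_hat = adaptive_lasso(X, y, lam=5.0)
```

The adaptive step penalizes coefficients that the initial Lasso set to (near) zero very heavily, which is the mechanism behind the improved selection properties the abstract refers to.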
Probabilistic structural analysis methods and applications
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Wu, Y.-T.; Dias, B.; Rajagopal, K. R.
1988-01-01
An advanced algorithm for simulating the probabilistic distribution of structural responses due to statistical uncertainties in loads, geometry, material properties, and boundary conditions is reported. The method effectively combines an advanced algorithm for calculating probability levels for multivariate problems (fast probability integration) together with a general-purpose finite-element code for stress, vibration, and buckling analysis. Application is made to a space propulsion system turbine blade for which the geometry and material properties are treated as random variables.
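Fast probability integration itself is beyond a short sketch, but plain Monte Carlo conveys the underlying idea of propagating statistical uncertainties in loads, geometry, and material limits through a structural response. The cantilever bending-stress formula, input distributions, and allowable stress below are invented for illustration and have no connection to the turbine-blade application.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 200_000                         # Monte Carlo sample size

# Hypothetical random inputs for a rectangular-section cantilever
P = rng.normal(1000.0, 100.0, N)    # tip load [N]
L = rng.normal(1.0, 0.01, N)        # length [m]
b = rng.normal(0.05, 0.001, N)      # section width [m]
h = rng.normal(0.10, 0.002, N)      # section height [m]

# Maximum bending stress at the root: sigma = 6 P L / (b h^2)
sigma = 6 * P * L / (b * h ** 2)

sigma_allow = 14e6                  # assumed allowable stress [Pa]
p_fail = np.mean(sigma > sigma_allow)   # estimated exceedance probability
```

Methods such as fast probability integration aim to obtain these exceedance probabilities far more cheaply than brute-force sampling, which is the point of combining them with a finite-element response calculation.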
Law, Jodi Woan-Fei; Ab Mutalib, Nurul-Syakima; Chan, Kok-Gan; Lee, Learn-Han
2015-01-01
The incidence of foodborne diseases has increased over the years, resulting in a major public health problem globally. Foodborne pathogens can be found in various foods, and it is important to detect them to ensure a safe food supply and to prevent foodborne diseases. The conventional methods used to detect foodborne pathogens are time consuming and laborious. Hence, a variety of methods have been developed for rapid detection of foodborne pathogens, as required in many food analyses. Rapid detection methods can be categorized into nucleic acid-based, biosensor-based and immunological-based methods. This review emphasizes the principles and application of recent rapid methods for the detection of foodborne bacterial pathogens. Detection methods included are simple polymerase chain reaction (PCR), multiplex PCR, real-time PCR, nucleic acid sequence-based amplification (NASBA), loop-mediated isothermal amplification (LAMP) and oligonucleotide DNA microarray, classified as nucleic acid-based methods; optical, electrochemical and mass-based biosensors, classified as biosensor-based methods; and enzyme-linked immunosorbent assay (ELISA) and lateral flow immunoassay, classified as immunological-based methods. In general, rapid detection methods are time-efficient, sensitive, specific and labor-saving. The development of rapid detection methods is vital in the prevention and treatment of foodborne diseases. PMID:25628612
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ishida, Muneyuki; Ishida, Shin; Ishida, Taku
1998-05-29
The relation between scattering and production amplitudes is investigated, using a simple field theoretical model, from the general viewpoint of unitarity and the applicability of the final state interaction (FSI) theorem. The IA-method and VMW-method, which are applied in our phenomenological analyses [2,3] suggesting the σ-existence, are obtained as the physical state representations of scattering and production amplitudes, respectively. Moreover, the VMW-method is shown to be an effective method to obtain resonance properties from general production processes, while the conventional analyses based on the 'universality' of the ππ-scattering amplitude are powerless for this purpose.
Relation between scattering and production amplitude—Case of intermediate σ-particle in ππ-system—
NASA Astrophysics Data System (ADS)
Ishida, Muneyuki; Ishida, Shin; Ishida, Taku
1998-05-01
The relation between scattering and production amplitudes is investigated, using a simple field theoretical model, from the general viewpoint of unitarity and the applicability of the final state interaction (FSI) theorem. The IA-method and VMW-method, which are applied in our phenomenological analyses [2,3] suggesting the σ-existence, are obtained as the physical state representations of scattering and production amplitudes, respectively. Moreover, the VMW-method is shown to be an effective method to obtain resonance properties from general production processes, while the conventional analyses based on the "universality" of the ππ-scattering amplitude are powerless for this purpose.
NASA Technical Reports Server (NTRS)
Zeng, S.; Wesseling, P.
1993-01-01
The performance of a linear multigrid method using four smoothing methods, called SCGS (Symmetrical Coupled Gauss-Seidel), CLGS (Collective Line Gauss-Seidel), SILU (Scalar ILU), and CILU (Collective ILU), is investigated for the incompressible Navier-Stokes equations in general coordinates, in association with Galerkin coarse grid approximation. Robustness and efficiency are measured and compared by application to test problems. The numerical results show that CILU is the most robust, SILU the least, with CLGS and SCGS in between. CLGS is the best in efficiency; SCGS and CILU follow, and SILU is the worst.
Holmes, Robert R.; Dunn, Chad J.
1996-01-01
A simplified method to estimate total-streambed scour was developed for application to bridges in the State of Illinois. Scour envelope curves, developed as empirical relations between calculated total scour and bridge-site characteristics for 213 State highway bridges in Illinois, are used in the method to estimate the 500-year flood scour. These 213 bridges, geographically distributed throughout Illinois, had been previously evaluated for streambed scour with the application of conventional hydraulic and scour-analysis methods recommended by the Federal Highway Administration. The bridge characteristics necessary for application of the simplified bridge scour-analysis method can be obtained from an office review of bridge plans, examination of topographic maps, and a reconnaissance-level site inspection. The estimates computed with the simplified method generally resulted in a larger value of 500-year flood total-streambed scour than with the more detailed conventional method. The simplified method was successfully verified with a separate data set of 106 State highway bridges, geographically distributed throughout Illinois, and 15 county highway bridges.
Transient liquid phase diffusion bonding of Udimet 720 for Stirling power converter applications
NASA Technical Reports Server (NTRS)
Mittendorf, Donald L.; Baggenstoss, William G.
1992-01-01
Udimet 720 has been selected for use on Stirling power converters for space applications. Because Udimet 720 is generally considered susceptible to strain age cracking if traditional fusion welding is used, other joining methods are being considered. A process for transient liquid phase diffusion bonding of Udimet 720 has been theoretically developed in an effort to eliminate the strain age crack concern. This development has taken into account such variables as final grain size, joint homogenization, joint efficiency related to bonding aid material, bonding aid material application method, and thermal cycle.
Introduction to Remote Sensing Image Registration
NASA Technical Reports Server (NTRS)
Le Moigne, Jacqueline
2017-01-01
For many applications, accurate and fast image registration of large amounts of multi-source data is the first necessary step before subsequent processing and integration. Image registration is defined by several steps, and each step can be approached by various methods which all present diverse advantages and drawbacks depending on the type of data, the type of application, the a priori information known about the data, and the accuracy that is required. This paper first presents a general overview of remote sensing image registration and then goes over a few specific methods and their applications.
Varandas, A J C; Sarkar, B
2011-05-14
Generalized Born-Oppenheimer equations including the geometrical phase effect are derived for three- and four-fold electronic manifolds in Jahn-Teller systems near the degeneracy seam. The method is readily extendable to N-fold systems of arbitrary dimension. An application is reported for a model threefold system, and the results are compared with Born-Oppenheimer (geometrical phase ignored), extended Born-Oppenheimer, and coupled three-state calculations. The theory shows unprecedented simplicity while depicting all features of more elaborate ones.
Chiu, Huai-Hsuan; Liao, Hsiao-Wei; Shao, Yu-Yun; Lu, Yen-Shen; Lin, Ching-Hung; Tsai, I-Lin; Kuo, Ching-Hua
2018-08-17
Monoclonal antibody (mAb) drugs have generated much interest in recent years for treating various diseases. Immunoglobulin G (IgG) represents a high percentage of mAb drugs that have been approved by the Food and Drug Administration (FDA). To facilitate therapeutic drug monitoring and pharmacokinetic/pharmacodynamic studies, we developed a general liquid chromatography-tandem mass spectrometry (LC-MS/MS) method to quantify the concentration of IgG-based mAbs in human plasma. Three IgG-based drugs (bevacizumab, nivolumab and pembrolizumab) were selected to demonstrate our method. Protein G beads were used for sample pretreatment due to their universal ability to trap IgG-based drugs. Surrogate peptides obtained after trypsin digestion were quantified by LC-MS/MS. To correct for sample preparation errors and matrix effects that occur during LC-MS/MS analysis, we used a two-internal-standard (IS) method that includes the IgG-based drug IS tocilizumab and a post-column infused IS. Using two internal standards was found to effectively improve quantification accuracy, which was within 15% for all mAb drugs tested at three different concentrations. This general method was validated in terms of its precision, accuracy, linearity and sensitivity for the 3 demonstration mAb drugs. The successful application of the method to clinical samples demonstrated its applicability in clinical analysis. It is anticipated that this general method could be applied to other mAb-based drugs for use in precision medicine and clinical studies. Copyright © 2018 Elsevier B.V. All rights reserved.
General optical discrete z transform: design and application.
Ngo, Nam Quoc
2016-12-20
This paper presents a generalization of the discrete z transform algorithm, termed the general optical discrete z transform (GOD-ZT). It is shown that the GOD-ZT algorithm is a generalization of several important conventional discrete transforms. Based on the GOD-ZT algorithm, a tunable GOD-ZT processor is synthesized using a silica-based finite impulse response transversal filter. To demonstrate the effectiveness of the method, the design and simulation of a tunable optical discrete Fourier transform (ODFT) processor as a special case of the synthesized GOD-ZT processor is presented. It is also shown that the ODFT processor can function as a real-time optical spectrum analyzer. The tunable ODFT has an important potential application as a tunable optical demultiplexer at the receiver end of an optical orthogonal frequency-division multiplexing transmission system.
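The reduction of the discrete z transform to the DFT (the ODFT special case mentioned above) can be checked numerically. This is a generic software analogue, not a model of the optical processor: evaluating X(z) = Σ x[n] z^(−n) at the N-th roots of unity reproduces the DFT.

```python
import numpy as np

def discrete_z(x, z_points):
    """Evaluate X(z) = sum_n x[n] * z**(-n) at arbitrary points in the z-plane."""
    n = np.arange(len(x))
    return np.array([np.sum(x * z ** (-n)) for z in z_points])

x = np.array([1.0, 2.0, 3.0, 4.0])
N = len(x)
# Sampling z on the unit circle at the N-th roots of unity recovers the DFT
zk = np.exp(2j * np.pi * np.arange(N) / N)
X = discrete_z(x, zk)
```

Other contours or sampling patterns in the z-plane give the more general transforms the abstract alludes to; the unit-circle choice is simply the Fourier special case.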
Methods for delineating flood-prone areas in the Great Basin of Nevada and adjacent states
Burkham, D.E.
1988-01-01
The Great Basin is a region of about 210,000 square miles having no surface drainage to the ocean; it includes most of Nevada and parts of Utah, California, Oregon, Idaho, and Wyoming. The area is characterized by many parallel mountain ranges and valleys trending north-south. Stream channels usually are well defined and steep within the mountains, but on reaching the alluvial fan at the canyon mouth, they may diverge into numerous distributary channels, be discontinuous near the apex of the fan, or be deeply entrenched in the alluvial deposits. Larger rivers normally have well-defined channels to or across the valley floors, but all terminate at lakes or playas. Major floods occur in most parts of the Great Basin and result from snowmelt, frontal-storm rainfall, and localized convective rainfall. Snowmelt floods typically occur during April-June. Floods resulting from frontal rain and frontal rain on snow generally occur during November-March. Floods resulting from convective-type rainfall during localized thunderstorms occur most commonly during the summer months. Methods for delineating flood-prone areas are grouped into five general categories: Detailed, historical, analytical, physiographic, and reconnaissance. The detailed and historical methods are comprehensive methods; the analytical and physiographic are intermediate; and the reconnaissance method is only approximate. Other than the reconnaissance method, each method requires determination of a T-year discharge (the peak rate of flow during a flood with long-term average recurrence interval of T years) and T-year profile and the development of a flood-boundary map. The procedure is different, however, for each method. Appraisal of the applicability of each method included consideration of its technical soundness, limitations and uncertainties, ease of use, and costs in time and money. Of the five methods, the detailed method is probably the most accurate, though most expensive. 
It is applicable to hydraulic and topographic conditions found in many parts of the Great Basin. The historical method is also applicable over a wide range of conditions and is less expensive than the detailed method. However, it requires more historical flood data than are usually available, and experience and judgement are needed to obtain meaningful results. The analytical method is also less expensive than the detailed method and can be used over a wide range of conditions in which the T-year discharge can be determined directly. Experience, good judgement, and thorough knowledge of hydraulic principles are required to obtain adequate results, and the method has limited application in other than rigid-channel situations. The physiographic method is applicable to rigid-boundary channels and is less accurate than the detailed method. The reconnaissance method is relatively imprecise, but it may be the most rational method to use on alluvial fans or valley floors with discontinuous channels. In general, a comprehensive method is most suitable for use with rigid-bank streams in urban areas; only an approximate method seems justified in undeveloped areas.
Stock Mechanics: A General Theory and Method of Energy Conservation with Applications on DJIA
NASA Astrophysics Data System (ADS)
Tuncay, Çağlar
A new method, based on an original theory of conservation of the sum of kinetic and potential energy defined for prices, is proposed and applied to the Dow Jones Industrial Average (DJIA). The general trends, averaged over months or years, gave a roughly conserved total energy, with three different potential energies, i.e., positive definite quadratic, negative definite quadratic and linear potential energy for exponential rises (and falls), sinusoidal oscillations and parabolic trajectories, respectively. Corresponding expressions for force (impact) are also given.
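For an exponential trend, the kinetic energy of the price can be balanced by an inverted (negative definite) quadratic potential so that the total energy is constant along the whole trajectory (here identically zero). Sign conventions may differ from the paper's, and the growth rate and price series below are synthetic.

```python
import numpy as np

a, p0 = 0.02, 100.0
t = np.arange(0, 200)
p = p0 * np.exp(a * t)                 # exponential price rise p(t) = p0 e^{at}
v = np.gradient(p, t)                  # price "velocity" (central differences)

kinetic = 0.5 * v ** 2                 # kinetic energy v^2/2
potential = -0.5 * a ** 2 * p ** 2     # negative definite quadratic potential
E = kinetic + potential                # total energy, conserved (~0) up to
                                       # finite-difference error
```

Analytically v = a p, so v²/2 = a²p²/2 and E vanishes exactly; the numerical total energy is nonzero only through the finite-difference approximation of v.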
Application of Two-Dimensional AWE Algorithm in Training Multi-Dimensional Neural Network Model
2003-07-01
[Abstract garbled in extraction; recoverable fragments only: a hybrid scheme is compared with the general neural network method (Table 3.1); the training process of the software "Neuralmodeler" is shown in Fig. 3.2; artificial neural networks (ANNs) have emerged as a powerful technique for modeling in engineering; training a neural network model is the key step; matrix entries are generated with the method of moments (MoM); the variables in the model include frequency.]
Analytical Solution of a Generalized Hirota-Satsuma Equation
NASA Astrophysics Data System (ADS)
Kassem, M.; Mabrouk, S.; Abd-el-Malek, M.
A modified version of the generalized Hirota-Satsuma equation is solved here using a two-parameter group transformation method. This problem in three dimensions was reduced by Estevez [1] to a two-dimensional one through a Lie transformation method and left unsolved. In the present paper, through application of the symmetry transformation, the Lax pair has been reduced to a system of ordinary equations. Three transformation cases are investigated. The obtained analytical solutions are plotted and show a profile proper to deflagration processes, well described by the Degasperis-Procesi equation.
The Inverse of Banded Matrices
2013-01-01
In this paper, generalizing a method of Mallik (1999) [5], we give the LU factorization and the inverse of the banded matrix Br,n (if it exists), with the remaining un-indexed entries all zeros.
Immunoaffinity chromatography: an introduction to applications and recent developments
Moser, Annette C
2010-01-01
Immunoaffinity chromatography (IAC) combines the use of LC with the specific binding of antibodies or related agents. The resulting method can be used in assays for a particular target or for purification and concentration of analytes prior to further examination by another technique. This review discusses the history and principles of IAC and the various formats that can be used with this method. An overview is given of the general properties of antibodies and of antibody-production methods. The supports and immobilization methods used with antibodies in IAC and the selection of application and elution conditions for IAC are also discussed. Several applications of IAC are considered, including its use in purification, immunodepletion, direct sample analysis, chromatographic immunoassays and combined analysis methods. Recent developments include the use of IAC with CE or MS, ultrafast immunoextraction methods and the use of immunoaffinity columns in microanalytical systems. PMID:20640220
NASA Technical Reports Server (NTRS)
Bittker, D. A.; Scullin, V. J.
1972-01-01
A general chemical kinetics program is described for complex, homogeneous ideal-gas reactions in any chemical system. Its main features are flexibility and convenience in treating many different reaction conditions. The program solves numerically the differential equations describing complex reaction in either a static system or one-dimensional inviscid flow. Applications include ignition and combustion, shock wave reactions, and general reactions in a flowing or static system. An implicit numerical solution method is used which works efficiently for the extreme conditions of a very slow or a very fast reaction. The theory is described, and the computer program and users' manual are included.
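The advantage of an implicit method for very fast (stiff) reactions can be seen on the simplest model problem y' = −ky. This standalone backward-Euler sketch is only a conceptual stand-in for the program's implicit solver; the rate constant and step size are invented to make the stiffness obvious.

```python
# Model problem y' = -k*y with a step size far too coarse for explicit Euler.
k, h, steps = 1e4, 1e-2, 100     # stiff rate constant, step size, step count

y_exp = 1.0                      # explicit Euler: y_{n+1} = y_n * (1 - k h)
y_imp = 1.0                      # implicit Euler: y_{n+1} = y_n / (1 + k h)
for _ in range(steps):
    y_exp = y_exp * (1.0 - k * h)    # |1 - k h| >> 1: diverges
    y_imp = y_imp / (1.0 + k * h)    # stays bounded and decays, like the
                                     # true solution e^{-k t}
```

The explicit iterate grows without bound while the implicit one decays toward zero, which is why an implicit scheme can handle both very slow and very fast reactions with reasonable step sizes.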
A general panel sizing computer code and its application to composite structural panels
NASA Technical Reports Server (NTRS)
Anderson, M. S.; Stroud, W. J.
1978-01-01
A computer code for obtaining the dimensions of optimum (least mass) stiffened composite structural panels is described. The procedure, which is based on nonlinear mathematical programming and a rigorous buckling analysis, is applicable to general cross sections under general loading conditions causing buckling. A simplified method of accounting for bow-type imperfections is also included. Design studies in the form of structural efficiency charts for axial compression loading are made with the code for blade and hat stiffened panels. The effects on panel mass of imperfections, material strength limitations, and panel stiffness requirements are also examined. Comparisons with previously published experimental data show that accounting for imperfections improves correlation between theory and experiment.
Code of Federal Regulations, 2010 CFR
2010-01-01
... construction of the project, Agency personnel will work closely and cooperatively with the applicant and their... flexibility to consider reasonable alternatives to the project and development methods to mitigate identified...
1985-01-01
A general introduction to the basic principles of flight test instrumentation engineering, composed from contributions by several specialized authors. [Remainder is table-of-contents residue; recoverable section titles: required measuring accuracy; optical methods of trajectory measurements; kinetheodolites (general principles); trajectory measurements without photographic cameras; trajectory measurements using lasers (general aspects).]
NASA Technical Reports Server (NTRS)
Stretchberry, D. M.; Hein, G. F.
1972-01-01
The general concepts of costing, budgeting, benefit-cost ratio, and cost-effectiveness analysis are discussed. The three common methods of costing are presented. Budgeting distributions are discussed. The use of discounting procedures is outlined. Benefit-cost ratio and cost-effectiveness analysis are defined, and their current application to NASA planning is pointed out. Specific practices and techniques are discussed, and actual costing and budgeting procedures are outlined. The recommended method of calculating benefit-cost ratios is described. A standardized method of cost-effectiveness analysis and long-range planning are also discussed.
Chauvenet, B; Bobin, C; Bouchard, J
2017-12-01
Dead-time correction formulae are established in the general case of superimposed non-homogeneous Poisson processes. Based on the same principles as conventional live-timed counting, this method exploits the additional information made available using digital signal processing systems, and especially the possibility to store the time stamps of live-time intervals. No approximation needs to be made to obtain those formulae. Estimates of the variances of corrected rates are also presented. This method is applied to the activity measurement of short-lived radionuclides. Copyright © 2017 Elsevier Ltd. All rights reserved.
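A minimal sketch of live-timed counting with a non-extending dead time, assuming a homogeneous Poisson process (the paper treats the more general non-homogeneous case); the rate, duration, and dead-time values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(3)
true_rate, T, tau = 1000.0, 100.0, 2e-4   # events/s, duration [s], dead time [s]

# Simulate a homogeneous Poisson process of arrival times on [0, T]
n_arr = rng.poisson(true_rate * T)
arrivals = np.sort(rng.uniform(0.0, T, n_arr))

# Apply a non-extending (non-paralyzable) dead time: an event is recorded
# only if at least tau has elapsed since the last recorded event.
detected = []
last = -np.inf
for t in arrivals:
    if t - last >= tau:
        detected.append(t)
        last = t
n_det = len(detected)

# Live-timed correction: divide counts by the accumulated live time
live_time = T - n_det * tau
corrected_rate = n_det / live_time        # recovers the true rate
```

For a homogeneous process this live-time correction is unbiased; the point of the paper is that, with time stamps of the live-time intervals available from digital systems, exact formulae can also be written for superimposed non-homogeneous processes.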
Purely numerical approach for analyzing flow to a well intercepting a vertical fracture
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narasimhan, T.N.; Palen, W.A.
1979-03-01
A numerical method, based on an Integral Finite Difference approach, is presented to investigate wells intercepting fractures in general and vertical fractures in particular. Such features as finite conductivity, wellbore storage, damage, and fracture deformability and its influence on permeability are easily handled. The advantage of the numerical approach is that it is based on fewer assumptions than analytic solutions and hence has greater generality. Illustrative examples are given to validate the method against known solutions. New results are presented to demonstrate the applicability of the method to problems not apparently considered in the literature so far.
Rainbow holography and its applications
NASA Astrophysics Data System (ADS)
Vlasov, N. G.; Ivanov, Vladimir S.
1993-09-01
The general equations of rainbow holography are deduced. Their analysis makes it possible to offer different methods of producing rainbow holographic images. A new way of using rainbow holograms as optical elements for effective color illumination of transparent, specular, and polished objects is proposed. Application fields include the advertising industry, shop window design, etc.
The Landscape Development Index (LDI) has been shown to correspond well with measures of human impacts on natural systems. The LDI is calculated using the empower density of nonrenewable emergy use on the landscape. One question regarding its general applicability is, “How do the...
The Twenty-First NASTRAN (R) Users' Colloquium
NASA Technical Reports Server (NTRS)
1993-01-01
This publication contains the proceedings of the Twenty-First NASTRAN Users' Colloquium held in Tampa, FL, April 26 through April 30, 1993. It provides some comprehensive general papers on the application of finite elements in engineering, comparisons with other approaches, unique applications, pre- and postprocessing with other auxiliary programs, and new methods of analysis with NASTRAN.
Modeling of space environment impact on nanostructured materials. General principles
NASA Astrophysics Data System (ADS)
Voronina, Ekaterina; Novikov, Lev
2016-07-01
In accordance with the resolution of the ISO TC20/SC14 WG4/WG6 joint meeting, a Technical Specification (TS), 'Modeling of space environment impact on nanostructured materials. General principles', which describes computer simulation methods of space environment impact on nanostructured materials, is being prepared. Nanomaterials surpass traditional materials for space applications in many aspects due to their unique properties associated with the nanoscale size of their constituents. This superiority in mechanical, thermal, electrical and optical properties will evidently inspire a wide range of applications in the next generation of spacecraft intended for long-term (~15-20 years) operation in near-Earth orbits and in automatic and manned interplanetary missions. Currently, ISO activity on developing standards concerning different issues of nanomaterials manufacturing and applications is considerable. Most such standards are related to production and characterization of nanostructures; however, there are no ISO documents concerning nanomaterials behavior in different environmental conditions, including the space environment. The given TS deals with the peculiarities of the space environment impact on nanostructured materials (i.e. materials with structured objects whose size in at least one dimension lies within 1-100 nm). The basic purpose of the document is a general description of the methodology of applying computer simulation methods, which relate to different space and time scales, to modeling processes occurring in nanostructured materials under space environment impact. This document will emphasize the necessity of applying a multiscale simulation approach and present recommendations for the choice of the most appropriate methods (or a group of methods) for computer modeling of various processes that can occur in nanostructured materials under the influence of different space environment components.
In addition, TS includes the description of possible approximations and limitations of proposed simulation methods as well as of widely used software codes. This TS may be used as a base for developing a new standard devoted to nanomaterials applications for spacecraft.
A new method for calculating differential distributions directly in Mellin space
NASA Astrophysics Data System (ADS)
Mitov, Alexander
2006-12-01
We present a new method for the calculation of differential distributions directly in Mellin space without recourse to the usual momentum-fraction (or z-) space. The method is completely general and can be applied to any process. It is based on solving the integration-by-parts identities when one of the powers of the propagators is an abstract number. The method retains the full dependence on the Mellin variable and can be implemented in any program for solving the IBP identities based on algebraic elimination, like Laporta. General features of the method are: (1) faster reduction, (2) smaller number of master integrals compared to the usual z-space approach and (3) the master integrals satisfy difference instead of differential equations. This approach generalizes previous results related to fully inclusive observables like the recently calculated three-loop space-like anomalous dimensions and coefficient functions in inclusive DIS to more general processes requiring separate treatment of the various physical cuts. Many possible applications of this method exist, the most notable being the direct evaluation of the three-loop time-like splitting functions in QCD.
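The Mellin moments in question are M[f](N) = ∫₀¹ z^(N−1) f(z) dz. As a sanity check on the notation only (not the authors' integration-by-parts machinery), the toy sketch below evaluates such a moment numerically for a function with known moments.

```python
import numpy as np

def mellin_moment(f, N, n_grid=200_001):
    """Numerical Mellin moment M[f](N) = integral_0^1 z^(N-1) f(z) dz,
    computed with the trapezoid rule on a fine uniform grid."""
    z = np.linspace(0.0, 1.0, n_grid)
    g = z ** (N - 1) * f(z)
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(z)))

# Toy check: f(z) = z**a has Mellin moments 1/(N + a)
m = mellin_moment(lambda z: z ** 1.0, N=3)   # exact value is 1/(3 + 1) = 1/4
```

Working directly in the Mellin variable N, as the paper does, means such z-space integrals never have to be formed at all; the check above only illustrates what the moments are.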
Magnesium-based biodegradable alloys: Degradation, application, and alloying elements
Pogorielov, Maksym; Husak, Eugenia; Solodivnik, Alexandr; Zhdanov, Sergii
2017-01-01
In recent years, the paradigm of metals with improved corrosion resistance for applications in surgery and orthopedics was broken. A new class of biodegradable metals has emerged as an alternative for biomedical implants. These metals corrode gradually with an appropriate host response and release of corrosion products. It is essential to use metals that are metabolized by the host organism without local or systemic toxic effects. Magnesium serves this aim best; it plays an essential role in body metabolism and should be completely excreted within a few days after degradation. This review summarizes data from the discovery of Mg and its first experimental and clinical applications through the modern concept of Mg alloy development. We focus on biodegradable metal applications in general surgery and orthopedic practice and show the advantages and disadvantages Mg alloys offer. We also focus on methods for in vitro and in vivo investigation of degradable Mg alloys and the correlation between these methods. Based on the observed data, a better approach for pre-clinical investigation of new alloys is suggested. This review analyzes possible alloying elements that improve the corrosion rate and mechanical properties and elicit an appropriate host response. PMID:28932493
Statistical energy analysis computer program, user's guide
NASA Technical Reports Server (NTRS)
Trudell, R. W.; Yano, L. I.
1981-01-01
A high-frequency random vibration analysis technique, the statistical energy analysis (SEA) method, is examined. The SEA method accomplishes high-frequency prediction for arbitrary structural configurations. A general SEA computer program is described. A summary of SEA theory, example problems of SEA program application, and a complete program listing are presented.
On the measurement of stationary electric fields in air
NASA Technical Reports Server (NTRS)
Kirkham, H.
2002-01-01
Applications and measurement methods for field measurements are reviewed. Recent developments using optical technology are examined. The various methods are compared. It is concluded that the best general purpose instrument is the isolated cylindrical field mill, but MEMS technology could furnish better instruments in the future.
Selection of species and sampling areas: The importance of inference
Paul Stephen Corn
2009-01-01
Inductive inference, the process of drawing general conclusions from specific observations, is fundamental to the scientific method. Platt (1964) termed conclusions obtained through rigorous application of the scientific method as "strong inference" and noted the following basic steps: generating alternative hypotheses; devising experiments, the...
Machine learning applications in genetics and genomics.
Libbrecht, Maxwell W; Noble, William Stafford
2015-06-01
The field of machine learning, which aims to develop computer algorithms that improve with experience, holds promise to enable computers to assist humans in the analysis of large, complex data sets. Here, we provide an overview of machine learning applications for the analysis of genome sequencing data sets, including the annotation of sequence elements and epigenetic, proteomic or metabolomic data. We present considerations and recurrent challenges in the application of supervised, semi-supervised and unsupervised machine learning methods, as well as of generative and discriminative modelling approaches. We provide general guidelines to assist in the selection of these machine learning methods and their practical application for the analysis of genetic and genomic data sets.
Systematic Expansion of Active Spaces beyond the CASSCF Limit: A GASSCF/SplitGAS Benchmark Study.
Vogiatzis, Konstantinos D; Li Manni, Giovanni; Stoneburner, Samuel J; Ma, Dongxia; Gagliardi, Laura
2015-07-14
The applicability and accuracy of the generalized active space self-consistent field (GASSCF) and SplitGAS methods are presented. The GASSCF method enables the exploration of larger active spaces than the conventional complete active space SCF (CASSCF) by fragmentation of a large space into subspaces and by controlling the interspace excitations. In the SplitGAS method, the GAS configuration interaction (CI) expansion is further partitioned into two parts: a principal part, which includes the most important configuration state functions, and an extended part, containing less relevant but not negligible ones. An effective Hamiltonian is then generated, with the extended part acting as a perturbation to the principal space. Excitation energies of ozone, furan, pyrrole, nickel dioxide, and the copper tetrachloride dianion are reported. Various partitioning schemes of the GASSCF and SplitGAS CI expansions are considered and compared with complete active space second-order perturbation theory (CASPT2), the multireference CI method (MRCI), or available experimental data. General guidelines for the optimum applicability of these methods are discussed together with their current limitations.
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure of integrating dynamic system equations when using a digital computer in real-time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives significant improvement in accuracy over the classical second-order integration methods.
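The closed-form step that local linearization yields for the quaternion rate equations can be sketched as follows. This is a minimal illustration assuming a constant body rate over each step; the variable and function names are ours, not the report's. Because the update is the exact matrix exponential of the (frozen) linear system, the quaternion norm is preserved at every step, which is the stability property the abstract highlights.

```python
import numpy as np

def omega_matrix(w):
    """4x4 skew-symmetric matrix Omega(w) such that qdot = 0.5 * Omega @ q
    for a quaternion q = [scalar, x, y, z]."""
    wx, wy, wz = w
    return np.array([
        [0.0, -wx, -wy, -wz],
        [wx,  0.0,  wz, -wy],
        [wy, -wz,  0.0,  wx],
        [wz,  wy, -wx,  0.0],
    ])

def step_local_linearization(q, w, h):
    """Advance q by h with w held constant over the step: the linear system
    qdot = 0.5*Omega*q has the closed-form solution q(t+h) = exp(0.5*Omega*h) q(t).
    Since Omega^2 = -|w|^2 I, the exponential reduces to cos/sin terms."""
    n = np.linalg.norm(w)
    if n < 1e-12:
        return q
    a = 0.5 * n * h
    return (np.cos(a) * np.eye(4) + (np.sin(a) / n) * omega_matrix(w)) @ q

# Usage: spin at 1 rad/s about the body z-axis for pi seconds (a half turn).
q = np.array([1.0, 0.0, 0.0, 0.0])   # identity attitude
w = np.array([0.0, 0.0, 1.0])        # rad/s
h = 0.01
for _ in range(int(round(np.pi / h))):
    q = step_local_linearization(q, w, h)
# q is now close to [0, 0, 0, 1], the half-turn quaternion, with |q| = 1.
```

A classical second-order integrator applied to the same equations slowly drifts off the unit sphere at high rates, which is why the report compares against this exact-exponential step.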
The Baldwin-Lomax model for separated and wake flows using the entropy envelope concept
NASA Technical Reports Server (NTRS)
Brock, J. S.; Ng, W. F.
1992-01-01
Implementation of the Baldwin-Lomax algebraic turbulence model is difficult and ambiguous within flows characterized by strong viscous-inviscid interactions and flow separations. A new method of implementation is proposed which uses an entropy envelope concept and is demonstrated to ensure the proper evaluation of modeling parameters. The method is simple, computationally fast, and applicable to both wake and boundary layer flows. The method is general, making it applicable to any turbulence model which requires the automated determination of the proper maxima of a vorticity-based function. The new method is evaluated within two test cases involving strong viscous-inviscid interaction.
Determination of the spin and recovery characteristics of a typical low-wing general aviation design
NASA Technical Reports Server (NTRS)
Tischler, M. B.; Barlow, J. B.
1980-01-01
The equilibrium spin technique implemented in a graphical form for obtaining spin and recovery characteristics from rotary balance data is outlined. Results of its application to recent rotary balance tests of the NASA Low-Wing General Aviation Aircraft are discussed. The present results, which are an extension of previously published findings, indicate the ability of the equilibrium method to accurately evaluate spin modes and recovery control effectiveness. A comparison of the calculated results with available spin tunnel and full scale findings is presented. The technique is suitable for preliminary design applications as determined from the available results and data base requirements. A full discussion of implementation considerations and a summary of the results obtained from this method to date are presented.
Controllers, observers, and applications thereof
NASA Technical Reports Server (NTRS)
Gao, Zhiqiang (Inventor); Zhou, Wankun (Inventor); Miklosovic, Robert (Inventor); Radke, Aaron (Inventor); Zheng, Qing (Inventor)
2011-01-01
Controller scaling and parameterization are described. Techniques that can be improved by employing the scaling and parameterization include, but are not limited to, controller design, tuning and optimization. The scaling and parameterization methods described here apply to transfer function based controllers, including PID controllers. The parameterization methods also apply to state feedback and state observer based controllers, as well as linear active disturbance rejection (ADRC) controllers. Parameterization simplifies the use of ADRC. A discrete extended state observer (DESO) and a generalized extended state observer (GESO) are described. They improve the performance of the ESO and therefore ADRC. A tracking control algorithm is also described that improves the performance of the ADRC controller. A general algorithm is described for applying ADRC to multi-input multi-output systems. Several specific applications of the control systems and processes are disclosed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Kai; Fu, Shubin; Gibson, Richard L.
2015-04-14
It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as the finite-difference method and the finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine-scale medium property variations, and allows us to greatly reduce the degrees of freedom that are required to implement the modeling compared with the conventional finite-element method for the wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulations of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.
Computer Graphics-aided systems analysis: application to well completion design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Detamore, J.E.; Sarma, M.P.
1985-03-01
The development of an engineering tool (in the form of a computer model) for solving design and analysis problems related to oil and gas well production operations is discussed. The development of the method is based on integrating the concepts of "Systems Analysis" with the techniques of "Computer Graphics". The concepts behind the method are very general in nature. This paper, however, illustrates the application of the method to solving gas well completion design problems. Use of the method will save time and improve the efficiency of such design and analysis work. The method can be extended to other design and analysis aspects of oil and gas wells.
[The potential of general magnetic therapy for the treatment and rehabilitation (a review)].
Kulikov, A G; Voronina, D D
2016-01-01
This paper was designed to describe the main characteristics of general magnetic therapy and the mechanisms underlying its biological and therapeutic action. Special attention is given to the extensive application of this method in routine clinical practice. The publications in the current scientific literature are reviewed in order to evaluate the potential of general magnetic therapy as a component of the combined treatment of various somatic pathologies and the rehabilitation of patients after surgical intervention, with special reference to the management of patients presenting with oncological problems. The available data suggest good tolerability and high therapeutic effectiveness of this physiotherapeutic method.
Analytical closed-form solutions to the elastic fields of solids with dislocations and surface stress
NASA Astrophysics Data System (ADS)
Ye, Wei; Paliwal, Bhasker; Ougazzaden, Abdallah; Cherkaoui, Mohammed
2013-07-01
The concept of eigenstrain is adopted to derive a general analytical framework for solving the elastic field of 3D anisotropic solids with general defects while accounting for surface stress. The formulation shows that the elastic constants and geometrical features of the surface play an important role in determining the elastic fields of the solid. As an application, analytical closed-form solutions for the stress fields of an infinite isotropic circular nanowire are obtained. The stress fields are compared with the classical solutions and with those of the complex variable method. The stress fields from this work demonstrate the impact of the surface stress as the size of the nanowire shrinks, an impact that becomes negligible at the macroscopic scale. Compared with the power-series solutions of the complex variable method, the analytical solutions in this work provide a better platform and are more flexible in various applications. More importantly, the proposed analytical framework substantially advances the study of general 3D anisotropic materials with surface effects.
A General Accelerated Degradation Model Based on the Wiener Process.
Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning
2016-12-06
Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
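The kind of degradation path the model describes can be made concrete with a minimal simulation of a Wiener process on a nonlinear time scale. This is a generic sketch under assumed parameter values and a power-law transformed time scale, not the paper's estimator; all names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_wiener_degradation(mu, sigma, b, t, n_units, rng):
    """Simulate degradation paths D(t) = mu*L(t) + sigma*B(L(t)) with a
    nonlinear time scale L(t) = t**b (b != 1 gives a nonlinear mean path,
    the situation a linear Wiener model cannot capture).
    Returns an (n_units, len(t)) array of sample paths."""
    lam = t ** b                       # transformed (operational) time
    dlam = np.diff(lam, prepend=0.0)   # independent-increment weights
    incs = mu * dlam + sigma * rng.normal(size=(n_units, len(t))) * np.sqrt(dlam)
    return np.cumsum(incs, axis=1)

t = np.linspace(0.01, 10.0, 200)
paths = simulate_wiener_degradation(mu=2.0, sigma=0.5, b=1.5,
                                    t=t, n_units=5000, rng=rng)
# By construction, D(10) is normal with mean mu*10**b and variance sigma**2 * 10**b,
# so the sample mean at the final time concentrates near 2.0 * 10**1.5 (about 63.2).
```

Fitting such a model to step-stress or constant-stress ADT data then amounts to maximum-likelihood estimation over these independent normal increments, which is the statistical-inference step the abstract refers to.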
NASA Technical Reports Server (NTRS)
Balasubramanian, R.; Norrie, D. H.; De Vries, G.
1979-01-01
Abel's integral equation is the governing equation for certain problems in physics and engineering, such as radiation from distributed sources. The finite element method for the solution of this non-linear equation is presented for problems with cylindrical symmetry, and the extension to more general integral equations is indicated. The technique was applied to an axisymmetric glow discharge problem, and the results show excellent agreement with previously obtained solutions.
Research on numerical algorithms for large space structures
NASA Technical Reports Server (NTRS)
Denman, E. D.
1981-01-01
Numerical algorithms for the analysis and design of large space structures are investigated. The sign algorithm and its application to the decoupling of differential equations are presented. The generalized sign algorithm is given and its application to several problems is discussed. The Laplace transforms of matrix functions and the diagonalization procedure for a finite element equation are discussed. The diagonalization of matrix polynomials is considered. The quadrature method with Laplace transforms is discussed, and the identification of linear systems by the quadrature method is investigated.
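The matrix sign function at the heart of the sign algorithm is commonly computed with the Newton iteration S_{k+1} = (S_k + S_k^{-1})/2, which converges quadratically when the matrix has no purely imaginary eigenvalues. A minimal sketch (our own illustration, not the report's code):

```python
import numpy as np

def matrix_sign(A, tol=1e-12, max_iter=100):
    """Newton iteration for the matrix sign function:
    S_{k+1} = (S_k + S_k^{-1}) / 2, starting from S_0 = A.
    The limit S satisfies S^2 = I and shares A's invariant subspaces,
    which is what makes it useful for decoupling differential equations."""
    S = np.array(A, dtype=float)
    for _ in range(max_iter):
        S_next = 0.5 * (S + np.linalg.inv(S))
        if np.linalg.norm(S_next - S, ord='fro') < tol:
            return S_next
        S = S_next
    return S

# Usage: eigenvalues 4 and -3 map to +1 and -1 respectively.
A = np.array([[4.0, 1.0],
              [0.0, -3.0]])
S = matrix_sign(A)
# S @ S is the identity; the projectors (I +/- S)/2 split the state space
# into the stable and unstable invariant subspaces of A.
```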
On unified modeling, theory, and method for solving multi-scale global optimization problems
NASA Astrophysics Data System (ADS)
Gao, David Yang
2016-10-01
A unified model is proposed for general optimization problems in multi-scale complex systems. Based on this model and necessary assumptions in physics, the canonical duality theory is presented in a precise way to include traditional duality theories and popular methods as special applications. Two conjectures on NP-hardness are proposed, which should play important roles for correctly understanding and efficiently solving challenging real-world problems. Applications are illustrated for both nonconvex continuous optimization and mixed integer nonlinear programming.
Newton–Hooke-type symmetry of anisotropic oscillators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, P.M., E-mail: zhpm@impcas.ac.cn; Horvathy, P.A., E-mail: horvathy@lmpt.univ-tours.fr; Laboratoire de Mathématiques et de Physique Théorique, Université de Tours
2013-06-15
Rotation-less Newton–Hooke-type symmetry, found recently in the Hill problem and instrumental in explaining the center-of-mass decomposition, is generalized to an arbitrary anisotropic oscillator in the plane. Conversely, the latter system is shown, by the orbit method, to be the most general one with such a symmetry. Full Newton–Hooke symmetry is recovered in the isotropic case. Star escape from a galaxy is studied as an application. Highlights: ► Rotation-less Newton–Hooke (NH) symmetry is generalized to an arbitrary anisotropic oscillator. ► The orbit method is used to find the most general case for rotation-less NH symmetry. ► The NH symmetry is decomposed into Heisenberg algebras based on chiral decomposition.
NASA Technical Reports Server (NTRS)
Keiter, I. D.
1982-01-01
Studies of several General Aviation aircraft indicated that the application of advanced technologies to General Aviation propellers can reduce fuel consumption in future aircraft by a significant amount. Propeller blade weight reductions achieved through the use of composites, propeller efficiency and noise improvements achieved through the use of advanced concepts, and improved propeller analytical design methods result in aircraft with lower operating cost, acquisition cost, and gross weight.
Predicting diameters inside bark for 10 important hardwood species
Donald E. Hilt; Everette D. Rast; Herman J. Bailey
1983-01-01
General models for predicting DIB/DOB ratios up the stem, applicable over wide geographic areas, have been developed for 10 important hardwood species. Results indicate that the ratios either decrease or remain constant up the stem. Methods for adjusting the general models to local conditions are presented. The prediction models can be used in conjunction with optical...
Generalized pseudopotential approach for electron-atom scattering.
NASA Technical Reports Server (NTRS)
Zarlingo, D. G.; Ishihara, T.; Poe, R. T.
1972-01-01
A generalized many-electron pseudopotential approach is presented for electron-neutral-atom scattering problems. A calculation based on this formulation is carried out for the singlet s-wave and p-wave electron-hydrogen phase shifts with excellent results. We compare the method with other approaches as well as discuss its applications for inelastic and rearrangement collision problems.
Theory of the Trojan-Horse Method - From the Original Idea to Actual Applications
NASA Astrophysics Data System (ADS)
Typel, Stefan
2018-01-01
The origin and the main features of the Trojan-horse (TH) method are delineated, starting with the original idea of Gerhard Baur. Basic theoretical considerations, general experimental conditions, and possible problems are discussed. Significant steps in experimental studies towards the implementation of the TH method and the development of its theoretical description are presented. This led to the successful application of the TH approach by Claudio Spitaleri and his group to determine low-energy cross sections that are relevant for astrophysics. An outlook on possible future developments is given.
Tsiatis, Anastasios A.; Davidian, Marie; Cao, Weihua
2010-01-01
Summary A routine challenge is that of making inference on parameters in a statistical model of interest from longitudinal data subject to drop out, which are a special case of the more general setting of monotonely coarsened data. Considerable recent attention has focused on doubly robust estimators, which in this context involve positing models for both the missingness (more generally, coarsening) mechanism and aspects of the distribution of the full data, that have the appealing property of yielding consistent inferences if only one of these models is correctly specified. Doubly robust estimators have been criticized for potentially disastrous performance when both of these models are even only mildly misspecified. We propose a doubly robust estimator applicable in general monotone coarsening problems that achieves comparable or improved performance relative to existing doubly robust methods, which we demonstrate via simulation studies and by application to data from an AIDS clinical trial. PMID:20731640
A General Approach for Fluid Patterning and Application in Fabricating Microdevices.
Huang, Zhandong; Yang, Qiang; Su, Meng; Li, Zheng; Hu, Xiaotian; Li, Yifan; Pan, Qi; Ren, Wanjie; Li, Fengyu; Song, Yanlin
2018-06-19
Engineering the fluid interface such as the gas-liquid interface is of great significance for solvent processing applications including functional material assembly, inkjet printing, and high-performance device fabrication. However, precisely controlling the fluid interface remains a great challenge owing to its flexibility and fluidity. Here, a general method to manipulate the fluid interface for fluid patterning using micropillars in the microchannel is reported. The principle of fluid patterning for immiscible fluid pairs including air, water, and oils is proposed. This understanding enables the preparation of programmable multiphase fluid patterns and assembly of multilayer functional materials to fabricate micro-optoelectronic devices. This general strategy of fluid patterning provides a promising platform to study the fundamental processes occurring on the fluid interface, and benefits applications in many subjects, such as microfluidics, microbiology, chemical analysis and detection, material synthesis and assembly, device fabrication, etc.
26 CFR 1.174-4 - Treatment as deferred expenses.
Code of Federal Regulations, 2010 CFR
2010-04-01
... period. Application for permission to change to a different method of treating research or experimental... as deferred expenses. (a) In general. (1) If a taxpayer has not adopted the method provided in section 174(a) of treating research or experimental expenditures paid or incurred by him in connection...
Scalable Kernel Methods and Algorithms for General Sequence Analysis
ERIC Educational Resources Information Center
Kuksa, Pavel
2011-01-01
Analysis of large-scale sequential data has become an important task in machine learning and pattern recognition, inspired in part by numerous scientific and technological applications such as the document and text classification or the analysis of biological sequences. However, current computational methods for sequence comparison still lack…
Estimating optical imaging system performance for space applications
NASA Technical Reports Server (NTRS)
Sinclair, K. F.
1972-01-01
The critical system elements of an optical imaging system are identified and a method for an initial assessment of system performance is presented. A generalized imaging system is defined. A system analysis is considered, followed by a component analysis. An example of the method is given using a film imaging system.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-01
... Vanguard Study and will discuss general data collection methods and retention strategy and methods. Place... the name, address, telephone number and when applicable, the business or professional affiliation of... visit. (Catalogue of Federal Domestic Assistance Program Nos. 93.864, Population Research; 93.865...
Receptor Surface Models in the Classroom: Introducing Molecular Modeling to Students in a 3-D World
ERIC Educational Resources Information Center
Geldenhuys, Werner J.; Hayes, Michael; Van der Schyf, Cornelis J.; Allen, David D.; Malan, Sarel F.
2007-01-01
A simple, novel and generally applicable method to demonstrate structure-activity associations of a group of biologically interesting compounds in relation to receptor binding is described. This method is useful for undergraduates and graduate students in medicinal chemistry and computer modeling programs.
Segmentation of medical images using explicit anatomical knowledge
NASA Astrophysics Data System (ADS)
Wilson, Laurie S.; Brown, Stephen; Brown, Matthew S.; Young, Jeanne; Li, Rongxin; Luo, Suhuai; Brandt, Lee
1999-07-01
Knowledge-based image segmentation is defined in terms of the separation of image analysis procedures and the representation of knowledge. Such an architecture is particularly suitable for medical image segmentation because of the large amount of structured domain knowledge. A general methodology for the application of knowledge-based methods to medical image segmentation is described. This includes frames for knowledge representation, fuzzy logic for anatomical variations, and a strategy for determining the order of segmentation from the modal specification. The method has been applied to three separate problems: 3D thoracic CT, chest X-rays, and CT angiography. The application of the same methodology to such a range of applications suggests a major role in medical imaging for segmentation methods incorporating the representation of anatomical knowledge.
NASA Technical Reports Server (NTRS)
Eggleston, John M; Mathews, Charles W
1954-01-01
In the process of analyzing the longitudinal frequency-response characteristics of aircraft, information on some of the methods of analysis has been obtained by the Langley Aeronautical Laboratory of the National Advisory Committee for Aeronautics. In the investigation of these methods, the practical applications and limitations were stressed. In general, the methods considered may be classed as: (1) analysis of sinusoidal response, (2) analysis of transient response as to harmonic content through determination of the Fourier integral by manual or machine methods, and (3) analysis of the transient through the use of least-squares solutions of the coefficients of an assumed equation for either the transient time response or frequency response (sometimes referred to as curve-fitting methods).
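Method (2) above, recovering a frequency response from a transient via a Fourier integral, can be sketched with a toy first-order system. This is our own illustrative example, not drawn from the report; the aircraft responses analyzed there were of course more complex.

```python
import numpy as np

# Transient (impulse) response of a first-order lag G(s) = 1/(s + a):
# h(t) = exp(-a*t). Its Fourier integral H(jw) = integral h(t) e^{-jwt} dt
# recovers the frequency response, which for this system is 1/(a + jw).
a = 2.0
dt = 0.001
t = np.arange(0.0, 20.0, dt)
h = np.exp(-a * t)                     # recorded transient

def freq_response(h, t, dt, w):
    """Approximate H(jw) by the trapezoid rule over the recorded transient."""
    integrand = h * np.exp(-1j * w * t)
    return (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dt

w = 3.0
H = freq_response(h, t, dt, w)
H_exact = 1.0 / (a + 1j * w)
# H agrees with H_exact to well within 1e-3, since the transient has
# decayed to negligible amplitude by the end of the record.
```

The same integral evaluated over a grid of frequencies yields the full gain and phase curves that the sinusoidal-response method (1) would measure point by point.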
Vozeh, S; Steimer, J L
1985-01-01
The concept of feedback control methods for drug dosage optimisation is described from the viewpoint of control theory. The control system consists of 5 parts: (a) the patient (the controlled process); (b) the response (the measured feedback); (c) the model (the mathematical description of the process); (d) the adaptor (to update the parameters); and (e) the controller (to determine the optimum dosing strategy). In addition to the conventional distinction between open-loop and closed-loop control systems, a classification is proposed for dosage optimisation techniques which distinguishes between tight-loop and loose-loop methods, depending on whether the physician's interaction is absent or included as part of the control step. Unlike engineering problems, where the process can usually be controlled by fully automated devices, therapeutic situations often require that the physician be included in the decision-making process to determine the 'optimal' dosing strategy. Tight-loop and loose-loop methods can be further divided into adaptive and non-adaptive, depending on the presence of the adaptor. The main application areas of tight-loop feedback control methods are general anaesthesia, control of blood pressure, and insulin delivery devices. Loose-loop feedback methods have been used for oral anticoagulation and in therapeutic drug monitoring. The methodology, advantages, and limitations of the different approaches are reviewed. A general feature common to all application areas could be observed: to perform well under routine clinical conditions, which are characterised by large interpatient variability and sometimes also intrapatient changes, control systems should be adaptive. Apart from application in routine drug treatment, feedback control methods represent an important research tool. They can be applied to the investigation of pathophysiological and pharmacodynamic processes. A most promising application is the evaluation of the relationship between an intermediate response (e.g. drug level), which is often used as feedback for dosage adjustment, and the final therapeutic goal.
Water resources by orbital remote sensing: Examples of applications
NASA Technical Reports Server (NTRS)
Martini, P. R. (Principal Investigator)
1984-01-01
Selected applications of orbital remote sensing to water resources undertaken by INPE are described. General specifications of Earth application satellites and technical characteristics of the LANDSAT 1, 2, 3, and 4 subsystems are given. Spatial, temporal, and spectral image attributes of water, as well as methods of image analysis for applications to water resources, are discussed. Selected examples concern flood monitoring, analysis of suspended sediments in water, spatial distribution of pollutants, inventory of surface water bodies, and mapping of alluvial aquifers.
Exact test-based approach for equivalence test with parameter margin.
Cassie Dong, Xiaoyu; Bian, Yuanyuan; Tsong, Yi; Wang, Tianhua
2017-01-01
The equivalence test has a wide range of applications in pharmaceutical statistics, where we need to test for the similarity between two groups. In recent years, the equivalence test has been used to assess the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin of the form ±f × σ_R, a function of the reference variability. In practice, this margin is unknown and is estimated from the sample as ±f × S_R. If we use this estimated margin with the classic t-test statistic for the equivalence test of the means, both the Type I and Type II error rates may be inflated. To resolve this issue, we develop an exact test-based method and compare it with other proposed methods, such as the Wald test, the constrained Wald test, and the Generalized Pivotal Quantity (GPQ), in terms of Type I error rate and power. Application of these methods to data analysis is also provided in this paper. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
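The margin-estimation pitfall described above can be made concrete with a small sketch. The function below is a naive two-one-sided-tests (TOST) check that plugs the estimated margin ±f × S_R into ordinary t statistics; the function name, the default f = 1.5, and the sample data are illustrative assumptions. This is precisely the classic approach whose error-rate inflation motivates the exact test, not the paper's exact method.

```python
from statistics import mean, stdev

def tost_equivalence(test, ref, f=1.5, t_crit=1.860):
    """Naive TOST for mean equivalence against the estimated margin
    +/- f * S_R.  Plugging an estimated margin into classic t statistics
    is exactly the practice whose Type I/II error inflation motivates
    the exact test.  t_crit is the one-sided critical value for
    df = n_t + n_r - 2 (1.860 corresponds to alpha = 0.05, df = 8)."""
    n_t, n_r = len(test), len(ref)
    s_r = stdev(ref)                   # reference-product standard deviation
    margin = f * s_r                   # estimated equivalence margin
    diff = mean(test) - mean(ref)
    se = (stdev(test) ** 2 / n_t + s_r ** 2 / n_r) ** 0.5
    t_lower = (diff + margin) / se     # tests H0: diff <= -margin
    t_upper = (margin - diff) / se     # tests H0: diff >= +margin
    return t_lower, t_upper, (t_lower > t_crit and t_upper > t_crit)
```

Equivalence is concluded only when both one-sided statistics exceed the critical value; the exact-based, Wald, and GPQ methods compared in the paper all modify how this margin uncertainty is handled.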
Military applications and examples of near-surface seismic surface wave methods (Invited)
NASA Astrophysics Data System (ADS)
Sloan, S.; Stevens, R.
2013-12-01
Although not always widely known or publicized, the military uses a variety of geophysical methods for a wide range of applications--some already common practice in industry, others truly novel. Applications include unexploded ordnance detection, general site characterization, anomaly detection, countering improvised explosive devices (IEDs), and security monitoring, to name a few. Techniques used may include, but are not limited to, ground penetrating radar, seismic, electrical, gravity, and electromagnetic methods. Seismic methods employed include surface wave analysis, refraction tomography, and high-resolution reflection methods. Although the military employs geophysical methods, that does not necessarily mean those methods enable or support combat operations--often they are used for humanitarian applications within the military's area of operations to support local populations. The work presented here focuses on the applied use of seismic surface wave methods, including multichannel analysis of surface waves (MASW) and backscattered surface waves, often in conjunction with other methods such as refraction tomography or body-wave diffraction analysis. Multiple field examples are shown, including explosives testing, tunnel detection, pre-construction site characterization, and cavity detection.
NASA Astrophysics Data System (ADS)
Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto
2016-04-01
Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches: non-parametric methods intended to locate the changing point between extreme and non-extreme regions of the data; graphical methods, where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u; and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u above which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database with more than 110 years of data. We find that non-parametric methods intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e. on the order of 0.1-0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution.
For daily rainfall, we find that GP threshold estimates range between 2 and 12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results. Acknowledgments: The research project was implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and co-financed by the European Social Fund (ESF) and the Greek State. The work conducted by Roberto Deidda was funded under Sardinian Regional Law 7/2007 (funding call 2013).
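The peaks-over-threshold workflow underlying the threshold-selection comparison can be sketched in a few lines. The snippet below extracts excesses above a trial threshold u and fits a GP distribution by the method of moments; this is only a quick diagnostic baseline (function names and the exponential test data are illustrative assumptions), not one of the estimation methods compared in the study.

```python
import random

def excesses_above(data, u):
    """Peaks-over-threshold: excesses x - u of observations above threshold u."""
    return [x - u for x in data if x > u]

def gp_moment_fit(excesses):
    """Method-of-moments fit of a generalized Pareto distribution:
    with m = sample mean and s2 = sample variance of the excesses,
    shape xi = (1 - m**2/s2) / 2 and scale sigma = m * (m**2/s2 + 1) / 2."""
    n = len(excesses)
    m = sum(excesses) / n
    s2 = sum((x - m) ** 2 for x in excesses) / (n - 1)
    r = m * m / s2
    return 0.5 * (1.0 - r), 0.5 * m * (r + 1.0)
```

As a sanity check, excesses of exponential data are again exponential (a GP with shape xi = 0), so the fitted shape should be near zero regardless of the threshold, consistent with the low shape values (order 0.1-0.2) the study reports for rainfall.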
Wu, Z J; Xu, B; Jiang, H; Zheng, M; Zhang, M; Zhao, W J; Cheng, J
2016-08-20
Objective: To investigate the application of the United States Environmental Protection Agency (EPA) inhalation risk assessment model, the Singapore semi-quantitative risk assessment model, and the occupational hazards risk assessment index method to occupational health risk in enterprises using dimethylformamide (DMF) in a certain area of Jiangsu, China, and to put forward related risk control measures. Methods: The industries involving DMF exposure in Jiangsu province were chosen as the evaluation objects in 2013, and three risk assessment models were used in the evaluation. EPA inhalation risk assessment model: HQ = EC/RfC; Singapore semi-quantitative risk assessment model: Risk = (HR × ER)^(1/2); occupational hazards risk assessment index = 2^(health effect level) × 2^(exposure ratio) × operation condition level. Results: The hazard quotients (HQ > 1) from the EPA inhalation risk assessment model suggested that all the workshops (dry method, wet method, and printing) and work positions (pasting, burdening, unreeling, rolling, assisting) were high risk. The Singapore semi-quantitative risk assessment model indicated that the workshop risk levels of the dry method, wet method, and printing were 3.5 (high), 3.5 (high), and 2.8 (general), and the position risk levels of pasting, burdening, unreeling, rolling, and assisting were 4 (high), 4 (high), 2.8 (general), 2.8 (general), and 2.8 (general). The occupational hazards risk assessment index method gave position risk indices for pasting, burdening, unreeling, rolling, and assisting of 42 (high), 33 (high), 23 (middle), 21 (middle), and 22 (middle). The results of the Singapore semi-quantitative model and the occupational hazards risk assessment index method were similar, while the EPA inhalation model indicated that all workshops and positions were high risk.
Conclusion: The occupational hazards risk assessment index method fully considers health effects, exposure, and operating conditions and can comprehensively and accurately evaluate occupational health risk caused by DMF.
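The three scoring rules quoted in the Methods can be written out directly. The function and parameter names below are assumptions read off the abstract's notation, not an official implementation of any of the three models.

```python
def epa_hq(ec, rfc):
    """EPA inhalation model: hazard quotient HQ = EC / RfC; HQ > 1 flags high risk."""
    return ec / rfc

def singapore_risk(hr, er):
    """Singapore semi-quantitative model: Risk = (HR * ER) ** (1/2)."""
    return (hr * er) ** 0.5

def hazard_index(health_effect_level, exposure_ratio, operation_level):
    """Occupational hazards risk assessment index:
    2**(health effect level) * 2**(exposure ratio) * operation condition level."""
    return 2 ** health_effect_level * 2 ** exposure_ratio * operation_level
```

For example, a workshop rated HR = 4 and ER = 3 scores (4 × 3)^(1/2) ≈ 3.46, consistent with the "3.5 (high)" workshop levels reported above.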
Generalized minimum dominating set and application in automatic text summarization
NASA Astrophysics Data System (ADS)
Xu, Yi-Zhi; Zhou, Hai-Jun
2016-03-01
For a graph formed by vertices and weighted edges, a generalized minimum dominating set (MDS) is a vertex set of smallest cardinality such that the summed weight of edges from each outside vertex to vertices in this set is equal to or larger than a certain threshold value. This generalized MDS problem reduces to the conventional MDS problem in the limiting case of all edge weights being equal to the threshold value. We treat the generalized MDS problem in the present paper by replica-symmetric spin glass theory and derive a set of belief-propagation equations. As a practical application, we consider the problem of extracting a set of sentences that best summarizes a given input text document. We carry out a preliminary test of this statistical physics-inspired method on the automatic text summarization problem.
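The covering condition that defines a generalized dominating set is easy to state in code. The greedy heuristic below (names and the star-graph example are illustrative) repeatedly adds the vertex that covers the most remaining demand; it is a simple baseline for intuition only, not the belief-propagation algorithm developed in the paper.

```python
def greedy_gmds(n, edges, theta):
    """Greedy heuristic for the generalized minimum dominating set:
    grow the set until every outside vertex has summed edge weight
    into the set >= theta.  edges is a list of (i, j, weight) triples."""
    w = [[0.0] * n for _ in range(n)]
    for i, j, wt in edges:
        w[i][j] = w[j][i] = wt
    dom = set()

    def deficit(v):
        # remaining uncovered demand of an outside vertex v
        return max(0.0, theta - sum(w[v][u] for u in dom))

    while any(deficit(v) > 0 for v in range(n) if v not in dom):
        def gain(c):
            covered = sum(min(deficit(v), w[v][c])
                          for v in range(n) if v not in dom and v != c)
            return covered + deficit(c)  # adding c also satisfies c itself
        best = max((c for c in range(n) if c not in dom), key=gain)
        dom.add(best)
    return dom
```

On a star graph whose hub connects to every leaf with weight equal to the threshold, the hub alone dominates, matching the reduction to the conventional MDS in the equal-weight limit.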
Generalized spherical and simplicial coordinates
NASA Astrophysics Data System (ADS)
Richter, Wolf-Dieter
2007-12-01
Elementary trigonometric quantities are defined in l_{2,p} analogously to those in l_{2,2}: the sine and cosine functions are generalized for each p>0 as functions sin_p and cos_p such that they satisfy the basic equation |cos_p(φ)|^p + |sin_p(φ)|^p = 1. The p-generalized radius coordinate of a point ξ ∈ R^n is defined for each p>0 as r_p(ξ) = (Σ_{i=1}^n |ξ_i|^p)^{1/p}. On combining these quantities, l_{n,p}-spherical coordinates are defined. It is shown that these coordinates are closely related to l_{n,p}-simplicial coordinates. The Jacobians of these generalized coordinate transformations are derived. Applications and interpretations from analysis deal especially with the definition of a generalized surface content on l_{n,p}-spheres, which is closely related to a modified co-area formula and an extension of Cavalieri's and Torricelli's method of indivisibles, and with differential equations. Applications from probability theory deal especially with a geometric interpretation of the uniform probability distribution on the l_{n,p}-sphere and with the derivation of certain generalized statistical distributions.
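A minimal numerical sketch of these quantities follows. It assumes the standard construction in which the ordinary sine and cosine are rescaled by the l_p norm of the pair (cos φ, sin φ), which makes the identity stated in the abstract hold by design; the function names are illustrative.

```python
import math

def _norm_p(phi, p):
    # l_p norm of the pair (cos(phi), sin(phi))
    return (abs(math.cos(phi)) ** p + abs(math.sin(phi)) ** p) ** (1.0 / p)

def cos_p(phi, p):
    """p-generalized cosine: ordinary cosine rescaled by the l_p norm."""
    return math.cos(phi) / _norm_p(phi, p)

def sin_p(phi, p):
    """p-generalized sine: ordinary sine rescaled by the l_p norm."""
    return math.sin(phi) / _norm_p(phi, p)

def radius_p(x, p):
    """p-generalized radius of a point x in R^n: (sum |x_i|^p)^(1/p)."""
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)
```

For p = 2 the rescaling factor is 1 and the ordinary trigonometric functions and Euclidean radius are recovered.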
Biology and Control of Insect and Related Pests of Livestock in Wyoming. MP-23.
ERIC Educational Resources Information Center
Lloyd, John E.
This document provides information that a potential insecticide applicator can utilize to safely and effectively control insects and related pests of livestock. The first section of the manual discusses the general methods of preparation and application of insecticides. The second section concerns itself with the recognition of insect problems,…
1981-11-10
1976), 745-754. 4. (with W. C. Tam) Periodic and traveling wave solutions to the Volterra-Lotka equation with diffusion. Bull. Math. Biol. 38 (1976), 643... with applications [17,19,20]. (5) A general method for reconstructing the mutual coherence function of a static or moving source from the random
Freeze-drying of “pearl milk tea”: A general strategy for controllable synthesis of porous materials
Zhou, Yingke; Tian, Xiaohui; Wang, Pengcheng; Hu, Min; Du, Guodong
2016-01-01
Porous materials have been widely used in many fields, but the large-scale synthesis of materials with controlled pore sizes, pore volumes, and wall thicknesses remains a considerable challenge. Thus, the controllable synthesis of porous materials is of key general importance. Herein, we demonstrate the "pearl milk tea" freeze-drying method to form porous materials with controllable pore characteristics, realized by rapidly freezing a uniformly distributed template-containing precursor solution, followed by freeze-drying and suitable calcination. This general and convenient method has been successfully applied to synthesize various porous phosphate and oxide materials using different templates. The method is promising for the development of tunable porous materials for numerous applications in energy, the environment, and catalysis. PMID:27193866
NASA Astrophysics Data System (ADS)
Zhou, Wei-Xing; Sornette, Didier
2007-07-01
We have recently introduced the “thermal optimal path” (TOP) method to investigate the real-time lead-lag structure between two time series. The TOP method consists in searching for a robust noise-averaged optimal path of the distance matrix along which the two time series have the greatest similarity. Here, we generalize the TOP method by introducing a more general definition of distance which takes into account possible regime shifts between positive and negative correlations. This generalization to track possible changes of correlation signs is able to identify possible transitions from one convention (or consensus) to another. Numerical simulations on synthetic time series verify that the new TOP method performs as expected even in the presence of substantial noise. We then apply it to investigate changes of convention in the dependence structure between the historical volatilities of the USA inflation rate and economic growth rate. Several measures show that the new TOP method significantly outperforms standard cross-correlation methods.
A-posteriori error estimation for the finite point method with applications to compressible flow
NASA Astrophysics Data System (ADS)
Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio
2017-08-01
An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
Volterra's Solution of the Wave Equation as Applied to Three-Dimensional Supersonic Airfoil Problems
NASA Technical Reports Server (NTRS)
Heaslet, Max A; Lomax, Harvard; Jones, Arthur L
1947-01-01
A surface integral is developed which yields solutions of the linearized partial differential equation for supersonic flow. These solutions satisfy boundary conditions arising in wing theory. Particular applications of this general method are made, using acceleration potentials, to flat surfaces and to uniformly loaded lifting surfaces. Rectangular and trapezoidal plan forms are considered along with triangular forms adaptable to swept-forward and swept-back wings. The case of the triangular plan form in sideslip is also included. Emphasis is placed on the systematic application of the method to the lifting surfaces considered and on the possibility of further application.
Simple Additive Weighting to Diagnose Rabbit Disease
NASA Astrophysics Data System (ADS)
Ramadiani; Marissa, Dyna; Jundillah, Muhammad Labib; Azainil; Hatta, Heliza Rahmania
2018-02-01
Rabbits are among the many pets kept by the general public in Indonesia. Like other pets, rabbits are susceptible to various diseases. The general public often does not correctly recognize the types of rabbit disease or their treatment. To help care for sick rabbits, a decision support system recommending diagnoses of rabbit disease is needed. The purpose of this research is to build a rabbit disease diagnosis application that can help users take care of rabbits. The application diagnoses disease by tracing symptoms and computing disease recommendations using the Simple Additive Weighting method. This research produces a web-based decision support system intended to help rabbit breeders and the general public.
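The scoring step of Simple Additive Weighting is a short computation: normalize each criterion column, then rank alternatives by their weighted sums. The sketch below uses the standard benefit/cost normalization; the criterion values, weights, and benefit flags are hypothetical, not taken from the rabbit-disease system described above.

```python
def saw_scores(matrix, weights, benefit):
    """Simple Additive Weighting: normalize each criterion column
    (benefit criterion: x / column max; cost criterion: column min / x),
    then score each alternative by the weighted sum of normalized values."""
    cols = list(zip(*matrix))  # one tuple per criterion column
    scores = []
    for row in matrix:
        norm = [(row[j] / max(cols[j])) if benefit[j] else (min(cols[j]) / row[j])
                for j in range(len(weights))]
        scores.append(sum(w * x for w, x in zip(weights, norm)))
    return scores
```

The alternative with the highest score is the recommended diagnosis; in the system above, rows would correspond to candidate diseases and columns to symptom-match criteria.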
Acar, Elif F; Sun, Lei
2013-06-01
Motivated by genetic association studies of SNPs with genotype uncertainty, we propose a generalization of the Kruskal-Wallis test that incorporates group uncertainty when comparing k samples. The extended test statistic is based on probability-weighted rank-sums and follows an asymptotic chi-square distribution with k - 1 degrees of freedom under the null hypothesis. Simulation studies confirm the validity and robustness of the proposed test in finite samples. Application to a genome-wide association study of type 1 diabetic complications further demonstrates the utilities of this generalized Kruskal-Wallis test for studies with group uncertainty. The method has been implemented as an open-resource R program, GKW. © 2013, The International Biometric Society.
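For orientation, the classic statistic being generalized can be written compactly. The sketch below computes the ordinary Kruskal-Wallis H for hard group labels (assuming no ties); the generalization described above replaces the implicit 0/1 group indicators with membership probabilities, yielding probability-weighted rank sums. This is not the GKW implementation.

```python
def kruskal_wallis_h(groups):
    """Classic Kruskal-Wallis H statistic (assumes no tied observations).
    groups is a list of k samples; under H0, H is asymptotically
    chi-square with k - 1 degrees of freedom."""
    pooled = sorted(x for g in groups for x in g)
    rank = {x: i + 1 for i, x in enumerate(pooled)}   # 1-based ranks
    n = len(pooled)
    h = sum(len(g) * (sum(rank[x] for x in g) / len(g) - (n + 1) / 2) ** 2
            for g in groups)
    return 12.0 / (n * (n + 1)) * h
```

In the genotype-uncertainty setting, each observation contributes rank × membership probability to every group's rank sum instead of contributing wholly to one group.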
Regge calculus and observations. II. Further applications.
NASA Astrophysics Data System (ADS)
Williams, Ruth M.; Ellis, G. F. R.
1984-11-01
The method, developed in an earlier paper, for tracing geodesics of particles and light rays through Regge calculus space-times is applied to a number of problems in the Schwarzschild geometry. It is possible to obtain accurate predictions of light bending by taking sufficiently small Regge blocks. Calculations of perihelion precession, Thomas precession, and the distortion of a ball of fluid moving on a geodesic can also show good agreement with the analytic solution. However, difficulties arise in obtaining accurate predictions for general orbits in these space-times. Applications to other problems in general relativity are discussed briefly.
The KS Method in Light of Generalized Euler Parameters.
1980-01-01
motion for the restricted two-body problem is transformed via the Kustaanheimo-Stiefel transformation method (KS) into a dynamical equation in the... Kustaanheimo-Stiefel transformation method (KS) in the two-body problem. Many papers have appeared in which specific problems or applications have... TRANSFORMATION MATRIX. P. Kustaanheimo and E. Stiefel proposed a regularization method by introducing a 4 x 4 transformation matrix and four-component
NASA Astrophysics Data System (ADS)
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.
Houwink, Elisa J.F.; Muijtjens, Arno M.M.; van Teeffelen, Sarah R.; Henneman, Lidewij; Rethans, Jan Joost; van der Jagt, Liesbeth E.J.; van Luijk, Scheltus J.; Dinant, Geert Jan; van der Vleuten, Cees; Cornel, Martina C.
2014-01-01
Purpose: General practitioners are increasingly called upon to deliver genetic services and could play a key role in translating potentially life-saving advancements in oncogenetic technologies to patient care. If general practitioners are to make an effective contribution in this area, their genetics competencies need to be upgraded. The aim of this study was to investigate whether oncogenetics training for general practitioners improves their genetic consultation skills. Methods: In this pragmatic, blinded, randomized controlled trial, the intervention consisted of a 4-h training (December 2011 and April 2012), covering oncogenetic consultation skills (family history, familial risk assessment, and efficient referral), attitude (medical ethical issues), and clinical knowledge required in primary-care consultations. Outcomes were measured using observation checklists by unannounced standardized patients and self-reported questionnaires. Results: Of 88 randomized general practitioners who initially agreed to participate, 56 completed all measurements. Key consultation skills significantly and substantially improved; regression coefficients after intervention were equivalent to 0.34 and 0.28 at 3-month follow-up, indicating a moderate effect size. Satisfaction and perceived applicability of newly learned skills were highly scored. Conclusion: The general practitioner–specific training proved to be a feasible, satisfactory, and clinically applicable method to improve oncogenetics consultation skills and could be used as an educational framework to inform future training activities with the ultimate aim of improving medical care. PMID:23722870
NASA Astrophysics Data System (ADS)
Asten, M. W.; Hayashi, K.
2018-07-01
Ambient seismic noise or microtremor observations used in spatial auto-correlation (SPAC) array methods consist of a wide frequency range of surface waves, from about 0.1 Hz to several tens of Hz. The wavelengths (and hence depth sensitivity) of such surface waves allow determination of the site S-wave velocity model from a depth of 1 or 2 m down to a maximum of several kilometres; it is a passive seismic method using only ambient noise as the energy source. Applications usually use a 2D seismic array with a small number of seismometers (generally between 2 and 15) to estimate the phase-velocity dispersion curve and hence the S-wave velocity depth profile for the site. A large number of methods have been proposed and used to estimate the dispersion curve; SPAC is one of the oldest and most commonly used, owing to its versatility and minimal instrumentation requirements. We show that direct fitting of observed and model SPAC spectra generally gives a superior bandwidth of usable data than does the more common approach of inversion after the intermediate step of constructing an observed dispersion curve. Current case histories demonstrate the method with a range of array types, including two-station arrays, L-shaped multi-station arrays, and triangular and circular arrays. Array sizes from a few metres to several km in diameter have been successfully deployed in sites ranging from downtown urban settings to rural and remote desert sites. A fundamental requirement of the method is the ability to average wave propagation over a range of azimuths; this can be achieved with either or both of the wave sources being widely distributed in azimuth and the use of a 2D array sampling the wave field over a range of azimuths.
Several variants of the method extend its applicability to under-sampled data from sparse arrays, the complexity of multiple-mode propagation of energy, and the problem of precise estimation where array geometry departs from an ideal regular array. We find that sparse nested triangular arrays are generally sufficient, and the use of high-density circular arrays is unlikely to be cost-effective in routine applications. We recommend that passive seismic arrays be the method of first choice when characterizing average S-wave velocity to a depth of 30 m (Vs30) and deeper, with active seismic methods such as multichannel analysis of surface waves (MASW) being a complementary method for use if and when conditions so require. The use of computer inversion methodology allows estimation not only of the S-wave velocity profile but also of parameter uncertainties in terms of layer thickness and velocity. The coupling of SPAC methods with horizontal/vertical particle-motion spectral ratio analysis generally allows use of lower-frequency data, with consequent resolution of deeper layers than is possible with SPAC alone. Considering its non-invasive methodology, logistical flexibility, simplicity, applicability, and stability, the SPAC method and its various modified extensions will play an increasingly important role in site effect evaluation. The paper summarizes the fundamental theory of the SPAC method, reviews recent developments, and offers recommendations for future blind studies.
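The model coherency that direct SPAC fitting compares against observations is, for a vertical-component array in a single-mode field, a Bessel function of the interstation distance. The sketch below evaluates that model with a truncated J0 series; it is a toy for intuition (real applications use library Bessel routines and a frequency-dependent velocity model c(f)), and the parameter values in the usage check are illustrative.

```python
import math

def bessel_j0(x, terms=30):
    """Series expansion of the Bessel function J0 (adequate for |x| < ~15):
    J0(x) = sum over m of (-1)**m * (x/2)**(2m) / (m!)**2."""
    s, t = 0.0, 1.0
    for m in range(terms):
        s += t
        t *= -(x / 2.0) ** 2 / ((m + 1) ** 2)
    return s

def model_spac(freq, r, c):
    """Model SPAC coherency for interstation distance r (m) at frequency
    freq (Hz), given phase velocity c (m/s): azimuthally averaged
    coherency = J0(2*pi*freq*r / c)."""
    return bessel_j0(2.0 * math.pi * freq * r / c)
```

Fitting observed coherency spectra to this model across frequency yields the phase-velocity dispersion, which is then inverted for the S-wave velocity profile.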
Computational methods for internal flows with emphasis on turbomachinery
NASA Technical Reports Server (NTRS)
Mcnally, W. D.; Sockol, P. M.
1981-01-01
Current computational methods for analyzing flows in turbomachinery and other related internal propulsion components are presented. The methods are divided into two classes. The inviscid methods deal specifically with turbomachinery applications. Viscous methods deal with generalized duct flows as well as flows in turbomachinery passages. Inviscid methods are categorized into the potential, stream function, and Euler approaches. Viscous methods are treated in terms of parabolic, partially parabolic, and elliptic procedures. Various grids used in association with these procedures are also discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, N. A.; Forget, B.
2012-07-01
The Discrete Generalized Multigroup (DGM) method uses discrete Legendre orthogonal polynomials to expand the energy dependence of the multigroup neutron transport equation. This allows a solution on a fine energy mesh to be approximated for a cost comparable to a solution on a coarse energy mesh. The DGM method is applied to an ultra-fine energy mesh (14,767 groups) to avoid using self-shielding methodologies without introducing the cost usually associated with such energy discretization. Results show DGM to converge to the reference ultra-fine solution after a small number of recondensation steps for multiple infinite medium compositions. (authors)
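The expansion at the heart of DGM can be illustrated with discrete Legendre orthogonal polynomials built by Gram-Schmidt: a fine-group flux is projected onto a few moments and reconstructed, mimicking the coarse-mesh compression. This is a toy sketch of the expansion idea only (function names assumed), not the DGM recondensation scheme applied to the transport equation.

```python
def discrete_legendre_basis(n):
    """Orthonormal discrete Legendre polynomials on points 0..n-1,
    built by Gram-Schmidt on the monomials 1, i, i**2, ..."""
    basis = []
    for deg in range(n):
        v = [float(i ** deg) for i in range(n)]
        for b in basis:
            c = sum(x * y for x, y in zip(v, b))
            v = [x - c * y for x, y in zip(v, b)]
        norm = sum(x * x for x in v) ** 0.5
        basis.append([x / norm for x in v])
    return basis

def truncated_expansion(flux, k):
    """Expand a fine-group flux in the first k discrete Legendre moments
    and reconstruct it (the compression/decompression step of the idea)."""
    n = len(flux)
    basis = discrete_legendre_basis(n)
    moments = [sum(b[i] * flux[i] for i in range(n)) for b in basis[:k]]
    return [sum(m * basis[j][i] for j, m in enumerate(moments))
            for i in range(n)]
```

A flux that varies smoothly across fine groups is captured by a handful of moments, which is why a 14,767-group solution can be approximated at coarse-mesh cost.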
A generalization of random matrix theory and its application to statistical physics.
Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H
2017-02-01
To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first analytically and numerically determine how auto-correlations affect the eigenvalue distribution of the correlation matrix. Then we introduce ARRMT with a detailed procedure of how to implement the method. Finally, we illustrate the method using two examples: inflation rates and air pressure data for 95 US cities.
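One ingredient of the analysis, how auto-correlation reshapes correlation-matrix eigenvalues, can be seen already in the population case: an AR(1) process with coefficient phi has the Toeplitz correlation matrix rho_ij = phi^|i-j|, whose leading eigenvalue exceeds the white-noise value of 1. A minimal sketch (names illustrative; this is not the ARRMT procedure itself):

```python
def ar1_correlation_matrix(n, phi):
    """Population correlation matrix of an AR(1) process: rho_ij = phi**|i-j|."""
    return [[phi ** abs(i - j) for j in range(n)] for i in range(n)]

def leading_eigenvalue(matrix, iters=200):
    """Largest eigenvalue of a symmetric matrix with positive entries,
    by power iteration with max-norm scaling."""
    n = len(matrix)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam
```

For phi = 0 (white noise) the matrix is the identity and every eigenvalue is 1; increasing phi pushes the leading eigenvalue up, which is the distortion of the eigenvalue distribution that ARRMT corrects for.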
Research and applications: Artificial intelligence
NASA Technical Reports Server (NTRS)
Raphael, B.; Fikes, R. E.; Chaitin, L. J.; Hart, P. E.; Duda, R. O.; Nilsson, N. J.
1971-01-01
A program of research in the field of artificial intelligence is presented. The research areas discussed include automatic theorem proving, representations of real-world environments, problem-solving methods, the design of a programming system for problem-solving research, techniques for general scene analysis based upon television data, and the problems of assembling an integrated robot system. Major accomplishments include the development of a new problem-solving system that uses both formal logical inference and informal heuristic methods, the development of a method of automatic learning by generalization, and the design of the overall structure of a new complete robot system. Eight appendices to the report contain extensive technical details of the work described.
Entering an era of dynamic structural biology….
Orville, Allen M
2018-05-31
A recent paper in BMC Biology presents a general method for mix-and-inject serial crystallography, to facilitate the visualization of enzyme intermediates via time-resolved serial femtosecond crystallography (tr-SFX). The authors apply their method to resolve, in near-atomic detail, the cleavage and inactivation of the antibiotic ceftriaxone by a β-lactamase enzyme from Mycobacterium tuberculosis. Their work demonstrates the general applicability of time-resolved crystallography, from which dynamic structures, at atomic resolution, can be obtained. See research article: https://bmcbiol.biomedcentral.com/articles/10.1186/s12915-018-0524-5 .
Classification of Phase Transitions by Microcanonical Inflection-Point Analysis
NASA Astrophysics Data System (ADS)
Qi, Kai; Bachmann, Michael
2018-05-01
By means of the principle of minimal sensitivity we generalize the microcanonical inflection-point analysis method by probing derivatives of the microcanonical entropy for signals of transitions in complex systems. A strategy of systematically identifying and locating independent and dependent phase transitions of any order is proposed. The power of the generalized method is demonstrated in applications to the ferromagnetic Ising model and a coarse-grained model for polymer adsorption onto a substrate. The results shed new light on the intrinsic phase structure of systems with cooperative behavior.
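A toy numerical sketch of the inflection-point idea (the "entropy" derivative here is a synthetic arctan-shaped curve, not a real microcanonical density of states): a transition shows up as a sign change in the second finite difference of β(E) = dS/dE, the least-sensitive point singled out by the principle of minimal sensitivity.

```python
import math

# Synthetic beta(E) = dS/dE with a single inflection point at E = 2.
h = 0.01
E = [0.005 + h * i for i in range(400)]          # grid chosen to avoid E = 2 exactly
beta = [2.0 + math.atan(4.0 * (2.0 - e)) for e in E]

def second_diff(y, i):
    """Central second finite difference at grid index i."""
    return (y[i + 1] - 2.0 * y[i] + y[i - 1]) / h ** 2

# An inflection of beta(E) is a sign change of its second derivative.
sign_changes = [
    0.5 * (E[i] + E[i + 1])
    for i in range(1, len(E) - 2)
    if second_diff(beta, i) * second_diff(beta, i + 1) < 0
]
E_transition = sign_changes[0]
```

The single detected sign change lands at the constructed transition energy E = 2; on real data one would probe higher derivatives of the microcanonical entropy in the same way.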
ERIC Educational Resources Information Center
Brusco, Michael; Steinley, Douglas
2010-01-01
Structural balance theory (SBT) has maintained a venerable status in the psychological literature for more than 5 decades. One important problem pertaining to SBT is the approximation of structural or generalized balance via the partitioning of the vertices of a signed graph into "K" clusters. This "K"-balance partitioning problem also has more…
21 CFR 177.1960 - Vinyl chloride-hexene-1 copolymers.
Code of Federal Regulations, 2014 CFR
2014-04-01
... determined by any suitable analytical procedure of generally accepted applicability. (ii) Inherent viscosity... D1243-79, “Standard Test Method for Dilute Solution Viscosity of Vinyl Chloride Polymers,” which is...
NASA Astrophysics Data System (ADS)
Zhang, X.-G.; Varga, Kalman; Pantelides, Sokrates T.
2007-07-01
Band-theoretic methods with periodically repeated supercells have been a powerful approach for ground-state electronic structure calculations but have not so far been adapted for quantum transport problems with open boundary conditions. Here, we introduce a generalized Bloch theorem for complex periodic potentials and use a transfer-matrix formulation to cast the transmission probability in a scattering problem with open boundary conditions in terms of the complex wave vectors of a periodic system with absorbing layers, allowing a band technique for quantum transport calculations. The accuracy and utility of the method are demonstrated by the model problems of the transmission of an electron over a square barrier and the scattering of a phonon in an inhomogeneous nanowire. Application to the resistance of a twin boundary in nanocrystalline copper yields excellent agreement with recent experimental data.
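The square-barrier test case mentioned in the abstract can be sketched with a textbook transfer-matrix calculation using complex wave vectors (this is the standard scattering setup, not the paper's band formulation; units are ħ = 1, m = 1/2, so k = √E, and E < V gives an imaginary wave vector inside the barrier):

```python
import cmath

def interface(x, k1, k2):
    """2x2 matrix relating plane-wave amplitudes (A1, B1) left of x to (A2, B2) right of x,
    from continuity of the wave function and its derivative."""
    r = k2 / k1
    return [
        [0.5 * (1 + r) * cmath.exp(1j * (k2 - k1) * x),
         0.5 * (1 - r) * cmath.exp(-1j * (k2 + k1) * x)],
        [0.5 * (1 - r) * cmath.exp(1j * (k2 + k1) * x),
         0.5 * (1 + r) * cmath.exp(-1j * (k2 - k1) * x)],
    ]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transmission(E, V, a):
    """Transmission probability through a square barrier of height V and width a."""
    k_out = cmath.sqrt(E)
    k_in = cmath.sqrt(E - V)      # complex (imaginary) wave vector when tunnelling
    M = matmul(interface(0.0, k_out, k_in), interface(a, k_in, k_out))
    # With no wave incoming from the right, T = 1 / |M00|^2.
    return 1.0 / abs(M[0][0]) ** 2

T = transmission(1.0, 2.0, 1.0)   # tunnelling regime, E < V
```

For E = 1, V = 2, a = 1 this reproduces the analytic result T = 1/cosh²(1) ≈ 0.42, confirming that the complex-wave-vector bookkeeping is consistent.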
[Perioperative use of medical hypnosis. Therapy options for anaesthetists and surgeons].
Hermes, D; Trübger, D; Hakim, S G; Sieg, P
2004-04-01
Surgical treatment of patients under local anaesthesia is quite commonly restricted by limited compliance from the patient. An alternative to treatment under pharmacological sedation or general anaesthesia could be the application of medical hypnosis. With this method, both suggestive and autosuggestive procedures are used for anxiolysis, relaxation, sedation and analgesia of the patient. During a 1-year period of first clinical application, a total of 207 surgical procedures on a non-selected collective of 174 patients were carried out under combined local anaesthesia and medical hypnosis. Medical hypnosis proved to be a standardisable and reliable method by which remarkable improvements in treatment conditions for both patient and surgeons were achievable. Medical hypnosis is not considered to be a substitute for conscious sedation or general anaesthesia but a therapeutic option equally interesting for anaesthetists and surgeons.
A semiparametric separation curve approach for comparing correlated ROC data from multiple markers
Tang, Liansheng Larry; Zhou, Xiao-Hua
2012-01-01
In this article we propose a separation curve method to identify the range of false positive rates for which two ROC curves differ or one ROC curve is superior to the other. Our method is based on a general multivariate ROC curve model, including interaction terms between discrete covariates and false positive rates. It is applicable with most existing ROC curve models. Furthermore, we introduce a semiparametric least squares ROC estimator and apply the estimator to the separation curve method. We derive a sandwich estimator for the covariance matrix of the semiparametric estimator. We illustrate the application of our separation curve method through two real life examples. PMID:23074360
Magnetic pulse cleaning of products
NASA Astrophysics Data System (ADS)
Smolentsev, V. P.; Safonov, S. V.; Smolentsev, E. V.; Fedonin, O. N.
2016-04-01
The article deals with the application of magnetic pulse action to new equipment and methods for cleaning precision cast blanks of fragile or granular thickened surface coatings, which are difficult to remove and highly resistant to further mechanical processing. The issues relating to rational use of the new method for typical products and auxiliary operations have been studied. Calculation and design methods have been elaborated for the load-carrying elements of the equipment created. It has been shown that the application of the magnetic pulse method, combined with a low-frequency vibration process, is promising at enterprises of general and special machine construction for cleaning lightweight blanks and the containers used for transporting bulk goods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitcher, C.E.; Zimmerman, D.C.; Tonn, E.M.
Methods were developed for controlling the dental team's occupational exposure to nitrous oxide. The most applicable and effective use of these methods included the use of properly maintained gas delivery equipment, a double-walled scavenging nosepiece and vented suction machine, and minimizing speech by the patient. These methods were evaluated by measuring concentrations of nitrous oxide present in the air inspired by dental personnel. Before their use, the dentist inhaled 900 ppM nitrous oxide; their application reduced his inhaled concentration to 31 ppM, representing a 97% reduction. These methods were well accepted during 157 procedures completed by a group of eight dentists engaged in private practice (four general practitioners, two pedodontists, and two oral surgeons).
Dominating Scale-Free Networks Using Generalized Probabilistic Methods
Molnár, F.; Derzsy, N.; Czabarka, É.; Székely, L.; Szymanski, B. K.; Korniss, G.
2014-01-01
We study ensemble-based graph-theoretical methods aiming to approximate the size of the minimum dominating set (MDS) in scale-free networks. We analyze both analytical upper bounds of dominating sets and numerical realizations for applications. We propose two novel probabilistic dominating set selection strategies that are applicable to heterogeneous networks. One of them obtains the smallest probabilistic dominating set and also outperforms the deterministic degree-ranked method. We show that a degree-dependent probabilistic selection method becomes optimal in its deterministic limit. In addition, we also find the precise limit where selecting high-degree nodes exclusively becomes inefficient for network domination. We validate our results on several real-world networks, and provide highly accurate analytical estimates for our methods. PMID:25200937
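A pure-Python sketch of degree-dependent probabilistic domination (an illustrative selection rule in the spirit of the paper, not its exact probability formula): nodes are selected with probability proportional to their degree, then any still-uncovered node is patched greedily. The preferential-attachment test graph and all parameters are arbitrary choices for the demo.

```python
import random

def dominates(graph, dset):
    """True if every node is in dset or adjacent to a member of dset."""
    covered = set(dset)
    for v in dset:
        covered.update(graph[v])
    return covered == set(graph)

def probabilistic_dominating_set(graph, rng):
    """Select node v with probability proportional to its degree, then patch
    any uncovered node by adding its highest-degree neighbour (or itself)."""
    max_deg = max(len(nb) for nb in graph.values())
    dset = {v for v in graph if rng.random() < len(graph[v]) / max_deg}
    for v in graph:
        if v not in dset and not dset & graph[v]:
            best = max(list(graph[v]) + [v], key=lambda u: len(graph[u]))
            dset.add(best)   # patch step guarantees v becomes covered
    return dset

def pref_attach_graph(n, rng):
    """Small scale-free-ish graph via simple preferential attachment."""
    graph = {0: {1}, 1: {0}}
    targets = [0, 1]
    for v in range(2, n):
        u = rng.choice(targets)        # high-degree nodes are chosen more often
        graph[v] = {u}
        graph[u].add(v)
        targets += [u, v]
    return graph

rng = random.Random(42)
g = pref_attach_graph(60, rng)
ds = probabilistic_dominating_set(g, rng)
```

Because hubs are selected with near-certainty and leaves rarely, the resulting set is far smaller than the node count while still dominating the graph.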
Forestry sector analysis for developing countries: issues and methods.
R.W. Haynes
1993-01-01
A satellite meeting of the 10th Forestry World Congress focused on the methods used for forest sector analysis and their applications in both developed and developing countries. The results of that meeting are summarized, and a general approach for forest sector modeling is proposed. The approach includes models derived from the existing...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-04
... the current status of Vanguard Study and will discuss general data collection methods and retention strategy and methods. Place: National Institutes of Health, Natcher Conference Center, Room E1/E2, 45... applicable, the business or professional affiliation of the interested person. For additional information...
Using Blogs in Qualitative Educational Research: An Exploration of Method
ERIC Educational Resources Information Center
Harricharan, Michelle; Bhopal, Kalwant
2014-01-01
When compared with wider social research, qualitative educational research has been relatively slow to take up online research methods (ORMs). There is some very notable research in the area but, in general, ORMs have not achieved wide applicability in qualitative educational contexts apart from research that is inherently linked to the Internet,…
Power Analysis for Complex Mediational Designs Using Monte Carlo Methods
ERIC Educational Resources Information Center
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
NASA Astrophysics Data System (ADS)
Ghassemi, Aazam; Yazdani, Mostafa; Hedayati, Mohamad
2017-12-01
In this work, based on the First Order Shear Deformation Theory (FSDT), an attempt is made to explore the applicability and accuracy of the Generalized Differential Quadrature Method (GDQM) for bending analysis of composite sandwich plates under static loading. Comparative studies of the bending behavior of composite sandwich plates are made between two types of boundary conditions for different cases. The effects of fiber orientation, the ratio of thickness to length of the plate, and the ratio of core thickness to face-sheet thickness on the transverse displacement and moment resultants are studied. As shown in this study, the role of the core thickness in the deformation of these plates can be reversed by the stiffness of the core in comparison with the sheets. The obtained results are useful for the optimum design of sandwich plates. In comparison with existing solutions, fast convergence rates and high-accuracy results are achieved by the GDQ method.
Deep imitation learning for 3D navigation tasks.
Hussein, Ahmed; Elyan, Eyad; Gaber, Mohamed Medhat; Jayne, Chrisina
2018-01-01
Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning is recently gaining popularity as a method to train intelligent agents, utilizing deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient method to teach intelligent agents by providing a set of demonstrations to learn from. However, generalizing to situations that are not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method to learn navigation tasks from demonstrations in a 3D environment. The supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared to two popular deep reinforcement learning techniques: deep Q-networks (DQN) and asynchronous advantage actor-critic (A3C). The proposed method as well as the reinforcement learning methods employ deep convolutional neural networks and learn directly from raw visual input. Methods for combining learning from demonstrations and experience are also investigated. This combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on 4 navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem that is relevant to many real applications. They pose the challenge of requiring demonstrations of long trajectories to reach the target and only providing delayed rewards (usually terminal) to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input while learning-from-experience methods fail to learn an effective policy. Moreover, it is shown that active learning can significantly improve the performance of the initially learned policy using a small number of active samples.
Recent Applications of Neutron Imaging Methods
NASA Astrophysics Data System (ADS)
Lehmann, E.; Mannes, D.; Kaestner, A.; Grünzweig, C.
Methodical progress in the field of neutron imaging is evident in general, though at different levels in particular labs. Access to the most suitable beam ports, the usage of advanced imaging detector systems, and professional image processing have made the technique competitive with other non-destructive tools such as X-ray imaging. Based on this performance gain and on new methodical approaches, several new application fields have emerged, in addition to the already established ones. Accordingly, new image data are now mostly available in three dimensions, in the format of tomography volumes. The radiography mode is still the basis of neutron imaging, but the information extracted from superimposed image data (as with a grating interferometer) enables completely new insights. As a consequence, many new applications have been created.
Trees, B-series and G-symplectic methods
NASA Astrophysics Data System (ADS)
Butcher, J. C.
2017-07-01
The order conditions for Runge-Kutta methods are intimately connected with the graphs known as rooted trees. The conditions can be expressed in terms of Taylor expansions written as weighted sums of elementary differentials, that is as B-series. Polish notation provides a unifying structure for representing many of the quantities appearing in this theory. Applications include the analysis of general linear methods with special reference to G-symplectic methods. A new order 6 method has recently been constructed.
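The rooted-tree order conditions mentioned here can be checked mechanically for a concrete method. The sketch below verifies the classical RK4 tableau against the elementary-differential conditions up to a few trees (each condition corresponds to one rooted tree; exact rational arithmetic avoids round-off):

```python
from fractions import Fraction as F

# Classical RK4 Butcher tableau.
A = [[0, 0, 0, 0],
     [F(1, 2), 0, 0, 0],
     [0, F(1, 2), 0, 0],
     [0, 0, 1, 0]]
b = [F(1, 6), F(1, 3), F(1, 3), F(1, 6)]
c = [sum(row) for row in A]            # row-sum convention c_i = sum_j a_ij

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

Ac = [dot(row, c) for row in A]

# Selected order conditions, labelled by their rooted trees.
conditions = {
    "tau (order 1): sum b_i":              (dot(b, [1, 1, 1, 1]), F(1)),
    "[tau] (order 2): sum b_i c_i":        (dot(b, c), F(1, 2)),
    "[tau,tau] (order 3): sum b_i c_i^2":  (dot(b, [x * x for x in c]), F(1, 3)),
    "[[tau]] (order 3): sum b_i a_ij c_j": (dot(b, Ac), F(1, 6)),
    "[tau,tau,tau] (order 4): b . c^3":    (dot(b, [x ** 3 for x in c]), F(1, 4)),
    "[[[tau]]] (order 4): b . A . A . c":  (dot(b, [dot(row, Ac) for row in A]), F(1, 24)),
}
```

Every left-hand side equals the required right-hand side exactly, which is precisely the B-series statement that the method's elementary weights match those of the exact flow up to the given order.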
NASA Astrophysics Data System (ADS)
Wang, Qian; Gao, Jinghuai
2018-02-01
As a powerful tool for hydrocarbon detection and reservoir characterization, the quality factor, Q, provides useful information in seismic data processing and interpretation. In this paper, we propose a novel method for Q estimation. The generalized seismic wavelet (GSW) function is introduced to fit the amplitude spectrum of seismic waveforms with two parameters: a fractional value and a reference frequency. We then derive an analytical relation between the GSW function and the Q factor of the medium. When a seismic wave propagates through a viscoelastic medium, the GSW function can be employed to fit the amplitude spectra of the source and attenuated wavelets, and the fractional values and reference frequencies can be evaluated numerically from the discrete Fourier spectrum. After calculating the peak frequency from the obtained fractional value and reference frequency, the relationship between the GSW function and the Q factor can be built by the conventional peak-frequency-shift method. Synthetic tests indicate that our method achieves higher accuracy and is more robust to random noise than existing methods. Furthermore, the proposed method is applicable to different types of source wavelet. A field data application also demonstrates the effectiveness of our method in estimating seismic attenuation and its potential for reservoir characterization.
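For context, the baseline these methods compete with can be sketched in a few lines: the classical spectral-ratio estimator (a standard technique, not the GSW method of this paper) recovers Q from the slope of ln(A₂/A₁) versus frequency, since constant-Q attenuation gives ln(A₂/A₁) = −πfΔt/Q. The frequency band, travel times, and Q = 80 below are arbitrary synthetic choices.

```python
import math

def amplitude(f, t, Q, A0=1.0):
    """Amplitude spectrum after travel time t in a constant-Q medium
    (source spectrum factored out): A(f) = A0 * exp(-pi * f * t / Q)."""
    return A0 * math.exp(-math.pi * f * t / Q)

def estimate_q(freqs, spec1, spec2, dt):
    """Least-squares slope of ln(A2/A1) versus f equals -pi*dt/Q."""
    y = [math.log(a2 / a1) for a1, a2 in zip(spec1, spec2)]
    n = len(freqs)
    fm, ym = sum(freqs) / n, sum(y) / n
    slope = (sum((f - fm) * (v - ym) for f, v in zip(freqs, y))
             / sum((f - fm) ** 2 for f in freqs))
    return -math.pi * dt / slope

freqs = [5.0 + i for i in range(60)]          # 5-64 Hz band
t1, t2, Q_true = 1.0, 1.5, 80.0
spec1 = [amplitude(f, t1, Q_true) for f in freqs]
spec2 = [amplitude(f, t2, Q_true) for f in freqs]
Q_est = estimate_q(freqs, spec1, spec2, t2 - t1)
```

On noise-free synthetics the fit is exact; the abstract's point is that on noisy data, spectrum-fitting approaches such as the GSW method degrade more gracefully than this ratio of two noisy spectra.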
A generic minimization random allocation and blinding system on web.
Cai, Hongwei; Xia, Jielai; Xu, Dezhong; Gao, Donghuai; Yan, Yongping
2006-12-01
Minimization is a dynamic randomization method for clinical trials. Although recommended by many researchers, the utilization of minimization has seldom been reported in randomized trials, mainly because of the controversy surrounding the validity of conventional analyses and its complexity in implementation. However, both the statistical and clinical validity of minimization were demonstrated in recent studies. A minimization random allocation system integrated with a blinding function that could facilitate the implementation of this method in general clinical trials has not been reported. SYSTEM OVERVIEW: The system is a web-based random allocation system using the Pocock and Simon minimization method. It also supports multiple treatment arms within a trial, multiple simultaneous trials, and blinding without further programming. The system was constructed with a generic database schema design method, the Pocock and Simon minimization method, and a blinding method. It was coded in the Microsoft Visual Basic and Active Server Pages (ASP) programming languages, and all datasets were managed with a Microsoft SQL Server database. Some critical programming code is also provided. SIMULATIONS AND RESULTS: Two clinical trials were simulated simultaneously to test the system's applicability. Both balanced groups and blinded allocation results were achieved in both trials. Practical considerations for the minimization method, and the benefits, general applicability, and drawbacks of the technique implemented in this system, are discussed. Promising features of the proposed system are also summarized.
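The Pocock-Simon allocation rule at the heart of such a system fits in a short function. This is an illustrative sketch (range-based marginal imbalance, biased coin p = 0.8; the factors, levels, and parameters are invented for the demo, and the original system's exact scoring may differ):

```python
import random

def minimization_assign(counts, factors, patient, rng, p_best=0.8):
    """Pocock-Simon style minimization: send the patient to the arm that
    minimizes total marginal imbalance, using a biased coin for randomness.
    counts[arm][factor][level] holds the current allocation tallies."""
    arms = sorted(counts)
    imbalance = {}
    for arm in arms:
        total = 0
        for factor in factors:
            level = patient[factor]
            # hypothetically allocate this patient to `arm`, measure the range
            hypo = [counts[a][factor][level] + (1 if a == arm else 0) for a in arms]
            total += max(hypo) - min(hypo)
        imbalance[arm] = total
    best = min(arms, key=lambda a: imbalance[a])
    others = [a for a in arms if a != best]
    chosen = best if rng.random() < p_best else rng.choice(others)
    for factor in factors:
        counts[chosen][factor][patient[factor]] += 1
    return chosen

factors = ["sex", "site"]
levels = {"sex": ["F", "M"], "site": ["A", "B", "C"]}
counts = {arm: {f: {lv: 0 for lv in levels[f]} for f in factors} for arm in ("T", "C")}

rng = random.Random(7)
allocated = {"T": 0, "C": 0}
for _ in range(200):
    patient = {f: rng.choice(levels[f]) for f in factors}
    allocated[minimization_assign(counts, factors, patient, rng)] += 1
```

Unlike simple randomization, the marginal counts stay tightly balanced within every factor level, which is exactly the property that makes minimization attractive for small trials.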
Approaches to Fungal Genome Annotation
Haas, Brian J.; Zeng, Qiandong; Pearson, Matthew D.; Cuomo, Christina A.; Wortman, Jennifer R.
2011-01-01
Fungal genome annotation is the starting point for analysis of genome content. This generally involves the application of diverse methods to identify features on a genome assembly such as protein-coding and non-coding genes, repeats and transposable elements, and pseudogenes. Here we describe tools and methods leveraged for eukaryotic genome annotation with a focus on the annotation of fungal nuclear and mitochondrial genomes. We highlight the application of the latest technologies and tools to improve the quality of predicted gene sets. The Broad Institute eukaryotic genome annotation pipeline is described as one example of how such methods and tools are integrated into a sequencing center’s production genome annotation environment. PMID:22059117
Application of abstract harmonic analysis to the high-speed recognition of images
NASA Technical Reports Server (NTRS)
Usikov, D. A.
1979-01-01
Methods are constructed for rapidly computing correlation functions using the theory of abstract harmonic analysis. The theory developed includes as a particular case the familiar Fourier transform method for a correlation function which makes it possible to find images which are independent of their translation in the plane. Two examples of the application of the general theory described are the search for images, independent of their rotation and scale, and the search for images which are independent of their translations and rotations in the plane.
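The translation-invariant special case mentioned here reduces to cross-correlation over the cyclic group, where the Fourier transform diagonalizes the operation; a direct pure-Python sketch (the 8-sample signal and shift are invented for the demo):

```python
def circular_cross_correlation(a, b):
    """corr[s] = sum_i a[i] * b[(i - s) % n]; the peak locates the relative shift,
    and the peak VALUE is shift-independent -- the translation invariance used
    for image search. (Over Z_n, the DFT turns this into pointwise products.)"""
    n = len(a)
    return [sum(a[i] * b[(i - s) % n] for i in range(n)) for s in range(n)]

signal = [0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0, 0.0]
shift = 3
observed = [signal[(i - shift) % len(signal)] for i in range(len(signal))]

corr = circular_cross_correlation(observed, signal)
recovered_shift = max(range(len(corr)), key=lambda s: corr[s])
```

The correlation peaks at lag 3 with value Σᵢ signal[i]² = 11, independent of the shift applied; replacing the cyclic group by rotation or scaling groups gives the rotation- and scale-invariant searches described in the abstract.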
Rapid iterative reanalysis for automated design
NASA Technical Reports Server (NTRS)
Bhatia, K. G.
1973-01-01
A method for iterative reanalysis in automated structural design is presented for a finite-element analysis using the direct stiffness approach. A basic feature of the method is that the generalized stiffness and inertia matrices are expressed as functions of structural design parameters, and these generalized matrices are expanded in Taylor series about the initial design. Only the linear terms are retained in the expansions. The method is approximate because it uses static condensation, modal reduction, and the linear Taylor series expansions. The exact linear representation of the expansions of the generalized matrices is also described and a basis for the present method is established. Results of applications of the present method to the recalculation of the natural frequencies of two simple platelike structural models are presented and compared with results obtained by using a commonly applied analysis procedure used as a reference. In general, the results are in good agreement. A comparison of the computer times required for the use of the present method and the reference method indicated that the present method required substantially less time for reanalysis. Although the results presented are for relatively small-order problems, the present method will become more efficient relative to the reference method as the problem size increases. An extension of the present method to static reanalysis is described, and a basis for unifying the static and dynamic reanalysis procedures is presented.
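The linear-Taylor-series idea can be seen in miniature on a single degree of freedom (an illustrative reduction, not the report's finite-element formulation): the exact frequency ω(k) = √(k/m) is replaced by its first-order expansion about the initial design, which is cheap to re-evaluate for each design change. The mass, stiffness, and 5% perturbation below are arbitrary.

```python
import math

m, k0 = 2.0, 800.0
omega0 = math.sqrt(k0 / m)           # frequency at the initial design

def omega_exact(dk):
    """Full reanalysis: recompute the frequency from scratch."""
    return math.sqrt((k0 + dk) / m)

def omega_taylor(dk):
    """Linear Taylor reanalysis about k0: omega ~ omega0 * (1 + dk / (2 k0))."""
    return omega0 * (1.0 + dk / (2.0 * k0))

dk = 0.05 * k0                       # a 5% design change
err = abs(omega_taylor(dk) - omega_exact(dk)) / omega_exact(dk)
```

For small design steps the linearized answer is accurate to a fraction of a percent while avoiding the full re-solve, which is the trade-off the report quantifies for plate models with many degrees of freedom.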
Theory and applications of structured light single pixel imaging
NASA Astrophysics Data System (ADS)
Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.
2018-02-01
Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness and decreased acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential associated with single-pixel imaging.
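A minimal frame-style sketch of single-pixel acquisition (illustrative, not the paper's framework): each measurement is one inner product of the scene with a structured pattern, and because a Sylvester-Hadamard basis is a tight frame for ℝⁿ, the synthesis formula x = (1/n) Σᵢ yᵢ φᵢ recovers the image exactly. The 8-pixel "image" is invented for the demo.

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n must be a power of two."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

n = 8
patterns = hadamard(n)                       # one row = one illumination pattern
image = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]

# Bucket (single-pixel) detector: one number per pattern.
measurements = [sum(p * x for p, x in zip(pat, image)) for pat in patterns]

# Frame synthesis: since H H^T = n I, x = (1/n) H^T y.
reconstruction = [
    sum(measurements[i] * patterns[i][j] for i in range(n)) / n
    for j in range(n)
]
```

Swapping the Hadamard rows for any other frame changes the noise behaviour and the number of measurements needed, which is exactly the design space the frame-theoretic analysis opens up.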
Treder, M; Eter, N
2018-04-19
Deep learning is increasingly becoming the focus of various imaging methods in medicine. Due to the large number of different imaging modalities, ophthalmology is particularly suitable for this field of application. This article gives a general overview on the topic of deep learning and its current applications in the field of optical coherence tomography. For the benefit of the reader it focuses on the clinical rather than the technical aspects.
Zhang, Yan; Zhang, Ting; Feng, Yanye; Lu, Xiuxiu; Lan, Wenxian; Wang, Jufang; Wu, Houming; Cao, Chunyang; Wang, Xiaoning
2011-01-01
The production of recombinant proteins on a large scale is important for protein functional and structural studies, particularly using Escherichia coli over-expression systems; however, approximately 70% of recombinant proteins are over-expressed as insoluble inclusion bodies. Here we present an efficient method for generating soluble proteins from inclusion bodies by using two steps of denaturation and one step of refolding. We first demonstrated the advantages of this method over a conventional procedure with one denaturation step and one refolding step using three proteins with different folding properties. The refolded proteins were found to be active using in vitro tests and a bioassay. We then tested the general applicability of this method by analyzing 88 proteins from human and other organisms, all of which were expressed as inclusion bodies. We found that about 76% of these proteins were refolded with an average of >75% yield of soluble proteins. This “two-step-denaturing and refolding” (2DR) method is simple, highly efficient and generally applicable; it can be utilized to obtain active recombinant proteins for both basic research and industrial purposes. PMID:21829569
NASA Technical Reports Server (NTRS)
Ford, Hugh; Turner, C. E.; Fenner, R. T.; Curr, R. M.; Ivankovic, A.
1995-01-01
The objects of the first, exploratory, stage of the project were listed as: (1) to make a detailed and critical review of the Boundary Element method as already published and with regard to elastic-plastic fracture mechanics, to assess its potential for handling present concepts in two-dimensional and three-dimensional cases. To this was subsequently added the Finite Volume method and certain aspects of the Finite Element method for comparative purposes; (2) to assess the further steps needed to apply the methods so far developed to the general field, covering a practical range of geometries, work hardening materials, and composites: to consider their application under higher temperature conditions; (3) to re-assess the present stage of development of the energy dissipation rate, crack tip opening angle and J-integral models in relation to the possibilities of producing a unified technology with the previous two items; and (4) to report on the feasibility and promise of this combined approach and, if appropriate, make recommendations for the second stage aimed at developing a generalized crack growth technology for its application to real-life problems.
Detecting spatial regimes in ecosystems
Research on early warning indicators has generally focused on assessing temporal transitions with limited application of these methods to detecting spatial regimes. Traditional spatial boundary detection procedures that result in ecoregion maps are typically based on ecological ...
Earl, David J; Deem, Michael W
2005-04-14
Adaptive Monte Carlo methods can be viewed as implementations of Markov chains with infinite memory. We derive a general condition for the convergence of a Monte Carlo method whose history dependence is contained within the simulated density distribution. In convergent cases, our result implies that the balance condition need only be satisfied asymptotically. As an example, we show that the adaptive integration method converges.
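A small sketch of the kind of history-dependent sampler the result covers (an illustrative adaptive random-walk Metropolis chain, not the adaptive integration method of the paper): the proposal width is tuned from the running acceptance rate with O(1/n) steps, so the adaptation vanishes and detailed balance holds asymptotically, as the convergence condition requires. The target, seed, and tuning constants are arbitrary.

```python
import math
import random

rng = random.Random(0)

def log_target(x):
    return -0.5 * x * x              # standard normal, up to a constant

x, width = 0.0, 1.0
samples = []
for n in range(1, 50001):
    prop = x + rng.uniform(-width, width)
    delta = log_target(prop) - log_target(x)
    accepted = 0.0
    if delta >= 0 or rng.random() < math.exp(delta):
        x, accepted = prop, 1.0
    # Diminishing adaptation: multiplicative O(1/n) update toward ~44% acceptance.
    # Because the step size decays, the chain's balance condition holds asymptotically.
    width = max(0.05, width * math.exp((accepted - 0.44) / n))
    samples.append(x)

burn = samples[10000:]
mean_est = sum(burn) / len(burn)
var_est = sum((s - mean_est) ** 2 for s in burn) / len(burn)
```

Despite the infinite memory in the proposal width, the post-burn-in samples reproduce the target's mean and variance, the behaviour the theorem guarantees for this class of adaptive schemes.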
A general-purpose machine learning framework for predicting properties of inorganic materials
Ward, Logan; Agrawal, Ankit; Choudhary, Alok; ...
2016-08-26
A very active area of materials research is to devise methods that use machine learning to automatically extract predictive models from existing materials data. While prior examples have demonstrated successful models for some applications, many more applications exist where machine learning can make a strong impact. To enable faster development of machine-learning-based models for such applications, we have created a framework capable of being applied to a broad range of materials data. Our method works by using a chemically diverse list of attributes, which we demonstrate are suitable for describing a wide variety of properties, and a novel method for partitioning the data set into groups of similar materials to boost the predictive accuracy. In this manuscript, we demonstrate how this new method can be used to predict diverse properties of crystalline and amorphous materials, such as band gap energy and glass-forming ability.
Automated interferometric alignment system for paraboloidal mirrors
Maxey, L. Curtis
1993-01-01
A method is described for a systematic method of interpreting interference fringes obtained by using a corner cube retroreflector as an alignment aid when aligning a paraboloid to a spherical wavefront. This is applicable to any general case where such alignment is required, but is specifically applicable in the case of aligning an autocollimating test using a diverging beam wavefront. In addition, the method provides information which can be systematically interpreted such that independent information about pitch, yaw and focus errors can be obtained. Thus, the system lends itself readily to automation. Finally, although the method is developed specifically for paraboloids, it can be seen to be applicable to a variety of other aspheric optics when applied in combination with a wavefront corrector that produces a wavefront which, when reflected from the correctly aligned aspheric surface will produce a collimated wavefront like that obtained from the paraboloid when it is correctly aligned to a spherical wavefront.
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function F: C → C^N, which is analytic at z=0 and meromorphic in a neighborhood of z=0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series, in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N x N matrix that may be diagonalizable or not. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and we present a detailed convergence theory for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. This theory suggests at the same time a new mode of usage for these Krylov subspace methods that were observed to possess computational advantages over their common mode of usage.
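For reference, the classical power method that these procedures generalize is a few lines of code: repeated matrix-vector products converge to the dominant eigenpair (a single eigenvalue, whereas the generalized methods extract several at once). The 3x3 test matrix is invented; its dominant eigenvalue is 3 + √3.

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_method(A, iters=200):
    """Classical power iteration with infinity-norm normalization:
    as v converges to the dominant eigenvector (largest entry 1),
    the largest entry of A v converges to the dominant eigenvalue."""
    v = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        w = matvec(A, v)
        lam = max(w, key=abs)
        v = [x / lam for x in w]
    return lam, v

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
lam, v = power_method(A)
```

Convergence is geometric with ratio |λ₂/λ₁| (here 3/(3 + √3) ≈ 0.63), which is why rational-approximation accelerations of the underlying vector sequence, as in the paper, are worthwhile.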
Minimizing Higgs potentials via numerical polynomial homotopy continuation
NASA Astrophysics Data System (ADS)
Maniatis, M.; Mehta, D.
2012-08-01
The study of models with extended Higgs sectors requires minimizing the corresponding Higgs potentials, which is in general very difficult. Here, we apply a recently developed method, called numerical polynomial homotopy continuation (NPHC), which is guaranteed to find all the stationary points of Higgs potentials with polynomial-like nonlinearity. The detection of all stationary points reveals the structure of the potential, with maxima, metastable minima, and saddle points besides the global minimum. We apply the NPHC method to the most general Higgs potential having two complex Higgs-boson doublets and up to five real Higgs-boson singlets. Moreover, the method is applicable to even more involved potentials. Hence the NPHC method allows one to go far beyond the limits of the Gröbner basis approach.
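The NPHC idea fits in a short sketch for a one-variable toy potential V(φ) = φ⁴ − φ² (a drastically simplified stand-in for a Higgs potential): the roots of the gradient p(z) = V′(z) are tracked from a start system g(z) = z³ − 1 with known roots, using the random "gamma trick" so that, generically, every path reaches a distinct stationary point. The step counts and γ are arbitrary demo choices.

```python
import cmath

# Target: stationary points of V(phi) = phi^4 - phi^2, i.e. roots of V'(z).
def p(z):  return 4 * z ** 3 - 2 * z
def dp(z): return 12 * z ** 2 - 2

# Start system with known roots (the cube roots of unity).
def g(z):  return z ** 3 - 1
def dg(z): return 3 * z ** 2

gamma = cmath.exp(1j * 0.7)        # generic complex constant ("gamma trick")

def track(z, steps=400):
    """Follow one root of H(z, t) = (1 - t)*gamma*g(z) + t*p(z) from t=0 to t=1,
    Newton-correcting at each step of the homotopy parameter."""
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(20):
            h = (1 - t) * gamma * g(z) + t * p(z)
            dh = (1 - t) * gamma * dg(z) + t * dp(z)
            step = h / dh
            z -= step
            if abs(step) < 1e-13:
                break
    return z

start_roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
tracked = [track(z) for z in start_roots]
stationary = sorted(z.real for z in tracked)
# Expect all three stationary points of V: -1/sqrt(2), 0, +1/sqrt(2).
```

Finding all three points (two minima and the local maximum at the origin) is the guarantee that distinguishes homotopy continuation from local minimizers, and it scales to the multivariate potentials treated in the paper.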
Support vector machines for nuclear reactor state estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zavaljevski, N.; Gross, K. C.
2000-02-14
Validation of nuclear power reactor signals is often performed by comparing signal prototypes with the actual reactor signals. The signal prototypes are often computed from empirical data, so the implementation of an estimation algorithm that can make predictions from limited data is an important issue. A machine learning algorithm called support vector machines (SVMs), recently developed by Vladimir Vapnik and his coworkers, enables a high level of generalization with finite high-dimensional data. The improved generalization in comparison with standard methods such as neural networks is due mainly to the following characteristics of the method: the input data space is transformed into a high-dimensional feature space using a kernel function, and the learning problem is formulated as a convex quadratic programming problem with a unique solution. In this paper the authors have applied the SVM method to data-based state estimation in nuclear power reactors. In particular, they implemented and tested kernels developed at Argonne National Laboratory for the Multivariate State Estimation Technique (MSET), a nonlinear, nonparametric estimation technique with a wide range of applications in nuclear reactors. The methodology has been applied to three data sets from experimental and commercial nuclear power reactor applications. The results are promising: the combination of MSET kernels with the SVM method has better noise reduction and generalization properties than the standard MSET algorithm.
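A full SVM requires solving the dual quadratic program; as a hint of the kernel idea it shares with MSET, the sketch below fits a Gaussian-kernel ridge regressor instead — a plain regularized linear solve, not an SVM — on a made-up smooth "signal", with the kernel width and regularization chosen arbitrarily:

```python
import numpy as np

def kernel_ridge_fit(x_train, y_train, sigma=0.5, lam=1e-6):
    """Gaussian-kernel regression: the kernel implicitly maps inputs to a
    high-dimensional feature space; fitting reduces to a linear solve."""
    K = np.exp(-(x_train[:, None] - x_train[None, :])**2 / (2 * sigma**2))
    alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)
    def predict(x):
        k = np.exp(-(x - x_train)**2 / (2 * sigma**2))
        return float(k @ alpha)
    return predict

x = np.linspace(0.0, 2 * np.pi, 25)   # stand-in for a reference signal grid
y = np.sin(x)                         # "prototype" signal to be estimated
predict = kernel_ridge_fit(x, y)      # predict(t) approximates sin(t)
```

The SVM replaces the quadratic penalty with an epsilon-insensitive loss and solves a convex QP, which yields the sparsity and generalization properties the abstract emphasizes; the kernel construction, however, is the same.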
A survey of kernel-type estimators for copula and their applications
NASA Astrophysics Data System (ADS)
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structures, with applications in areas such as finance, insurance, hydrology, and rainfall modeling, to name but a few. The flexibility of copulas allows researchers to model dependence structures beyond the Gaussian distribution. Basically, a copula is a function that couples a multivariate distribution function to its one-dimensional marginal distribution functions. In general, there are three approaches to copula estimation: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas, including the mirror-reflection kernel, the beta kernel, the transformation method, and the local likelihood transformation method. We then apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, despite variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
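The mirror-reflection idea — the simplest boundary correction among those surveyed — is easy to state: each observation on [0, 1] is reflected about both endpoints so a standard kernel estimator no longer leaks probability mass outside the unit interval. A one-dimensional sketch follows (the bandwidth and sample are arbitrary; a copula density estimate applies the same device in each coordinate of the pseudo-observations):

```python
import math, random

def mirror_kde(u_obs, u, h=0.05):
    """Gaussian KDE on [0, 1] with mirror reflection about 0 and 1:
    each observation u_i also contributes images at -u_i and 2 - u_i."""
    K = lambda t: math.exp(-0.5 * t * t) / math.sqrt(2 * math.pi)
    total = sum(K((u - ui) / h) + K((u + ui) / h) + K((u - (2 - ui)) / h)
                for ui in u_obs)
    return total / (len(u_obs) * h)

random.seed(1)
obs = [random.random() for _ in range(2000)]   # uniform pseudo-observations
grid = [i / 200 for i in range(201)]
dens = [mirror_kde(obs, u) for u in grid]
mass = sum(dens) / 200                          # ~ integral of density over [0, 1]
```

Without the reflected images, the estimated density would be biased downward near 0 and 1 and the total mass on [0, 1] would fall short of one.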
Austin, Peter C
2018-01-01
Propensity score methods are increasingly being used to estimate the effects of treatments and exposures when using observational data. The propensity score was initially developed for use with binary exposures (e.g., active treatment vs. control). The generalized propensity score is an extension of the propensity score for use with quantitative exposures (e.g., dose or quantity of medication, income, years of education). A crucial component of any propensity score analysis is that of balance assessment. This entails assessing the degree to which conditioning on the propensity score (via matching, weighting, or stratification) has balanced measured baseline covariates between exposure groups. Methods for balance assessment have been well described and are frequently implemented when using the propensity score with binary exposures. However, there is a paucity of information on how to assess baseline covariate balance when using the generalized propensity score. We describe how methods based on the standardized difference can be adapted for use with quantitative exposures when using the generalized propensity score. We also describe a method based on assessing the correlation between the quantitative exposure and each covariate in the sample when weighted using generalized propensity score-based weights. We conducted a series of Monte Carlo simulations to evaluate the performance of these methods. We also compared two different methods of estimating the generalized propensity score: ordinary least squares regression and the covariate balancing propensity score method. We illustrate the application of these methods using data on patients hospitalized with a heart attack, with the quantitative exposure being creatinine level.
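The correlation-based diagnostic reduces to a weighted Pearson correlation between the quantitative exposure and each covariate, with weights derived from the generalized propensity score; after adequate weighting the correlation should be near zero. A minimal sketch of the weighted-correlation piece (the data are synthetic placeholders, not a propensity analysis):

```python
import math

def weighted_corr(x, y, w):
    """Weighted Pearson correlation between exposure x and covariate y."""
    s = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / s
    my = sum(wi * yi for wi, yi in zip(w, y)) / s
    cov = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y)) / s
    vx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)) / s
    vy = sum(wi * (yi - my) ** 2 for wi, yi in zip(w, y)) / s
    return cov / math.sqrt(vx * vy)

# With equal weights this reduces to the ordinary Pearson correlation
x = [1.0, 2.0, 3.0, 4.0]          # quantitative exposure (illustrative)
y = [1.5, 1.9, 3.2, 3.9]          # baseline covariate (illustrative)
r = weighted_corr(x, y, [1.0] * 4)
```

In a real analysis, `w` would be the stabilized generalized-propensity-score weights, and a nonzero `r` after weighting would flag residual imbalance for that covariate.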
[The present study situation and application prospect of nail analysis for abused drugs].
Chen, Hang; Xiang, Ping; Shen, Min
2010-10-01
In forensic toxicology, different types of biological samples have their own characteristics and scopes of application. In this article, the physiological structure of nails, methods for collecting and pre-processing nail samples, and methods for analyzing poisons and drugs in nails are reviewed in detail. The factors that influence drug incorporation into nails are also introduced, and the prospects for further applications are summarized on the basis of published research results. Nails, as an unconventional biological sample not yet in general use, show great potential and advantages in forensic toxicology.
NASA Technical Reports Server (NTRS)
Barger, R. L.
1980-01-01
A general procedure for computing the region of influence of a maneuvering vehicle is described. Basic differential geometric relations, including the use of a general trajectory parameter and the introduction of auxiliary variables in the envelope theory are presented. To illustrate the application of the method, the destruct region for a maneuvering fighter firing missiles is computed.
Digital cytology: current state of the art and prospects for the future.
Wilbur, David C
2011-01-01
The growth of digital methods in pathology is accelerating. Digital images can be used for a variety of applications in cytology, including rapid interpretations, primary diagnosis and second opinions, continuing education and proficiency testing. All of these functions can be performed using small static digital images, real-time dynamic digital microscopy, or whole-slide images. This review will discuss the general principles of digital pathology, its methods and applications to cytologic specimens. As cytologic specimens have unique features compared to histopathology specimens, the key differences will be discussed. Technical and administrative issues in digital pathology applications and the outlook for the future of the field will be presented. Copyright © 2011 S. Karger AG, Basel.
REVIEW ARTICLE: Spectrophotometric applications of digital signal processing
NASA Astrophysics Data System (ADS)
Morawski, Roman Z.
2006-09-01
Spectrophotometry is more and more often the method of choice not only in analysis of (bio)chemical substances, but also in the identification of physical properties of various objects and their classification. The applications of spectrophotometry include such diversified tasks as monitoring of optical telecommunications links, assessment of eating quality of food, forensic classification of papers, biometric identification of individuals, detection of insect infestation of seeds and classification of textiles. In all those applications, large numbers of data, generated by spectrophotometers, are processed by various digital means in order to extract measurement information. The main objective of this paper is to review the state-of-the-art methodology for digital signal processing (DSP) when applied to data provided by spectrophotometric transducers and spectrophotometers. First, a general methodology of DSP applications in spectrophotometry, based on DSP-oriented models of spectrophotometric data, is outlined. Then, the most important classes of DSP methods for processing spectrophotometric data—the methods for DSP-aided calibration of spectrophotometric instrumentation, the methods for the estimation of spectra on the basis of spectrophotometric data, the methods for the estimation of spectrum-related measurands on the basis of spectrophotometric data—are presented. Finally, the methods for preprocessing and postprocessing of spectrophotometric data are overviewed. Throughout the review, the applications of DSP are illustrated with numerous examples related to broadly understood spectrophotometry.
NASA Astrophysics Data System (ADS)
Szega, Marcin; Nowak, Grzegorz Tadeusz
2013-12-01
In the generalized method of data reconciliation, the condition equations may include, besides substance and energy balances, equations that do not strictly have the status of conservation laws; the empirical coefficients in these equations are treated as unknown values. In the application of the generalized method of data reconciliation to a supercritical power unit, equations of this kind include the steam flow capacity of a turbine stage group, the internal (adiabatic) efficiency of a stage group, equations for pressure drop in pipelines, and equations for heat transfer in regenerative heat exchangers. A mathematical model of the power unit was developed in the Thermoflex code, and off-design calculations were carried out with this model at several load points of the power unit. From these calculations, the unknown values and empirical coefficients of the generalized data reconciliation method for the power unit were identified. The additional condition equations will be used in the generalized method of data reconciliation when optimizing the placement of measurements in the redundant measurement system of the power unit for new control systems.
Computer vision for general purpose visual inspection: a fuzzy logic approach
NASA Astrophysics Data System (ADS)
Chen, Y. H.
In automatic industrial visual inspection, computer vision systems have been widely used. Such systems are often application specific and therefore require domain knowledge for a successful implementation. Since visual inspection can be viewed as a decision-making process, it is argued that integrating fuzzy logic analysis with computer vision systems provides a practical approach to general purpose visual inspection applications. This paper describes the development of an integrated fuzzy-rule-based automatic visual inspection system. Domain knowledge about a particular application is represented as a set of fuzzy rules. From the status of predefined fuzzy variables, the set of fuzzy rules is defuzzified to give the inspection results. A practical application, the inspection of IC marks (often in the form of English characters and a company logo), is demonstrated and shows more consistent results than a conventional thresholding method.
26 CFR 1.468A-1T - Nuclear decommissioning costs; general rules (temporary).
Code of Federal Regulations, 2010 CFR
2010-04-01
... an elective method for taking into account nuclear decommissioning costs for Federal income tax... accrual method of accounting that do not elect the application of section 468A are not allowed a deduction... nuclear power plant means any nuclear power reactor that is used predominantly in the trade or business of...
A quantitative polymerase chain reaction (qPCR) method for the detection of enterococci fecal indicator bacteria has been shown to be generally applicable for the analysis of temperate fresh (Great Lakes) and marine coastal waters and for providing risk-based determinations of wat...
Electrical latching of microelectromechanical devices
Garcia, Ernest J.; Sleefe, Gerard E.
2004-11-02
Methods are disclosed for row and column addressing of an array of microelectromechanical (MEM) devices. The methods of the present invention are applicable to MEM micromirrors or memory elements and allow the MEM array to be programmed and maintained latched in a programmed state with a voltage that is generally lower than the voltage required for electrostatically switching the MEM devices.
Risk management in fly-by-wire systems
NASA Technical Reports Server (NTRS)
Knoll, Karyn T.
1993-01-01
A general description of various types of fly-by-wire systems is provided. The risks inherent in digital flight control systems, like those used in the Space Shuttle, are identified. The results of a literature survey examining risk management methods in use throughout the aerospace industry are presented. The applicability of these methods to the Space Shuttle program is discussed.
Improving Generalization Based on l1-Norm Regularization for EEG-Based Motor Imagery Classification
Zhao, Yuwei; Han, Jiuqi; Chen, Yushu; Sun, Hongji; Chen, Jiayun; Ke, Ang; Han, Yao; Zhang, Peng; Zhang, Yi; Zhou, Jin; Wang, Changyong
2018-01-01
Multichannel electroencephalography (EEG) is widely used in typical brain-computer interface (BCI) systems. In general, a number of parameters are essential for an EEG classification algorithm because of the redundant features involved in EEG signals. However, the generalization of an EEG method is often adversely affected by model complexity, which is closely tied to its number of undetermined parameters and can lead to heavy overfitting. To decrease the complexity and improve the generalization of the EEG method, we present a novel l1-norm-based approach that combines the decision values obtained from each EEG channel directly. By extracting the information from different channels on independent frequency bands (FBs) with l1-norm regularization, the proposed method fits the training data with far fewer parameters than common spatial pattern (CSP) methods, thereby reducing overfitting. Moreover, an effective and efficient solution to minimizing the optimization objective is proposed. The experimental results on dataset IVa of BCI competition III and dataset I of BCI competition IV show that the proposed method achieves high classification accuracy and increases generalization performance for the classification of motor imagery (MI) EEG. As the training set ratio decreases from 80 to 20%, the average classification accuracy on the two datasets changes from 85.86 and 86.13% to 84.81 and 76.59%, respectively. The classification performance and generalization of the proposed method support the practical application of MI-based BCI systems. PMID:29867307
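The sparsifying effect of the l1 penalty that the method relies on can be seen in the smallest possible setting: with an orthonormal design, the l1-regularized least-squares solution is a soft-thresholding of the unpenalized fit, which is the fixed point the ISTA iteration below reaches. The data and penalty here are illustrative, not an EEG pipeline:

```python
def soft(v, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    return [max(abs(c) - t, 0.0) * (1.0 if c > 0 else -1.0) for c in v]

def ista(X, y, lam, iters=100):
    """ISTA for min_b 0.5*||y - X b||^2 + lam*||b||_1 (unit step size,
    valid here because the columns of X are orthonormal)."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(iters):
        r = [y[i] - sum(X[i][j] * b[j] for j in range(p)) for i in range(n)]
        g = [sum(X[i][j] * r[i] for i in range(n)) for j in range(p)]
        b = soft([b[j] + g[j] for j in range(p)], lam)
    return b

# Identity design: coefficient 1 shrinks 2.0 -> 1.5; coefficient 2 is zeroed out
b = ista([[1.0, 0.0], [0.0, 1.0]], [2.0, 0.3], lam=0.5)
```

Zeroing out weak coefficients is exactly how the l1 penalty discards uninformative channels or frequency bands and thereby curbs overfitting.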
The generalized Lyapunov theorem and its application to quantum channels
NASA Astrophysics Data System (ADS)
Burgarth, Daniel; Giovannetti, Vittorio
2007-05-01
We give a simple and physically intuitive necessary and sufficient condition for a map acting on a compact metric space to be mixing (i.e. infinitely many applications of the map transfer any input into a fixed convergence point). This is a generalization of the 'Lyapunov direct method'. First we prove this theorem in topological spaces and for arbitrary continuous maps. We then apply our theorem to maps which are relevant in open quantum systems and quantum information, namely quantum channels. In this context, we also discuss the relations between mixing and ergodicity (i.e. the property that there exists only a single input state which is left invariant by a single application of the map), showing that the two are equivalent when the invariant point of the ergodic map is pure.
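A classical analogue makes the mixing property concrete: repeatedly applying a column-stochastic matrix with strictly positive entries drives every probability vector to the same fixed point. The sketch below is that classical toy case, not a quantum channel; the matrix entries are arbitrary:

```python
def apply_map(P, p):
    """One application of the column-stochastic map p -> P p."""
    return [sum(P[i][j] * p[j] for j in range(len(p))) for i in range(len(P))]

# Strictly positive column-stochastic matrix: a mixing map with a
# unique invariant distribution (here [5/6, 1/6]).
P = [[0.9, 0.5],
     [0.1, 0.5]]
p = [1.0, 0.0]               # arbitrary starting distribution
for _ in range(200):         # "infinitely many applications", truncated
    p = apply_map(P, p)
```

The Lyapunov-style argument of the abstract works the same way: a function that strictly decreases under the map everywhere except at the fixed point forces every orbit to converge there.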
NASA Technical Reports Server (NTRS)
Lakes, R.
1991-01-01
Continuum representations of micromechanical phenomena in structured materials are described, with emphasis on cellular solids. These phenomena are interpreted in light of Cosserat elasticity, a generalized continuum theory which admits degrees of freedom not present in classical elasticity. These are the rotation of points in the material, and a couple per unit area or couple stress. Experimental work in this area is reviewed, and other interpretation schemes are discussed. The applicability of Cosserat elasticity to cellular solids and fibrous composite materials is considered, as is the application of related generalized continuum theories. New experimental results are presented for foam materials with negative Poisson's ratios.
Bioactives from microalgal dinoflagellates.
Gallardo-Rodríguez, J; Sánchez-Mirón, A; García-Camacho, F; López-Rosales, L; Chisti, Y; Molina-Grima, E
2012-01-01
Dinoflagellate microalgae are an important source of marine biotoxins. Bioactives from dinoflagellates are attracting increasing attention because of their impact on the safety of seafood and potential uses in biomedical, toxicological and pharmacological research. Here we review the potential applications of dinoflagellate toxins and the methods for producing them. Only sparing quantities of dinoflagellate toxins are generally available and this hinders bioactivity characterization and evaluation in possible applications. Approaches to production of increased quantities of dinoflagellate bioactives are discussed. Although many dinoflagellates are fragile and grow slowly, controlled culture in bioreactors appears to be generally suitable for producing many of the metabolites of interest. Copyright © 2012 Elsevier Inc. All rights reserved.
Working covariance model selection for generalized estimating equations.
Carey, Vincent J; Wang, You-Gan
2011-11-20
We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-26
... (National SMART Grant), and Teacher Education Assistance for College and Higher Education (TEACH) programs..., 2010, regardless of the method that the applicant uses to submit the FAFSA. The deadline date for the...
Traffic data acquisition and distribution (TDAD)
DOT National Transportation Integrated Search
2002-05-01
The wide variety of remote sensors used in Intelligent Transportation Systems (ITS) applications (loops, : probe vehicles, radar, cameras, etc.) has created a need for general methods by which data can be shared : among agencies and users who own dis...
Olbrant, Edgar; Frank, Martin
2010-12-01
In this paper, we study a deterministic method for particle transport in biological tissues. The method is specifically developed for dose calculations in cancer therapy and for radiological imaging. Generalized Fokker-Planck (GFP) theory [Leakeas and Larsen, Nucl. Sci. Eng. 137 (2001), pp. 236-250] has been developed to improve the Fokker-Planck (FP) equation in cases where scattering is forward-peaked but there is also a sufficient amount of large-angle scattering. We compare grid-based numerical solutions to FP and GFP in realistic medical applications. First, electron dose calculations in heterogeneous parts of the human body are performed; to this end, accurate electron scattering cross sections are included, and their incorporation into our model is described in detail. Second, we solve GFP approximations of the radiative transport equation to investigate the reflectance and transmittance of light in biological tissues. All results are compared with either Monte Carlo or discrete-ordinates transport solutions.
General simulation algorithm for autocorrelated binary processes.
Serinaldi, Francesco; Lombardo, Federico
2017-02-01
The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
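For the exponentially decaying autocorrelation functions the abstract mentions, a two-state Markov chain already realizes any feasible lag-1 correlation exactly, which makes a useful reference point for the general spectrum-based algorithm. A sketch, with the mean and correlation targets picked arbitrarily (this is the simple Markov special case, not the paper's iterative amplitude-adjusted Fourier transform procedure):

```python
import random

def binary_markov(n, p=0.3, rho=0.5, seed=42):
    """Stationary binary Markov chain with mean p and lag-1 autocorrelation
    rho, via transition probabilities P(1|1) = p + rho*(1-p) and
    P(1|0) = p*(1-rho)."""
    rng = random.Random(seed)
    x = [1 if rng.random() < p else 0]
    for _ in range(n - 1):
        prob_one = p + rho * (1 - p) if x[-1] == 1 else p * (1 - rho)
        x.append(1 if rng.random() < prob_one else 0)
    return x

def lag1_autocorr(x):
    n = len(x)
    m = sum(x) / n
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1))
    den = sum((xi - m) ** 2 for xi in x)
    return num / den

x = binary_markov(200_000)
r = lag1_autocorr(x)    # sample estimate, close to the target rho = 0.5
```

Higher-lag correlations of this chain decay as rho**k, i.e. exponentially — the Markov case in the abstract; power-law (Hurst-Kolmogorov) decay is what requires the full spectral construction.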
Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization
NASA Technical Reports Server (NTRS)
Witzberger, Kevin E.; Zeiler, Tom
2012-01-01
This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF), and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in a generalized setting, making it applicable to general trajectory optimization problems. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed into a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver. A detailed description of the optimization software developed is provided, as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF TLI trajectory optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.
A general optimality criteria algorithm for a class of engineering optimization problems
NASA Astrophysics Data System (ADS)
Belegundu, Ashok D.
2015-05-01
An optimality criteria (OC)-based algorithm for the optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems that occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update, or resizing formula, is given physical significance, which brings out a strength-and-trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with a large number of variables.
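The flavor of a fixed-point resizing rule can be shown on the classic separable allocation problem min Σ c_i/x_i subject to Σ x_i = V with x_i > 0, whose KKT conditions give x_i proportional to sqrt(c_i). The update below rescales each variable by its optimality ratio and renormalizes onto the constraint; the problem data are made up, and this full-step update is a simplification of the algorithm in the paper:

```python
import math

def oc_allocate(c, V, iters=50):
    """Optimality-criteria fixed point for min sum(c_i/x_i) s.t. sum(x_i) = V.
    KKT gives c_i/x_i**2 = const, so resize each x_i by the ratio
    sqrt(c_i/x_i**2) and scale back onto the resource constraint."""
    n = len(c)
    x = [V / n] * n                                 # feasible starting point
    for _ in range(iters):
        x = [xi * math.sqrt(ci / xi**2) for ci, xi in zip(c, x)]  # resize
        s = sum(x)
        x = [V * xi / s for xi in x]                # renormalize: sum(x) = V
    return x

# Analytic optimum: x_i = V*sqrt(c_i)/sum_j(sqrt(c_j)) -> [1.0, 2.0] here
x = oc_allocate([1.0, 4.0], V=3.0)
```

No gradients of the full objective are ever assembled: each variable is updated from its own optimality ratio, which is why the function-evaluation count stays independent of the number of variables.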
Direct Optimal Control of Duffing Dynamics
NASA Technical Reports Server (NTRS)
Oz, Hayrani; Ramsey, John K.
2002-01-01
The "direct control method" is a novel concept that is an attractive alternative and competitor to the differential-equation-based methods. The direct method is equally well applicable to nonlinear, linear, time-varying, and time-invariant systems. For all such systems, the method yields explicit closed-form control laws based on minimization of a quadratic control performance measure. We present an application of the direct method to the dynamics and optimal control of the Duffing system where the control performance measure is not restricted to a quadratic form and hence may include a quartic energy term. The results we present in this report also constitute further generalizations of our earlier work in "direct optimal control methodology." The approach is demonstrated for the optimal control of the Duffing equation with a softening nonlinear stiffness.
The local properties of ocean surface waves by the phase-time method
NASA Technical Reports Server (NTRS)
Huang, Norden E.; Long, Steven R.; Tung, Chi-Chao; Donelan, Mark A.; Yuan, Yeli; Lai, Ronald J.
1992-01-01
A new approach using phase information to view and study the properties of frequency modulation, wave group structures, and wave breaking is presented. The method is applied to ocean wave time series data and a new type of wave group (containing the large 'rogue' waves) is identified. The method also has the capability of broad applications in the analysis of time series data in general.
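The phase information at the heart of the method comes from the analytic signal: the instantaneous phase is the angle of x(t) + i*H[x](t), where H is the Hilbert transform, and its time derivative gives the local frequency. A minimal FFT-based version for a pure tone (the signal and sampling rate are illustrative, not ocean-wave data):

```python
import numpy as np

def analytic_signal(x):
    """FFT construction of the analytic signal (even-length input):
    zero the negative frequencies and double the positive ones."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = h[N // 2] = 1.0       # DC and Nyquist kept once
    h[1:N // 2] = 2.0            # positive frequencies doubled
    return np.fft.ifft(X * h)

fs = 256.0
t = np.arange(256) / fs                          # one second of data
x = np.cos(2 * np.pi * 5.0 * t)                  # 5 Hz tone, integer cycles
phase = np.unwrap(np.angle(analytic_signal(x)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)    # ~5 Hz at every sample
```

For a real wave record the instantaneous frequency is no longer constant; its local excursions are what reveal frequency modulation, group structure, and breaking events in the phase-time method.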
Buu, Anne; Johnson, Norman J.; Li, Runze; Tan, Xianming
2011-01-01
Zero-inflated count data are very common in health surveys. This study develops new variable selection methods for the zero-inflated Poisson regression model. Our simulations demonstrate the negative consequences that arise from ignoring zero-inflation. Among the competing methods, the one-step SCAD method is recommended because it has the highest specificity, sensitivity, and exact fit, and the lowest estimation error. The design of the simulations is based on the special features of two large national databases commonly used in the alcoholism and substance abuse field, so that our findings can be readily generalized to real settings. Applications of the methodology are demonstrated by empirical analyses of data from a well-known alcohol study. PMID:21563207
Johnston, K M; Gustafson, P; Levy, A R; Grootendorst, P
2008-04-30
A major, often unstated, concern of researchers carrying out epidemiological studies of medical therapy is the potential impact on validity if estimates of treatment effects are biased due to unmeasured confounders. One technique for obtaining consistent estimates of treatment effects in the presence of unmeasured confounders is instrumental variables analysis (IVA). This technique has been well developed in the econometrics literature and is being increasingly used in epidemiological studies. However, the approach to IVA that is most commonly used in such studies is based on linear models, while many epidemiological applications make use of non-linear models, specifically generalized linear models (GLMs) such as logistic or Poisson regression. Here we present a simple method for applying IVA within the class of GLMs using the generalized method of moments approach. We explore some of the theoretical properties of the method and illustrate its use within both a simulation example and an epidemiological study where unmeasured confounding is suspected to be present. We estimate the effects of beta-blocker therapy on one-year all-cause mortality after an incident hospitalization for heart failure, in the absence of data describing disease severity, which is believed to be a confounder. Copyright © 2008 John Wiley & Sons, Ltd.
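The instrumental-variables logic is easiest to see in the linear special case (the paper's contribution is extending it to GLMs via the generalized method of moments): with a valid instrument z that affects the exposure x but is independent of the confounder, the ratio cov(z, y)/cov(z, x) is consistent for the treatment effect even though ordinary regression of y on x is biased. A simulated sketch with all data synthetic:

```python
import random

random.seed(0)
n = 100_000
beta = 2.0                                  # true treatment effect
z, x, y = [], [], []
for _ in range(n):
    zi = random.gauss(0, 1)                 # instrument: drives x only
    ui = random.gauss(0, 1)                 # unmeasured confounder
    xi = zi + ui + random.gauss(0, 1)       # exposure, contaminated by u
    z.append(zi); x.append(xi)
    y.append(beta * xi + ui + random.gauss(0, 1))   # outcome, also hit by u

def cov(a, b):
    ma, mb = sum(a) / n, sum(b) / n
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / n

b_ols = cov(x, y) / cov(x, x)   # biased upward by the confounder (~2.33 here)
b_iv = cov(z, y) / cov(z, x)    # instrumental-variables estimate (~2.0)
```

The GMM formulation in the paper replaces the linear moment condition E[z(y - beta*x)] = 0 with the analogous condition built from the GLM mean function, which is what makes the idea carry over to logistic or Poisson regression.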
Shape optimization using a NURBS-based interface-enriched generalized FEM
Najafi, Ahmad R.; Safdari, Masoud; Tortorelli, Daniel A.; ...
2016-11-26
This study presents a gradient-based shape optimization over a fixed mesh using a non-uniform rational B-splines (NURBS)-based interface-enriched generalized finite element method, applicable to multi-material structures. In the proposed method, NURBS are used to parameterize the design geometry precisely and compactly with a small number of design variables. An analytical shape sensitivity analysis is developed to compute derivatives of the objective and constraint functions with respect to the design variables. Subtle but important new terms involve the sensitivity of the shape functions and their spatial derivatives. Finally, verification and illustrative problems are solved to demonstrate the precision and capability of the method.
A physical model for the acousto-ultrasonic method. Ph.D. Thesis Final Report
NASA Technical Reports Server (NTRS)
Kiernan, Michael T.; Duke, John C., Jr.
1990-01-01
A basic physical explanation, a model, and comments on NDE applications of the acousto-ultrasonic (AU) method for composite materials are presented. The basis of this work is a set of experiments in which a sending and a receiving piezoelectric transducer were both oriented normal to the surface, at different points, on aluminum plates, various composite plates, and a tapered aluminum plate. The purpose and basic idea are introduced, and general comments on the AU method are offered. A literature review covers pertinent areas such as composite materials, wave propagation, ultrasonics, and the AU method, with special emphasis on theory used later in the work and on past experimental results that are important to the physical understanding of the AU method. The experimental setup, procedure, and ensuing analysis are described, and the experimental results are presented in both a quantitative and a qualitative manner. A physical understanding of the experimental results based on an elasticity solution is furnished. Modeling and applications of the AU method are discussed for composite materials, and general conclusions are stated. The physical model of the AU method for composite materials is offered, something which has been much needed and sorely lacking. This physical understanding is possible owing to the extensive set of experimental measurements, also reported.
Qian, Cheng; Kovalchik, Kevin A; MacLennan, Matthew S; Huang, Xiaohua; Chen, David D Y
2017-06-01
Capillary electrophoresis frontal analysis (CE-FA) can be used to determine the binding affinity of molecular interactions. However, its current data processing methods impose specific requirements on the mobilities of the binding pair in order to obtain accurate binding constants. This work shows that significant errors result when the mobilities of the interacting species do not meet these requirements; the applicability of CE-FA in many real-world applications therefore becomes questionable. An electrophoretic mobility-based correction method is developed in this work based on the flux of each species. A simulation program and a pair of model compounds are used to verify the new equations and evaluate the effectiveness of the method. Ibuprofen and hydroxypropyl-β-cyclodextrin are used to demonstrate the differences in the binding constant obtained by CE-FA when different calculation methods are used, and the results are compared with those obtained by affinity capillary electrophoresis (ACE). The results suggest that CE-FA, with the mobility-based correction method, can be a generally applicable method for a much wider range of applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Generalized contact and improved frictional heating in the material point method
NASA Astrophysics Data System (ADS)
Nairn, J. A.; Bardenhagen, S. G.; Smith, G. D.
2017-09-01
The material point method (MPM) has proved to be an effective particle method for computational mechanics modeling of problems involving contact, but all prior applications have been limited to Coulomb friction. This paper generalizes the MPM approach to contact to handle any friction law, with examples given for friction with adhesion and for a velocity-dependent coefficient of friction. Accounting for adhesion requires an extra calculation to evaluate the contact area. Implementation of velocity-dependent laws usually needs numerical methods to find the contact forces. The friction process involves work which can be converted into heat. This paper provides a new method for calculating frictional heating that accounts for interfacial acceleration during the time step. The acceleration term is small for many problems, but temporal convergence of heating effects for problems involving vibrations and high contact forces is improved by the new method. Fortunately, the new method needs few extra calculations and is therefore recommended for all simulations.
NASA Technical Reports Server (NTRS)
Preiswerk, Ernst
1940-01-01
The subject is treated in sufficient detail to make its application as easy as possible for the engineer less familiar with it. The present work was undertaken with two objects in view. First, it is intended as a contribution to the water analogy of gas flows; second, a large portion is devoted to the general theory of two-dimensional supersonic flows.
Fifth Conference on Artificial Intelligence for Space Applications
NASA Technical Reports Server (NTRS)
Odell, Steve L. (Compiler)
1990-01-01
The Fifth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: automation for Space Station; intelligent control, testing, and fault diagnosis; robotics and vision; planning and scheduling; simulation, modeling, and tutoring; development tools and automatic programming; knowledge representation and acquisition; and knowledge base/data base integration.
1997-09-30
research is multiscale, interdisciplinary and generic. The methods are applicable to an arbitrary region of the coastal and/or deep ocean and across the...dynamics. OBJECTIVES: General objectives are: (I) To determine for the coastal and/or coupled deep ocean the multiscale processes which occur: i) in...Straits and the eastern basin; iii) extension and application of our balance of terms scheme (EVA) to multiscale, interdisciplinary fields with data
Fischer, Marco
2013-01-01
Quantitative assessment of growth of filamentous microorganisms, such as streptomycetes, is generally restricted to determination of dry weight. Here, we describe a straightforward methylene blue-based sorption assay to monitor microbial growth quantitatively, simply, and rapidly. The assay is equally applicable to unicellular and filamentous bacterial and eukaryotic microorganisms. PMID:23666340
Techniques for Microwave Imaging.
1981-01-18
reduce cross-range sidelobes in the subsequent FFT and the array was padded with 64 additional rows containing zeros. The configuration of the array is...of microwave imagery obtained by synthetic aperture processing described in reference 1-2. This type of image, generated by processing radar data...1,000 wavelengths. Although these are the intended applications, the imaging methods considered have general applicability to environments outside
Application of artificial intelligence to impulsive orbital transfers
NASA Technical Reports Server (NTRS)
Burns, Rowland E.
1987-01-01
A generalized technique for the numerical solution of any given class of problems is presented. The technique requires the analytic (or numerical) solution of every applicable equation for all variables that appear in the problem. Conditional blocks are employed to rapidly expand the set of known variables from a minimum of input. The method is illustrated via the use of the Hohmann transfer problem from orbital mechanics.
John D. Shaw; James N. Long
2010-01-01
Reineke's Stand Density Index (SDI) has been available to silviculturists for over 75 years, but application of this stand metric has been inconsistent. Originally described as a measurement of relative density in single-species, even-aged stands, it has since been generalized for use in uneven-aged stands and mixed-species stands. However, methods used to establish...
A new method of passive modifications for partial frequency assignment of general structures
NASA Astrophysics Data System (ADS)
Belotti, Roberto; Ouyang, Huajiang; Richiedei, Dario
2018-01-01
The assignment of a subset of natural frequencies to vibrating systems can be conveniently achieved by means of suitable structural modifications. It has been observed that such an approach usually leads to the undesired change of the unassigned natural frequencies, which is a phenomenon known as frequency spill-over. Such an issue has been dealt with in the literature only in simple specific cases. In this paper, a new and general method is proposed that aims to assign a subset of natural frequencies with low spill-over. The optimal structural modifications are determined through a three-step procedure that considers both the prescribed eigenvalues and the feasibility constraints, assuring that the obtained solution is physically realizable. The proposed method is therefore applicable to very general vibrating systems, such as those obtained through the finite element method. The numerical difficulties that may occur as a result of employing the method are also carefully addressed. Finally, the capabilities of the method are validated in three test-cases in which both lumped and distributed parameters are modified to obtain the desired eigenvalues.
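The core computation described above, choosing a structural modification so that a prescribed natural frequency is attained, can be sketched on a toy two-degree-of-freedom system. The matrices, the modified coordinate, and the use of simple root-finding (rather than the paper's three-step procedure) are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import brentq

# Toy 2-DOF spring-mass system (assumed values): add a mass dm at
# coordinate 1 so that the first natural frequency becomes 1.0 rad/s.
M0 = np.diag([1.0, 1.0])
K = np.array([[3.0, -1.0],
              [-1.0, 2.0]])
target = 1.0  # desired first natural frequency, rad/s

def first_freq(dm):
    """First natural frequency after adding dm to the first diagonal mass."""
    M = M0 + np.diag([dm, 0.0])
    w2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
    return np.sqrt(w2[0])

# Adding mass lowers the frequency, so a sign change is bracketed in [0, 10]
dm = brentq(lambda d: first_freq(d) - target, 0.0, 10.0)
```

The unmodified system has a first frequency above 1 rad/s, so a positive added mass exists that meets the target exactly.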
Statistical inference for template aging
NASA Astrophysics Data System (ADS)
Schuckers, Michael E.
2006-04-01
A change in classification error rates for a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first of these is the use of a generalized linear model to determine if these error rates change linearly over time. This approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimation, not the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1. The results of these applications are discussed.
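The second approach, a likelihood ratio test of whether error rates differ across time periods, can be sketched with a simple binomial model. The two-period counts below are invented for illustration and are not the NIST score-set data.

```python
import numpy as np
from scipy.stats import chi2

def lr_test_error_rates(errors, trials):
    """Likelihood-ratio test for a change in a binomial error rate.
    Null model: one shared rate across all periods.
    Alternative: a separate rate per period."""
    errors = np.asarray(errors, float)
    trials = np.asarray(trials, float)
    p_each = errors / trials                  # per-period MLEs
    p_pool = errors.sum() / trials.sum()      # pooled MLE under the null

    def loglik(p, k, n):
        return k * np.log(p) + (n - k) * np.log(1.0 - p)

    ll_alt = loglik(p_each, errors, trials).sum()
    ll_null = loglik(p_pool, errors, trials).sum()
    stat = 2.0 * (ll_alt - ll_null)           # ~ chi-square with (periods - 1) df
    return stat, chi2.sf(stat, df=len(errors) - 1)

# Hypothetical counts: 30/1000 errors early, 55/1000 errors later
stat, pval = lr_test_error_rates(errors=[30, 55], trials=[1000, 1000])
```

For these invented counts the test rejects the no-aging null at the 5% level.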
Yu, Yi-Kuo
2003-08-15
The exact analytical result for a class of integrals involving (associated) Legendre polynomials of complicated argument is presented. The method employed can in principle be generalized to integrals involving other special functions. This class of integrals also proves useful in the electrostatic problems in which dielectric spheres are involved, which is of importance in modeling the dynamics of biological macromolecules. In fact, with this solution, a more robust foundation is laid for the Generalized Born method in modeling the dynamics of biomolecules. © 2003 Elsevier B.V. All rights reserved.
Generalized approach to cooling charge-coupled devices using thermoelectric coolers
NASA Technical Reports Server (NTRS)
Petrick, S. Walter
1987-01-01
This paper is concerned with the use of thermoelectric coolers (TECs) to cool charge-coupled devices (CCDs). Heat inputs to the CCD from the warmer environment are identified, and generalized graphs are used to approximate the major heat inputs. A method of choosing and estimating the power consumption of the TEC is discussed. This method includes the use of TEC performance information supplied by the manufacturer and equations derived from this information. Parameters of the equations are tabulated to enable the reader to use the TEC performance equations for choosing and estimating the power needed for specific TEC applications.
Multifunctional Collaborative Modeling and Analysis Methods in Engineering Science
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Broduer, Steve (Technical Monitor)
2001-01-01
Engineers are challenged to produce better designs in less time and for less cost. Hence, to investigate novel and revolutionary design concepts, accurate, high-fidelity results must be assimilated rapidly into the design, analysis, and simulation process. This assimilation should consider diverse mathematical modeling and multi-discipline interactions necessitated by concepts exploiting advanced materials and structures. Integrated high-fidelity methods with diverse engineering applications provide the enabling technologies to assimilate these high-fidelity, multi-disciplinary results rapidly at an early stage in the design. These integrated methods must be multifunctional, collaborative, and applicable to the general field of engineering science and mechanics. Multifunctional methodologies and analysis procedures are formulated for interfacing diverse subdomain idealizations, including multi-fidelity modeling methods and multi-discipline analysis methods. These methods, based on the method of weighted residuals, ensure accurate compatibility of primary and secondary variables across the subdomain interfaces. Methods are developed using diverse mathematical modeling (i.e., finite difference and finite element methods) and multi-fidelity modeling among the subdomains. Several benchmark scalar-field and vector-field problems in engineering science are presented with extensions to multidisciplinary problems. Results for all problems presented are in overall good agreement with the exact analytical solution or the reference numerical solution. Based on the results, the integrated modeling approach using the finite element method for multi-fidelity discretization among the subdomains is identified as most robust. The multiple-method approach is advantageous when interfacing diverse disciplines in which each method's strengths are utilized.
The multifunctional methodology presented provides an effective mechanism by which domains with diverse idealizations are interfaced. This capability rapidly provides the high-fidelity results needed in the early design phase. Moreover, the capability is applicable to the general field of engineering science and mechanics. Hence, it provides a collaborative capability that accounts for interactions among engineering analysis methods.
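A minimal stand-in for the subdomain-interfacing idea above: solve u'' = -1 on [0, 1] with two finite-difference subdomains of different resolution meeting at x = 0.5. Matching value and flux at the interface reduces, in this 1D case, to the standard nonuniform three-point stencil there; the paper's weighted-residual machinery generalizes this to mixed FE/FD and multi-discipline couplings. The grid sizes are assumptions.

```python
import numpy as np

# Coarse left subdomain, fine right subdomain, interface at x = 0.5
ha, hb = 0.5 / 8, 0.5 / 16
xs = np.r_[np.arange(0.0, 0.5, ha), np.arange(0.5, 1.0 + hb / 2, hb)]
n = len(xs)

A = np.zeros((n, n))
b = np.full(n, -1.0)                 # right-hand side of u'' = -1
A[0, 0] = A[-1, -1] = 1.0            # Dirichlet conditions u(0) = u(1) = 0
b[0] = b[-1] = 0.0
for i in range(1, n - 1):
    hl, hr = xs[i] - xs[i - 1], xs[i + 1] - xs[i]
    # Nonuniform second-derivative stencil; at the interface node this is
    # exactly the value-and-flux continuity condition between subdomains.
    A[i, i - 1] = 2.0 / (hl * (hl + hr))
    A[i, i] = -2.0 / (hl * hr)
    A[i, i + 1] = 2.0 / (hr * (hl + hr))

u = np.linalg.solve(A, b)
err = np.max(np.abs(u - xs * (1.0 - xs) / 2.0))   # exact solution is quadratic
```

Because the exact solution is quadratic and the stencil is exact for quadratics, the coupled two-resolution solution reproduces it to rounding error.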
A Comparative Study of Registration Methods for RGB-D Video of Static Scenes
Morell-Gimenez, Vicente; Saval-Calvo, Marcelo; Azorin-Lopez, Jorge; Garcia-Rodriguez, Jose; Cazorla, Miguel; Orts-Escolano, Sergio; Fuster-Guillo, Andres
2014-01-01
The use of RGB-D sensors for mapping and recognition tasks in robotics or, in general, for virtual reconstruction has increased in recent years. The key aspect of these kinds of sensors is that they provide both depth and color information using the same device. In this paper, we present a comparative analysis of the most important methods used in the literature for the registration of subsequent RGB-D video frames in static scenarios. The analysis begins by explaining the characteristics of the registration problem, dividing it into two representative applications: scene modeling and object reconstruction. Then, a detailed experimentation is carried out to determine the behavior of the different methods depending on the application. For both applications, we used standard datasets and a new one built for object reconstruction. PMID:24834909
Sardinha, Ana Gabriella de Oliveira; Oyama, Ceres Nunes de Resende; de Mendonça Maroja, Armando; Costa, Ivan F
2016-01-01
The aim of this paper is to provide a general discussion, algorithm, and actual working programs of the deformation method for fast simulation of biological tissue formed by fibers and fluid. In order to demonstrate the benefit of the clinical applications software, we successfully used our computational program to deform a 3D breast image acquired from patients, using a 3D scanner, in a real hospital environment. The method implements a quasi-static solution for elastic global deformations of objects. Each pair of vertices of the surface is connected and defines an elastic fiber. The set of all the elastic fibers defines a mesh of smaller size than the volumetric meshes, allowing for simulation of complex objects with less computational effort. The behavior similar to the stress tensor is obtained by the volume conservation equation that mixes the 3D coordinates. Step by step, we show the computational implementation of this approach. As an example, a 2D rectangle formed by only 4 vertices is solved and, for this simple geometry, all intermediate results are shown. On the other hand, actual implementations of these ideas in the form of working computer routines are provided for general 3D objects, including a clinical application.
Pelat, Camille; Bonmarin, Isabelle; Ruello, Marc; Fouillet, Anne; Caserio-Schönemann, Céline; Levy-Bruhl, Daniel; Le Strat, Yann
2017-08-10
The 2014/15 influenza epidemic caused a work overload for healthcare facilities in France. The French national public health agency announced the start of the epidemic - based on indicators aggregated at the national level - too late for many hospitals to prepare. It was therefore decided to improve the influenza alert procedure through (i) the introduction of a pre-epidemic alert level to better anticipate future outbreaks, (ii) the regionalisation of surveillance so that healthcare structures can be informed of the arrival of epidemics in their region, (iii) the standardised use of data sources and statistical methods across regions. A web application was developed to deliver statistical results of three outbreak detection methods applied to three surveillance data sources: emergency departments, emergency general practitioners and sentinel general practitioners. This application was used throughout the 2015/16 influenza season by the epidemiologists of the headquarters and regional units of the French national public health agency. It allowed them to signal the first influenza epidemic alert in week 2016-W03, in Brittany, with 11 other regions in pre-epidemic alert. This application received positive feedback from users and was pivotal for coordinating surveillance across the agency's regional units. This article is copyright of The Authors, 2017.
NASA Astrophysics Data System (ADS)
Pescaru, A.; Oanta, E.; Axinte, T.; Dascalescu, A.-D.
2015-11-01
Computer aided engineering is based on models of phenomena which are expressed as algorithms. The implementations of the algorithms are usually software applications which process a large volume of numerical data, regardless of the size of the input data. In this way, finite element method applications used to have an input data generator which created the entire volume of geometrical data, starting from the initial geometrical information and the parameters stored in the input data file. Moreover, there were several data processing stages, such as: renumbering of the nodes to minimize the bandwidth of the system of equations to be solved, computation of the equivalent nodal forces, computation of the element stiffness matrices, assembly of the system of equations, solving the system of equations, and computation of the secondary variables. Modern software applications use pre-processing and post-processing programs to handle the information easily. Beyond this example, CAE applications use various stages of complex computation, the accuracy of the final results being of particular interest. Over time, the development of CAE applications was a constant concern of the authors, and the accuracy of the results was a very important target. The paper presents the various computing techniques which were devised and implemented in the resulting applications: finite element method programs, finite difference element method programs, applied general numerical methods applications, data generators, graphical applications, and experimental data reduction programs. In this context, the use of extended precision data types was one of the solutions, the limitations being imposed by the size of the memory which may be allocated. To avoid the memory-related problems, the data was stored in files. To minimize the execution time, part of the file was accessed using dynamic memory allocation facilities.
One of the most important outcomes of the paper is the design of a library which includes the optimized solutions previously tested, and which may be used for the easy development of original cross-platform CAE applications. Last but not least, beside the generality of the data type solutions, the work targets the development of a software library for the easy development of node-based CAE applications, each node having several known or unknown parameters, with the system of equations being automatically generated and solved.
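Why extended precision matters in such codes can be shown in a few lines: sequentially accumulating many small terms, as happens when assembling large systems of equations, drifts badly in single precision but not with a wider accumulator. The specific count and value below are illustrative.

```python
import numpy as np

# One million additions of 0.1, accumulated sequentially.
n = 1_000_000
x = np.full(n, 0.1, dtype=np.float32)

s32 = np.cumsum(x, dtype=np.float32)[-1]  # sequential float32 accumulation
s64 = np.cumsum(x, dtype=np.float64)[-1]  # same data, double-precision accumulator

drift32 = abs(float(s32) - 100000.0)      # large: rounding error compounds
drift64 = abs(float(s64) - 100000.0)      # tiny: only the float32 value of 0.1 remains
```

The float32 accumulator drifts by hundreds of units, while the float64 accumulator is off only by the inherent representation error of 0.1 in single precision; the same reasoning motivates the paper's extended-precision and file-backed storage strategies.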
Improved 3D live-wire method with application to 3D CT chest image analysis
NASA Astrophysics Data System (ADS)
Lu, Kongkuo; Higgins, William E.
2006-03-01
The definition of regions of interest (ROIs), such as suspect cancer nodules or lymph nodes in 3D CT chest images, is often difficult because of the complexity of the phenomena that give rise to them. Manual slice tracing has been used widely for years for such problems, because it is easy to implement and guaranteed to work. But the manual method is extremely time-consuming, especially for high-resolution 3D images which may have hundreds of slices, and it is subject to operator biases. Numerous automated image-segmentation methods have been proposed, but they are generally strongly application dependent, and even the "most robust" methods have difficulty in defining complex anatomical ROIs. To address this problem, the semi-automatic interactive paradigm referred to as "live wire" segmentation has been proposed by researchers. In live-wire segmentation, the human operator interactively defines an ROI's boundary guided by an active automated method which suggests what to define. This process in general is far faster, more reproducible, and more accurate than manual tracing, while, at the same time, permitting the definition of complex ROIs having ill-defined boundaries. We propose a 2D live-wire method employing an improved cost function over previous works. In addition, we define a new 3D live-wire formulation that enables rapid definition of 3D ROIs. The method only requires the human operator to consider a few slices in general. Experimental results indicate that the new 2D and 3D live-wire approaches are efficient, allow for high reproducibility, and are reliable for 2D and 3D object segmentation.
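The live-wire core is a shortest-path search over a per-pixel cost image, with low cost along strong edges. The sketch below uses Dijkstra's algorithm on a toy cost map; the cost map itself and the 4-connectivity are simplifying assumptions standing in for the paper's improved cost function.

```python
import heapq
import numpy as np

def live_wire_path(cost, start, end):
    """Dijkstra shortest path over a per-pixel cost image (live-wire core)."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    pq = [(cost[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], end                  # walk back from end to start
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Toy cost image: a cheap "edge" along row 2 that the wire should follow
cost = np.full((5, 5), 10.0)
cost[2, :] = 1.0
path = live_wire_path(cost, (2, 0), (2, 4))
```

The returned path hugs the low-cost row, which is exactly the behavior an operator exploits when the wire snaps to an anatomical boundary.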
Development of a general method for obtaining the geometry of microfluidic networks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Razavi, Mohammad Sayed, E-mail: m.sayedrazavi@gmail.com; Salimpour, M. R.; Shirani, Ebrahim
2014-01-15
In the present study, a general method for obtaining the geometry of fluidic networks is developed, with emphasis on pressure-driven flows in microfluidic applications. The design method is based on general features of the network's geometry, such as the cross-sectional area and length of the channels, and is applicable to various cross-sectional shapes including circular, rectangular, triangular, and trapezoidal cross sections. Using constructal theory, the flow resistance, energy loss, and performance of the network are optimized. The method also improves practical design strategies for the fabrication of microfluidic networks. It enables rapid prediction of fluid flow in complex networks of channels and is very useful for proper miniaturization and integration of microfluidic networks. Minimization of the flow resistance of the network of channels leads to universal constants for consecutive cross-sectional areas and lengths. For a Y-shaped network, the optimal ratios of consecutive cross-sectional areas (A_{i+1}/A_i) and lengths (L_{i+1}/L_i) are obtained as A_{i+1}/A_i = 2^{-2/3} and L_{i+1}/L_i = 2^{-1/3}, respectively. It is shown that the energy loss in the network is proportional to the volume of the network. It is also seen that when the number of channels is increased, both the hydraulic resistance and the volume occupied by the network increase in a similar manner. Furthermore, the method suggests that fabrication of multi-depth and multi-width microchannels should be considered an integral part of the design procedure. Finally, numerical simulations of the fluid flow in the network have been performed, and the results show very good agreement with the analytic results.
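The area ratio 2^{-2/3} can be recovered numerically from first principles: for a Poiseuille-type channel the hydraulic resistance scales as L/A^2 for a fixed cross-sectional shape, so minimizing the resistance of one parent feeding two identical daughters at fixed total volume reproduces the constant. The lengths and volume below are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Parent channel (L0, A0) feeding two identical daughters (L1, A1) in
# parallel; total channel volume held fixed. Illustrative numbers:
L0, L1, V = 1.0, 0.8, 1.0

def resistance(A0):
    A1 = (V - L0 * A0) / (2.0 * L1)        # volume constraint fixes A1
    if A1 <= 0:
        return np.inf
    # R ~ L/A^2 per channel; two daughters in parallel halve their term
    return L0 / A0**2 + 0.5 * L1 / A1**2

res = minimize_scalar(resistance, bounds=(1e-3, V / L0 - 1e-3), method="bounded")
A0_opt = res.x
A1_opt = (V - L0 * A0_opt) / (2.0 * L1)
ratio = A1_opt / A1_opt * (A1_opt / A0_opt)  # daughter-to-parent area ratio
```

The optimizer lands on A1/A0 = 2^{-2/3} ≈ 0.63 regardless of the particular lengths chosen, matching the universal constant quoted above.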
Cambered Jet-Flapped Airfoil Theory with Tables and Computer Programs for Application.
1977-09-01
influence function which is a parametric function of the jet-momentum coefficient. In general, the integrals involved must be evaluated by numerical methods. Tables of the necessary influence functions are given in the report.
Petersson, N. Anders; Sjogreen, Bjorn
2015-07-20
We develop a fourth order accurate finite difference method for solving the three-dimensional elastic wave equation in general heterogeneous anisotropic materials on curvilinear grids. The proposed method is an extension of the method for isotropic materials previously described by Sjögreen and Petersson (2012) [11]. The proposed method discretizes the anisotropic elastic wave equation in second order formulation, using a node centered finite difference method that satisfies the principle of summation by parts. The summation by parts technique results in a provably stable numerical method that is energy conserving. We also generalize and evaluate the super-grid far-field technique for truncating unbounded domains. Unlike the commonly used perfectly matched layers (PML), the super-grid technique is stable for general anisotropic materials, because it is based on a coordinate stretching combined with an artificial dissipation. Moreover, the discretization satisfies an energy estimate, proving that the numerical approximation is stable. We demonstrate by numerical experiments that sufficiently wide super-grid layers result in very small artificial reflections. Applications of the proposed method are demonstrated by three-dimensional simulations of anisotropic wave propagation in crystals.
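The summation-by-parts (SBP) property underlying the stability proof can be verified directly on a small operator. The sketch below builds the classical second-order SBP first-derivative pair (D, H), a low-order analogue of the fourth-order operators the paper uses, and checks the defining identity HD + (HD)^T = B, where B carries only boundary terms.

```python
import numpy as np

def sbp_first_derivative(n, h):
    """Second-order SBP first-derivative operator D with diagonal norm H:
    central differences inside, one-sided stencils at the boundaries,
    H = h * diag(1/2, 1, ..., 1, 1/2)."""
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h
    D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h
    H = h * np.diag(np.r_[0.5, np.ones(n - 2), 0.5])
    return D, H

n, h = 20, 0.1
D, H = sbp_first_derivative(n, h)

# SBP identity: H D + (H D)^T equals the boundary matrix diag(-1, 0, ..., 0, 1)
B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0
sbp_residual = np.linalg.norm(H @ D + (H @ D).T - B)
```

Because the identity mimics integration by parts discretely, an energy estimate for the semi-discrete wave equation follows with only boundary contributions, which is the mechanism behind the provable stability claimed above.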
NASA Technical Reports Server (NTRS)
Phillips, K.
1976-01-01
A mathematical model for job scheduling in a specified context is presented. The model uses both linear programming and combinatorial methods. While designed with a view toward optimization of scheduling of facility and plant operations at the Deep Space Communications Complex, the context is sufficiently general to be widely applicable. The general scheduling problem including options for scheduling objectives is discussed and fundamental parameters identified. Mathematical algorithms for partitioning problems germane to scheduling are presented.
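The linear-programming side of such a scheduling model can be sketched with a toy instance: assign three jobs to two facilities at minimum cost subject to per-facility capacity. The costs, hours, and capacities below are invented for illustration and are unrelated to the Deep Space Communications Complex data.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variable x[2*j + f] = fraction of job j processed at facility f.
cost = np.array([4.0, 6.0,    # job 0 at facility 0 / facility 1
                 3.0, 2.0,    # job 1
                 5.0, 4.0])   # job 2
hours = np.array([2.0, 1.0, 3.0])   # hours each job requires

# Each job must be fully assigned (one equality row per job)
A_eq = np.zeros((3, 6))
for j in range(3):
    A_eq[j, 2 * j:2 * j + 2] = 1.0
b_eq = np.ones(3)

# Facility capacity: total hours routed to each facility <= 4
A_ub = np.zeros((2, 6))
for j in range(3):
    A_ub[0, 2 * j] = hours[j]
    A_ub[1, 2 * j + 1] = hours[j]
b_ub = np.array([4.0, 4.0])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
```

For this instance the cheapest facility for each job happens to fit within capacity, so the LP relaxation is integral with total cost 10; the combinatorial partitioning methods in the report handle the cases where it is not.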
NASA Technical Reports Server (NTRS)
Lan, C. Edward; Ge, Fuying
1989-01-01
Control system design for general nonlinear flight dynamic models is considered through numerical simulation. The design is accomplished through a numerical optimizer coupled with analysis of the flight dynamic equations. The general flight dynamic equations are numerically integrated, and dynamic characteristics are then identified from the dynamic response. The design variables are determined iteratively by the optimizer to optimize a prescribed objective function which is related to desired dynamic characteristics. The generality of the method allows nonlinear effects in aerodynamics and dynamic coupling to be considered in the design process. To demonstrate the method, nonlinear simulation models for F-5A and F-16 configurations are used to design dampers to satisfy specifications on flying qualities and control systems to prevent departure. The results indicate that the present method is simple in formulation and effective in satisfying the design objectives.
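The optimizer-in-the-loop idea above can be sketched on the smallest possible analogue: pick a damper gain so that the identified dynamic characteristic (here, the damping ratio of a second-order mode) meets a specification. The plant, the target damping ratio, and the use of a scalar optimizer are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

omega = 2.0          # natural frequency of the mode, rad/s (assumed)
target_zeta = 0.7    # desired damping ratio (assumed specification)

def damping_error(k):
    """Objective: squared miss between achieved and desired damping ratio
    for xdot = A(k) x with damper gain k."""
    A = np.array([[0.0, 1.0],
                  [-omega**2, -k]])
    lam = np.linalg.eigvals(A)[0]       # one of the complex conjugate pair
    zeta = -lam.real / abs(lam)         # identified damping ratio of the mode
    return (zeta - target_zeta) ** 2

# Keep k below 2*omega so the mode stays oscillatory
res = minimize_scalar(damping_error, bounds=(0.1, 2.0 * omega - 1e-3),
                      method="bounded")
k_opt = res.x
```

For this linear toy the analytic answer is k = 2*zeta*omega = 2.8, and the optimizer recovers it; the paper's contribution is doing the same loop around full nonlinear simulations where no closed form exists.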
The application of contraction theory to an iterative formulation of electromagnetic scattering
NASA Technical Reports Server (NTRS)
Brand, J. C.; Kauffman, J. F.
1985-01-01
Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures, and a computational method for ensuring convergence is developed. A short history of the spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. To ensure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined using contraction theory, and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems, including an infinite grating of thin wires, with the solution data compared to previous works.
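The essence of forcing a divergent iteration to converge can be shown on a minimal linear analogue: solve x = c + Mx where the plain fixed-point iteration diverges because the spectral radius of M exceeds 1, but a relaxed iteration contracts. The operator, right-hand side, and relaxation parameter below are assumptions for illustration, not the paper's k-space operator or its corrector.

```python
import numpy as np

# Fixed-point problem x = c + M x with spectral radius of M about 3,
# so the unrelaxed iteration x <- c + M x diverges.
M = np.array([[-3.0, 0.2],
              [0.1, 0.5]])
c = np.array([1.0, 2.0])

# Relaxation chosen so the iteration matrix (1-a) I + a M is a contraction
alpha = 0.25

x = np.zeros(2)
for _ in range(200):
    # "Corrected" iteration: convex blend of the old iterate and the update.
    # It has the same fixed point, since a*x = a*(c + M x) at convergence.
    x = (1.0 - alpha) * x + alpha * (c + M @ x)

x_exact = np.linalg.solve(np.eye(2) - M, c)
err = np.linalg.norm(x - x_exact)
```

With alpha = 0.25 the iteration matrix has spectral radius below 0.9 and the iterates converge to the true solution of (I - M)x = c, illustrating how a convergence-enforcing modification preserves the fixed point while restoring contraction.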
Quasipolynomial generalization of Lotka-Volterra mappings
NASA Astrophysics Data System (ADS)
Hernández-Bermejo, Benito; Brenig, Léon
2002-07-01
In recent years, it has been shown that Lotka-Volterra mappings constitute a valuable tool from both the theoretical and the applied points of view, with developments in very diverse fields such as physics, population dynamics, chemistry and economy. The purpose of this work is to demonstrate that many of the most important ideas and algebraic methods that constitute the basis of the quasipolynomial formalism (originally conceived for the analysis of ordinary differential equations) can be extended into the mapping domain. The extension of the formalism into the discrete-time context is remarkable insofar as the quasipolynomial methodology had never been shown to be applicable beyond the differential case. It will be demonstrated that Lotka-Volterra mappings play a central role in the quasipolynomial formalism for the discrete-time case. Moreover, the extension of the formalism into the discrete-time domain allows a significant generalization of Lotka-Volterra mappings as well as a whole transfer of algebraic methods into the discrete-time context. The result is a novel and more general conceptual framework for the understanding of Lotka-Volterra mappings as well as a new range of possibilities that become open not only for the theoretical analysis of Lotka-Volterra mappings and their generalizations, but also for the development of new applications.
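A concrete instance of the maps discussed above: one common discrete-time Lotka-Volterra form is the Ricker-type mapping x_i -> x_i exp(r_i + sum_j A_ij x_j). The particular normal form and the parameter values below are assumptions for illustration; the paper treats the general quasipolynomial reduction to such maps.

```python
import numpy as np

def lv_map_step(x, r, A):
    """One step of a Ricker-type discrete Lotka-Volterra mapping."""
    return x * np.exp(r + A @ x)

# Two competing species (illustrative parameters with a stable interior
# fixed point, i.e. both self-interactions negative)
r = np.array([0.5, 0.3])
A = np.array([[-0.5, -0.1],
              [-0.2, -0.4]])

x = np.array([0.1, 0.1])
for _ in range(200):          # iterate toward the interior fixed point
    x = lv_map_step(x, r, A)

# At a positive fixed point the exponent vanishes: r + A x = 0
residual = np.linalg.norm(r + A @ x)
```

The iterates settle on the interior equilibrium solving A x* = -r, the discrete counterpart of the coexistence state in the differential Lotka-Volterra system.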
Huppert, Theodore J
2016-01-01
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts.
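One standard way to handle the serially correlated physiological noise described above is AR(1) prewhitening around an ordinary least squares fit. The sketch below is a simplified illustration of that idea, not the exact pipeline of any fNIRS toolbox; the boxcar task design and the AR coefficient are simulated assumptions.

```python
import numpy as np

def ar1_prewhitened_ols(y, X):
    """OLS fit, AR(1) estimate from the residuals, then refit on
    whitened data (a simplified iterative-prewhitening step)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]    # initial OLS fit
    resid = y - X @ beta
    rho = np.dot(resid[1:], resid[:-1]) / np.dot(resid[:-1], resid[:-1])
    # Whitening filter v_t = u_t - rho * u_{t-1}, applied to both sides
    yw = y[1:] - rho * y[:-1]
    Xw = X[1:] - rho * X[:-1]
    beta_w = np.linalg.lstsq(Xw, yw, rcond=None)[0]
    return beta_w, rho

# Simulated example: boxcar task regressor plus AR(1) noise (assumed data)
rng = np.random.default_rng(1)
n = 500
task = np.tile(np.r_[np.zeros(25), np.ones(25)], 10)
X = np.column_stack([np.ones(n), task])
noise = np.zeros(n)
for t in range(1, n):
    noise[t] = 0.8 * noise[t - 1] + rng.standard_normal()
y = X @ np.array([1.0, 2.0]) + noise

beta, rho = ar1_prewhitened_ols(y, X)
```

The estimated AR coefficient recovers the simulated serial correlation, and the whitened fit yields an unbiased task effect with honest standard errors, which is the point of generalizing the GLM for colored noise.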
A class of hybrid finite element methods for electromagnetics: A review
NASA Technical Reports Server (NTRS)
Volakis, J. L.; Chatterjee, A.; Gong, J.
1993-01-01
Integral equation methods have generally been the workhorse for antenna and scattering computations. In the case of antennas, they continue to be the prominent computational approach, but for scattering applications the requirement for large-scale computations has turned researchers' attention to near neighbor methods such as the finite element method, which has low O(N) storage requirements and is readily adaptable in modeling complex geometrical features and material inhomogeneities. In this paper, we review three hybrid finite element methods for simulating composite scatterers, conformal microstrip antennas, and finite periodic arrays. Specifically, we discuss the finite element method and its application to electromagnetic problems when combined with the boundary integral, absorbing boundary conditions, and artificial absorbers for terminating the mesh. Particular attention is given to large-scale simulations, methods, and solvers for achieving low memory requirements and code performance on parallel computing architectures.
Hoy, Erik P; Mazziotti, David A
2015-08-14
Tensor factorization of the 2-electron integral matrix is a well-known technique for reducing the computational scaling of ab initio electronic structure methods toward that of Hartree-Fock and density functional theories. The simplest factorization that maintains the positive semidefinite character of the 2-electron integral matrix is the Cholesky factorization. In this paper, we introduce a family of positive semidefinite factorizations that generalize the Cholesky factorization. Using an implementation of the factorization within the parametric 2-RDM method [D. A. Mazziotti, Phys. Rev. Lett. 101, 253002 (2008)], we study several inorganic molecules, alkane chains, and potential energy curves and find that this generalized factorization retains the accuracy and size extensivity of the Cholesky factorization, even in the presence of multi-reference correlation. The generalized family of positive semidefinite factorizations has potential applications to low-scaling ab initio electronic structure methods that treat electron correlation with a computational cost approaching that of the Hartree-Fock method or density functional theory.
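The baseline that the paper's family generalizes, a truncated pivoted Cholesky factorization of a positive semidefinite matrix, can be sketched in a few lines. The random low-rank test matrix below is a stand-in for a real 2-electron integral matrix.

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-8):
    """Truncated pivoted Cholesky factorization M ~= L L^T of a symmetric
    positive semidefinite matrix, stopping when the largest remaining
    diagonal of the residual falls below tol."""
    n = M.shape[0]
    d = np.diag(M).astype(float).copy()    # diagonal of the current residual
    L = np.zeros((n, 0))
    while d.max() > tol:
        p = int(np.argmax(d))              # pivot on the largest residual diagonal
        col = (M[:, p] - L @ L[p, :]) / np.sqrt(d[p])
        L = np.column_stack([L, col])
        d -= col ** 2
    return L

# Mimic a small PSD "2-electron integral" matrix with a random rank-4 matrix
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 4))
M = B @ B.T
L = pivoted_cholesky(M)
err = np.linalg.norm(M - L @ L.T)
```

The factorization terminates after exactly rank(M) columns, which is why Cholesky-type factorizations reduce the storage and computational scaling the abstract refers to.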
Crawford, Forrest W.; Suchard, Marc A.
2011-01-01
A birth-death process is a continuous-time Markov chain that counts the number of particles in a system over time. In the general process with n current particles, a new particle is born with instantaneous rate λn and a particle dies with instantaneous rate μn. Currently no robust and efficient method exists to evaluate the finite-time transition probabilities in a general birth-death process with arbitrary birth and death rates. In this paper, we first revisit the theory of continued fractions to obtain expressions for the Laplace transforms of these transition probabilities and make explicit an important derivation connecting transition probabilities and continued fractions. We then develop an efficient algorithm for computing these probabilities that analyzes the error associated with approximations in the method. We demonstrate that this error-controlled method agrees with known solutions and outperforms previous approaches to computing these probabilities. Finally, we apply our novel method to several important problems in ecology, evolution, and genetics. PMID:21984359
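For the special case of a linear birth-death process the finite-time transition probabilities have a classical closed form, which makes a useful cross-check for any general-purpose method. The sketch below computes them by exponentiating a truncated rate matrix; the truncation level and rates are assumptions, and the approach does not scale the way the paper's continued-fraction method does.

```python
import numpy as np
from scipy.linalg import expm

def bd_transition_matrix(lam, mu, t, n_max):
    """Transition probabilities of a linear birth-death process
    (birth rate n*lam, death rate n*mu) on the truncated state space
    {0, ..., n_max}, via the matrix exponential of the rate matrix."""
    Q = np.zeros((n_max + 1, n_max + 1))
    for n in range(n_max + 1):
        if n < n_max:
            Q[n, n + 1] = n * lam      # birth
        if n > 0:
            Q[n, n - 1] = n * mu       # death
        Q[n, n] = -Q[n].sum()
    return expm(Q * t)

lam, mu, t = 0.5, 1.0, 1.0
P = bd_transition_matrix(lam, mu, t, n_max=60)

# Kendall's closed form for extinction by time t starting from one particle
e = np.exp((lam - mu) * t)
p0_exact = mu * (e - 1.0) / (lam * e - mu)
```

With a subcritical process and a generous truncation the numerical P[1, 0] matches the closed form to many digits; for general nonlinear rates no such closed form exists, which is the gap the paper's continued-fraction algorithm fills.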
Computation of three-dimensional nozzle-exhaust flow fields with the GIM code
NASA Technical Reports Server (NTRS)
Spradley, L. W.; Anderson, P. G.
1978-01-01
A methodology is introduced for constructing numerical analogs of the partial differential equations of continuum mechanics. A general formulation is provided which permits classical finite element and many of the finite difference methods to be derived directly. The approach, termed the General Interpolants Method (GIM), combines the best features of finite element and finite difference methods. A quasi-variational procedure is used to formulate the element equations, to introduce boundary conditions into the method, and to provide a natural assembly sequence. A derivation in terms of general interpolation functions is given for this procedure. Example computations for transonic and supersonic flows in two and three dimensions are given to illustrate the utility of GIM. A three-dimensional nozzle-exhaust flow field is solved, including interaction with the freestream and a coupled treatment of the shear layer. Potential applications of the GIM code to a variety of computational fluid dynamics problems are then discussed in terms of existing capability or of extensions of the methodology.
Localized Energy-Based Normalization of Medical Images: Application to Chest Radiography.
Philipsen, R H H M; Maduskar, P; Hogeweg, L; Melendez, J; Sánchez, C I; van Ginneken, B
2015-09-01
Automated quantitative analysis systems for medical images often lack the capability to successfully process images from multiple sources. Normalization of such images prior to further analysis is a possible solution to this limitation. This work presents a general method to normalize medical images and thoroughly investigates its effectiveness for chest radiography (CXR). The method starts with an energy decomposition of the image in different bands. Next, each band's localized energy is scaled to a reference value and the image is reconstructed. We investigate iterative and local application of this technique. The normalization is applied iteratively to the lung fields on six datasets from different sources, each comprising 50 normal CXRs and 50 abnormal CXRs. The method is evaluated in three supervised computer-aided detection tasks related to CXR analysis and compared to two reference normalization methods. In the first task, automatic lung segmentation, the average Jaccard overlap significantly increased from 0.72±0.30 and 0.87±0.11 for both reference methods to with normalization. The second experiment was aimed at segmentation of the clavicles. The reference methods had an average Jaccard index of 0.57±0.26 and 0.53±0.26; with normalization this significantly increased to . The third experiment was detection of tuberculosis related abnormalities in the lung fields. The average area under the Receiver Operating Curve increased significantly from 0.72±0.14 and 0.79±0.06 using the reference methods to with normalization. We conclude that the normalization can be successfully applied in chest radiography and makes supervised systems more generally applicable to data from different sources.
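The band decomposition and per-band energy scaling described above can be sketched compactly. The following toy version uses difference-of-box-blur bands and a single global energy scaling per band; the paper's method scales *localized* energy within the lung fields and applies the procedure iteratively, so this is only an illustration of the decompose-scale-reconstruct idea:

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur of odd width k with edge padding."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    kern = np.ones(k) / k
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kern, 'valid'), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, 'valid'), 0, tmp)

def normalize_bands(img, widths=(3, 7, 15), ref_energy=1.0):
    """Decompose img into difference-of-blur energy bands, scale each
    band's energy (here its global standard deviation) to a common
    reference value, and reconstruct the normalized image."""
    img = np.asarray(img, dtype=float)
    levels = [img] + [box_blur(img, k) for k in widths]
    out = levels[-1].copy()                    # coarse residual
    for fine, coarse in zip(levels, levels[1:]):
        band = fine - coarse                   # one energy band
        out += band * (ref_energy / (band.std() + 1e-12))
    return out
```

Because each band is rescaled to the same reference energy, images acquired with different devices or exposure settings end up with comparable band statistics, which is what makes downstream supervised systems transfer across sources.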
Calibration of groundwater vulnerability mapping using the generalized reduced gradient method.
Elçi, Alper
2017-12-01
Groundwater vulnerability assessment studies are essential in water resources management. Overlay-and-index methods such as DRASTIC are widely used for mapping of groundwater vulnerability; however, these methods mainly suffer from a subjective selection of model parameters. The objective of this study is to introduce a calibration procedure that results in a more accurate assessment of groundwater vulnerability. The improvement of the assessment is formulated as a parameter optimization problem using an objective function that is based on the correlation between actual groundwater contamination and vulnerability index values. The non-linear optimization problem is solved with the generalized-reduced-gradient (GRG) method, a gradient-based numerical optimization algorithm. To demonstrate the applicability of the procedure, a vulnerability map for the Tahtali stream basin is calibrated using nitrate concentration data. The calibration procedure is easy to implement and aims to maximize the correlation between observed pollutant concentrations and groundwater vulnerability index values. The influence of each vulnerability parameter on the calculated vulnerability index is assessed by performing a single-parameter sensitivity analysis. Results of the sensitivity analysis show that all factors have an effect on the final vulnerability index. Calibration of the vulnerability map improves the correlation between index values and measured nitrate concentrations by 19%; the regression coefficient increases from 0.280 to 0.485. It is evident that the spatial distribution and the proportions of vulnerability class areas are significantly altered by the calibration process. Although the calibration method is demonstrated on the DRASTIC model, the approach is not specific to a certain model and can also be easily applied to other overlay-and-index methods.
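The calibration idea, maximizing the correlation between a weighted vulnerability index and observed concentrations subject to bounds on the weights, can be sketched with entirely synthetic data. Here `scipy.optimize.minimize` with SLSQP stands in for the GRG solver (both are gradient-based constrained optimizers); the ratings, "true" weights, and nitrate values are fabricated for illustration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic stand-in data: DRASTIC-style factor ratings for 40 wells and
# nitrate observations correlated with an unknown "true" weighting.
ratings = rng.uniform(1, 10, size=(40, 7))
true_w = np.array([5.0, 4, 3, 2, 5, 1, 3])
nitrate = ratings @ true_w + rng.normal(0, 5, size=40)

def neg_corr(w):
    """Negative Pearson correlation between the weighted vulnerability
    index and observed nitrate (the calibration objective)."""
    return -np.corrcoef(ratings @ w, nitrate)[0, 1]

w0 = np.full(7, 3.0)                        # uniform initial weights
res = minimize(neg_corr, w0, method='SLSQP',
               bounds=[(1.0, 5.0)] * 7)     # DRASTIC weights lie in 1..5
calibrated_corr = -res.fun
```

The calibrated weights reproduce a higher index-to-observation correlation than the uniform starting weights, mirroring the 0.280 to 0.485 improvement reported for the Tahtali basin.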
A comparison of the finite difference and finite element methods for heat transfer calculations
NASA Technical Reports Server (NTRS)
Emery, A. F.; Mortazavi, H. R.
1982-01-01
The finite difference method and finite element method for heat transfer calculations are compared by describing their bases and their application to some common heat transfer problems. In general it is noted that neither method is clearly superior, and in many instances, the choice is quite arbitrary and depends more upon the codes available and upon the personal preference of the analyst than upon any well defined advantages of one method. Classes of problems for which one method or the other is better suited are defined.
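The "quite arbitrary" choice noted above is easy to demonstrate on the simplest common problem: for steady 1-D conduction on a uniform mesh, finite differences and linear finite elements produce the same tridiagonal operator and, for a constant source, identical nodal solutions. A minimal sketch (problem and mesh chosen for illustration):

```python
import numpy as np

# Steady 1-D heat conduction -u'' = 1 on (0, 1), u(0) = u(1) = 0.
n = 9                                # interior nodes
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Shared tridiagonal operator for -u'' on a uniform mesh.
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

f = np.ones(n)                       # uniform volumetric source

u_fd = np.linalg.solve(A / h**2, f)       # FD: pointwise load f(x_i)
u_fe = np.linalg.solve(A / h, h * f)      # FE: consistent (integrated) load

u_exact = x * (1.0 - x) / 2.0             # analytic solution
```

The two discrete systems are algebraically identical here (both reduce to A u = h^2 f), and both are exact at the nodes for this quadratic solution; the methods diverge in convenience and generality only for irregular geometries, variable properties, and higher dimensions.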
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerner, Ryan; Mann, R.B.
We investigate quantum tunnelling methods for calculating black hole temperature, specifically the null-geodesic method of Parikh and Wilczek and the Hamilton-Jacobi Ansatz method of Angheben et al. We consider application of these methods to a broad class of spacetimes with event horizons, including Rindler and nonstatic spacetimes such as Kerr-Newman and Taub-NUT. We obtain a general form for the temperature of Taub-NUT-AdS black holes that is commensurate with other methods. We examine the limitations of these methods for extremal black holes, taking the extremal Reissner-Nordstrom spacetime as a case in point.
Fault management for data systems
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann
1993-01-01
Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
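The core operation in a graph-based fault management system, propagating a fault through a dependency graph to find everything it can affect, is a simple graph traversal. A minimal sketch; the component names below are purely illustrative and not taken from the telescope system described in the report:

```python
from collections import deque

def affected_components(dependents, failed):
    """Breadth-first propagation over a directed dependency graph:
    starting from a failed component, collect everything that
    (transitively) depends on it."""
    seen = {failed}
    queue = deque([failed])
    while queue:
        c = queue.popleft()
        for d in dependents.get(c, ()):
            if d not in seen:
                seen.add(d)
                queue.append(d)
    return seen

# Hypothetical subsystem graph: an edge u -> v means "v depends on u".
graph = {
    "power_bus": ["telemetry", "pointing"],
    "pointing": ["camera"],
    "telemetry": [],
    "camera": [],
}
```

Diagnosis runs the same traversal in reverse: given an observed symptom, the set of components whose failure could explain it is found by following dependency edges backward.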
NASA Astrophysics Data System (ADS)
Mercier, Sylvain; Gratton, Serge; Tardieu, Nicolas; Vasseur, Xavier
2017-12-01
Many applications in structural mechanics require the numerical solution of sequences of linear systems, typically arising from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take mechanical constraints into account. The resulting matrices then exhibit a saddle point structure, and the iterative solution of such preconditioned linear systems is considered challenging. A popular strategy is to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable to the general case and not only to matrices with a saddle point structure. In this approach, we update an existing algebraic or application-based preconditioner using available information, such as knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited-memory quasi-Newton matrix and requires a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.
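The flavor of such an update can be sketched in dense form. Given an SPD matrix A, a first-level preconditioner M, and a few vectors Z spanning an (approximate) invariant subspace, one classical limited-memory construction makes the updated preconditioner act like A^(-1) exactly on the range of Z. This is a generic sketch of the idea, not the paper's exact formulation, and the dense matrices would be replaced by operators in practice:

```python
import numpy as np

def lmp_update(A, M, Z):
    """Limited-memory preconditioner update: with E = Z^T A Z and
    P = I - Z E^{-1} Z^T A, return H = P M P^T + Z E^{-1} Z^T.
    By construction H (A z) = z for every z in range(Z), so error
    components along that subspace are removed in one application."""
    E = Z.T @ A @ Z
    Einv = np.linalg.inv(E)
    P = np.eye(A.shape[0]) - Z @ Einv @ Z.T @ A
    return P @ M @ P.T + Z @ Einv @ Z.T
```

Only the k columns of Z (and products A @ Z) need to be stored, which is what makes the preconditioner "limited memory" and cheap to reuse across a sequence of systems.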
Techniques for forced response involving discrete nonlinearities. I - Theory. II - Applications
NASA Astrophysics Data System (ADS)
Avitabile, Peter; Callahan, John O.
Several new techniques developed for the forced response analysis of systems containing discrete nonlinear connection elements are presented and compared to the traditional methods. In particular, the techniques examined are the Equivalent Reduced Model Technique (ERMT), Modal Modification Response Technique (MMRT), and Component Element Method (CEM). The general theory of the techniques is presented, and applications are discussed with particular reference to the beam nonlinear system model using ERMT, MMRT, and CEM; frame nonlinear response using the three techniques; and comparison of the results obtained by using the ERMT, MMRT, and CEM models.
Scope and applications of translation invariant wavelets to image registration
NASA Technical Reports Server (NTRS)
Chettri, Samir; LeMoigne, Jacqueline; Campbell, William
1997-01-01
The first part of this article introduces the notion of translation invariance in wavelets and discusses several wavelets that have this property. The second part discusses possible applications of such wavelets to image registration. In the case of registration of affinely transformed images, we conclude that translation invariance is not really what is needed; what is needed is affine invariance, and one way to achieve this is via the method of moment invariants. Wavelets or, more generally, pyramid processing can then be combined with the method of moment invariants to reduce the computational load.
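The building block of moment invariants is the central moment, which is invariant to translation because it is computed about the intensity centroid (Hu's invariants then combine central moments to gain rotation and scale invariance as well). A minimal sketch:

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a 2-D intensity image. Centering on
    the intensity centroid makes the value invariant to translation
    of the image content."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xbar = (x * img).sum() / m00
    ybar = (y * img).sum() / m00
    return ((x - xbar) ** p * (y - ybar) ** q * img).sum()
```

Shifting the image content leaves every central moment unchanged, which is exactly the property that lets moment-based matching replace translation-invariant wavelets in the registration pipeline.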
On the use and computation of the Jordan canonical form in system theory
NASA Technical Reports Server (NTRS)
Sridhar, B.; Jordan, D.
1974-01-01
This paper investigates various aspects of the application of the Jordan canonical form of a matrix in system theory and develops a computational approach to determining the Jordan form for a given matrix. Applications include pole placement, controllability and observability studies, serving as an intermediate step in yielding other canonical forms, and theorem proving. The computational method developed in this paper is both simple and efficient. The method is based on the definition of a generalized eigenvector and a natural extension of Gauss elimination techniques. Examples are included for demonstration purposes.
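The connection between generalized eigenvectors and the Jordan structure can be illustrated numerically: the ranks r_k of (A - λI)^k determine the block sizes, since the number of Jordan blocks of size at least k equals r_{k-1} - r_k. The sketch below uses this rank recipe directly; it is numerically fragile (rank decisions depend on a tolerance), which is part of why the paper develops a more careful Gauss-elimination approach:

```python
import numpy as np

def jordan_block_sizes(A, lam, tol=1e-8):
    """Jordan block sizes of A for eigenvalue lam, inferred from the
    ranks r_k of (A - lam*I)^k."""
    n = A.shape[0]
    N = A - lam * np.eye(n)
    ranks = [n]
    M = np.eye(n)
    for _ in range(n):
        M = M @ N
        ranks.append(np.linalg.matrix_rank(M, tol))
        if ranks[-1] == ranks[-2]:
            break                     # ranks have stabilized
    # number of blocks of size >= k is r_{k-1} - r_k
    at_least = [ranks[k - 1] - ranks[k] for k in range(1, len(ranks))]
    sizes = []
    for k, count in enumerate(at_least):
        exactly = count - (at_least[k + 1] if k + 1 < len(at_least) else 0)
        sizes.extend([k + 1] * exactly)
    return sorted(sizes, reverse=True)
```

For a matrix with one 2x2 and one 1x1 Jordan block at λ = 2, the routine recovers the sizes [2, 1].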
[Survival of patients with stage IIB cervical cancer].
Kryzhanivs'ka, A Ie; Diakiv, I B
2014-01-01
The optimal treatment strategy for patients with stage IIB cervical cancer (CC) has not yet been definitively established: the current diagnostic and treatment standards allow several treatment variants for this pathology, and the choice among them usually depends on the subjective opinion of the physician. The purpose of this work was therefore to improve the efficiency of treatment of patients with stage IIB CC through the use of neoadjuvant chemotherapy within combined treatment. The treatment results of 291 patients with stage IIB CC who received radical treatment at the Ivano-Frankivsk OKOD from 1998 to 2013 were analyzed. With neoadjuvant chemotherapy, the 5-year overall and recurrence-free survival rates were 74.4% and 70.8%, respectively; with preoperative chemotherapy they were 70.8% and 68.3%, respectively. With chemoradiation therapy alone, the 5-year overall and recurrence-free survival rates were 51.1% and 49.3%, respectively. No significant difference (P > 0.05) was found when comparing the 5-year survival of patients who received the two combined treatment modalities, but a significant difference (P < 0.05) was found in comparison with patients who received chemoradiation therapy alone. Thus, combined treatment of patients with stage IIB CC improved 5-year overall and recurrence-free survival compared with chemoradiation therapy alone.
NASA Technical Reports Server (NTRS)
Press, Harry; Mazelsky, Bernard
1954-01-01
The applicability of some results from the theory of generalized harmonic analysis (or power-spectral analysis) to the analysis of gust loads on airplanes in continuous rough air is examined. The general relations for linear systems between power spectrums of a random input disturbance and an output response are used to relate the spectrum of airplane load in rough air to the spectrum of atmospheric gust velocity. The power spectrum of loads is shown to provide a measure of the load intensity in terms of the standard deviation (root mean square) of the load distribution for an airplane in flight through continuous rough air. For the case of a load output having a normal distribution, which appears from experimental evidence to apply to homogeneous rough air, the standard deviation is shown to describe the probability distribution of loads or the proportion of total time that the load has given values. Thus, for an airplane in flight through homogeneous rough air, the probability distribution of loads may be determined from a power-spectral analysis. In order to illustrate the application of power-spectral analysis to gust-load analysis and to obtain an insight into the relations between loads and airplane gust-response characteristics, two selected series of calculations are presented. The results indicate that both methods of analysis yield results that are consistent to a first approximation.
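The input-output relation used above is, for a linear system, Phi_load(omega) = |H(omega)|^2 * Phi_gust(omega), with the RMS load given by the square root of the integrated output spectrum. A numerical sketch; the spectrum and frequency-response shapes below are illustrative stand-ins, not the report's gust or airplane models:

```python
import numpy as np

omega = np.linspace(0.01, 50.0, 5000)            # frequency, rad/s
phi_gust = 1.0 / (1.0 + omega ** 2)              # input PSD of gust velocity
H_mag = 1.0 / np.sqrt(1.0 + (omega / 5.0) ** 2)  # |H(omega)| of airplane load

phi_load = H_mag ** 2 * phi_gust                 # output PSD of the load
domega = omega[1] - omega[0]
sigma_load = np.sqrt(np.sum(phi_load) * domega)  # RMS (standard deviation) of load
```

For a normally distributed load, this single number sigma_load fixes the whole probability distribution of loads, which is the key practical payoff of the spectral approach described in the abstract.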
NASA Astrophysics Data System (ADS)
Barreto, Patricia R. P.; Cruz, Ana Claudia P. S.; Barreto, Rodrigo L. P.; Palazzetti, Federico; Albernaz, Alessandra F.; Lombardi, Andrea; Maciel, Glauciete S.; Aquilanti, Vincenzo
2017-07-01
The spherical-harmonics expansion is a mathematically rigorous procedure and a powerful tool for the representation of potential energy surfaces of interacting molecular systems, determining their spectroscopic and dynamical properties, specifically in van der Waals clusters, with applications also to classical and quantum molecular dynamics simulations. The technique consists in the construction (by ab initio or semiempirical methods) of the expanded potential interaction up to terms that provide the generation of a number of leading configurations sufficient to account for faithful geometrical representations. This paper reports the full general description of the method of the spherical-harmonics expansion as applied to diatomic-molecule–diatomic-molecule systems of increasing complexity: the presentation of the mathematical background is given for providing both the application to the prototypical cases considered previously (O2–O2, N2–N2, and N2–O2 systems) and the generalization to: (i) the CO–CO system, where a characteristic feature is the lower symmetry order with respect to the cases studied before, requiring a larger number of expansion terms to adequately represent the potential energy surface; and (ii) the CO–HF system, which exhibits the lowest order of symmetry among this class of aggregates and therefore the highest number of leading configurations.
NASA Astrophysics Data System (ADS)
van Haver, Sven; Janssen, Olaf T. A.; Braat, Joseph J. M.; Janssen, Augustus J. E. M.; Urbach, H. Paul; Pereira, Silvania F.
2008-03-01
In this paper we introduce a new mask imaging algorithm that is based on the source point integration method (or Abbe method). The method presented here distinguishes itself from existing methods by exploiting the through-focus imaging feature of the Extended Nijboer-Zernike (ENZ) theory of diffraction. An introduction to ENZ theory and its application in general imaging is provided, after which we describe the mask imaging scheme that can be derived from it. The remainder of the paper is devoted to illustrating the advantages of the new method over existing (Hopkins-based) methods. To this end, several simulation results are included that illustrate the advantages arising from the accurate incorporation of isolated structures, the rigorous treatment of the object (mask topography), and the fully vectorial through-focus image formation of the ENZ-based algorithm.
ERIC Educational Resources Information Center
Gale, James R.
The study developed a general method for analyzing the economic impact of international university students on a local or regional economy and applied the methodology to Michigan Technological University. Major findings included the following: international students accounted for $2,693,814 in total direct and indirect expenditures in the region…
Laser surface texturing of polymers for biomedical applications
NASA Astrophysics Data System (ADS)
Riveiro, Antonio; Maçon, Anthony L. B.; del Val, Jesus; Comesaña, Rafael; Pou, Juan
2018-02-01
Polymers are materials widely used in biomedical science because of their biocompatibility and good mechanical properties (which, in some cases, are similar to those of human tissues); however, these materials are, in general, chemically and biologically inert. Surface characteristics, such as topography (at the macro-, micro-, and nanoscale), surface chemistry, surface energy, charge, and wettability are interrelated properties, and they cooperatively influence the biological performance of materials used for biomedical applications. They regulate the biological response at the implant/tissue interface (e.g., influencing cell adhesion, cell orientation, cell motility, etc.). Several surface processing techniques have been explored to modulate these properties for biomedical applications. Despite their potential, these methods have limitations that restrict their applicability. In this regard, laser-based methods, in particular laser surface texturing (LST), are an interesting alternative. Several works have shown the potential of this technique to control the surface properties of biomedical polymers and enhance their biological performance; however, more research is needed to obtain the desired biological responses. This work provides a general overview of the basics and applications of LST for the surface modification of polymers currently used in clinical practice (e.g., PEEK, UHMWPE, PP, etc.). The modification of roughness and wettability, and their impact on the biological response, is addressed to offer new insights on the surface modification of biomedical polymers.
Gurram, Venkateshwarlu; Akula, Hari K; Garlapati, Ramesh; Pottabathini, Narender; Lakshman, Mahesh K
2015-02-09
Benzotriazoles are a highly important class of compounds with broad-ranging applications in such diverse areas as medicinal chemistry, as auxiliaries in organic synthesis, in metallurgical applications, in aircraft deicing and brake fluids, and as antifog agents in photography. Although there are numerous approaches to N-substituted benzotriazoles, essentially the one general method for N-unsubstituted benzotriazoles is diazotization of o-phenylenediamines, which can be limited by the availability of suitable precursors. Other methods for N-unsubstituted benzotriazoles are quite specialized. Although reduction of 1-hydroxy-1H-benzotriazoles is known, the reactions are not particularly convenient or broadly applicable. This presents a limitation for easy access to and availability of diverse benzotriazoles. Herein, we demonstrate a new, broadly applicable route to diverse 1H-benzotriazoles via a mild diboron-reagent-mediated deoxygenation of 1-hydroxy-1H-benzotriazoles. We have also evaluated sequential deoxygenation and Pd-mediated C-C and C-N bond formation as a one-pot process for further diversification of the benzotriazole moiety. However, results indicated that purification of the deoxygenation product prior to the Pd-mediated reaction is critical to the success of such reactions. The overall chemistry allows for facile access to a variety of new benzotriazoles. Along with the several examples presented, the advantages of the approaches are discussed, as is a possible mechanism for the deoxygenation process.
Hybrid state vector methods for structural dynamic and aeroelastic boundary value problems
NASA Technical Reports Server (NTRS)
Lehman, L. L.
1982-01-01
A computational technique is developed that is suitable for performing preliminary design aeroelastic and structural dynamic analyses of large aspect ratio lifting surfaces. The method proves to be quite general and can be adapted to solving various two point boundary value problems. The solution method, which is applicable to both fixed and rotating wing configurations, is based upon a formulation of the structural equilibrium equations in terms of a hybrid state vector containing generalized force and displacement variables. A mixed variational formulation is presented that conveniently yields a useful form for these state vector differential equations. Solutions to these equations are obtained by employing an integrating matrix method. The application of an integrating matrix provides a discretization of the differential equations that only requires solutions of standard linear matrix systems. It is demonstrated that matrix partitioning can be used to reduce the order of the required solutions. Results are presented for several example problems in structural dynamics and aeroelasticity to verify the technique and to demonstrate its use. These problems examine various types of loading and boundary conditions and include aeroelastic analyses of lifting surfaces constructed from anisotropic composite materials.
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
NASA Technical Reports Server (NTRS)
Barkeshli, Kasra; Volakis, John L.
1991-01-01
The theoretical and computational aspects related to the application of the Conjugate Gradient FFT (CGFFT) method in computational electromagnetics are examined. The advantages of applying the CGFFT method to a class of large scale scattering and radiation problems are outlined. The main advantages of the method stem from its iterative nature which eliminates a need to form the system matrix (thus reducing the computer memory allocation requirements) and guarantees convergence to the true solution in a finite number of steps. Results are presented for various radiators and scatterers including thin cylindrical dipole antennas, thin conductive and resistive strips and plates, as well as dielectric cylinders. Solutions of integral equations derived on the basis of generalized impedance boundary conditions (GIBC) are also examined. The boundary conditions can be used to replace the profile of a material coating by an impedance sheet or insert, thus, eliminating the need to introduce unknown polarization currents within the volume of the layer. A general full wave analysis of 2-D and 3-D rectangular grooves and cavities is presented which will also serve as a reference for future work.
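The memory advantage described above comes from never forming the system matrix: conjugate gradient needs only matrix-vector products, and for convolution-type (circulant) operators those products reduce to FFTs. A generic sketch of the CGFFT idea on a synthetic SPD circulant operator (the electromagnetic integral-equation kernels themselves are not reproduced here):

```python
import numpy as np

def cg(matvec, b, tol=1e-10, maxiter=500):
    """Matrix-free conjugate gradient: only products A @ x are
    needed, so the system matrix is never stored."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Circulant operator applied via FFT: eigenvalues are chosen positive
# and even-symmetric so the operator is symmetric positive definite.
n = 64
lam = 2.0 + np.cos(2.0 * np.pi * np.arange(n) / n)
matvec = lambda v: np.real(np.fft.ifft(lam * np.fft.fft(v)))

b = np.random.RandomState(0).randn(n)
x = cg(matvec, b)
```

Each iteration costs O(n log n) and O(n) storage instead of the O(n^2) of a dense matrix, which is what makes the approach attractive for the large-scale scattering and radiation problems discussed above.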
Song, Yun S; Steinrücken, Matthias
2012-03-01
The transition density function of the Wright-Fisher diffusion describes the evolution of population-wide allele frequencies over time. This function has important practical applications in population genetics, but finding an explicit formula under a general diploid selection model has remained a difficult open problem. In this article, we develop a new computational method to tackle this classic problem. Specifically, our method explicitly finds the eigenvalues and eigenfunctions of the diffusion generator associated with the Wright-Fisher diffusion with recurrent mutation and arbitrary diploid selection, thus allowing one to obtain an accurate spectral representation of the transition density function. Simplicity is one of the appealing features of our approach. Although our derivation involves somewhat advanced mathematical concepts, the resulting algorithm is quite simple and efficient, only involving standard linear algebra. Furthermore, unlike previous approaches based on perturbation, which is applicable only when the population-scaled selection coefficient is small, our method is nonperturbative and is valid for a broad range of parameter values. As a by-product of our work, we obtain the rate of convergence to the stationary distribution under mutation-selection balance.
Generalized Differential Calculus and Applications to Optimization
NASA Astrophysics Data System (ADS)
Rector, Robert Blake Hayden
This thesis contains contributions in three areas: the theory of generalized calculus, numerical algorithms for operations research, and applications of optimization to problems in modern electric power systems. A geometric approach is used to advance the theory and tools used for studying generalized notions of derivatives for nonsmooth functions. These advances specifically pertain to methods for calculating subdifferentials and to expanding our understanding of a certain notion of derivative of set-valued maps, called the coderivative, in infinite dimensions. A strong understanding of the subdifferential is essential for numerical optimization algorithms, which are developed and applied to nonsmooth problems in operations research, including non-convex problems. Finally, an optimization framework is applied to solve a problem in electric power systems involving a smart solar inverter and battery storage system providing energy and ancillary services to the grid.
NASA Technical Reports Server (NTRS)
Condon, Steven; Hendrick, Robert; Stark, Michael E.; Steger, Warren
1997-01-01
The Flight Dynamics Division (FDD) of NASA's Goddard Space Flight Center (GSFC) recently embarked on a far-reaching revision of its process for developing and maintaining satellite support software. The new process relies on an object-oriented software development method supported by a domain-specific library of generalized components. This Generalized Support Software (GSS) Domain Engineering Process is currently in use at the NASA GSFC Software Engineering Laboratory (SEL). The key facets of the GSS process are (1) an architecture for rapid deployment of FDD applications, (2) a reuse asset library for FDD classes, and (3) a paradigm shift from developing software to configuring software for mission support. This paper describes the GSS architecture and process, results of fielding the first applications, lessons learned, and future directions.
NASA Astrophysics Data System (ADS)
Gsponer, Andre
2009-01-01
The objective of this introduction to Colombeau algebras of generalized functions (in which distributions can be freely multiplied) is to explain in elementary terms the essential concepts necessary for their application to basic nonlinear problems in classical physics. Examples are given in hydrodynamics and electrodynamics. The problem of the self-energy of a point electric charge is worked out in detail: the Coulomb potential and field are defined as Colombeau generalized functions, and integrals of nonlinear expressions corresponding to products of distributions (such as the square of the Coulomb field and the square of the delta function) are calculated. Finally, the methods introduced in Gsponer (2007 Eur. J. Phys. 28 267, 2007 Eur. J. Phys. 28 1021 and 2007 Eur. J. Phys. 28 1241) to deal with point-like singularities in classical electrodynamics are confirmed.
Method for fixating sludges and soils contaminated with mercury and other heavy metals
Broderick, Thomas E.; Roth, Rachel L.; Carlson, Allan L.
2005-06-28
The invention relates to a method, composition and apparatus for stabilizing mercury and other heavy metals present in a particulate material such that the metals will not leach from the particulate material. The method generally involves the application of a metal reagent, a sulfur-containing compound, and the addition of oxygen to the particulate material, either through agitation, sparging or the addition of an oxygen-containing compound.
2016-06-10
and complexity to their learning” that is not present in traditional teaching methods (James and Brookfield 2014, 4). In Engaging Imagination... method described is the use of visually based teaching and learning. James and Brookfield delineate between looking and seeing (James and Brookfield... learning methods more applicable to some students as opposed to others. However, the exploration of visual teaching techniques through the use of pictures
Methods of Visually Determining the Air Flow Around Airplanes
NASA Technical Reports Server (NTRS)
Gough, Melvin N; Johnson, Ernest
1932-01-01
This report describes methods used by the National Advisory Committee for Aeronautics to study visually the air flow around airplanes. The use of streamers, oil and exhaust gas streaks, lampblack and kerosene, powdered materials, and kerosene smoke is briefly described. The generation and distribution of smoke from candles and from titanium tetrachloride are described in greater detail because they appear most advantageous for general application. Examples are included showing results of the various methods.
Multicomponent Separation Potential. Generalization of the Dirac Theory
NASA Astrophysics Data System (ADS)
Palkin, V. A.; Gadel‧shin, V. M.; Aleksandrov, O. E.; Seleznev, V. D.
2014-05-01
Formulas for the separation potential and the separative power have been obtained in the present work by generalizing the classical theory of Dirac, with the observance of his two axioms, to the case of a multicomponent mixture without considering a concrete cascade scheme. The resulting expressions are general characteristics of a separation process, since they are applicable to any separation methods and are independent of the form of the components in the mixture. They can be used in constructing actual cascades for separation of multicomponent mixtures and in determining the indices of their efficiency.
1983-03-21
zero, it is necessary that B_M(0) be nonzero. In the case considered here, B_M(0) is taken to be nonsingular and without loss of generality it may be set... D. Levin, "General order Padé-type rational approximants defined from a double power series," J. Inst. Maths. Applics., 18, 1976, pp. 1-8... common zeros in the closed unit bidisc, U^2. The 2-D setting provides a nice theoretical framework for generalization of these stabilization results to
Liu, Jun; Pu, Huimin; Liu, Shuang; Kan, Juan; Jin, Changhai
2017-10-15
In recent years, increasing attention has been paid to the grafting of phenolic acid onto chitosan in order to enhance the bioactivity and widen the application of chitosan. Here, we present a comprehensive overview on the recent advances of phenolic acid grafted chitosan (phenolic acid-g-chitosan) in many aspects, including the synthetic method, structural characterization, biological activity, physicochemical property and potential application. In general, four kinds of techniques including carbodiimide based coupling, enzyme catalyzed grafting, free radical mediated grafting and electrochemical methods are frequently used for the synthesis of phenolic acid-g-chitosan. The structural characterization of phenolic acid-g-chitosan can be determined by several instrumental methods. The physicochemical properties of chitosan are greatly altered after grafting. As compared with chitosan, phenolic acid-g-chitosan exhibits enhanced antioxidant, antimicrobial, antitumor, anti-allergic, anti-inflammatory, anti-diabetic and acetylcholinesterase inhibitory activities. Notably, phenolic acid-g-chitosan shows potential applications in many fields as coating agent, packing material, encapsulation agent and bioadsorbent.
Advanced Ablative Insulators and Methods of Making Them
NASA Technical Reports Server (NTRS)
Congdon, William M.
2005-01-01
Advanced ablative (more specifically, charring) materials that provide temporary protection against high temperatures, and advanced methods of designing and manufacturing insulators based on these materials, are undergoing development. These materials and methods were conceived in an effort to replace the traditional thermal-protection systems (TPSs) of re-entry spacecraft with robust, lightweight, better-performing TPSs that can be designed and manufactured more rapidly and at lower cost. These materials and methods could also be used to make improved TPSs for general aerospace, military, and industrial applications.
Application of ride quality technology to predict ride satisfaction for commuter-type aircraft
NASA Technical Reports Server (NTRS)
Jacobson, I. D.; Kuhlthau, A. R.; Richards, L. G.
1975-01-01
A method was developed to predict passenger satisfaction with the ride environment of a transportation vehicle. This general approach was applied to a commuter-type aircraft for illustrative purposes. The effects of terrain, altitude, and seat location were examined. The method predicts the variation in the percentage of passengers satisfied for any set of flight conditions. In addition, several noncommuter aircraft were analyzed for comparison, and other uses of the model are described. The method has advantages for design, evaluation, and operating decisions.
GHM method for obtaining rational solutions of nonlinear differential equations.
Vazquez-Leal, Hector; Sarmiento-Reyes, Arturo
2015-01-01
In this paper, we propose the application of the general homotopy method (GHM) to obtain rational solutions of nonlinear differential equations. It delivers a high-precision representation of the nonlinear differential equation using a few linear algebraic terms. In order to assess the benefits of this proposal, three nonlinear problems are solved and compared against other semi-analytic or numerical methods. The obtained results show that GHM is a powerful tool, capable of generating highly accurate rational solutions. AMS subject classification 34L30.
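GHM itself is not reproduced here, but the flavor of "rational solutions from a few linear algebraic terms" can be illustrated with the classical Padé construction, which likewise needs only one linear solve. The function below and the exp(x) example are illustrative, not taken from the paper.

```python
import numpy as np
from math import factorial

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns numerator a (length L+1) and denominator b (length M+1,
    normalized so b[0] = 1), both in ascending powers of x."""
    # Linear system for the denominator: sum_j c[L+i-j]*b[j] = -c[L+i].
    A = np.array([[c[L + i - j] if 0 <= L + i - j < len(c) else 0.0
                   for j in range(1, M + 1)] for i in range(1, M + 1)])
    rhs = -np.array([c[L + i] for i in range(1, M + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator follows by convolution of the series with b.
    a = np.array([sum(c[k - j] * b[j] for j in range(min(k, M) + 1))
                  for k in range(L + 1)])
    return a, b

# Rational [2/2] approximation of exp(x) from its first five Taylor terms.
c = [1.0 / factorial(k) for k in range(5)]
a, b = pade(c, 2, 2)
```

For exp(x) this recovers the well-known (1 + x/2 + x²/12)/(1 - x/2 + x²/12), which is far more accurate at x = 1 than the truncated series it was built from.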
Course 4: Density Functional Theory, Methods, Techniques, and Applications
NASA Astrophysics Data System (ADS)
Chrétien, S.; Salahub, D. R.
Contents 1 Introduction 2 Density functional theory 2.1 Hohenberg and Kohn theorems 2.2 Levy's constrained search 2.3 Kohn-Sham method 3 Density matrices and pair correlation functions 4 Adiabatic connection or coupling strength integration 5 Comparing and contrasting KS-DFT and HF-CI 6 Preparing new functionals 7 Approximate exchange and correlation functionals 7.1 The Local Spin Density Approximation (LSDA) 7.2 Gradient Expansion Approximation (GEA) 7.3 Generalized Gradient Approximation (GGA) 7.4 meta-Generalized Gradient Approximation (meta-GGA) 7.5 Hybrid functionals 7.6 The Optimized Effective Potential method (OEP) 7.7 Comparison between various approximate functionals 8 LAP correlation functional 9 Solving the Kohn-Sham equations 9.1 The Kohn-Sham orbitals 9.2 Coulomb potential 9.3 Exchange-correlation potential 9.4 Core potential 9.5 Other choices and sources of error 9.6 Functionality 10 Applications 10.1 Ab initio molecular dynamics for an alanine dipeptide model 10.2 Transition metal clusters: The ecstasy, and the agony... 10.3 The conversion of acetylene to benzene on Fe clusters 11 Conclusions
Orthorectification by Using Gpgpu Method
NASA Astrophysics Data System (ADS)
Sahin, H.; Kulur, S.
2012-07-01
Thanks to the nature of graphics processing, newly released graphics products offer highly parallel processing units with high memory bandwidth and computational power exceeding a teraflop per second. Modern GPUs are not only powerful graphics engines but also highly parallel programmable processors, with far greater computing throughput and memory bandwidth than central processing units (CPUs). Data-parallel computation can be described briefly as mapping data elements to parallel processing threads. The rapid growth of GPU programmability and capability has attracted researchers dealing with complex problems that require heavy computation, giving rise to the concepts of "General Purpose Computation on Graphics Processing Units (GPGPU)" and "stream processing". Graphics processors are powerful yet inexpensive hardware and have therefore become an alternative to conventional processors: graphics chips that began as fixed-function hardware have evolved into modern, powerful, programmable processors. The main difficulty is that GPUs use a programming model different from conventional programming methods, so efficient GPU programming requires re-coding an algorithm around the limitations and structure of the graphics hardware; traditional and event-driven programming methods do not carry over to these many-core devices. GPUs are especially effective at repeating the same computing steps over many data elements when high accuracy is needed, performing such work faster than CPUs, which follow a flow of control and process one element at a time.
This study covers how the general-purpose parallel programming and computational power of GPUs can be used in photogrammetric applications, especially direct georeferencing. The direct georeferencing algorithm was coded with the GPGPU method in the CUDA (Compute Unified Device Architecture) programming language, and the results were compared with a traditional CPU implementation. In a second application, projective rectification was likewise coded with the GPGPU method in CUDA, and sample images of various sizes were processed and the results compared. The GPGPU method is particularly suited to repeating the same computations on very dense data, and thus finds the solution quickly.
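As a minimal illustration of the data-parallel pattern (in NumPy rather than CUDA), projective rectification maps every pixel through the same homography independently; on a GPU, each thread would execute exactly this per-pixel arithmetic. The homography values in the test are arbitrary examples.

```python
import numpy as np

def rectify_coords(H, width, height):
    """Map every pixel of a width x height grid through homography H.
    The per-pixel arithmetic is identical and independent -- precisely
    the pattern a GPGPU kernel parallelises over threads."""
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])  # homogeneous
    mapped = H @ pts                 # one matrix product covers all pixels
    return (mapped[:2] / mapped[2]).reshape(2, height, width)
```

The whole image is processed in a single matrix product followed by an element-wise divide, which is why such workloads map so well onto GPU hardware.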
Superstatistical fluctuations in time series: Applications to share-price dynamics and turbulence
NASA Astrophysics Data System (ADS)
van der Straeten, Erik; Beck, Christian
2009-09-01
We report a general technique to study a given experimental time series with superstatistics. Crucial for the applicability of the superstatistics concept is the existence of a parameter β that fluctuates on a large time scale as compared to the other time scales of the complex system under consideration. The proposed method extracts the main superstatistical parameters out of a given data set and examines the validity of the superstatistical model assumptions. We test the method thoroughly with surrogate data sets. Then the applicability of the superstatistical approach is illustrated using real experimental data. We study two examples, velocity time series measured in turbulent Taylor-Couette flows and time series of log returns of the closing prices of some stock market indices.
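A minimal sketch of the extraction step described above, under the common assumption of locally Gaussian statistics with zero mean (so the local parameter is beta = 1/variance). The window length and the two-level surrogate series below are illustrative choices, not the authors' data.

```python
import numpy as np

def local_beta(u, delta):
    """Estimate the fluctuating superstatistical parameter beta on
    consecutive windows of length delta, assuming locally Gaussian,
    zero-mean statistics so that beta = 1 / (local variance)."""
    n = len(u) // delta
    windows = u[:n * delta].reshape(n, delta)
    return 1.0 / windows.var(axis=1)

rng = np.random.default_rng(0)
# Surrogate data: the variance switches on a time scale much longer
# than the window, as the superstatistics picture requires.
sigma = np.repeat([1.0, 2.0], 5000)
u = rng.normal(0.0, sigma)
betas = local_beta(u, 500)
```

For the surrogate series, the recovered beta sequence plateaus near 1 in the first regime and near 1/4 in the second, confirming that beta indeed varies slowly relative to the window.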
Fourth Conference on Artificial Intelligence for Space Applications
NASA Technical Reports Server (NTRS)
Odell, Stephen L. (Compiler); Denton, Judith S. (Compiler); Vereen, Mary (Compiler)
1988-01-01
Proceedings of a conference held in Huntsville, Alabama, on November 15-16, 1988. The Fourth Conference on Artificial Intelligence for Space Applications brings together diverse technical and scientific work in order to help those who employ AI methods in space applications to identify common goals and to address issues of general interest in the AI community. Topics include the following: space applications of expert systems in fault diagnostics, telemetry monitoring and data collection, design and systems integration, and planning and scheduling; knowledge representation, capture, verification, and management; robotics and vision; adaptive learning; and automatic programming.
Visual Analytics of integrated Data Systems for Space Weather Purposes
NASA Astrophysics Data System (ADS)
Rosa, Reinaldo; Veronese, Thalita; Giovani, Paulo
Analysis of information from multiple data sources obtained through high resolution instrumental measurements has become a fundamental task in all scientific areas. The development of expert methods able to treat such multi-source data systems, with both large variability and measurement extension, is a key for studying complex scientific phenomena, especially those related to systemic analysis in space and environmental sciences. In this talk, we present a time series generalization introducing the concept of the generalized numerical lattice, which represents a discrete sequence of temporal measures for a given variable. In this representation approach, each generalized numerical lattice carries post-analytical data information. We define a generalized numerical lattice as a set of three parameters representing the following data properties: dimensionality, size, and post-analytical measure (e.g., the autocorrelation, Hurst exponent, etc.) [1]. From this generalization, any multi-source database can be reduced to a closed set of classified time series in spatiotemporal generalized dimensions. As a case study, we show a preliminary application to space science data, highlighting the possibility of a real-time analysis expert system. In this particular application, we have selected and analyzed, using detrended fluctuation analysis (DFA), several decimetric solar bursts associated with X-class flares. The association with geomagnetic activity is also reported. The DFA method is performed in the framework of a radio-burst automatic monitoring system. Our results may characterize the evolution of the variability pattern by computing the DFA scaling exponent, scanning the time series with a short window before the extreme event [2]. For the first time, the application of systematic fluctuation analysis for space weather purposes is presented.
The prototype for visual analytics is implemented in a Compute Unified Device Architecture (CUDA) by using the K20 Nvidia graphics processing units (GPUs) to reduce the integrated analysis runtime. [1] Veronese et al. doi: 10.6062/jcis.2009.01.02.0021, 2010. [2] Veronese et al. doi:http://dx.doi.org/10.1016/j.jastp.2010.09.030, 2011.
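A compact sketch of first-order DFA as used above: integrate the demeaned series, detrend each segment with a linear fit, and read the scaling exponent off a log-log fit of the fluctuation function. The scale list in the test is an illustrative choice.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis (order 1): returns the scaling
    exponent alpha in F(n) ~ n**alpha from a log-log least-squares fit."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[:m * n].reshape(m, n)
        t = np.arange(n)
        resid = []
        for s in segs:                       # remove a linear trend
            coef = np.polyfit(t, s, 1)       # from every segment
            resid.append(s - np.polyval(coef, t))
        F.append(np.sqrt(np.mean(np.concatenate(resid) ** 2)))
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha
```

For uncorrelated noise the exponent comes out near 0.5; persistent long-range correlations push it above 0.5, which is the kind of pattern-evolution signal the monitoring scheme above tracks.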
2016-01-01
Semiempirical (SE) methods can be derived from either Hartree–Fock or density functional theory by applying systematic approximations, leading to efficient computational schemes that are several orders of magnitude faster than ab initio calculations. Such numerical efficiency, in combination with modern computational facilities and linear scaling algorithms, allows application of SE methods to very large molecular systems with extensive conformational sampling. To reliably model the structure, dynamics, and reactivity of biological and other soft matter systems, however, good accuracy for the description of noncovalent interactions is required. In this review, we analyze popular SE approaches in terms of their ability to model noncovalent interactions, especially in the context of describing biomolecules, water solution, and organic materials. We discuss the most significant errors and proposed correction schemes, and we review their performance using standard test sets of molecular systems for quantum chemical methods and several recent applications. The general goal is to highlight both the value and limitations of SE methods and stimulate further developments that allow them to effectively complement ab initio methods in the analysis of complex molecular systems. PMID:27074247
Objective measurement of bread crumb texture
NASA Astrophysics Data System (ADS)
Wang, Jian; Coles, Graeme D.
1995-01-01
Evaluation of bread crumb texture plays an important role in judging bread quality. This paper discusses the application of image analysis methods to the objective measurement of the visual texture of bread crumb. The application of Fast Fourier Transform and mathematical morphology methods has been discussed by the authors in their previous work, and a commercial bread texture measurement system has been developed. Based on the nature of bread crumb texture, we compare the advantages and disadvantages of these two methods and of a third method based on features derived directly from statistics of edge density in local windows of the bread image. The analysis of the various methods and experimental results provides insight into the characteristics of the bread texture image and the interconnection between texture measurement algorithms. The usefulness of applying general stochastic process modelling of texture is thus revealed; it leads to more reliable and accurate evaluation of bread crumb texture. During the development of these methods, we also gained useful insights into how subjective judges form opinions about bread visual texture. These are discussed here.
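The third method mentioned, statistics of edge density in local windows, can be sketched as follows; the gradient threshold and window size are illustrative assumptions, not the authors' calibrated values.

```python
import numpy as np

def edge_density_features(img, win=16):
    """Crumb-texture feature map: fraction of edge pixels in each local
    win x win window, using a simple gradient-magnitude threshold
    (an illustrative stand-in for the paper's edge-density statistics)."""
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    g = gx + gy
    edges = g > g.mean() + g.std()          # assumed threshold rule
    h, w = edges.shape
    h, w = h - h % win, w - w % win         # trim to whole windows
    blocks = edges[:h, :w].reshape(h // win, win, w // win, win)
    return blocks.mean(axis=(1, 3))         # edge density per window
```

A coarse crumb yields few, strong edges per window while a fine crumb yields many; the statistics of this map (mean, spread) then serve as objective texture descriptors.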
Synthetic Aperture Radar (SAR) data processing
NASA Technical Reports Server (NTRS)
Beckner, F. L.; Ahr, H. A.; Ausherman, D. A.; Cutrona, L. J.; Francisco, S.; Harrison, R. E.; Heuser, J. S.; Jordan, R. L.; Justus, J.; Manning, B.
1978-01-01
The available and optimal methods for generating SAR imagery for NASA applications were identified. The SAR image quality and data processing requirements associated with these applications were studied. Mathematical operations and algorithms required to process sensor data into SAR imagery were defined. The architecture of SAR image formation processors was discussed, and technology necessary to implement the SAR data processors used in both general purpose and dedicated imaging systems was addressed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmittroth, F.
1978-01-01
Applications of a new data-adjustment code are given. The method is based on a maximum-likelihood extension of generalized least-squares methods that allow complete covariance descriptions for the input data and the final adjusted data evaluations. The maximum-likelihood approach is used with a generalized log-normal distribution that provides a way to treat problems with large uncertainties and that circumvents the problem of negative values that can occur for physically positive quantities. The computer code, FERRET, is written to enable the user to apply it to a large variety of problems by modifying only the input subroutine. The following applications are discussed: a 75-group a priori damage function is adjusted by as much as a factor of two by use of 14 integral measurements in different reactor spectra; reactor spectra and dosimeter cross sections are simultaneously adjusted on the basis of both integral measurements and experimental proton-recoil spectra; measured reaction rates, measured worths, microscopic measurements, and theoretical models are used simultaneously to evaluate dosimeter and fission-product cross sections. Applications in the data reduction of neutron cross section measurements and in the evaluation of reactor after-heat are also considered. 6 figures.
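FERRET's maximum-likelihood log-normal machinery is not reproduced here, but the generalized least-squares update it extends can be sketched in a few lines; the function name and the toy numbers in the test are illustrative.

```python
import numpy as np

def gls_adjust(x0, Cx, A, y, Cy):
    """Generalized least-squares adjustment: combine a prior evaluation
    x0 (covariance Cx) with integral measurements y = A x (covariance Cy).
    Returns the adjusted values and their reduced covariance."""
    S = A @ Cx @ A.T + Cy                 # innovation covariance
    K = Cx @ A.T @ np.linalg.inv(S)       # gain
    x = x0 + K @ (y - A @ x0)             # adjusted evaluation
    C = Cx - K @ A @ Cx                   # reduced (posterior) covariance
    return x, C
```

The full covariance matrices Cx and Cy are what allow correlated input data and correlated measurements to be treated consistently, which is the central feature the abstract emphasizes.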
Differential equations driven by rough paths with jumps
NASA Astrophysics Data System (ADS)
Friz, Peter K.; Zhang, Huilin
2018-05-01
We develop the rough path counterpart of Itô stochastic integration and differential equations driven by general semimartingales. This significantly enlarges the classes of (Itô/forward) stochastic differential equations treatable with pathwise methods. A number of applications are discussed.
Analysis of Airport Access - A Methods Review and Research Program
DOT National Transportation Integrated Search
1971-10-01
The report points up the differences and similarities between airport access travel and general urban trip making. Models and surveys developed for, or applicable, to airport access planning are reviewed. A research proram is proposed which would gen...
Dutheil, Julien; Gaillard, Sylvain; Bazin, Eric; Glémin, Sylvain; Ranwez, Vincent; Galtier, Nicolas; Belkhir, Khalid
2006-04-04
A large number of bioinformatics applications in the fields of bio-sequence analysis, molecular evolution and population genetics typically share input/output methods, data storage requirements and data analysis algorithms. Such common features may be conveniently bundled into re-usable libraries, which enable the rapid development of new methods and robust applications. We present Bio++, a set of Object Oriented libraries written in C++. Available components include classes for data storage and handling (nucleotide/amino-acid/codon sequences, trees, distance matrices, population genetics datasets), various input/output formats, basic sequence manipulation (concatenation, transcription, translation, etc.), phylogenetic analysis (maximum parsimony, Markov models, distance methods, likelihood computation and maximization), population genetics/genomics (diversity statistics, neutrality tests, various multi-locus analyses) and various algorithms for numerical calculus. The implementation of methods aims to be both efficient and user-friendly. Special care was given to the library design to enable easy extension and the development of new methods. We defined a general hierarchy of classes that allows developers to implement their own algorithms while remaining compatible with the rest of the libraries. Bio++ source code is distributed free of charge under the CeCILL general public licence from its website http://kimura.univ-montp2.fr/BioPP.
Tremblay, Marie-Claude; Brousselle, Astrid; Richard, Lucie; Beaudet, Nicole
2013-10-01
Program designers and evaluators should make a point of testing the validity of a program's intervention theory before investing either in implementation or in any type of evaluation. In this context, logic analysis can be a particularly useful option, since it can be used to test the plausibility of a program's intervention theory using scientific knowledge. Professional development in public health is one field among several that would truly benefit from logic analysis, as it appears to be generally lacking in theorization and evaluation. This article presents the application of this analysis method to an innovative public health professional development program, the Health Promotion Laboratory. More specifically, this paper aims to (1) define the logic analysis approach and differentiate it from similar evaluative methods; (2) illustrate the application of this method by a concrete example (logic analysis of a professional development program); and (3) reflect on the requirements of each phase of logic analysis, as well as on the advantages and disadvantages of such an evaluation method. Using logic analysis to evaluate the Health Promotion Laboratory showed that, generally speaking, the program's intervention theory appeared to have been well designed. By testing and critically discussing logic analysis, this article also contributes to further improving and clarifying the method.
NASA Technical Reports Server (NTRS)
Mutterperl, William
1944-01-01
A method of conformal transformation is developed that maps an airfoil into a straight line, the line being chosen as the extended chord line of the airfoil. The mapping is accomplished by operating directly with the airfoil ordinates. The absence of any preliminary transformation is found to shorten the work substantially over that of previous methods. Use is made of the superposition of solutions to obtain a rigorous counterpart of the approximate methods of thin-airfoil theory. The method is applied to the solution of the direct and inverse problems for arbitrary airfoils and pressure distributions. Numerical examples are given. Applications to more general types of regions, in particular to biplanes and to cascades of airfoils, are indicated. (author)
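The report maps an airfoil to a straight line by operating on the ordinates directly; as a compact, textbook-style illustration of conformal airfoil mappings (the forward construction, not the report's inverse one), the classical Joukowski transform sends a circle to an airfoil-like contour:

```python
import numpy as np

def joukowski(center, radius, n=200):
    """Classic conformal map z -> z + 1/z applied to a circle of given
    complex center and radius; an off-center circle passing through
    z = 1 produces a cambered airfoil-like contour."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = center + radius * np.exp(1j * theta)
    return z + 1.0 / z
```

The degenerate case of the unit circle centered at the origin maps onto the flat plate, the real segment [-2, 2], which is the straight-line image the report's chord-line mapping generalizes.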
Koopmans' theorem in the Hartree-Fock method. General formulation
NASA Astrophysics Data System (ADS)
Plakhutin, Boris N.
2018-03-01
This work presents a general formulation of Koopmans' theorem (KT) in the Hartree-Fock (HF) method which is applicable to molecular and atomic systems with arbitrary orbital occupancies and total electronic spin including orbitally degenerate (OD) systems. The new formulation is based on the full set of variational conditions imposed upon the HF orbitals by the variational principle for the total energy and the conditions imposed by KT on the orbitals of an ionized electronic shell [B. N. Plakhutin and E. R. Davidson, J. Chem. Phys. 140, 014102 (2014)]. Based on these conditions, a general form of the restricted open-shell HF method is developed, whose eigenvalues (orbital energies) obey KT for the whole energy spectrum. Particular attention is paid to the treatment of OD systems, for which the new method gives a number of unexpected results. For example, the present method gives four different orbital energies for the triply degenerate atomic level 2p in the second row atoms B to F. Based on both KT conditions and a parallel treatment of atoms B to F within a limited configuration interaction approach, we prove that these four orbital energies, each of which is triply degenerate, are related via KT to the energies of different spin-dependent ionization and electron attachment processes (2p)^N → (2p)^(N±1). A discussion is also presented of specific limitations of the validity of KT in the HF method which arise in OD systems. The practical applicability of the theory is verified by comparing KT estimates of the ionization potentials I(2s) and I(2p) for the second row open-shell atoms Li to F with the relevant experimental data.
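For orientation, the familiar closed-shell (frozen-orbital) statement of Koopmans' theorem, the baseline that this work generalizes, reads:

```latex
I_k \;=\; E_k^{N-1} - E^{N} \;\approx\; -\varepsilon_k,
\qquad
A_k \;=\; E^{N} - E_k^{N+1} \;\approx\; -\varepsilon_k,
```

where ε_k is the HF orbital energy of the shell involved. The point of the abstract above is that for open and orbitally degenerate shells a single orbital energy per spatial orbital no longer suffices: distinct spin-dependent ionization and attachment channels acquire distinct Koopmans energies.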
A Formal Approach to Requirements-Based Programming
NASA Technical Reports Server (NTRS)
Hinchey, Michael G.; Rash, James L.; Rouff, Christopher A.
2005-01-01
No significant general-purpose method is currently available to mechanically transform system requirements into a provably equivalent model. The widespread use of such a method represents a necessary step toward high-dependability system engineering for numerous application domains. Current tools and methods that start with a formal model of a system and mechanically produce a provably equivalent implementation are valuable but not sufficient. The "gap" unfilled by such tools and methods is that the formal models cannot be proven to be equivalent to the requirements. We offer a method for mechanically transforming requirements into a provably equivalent formal model that can be used as the basis for code generation and other transformations. This method is unique in offering full mathematical tractability while using notations and techniques that are well known and well trusted. Finally, we describe further application areas we are investigating for use of the approach.
Analogue Transformations in Physics and their Application to Acoustics
García-Meca, C.; Carloni, S.; Barceló, C.; Jannes, G.; Sánchez-Dehesa, J.; Martínez, A.
2013-01-01
Transformation optics has opened up a revolutionary electromagnetic design paradigm, enabling scientists to build astonishing devices such as invisibility cloaks. Unfortunately, the application of transformation techniques to other branches of physics is often constrained by the structure of the field equations. We develop here a complete transformation method using the idea of analogue spacetimes. The method is general and could be considered as a new paradigm for controlling waves in different branches of physics, from acoustics in quantum fluids to graphene electronics. As an application, we derive an “analogue transformation acoustics” formalism that naturally allows the use of transformations mixing space and time or involving moving fluids, both of which were impossible with the standard approach. To demonstrate the power of our method, we give explicit designs of a dynamic compressor, a spacetime cloak for acoustic waves and a carpet cloak for a moving aircraft. PMID:23774575
Discrimination in a General Algebraic Setting
Fine, Benjamin; Lipschutz, Seymour; Spellman, Dennis
2015-01-01
Discriminating groups were introduced by G. Baumslag, A. Myasnikov, and V. Remeslennikov as an outgrowth of their theory of algebraic geometry over groups. Algebraic geometry over groups became the main method of attack on the solution of the celebrated Tarski conjectures. In this paper we explore the notion of discrimination in a general universal algebra context. As an application we provide a different proof of a theorem of Malcev on axiomatic classes of Ω-algebras. PMID:26171421
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
2004-01-01
This project investigates the development of discontinuous Galerkin finite element methods, for general geometries and triangulations, for solving convection dominated problems, with applications to aeroacoustics. Related issues in high order WENO finite difference and finite volume methods have also been investigated. Discontinuous Galerkin and WENO methods are two classes of high order, high resolution methods suitable for convection dominated simulations with possibly discontinuous or sharp gradient solutions. In [18], we first review these two classes of methods, pointing out their similarities and differences in algorithm formulation, theoretical properties, implementation issues, applicability, and relative advantages. We then present some quantitative comparisons of the third order finite volume WENO methods and discontinuous Galerkin methods for a series of test problems to assess their relative merits in accuracy and CPU timing. In [3], we review the development of the Runge-Kutta discontinuous Galerkin (RKDG) methods for nonlinear convection-dominated problems. These robust and accurate methods have made their way into the mainstream of computational fluid dynamics and are quickly finding use in a wide variety of applications. They combine a special class of Runge-Kutta time discretizations, which allows the method to be nonlinearly stable regardless of its order of accuracy, with a finite element space discretization by discontinuous approximations that incorporates the ideas of numerical fluxes and slope limiters developed during the remarkable advances in high-resolution finite difference and finite volume schemes. The resulting RKDG methods are stable, high-order accurate, and highly parallelizable schemes that can easily handle complicated geometries and boundary conditions.
We review the theoretical and algorithmic aspects of these methods and show several applications including nonlinear conservation laws, the compressible and incompressible Navier-Stokes equations, and Hamilton-Jacobi-like equations.
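The special class of Runge-Kutta time discretizations referred to above is the strong-stability-preserving (SSP) family of Shu and Osher, built from convex combinations of forward-Euler stages. A minimal sketch of the classical third-order scheme follows, paired with an illustrative first-order periodic upwind right-hand side of our own choosing (a stand-in, not the DG spatial discretization of the paper):

```python
import numpy as np

def ssp_rk3_step(u, dt, L):
    """One step of the third-order strong-stability-preserving
    Runge-Kutta scheme in Shu-Osher form: a convex combination of
    forward-Euler stages, so any stability property of the Euler
    step carries over to the full scheme."""
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * L(u2))

def upwind_rhs(u, dx=0.01):
    """Illustrative right-hand side: first-order periodic upwind
    differencing of u_t + u_x = 0 (conservative by construction)."""
    return -(u - np.roll(u, 1)) / dx
```

Because the stages are convex combinations of Euler steps, any total-variation or maximum-principle bound proved for the Euler step with a given limiter is inherited by the full time discretization.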
Biomimetic polymeric superhydrophobic surfaces and nanostructures: from fabrication to applications.
Wen, Gang; Guo, ZhiGuang; Liu, Weimin
2017-03-09
Numerous research studies have contributed to the development of mature superhydrophobic systems. The fabrication and applications of polymeric superhydrophobic surfaces have been discussed and these have attracted tremendous attention over the past few years due to their excellent properties. In general, roughness and chemical composition, the two most crucial factors with respect to surface wetting, provide the basic criteria for yielding polymeric superhydrophobic materials. Furthermore, with their unique properties and flexible configurations, polymers have been one of the most efficient materials for fabricating superhydrophobic materials. This review aims to summarize the most recent progress in polymeric superhydrophobic surfaces. Significantly, the fundamental theories for designing these materials will be presented, and the original methods will be introduced, followed by a summary of multifunctional superhydrophobic polymers and their applications. The principles of these methods can be divided into two categories: the first involves adding nanoparticles to a low surface energy polymer, and the other involves combining a low surface energy material with a textured surface, followed by chemical modification. Notably, surface-initiated radical polymerization is a versatile method for a variety of vinyl monomers, resulting in controlled molecular weights and low polydispersities. The surfaces produced by these methods not only possess superhydrophobicity but also have many applications, such as self-cleaning, self-healing, anti-icing, anti-bioadhesion, oil-water separation, and even superamphiphobic surfaces. Interestingly, the combination of responsive materials and roughness enhances the responsiveness, which allows the achievement of intelligent transformation between superhydrophobicity and superhydrophilicity. 
Nevertheless, surfaces with poor physical and chemical properties are generally unable to withstand the severe conditions of the outside world; thus, it is necessary to optimize the performances of such materials to yield durable superhydrophobic surfaces. To sum up, some challenges and perspectives regarding the future research and development of polymeric superhydrophobic surfaces are presented.
Groves, Ethan; Palenik, Skip; Palenik, Christopher S
2018-04-18
While color is arguably the most important optical property of evidential fibers, the actual dyestuffs responsible for its expression in them are, in forensic trace evidence examinations, rarely analyzed and still less often identified. This is due, primarily, to the exceedingly small quantities of dye present in a single fiber as well as to the fact that dye identification is a challenging analytical problem, even when large quantities are available for analysis. Among the practical reasons for this are the wide range of dyestuffs available (and the even larger number of trade names), the low total concentration of dyes in the finished product, the limited amount of sample typically available for analysis in forensic cases, and the complexity of the dye mixtures that may exist within a single fiber. Literature on the topic of dye analysis is often limited to a specific method, subset of dyestuffs, or an approach that is not applicable given the constraints of a forensic analysis. Here, we present a generalized approach to dye identification that (1) combines several robust analytical methods, (2) is broadly applicable to a wide range of dye chemistries, application classes, and fiber types, and (3) can be scaled down to forensic casework-sized samples. The approach is based on the development of a reference collection of 300 commercially relevant textile dyes that have been characterized by a variety of microanalytical methods (HPTLC, Raman microspectroscopy, infrared microspectroscopy, UV-Vis spectroscopy, and visible microspectrophotometry). Although there is no single approach that is applicable to all dyes on every type of fiber, a combination of these analytical methods has been applied using a reproducible approach that permits the use of reference libraries to constrain the identity of and, in many cases, identify the dye (or dyes) present in a textile fiber sample.
Máthé, Koppány; Buşoniu, Lucian
2015-01-01
Unmanned aerial vehicles (UAVs) have gained significant attention in recent years. Low-cost platforms using inexpensive sensor payloads have been shown to provide satisfactory flight and navigation capabilities. In this report, we survey vision and control methods that can be applied to low-cost UAVs, and we list some popular inexpensive platforms and application fields where they are useful. We also highlight the sensor suites used where this information is available. We overview, among others, feature detection and tracking, optical flow and visual servoing, low-level stabilization and high-level planning methods. We then list popular low-cost UAVs, selecting mainly quadrotors. We discuss applications, restricting our focus to the field of infrastructure inspection. Finally, as an example, we formulate two use-cases for railway inspection, a less explored application field, and illustrate the usage of the vision and control techniques reviewed by selecting appropriate ones to tackle these use-cases. To select vision methods, we run a thorough set of experimental evaluations. PMID:26121608
Wirtz, Carolin M.; Radkovsky, Anna; Ebert, David D.; Berking, Matthias
2014-01-01
Objective Deficits in general emotion regulation (ER) skills have been linked to symptoms of depression and are thus considered a promising target in the treatment of Major depressive disorder (MDD). However, at this point, the extent to which such skills are relevant for coping with depression and whether they should instead be considered a transdiagnostic factor remain unclear. Therefore, the present study aimed to investigate whether successful ER skills application is associated with changes in depressive symptom severity (DSS), anxiety symptom severity (ASS), and general distress severity (GDS) over the course of treatment for MDD. Methods Successful ER skills application, DSS, ASS, and GDS were assessed four times during the first three weeks of treatment in 175 inpatients who met the criteria for MDD. We computed Pearson correlations to test whether successful ER skills application and the three indicators of psychopathology are cross-sectionally associated. We then performed latent growth curve modelling to test whether changes in successful ER skills application are negatively associated with a reduction of DSS, ASS, or GDS. Finally, we utilized latent change score models to examine whether successful ER skills application predicts subsequent reduction of DSS, ASS, or GDS. Results Successful ER skills application was cross-sectionally associated with lower levels of DSS, ASS, and GDS at all points of assessment. An increase in successful skills application during treatment was associated with a decrease in DSS and GDS but not ASS. Finally, successful ER skills application predicted changes in subsequent DSS but neither changes in ASS nor changes in GDS. Conclusions Although general ER skills might be relevant for a broad range of psychopathological symptoms, they might be particularly important for the maintenance and treatment of depressive symptoms. PMID:25330159
Li, Qizhai; Hu, Jiyuan; Ding, Juan; Zheng, Gang
2014-04-01
A classical approach to combining independent test statistics is Fisher's combination of $p$-values, which follows the $\chi^2$ distribution. When the test statistics are dependent, the gamma distribution (GD) is commonly used for the Fisher's combination test (FCT). We propose to use two generalizations of the GD: the generalized and the exponentiated GDs. We study some properties of mis-using the GD for the FCT to combine dependent statistics when one of the two proposed distributions is true. Our results show that both generalizations have better control of type I error rates than the GD, which tends to have inflated type I error rates at more extreme tails. In practice, common model selection criteria (e.g. Akaike information criterion/Bayesian information criterion) can be used to help select a better distribution for the FCT. A simple strategy for applying the two generalizations of the GD in genome-wide association studies is discussed. Applications of the results to genetic pleiotropic associations are described, where multiple traits are tested for association with a single marker.
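For the independent case that the abstract takes as its starting point, Fisher's statistic is $T = -2\sum_{i=1}^{k} \ln p_i \sim \chi^2_{2k}$, and because the degrees of freedom are always even, the survival function has an exact closed form. A minimal stdlib-only sketch (the dependent-case gamma calibration discussed in the abstract is not reproduced here):

```python
import math

def fisher_combined_pvalue(pvals):
    """Fisher's method for k independent p-values:
    T = -2 * sum(log p_i) follows chi-square with 2k degrees of
    freedom.  For even d.o.f. the chi-square survival function is
    exactly  exp(-T/2) * sum_{i<k} (T/2)^i / i!."""
    k = len(pvals)
    t = -2.0 * sum(math.log(p) for p in pvals)
    half = t / 2.0
    return math.exp(-half) * sum(half**i / math.factorial(i) for i in range(k))
```

A sanity check on the formula: with a single p-value the combined p-value reduces to the p-value itself, since exp(-T/2) = exp(ln p) = p.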
A generalized sizing method for revolutionary concepts under probabilistic design constraints
NASA Astrophysics Data System (ADS)
Nam, Taewoo
Internal combustion (IC) engines that consume hydrocarbon fuels have dominated the propulsion systems of air-vehicles for the first century of aviation. In recent years, however, growing concern over rapid climate changes and national energy security has galvanized the aerospace community into delving into new alternatives that could challenge the dominance of the IC engine. Nevertheless, traditional aircraft sizing methods have significant shortcomings for the design of such unconventionally powered aircraft. First, the methods are specialized for aircraft powered by IC engines, and thus are not flexible enough to assess revolutionary propulsion concepts that produce propulsive thrust through a completely different energy conversion process. Another deficiency associated with the traditional methods is that a user of these methods must rely heavily on experts' experience and advice for determining appropriate design margins. However, the introduction of revolutionary propulsion systems and energy sources is very likely to entail an unconventional aircraft configuration, which inexorably disqualifies the conjecture of such "connoisseurs" as a means of risk management. Motivated by such deficiencies, this dissertation aims at advancing two aspects of aircraft sizing: (1) to develop a generalized aircraft sizing formulation applicable to a wide range of unconventionally powered aircraft concepts and (2) to formulate a probabilistic optimization technique that is able to quantify appropriate design margins that are tailored towards the level of risk deemed acceptable to a decision maker. A more generalized aircraft sizing formulation, named the Architecture Independent Aircraft Sizing Method (AIASM), was developed for sizing revolutionary aircraft powered by alternative energy sources by modifying several assumptions of the traditional aircraft sizing method. 
Along with advances in deterministic aircraft sizing, a non-deterministic sizing technique, named the Probabilistic Aircraft Sizing Method (PASM), was developed. The method allows one to quantify adequate design margins to account for the various sources of uncertainty via the application of the chance-constrained programming (CCP) strategy to AIASM. In this way, PASM can also provide insights into a good compromise between cost and safety.
Tu, S W; Eriksson, H; Gennari, J H; Shahar, Y; Musen, M A
1995-06-01
PROTEGE-II is a suite of tools and a methodology for building knowledge-based systems and domain-specific knowledge-acquisition tools. In this paper, we show how PROTEGE-II can be applied to the task of providing protocol-based decision support in the domain of treating HIV-infected patients. To apply PROTEGE-II, (1) we construct a decomposable problem-solving method called episodic skeletal-plan refinement, (2) we build an application ontology that consists of the terms and relations in the domain, and of method-specific distinctions not already captured in the domain terms, and (3) we specify mapping relations that link terms from the application ontology to the domain-independent terms used in the problem-solving method. From the application ontology, we automatically generate a domain-specific knowledge-acquisition tool that is custom-tailored for the application. The knowledge-acquisition tool is used for the creation and maintenance of domain knowledge used by the problem-solving method. The general goal of the PROTEGE-II approach is to produce systems and components that are reusable and easily maintained. This is the rationale for constructing ontologies and problem-solving methods that can be composed from a set of smaller-grained methods and mechanisms. This is also why we tightly couple the knowledge-acquisition tools to the application ontology that specifies the domain terms used in the problem-solving systems. Although our evaluation is still preliminary, for the application task of providing protocol-based decision support, we show that these goals of reusability and easy maintenance can be achieved. We discuss design decisions and the tradeoffs that have to be made in the development of the system.
Ideas for Office Occupations Education.
ERIC Educational Resources Information Center
Alverson, Ruby; And Others
Prepared by South Carolina office occupations teachers, this booklet contains ideas for effective and motivating teaching methods in office occupations courses on the secondary school level. Besides ideas generally applicable, suggestions are included for teaching the following specific subjects: (1) accounting, (2) recordkeeping, (3) cooperative…
Magnetic Resonance Angiography Using Fresh Blood Imaging in Oral and Maxillofacial Regions
Oda, Masafumi; Tanaka, Tatsurou; Kito, Shinji; Habu, Manabu; Kodama, Masaaki; Kokuryo, Shinya; Miyamoto, Ikuya; Yoshiga, Daigo; Yamauchi, Kensuke; Nogami, Shinnosuke; Wakasugi-Sato, Nao; Matsumoto-Takeda, Shinobu; Ishikawa, Ayataka; Nishida, Ikuko; Saeki, Katsura; Morikawa, Kazumasa; Matsuo, Kou; Seta, Yuji; Yamashita, Yoshihiro; Maki, Kenshi; Tominaga, Kazuhiro; Morimoto, Yasuhiro
2012-01-01
The present paper provides general dentists with an introduction to the clinical applications and significance of magnetic resonance angiography (MRA) in the oral and maxillofacial regions. Specifically, the method and characteristics of MRA are first explained using the relevant MR sequences. Next, clinical applications to the oral and maxillofacial regions, such as identification of hemangiomas and surrounding vessels by MRA, are discussed. Moreover, the clinical significance of MRA for other regions is presented to elucidate future clinical applications of MRA in the oral and maxillofacial regions. PMID:23118751
Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello
2013-10-26
Background: Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster(C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. Results: To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for the inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. Conclusions: We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs. PMID:24160725
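The beta-binomial ingredient of the C-LQAS design can be sketched concretely. The snippet below computes, under an illustrative mean/intra-cluster-correlation parameterization (our assumption, not the paper's exact design rules), the probability that a two-stage sample of c clusters of size m yields at least d successes, which is the kind of quantity needed to bound misclassification risk:

```python
import math

def betabinom_pmf(x, n, a, b):
    """Beta-binomial pmf C(n,x) * B(x+a, n-x+b) / B(a,b),
    evaluated via log-gammas (stdlib only)."""
    def lbeta(p, q):
        return math.lgamma(p) + math.lgamma(q) - math.lgamma(p + q)
    return math.comb(n, x) * math.exp(lbeta(x + a, n - x + b) - lbeta(a, b))

def cluster_pass_probability(c, m, d, p, rho):
    """Probability that c clusters of size m yield at least d
    successes in total, when each cluster count is beta-binomial
    with mean p and intra-cluster correlation rho (an illustrative
    parameterization: a = p(1-rho)/rho, b = (1-p)(1-rho)/rho)."""
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    one = [betabinom_pmf(x, m, a, b) for x in range(m + 1)]
    # Convolve the per-cluster distribution c times to get the total.
    total = [1.0]
    for _ in range(c):
        new = [0.0] * (len(total) + m)
        for i, pi in enumerate(total):
            for x, px in enumerate(one):
                new[i + x] += pi * px
        total = new
    return sum(total[d:])
```

Scanning this probability over candidate decision rules d at the "acceptable" and "unacceptable" prevalence levels is how one would verify that a proposed (c, m, d) system keeps both misclassification risks below the user-specified limits.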
A frequency-domain approach to improve ANNs generalization quality via proper initialization.
Chaari, Majdi; Fekih, Afef; Seibi, Abdennour C; Hmida, Jalel Ben
2018-08-01
The ability to train a network without memorizing the input/output data, thereby allowing a good predictive performance when applied to unseen data, is paramount in ANN applications. In this paper, we propose a frequency-domain approach to evaluate the network initialization in terms of quality of training, i.e., generalization capabilities. As an alternative to the conventional time-domain methods, the proposed approach eliminates the approximate nature of network validation using an excess of unseen data. The benefits of the proposed approach are demonstrated using two numerical examples, where two trained networks performed similarly on the training and the validation data sets, yet they revealed a significant difference in prediction accuracy when tested using a different data set. This observation is of utmost importance in modeling applications requiring a high degree of accuracy. The efficiency of the proposed approach is further demonstrated on a real-world problem, where unlike other initialization methods, a more conclusive assessment of generalization is achieved. On the practical front, subtle methodological and implementational facets are addressed to ensure reproducibility and pinpoint the limitations of the proposed approach.
NASA Astrophysics Data System (ADS)
Thylwe, Karl-Erik; McCabe, Patrick
2012-04-01
The classical amplitude-phase method due to Milne, Wilson, Young and Wheeler in the 1930s is known to be a powerful computational tool for determining phase shifts and energy eigenvalues in cases where a sufficiently slowly varying amplitude function can be found. The key for the efficient computations is that the original single-state radial Schrödinger equation is transformed to a nonlinear equation, the Milne equation. Such an equation has solutions that may or may not oscillate, depending on boundary conditions, which then requires a robust recipe for locating the (optimal) ‘almost constant’ solutions for its use in the method. For scattering problems the solutions of the amplitude equations always approach constants as the radial distance r tends to infinity, and there is no problem locating the ‘optimal’ amplitude functions from a low-order semiclassical approximation. In the present work, the amplitude-phase approach is generalized to two coupled Schrödinger equations similar to an earlier generalization to radial Dirac equations. The original scalar amplitude then becomes a vector quantity, and the original Milne equation is generalized accordingly. Numerical applications to resonant electron-atom scattering are illustrated.
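For orientation, the single-channel transformation underlying the method is standard: substituting an amplitude-phase ansatz into the radial Schrödinger equation $u''(r) + k^2(r)u(r) = 0$ decouples the oscillation into a phase integral and the nonlinear Milne equation for the amplitude. This is a sketch of the scalar case only; the generalization described in the abstract promotes the amplitude to a vector quantity:

```latex
u(r) = a(r)\,\sin\varphi(r), \qquad
a''(r) + k^2(r)\,a(r) = a^{-3}(r), \qquad
\varphi'(r) = a^{-2}(r).
```

The choice $\varphi' = a^{-2}$ cancels the cross terms in $u''$, and any slowly varying solution $a(r)$ of the Milne equation yields the "almost constant" amplitude the method relies on.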
A Higher Order Iterative Method for Computing the Drazin Inverse
Soleymani, F.; Stanimirović, Predrag S.
2013-01-01
A method with high convergence rate for finding approximate inverses of nonsingular matrices is suggested and established analytically. An extension of the introduced computational scheme to general square matrices is defined. The extended method could be used for finding the Drazin inverse. The application of the scheme on large sparse test matrices alongside the use in preconditioning of linear system of equations will be presented to clarify the contribution of the paper. PMID:24222747
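The scheme in the abstract is a higher-order relative of the classical Newton-Schulz iteration X <- X(2I - AX), which roughly doubles the number of correct digits per step. The sketch below shows Newton-Schulz itself, not the paper's specific higher-order scheme or its Drazin-inverse extension:

```python
import numpy as np

def newton_schulz_inverse(A, iters=30):
    """Newton-Schulz iteration for an approximate inverse of a
    nonsingular matrix: X <- X(2I - AX), quadratically convergent
    once ||I - AX|| < 1.  The scaled start X0 = A^T / (||A||_1 ||A||_inf)
    is a standard choice that guarantees that contraction for
    nonsingular A."""
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(A.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X
```

Each sweep costs two matrix products; higher-order variants like the paper's trade more products per sweep for fewer sweeps, which can pay off when products are cheap relative to synchronization, e.g. for large sparse matrices used as preconditioners.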
Domain decomposition methods for systems of conservation laws: Spectral collocation approximations
NASA Technical Reports Server (NTRS)
Quarteroni, Alfio
1989-01-01
Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is formulated for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.
Pot economy and one-pot synthesis.
Hayashi, Yujiro
2016-02-01
The one-pot synthesis of a target molecule in the same reaction vessel is widely considered to be an efficient approach in synthetic organic chemistry. In this review, the characteristics and limitations of various one-pot syntheses of biologically active molecules are explained, primarily involving organocatalytic methods as key tactics. Besides catalysis, the pot-economy concepts presented herein are also applicable to organometallic and organic reaction methods in general.
26 CFR 1.412(c)(1)-1 - Determinations to be made under funding method-terms defined.
Code of Federal Regulations, 2010 CFR
2010-04-01
...-terms defined. 1.412(c)(1)-1 Section 1.412(c)(1)-1 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT... Plans, Etc. § 1.412(c)(1)-1 Determinations to be made under funding method—terms defined. (a) Actuarial... bargained plans, see § 1.412(c)(1)-2; for principles applicable to funding methods in general, see...
Taboo Search: An Approach to the Multiple Minima Problem
NASA Astrophysics Data System (ADS)
Cvijovic, Djurdje; Klinowski, Jacek
1995-02-01
Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
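A generic tabu search skeleton makes the abstract's mechanism concrete: move greedily among sampled neighbors while a short-term memory forbids revisiting recent regions, so the search can escape local minima. This is a simplified sketch of the general idea, not the authors' exact recipe for continuous functions:

```python
import random

def tabu_search(f, x0, step=0.1, iters=200, tabu_len=20, n_neighbors=10, seed=0):
    """Generic tabu search: sample neighbors of the current point,
    move to the best one whose coarse grid cell is not on the tabu
    list (so recent regions are temporarily forbidden), and track
    the best point ever seen."""
    rng = random.Random(seed)
    x = list(x0)
    best_x, best_f = x[:], f(x)
    tabu = []
    for _ in range(iters):
        cands = []
        for _ in range(n_neighbors):
            y = [xi + rng.uniform(-step, step) for xi in x]
            key = tuple(round(yi / step) for yi in y)  # grid cell id
            if key not in tabu:
                cands.append((f(y), y, key))
        if not cands:
            continue  # every sampled neighbor was tabu; resample
        fy, y, key = min(cands, key=lambda t: t[0])
        x = y                      # move even if fy is worse: this
        tabu.append(key)           # is what lets the search escape
        if len(tabu) > tabu_len:   # local minima
            tabu.pop(0)
        if fy < best_f:
            best_f, best_x = fy, y[:]
    return best_x, best_f
```

Note the move is accepted even when it worsens f; only the tabu memory prevents cycling, which is the feature that distinguishes this from plain greedy descent.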
Hypergeometric type operators and their supersymmetric partners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cotfas, Nicolae; Cotfas, Liviu Adrian
2011-05-15
The generalization of the factorization method performed by Mielnik [J. Math. Phys. 25, 3387 (1984)] opened new ways to generate exactly solvable potentials in quantum mechanics. We present an application of Mielnik's method to hypergeometric type operators. It is based on some solvable Riccati equations and leads to a unitary description of the quantum systems exactly solvable in terms of orthogonal polynomials or associated special functions.
Y. Chen; S. J. Seybold
2013-01-01
Instar determination of field-collected insect larvae has generally been based on the analysis of head capsule width frequency distributions or bivariate plotting, but few studies have tested the validity of such methods. We used head capsules from exuviae of known instars of the beet armyworm, Spodoptera exigua (Hübner) (Lepidoptera: Noctuidae),...
ERIC Educational Resources Information Center
Klein, Anna C.; Whitney, Douglas R.
Procedures and related issues involved in the application of trait-treatment interaction (TTI) to institutional research, in general, and to placement and proficiency testing, in particular, are discussed and illustrated. Traditional methods for choosing cut-off scores are compared and proposals for evaluating the results in the TTI framework are…
A nondestructive method for continuously monitoring plant growth.
Schwartzkopf, S H
1985-06-01
In the past, plant growth generally has been measured using destructive methods. This paper describes a nondestructive technique for continuously monitoring plant growth. The technique provides a means of directly and accurately measuring plant growth over both short and long time intervals. Application of this technique to the direct measurement of plant growth rates is illustrated using corn (Zea mays L.) as an example.
Aromatic and heterocyclic perfluoroalkyl sulfides. Methods of preparation
2010-01-01
Summary This review covers all of the common methods for the syntheses of aromatic and heterocyclic perfluoroalkyl sulfides, a class of compounds which is finding increasing application as starting materials for the preparation of agrochemicals, pharmaceutical products and, more generally, fine chemicals. A systematic approach is taken depending on the mode of incorporation of the SRF groups and also on the type of reagents used. PMID:20978611
Detecting spatio-temporal modes in multivariate data by entropy field decomposition
NASA Astrophysics Data System (ADS)
Frank, Lawrence R.; Galinsky, Vitaly L.
2016-09-01
A new data analysis method that addresses a general problem of detecting spatio-temporal variations in multivariate data is presented. The method unites two recent and complementary general approaches to data analysis: information field theory (IFT) and entropy spectrum pathways (ESP). Both methods reformulate and incorporate Bayesian theory, and thus use prior information to uncover the underlying structure of the unknown signal. The unification of ESP and IFT creates an approach that is non-Gaussian and nonlinear by construction and is found to produce unique spatio-temporal modes of signal behavior that can be ranked according to their significance, from which space-time trajectories of parameter variations can be constructed and quantified. Two brief examples of real-world applications of the theory to data of completely different and unrelated natures are also presented. The first example provides an analysis of resting-state functional magnetic resonance imaging data that allowed us to create an efficient and accurate computational method for assessing and categorizing brain activity. The second example demonstrates the potential of the method in the analysis of a strong atmospheric storm circulation system during the complicated stage of tornado development and formation, using data recorded by a mobile Doppler radar. A reference implementation of the method will be made available as part of the QUEST toolkit that is currently under development at the Center for Scientific Computation in Imaging.
47 CFR 22.107 - General application requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... PUBLIC MOBILE SERVICES Licensing Requirements and Procedures Applications and Notifications § 22.107 General application requirements. In general, applications for authorizations, assignments of...
Technical Note: Approximate Bayesian parameterization of a process-based tropical forest model
NASA Astrophysics Data System (ADS)
Hartig, F.; Dislich, C.; Wiegand, T.; Huth, A.
2014-02-01
Inverse parameter estimation of process-based models is a long-standing problem in many scientific disciplines. A key question for inverse parameter estimation is how to define the metric that quantifies how well model predictions fit the data. This metric can be expressed by general cost or objective functions, but statistical inversion methods require a particular metric: the probability of observing the data given the model parameters, known as the likelihood. For technical and computational reasons, likelihoods for process-based stochastic models are usually based on general assumptions about variability in the observed data, and not on the stochasticity generated by the model. Only in recent years have new methods become available that allow the generation of likelihoods directly from stochastic simulations. Previous applications of these approximate Bayesian methods have concentrated on relatively simple models. Here, we report on the application of a simulation-based likelihood approximation for FORMIND, a parameter-rich individual-based model of tropical forest dynamics. We show that approximate Bayesian inference, based on a parametric likelihood approximation placed in a conventional Markov chain Monte Carlo (MCMC) sampler, performs well in retrieving known parameter values from virtual inventory data generated by the forest model. We analyze the results of the parameter estimation, examine its sensitivity to the choice and aggregation of model outputs and observed data (summary statistics), and demonstrate the application of this method by fitting the FORMIND model to field data from an Ecuadorian tropical forest. Finally, we discuss how this approach differs from approximate Bayesian computation (ABC), another method commonly used to generate simulation-based likelihood approximations.
Our results demonstrate that simulation-based inference, which offers considerable conceptual advantages over more traditional methods for inverse parameter estimation, can be successfully applied to process-based models of high complexity. The methodology is particularly suitable for heterogeneous and complex data structures and can easily be adjusted to other model types, including most stochastic population and individual-based models. Our study therefore provides a blueprint for a fairly general approach to parameter estimation of stochastic process-based models.
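The workflow described above — fitting a parametric (here Gaussian) approximation to simulated summary statistics and placing it inside a conventional Metropolis sampler — can be sketched with a toy stochastic model. This is a minimal illustration of the synthetic-likelihood idea, not the FORMIND setup: the Poisson "simulator", the summary statistic (the sample mean), and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def synthetic_loglik(theta, s_obs, n_sims=100):
    # Parametric likelihood approximation: fit a Gaussian to the simulated
    # summary statistic (here, the sample mean of a toy Poisson simulator)
    # and evaluate the observed summary under it.
    sims = rng.poisson(theta, size=(n_sims, 50)).mean(axis=1)
    mu, sd = sims.mean(), sims.std() + 1e-9
    return -0.5 * ((s_obs - mu) / sd) ** 2 - np.log(sd)

def metropolis(s_obs, n_steps=1000, step=0.5):
    # Conventional random-walk MCMC with the simulation-based likelihood
    # plugged in place of an analytical one.
    theta = 1.0
    ll = synthetic_loglik(theta, s_obs)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()
        if prop > 0:
            ll_prop = synthetic_loglik(prop, s_obs)
            if np.log(rng.random()) < ll_prop - ll:
                theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)

# "Virtual inventory" data generated at a known rate of 4.
s_obs = np.random.default_rng(1).poisson(4.0, size=50).mean()
chain = metropolis(s_obs)
print(round(chain[200:].mean(), 2))  # posterior concentrates near the true rate
```

The sampler recovers the known parameter from simulated data, mirroring the virtual-inventory validation described in the abstract.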
Closed loop problems in biomechanics. Part II--an optimization approach.
Vaughan, C L; Hay, J G; Andrews, J G
1982-01-01
A closed loop problem in biomechanics may be defined as a problem in which there are one or more closed loops formed by the human body in contact with itself or with an external system. Under certain conditions the problem is indeterminate--the unknown forces and torques outnumber the equations. Force transducing devices, which would help solve this problem, have serious drawbacks, and existing methods are inaccurate and lack generality. The purposes of the present paper are (1) to develop a general procedure for solving closed loop problems; (2) to illustrate the application of the procedure; and (3) to examine the validity of the procedure. A mathematical optimization approach is applied to the solution of three different closed loop problems--walking up stairs, vertical jumping and cartwheeling. The following conclusions are drawn: (1) the method described is reasonably successful for predicting horizontal and vertical reaction forces at the distal segments, although problems exist in predicting the points of application of these forces; (2) the results provide some support for the notion that the human neuromuscular mechanism attempts to minimize the joint torques and thus, to a certain degree, the amount of muscular effort; (3) in the validation procedure it is desirable to have a force device for each of the distal segments in contact with a fixed external system; and (4) the method is sufficiently general to be applied to all classes of closed loop problems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1999-02-10
Evolutionary programs (EPs) and evolutionary pattern search algorithms (EPSAs) are two general classes of evolutionary methods for optimizing on continuous domains. The relative performance of these methods has been evaluated on standard global optimization test functions, and these results suggest that EPSAs converge to near-optimal solutions more robustly than EPs. In this paper we evaluate the relative performance of EPSAs and EPs on a real-world application: flexible ligand binding in the Autodock docking software. We compare the performance of these methods on a suite of docking test problems. Our results confirm that EPSAs and EPs have comparable performance, and they suggest that EPSAs may be more robust on larger, more complex problems.
Step-control of electromechanical systems
Lewis, Robert N.
1979-01-01
The response of an automatic control system to a general input signal is improved by applying a test input signal, observing the response to the test input signal, and determining the correctional constants necessary to provide a modified input signal to be added to the input to the system. A method is disclosed for determining the correctional constants. The modified input signal, when applied in conjunction with an operating signal, provides a total system output exhibiting an improved response. This method is applicable to open-loop or closed-loop control systems. The method is also applicable to unstable systems, allowing a controlled shut-down before a dangerous or destructive response occurs, and to systems whose characteristics vary with time, resulting in improved adaptive systems.
Application of Patterson-function direct methods to materials characterization.
Rius, Jordi
2014-09-01
The aim of this article is a general description of the so-called Patterson-function direct methods (PFDM), from their origin to their present state. It covers a 20-year period of methodological contributions to crystal structure solution, most of them published in Acta Crystallographica Section A. The common feature of these variants of direct methods is the introduction of the experimental intensities in the form of the Fourier coefficients of origin-free Patterson-type functions, which allows the active use of both strong and weak reflections. The different optimization algorithms are discussed and their performances compared. This review focuses not only on those PFDM applications related to powder diffraction data but also on some recent results obtained with electron diffraction tomography data.
Xu, Yisheng; Tong, Yunxia; Liu, Siyuan; Chow, Ho Ming; AbdulSabur, Nuria Y.; Mattay, Govind S.; Braun, Allen R.
2014-01-01
A comprehensive set of methods based on spatial independent component analysis (sICA) is presented as a robust technique for artifact removal, applicable to a broad range of functional magnetic resonance imaging (fMRI) experiments that have been plagued by motion-related artifacts. Although the applications of sICA for fMRI denoising have been studied previously, three fundamental elements of this approach have not been established as follows: 1) a mechanistically-based ground truth for component classification; 2) a general framework for evaluating the performance and generalizability of automated classifiers; 3) a reliable method for validating the effectiveness of denoising. Here we perform a thorough investigation of these issues and demonstrate the power of our technique by resolving the problem of severe imaging artifacts associated with continuous overt speech production. As a key methodological feature, a dual-mask sICA method is proposed to isolate a variety of imaging artifacts by directly revealing their extracerebral spatial origins. It also plays an important role for understanding the mechanistic properties of noise components in conjunction with temporal measures of physical or physiological motion. The potentials of a spatially-based machine learning classifier and the general criteria for feature selection have both been examined, in order to maximize the performance and generalizability of automated component classification. The effectiveness of denoising is quantitatively validated by comparing the activation maps of fMRI with those of positron emission tomography acquired under the same task conditions. The general applicability of this technique is further demonstrated by the successful reduction of distance-dependent effect of head motion on resting-state functional connectivity. PMID:25225001
NASA Astrophysics Data System (ADS)
Baran, A. J.; Hesse, Evelyn; Sourdeval, Odran
2017-03-01
Future satellite missions, from 2022 onwards, will obtain near-global measurements of cirrus at microwave and sub-millimetre frequencies. To realise the potential of these observations, fast and accurate light-scattering methods are required to calculate scattered millimetre and sub-millimetre intensities from complex ice crystals. Here, the applicability of the ray tracing with diffraction on facets (RTDF) method in predicting the bulk scalar optical properties and phase functions of randomly oriented hexagonal ice columns and hexagonal ice aggregates at millimetre frequencies is investigated. The applicability of RTDF is shown to be acceptable down to size parameters of about 18, between the frequencies of 243 and 874 GHz. It is demonstrated that RTDF is generally well within about 10% of T-matrix solutions obtained for the scalar optical properties assuming hexagonal ice columns. Moreover, on replacing the electromagnetic scalar optical property solutions obtained for the hexagonal ice aggregate with their RTDF counterparts at size parameter values of about 18 or greater, the bulk scalar optical properties can be calculated to generally well within ±5% of an electromagnetic-based database. The RTDF-derived bulk scalar optical properties result in brightness temperature errors generally within about ±4 K at 874 GHz; errors arising from differing microphysical assumptions can easily exceed this. Similar findings hold for the bulk scattering phase functions. This is because the scattering solutions are dominated by the processes of diffraction and reflection, both of which are well described by RTDF. The impact of centimetre-sized complex ice crystals on interpreting cirrus polarisation measurements at sub-millimetre frequencies is discussed.
Zhang, Lei; Zeng, Zhi; Ji, Qiang
2011-09-01
The chain graph (CG) is a hybrid probabilistic graphical model (PGM) capable of modeling heterogeneous relationships among random variables. So far, however, its application in image and video analysis has been very limited, due to the lack of principled learning and inference methods for CGs of general topology. To overcome this limitation, we introduce methods to extend the conventional chain-like CG model to a CG model with more general topology, together with the associated methods for learning and inference in such a general CG model. Specifically, we propose techniques to systematically construct a generally structured CG, to parameterize this model, to derive its joint probability distribution, to perform joint parameter learning, and to perform probabilistic inference in this model. To demonstrate the utility of such an extended CG, we apply it to two challenging image and video analysis problems: human activity recognition and image segmentation. The experimental results show improved performance of the extended CG model over conventional directed or undirected PGMs. This study demonstrates the promise of the extended CG for effective modeling and inference of complex real-world problems.
The Green's functions for peridynamic non-local diffusion.
Wang, L J; Xu, J F; Wang, J X
2016-09-01
In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
Protocol vulnerability detection based on network traffic analysis and binary reverse engineering.
Wen, Shameng; Meng, Qingkun; Feng, Chao; Tang, Chaojing
2017-01-01
Network protocol vulnerability detection plays an important role in many domains, including protocol security analysis, application security, and network intrusion detection. In this study, by analyzing the general fuzzing method of network protocols, we propose a novel approach that combines network traffic analysis with the binary reverse engineering method. For network traffic analysis, the block-based protocol description language is introduced to construct test scripts, while the binary reverse engineering method employs the genetic algorithm with a fitness function designed to focus on code coverage. This combination leads to a substantial improvement in fuzz testing for network protocols. We build a prototype system and use it to test several real-world network protocol implementations. The experimental results show that the proposed approach detects vulnerabilities more efficiently and effectively than general fuzzing methods such as SPIKE.
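The combination described in this abstract — a genetic algorithm whose fitness function rewards code coverage — can be illustrated at toy scale. The sketch below is a generic coverage-guided GA, not the authors' prototype: the `target` parser, its branch structure, and all GA parameters are hypothetical stand-ins for an instrumented protocol implementation.

```python
import random

random.seed(0)

def target(data):
    # Hypothetical protocol parser with nested branches; the returned set of
    # branch ids stands in for instrumented code-coverage feedback.
    covered = {0}
    if data[0] > 200:
        covered.add(1)
        if data[1] > 200:
            covered.add(2)
            if data[2] > 200:
                covered.add(3)
                if sum(data) > 900:
                    covered.add(4)
    return covered

def mutate(data):
    # Flip one random byte of the test case.
    data = bytearray(data)
    data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

def crossover(a, b):
    cut = random.randrange(1, min(len(a), len(b)))
    return a[:cut] + b[cut:]

def fuzz(generations=200, pop_size=20):
    # Genetic algorithm: fitness is the amount of coverage a test case achieves.
    pop = [bytes(random.randrange(256) for _ in range(6)) for _ in range(pop_size)]
    total_coverage = set()
    for _ in range(generations):
        pop.sort(key=lambda d: len(target(d)), reverse=True)
        for d in pop:
            total_coverage |= target(d)
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                         for _ in range(pop_size - len(parents))]
    return total_coverage

covered = fuzz()
print(sorted(covered))
```

Selection preserves inputs that reach deeper branches, so coverage accumulates stage by stage — the property that makes a coverage-focused fitness function more effective than blind mutation.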
Evaluation of generalized degrees of freedom for sparse estimation by replica method
NASA Astrophysics Data System (ADS)
Sakata, A.
2016-12-01
We develop a method to evaluate the generalized degrees of freedom (GDF) for linear regression with sparse regularization. The GDF is a key factor in model selection, and thus its evaluation is useful in many modelling applications. An analytical expression for the GDF is derived using the replica method in the large-system-size limit with random Gaussian predictors. The resulting formula has a universal form that is independent of the type of regularization, providing us with a simple interpretation. Within the framework of replica symmetric (RS) analysis, GDF has a physical meaning as the effective fraction of non-zero components. The validity of our method in the RS phase is supported by the consistency of our results with previous mathematical results. The analytical results in the RS phase are calculated numerically using the belief propagation algorithm.
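The RS-phase interpretation above — GDF as the effective fraction of non-zero components — can be checked numerically in the simplest sparse-regression setting. The sketch below assumes an orthogonal (identity) design and unit noise, where the lasso reduces to soft thresholding; it illustrates the known lasso GDF result rather than the replica calculation itself, and all problem sizes are assumptions.

```python
import numpy as np

def soft_threshold(z, lam):
    # Componentwise lasso solution under an orthogonal (here: identity) design.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

rng = np.random.default_rng(0)
n, lam, reps = 1000, 1.0, 500
beta = np.concatenate([rng.normal(0.0, 2.0, 100), np.zeros(n - 100)])  # sparse truth

# Monte Carlo estimate of the GDF from its covariance definition,
# df = sum_i cov(muhat_i, y_i) / sigma^2, with noise variance sigma^2 = 1.
Y = beta + rng.normal(0.0, 1.0, (reps, n))
B = soft_threshold(Y, lam)
df_mc = ((Y - Y.mean(axis=0)) * (B - B.mean(axis=0))).sum() / (reps - 1)

# Known result for the lasso (Zou, Hastie & Tibshirani 2007): the GDF equals
# the expected number of non-zero estimated coefficients, i.e. n times the
# effective fraction of non-zero components.
nonzero_frac = np.count_nonzero(B) / (reps * n)
print(round(df_mc / n, 3), round(nonzero_frac, 3))
```

The two estimates agree closely, consistent with the interpretation of GDF as the effective fraction of non-zero components.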
Seismic Hazard Analysis — Quo vadis?
NASA Astrophysics Data System (ADS)
Klügel, Jens-Uwe
2008-05-01
The paper is dedicated to a review of the methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of the different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for different applications, such as the design of critical and general (non-critical) civil infrastructures and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies, thus limiting their practical applications. These deficiencies have their roots in the use of inadequate probabilistic models and an insufficient understanding of modern concepts of risk analysis, as has been revealed in some recent large-scale studies. These deficiencies result in the inability to treat dependencies between physical parameters correctly and, finally, in an incorrect treatment of uncertainties. As a consequence, the results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. The attempt to compensate for these problems by a systematic use of expert elicitation has, so far, not resulted in any improvement of the situation. It is also shown that scenario earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructures without validation. Because the assessment of the technical as well as the financial risks associated with potential earthquake damage requires a risk analysis, current methods are based on a probabilistic approach with its unsolved deficiencies.
Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and generally robust design basis for applications such as the design of critical infrastructures, especially with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in the safety factors. These factors are derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially when applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis. They are related to seismic sources discovered by geological, geomorphologic, geodetic and seismological investigations or derived from historical references. Scenario-based methods can be expanded for risk analysis applications with an extended data analysis providing the frequency of seismic events. Such an extension provides a better informed risk model that is suitable for risk-informed decision making.
Some remarks on the numerical solution of parabolic partial differential equations
NASA Astrophysics Data System (ADS)
Campagna, R.; Cuomo, S.; Leveque, S.; Toraldo, G.; Giannino, F.; Severino, G.
2017-11-01
Numerous environmental and engineering applications relying upon the theory of diffusion phenomena in chaotic environments have recently stimulated interest in the numerical solution of parabolic partial differential equations (PDEs). In the present paper, we outline a formulation of the mathematical problem underlying a quite general diffusion mechanism in natural environments, and we briefly remark on the applicability of the (straightforward) finite difference method. An illustrative example is also presented.
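A minimal sketch of the straightforward finite difference method mentioned above, applied to the canonical parabolic PDE (the 1-D heat equation); the diffusivity, grid, and initial condition are assumptions chosen so the exact solution is known.

```python
import numpy as np

# Explicit FTCS finite difference scheme for the model parabolic PDE
# u_t = D u_xx on [0, 1] with u = 0 at both boundaries. The straightforward
# method is conditionally stable: it requires r = D*dt/dx**2 <= 1/2.
D, nx, nt = 1.0, 51, 2000
dx = 1.0 / (nx - 1)
dt = 0.4 * dx**2 / D           # r = 0.4, inside the stability limit
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)          # initial condition with a known exact solution

for _ in range(nt):
    u[1:-1] += D * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

# Exact solution of this diffusion problem: sin(pi*x) * exp(-pi^2 * D * t).
t = nt * dt
err = np.max(np.abs(u - np.sin(np.pi * x) * np.exp(-np.pi**2 * D * t)))
print(err)
```

The stability restriction on the time step is exactly the kind of caveat to the "straightforward" finite difference method that the abstract alludes to; implicit schemes remove it at the cost of a linear solve per step.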
Air Force Engineering Research Initiation Grant Program
1994-06-21
"MISFET Structures for High-Frequency Device Applications" (RI-B-91-13); Prof. John W. Silvestro, Clemson University, "The Effect of Scattering by a Near..."; "...Synthesis Method for Concurrent Engineering Applications" (RI-B-92-03); Prof. Steven H. Collicott, Purdue University, "An Experimental Study of the Effect of a ..."; ...beams is studied. The effect of interply delaminations on natural frequencies and mode shapes is evaluated analytically. A generalized variational
Detection of coupling delay: A problem not yet solved
NASA Astrophysics Data System (ADS)
Coufal, David; Jakubík, Jozef; Jajcay, Nikola; Hlinka, Jaroslav; Krakovská, Anna; Paluš, Milan
2017-08-01
Nonparametric detection of coupling delay in unidirectionally and bidirectionally coupled nonlinear dynamical systems is examined. Both continuous and discrete-time systems are considered. Two detection methods are assessed: the method based on conditional mutual information (the CMI method, also known as the transfer entropy method) and the method of convergent cross mapping (the CCM method). Computer simulations show that neither method is generally reliable in the detection of coupling delays. For continuous-time chaotic systems, the CMI method appears to be more sensitive and applicable over a broader range of coupling parameters than the CCM method. In the case of the tested discrete-time dynamical systems, the CCM method was found to be more sensitive, while the CMI method required a much stronger coupling strength to yield correct results. However, when the studied systems contain a strong oscillatory component in their dynamics, the results of both methods become ambiguous. The present study suggests that the results of the tested algorithms should be interpreted with utmost care and that the nonparametric detection of coupling delay, in general, is a problem not yet solved.
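A toy version of the delay scan can make the idea concrete. The sketch below uses plain lagged mutual information rather than the full conditional mutual information of the CMI method, on an assumed discrete-time linear system with a known 5-step coupling delay; system, coupling strength, and estimator settings are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_info(a, b, bins=16):
    # Plug-in mutual information (nats) from a 2-D histogram.
    p_ab, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = p_ab / p_ab.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return (p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])).sum()

# Unidirectionally coupled discrete-time system with a known coupling delay of
# 5 steps: the driver x is white noise, the response y integrates x from 5
# steps back.
n, true_delay = 20000, 5
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(true_delay, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - true_delay] + 0.1 * rng.standard_normal()

# Scan candidate delays with lagged mutual information, a simplified
# (unconditioned) stand-in for the conditional mutual information of the
# CMI method.
delays = list(range(1, 11))
mi = [mutual_info(x[: n - d], y[d:]) for d in delays]
best = delays[int(np.argmax(mi))]
print(best)  # the scan peaks at the true delay of 5
```

For a white-noise driver the peak of the scan sits cleanly at the true delay; the ambiguity the abstract warns about arises precisely when the driver has strong serial structure, so that lagged dependence smears across neighbouring delays.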
Multi-application controls: Robust nonlinear multivariable aerospace controls applications
NASA Technical Reports Server (NTRS)
Enns, Dale F.; Bugajski, Daniel J.; Carter, John; Antoniewicz, Bob
1994-01-01
This viewgraph presentation describes the general methodology used to apply Honeywell's Multi-Application Control (MACH) and the specific application to the F-18 High Angle-of-Attack Research Vehicle (HARV), including a piloted-simulation handling qualities evaluation. The general steps include insertion of modeling data for geometry and mass properties, aerodynamics, propulsion data and assumptions, and requirements and specifications, e.g. definition of control variables, handling qualities, stability margins, and statements for bandwidth, control power, priorities, and position and rate limits. The specific steps include choice of independent variables for least-squares fits to aerodynamic and propulsion data, modifications to the management of the controls with regard to integrator windup and actuation limiting and priorities, e.g. pitch priority over roll, and command limiting to prevent departures and/or undesirable inertial coupling or an inability to recover to a stable trim condition. The HARV control problem is characterized by significant nonlinearities and multivariable interactions in the low-speed, high angle-of-attack, high angular rate flight regime. Systematic approaches to the control of vehicle motions modeled with coupled nonlinear equations of motion have been developed. This paper discusses the dynamic inversion approach, which explicitly accounts for nonlinearities in the control design. Multiple control effectors (including aerodynamic control surfaces and thrust vectoring control) and sensors are used to control the motions of the vehicles in several degrees of freedom. Several maneuvers are used to illustrate the performance of MACH in the high angle-of-attack flight regime. Analytical methods for assessing the robust performance of the multivariable control system in the presence of math modeling uncertainty, disturbances, and commands have reached a high level of maturity.
The structured singular value (mu) frequency response methodology is presented as a method for analyzing robust performance, and the mu-synthesis method is presented as a method for synthesizing a robust control system. The paper concludes with the authors' expectations regarding future applications of robust nonlinear multivariable controls.
Stochastic reconstructions of spectral functions: Application to lattice QCD
NASA Astrophysics Data System (ADS)
Ding, H.-T.; Kaczmarek, O.; Mukherjee, Swagato; Ohno, H.; Shu, H.-T.
2018-05-01
We present a detailed study of the application of two stochastic approaches, the stochastic optimization method (SOM) and stochastic analytical inference (SAI), to extract spectral functions from Euclidean correlation functions. SOM has the advantage that it does not require prior information. On the other hand, SAI is a more generalized method based on Bayesian inference. Under the mean field approximation SAI reduces to the often-used maximum entropy method (MEM), and for a specific choice of the prior SAI becomes equivalent to SOM. To test the applicability of these two stochastic methods to lattice QCD, we first apply them to various reasonably chosen model correlation functions and present detailed comparisons of the reconstructed spectral functions obtained from SOM, SAI and MEM. Next, we present similar studies for charmonia correlation functions obtained from lattice QCD computations using clover-improved Wilson fermions on large, fine, isotropic lattices at 0.75 and 1.5 Tc, Tc being the deconfinement transition temperature of a pure gluon plasma. We find that SAI and SOM give results consistent with MEM at these two temperatures.
NASA Astrophysics Data System (ADS)
Larson, Peder E. Z.; Kerr, Adam B.; Leon Swisher, Christine; Pauly, John M.; Vigneron, Daniel B.
2012-12-01
In this work, we present a new MR spectroscopy approach for directly observing nuclear spins that undergo exchange, metabolic conversion, or, generally, any frequency shift during a mixing time. Unlike conventional approaches to observing these processes, such as exchange spectroscopy (EXSY), this rapid approach requires only a single encoding step and thus is readily applicable to hyperpolarized MR, in which the magnetization is not replenished after T1 decay and RF excitations. The method is based on stimulated echoes and uses phase-sensitive detection in conjunction with precisely chosen echo times in order to separate spins generated during the mixing time from those present prior to mixing. We call the method Metabolic Activity Decomposition Stimulated-echo Acquisition Mode, or MAD-STEAM. We have validated this approach and applied it in vivo to normal mice and a transgenic prostate cancer mouse model for observing pyruvate-lactate conversion, which has been shown to be elevated in numerous tumor types. In this application, it provides an improved measure of cellular metabolism by separating [1-13C]-lactate produced in tissue by metabolic conversion from [1-13C]-lactate that has flowed into the tissue or is in the blood. Generally, MAD-STEAM can be applied to any system in which spins undergo a frequency shift.
NASA Astrophysics Data System (ADS)
Klein, Kristopher; Kasper, Justin; Korreck, Kelly; Alterman, Benjamin
2017-04-01
The role of free-energy-driven instabilities in governing heating and acceleration processes in the heliosphere has been studied for over half a century, with significant recent advancements enabled by the statistical analysis of decades' worth of observations from missions such as WIND. Typical studies focus on marginal stability boundaries in a reduced parameter space, such as the canonical plasma beta versus temperature anisotropy plane, due to a single source of free energy. We present a more general method of determining stability, accounting for all possible sources of free energy in the constituent plasma velocity distributions. Through this novel implementation, we can efficiently determine whether the plasma is linearly unstable and, if so, how many normal modes are growing. Such identification will enable us to better pinpoint the dominant heating or acceleration processes in solar wind plasma. The theory behind this approach is reviewed, followed by a discussion of our methods for a robust numerical implementation and an initial application to portions of the WIND data set. Further application of this method to velocity distribution measurements from current missions, including WIND, upcoming missions, including Solar Probe Plus and Solar Orbiter, and missions currently in preliminary phases, such as ESA's THOR and NASA's IMAP, will help elucidate how instabilities shape the evolution of the heliosphere.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heydari, M.H., E-mail: heydari@stu.yazd.ac.ir; The Laboratory of Quantum Information Processing, Yazd University, Yazd; Hooshmandasl, M.R., E-mail: hooshmandasl@yazd.ac.ir
Because of the nonlinearity, closed-form solutions of many important stochastic functional equations are virtually impossible to obtain. Thus, numerical solutions are a viable alternative. In this paper, a new computational method based on the generalized hat basis functions, together with their stochastic operational matrix of Itô integration, is proposed for solving nonlinear stochastic Itô integral equations over large intervals. In the proposed method, a new technique for computing nonlinear terms in such problems is presented. The main advantage of the proposed method is that it transforms the problems under consideration into nonlinear systems of algebraic equations which can be solved simply. Error analysis of the proposed method is investigated, and the efficiency of the method is demonstrated on some concrete examples. The obtained results reveal that the proposed method is very accurate and efficient. As two useful applications, the proposed method is applied to obtain approximate solutions of stochastic population growth models and the stochastic pendulum problem.
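To make the stochastic population growth application concrete, the sketch below solves the same kind of Itô equation with a plain Euler-Maruyama scheme — a generic alternative, not the hat-basis operational matrix method of the paper — and checks the Monte Carlo mean against the known exact mean. Model parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama sketch for a stochastic population growth model,
# dX = mu*X dt + sigma*X dW (geometric Brownian motion), whose exact
# mean is E[X_t] = X0 * exp(mu * t).
mu, sigma, x0, T = 0.5, 0.2, 1.0, 1.0
n_steps, n_paths = 200, 20000
dt = T / n_steps

X = np.full(n_paths, x0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)  # Brownian increments
    X = X + mu * X * dt + sigma * X * dW

print(round(X.mean(), 2))  # close to exp(0.5)
```

Such time-stepping schemes require many small steps over a large interval; the operational-matrix approach described in the abstract instead reduces the problem to one algebraic system.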
Babin, D; Pižurica, A; Bellens, R; De Bock, J; Shang, Y; Goossens, B; Vansteenkiste, E; Philips, W
2012-07-01
Extraction of structural and geometric information from 3-D images of blood vessels is a well-known and widely addressed segmentation problem. The segmentation of cerebral blood vessels is of great importance in diagnostic and clinical applications, especially for the diagnosis and surgery of arteriovenous malformations (AVM). However, techniques addressing the segmentation of the inner structure of an AVM are rare. In this work we present a novel pixel-profiling method and apply it to the segmentation of 3-D angiography images of AVMs. Our algorithm stands out in situations with low-resolution images and high variability of pixel intensity. Another advantage of our method is that its parameters are set automatically, so little manual user intervention is required. Results on phantoms and real data demonstrate its effectiveness and potential for fine delineation of AVM structure. Copyright © 2012 Elsevier B.V. All rights reserved.
Mena-Enriquez, Mayra; Flores-Contreras, Lucia; Armendáriz-Borunda, Juan
2012-01-01
Viral vectors based on adeno-associated virus (AAV) are widely used in gene therapy protocols because they have characteristics that make them valuable for the treatment of genetic and chronic degenerative diseases. The AAV2 serotype has been the best characterized to date. However, AAV vectors developed from other serotypes are of special interest, since their organ-specific tropism increases their potential for delivering a transgene to the target cells in which it exerts its therapeutic effect. This article summarizes general features of AAV and methods for their production and purification. It also discusses the use of these vectors in vitro and in vivo and their application in gene therapy clinical trials.
A risk evaluation model and its application in online retailing trustfulness
NASA Astrophysics Data System (ADS)
Ye, Ruyi; Xu, Yingcheng
2017-08-01
Building a general risk-evaluation model in advance can improve the convenience, consistency, and comparability of repeated risk evaluations when those evaluations concern the same area and serve a similar purpose. One of the most convenient and common forms of such a model is an index system consisting of several indices, their corresponding weights, and a scoring method. In this article, we propose a method for building a risk-evaluation index system that guarantees a proportional relationship between the resulting score and the expected risk loss, and we provide an application example from online retailing.
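The weighted-sum structure of such an index system can be sketched as follows. The index names, weights, and values are hypothetical, not the paper's calibrated model; the point is only that a linear score stays proportional to expected risk loss as long as each index is scaled to its share of the loss.

```python
# Minimal sketch of a risk-evaluation index system (hypothetical indices and
# weights, not the paper's calibrated model): the credit score is a weighted
# sum of normalized index values, which keeps the score proportional to the
# expected risk loss when each index is scaled to its loss contribution.
def risk_score(values: dict, weights: dict) -> float:
    """Weighted-sum credit score over a fixed index system."""
    assert set(values) == set(weights), "every index needs a weight"
    return sum(weights[k] * values[k] for k in values)

# Hypothetical indices for an online retailer, each normalized to [0, 1].
weights = {"delivery_delay": 0.3, "complaint_rate": 0.5, "refund_rate": 0.2}
retailer = {"delivery_delay": 0.1, "complaint_rate": 0.2, "refund_rate": 0.4}

score = risk_score(retailer, weights)
print(round(score, 3))  # 0.3*0.1 + 0.5*0.2 + 0.2*0.4 = 0.21
```

Because the score is linear in the index values, doubling every index value doubles the score, which is the proportionality property the abstract emphasizes.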
Transportation optimization with fuzzy trapezoidal numbers based on possibility theory.
He, Dayi; Li, Ran; Huang, Qi; Lei, Ping
2014-01-01
In this paper, a parametric method is introduced to solve the fuzzy transportation problem. Because the parameters of a transportation problem carry uncertainty, this paper develops a generalized fuzzy transportation problem with fuzzy supply, demand, and cost. For simplicity, these parameters are assumed to be trapezoidal fuzzy numbers. Based on possibility theory, and consistent with decision-makers' subjective judgments and practical requirements, the fuzzy transportation problem is transformed into a crisp linear transportation problem by defuzzifying the fuzzy constraints and objectives with the fractile and modality approaches. Finally, a numerical example illustrates the application of fuzzy transportation programming and verifies the validity of the proposed methods.
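The overall pipeline — defuzzify trapezoidal parameters, then solve a crisp transportation problem — can be sketched with a simpler substitute for the paper's fractile/modality machinery: the graded-mean formula (a + 2b + 2c + d)/6 for a trapezoidal number (a, b, c, d), followed by a greedy least-cost allocation. All data below are illustrative assumptions.

```python
# Minimal sketch (NOT the paper's fractile/modality approach): defuzzify
# trapezoidal fuzzy numbers (a, b, c, d) with the graded-mean formula, then
# feed the crisp data to a simple least-cost transportation heuristic.
def graded_mean(t):
    a, b, c, d = t
    return (a + 2 * b + 2 * c + d) / 6.0

def least_cost(supply, demand, cost):
    """Greedy least-cost allocation for a balanced transportation problem."""
    supply, demand = supply[:], demand[:]
    ship = [[0.0] * len(demand) for _ in supply]
    cells = sorted((cost[i][j], i, j)
                   for i in range(len(supply)) for j in range(len(demand)))
    for _, i, j in cells:                 # fill cheapest cells first
        q = min(supply[i], demand[j])
        ship[i][j] = q
        supply[i] -= q
        demand[j] -= q
    return ship

# Illustrative trapezoidal supplies, demands, and unit costs (balanced).
supply = [graded_mean((18, 20, 22, 24)), graded_mean((28, 30, 32, 34))]
demand = [graded_mean((23, 25, 27, 29)), graded_mean((23, 25, 27, 29))]
cost = [[graded_mean((1, 2, 3, 4)), graded_mean((3, 4, 5, 6))],
        [graded_mean((2, 3, 4, 5)), graded_mean((1, 2, 3, 4))]]

plan = least_cost(supply, demand, cost)
total = sum(plan[i][j] * cost[i][j] for i in range(2) for j in range(2))
print(total)  # 135.0 for this data
```

The least-cost heuristic only yields a feasible (not necessarily optimal) plan; a real implementation would solve the crisp problem as a linear program, as the paper does.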
Antonello, M.; Baibussinov, B.; Benetti, P.; ...
2013-01-15
Liquid Argon Time Projection Chamber (LAr TPC) detectors offer charged-particle imaging capability with remarkable spatial resolution. Precise event reconstruction procedures are critical in order to fully exploit the potential of this technology. In this paper we present a new, general approach to 3D reconstruction for the LAr TPC, with a practical application to track reconstruction. The efficiency of the method is evaluated on a sample of simulated tracks. We also present the application of the method to the analysis of stopping-particle tracks collected during ICARUS T600 detector operation with the CNGS neutrino beam.