Computer-aided linear-circuit design.
NASA Technical Reports Server (NTRS)
Penfield, P.
1971-01-01
Usually computer-aided design (CAD) refers to programs that analyze circuits conceived by the circuit designer. Among the services such programs should perform are direct network synthesis, analysis, optimization of network parameters, formatting, storage of miscellaneous data, and related calculations. The program should be embedded in a general-purpose conversational language such as BASIC, JOSS, or APL. Such a program is MARTHA, a general-purpose linear-circuit analyzer embedded in APL.
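At its core, a linear-circuit analyzer like MARTHA assembles and solves a nodal system G v = i. A minimal sketch in Python with illustrative component values (not a MARTHA or APL example):

```python
import numpy as np

# Nodal analysis of a two-node resistive divider driven by a 10 mA
# current source into node 1: assemble the conductance matrix G and
# solve G v = i for the node voltages.
R1, R2, R3 = 100.0, 200.0, 300.0       # ohms (illustrative values)
I = 0.01                               # amperes

G = np.array([[1/R1 + 1/R2, -1/R2],
              [-1/R2,        1/R2 + 1/R3]])
i = np.array([I, 0.0])
v = np.linalg.solve(G, i)
print(v)                               # node voltages in volts
```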
NASA Technical Reports Server (NTRS)
1980-01-01
The MATHPAC image-analysis library is a collection of general-purpose mathematical and statistical routines and special-purpose data-analysis and pattern-recognition routines for image analysis. The library consists of Linear Algebra, Optimization, Statistical-Summary, Densities and Distribution, Regression, and Statistical-Test packages.
NASA Technical Reports Server (NTRS)
Utku, S.
1969-01-01
A general-purpose digital computer program for the in-core solution of linear equilibrium problems of structural mechanics is documented. The program requires minimal input for the description of the problem. The solution is obtained by means of the displacement method and the finite element technique. Almost any geometry and structure may be handled because of the availability of linear, triangular, quadrilateral, tetrahedral, hexahedral, conical, triangular torus, and quadrilateral torus elements. The assumption of a piecewise linear deflection distribution ensures monotonic convergence of the deflections from the stiffer side with decreasing mesh size. Stresses are obtained from least-squares best-fit strain tensors at the mesh points where the deflections are given. The selection of local coordinate systems, whenever necessary, is automatic. Core memory is used efficiently by means of dynamic memory allocation, an optional mesh-point relabelling scheme, and imposition of the boundary conditions at assembly time.
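The displacement method with linear elements reduces, in the simplest case, to assembling element stiffness matrices into K and solving K u = f. A toy 1D sketch (a uniform bar with hypothetical values, not one of the program's element types):

```python
import numpy as np

# 1D bar of stiffness EA and length L, fixed at x = 0, axial point load
# P at the free end, discretized into n linear elements. The exact tip
# deflection is P*L/(E*A).
E, A, L, P, n = 1.0, 1.0, 1.0, 1.0, 8
h = L / n
k = (E * A / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness

K = np.zeros((n + 1, n + 1))
for e in range(n):                 # assemble global stiffness matrix
    K[e:e + 2, e:e + 2] += k

f = np.zeros(n + 1)
f[-1] = P                          # end load

u = np.zeros(n + 1)                # impose u(0) = 0, solve reduced system
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])
print(u[-1])                       # tip deflection, P*L/(E*A) = 1.0
```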
Analyzing longitudinal data with the linear mixed models procedure in SPSS.
West, Brady T
2009-09-01
Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
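As a hedged illustration of the same model class outside SPSS (variable names and simulation parameters are invented here; statsmodels' MixedLM plays the role of SPSS's MIXED procedure), a random-intercept LMM can be fit to simulated longitudinal data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate 30 subjects x 5 time points with a random intercept per
# subject, then fit y ~ time with subject-level random intercepts.
rng = np.random.default_rng(0)
n_subj, n_time = 30, 5
subj = np.repeat(np.arange(n_subj), n_time)
time = np.tile(np.arange(n_time), n_subj)
u = rng.normal(0.0, 1.0, n_subj)                    # random intercepts
y = 2.0 + 0.5 * time + u[subj] + rng.normal(0.0, 0.5, subj.size)
df = pd.DataFrame({"y": y, "time": time, "subj": subj})

fit = smf.mixedlm("y ~ time", df, groups=df["subj"]).fit()
print(fit.params["time"])                           # slope estimate, near 0.5
```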
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power-fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6% to 23.8%) and 14.6% (range: -7.3% to 27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8% to 40.3%) and 13.1% (range: -1.5% to 52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1% to 20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
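The first of the three models is simple to reproduce in outline: fit a constant rate of volume change by least squares and predict daily volumes from it (synthetic numbers below, not the study's data):

```python
import numpy as np

# Tumor volume shrinking at a roughly constant rate over treatment days,
# fit by ordinary least squares.
days = np.arange(0, 35, 5, dtype=float)
vol = 40.0 - 0.6 * days + np.array([0.3, -0.2, 0.1, 0.0, -0.4, 0.2, 0.1])

slope, intercept = np.polyfit(days, vol, 1)
pred = intercept + slope * days          # predicted daily volumes
print(round(slope, 2))                   # close to the true rate of -0.6
```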
NASA Astrophysics Data System (ADS)
Tian, Wenli; Cao, Chengxuan
2017-03-01
A generalized interval fuzzy mixed integer programming model is proposed for the multimodal freight transportation problem under uncertainty, in which the optimal mode of transport and the optimal amount of each type of freight transported through each path need to be decided. For practical purposes, three mathematical methods, i.e. the interval ranking method, fuzzy linear programming method and linear weighted summation method, are applied to obtain equivalents of constraints and parameters, and then a fuzzy expected value model is presented. A heuristic algorithm based on a greedy criterion and the linear relaxation algorithm are designed to solve the model.
A primer for biomedical scientists on how to execute model II linear regression analysis.
Ludbrook, John
2012-04-01
1. There are two very different ways of executing linear regression analysis. One is Model I, when the x-values are fixed by the experimenter. The other is Model II, in which the x-values are free to vary and are subject to error. 2. I have received numerous complaints from biomedical scientists that they have great difficulty in executing Model II linear regression analysis. This may explain the results of a Google Scholar search, which showed that the authors of articles in journals of physiology, pharmacology and biochemistry rarely use Model II regression analysis. 3. I repeat my previous arguments in favour of using least products linear regression analysis for Model II regressions. I review three methods for executing ordinary least products (OLP) and weighted least products (WLP) regression analysis: (i) scientific calculator and/or computer spreadsheet; (ii) specific purpose computer programs; and (iii) general purpose computer programs. 4. Using a scientific calculator and/or computer spreadsheet, it is easy to obtain correct values for OLP slope and intercept, but the corresponding 95% confidence intervals (CI) are inaccurate. 5. Using specific purpose computer programs, the freeware computer program smatr gives the correct OLP regression coefficients and obtains 95% CI by bootstrapping. In addition, smatr can be used to compare the slopes of OLP lines. 6. When using general purpose computer programs, I recommend the commercial programs systat and Statistica for those who regularly undertake linear regression analysis and I give step-by-step instructions in the Supplementary Information as to how to use loss functions. © 2011 The Author. Clinical and Experimental Pharmacology and Physiology. © 2011 Blackwell Publishing Asia Pty Ltd.
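The OLP point estimates mentioned in point 4 are straightforward to compute in a short script: the slope has magnitude sd(y)/sd(x), signed by the correlation, and the line passes through the means. A sketch (as the article notes, the corresponding 95% CIs need bootstrapping, e.g. as smatr does):

```python
import numpy as np

# Ordinary least products (geometric-mean, Model II) regression:
# slope = sign(r) * sd(y) / sd(x); intercept puts the line through
# the bivariate mean.
def olp(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * y.std(ddof=1) / x.std(ddof=1)
    return slope, y.mean() - slope * x.mean()

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
slope, intercept = olp(x, y)
print(round(slope, 2))
```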
NASA Technical Reports Server (NTRS)
Egebrecht, R. A.; Thorbjornsen, A. R.
1967-01-01
Digital computer programs determine steady-state performance characteristics of active and passive linear circuits. The ac analysis program solves for the basic circuit parameters. The compiler program solves for these circuit parameters and, in addition, provides a more versatile tool by allowing the user to perform mathematical and logical operations.
Factor Scores, Structure and Communality Coefficients: A Primer
ERIC Educational Resources Information Center
Odum, Mary
2011-01-01
The purpose of this paper is to present an easy-to-understand primer on three important concepts of factor analysis: factor scores, structure coefficients, and communality coefficients. Given that statistical analyses are part of a global general linear model (GLM) and utilize weights as an integral part of analyses (Thompson, 2006;…
Development and validation of a general purpose linearization program for rigid aircraft models
NASA Technical Reports Server (NTRS)
Duke, E. L.; Antoniewicz, R. F.
1985-01-01
A FORTRAN program that provides the user with a powerful and flexible tool for the linearization of aircraft models is discussed. The program LINEAR numerically determines a linear system model using nonlinear equations of motion and a user-supplied, nonlinear aerodynamic model. The system model determined by LINEAR consists of matrices for both the state and observation equations. The program has been designed to allow easy selection and definition of the state, control, and observation variables to be used in a particular model. Also included in the report is a comparison of linear and nonlinear models for a high-performance aircraft.
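The numerical core of such a tool can be sketched as finite-difference evaluation of the Jacobian of the nonlinear state equations about a trim point. The example below uses a pendulum as a stand-in for an aircraft model (this is only the general idea, not LINEAR's documented algorithm):

```python
import numpy as np

# State matrix A = df/dx of nonlinear dynamics x' = f(x), formed by
# central differences about a trim point x0.
def jacobian(f, x0, eps=1e-6):
    n = x0.size
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = eps
        A[:, j] = (f(x0 + dx) - f(x0 - dx)) / (2 * eps)
    return A

g, L = 9.81, 1.0
f = lambda x: np.array([x[1], -(g / L) * np.sin(x[0])])
A = jacobian(f, np.zeros(2))       # linearize about the hanging equilibrium
print(A)                           # [[0, 1], [-9.81, 0]]
```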
NASA Technical Reports Server (NTRS)
Kibler, K. S.; Mcdaniel, G. A.
1981-01-01
A digital local linearization technique was used to solve a system of stiff differential equations which simulates a magnetic bearing assembly. The results prove the technique to be accurate, stable, and efficient when compared to a general-purpose variable-order Adams method with a stiff option.
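For a linear (or locally linearized) system x' = Ax, one local-linearization step advances the state with the matrix exponential, which stays stable at step sizes that would break an explicit method on a stiff problem. A sketch with an artificial stiff system (not the magnetic-bearing model):

```python
import numpy as np
from scipy.linalg import expm

# Stiff linear system x' = A x with widely separated rates; the update
# x_{k+1} = expm(A*h) @ x_k follows the linearized dynamics exactly and
# remains stable at a step size h far above the explicit-Euler limit.
A = np.array([[-1000.0, 0.0], [0.0, -1.0]])
h = 0.1
Phi = expm(A * h)                 # state-transition matrix over one step
x = np.array([1.0, 1.0])
for _ in range(10):               # advance to t = 1
    x = Phi @ x
print(x)                          # approximately [0, exp(-1)]
```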
Differentiation of Students' Reasoning on Linear and Quadratic Geometric Number Patterns
ERIC Educational Resources Information Center
Lin, Fou-Lai; Yang, Kai-Lin
2004-01-01
There are two purposes in this study. One is to compare how 7th and 8th graders reason on linear and quadratic geometric number patterns when they have not learned this kind of tasks in school. The other is to explore the hierarchical relations among the four components of reasoning on geometric number patterns: understanding, generalizing,…
40 CFR 51.1009 - Reasonable further progress (RFP) requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... milestone year, emissions will be at a level consistent with generally linear progress in reducing emissions... plan are derived. (6) For purposes of establishing motor vehicle emissions budgets for transportation...
40 CFR 51.1009 - Reasonable further progress (RFP) requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... milestone year, emissions will be at a level consistent with generally linear progress in reducing emissions... plan are derived. (6) For purposes of establishing motor vehicle emissions budgets for transportation...
40 CFR 51.1009 - Reasonable further progress (RFP) requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
... milestone year, emissions will be at a level consistent with generally linear progress in reducing emissions... plan are derived. (6) For purposes of establishing motor vehicle emissions budgets for transportation...
40 CFR 51.1009 - Reasonable further progress (RFP) requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
... milestone year, emissions will be at a level consistent with generally linear progress in reducing emissions... plan are derived. (6) For purposes of establishing motor vehicle emissions budgets for transportation...
Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties
Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon
2014-01-01
Purpose: The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency during anterior-posterior stretching. Method: Three materially linear and three materially nonlinear models were created and stretched up to 10 mm in 1 mm increments. Phonation onset pressure (Pon) and fundamental frequency (F0) at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1 mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Results: Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Conclusions: Nonlinear synthetic models appear to more accurately represent the human vocal folds than linear models, especially with respect to F0 response. PMID:22271874
A Multilevel Study of Students' Motivations of Studying Accounting: Implications for Employers
ERIC Educational Resources Information Center
Law, Philip; Yuen, Desmond
2012-01-01
Purpose: The purpose of this study is to examine the influence of factors affecting students' choice of accounting as a study major in Hong Kong. Design/methodology/approach: Multinomial logistic regression and Hierarchical Generalized Linear Modeling (HGLM) are used to analyze the survey data for the level one and level two data, which is the…
NASA Astrophysics Data System (ADS)
Ikelle, Luc T.; Osen, Are; Amundsen, Lasse; Shen, Yunqing
2004-12-01
The classical linear solutions to the problem of multiple attenuation, like predictive deconvolution, τ-p filtering, or F-K filtering, are generally fast, stable, and robust compared to non-linear solutions, which are generally either iterative or in the form of a series with an infinite number of terms. These qualities have made the linear solutions more attractive to seismic data-processing practitioners. However, most linear solutions, including predictive deconvolution or F-K filtering, contain severe assumptions about the model of the subsurface and the class of free-surface multiples they can attenuate. These assumptions limit their usefulness. In a recent paper, we described an exception to this assertion for OBS data. We showed in that paper that a linear and non-iterative solution to the problem of attenuating free-surface multiples which is as accurate as iterative non-linear solutions can be constructed for OBS data. We here present a similar linear and non-iterative solution for attenuating free-surface multiples in towed-streamer data. For most practical purposes, this linear solution is as accurate as the non-linear ones.
MSC products for the simulation of tire behavior
NASA Technical Reports Server (NTRS)
Muskivitch, John C.
1995-01-01
The modeling of tires and the simulation of tire behavior are complex problems. The MacNeal-Schwendler Corporation (MSC) has a number of finite element analysis products that can be used to address the complexities of tire modeling and simulation. While there are many similarities between the products, each product has a number of capabilities that uniquely enable it to be used for a specific aspect of tire behavior. This paper discusses the following programs: (1) MSC/NASTRAN - general purpose finite element program for linear and nonlinear static and dynamic analysis; (2) MSC/ABAQUS - nonlinear statics and dynamics finite element program; (3) MSC/PATRAN AFEA (Advanced Finite Element Analysis) - general purpose finite element program with a subset of linear and nonlinear static and dynamic analysis capabilities with an integrated version of MSC/PATRAN for pre- and post-processing; and (4) MSC/DYTRAN - nonlinear explicit transient dynamics finite element program.
NASA Technical Reports Server (NTRS)
Cheyney, H., III; Arking, A.
1976-01-01
The equations of radiative transfer in anisotropically scattering media are reformulated as linear operator equations in a single independent variable. The resulting equations are suitable for solution by a variety of standard mathematical techniques. The operators appearing in the resulting equations are in general nonsymmetric; however, it is shown that every bounded linear operator equation can be embedded in a symmetric linear operator equation and a variational solution can be obtained in a straightforward way. For purposes of demonstration, a Rayleigh-Ritz variational method is applied to three problems involving simple phase functions. It is to be noted that the variational technique demonstrated is of general applicability and permits simple solutions for a wide range of otherwise difficult mathematical problems in physics.
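A Rayleigh-Ritz computation in this spirit can be sketched on a toy self-adjoint problem, -u'' = 1 on (0,1) with u(0) = u(1) = 0, expanding over a sine basis so that each Ritz coefficient is (f, phi_n) / a(phi_n, phi_n) (illustrative only; the paper's operators are radiative-transfer ones):

```python
import numpy as np

# Rayleigh-Ritz for -u'' = 1, u(0) = u(1) = 0, over the sin(n*pi*x) basis.
# a(phi_n, phi_n) = (n*pi)^2 / 2; (f, phi_n) is computed numerically, and
# even modes integrate to zero against f = 1, so only odd n contribute.
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
u = np.zeros_like(x)
for n in range(1, 20, 2):
    phi = np.sin(n * np.pi * x)
    c = (phi.sum() * dx) / ((n * np.pi) ** 2 / 2)   # Ritz coefficient
    u += c * phi
print(u[1000])        # midpoint value; exact solution x(1-x)/2 gives 0.125
```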
A Generalized Acoustic Analogy
NASA Technical Reports Server (NTRS)
Goldstein, M. E.
2003-01-01
The purpose of this article is to show that the Navier-Stokes equations can be rewritten as a set of linearized inhomogeneous Euler equations (in convective form) with source terms that are exactly the same as those that would result from externally imposed shear stress and energy flux perturbations. These results are used to develop a mathematical basis for some existing and potential new jet noise models by appropriately choosing the base flow about which the linearization is carried out.
An acceleration framework for synthetic aperture radar algorithms
NASA Astrophysics Data System (ADS)
Kim, Youngsoo; Gloster, Clay S.; Alexander, Winser E.
2017-04-01
Algorithms for radar signal processing, such as synthetic aperture radar (SAR), are computationally intensive and require considerable execution time on a general purpose processor. Reconfigurable logic can be used to off-load the primary computational kernel onto a custom computing machine in order to reduce execution time by an order of magnitude as compared to kernel execution on a general purpose processor. Specifically, Field Programmable Gate Arrays (FPGAs) can be used to accelerate these kernels using hardware-based custom logic implementations. In this paper, we demonstrate a framework for algorithm acceleration. We used SAR as a case study to illustrate the potential for algorithm acceleration offered by FPGAs. Initially, we profiled the SAR algorithm and implemented a homomorphic filter using a hardware implementation of the natural logarithm. Experimental results show a linear speedup from adding reasonably small processing elements in the FPGA, as opposed to using a software implementation running on a typical general purpose processor.
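The homomorphic-filter idea in the profiled kernel can be sketched in a few lines: multiplicative speckle becomes additive under the natural logarithm, where a simple smoother separates it from the underlying reflectivity (synthetic data and an assumed gamma speckle model; the paper's hardware implements the log itself):

```python
import numpy as np

# Constant reflectivity of 10 corrupted by multiplicative gamma speckle
# with unit mean; filter in the log domain with a moving average.
rng = np.random.default_rng(1)
signal = np.ones(256) * 10.0
speckle = rng.gamma(shape=50.0, scale=1.0 / 50.0, size=256)
observed = signal * speckle

log_obs = np.log(observed)                  # natural-log kernel
kernel = np.ones(31) / 31
smoothed = np.convolve(log_obs, kernel, mode="same")
estimate = np.exp(smoothed)                 # back to the linear domain
print(estimate[128])                        # near the true value 10.0
```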
Finite element modelling of non-linear magnetic circuits using Cosmic NASTRAN
NASA Technical Reports Server (NTRS)
Sheerer, T. J.
1986-01-01
The general purpose finite element program COSMIC NASTRAN currently has the ability to model magnetic circuits with constant permeabilities. An approach was developed which, through small modifications to the program, allows modelling of non-linear magnetic devices including soft magnetic materials, permanent magnets and coils. Use of the NASTRAN code resulted in output which can be used for subsequent mechanical analysis using a variation of the same computer model. Test problems were found to produce theoretically verifiable results.
DOT National Transportation Integrated Search
1982-06-01
The purpose of this study was to apply mathematical procedures to the Federal Aviation Administration (FAA) pilot medical data to examine the feasibility of devising a linear numbering system such that (1) the cumulative probability distribution func...
Fuzzy branching temporal logic.
Moon, Seong-ick; Lee, Kwang H; Lee, Doheon
2004-04-01
Intelligent systems require a systematic way to represent and handle temporal information containing uncertainty. In particular, a logical framework is needed that can represent uncertain temporal information and its relationships with logical formulae. Fuzzy linear temporal logic (FLTL), a generalization of propositional linear temporal logic (PLTL) with fuzzy temporal events and fuzzy temporal states defined on a linear time model, was previously proposed for this purpose. However, many systems are best represented by branching time models in which each state can have more than one possible future path. In this paper, fuzzy branching temporal logic (FBTL) is proposed to address this problem. FBTL adopts and generalizes computation tree logic (CTL*), which is a classical branching temporal logic. The temporal model of FBTL is capable of representing fuzzy temporal events and fuzzy temporal states, and the order relation among them is represented as a directed graph. The utility of FBTL is demonstrated using a fuzzy job shop scheduling problem as an example.
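On a finite linear trace of truth degrees in [0, 1], fuzzy temporal operators in the spirit of FLTL can be sketched with max/min as fuzzy disjunction/conjunction (illustrative semantics, not the paper's exact definitions, which also cover branching models):

```python
# Fuzzy temporal operators over a finite trace of truth degrees:
# "eventually" is a max over suffixes, "always" a min, and "p until q"
# takes the best point where q holds with p holding at every step before.
def eventually(trace):
    return max(trace)

def always(trace):
    return min(trace)

def until(p, q):
    return max(min([q[k]] + p[:k]) for k in range(len(q)))

p = [0.9, 0.8, 0.7, 0.2]
q = [0.1, 0.3, 0.6, 0.9]
print(eventually(q), always(p), until(p, q))   # 0.9 0.2 0.7
```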
Numerical Technology for Large-Scale Computational Electromagnetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharpe, R; Champagne, N; White, D
The key bottleneck of implicit computational electromagnetics tools for large complex geometries is the solution of the resulting linear system of equations. The goal of this effort was to research and develop critical numerical technology that alleviates this bottleneck for large-scale computational electromagnetics (CEM). The mathematical operators and numerical formulations used in this arena of CEM yield linear equations that are complex valued, unstructured, and indefinite. Also, simultaneously applying multiple mathematical modeling formulations to different portions of a complex problem (hybrid formulations) results in a mixed structure linear system, further increasing the computational difficulty. Typically, these hybrid linear systems are solved using a direct solution method, which was acceptable for Cray-class machines but does not scale adequately for ASCI-class machines. Additionally, LLNL's previously existing linear solvers were not well suited for the linear systems that are created by hybrid implicit CEM codes. Hence, a new approach was required to make effective use of ASCI-class computing platforms and to enable the next generation design capabilities. Multiple approaches were investigated, including the latest sparse-direct methods developed by our ASCI collaborators. In addition, approaches that combine domain decomposition (or matrix partitioning) with general-purpose iterative methods and special purpose pre-conditioners were investigated. Special-purpose pre-conditioners that take advantage of the structure of the matrix were adapted and developed based on intimate knowledge of the matrix properties. Finally, new operator formulations were developed that radically improve the conditioning of the resulting linear systems thus greatly reducing solution time. The goal was to enable the solution of CEM problems that are 10 to 100 times larger than our previous capability.
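The combination of a general-purpose iterative method with a special-purpose preconditioner can be sketched with SciPy on a toy complex-valued, non-symmetric sparse system (a stand-in for a real CEM discretization; the project's actual preconditioners exploited matrix structure far more deeply):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Toy complex tridiagonal system; incomplete LU serves as the
# preconditioner for GMRES.
n = 200
main = (2.0 - 0.5j) * np.ones(n)
off = (-1.0 + 0.0j) * np.ones(n - 1)
A = sp.diags([off, main, off], [-1, 0, 1], format="csc")
b = np.ones(n, dtype=complex)

ilu = spilu(A)                       # incomplete LU factors
M = LinearOperator(A.shape, matvec=ilu.solve, dtype=complex)
x, info = gmres(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged
```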
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applicable to any type of multivariate public-key cryptosystems for the purpose of enhancing their security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which is introduced by our previous work to explain the notion of the PH matrix method in general in an illustrative manner and not for a practical use to enhance the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has the substantial advantage over the plus method with respect to the security enhancement. In the linear PH matrix method with random variables, the three matrices, including the PH matrix, play a central role in the secret-key and public-key. In this paper, we clarify how to generate these matrices and thus present two probabilistic polynomial-time algorithms to generate these matrices. In particular, the second one has a concise form, and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
A Multilevel Study of the Role of Environment in Adolescent Substance Use
ERIC Educational Resources Information Center
Steen, Julie A.
2010-01-01
The purpose of this study is to assess the relationships between county-level characteristics and adolescent use of alcohol, cigarettes, and marijuana. The study consisted of a hierarchical generalized linear analysis of secondary data from the Florida Youth Substance Abuse Survey. Variables on the county level included the percent of adolescents…
NASA Astrophysics Data System (ADS)
Tisdell, C. C.
2017-08-01
Solution methods to exact differential equations via integrating factors have a rich history dating back to Euler (1740), and the ideas enjoy applications to thermodynamics and electromagnetism. Recently, Azevedo and Valentino presented an analysis of the generalized Bernoulli equation, constructing a general solution by linearizing the problem through a substitution. The purpose of this note is to present an alternative approach using 'exact methods', illustrating that a substitution and linearization of the problem is unnecessary. The ideas may be seen as forming a complementary and arguably simpler approach to that of Azevedo and Valentino, with the potential to be assimilated and adapted to the pedagogical needs of those learning and teaching exact differential equations in schools, colleges, universities and polytechnics. We illustrate how to apply the ideas through an analysis of the Gompertz equation, which is of interest in biomathematical models of tumour growth.
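Whatever route one takes to it (substitution or exact methods), the standard solution of the Gompertz equation dN/dt = a N ln(K/N) is N(t) = K (N0/K)^(exp(-a t)), and it is easy to verify numerically (parameter values below are arbitrary):

```python
import numpy as np

# Check that N(t) = K * (N0/K) ** exp(-a*t) satisfies the Gompertz
# equation dN/dt = a * N * ln(K/N), by comparing a central-difference
# derivative against the right-hand side on a grid of times.
a, K, N0 = 0.7, 100.0, 5.0
N = lambda t: K * (N0 / K) ** np.exp(-a * t)

t = np.linspace(0.1, 5.0, 50)
h = 1e-6
dNdt = (N(t + h) - N(t - h)) / (2 * h)         # numerical derivative
rhs = a * N(t) * np.log(K / N(t))
print(np.max(np.abs(dNdt - rhs)))              # close to zero
```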
NASA Technical Reports Server (NTRS)
Klumpp, A. R.
1994-01-01
Ten families of subprograms are bundled together for the General-Purpose Ada Packages. The families bring to Ada many features from HAL/S, PL/I, FORTRAN, and other languages. These families are: string subprograms (INDEX, TRIM, LOAD, etc.); scalar subprograms (MAX, MIN, REM, etc.); array subprograms (MAX, MIN, PROD, SUM, GET, and PUT); numerical subprograms (EXP, CUBIC, etc.); service subprograms (DATE_TIME function, etc.); Linear Algebra II; Runge-Kutta integrators; and three text I/O families of packages. In two cases, a family consists of a single non-generic package. In all other cases, a family comprises a generic package and its instances for a selected group of scalar types. All generic packages are designed to be easily instantiated for the types declared in the user facility. The linear algebra package is LINRAG2. This package includes subprograms supplementing those in NPO-17985, An Ada Linear Algebra Package Modeled After HAL/S (LINRAG). Please note that LINRAG2 cannot be compiled without LINRAG. Most packages have widespread applicability, although some are oriented for avionics applications. All are designed to facilitate writing new software in Ada. Several of the packages use conventions introduced by other programming languages. A package of string subprograms is based on HAL/S (a language designed for the avionics software in the Space Shuttle) and PL/I. Packages of scalar and array subprograms are taken from HAL/S or generalized current Ada subprograms. A package of Runge-Kutta integrators is patterned after a built-in MAC (MIT Algebraic Compiler) integrator. Those packages modeled after HAL/S make it easy to translate existing HAL/S software to Ada. The General-Purpose Ada Packages program source code is available on two 360K 5.25" MS-DOS format diskettes. The software was developed using VAX Ada v1.5 under DEC VMS v4.5. It should be portable to any validated Ada compiler and it should execute either interactively or in batch. 
The largest package requires 205K of main memory on a DEC VAX running VMS. The software was developed in 1989, and is a copyrighted work with all copyright vested in NASA.
Caçola, Priscila M; Pant, Mohan D
2014-10-01
The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.
Artificial Neural Network versus Linear Models Forecasting Doha Stock Market
NASA Astrophysics Data System (ADS)
Yousif, Adil; Elfaki, Faiz
2017-12-01
The purpose of this study is to determine the instability of the Doha stock market and develop forecasting models. Linear time series models are used and compared with a nonlinear Artificial Neural Network (ANN), namely the Multilayer Perceptron (MLP) technique. The study aims to establish the most useful model based on daily and monthly data collected from the Qatar exchange for the period from January 2007 to January 2015. Models are proposed for the general index of the Qatar stock exchange and for use in several other sectors. With the help of these models, the Doha stock market index and various other sectors were predicted. The study was conducted using various time series techniques to analyze data trends and produce appropriate results. After applying several models, such as the quadratic trend model, the double exponential smoothing model, and ARIMA, it was concluded that ARIMA (2,2) was the most suitable linear model for the daily general index. However, the ANN model was found to be more accurate than the time series models.
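One of the linear benchmarks, double (Holt) exponential smoothing, is short enough to sketch directly (smoothing coefficients and data are illustrative, not the study's):

```python
# Double (Holt) exponential smoothing: maintain a level and a trend,
# update both on each observation, and forecast one step ahead as
# level + trend.
def holt(series, alpha=0.5, beta=0.3):
    level, trend = series[1], series[1] - series[0]
    for y in series[2:]:
        last = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - last) + (1 - beta) * trend
    return level + trend            # one-step-ahead forecast

index = [100.0, 102.0, 104.0, 106.0, 108.0, 110.0]
print(holt(index))                  # a perfect linear trend forecasts 112.0
```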
General-Purpose Software For Computer Graphics
NASA Technical Reports Server (NTRS)
Rogers, Joseph E.
1992-01-01
NASA Device Independent Graphics Library (NASADIG) is a general-purpose computer-graphics package for computer-based engineering and management applications which gives the user the opportunity to translate data into effective graphical displays for presentation. Features include two- and three-dimensional plotting, spline and polynomial interpolation, control of blanking of areas, multiple log and/or linear axes, control of legends and text, control of thicknesses of curves, and multiple text fonts. Included are subroutines for definition of areas and axes of plots; setup and display of text; blanking of areas; setup of style, interpolation, and plotting of lines; control of patterns and of shading of colors; control of legends, blocks of text, and characters; initialization of devices; and setting of mixed alphabets. Written in FORTRAN 77.
ERIC Educational Resources Information Center
Reynolds, Matthew R.
2013-01-01
The linear loadings of intelligence test composite scores on a general factor ("g") have been investigated recently in factor analytic studies. Spearman's law of diminishing returns (SLODR), however, implies that the "g" loadings of test scores likely decrease in magnitude as g increases, or they are nonlinear. The purpose of…
Digital receiver study and implementation
NASA Technical Reports Server (NTRS)
Fogle, D. A.; Lee, G. M.; Massey, J. C.
1972-01-01
Computer software was developed which makes it possible to use any general purpose computer with A/D conversion capability as a PSK receiver for low data rate telemetry processing. Carrier tracking, bit synchronization, and matched filter detection are all performed digitally. To aid in the implementation of optimum computer processors, a study of general digital processing techniques was performed which emphasized various techniques for digitizing general analog systems. In particular, the phase-locked loop was extensively analyzed as a typical non-linear communication element. Bayesian estimation techniques for PSK demodulation were studied. A hardware implementation of the digital Costas loop was developed.
Performance Analysis and Design Synthesis (PADS) computer program. Volume 3: User manual
NASA Technical Reports Server (NTRS)
1972-01-01
The two-fold purpose of the Performance Analysis and Design Synthesis (PADS) computer program is discussed. The program can size launch vehicles in conjunction with calculus-of-variations optimal trajectories and can also be used as a general purpose branched trajectory optimization program. For trajectory optimization alone or with sizing, PADS has two trajectory modules. The first trajectory module uses the method of steepest descent. The second module uses the method of quasi-linearization, which requires a starting solution from the first trajectory module.
Theoretical and software considerations for nonlinear dynamic analysis
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1983-01-01
In the finite element method for structural analysis, it is generally necessary to discretize the structural model into a very large number of elements to accurately evaluate displacements, strains, and stresses. As the complexity of the model increases, the number of degrees of freedom can easily exceed the capacity of present-day software systems. Improvements to structural analysis software, including more efficient use of existing hardware and improved structural modeling techniques, are discussed. One modeling technique that is used successfully in static linear and nonlinear analysis is multilevel substructuring. This research extends the use of multilevel substructure modeling to include dynamic analysis and defines the requirements for a general-purpose software system capable of efficient nonlinear dynamic analysis. The multilevel substructuring technique is presented, the analytical formulations and computational procedures for dynamic analysis and nonlinear mechanics are reviewed, and an approach to the design and implementation of a general-purpose structural software system is presented.
Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S
2015-09-01
Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are demonstrated and our method is applied to the KIRBY21 test-retest dataset.
Aerial photography : obtaining a true perspective
NASA Technical Reports Server (NTRS)
1923-01-01
A demonstration was given within the last few days at the British Museum by Mr. J. W. Gordon, author of "Generalized Linear Perspective" (Constable and Co.), a work describing a newly-worked-out system by which photographs can be made available for the purpose of exactly recording the dimensions of the objects photographed even when the objects themselves are presented foreshortened in the photograph.
A Comparison of Two Approaches for Measuring Educational Growth from CTBS and P-ACT+ Scores.
ERIC Educational Resources Information Center
Noble, Julie; Sawyer, Richard
The purpose of the study was to compare two regression-based approaches for measuring educational effectiveness in Tennessee high schools: the mean residual approach (MR), and a more general linear models (LM) approach. Data were obtained from a sample of 1,011 students who were enrolled in 48 high schools, and who had taken the Comprehensive…
ERIC Educational Resources Information Center
Zhao, Jing
2012-01-01
The purpose of the study is to further investigate the validity of instruments used for collecting preservice teachers' perceptions of self-efficacy adapting the three-level IRT model described in Cheong's study (2006). The focus of the present study is to investigate whether the polytomously-scored items on the preservice teachers' self-efficacy…
NASA Astrophysics Data System (ADS)
Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna
2018-03-01
The purpose of this study was to improve the accuracy of three-axis vertical CNC milling machines through a general approach based on mathematical modeling of machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor both during the manufacturing process and during the assembly phase, and which must be controlled in order to build high-accuracy machines. The accuracy of the three-axis vertical milling machine is improved by identifying the geometric errors and the error position parameters in the machine tool through mathematical modeling. The geometric error in the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters, and three perpendicularity error parameters. The mathematical model accounts for the alignment and angular errors in the components that support the machine motion, namely the linear guideways and linear motion elements. The purpose of this modeling approach is to identify geometric errors so that the model can serve as a reference during the design, assembly, and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools can illustrate the relationship between alignment error, position, and angle on the linear guideways of three-axis vertical milling machines.
Frequency response of synthetic vocal fold models with linear and nonlinear material properties.
Shaw, Stephanie M; Thomson, Scott L; Dromey, Christopher; Smith, Simeon
2012-10-01
The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F0) during anterior-posterior stretching. Three materially linear and 3 materially nonlinear models were created and stretched up to 10 mm in 1-mm increments. Phonation onset pressure (Pon) and F0 at Pon were recorded for each length. Measurements were repeated as the models were relaxed in 1-mm increments back to their resting lengths, and tensile tests were conducted to determine the stress-strain responses of linear versus nonlinear models. Nonlinear models demonstrated a more substantial frequency response than did linear models and a more predictable pattern of F0 increase with respect to increasing length (although range was inconsistent across models). Pon generally increased with increasing vocal fold length for nonlinear models, whereas for linear models, Pon decreased with increasing length. Nonlinear synthetic models appear to more accurately represent the human vocal folds than do linear models, especially with respect to F0 response.
Versey, Nathan G; Gore, Christopher J; Halson, Shona L; Plowman, Jamie S; Dawson, Brian T
2011-09-01
We determined the validity and reliability of heat flow thermistors, flexible thermocouple probes and general purpose thermistors compared with a calibrated reference thermometer in a stirred water bath. Validity (bias) was defined as the difference between the observed and criterion values, and reliability as the repeatability (standard deviation or typical error) of measurement. Data were logged every 5 s for 10 min at water temperatures of 14, 26 and 38 °C for ten heat flow thermistors and 24 general purpose thermistors, and at 35, 38 and 41 °C for eight flexible thermocouple probes. Statistical analyses were conducted using spreadsheets for validity and reliability, where an acceptable bias was set at ±0.1 °C. None of the heat flow thermistors, 17% of the flexible thermocouple probes and 71% of the general purpose thermistors met the validity criterion for temperature. The inter-probe reliabilities were 0.03 °C for heat flow thermistors, 0.04 °C for flexible thermocouple probes and 0.09 °C for general purpose thermistors. The within trial intra-probe reliability of all three temperature probes was 0.01 °C. The results suggest that these temperature sensors should be calibrated individually before use at relevant temperatures and the raw data corrected using individual linear regression equations.
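The individual calibration and linear-regression correction recommended above can be sketched as follows. The probe readings are hypothetical, chosen to mimic a probe with a small systematic bias at the study's three bath temperatures:

```python
def calibration_fit(raw, reference):
    """Least-squares line mapping a probe's raw readings onto the reference
    thermometer; applying the line corrects the probe's systematic bias."""
    n = len(raw)
    xm = sum(raw) / n
    ym = sum(reference) / n
    slope = sum((x - xm) * (y - ym) for x, y in zip(raw, reference)) / \
            sum((x - xm) ** 2 for x in raw)
    intercept = ym - slope * xm
    return slope, intercept

# hypothetical probe readings at the three bath temperatures used in the study
reference = [14.0, 26.0, 38.0]          # calibrated reference thermometer, deg C
raw = [14.21, 26.25, 38.29]             # probe reads roughly 0.2-0.3 deg C high
slope, intercept = calibration_fit(raw, reference)
corrected = [slope * x + intercept for x in raw]
```

After correction, the probe readings agree with the reference to well within the ±0.1 °C acceptance criterion.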
Propagation of uncertainty by Monte Carlo simulations in case of basic geodetic computations
NASA Astrophysics Data System (ADS)
Wyszkowska, Patrycja
2017-12-01
The determination of the accuracy of functions of measured or adjusted values may be a problem in geodetic computations. The general law of covariance propagation, or, in the case of uncorrelated observations, the law of propagation of variance (the Gaussian formula), is commonly used for that purpose. That approach is theoretically justified for linear functions. For non-linear functions, the first-order Taylor series expansion is usually used, but that solution is affected by the expansion error. The aim of this study is to determine the applicability of the general variance propagation law to the non-linear functions used in basic geodetic computations. The paper presents the errors that result from neglecting the higher-order terms and determines the range of validity of this simplification. The analysis is based on a comparison of the results obtained by the law of propagation of variance with a probabilistic approach, namely Monte Carlo simulation. Both methods are used to determine the accuracy of the following geodetic computations: the Cartesian coordinates of an unknown point in the three-point resection problem, azimuths and distances derived from Cartesian coordinates, and height differences in trigonometric and geometric levelling. The simulations and the analysis of the results confirm that the general law of variance propagation can be applied in basic geodetic computations even when the functions are non-linear, provided that the accuracy of the observations is not too low. Generally, this is not a problem with present geodetic instruments.
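The comparison between the variance propagation law and Monte Carlo simulation can be illustrated on a simple non-linear function of two measured coordinates. The function and the observation errors below are illustrative, not those from the paper:

```python
import math
import random

def linear_propagation(x, y, sx, sy):
    """First-order (Gaussian) propagation of variance for d = sqrt(x^2 + y^2)."""
    d = math.hypot(x, y)
    dfdx, dfdy = x / d, y / d              # partial derivatives of d
    return math.sqrt((dfdx * sx) ** 2 + (dfdy * sy) ** 2)

def monte_carlo_propagation(x, y, sx, sy, n=50_000, seed=1):
    """Propagate uncertainty by simulating normally distributed observations."""
    rng = random.Random(seed)
    d = [math.hypot(rng.gauss(x, sx), rng.gauss(y, sy)) for _ in range(n)]
    mean = sum(d) / n
    return math.sqrt(sum((di - mean) ** 2 for di in d) / (n - 1))

# distance to a point whose coordinates are measured to +/- 5 cm
s_lin = linear_propagation(300.0, 400.0, 0.05, 0.05)
s_mc = monte_carlo_propagation(300.0, 400.0, 0.05, 0.05)
```

With observation errors this small relative to the coordinates, the two estimates agree closely, which is the paper's conclusion for sufficiently accurate observations.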
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, X; Petrongolo, M; Wang, T
Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, the method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise on the decomposed images is then suppressed by an iterative method, which is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition of DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces noise level without resolution loss. In the future, we will perform more phantom studies to further validate the performance of the proposed method. This work is supported by a Varian MRA grant.
Attitude Determination Error Analysis System (ADEAS) mathematical specifications document
NASA Technical Reports Server (NTRS)
Nicholson, Mark; Markley, F.; Seidewitz, E.
1988-01-01
The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.
ERIC Educational Resources Information Center
Sideridis, Georgios D.
2016-01-01
The purpose of the present studies was to test the hypothesis that the psychometric characteristics of ability scales may be significantly distorted if one accounts for emotional factors during test taking. Specifically, the present studies evaluate the effects of anxiety and motivation on the item difficulties of the Rasch model. In Study 1, the…
Forced vibration analysis of rotating cyclic structures in NASTRAN
NASA Technical Reports Server (NTRS)
Elchuri, V.; Gallo, A. M.; Skalski, S. C.
1981-01-01
A new capability was added to the general purpose finite element program NASTRAN Level 17.7 to conduct forced vibration analysis of tuned cyclic structures rotating about their axis of symmetry. The effects of Coriolis and centripetal accelerations together with those due to linear acceleration of the axis of rotation were included. The theoretical, user's, programmer's and demonstration manuals for this new capability are presented.
NASA Astrophysics Data System (ADS)
Charlemagne, S.; Ture Savadkoohi, A.; Lamarque, C.-H.
2018-07-01
The continuous approximation is used in this work to describe the dynamics of a nonlinear chain of light oscillators coupled to a linear main system. A general methodology is applied to an example where the chain has local nonlinear restoring forces. The slow invariant manifold is detected at fast time scale. At slow time scale, equilibrium and singular points are sought around this manifold in order to predict periodic regimes and strongly modulated responses of the system. Analytical predictions are in good accordance with numerical results and represent a potent tool for designing nonlinear chains for passive control purposes.
NASA Technical Reports Server (NTRS)
Jackson, C. E., Jr.
1976-01-01
The NASTRAN Thermal Analyzer (NTA), Level 15.5.2/3, was used to provide non-linear steady-state (NLSS) and non-linear transient (NLTR) thermal predictions for the International Ultraviolet Explorer (IUE) Scientific Instrument (SI). NASTRAN structural models were used as the basis for the thermal models, which were produced by a straightforward conversion procedure. The accuracy of this technique was subsequently demonstrated by a comparison of NTA predictions with the results of a thermal vacuum test of the IUE Engineering Test Unit (ETU). Completion of these tasks was aided by the use of NTA subroutines.
Bowen, Stephen R; Chappell, Richard J; Bentzen, Søren M; Deveau, Michael A; Forrest, Lisa J; Jeraj, Robert
2012-01-01
Purpose To quantify associations between pre-radiotherapy and post-radiotherapy PET parameters via spatially resolved regression. Materials and methods Ten canine sinonasal cancer patients underwent PET/CT scans of [18F]FDG (FDGpre), [18F]FLT (FLTpre), and [61Cu]Cu-ATSM (Cu-ATSMpre). Following radiotherapy regimens of 50 Gy in 10 fractions, veterinary patients underwent FDG PET/CT scans at three months (FDGpost). Regression of standardized uptake values in baseline FDGpre, FLTpre and Cu-ATSMpre tumour voxels to those in FDGpost images was performed for linear, log-linear, generalized-linear and mixed-fit linear models. Goodness-of-fit in regression coefficients was assessed by R2. Hypothesis testing of coefficients over the patient population was performed. Results Multivariate linear model fits of FDGpre to FDGpost were significantly positive over the population (FDGpost~0.17 FDGpre, p=0.03), and classified slopes of RECIST non-responders and responders to be different (0.37 vs. 0.07, p=0.01). Generalized-linear model fits related FDGpre to FDGpost by a linear power law (FDGpost~FDGpre0.93, p<0.001). Univariate mixture model fits of FDGpre improved R2 from 0.17 to 0.52. Neither baseline FLT PET nor Cu-ATSM PET uptake contributed statistically significant multivariate regression coefficients. Conclusions Spatially resolved regression analysis indicates that pre-treatment FDG PET uptake is most strongly associated with three-month post-treatment FDG PET uptake in this patient population, though associations are histopathology-dependent. PMID:22682748
Stable orthogonal local discriminant embedding for linear dimensionality reduction.
Gao, Quanxue; Ma, Jingjie; Zhang, Hailin; Gao, Xinbo; Liu, Yamin
2013-07-01
Manifold learning is widely used in machine learning and pattern recognition. However, manifold learning only considers the similarity of samples belonging to the same class and ignores the within-class variation of the data, which impairs the generalization and stability of the algorithms. For this purpose, we construct an adjacency graph to model the intraclass variation that characterizes the most important properties, such as diversity of patterns, and then incorporate the diversity into the discriminant objective function for linear dimensionality reduction. Finally, we introduce the orthogonal constraint for the basis vectors and propose an orthogonal algorithm called stable orthogonal local discriminant embedding. Experimental results on several standard image databases demonstrate the effectiveness of the proposed dimensionality reduction approach.
Variability simulations with a steady, linearized primitive equations model
NASA Technical Reports Server (NTRS)
Kinter, J. L., III; Nigam, S.
1985-01-01
Solutions of the steady, primitive equations on a sphere, linearized about a zonally symmetric basic state, are computed for the purpose of simulating monthly mean variability in the troposphere. The basic states are observed, winter monthly mean, zonal means of zonal and meridional velocities, temperatures, and surface pressures computed from the 15-year NMC time series. A least squares fit to a series of Legendre polynomials is used to compute the basic states between 20 H and the equator, and the hemispheres are assumed symmetric. The model is spectral in the zonal direction, and centered differences are employed in the meridional and vertical directions. Since the model is steady and linear, the solution is obtained by inversion of a block, penta-diagonal matrix. The model simulates the climatology of the GFDL nine-level, spectral general circulation model quite closely, particularly in middle latitudes above the boundary layer. This experiment is an extension of that simulation to examine variability of the steady, linear solution.
Mirmohseni, Abdolreza; Olad, Ali
2010-01-01
A polystyrene coated quartz crystal nanobalance (QCN) sensor was developed for use in the determination of a number of linear short-chain aliphatic aldehyde and ketone vapors contained in air. The quartz crystal was modified by a thin-layer coating of a commercial grade general purpose polystyrene (GPPS) from Tabriz petrochemical company using a solution casting method. Determination was based on frequency shifts of the modified quartz crystal due to the adsorption of analytes at the surface of modified electrode in exposure to various concentrations of analytes. The frequency shift was found to have a linear relation to the concentration of analytes. Linear calibration curves were obtained for 7-70 mg l(-1) of analytes with correlation coefficients in the range of 0.9935-0.9989 and sensitivity factors in the range of 2.07-6.74 Hz/mg l(-1). A storage period of over three months showed no loss in the sensitivity and performance of the sensor.
Holographic dark energy in higher derivative gravity with time varying model parameter c2
NASA Astrophysics Data System (ADS)
Borah, B.; Ansari, M.
2015-01-01
The purpose of this paper is to study holographic dark energy in higher derivative gravity, assuming the model parameter c2 to be a slowly time-varying function. Since dark energy emerges as the combined effect of linear as well as non-linear terms of curvature, it is important to examine holographic dark energy in higher derivative gravity, where the action contains both linear and non-linear terms of the Ricci curvature R. We consider a non-interacting scenario of holographic dark energy with dark matter in a spatially flat universe and obtain the evolution of the equation-of-state parameter. We also determine the deceleration parameter and the evolution of the dark energy density to explain the expansion of the universe. Further, we investigate the validity of the generalized second law of thermodynamics in this scenario. Finally, we find a cosmological application of our work by evaluating a relation for the equation of state of holographic dark energy at low redshifts containing the c2 correction.
General purpose graphic processing unit implementation of adaptive pulse compression algorithms
NASA Astrophysics Data System (ADS)
Cai, Jingxiao; Zhang, Yan
2017-07-01
This study introduces a practical approach to implement real-time signal processing algorithms for general surveillance radar based on NVIDIA graphical processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as CUDA basic linear algebra subroutines and CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for the NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed and investigated. A statistical optimization approach is developed for this purpose without needing much knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve the performance. Benchmark performance is compared with the CPU performance in terms of processing accelerations. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense and avoid radar, and aerospace surveillance radar.
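Pulse compression itself amounts to correlating the received samples with a replica of the transmitted code. A minimal time-domain sketch follows, as a CPU stand-in for the FFT-based GPU kernels; the Barker-13 code and noiseless echo are illustrative, not the radar's actual waveform:

```python
def matched_filter(received, code):
    """Pulse compression by sliding correlation of the received samples
    with the transmitted code (time-domain equivalent of the FFT approach)."""
    n, m = len(received), len(code)
    return [sum(received[k + j] * code[j] for j in range(m))
            for k in range(n - m + 1)]

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
# echo of the code starting at delay 5 in an otherwise quiet record
received = [0.0] * 5 + [float(c) for c in barker13] + [0.0] * 5
output = matched_filter(received, barker13)
```

The compressed pulse peaks at the true delay with processing gain equal to the code length (13), while the Barker code keeps the sidelobes at magnitude one or below.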
Dimeric spectra analysis in Microsoft Excel: a comparative study.
Gilani, A Ghanadzadeh; Moghadam, M; Zakerhamidi, M S
2011-11-01
The purpose of this work is to introduce the reader to an Add-in implementation, Decom. This implementation provides the whole processing requirements for analysis of dimeric spectra. General linear and nonlinear decomposition algorithms were integrated as an Excel Add-in for easy installation and usage. In this work, the results of several samples investigations were compared to those obtained by Datan. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
A statistical package for computing time and frequency domain analysis
NASA Technical Reports Server (NTRS)
Brownlow, J.
1978-01-01
The spectrum analysis (SPA) program is a general purpose digital computer program designed to aid in data analysis. The program does time and frequency domain statistical analyses as well as some preanalysis data preparation. The capabilities of the SPA program include linear trend removal and/or digital filtering of data, plotting and/or listing of both filtered and unfiltered data, time domain statistical characterization of data, and frequency domain statistical characterization of data.
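The linear trend removal performed in SPA's pre-analysis step amounts to subtracting a least-squares line from the data. A minimal sketch, with the routine name and sample data chosen for illustration rather than taken from SPA:

```python
def remove_linear_trend(y):
    """Fit a least-squares line to equally spaced samples and subtract it,
    leaving a zero-mean, trend-free residual for later statistical analysis."""
    n = len(y)
    t = range(n)
    t_mean = sum(t) / n
    y_mean = sum(y) / n
    slope = sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y)) / \
            sum((ti - t_mean) ** 2 for ti in t)
    intercept = y_mean - slope * t_mean
    return [yi - (intercept + slope * ti) for ti, yi in zip(t, y)]

# a ramp plus an alternating component; detrending removes the ramp
signal = [2.0 + 0.5 * i + (1 if i % 2 else -1) for i in range(10)]
residual = remove_linear_trend(signal)
```

The residual has zero mean and is uncorrelated with time, so any subsequent frequency-domain characterization is not dominated by the trend.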
Analog Fault Diagnosis of Large-Scale Electronic Circuits.
1983-08-01
is invertible. Note that Eq. (26) is in general nonlinear while Equation (27) is linear. The latter ... is achieved at the expense of more test points. Navid and Willson, Jr. [7] considered the diagnosis ... Theoretically, both approaches are still under development and all seem feasible. It is the purpose of this report to compare these two approaches numerically
Zheng, Xiaoming
2017-12-01
The purpose of this work was to examine the effects of relationship functions between diagnostic image quality and radiation dose on the governing equations for image acquisition parameter variations in X-ray imaging. Various equations were derived for the optimal selection of peak kilovoltage (kVp) and exposure parameter (milliAmpere second, mAs) in computed tomography (CT), computed radiography (CR), and direct digital radiography. Logistic, logarithmic, and linear functions were employed to establish the relationship between radiation dose and diagnostic image quality. The radiation dose to the patient, as a function of image acquisition parameters (kVp, mAs) and patient size (d), was used in radiation dose and image quality optimization. Both logistic and logarithmic functions resulted in the same governing equation for optimal selection of image acquisition parameters using a dose efficiency index. For image quality as a linear function of radiation dose, the same governing equation was derived from the linear relationship. The general equations should be used in guiding clinical X-ray imaging through optimal selection of image acquisition parameters. The radiation dose to the patient could be reduced from current levels in medical X-ray imaging.
Streamflow record extension using power transformations and application to sediment transport
NASA Astrophysics Data System (ADS)
Moog, Douglas B.; Whiting, Peter J.; Thomas, Robert B.
1999-01-01
To obtain a representative set of flow rates for a stream, it is often desirable to fill in missing data or extend measurements to a longer time period by correlation to a nearby gage with a longer record. Linear least squares regression of the logarithms of the flows is a traditional and still common technique. However, its purpose is to generate optimal estimates of each day's discharge, rather than the population of discharges, for which it tends to underestimate variance. Maintenance-of-variance-extension (MOVE) equations [Hirsch, 1982] were developed to correct this bias. This study replaces the logarithmic transformation by the more general Box-Cox scaled power transformation, generating a more linear, constant-variance relationship for the MOVE extension. Combining the Box-Cox transformation with the MOVE extension is shown to improve accuracy in estimating order statistics of flow rate, particularly for the nonextreme discharges which generally govern cumulative transport over time. This advantage is illustrated by prediction of cumulative fractions of total bed load transport.
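The MOVE equation of Hirsch (1982) replaces the least-squares slope r·(s_y/s_x) with s_y/s_x so that the variance of the extended record is preserved rather than shrunk. A minimal sketch of MOVE.1, with hypothetical concurrent records (in transformed space) and assuming positively correlated gages:

```python
import math

def stats(v):
    """Sample mean and standard deviation."""
    n = len(v)
    m = sum(v) / n
    s = math.sqrt(sum((x - m) ** 2 for x in v) / (n - 1))
    return m, s

def move1(x_long, x_short, y_short):
    """Maintenance-of-variance extension (MOVE.1): estimate y at the long
    gage's extra dates using the full slope s_y/s_x, which preserves the
    variance of y (ordinary least squares would use r*s_y/s_x and shrink it)."""
    mx, sx = stats(x_short)
    my, sy = stats(y_short)
    return [my + (sy / sx) * (x - mx) for x in x_long]

# hypothetical concurrent records at the two gages
x_short = [1.0, 2.0, 3.0, 4.0]
y_short = [2.0, 2.5, 3.5, 4.0]
y_est = move1(x_short, x_short, y_short)
```

Applied to the concurrent period itself, the MOVE.1 estimates reproduce the mean and standard deviation of the short record exactly, which is the property that makes the extended population suitable for estimating order statistics and cumulative transport.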
Rapid solution of large-scale systems of equations
NASA Technical Reports Server (NTRS)
Storaasli, Olaf O.
1994-01-01
The analysis and design of complex aerospace structures requires the rapid solution of large systems of linear and nonlinear equations, eigenvalue extraction for buckling, vibration, and flutter modes, structural optimization, and design sensitivity calculation. Computers with multiple processors and vector capabilities can offer substantial computational advantages over traditional scalar computers for these analyses. These computers fall into two categories: shared memory computers and distributed memory computers. This presentation covers general-purpose, highly efficient algorithms for generation/assembly of element matrices, solution of systems of linear and nonlinear equations, eigenvalue and design sensitivity analysis, and optimization. All algorithms are coded in FORTRAN for shared memory computers and many are adapted to distributed memory computers. The capability and numerical performance of these algorithms will be addressed.
Block Gauss elimination followed by a classical iterative method for the solution of linear systems
NASA Astrophysics Data System (ADS)
Alanelli, Maria; Hadjidimos, Apostolos
2004-02-01
In the last two decades many papers have appeared in which the application of an iterative method for the solution of a linear system is preceded by a step of the Gauss elimination process, in the hope that this will increase the rate of convergence of the iterative method. This combination of methods has proven successful especially when the matrix A of the system is an M-matrix. The purpose of this paper is to extend the idea from one to several Gauss elimination steps, to consider other classes of matrices A, e.g., p-cyclic consistently ordered matrices, and to generalize and improve the asymptotic convergence rates of some of the methods known so far.
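The combination described, a Gauss elimination step followed by a classical iterative method, can be sketched for a small diagonally dominant system. Jacobi is used here as the iterative method and the matrix is illustrative, not one of the paper's test cases:

```python
def gauss_step(A, b):
    """One step of Gauss elimination: eliminate the first unknown
    from rows 2..n, returning the modified system."""
    n = len(A)
    A = [row[:] for row in A]
    b = b[:]
    for i in range(1, n):
        m = A[i][0] / A[0][0]
        A[i] = [a_ij - m * a_0j for a_ij, a_0j in zip(A[i], A[0])]
        b[i] -= m * b[0]
    return A, b

def jacobi(A, b, iters=200):
    """Classical Jacobi iteration starting from the zero vector."""
    n = len(A)
    x = [0.0] * n
    for _ in range(iters):
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# a diagonally dominant (M-matrix) system with exact solution (1, 2, 3)
A = [[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]]
b = [2.0, 4.0, 10.0]
A1, b1 = gauss_step(A, b)
x = jacobi(A1, b1)
```

After the elimination step the subdiagonal entries in the first column are zero, and Jacobi applied to the reduced system converges to the solution of the original one.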
An overview of longitudinal data analysis methods for neurological research.
Locascio, Joseph J; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models.
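Traditional option (2) above, reducing each subject's trajectory to a single summary number, can be sketched as a per-subject least-squares slope. The subjects, visit times, and scores below are hypothetical:

```python
def subject_slope(times, scores):
    """Summary-measure approach: reduce one subject's longitudinal data to a
    single number, the within-subject least-squares slope of score on time."""
    n = len(times)
    tm = sum(times) / n
    sm = sum(scores) / n
    return sum((t - tm) * (s - sm) for t, s in zip(times, scores)) / \
           sum((t - tm) ** 2 for t in times)

# hypothetical subjects measured at the same four visits
visits = [0.0, 1.0, 2.0, 3.0]
subjects = {
    "s1": [10.0, 9.0, 8.5, 7.0],
    "s2": [12.0, 11.5, 10.0, 9.5],
}
slopes = {sid: subject_slope(visits, y) for sid, y in subjects.items()}
```

The per-subject slopes can then be compared across treatment groups with a simple test; as the article notes, this is best reserved for quick or preliminary analyses, with mixed-effects regression preferred for the main analysis.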
Rowan, L.C.; Trautwein, C.M.; Purdy, T.L.
1990-01-01
This study was undertaken as part of the Conterminous U.S. Mineral Assessment Program (CUSMAP). The purpose of the study was to map linear features on Landsat Multispectral Scanner (MSS) images and a proprietary side-looking airborne radar (SLAR) image mosaic and to determine the spatial relationship between these linear features and the locations of metallic mineral occurrences. The results show a close spatial association of linear features with metallic mineral occurrences in parts of the quadrangle, but in other areas the association is less well defined. Linear features are defined as distinct linear and slightly curvilinear elements mappable on MSS and SLAR images. The features generally represent linear segments of streams, ridges, and terminations of topographic features; however, they may also represent tonal patterns that are related to variations in lithology and vegetation. Most linear features in the Butte quadrangle probably represent underlying structural elements, such as fractures (with and without displacement), dikes, and alignments of fold axes. However, in areas underlain by sedimentary rocks, some of the linear features may reflect bedding traces. This report describes the geologic setting of the Butte quadrangle, the procedures used in mapping and analyzing the linear features, and the results of the study. Relationships of these features to placer and non-metal deposits were not analyzed in this study and are not discussed in this report.
Reynolds, Matthew R
2013-03-01
The linear loadings of intelligence test composite scores on a general factor (g) have been investigated recently in factor analytic studies. Spearman's law of diminishing returns (SLODR), however, implies that the g loadings of test scores likely decrease in magnitude as g increases, or they are nonlinear. The purpose of this study was to (a) investigate whether the g loadings of composite scores from the Differential Ability Scales (2nd ed.) (DAS-II, C. D. Elliott, 2007a, Differential Ability Scales (2nd ed.). San Antonio, TX: Pearson) were nonlinear and (b) if they were nonlinear, to compare them with linear g loadings to demonstrate how SLODR alters the interpretation of these loadings. Linear and nonlinear confirmatory factor analysis (CFA) models were used to model Nonverbal Reasoning, Verbal Ability, Visual Spatial Ability, Working Memory, and Processing Speed composite scores in four age groups (5-6, 7-8, 9-13, and 14-17) from the DAS-II norming sample. The nonlinear CFA models provided better fit to the data than did the linear models. In support of SLODR, estimates obtained from the nonlinear CFAs indicated that g loadings decreased as g level increased. The nonlinear portion for the nonverbal reasoning loading, however, was not statistically significant across the age groups. Knowledge of general ability level informs composite score interpretation because g is less likely to produce differences, or is measured less, in those scores at higher g levels. One implication is that it may be more important to examine the pattern of specific abilities at higher general ability levels. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Nonlinear predictive control of a LEGO mobile robot
NASA Astrophysics Data System (ADS)
Merabti, H.; Bouchemal, B.; Belarbi, K.; Boucherma, D.; Amouri, A.
2014-10-01
Metaheuristics are general-purpose heuristics which have shown great potential for the solution of difficult optimization problems. In this work, we apply a metaheuristic, namely particle swarm optimization (PSO), to the solution of the optimization problem arising in nonlinear model predictive control (NLMPC). This algorithm is easy to code and may be considered an alternative to more classical solution procedures. The PSO-NLMPC is applied to control a mobile robot for trajectory tracking and obstacle avoidance. Experimental results show the strength of this approach.
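As a minimal illustration of the gbest particle swarm optimizer the abstract describes (not the authors' implementation; the quadratic cost below is a toy stand-in for an NLMPC stage cost):

```python
import random

def pso(cost, dim=2, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy stand-in for an NLMPC cost: distance of the control to a reference.
best, val = pso(lambda u: (u[0] - 1.0) ** 2 + (u[1] + 2.0) ** 2)
```

In an actual NLMPC loop, `cost` would be the predicted tracking error plus control effort over the horizon, re-minimized at every sample time.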
2008-10-01
and UTCHEM (Clement et al., 1998). While all four of these software packages use conservation of mass as the basic principle for tracking NAPL...simulate dissolution of a single NAPL component. UTCHEM can be used to simulate dissolution of a multiple NAPL components using either linear or first...parameters. No UTCHEM a/ 3D model, general purpose NAPL simulator. Yes Virulo a/ Probabilistic model for predicting leaching of viruses in unsaturated
Frequency Response of Synthetic Vocal Fold Models with Linear and Nonlinear Material Properties
ERIC Educational Resources Information Center
Shaw, Stephanie M.; Thomson, Scott L.; Dromey, Christopher; Smith, Simeon
2012-01-01
Purpose: The purpose of this study was to create synthetic vocal fold models with nonlinear stress-strain properties and to investigate the effect of linear versus nonlinear material properties on fundamental frequency (F0) during anterior-posterior stretching. Method: Three materially linear and 3 materially nonlinear models were…
An Overview of Longitudinal Data Analysis Methods for Neurological Research
Locascio, Joseph J.; Atri, Alireza
2011-01-01
The purpose of this article is to provide a concise, broad and readily accessible overview of longitudinal data analysis methods, aimed to be a practical guide for clinical investigators in neurology. In general, we advise that older, traditional methods, including (1) simple regression of the dependent variable on a time measure, (2) analyzing a single summary subject level number that indexes changes for each subject and (3) a general linear model approach with a fixed-subject effect, should be reserved for quick, simple or preliminary analyses. We advocate the general use of mixed-random and fixed-effect regression models for analyses of most longitudinal clinical studies. Under restrictive situations or to provide validation, we recommend: (1) repeated-measure analysis of covariance (ANCOVA), (2) ANCOVA for two time points, (3) generalized estimating equations and (4) latent growth curve/structural equation models. PMID:22203825
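A toy sketch of the "summary statistic" approach listed as method (2) above: fit an ordinary least-squares slope per subject, then summarize across subjects. The data are hypothetical, not from the study:

```python
def slope(ts, ys):
    """Ordinary least-squares slope of ys regressed on ts."""
    n = len(ts)
    tbar = sum(ts) / n
    ybar = sum(ys) / n
    num = sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
    den = sum((t - tbar) ** 2 for t in ts)
    return num / den

# Hypothetical data: 3 subjects measured at visits t = 0..3, each with its
# own intercept but a shared decline of roughly -1 unit per visit.
subjects = {
    "s1": [10.0, 9.1, 8.0, 7.2],
    "s2": [14.0, 12.9, 12.1, 11.0],
    "s3": [7.0, 6.1, 4.9, 4.0],
}
per_subject = [slope([0, 1, 2, 3], ys) for ys in subjects.values()]
group_slope = sum(per_subject) / len(per_subject)
```

The per-subject slopes would then be compared across groups with a simple test; a mixed-effects model generalizes this by estimating the subject-level and group-level variation jointly.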
A look at scalable dense linear algebra libraries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, J.J.; Van de Geijn, R.A.; Walker, D.W.
1992-01-01
We discuss the essential design features of a library of scalable software for performing dense linear algebra computations on distributed memory concurrent computers. The square block scattered decomposition is proposed as a flexible and general-purpose way of decomposing most, if not all, dense matrix problems. An object-oriented interface to the library permits more portable applications to be written, and is easy to learn and use, since details of the parallel implementation are hidden from the user. Experiments on the Intel Touchstone Delta system with a prototype code that uses the square block scattered decomposition to perform LU factorization are presented and analyzed. It was found that the code was both scalable and efficient, performing at about 14 GFLOPS (double precision) for the largest problem considered.
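The square block scattered decomposition maps matrix blocks cyclically onto a 2-D process grid. A minimal sketch of that owner computation (illustrative only; block size and grid shape are assumptions):

```python
def owner(i, j, nb, prow, pcol):
    """Process-grid coordinates owning matrix entry (i, j) under a 2-D
    block-cyclic ("square block scattered") layout with nb x nb blocks
    on a prow x pcol process grid."""
    return (i // nb) % prow, (j // nb) % pcol

# An 8x8 matrix of 2x2 blocks scattered over a 2x2 process grid.
layout = [[owner(i, j, nb=2, prow=2, pcol=2) for j in range(8)]
          for i in range(8)]
```

Cyclic wrapping of blocks is what gives the layout its load balance: successive block rows and columns revisit every process, so no process owns only the "finished" corner of an LU factorization.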
Predictors of Adolescents’ Health-promoting Behaviors Guided by Primary Socialization Theory
Rew, Lynn; Arheart, Kristopher L.; Thompson, Sanna; Johnson, Karen
2013-01-01
Purpose: The purpose of this study was to determine the influence of parents and peers on adolescents’ health-promoting behaviors, framed by Primary Socialization Theory. Design and Method: Longitudinal data collected annually from 1,081 rural youth (mean age = 17 ±.7; 43.5% males; 44% Hispanic) and once from their parents were analyzed using generalized linear models. Results: Parental monitoring and adolescent’s religious commitment significantly predicted all health-promoting behaviors (nutrition, physical activity, safety, health practices awareness, stress management). Other statistically significant predictors were parent’s responsiveness and health-promoting behaviors. Peer influence predicted safety and stress management. Practice Implications: Nurses may facilitate adolescents’ development of health-promoting behaviors through family-focused interventions. PMID:24094123
Turbulent Motion of Liquids in Hydraulic Resistances with a Linear Cylindrical Slide-Valve
Velescu, C.; Popa, N. C.
2015-01-01
We analyze the motion of viscous and incompressible liquids in the annular space of controllable hydraulic resistances with a cylindrical linear slide-valve. This theoretical study focuses on the turbulent and steady-state motion regimes. The hydraulic resistances mentioned above are the most frequent type of hydraulic resistances used in hydraulic actuators and automation systems. To study the liquids' motion in the controllable hydraulic resistances with a linear cylindrical slide-valve, the report proposes an original analytic method. This study can similarly be applied to any other type of hydraulic resistance. Another purpose of this study is to determine certain mathematical relationships useful to approach the theoretical functionality of hydraulic resistances with magnetic controllable fluids as incompressible fluids in the presence of a controllable magnetic field. In this report, we established general analytic equations to calculate (i) velocity and pressure distributions, (ii) average velocity, (iii) volume flow rate of the liquid, (iv) pressures difference, and (v) radial clearance. PMID:26167532
Langley Stability and Transition Analysis Code (LASTRAC) Version 1.2 User Manual
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan
2004-01-01
LASTRAC is a general-purpose, physics-based transition prediction code released by NASA for Laminar Flow Control studies and transition research. The design and development of the LASTRAC code is aimed at providing an engineering tool that is easy to use and yet capable of dealing with a broad range of transition related issues. It was written from scratch based on the state-of-the-art numerical methods for stability analysis and modern software technologies. At low fidelity, it allows users to perform linear stability analysis and N-factor transition correlation for a broad range of flow regimes and configurations by using either the linear stability theory or linear parabolized stability equations method. At high fidelity, users may use nonlinear PSE to track finite-amplitude disturbances until the rise in skin friction. This document describes the governing equations, numerical methods, code development, detailed description of input/output parameters, and case studies for the current release of LASTRAC.
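The N-factor correlation mentioned above integrates the local spatial growth rate of a disturbance along the surface, N(x) = ∫ −αᵢ dx. A minimal sketch with a hypothetical growth-rate curve (not LASTRAC's numerics):

```python
def n_factor(xs, growth_rates):
    """Trapezoidal integration of the local spatial growth rate -alpha_i,
    giving the amplification exponent N(x) used in e^N transition
    correlation."""
    n = [0.0]
    for k in range(1, len(xs)):
        n.append(n[-1] + 0.5 * (growth_rates[k] + growth_rates[k - 1])
                 * (xs[k] - xs[k - 1]))
    return n

# Hypothetical growth-rate distribution: instability amplifies, then decays.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
sigma = [0.0, 4.0, 8.0, 8.0, 4.0]
N = n_factor(xs, sigma)
```

Transition is then correlated with the station where N first exceeds an empirical threshold (often quoted near 9 to 10 for quiet free-stream conditions).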
A reduced successive quadratic programming strategy for errors-in-variables estimation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tjoa, I.-B.; Biegler, L. T.; Carnegie-Mellon Univ.
Parameter estimation problems in process engineering represent a special class of nonlinear optimization problems, because the maximum likelihood structure of the objective function can be exploited. Within this class, the errors in variables method (EVM) is particularly interesting. Here we seek a weighted least-squares fit to the measurements with an underdetermined process model. Thus, both the number of variables and degrees of freedom available for optimization increase linearly with the number of data sets. Large optimization problems of this type can be particularly challenging and expensive to solve because, for general-purpose nonlinear programming (NLP) algorithms, the computational effort increases at least quadratically with problem size. In this study we develop a tailored NLP strategy for EVM problems. The method is based on a reduced Hessian approach to successive quadratic programming (SQP), but with the decomposition performed separately for each data set. This leads to the elimination of all variables but the model parameters, which are determined by a QP coordination step. In this way the computational effort remains linear in the number of data sets. Moreover, unlike previous approaches to the EVM problem, global and superlinear properties of the SQP algorithm apply naturally. Also, the method directly incorporates inequality constraints on the model parameters (although not on the fitted variables). This approach is demonstrated on five example problems with up to 102 degrees of freedom. Compared to general-purpose NLP algorithms, large improvements in computational performance are observed.
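A toy sketch of the decomposition idea (not the authors' reduced-SQP algorithm): for the scalar model y = θx, the fitted variable for each measurement has a closed form at fixed θ, so each data point is eliminated independently; a simple scalar search then stands in for the QP coordination step. All data and names are illustrative:

```python
def evm_cost(theta, data):
    """EVM objective for y = theta * x: for fixed theta, the fitted x for
    each measurement (xm, ym) minimizes (x - xm)^2 + (theta*x - ym)^2 and
    has the closed form below, so each data set is eliminated separately."""
    total = 0.0
    for xm, ym in data:
        x = (xm + theta * ym) / (1.0 + theta * theta)  # inner minimizer
        total += (x - xm) ** 2 + (theta * x - ym) ** 2
    return total

def minimize_scalar(f, lo, hi, iters=200):
    """Ternary search on a unimodal function, standing in for the
    coordination step over the model parameter."""
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# Hypothetical measurements scattered around y = 2x, with error in both x and y.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]
theta = minimize_scalar(lambda t: evm_cost(t, data), 0.0, 5.0)
```

The key property mirrored here is that the cost of each outer evaluation grows only linearly with the number of data sets, since the inner elimination is independent per measurement.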
Attitude dynamics simulation subroutines for systems of hinge-connected rigid bodies
NASA Technical Reports Server (NTRS)
Fleischer, G. E.; Likins, P. W.
1974-01-01
Several computer subroutines are designed to provide the solution to minimum-dimension sets of discrete-coordinate equations of motion for systems consisting of an arbitrary number of hinge-connected rigid bodies assembled in a tree topology. In particular, these routines may be applied to: (1) the case of completely unrestricted hinge rotations, (2) the totally linearized case (all system rotations are small), and (3) the mixed, or partially linearized, case. The use of the programs in each case is demonstrated using a five-body spacecraft and attitude control system configuration. The ability of the subroutines to accommodate prescribed motions of system bodies is also demonstrated. Complete listings and user instructions are included for these routines (written in FORTRAN V) which are intended as multi- and general-purpose tools in the simulation of spacecraft and other complex electromechanical systems.
Gary, S. Peter
2015-04-06
Plasma turbulence consists of an ensemble of enhanced, broadband electromagnetic fluctuations, typically driven by multi-wave interactions which transfer energy in wavevector space via nonlinear cascade processes. In addition, temperature anisotropy instabilities in collisionless plasmas are driven by quasi-linear wave–particle interactions which transfer particle kinetic energy to field fluctuation energy; the resulting enhanced fluctuations are typically narrowband in wavevector magnitude and direction. Whatever their sources, short-wavelength fluctuations are those at which charged particle kinetic, that is, velocity-space, properties are important; these are generally wavelengths of the order of or shorter than the ion inertial length or the thermal ion gyroradius. The purpose of this review is to summarize and interpret recent computational results concerning short-wavelength plasma turbulence, short-wavelength temperature anisotropy instabilities and relationships between the two phenomena.
Suboptimal LQR-based spacecraft full motion control: Theory and experimentation
NASA Astrophysics Data System (ADS)
Guarnaccia, Leone; Bevilacqua, Riccardo; Pastorelli, Stefano P.
2016-05-01
This work introduces a real time suboptimal control algorithm for six-degree-of-freedom spacecraft maneuvering based on a State-Dependent-Algebraic-Riccati-Equation (SDARE) approach and real-time linearization of the equations of motion. The control strategy is sub-optimal since the gains of the linear quadratic regulator (LQR) are re-computed at each sample time. The cost function of the proposed controller has been compared with the one obtained via a general purpose optimal control software, showing, on average, an increase in control effort of approximately 15%, compensated by real-time implementability. Lastly, the paper presents experimental tests on a hardware-in-the-loop six-degree-of-freedom spacecraft simulator, designed for testing new guidance, navigation, and control algorithms for nano-satellites in a one-g laboratory environment. The tests show the real-time feasibility of the proposed approach.
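A scalar, discrete-time caricature of the recompute-the-gain-each-sample idea described above (the paper's SDARE formulation is continuous and six-degree-of-freedom; the state-dependent coefficient a(x) below is a toy assumption):

```python
def dare_gain(a, b, q, r, iters=500):
    """Fixed-point iteration of the scalar discrete algebraic Riccati
    equation p = q + a^2 p - (a b p)^2 / (r + b^2 p); returns the LQR
    gain k so that u = -k * x."""
    p = q
    for _ in range(iters):
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)
    return a * b * p / (r + b * b * p)

# Suboptimal SDARE-style loop: re-linearize and re-solve at each sample.
x = 2.0
for _ in range(50):
    k = dare_gain(a=1.0 + 0.1 * x, b=1.0, q=1.0, r=1.0)
    x = (1.0 + 0.1 * x) * x - k * x  # closed-loop step with u = -k*x
```

The gains are optimal only for the frozen linearization at each step, which is exactly why the resulting controller is suboptimal but cheap enough for real-time use.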
NASA Technical Reports Server (NTRS)
Shinn, J. L.; Wilson, J. W.
2003-01-01
The tissue-equivalent proportional counter was intended to provide the energy absorbed from a radiation field and an estimate of the corresponding linear energy transfer (LET) for evaluation of radiation quality, to convert absorbed dose to dose equivalent. It was the recognition of the limitations in estimating LET which led to a new approach to dosimetry, microdosimetry, and the corresponding emphasis on energy deposited in a small tissue volume as the driver of biological response, with the defined quantity of lineal energy. In many circumstances the average lineal energy and LET are closely related, which has provided a basis for estimating dose equivalent. Still, in many cases the lineal energy is poorly related to LET, which calls into question its usefulness as a general-purpose device. These relationships are examined in this paper.
NASA Astrophysics Data System (ADS)
Susanti, Hesty; Suprijanto, Kurniadi, Deddy
2018-02-01
Needle visibility in ultrasound-guided techniques is a crucial factor for successful interventional procedures. It is affected by several factors, i.e. puncture depth, insertion angle, needle size and material, and imaging technology; because of these factors the needle is not always well visible. 20 G needles of 15 cm length (Nano Line, facet) were inserted into a water bath with varying insertion angles and depths. Ultrasound measurements were performed with a BK-Medical Flex Focus 800 using a 12 MHz linear array and a 5 MHz curved array in Ultrasound Guided Regional Anesthesia mode. We propose 3 criteria to evaluate needle visibility: (1) maximum intensity, (2) mean intensity, and (3) the ratio between minimum and maximum intensity. These criteria were then depicted in representative maps for practical purposes. The best criterion for representing needle visibility was criterion 1. Generally, the appearance pattern of the needle from this criterion was relatively consistent: for the linear array, visibility was relatively poor in the middle part of the shaft, while for the curved array it was relatively better toward the end of the shaft. With further investigation, for example with the use of a tissue-mimicking phantom, the representative maps can be built for future practical purposes, i.e. as a tool for clinicians to ensure better needle placement in clinical applications. This will help them avoid the "dead" area where the needle is not well visible, reducing the risk of traversing vital structures and the number of required insertions, resulting in less patient morbidity. These simple criteria and representative maps can be used to evaluate general visibility patterns across a wide range of needle types and sizes in different insertion media. This information is also important as an early investigation for future research on needle visibility improvement, i.e. the development of beamforming strategies and ultrasound-enhanced (echogenic) needles.
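The three visibility criteria are simple statistics of the intensity profile sampled along the shaft. A sketch with hypothetical 8-bit intensities (not the study's data):

```python
def visibility_criteria(profile):
    """Three needle-visibility criteria from an intensity profile sampled
    along the shaft: (1) maximum intensity, (2) mean intensity,
    (3) ratio of minimum to maximum intensity."""
    mx = max(profile)
    return mx, sum(profile) / len(profile), min(profile) / mx

# Hypothetical intensities along an inserted shaft: brighter near the tip
# and hub, dimmer in the middle (the "dead" region described above).
shaft = [180, 150, 90, 60, 95, 140, 200, 210]
c1, c2, c3 = visibility_criteria(shaft)
```

Evaluated over a grid of insertion angles and depths, these three numbers are what populate the representative maps.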
Neural control of magnetic suspension systems
NASA Technical Reports Server (NTRS)
Gray, W. Steven
1993-01-01
The purpose of this research program is to design, build and test (in cooperation with NASA personnel from the NASA Langley Research Center) neural controllers for two different small air-gap magnetic suspension systems. The general objective of the program is to study neural network architectures for the purpose of control in an experimental setting and to demonstrate the feasibility of the concept. The specific objectives of the research program are: (1) to demonstrate through simulation and experimentation the feasibility of using neural controllers to stabilize a nonlinear magnetic suspension system; (2) to investigate through simulation and experimentation the performance of neural controller designs under various types of parametric and nonparametric uncertainty; (3) to investigate through simulation and experimentation various types of neural architectures for real-time control with respect to performance and complexity; and (4) to benchmark in an experimental setting the performance of neural controllers against other types of existing linear and nonlinear compensator designs. To date, the first one-dimensional, small air-gap magnetic suspension system has been built, tested and delivered to the NASA Langley Research Center. The device is currently being stabilized with a digital linear phase-lead controller. The neural controller hardware is under construction. Two different neural network paradigms are under consideration, one based on hidden-layer feedforward networks trained via backpropagation and one based on using Gaussian radial basis functions trained by analytical methods related to stability conditions. Some advanced nonlinear control algorithms using feedback linearization and sliding mode control are in simulation studies.
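A minimal sketch of the second paradigm mentioned, a Gaussian radial-basis-function network mapping a scalar state to a control signal (centers, widths, and weights below are illustrative, not from the program):

```python
import math

def rbf_controller(x, centers, widths, weights):
    """Gaussian radial-basis-function network:
    u(x) = sum_i w_i * exp(-((x - c_i) / s_i)^2)."""
    return sum(w * math.exp(-((x - c) / s) ** 2)
               for c, s, w in zip(centers, widths, weights))

# Hypothetical 3-center network acting on a 1-D air-gap error signal.
u = rbf_controller(0.1, centers=[-1.0, 0.0, 1.0],
                   widths=[0.5, 0.5, 0.5], weights=[-2.0, 0.5, 2.0])
```

Because the output is linear in the weights, such networks can be trained by analytical (least-squares or Lyapunov-based) methods rather than gradient descent, which is the appeal noted above.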
Item Purification in Differential Item Functioning Using Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Liu, Qian
2011-01-01
For this dissertation, four item purification procedures were implemented onto the generalized linear mixed model for differential item functioning (DIF) analysis, and the performance of these item purification procedures was investigated through a series of simulations. Among the four procedures, forward and generalized linear mixed model (GLMM)…
Numerical Analysis of 2-D and 3-D MHD Flows Relevant to Fusion Applications
Khodak, Andrei
2017-08-21
Here, the analysis of many fusion applications such as liquid-metal blankets requires application of computational fluid dynamics (CFD) methods for electrically conductive liquids in geometrically complex regions and in the presence of a strong magnetic field. A current state of the art general purpose CFD code allows modeling of the flow in complex geometric regions, with simultaneous conjugated heat transfer analysis in liquid and surrounding solid parts. Together with a magnetohydrodynamics (MHD) capability, the general purpose CFD code will be a valuable tool for the design and optimization of fusion devices. This paper describes an introduction of MHD capability into the general purpose CFD code CFX, part of the ANSYS Workbench. The code was adapted for MHD problems using a magnetic induction approach. CFX allows introduction of user-defined variables using transport or Poisson equations. For MHD adaptation of the code three additional transport equations were introduced for the components of the magnetic field, in addition to the Poisson equation for electric potential. The Lorentz force is included in the momentum transport equation as a source term. Fusion applications usually involve very strong magnetic fields, with values of the Hartmann number of up to tens of thousands. In this situation a system of MHD equations become very rigid with very large source terms and very strong variable gradients. To increase system robustness, special measures were introduced during the iterative convergence process, such as linearization using source coefficient for momentum equations. The MHD implementation in general purpose CFD code was tested against benchmarks, specifically selected for liquid-metal blanket applications. Results of numerical simulations using present implementation closely match analytical solutions for a Hartmann number of up to 1500 for a 2-D laminar flow in the duct of square cross section, with conducting and nonconducting walls. Results for a 3-D test case are also included.
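The analytic benchmark referenced above is classical Hartmann flow. A sketch of the normalized velocity profile between insulating walls at y = ±1 (the standard textbook form, used here only to show how a larger Hartmann number flattens the core):

```python
import math

def hartmann_profile(y, ha):
    """Analytic velocity profile for laminar MHD duct (Hartmann) flow
    between insulating walls at y = +/-1, normalized so u(0) = 1:
    u(y) = (cosh(Ha) - cosh(Ha*y)) / (cosh(Ha) - 1)."""
    return (math.cosh(ha) - math.cosh(ha * y)) / (math.cosh(ha) - 1.0)

# Sample both a low and a high Hartmann number across the duct.
u_low = [hartmann_profile(y / 10.0, ha=1.0) for y in range(-10, 11)]
u_high = [hartmann_profile(y / 10.0, ha=20.0) for y in range(-10, 11)]
```

At high Hartmann number the profile is nearly flat in the core with thin boundary layers of thickness ~1/Ha at the walls, which is precisely the steep-gradient regime that makes the coupled system numerically rigid.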
Environmental standards for ionizing radiation: theoretical basis for dose-response curves.
Upton, A C
1983-01-01
The types of injury attributable to ionizing radiation are subdivided, for purposes of risk assessment and radiological protection, into two broad categories: stochastic effects and nonstochastic effects. Stochastic effects are viewed as probabilistic phenomena, varying in frequency but not severity as a function of the dose, without any threshold; nonstochastic effects are viewed as deterministic phenomena, varying in both frequency and severity as a function of the dose, with clinical thresholds. Included among stochastic effects are heritable effects (mutations and chromosome aberrations) and carcinogenic effects. Both types of effects are envisioned as unicellular phenomena which can result from nonlethal injury of individual cells, without the necessity of damage to other cells. For the induction of mutations and chromosome aberrations in the low-to-intermediate dose range, the dose-response curve with high-linear energy transfer (LET) radiation generally conforms to a linear nonthreshold relationship and varies relatively little with the dose rate. In contrast, the curve with low-LET radiation generally conforms to a linear-quadratic relationship, rising less steeply than the curve with high-LET radiation and increasing in slope with increasing dose and dose rate. The dose-response curve for carcinogenic effects varies widely from one type of neoplasm to another in the intermediate-to-high dose range, in part because of differences in the way large doses of radiation can affect the promotion and progression of different neoplasms. Information about dose-response relations for low-level irradiation is fragmentary but consistent, in general, with the hypothesis that the neoplastic transformation may result from mutation, chromosome aberration or genetic recombination in a single susceptible cell. PMID:6653536
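The two dose-response shapes contrasted above can be written as E(D) = αD + βD² (linear-quadratic, low-LET) versus E(D) = αD (linear nonthreshold, high-LET). A sketch with hypothetical coefficients, chosen only to show the quadratic term dominating at high dose:

```python
def excess_effect(dose, alpha, beta):
    """Linear-quadratic dose-response model E(D) = alpha*D + beta*D^2;
    setting beta = 0 recovers the linear nonthreshold form."""
    return alpha * dose + beta * dose * dose

# Hypothetical coefficients (illustrative, not fitted values).
doses = (0.1, 1.0, 5.0)
low_let = [excess_effect(d, alpha=0.02, beta=0.01) for d in doses]
high_let = [excess_effect(d, alpha=0.2, beta=0.0) for d in doses]
```

At low dose the α terms dominate both curves, which is why low-dose risk estimates hinge on α; the βD² term is what makes the low-LET curve steepen with increasing dose and dose rate.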
NASA Astrophysics Data System (ADS)
Hau, Jan-Niklas; Oberlack, Martin; Chagelishvili, George
2017-04-01
We present a unifying solution framework for the linearized compressible equations for two-dimensional linearly sheared unbounded flows using the Lie symmetry analysis. The full set of symmetries that are admitted by the underlying system of equations is employed to systematically derive the one- and two-dimensional optimal systems of subalgebras, whose connected group reductions lead to three distinct invariant ansatz functions for the governing sets of partial differential equations (PDEs). The purpose of this analysis is threefold and explicitly we show that (i) there are three invariant solutions that stem from the optimal system. These include a general ansatz function with two free parameters, as well as the ansatz functions of the Kelvin mode and the modal approach. Specifically, the first approach unifies these well-known ansatz functions. By considering two limiting cases of the free parameters and related algebraic transformations, the general ansatz function is reduced to either of them. This fact also proves the existence of a link between the Kelvin mode and modal ansatz functions, as these appear to be the limiting cases of the general one. (ii) The Lie algebra associated with the Lie group admitted by the PDEs governing the compressible dynamics is a subalgebra associated with the group admitted by the equations governing the incompressible dynamics, which allows an additional (scaling) symmetry. Hence, any consequences drawn from the compressible case equally hold for the incompressible counterpart. (iii) In any of the systems of ordinary differential equations, derived by the three ansatz functions in the compressible case, the linearized potential vorticity is a conserved quantity that allows us to analyze vortex and wave mode perturbations separately.
Entanglement-assisted quantum feedback control
NASA Astrophysics Data System (ADS)
Yamamoto, Naoki; Mikami, Tomoaki
2017-07-01
The main advantage of quantum metrology relies on the effective use of entanglement, which indeed allows us to achieve strictly better estimation performance over the standard quantum limit. In this paper, we propose an analogous method utilizing entanglement for the purpose of feedback control. The system considered is a general linear dynamical quantum system, where the control goal can be systematically formulated as a linear quadratic Gaussian control problem based on the quantum Kalman filtering method; in this setting, an entangled input probe field is effectively used to reduce the estimation error and accordingly the control cost function. In particular, we show that, in the problem of cooling an opto-mechanical oscillator, the entanglement-assisted feedback control can lower the stationary occupation number of the oscillator below the limit attainable by the controller with a coherent probe field and furthermore beats the controller with an optimized squeezed probe field.
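The quantum Kalman filter at the heart of the LQG formulation above has the same predict/update structure as its classical scalar counterpart. A minimal classical sketch (the entanglement-assisted setting effectively reduces the measurement noise r; all numbers are hypothetical):

```python
def kalman_step(x, p, y, a, c, q, r):
    """One predict/update cycle of a scalar Kalman filter, the estimator
    underlying linear quadratic Gaussian (LQG) feedback control."""
    x_pred = a * x                          # state prediction
    p_pred = a * a * p + q                  # error-variance prediction
    k = p_pred * c / (c * c * p_pred + r)   # Kalman gain
    x_new = x_pred + k * (y - c * x_pred)   # measurement update
    p_new = (1.0 - k * c) * p_pred
    return x_new, p_new

# Track a hypothetical decaying mode from noiseless measurements; the
# error variance p contracts as measurements arrive.
x, p = 0.0, 1.0
truth = 1.0
for _ in range(20):
    truth *= 0.9
    x, p = kalman_step(x, p, y=truth, a=0.9, c=1.0, q=0.01, r=0.1)
```

In the LQG setting, the filtered estimate x feeds a state-feedback law, and any reduction in the steady-state variance p translates directly into a lower control cost, which is the mechanism the entangled probe exploits.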
Fuzzy linear model for production optimization of mining systems with multiple entities
NASA Astrophysics Data System (ADS)
Vujic, Slobodan; Benovic, Tomo; Miljanovic, Igor; Hudej, Marjan; Milutinovic, Aleksandar; Pavlovic, Petar
2011-12-01
Planning and production optimization within mining systems of multiple mines or several work sites (entities) using fuzzy linear programming (LP) was studied. LP is among the most commonly used operations research methods in mining engineering. After an introductory review of the properties and limitations of applying LP, short reviews of the general settings of deterministic and fuzzy LP models are presented. For the purpose of comparative analysis, the application of both LP models is presented using the example of the Bauxite Basin Niksic with five mines. The assessment shows LP to be an efficient mathematical modeling tool for production planning and for solving many other single-criteria optimization problems in mining engineering. After comparing the advantages and deficiencies of the deterministic and fuzzy LP models, the conclusion presents the benefits of the fuzzy LP model, while also stating that seeking the optimal production plan requires an overall analysis encompassing both LP modeling approaches.
Decision and function problems based on boson sampling
NASA Astrophysics Data System (ADS)
Nikolopoulos, Georgios M.; Brougham, Thomas
2016-07-01
Boson sampling is a mathematical problem that is strongly believed to be intractable for classical computers, whereas passive linear interferometers can produce samples efficiently. So far, the problem remains a computational curiosity, and the possible usefulness of boson-sampling devices is mainly limited to the proof of quantum supremacy. The purpose of this work is to investigate whether boson sampling can be used as a resource of decision and function problems that are computationally hard, and may thus have cryptographic applications. After the definition of a rather general theoretical framework for the design of such problems, we discuss their solution by means of a brute-force numerical approach, as well as by means of nonboson samplers. Moreover, we estimate the sample sizes required for their solution by passive linear interferometers, and it is shown that they are independent of the size of the Hilbert space.
NASA Astrophysics Data System (ADS)
Levin, Alan R.; Zhang, Deyin; Polizzi, Eric
2012-11-01
In a recent article Polizzi (2009) [15], the FEAST algorithm has been presented as a general purpose eigenvalue solver which is ideally suited for addressing the numerical challenges in electronic structure calculations. Here, FEAST is presented beyond the “black-box” solver as a fundamental modeling framework which can naturally address the original numerical complexity of the electronic structure problem as formulated by Slater in 1937 [3]. The non-linear eigenvalue problem arising from the muffin-tin decomposition of the real-space domain is first derived and then reformulated to be solved exactly within the FEAST framework. This new framework is presented as a fundamental and practical solution for performing both accurate and scalable electronic structure calculations, bypassing the various issues of using traditional approaches such as linearization and pseudopotential techniques. A finite element implementation of this FEAST framework along with simulation results for various molecular systems is also presented and discussed.
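As a rough sketch of the contour-integral filtering FEAST is built on (not the FEAST library itself; the function name and parameters are illustrative), one can approximate the spectral projector onto eigenvalues inside a circle with a quadrature rule over the resolvent, and combine it with subspace iteration and Rayleigh-Ritz:

```python
import numpy as np

def feast_like(A, center, radius, m0=6, n_quad=32, iters=4, tol=1e-6, seed=0):
    """FEAST-style sketch for a real symmetric A: quadrature over the circle
    |z - center| = radius approximates the spectral projector; filtered
    subspace iteration plus Rayleigh-Ritz extracts the interior eigenpairs."""
    n = A.shape[0]
    Y = np.random.default_rng(seed).standard_normal((n, m0))
    for _ in range(iters):
        Q = np.zeros((n, m0))
        for j in range(n_quad):
            theta = 2.0 * np.pi * (j + 0.5) / n_quad
            z = center + radius * np.exp(1j * theta)
            w = radius * np.exp(1j * theta) / n_quad  # (1/2*pi*i) dz weight
            Q += np.real(w * np.linalg.solve(z * np.eye(n) - A, Y))
        Y, _ = np.linalg.qr(Q)
    evals, evecs = np.linalg.eigh(Y.T @ A @ Y)
    X = Y @ evecs
    resid = np.linalg.norm(A @ X - X * evals, axis=0)
    keep = (np.abs(evals - center) < radius) & (resid < tol)
    return evals[keep], X[:, keep]
```

The residual check discards spurious Ritz values from the unconverged part of the subspace, a simplified stand-in for FEAST's convergence criteria.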
Smoothed Residual Plots for Generalized Linear Models. Technical Report #450.
ERIC Educational Resources Information Center
Brant, Rollin
Methods for examining the viability of assumptions underlying generalized linear models are considered. By appealing to the likelihood, a natural generalization of the raw residual plot for normal theory models is derived and is applied to investigating potential misspecification of the linear predictor. A smooth version of the plot is also…
NASA Astrophysics Data System (ADS)
Schaa, R.; Gross, L.; du Plessis, J.
2016-04-01
We present a general finite-element solver, escript, tailored to solve geophysical forward and inverse modeling problems in terms of partial differential equations (PDEs) with suitable boundary conditions. Escript’s abstract interface allows geoscientists to focus on solving the actual problem without being experts in numerical modeling. General-purpose finite element solvers have found wide use especially in engineering fields and find increasing application in the geophysical disciplines as these offer a single interface to tackle different geophysical problems. These solvers are useful for data interpretation and for research, but can also be a useful tool in educational settings. This paper serves as an introduction into PDE-based modeling with escript where we demonstrate in detail how escript is used to solve two different forward modeling problems from applied geophysics (3D DC resistivity and 2D magnetotellurics). Based on these two different cases, other geophysical modeling work can easily be realized. The escript package is implemented as a Python library and allows the solution of coupled, linear or non-linear, time-dependent PDEs. Parallel execution for both shared and distributed memory architectures is supported and can be used without modifications to the scripts.
Hyperspectral processing in graphical processing units
NASA Astrophysics Data System (ADS)
Winter, Michael E.; Winter, Edwin M.
2011-06-01
With the advent of the commercial 3D video card in the mid 1990s, we have seen an order of magnitude performance increase with each generation of new video cards. While these cards were designed primarily for visualization and video games, it became apparent after a short while that they could be used for scientific purposes. These Graphical Processing Units (GPUs) are rapidly being incorporated into data processing tasks usually reserved for general purpose computers. It has been found that many image processing problems scale well to modern GPU systems. We have implemented four popular hyperspectral processing algorithms (N-FINDR, linear unmixing, Principal Components, and the RX anomaly detection algorithm). These algorithms show an across the board speedup of at least a factor of 10, with some special cases showing extreme speedups of a hundred times or more.
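Of the four algorithms named, linear unmixing is the easiest to sketch: each pixel spectrum is modeled as a nonnegative combination of endmember spectra. A minimal per-pixel version using nonnegative least squares (CPU only; the paper's GPU kernels are not reproduced here):

```python
import numpy as np
from scipy.optimize import nnls

def unmix(cube, endmembers):
    """Linear spectral unmixing: solve pixel ~= E @ a with a >= 0 per pixel.
    cube: (rows, cols, bands) hyperspectral image;
    endmembers: (bands, n_endmembers) spectral library E."""
    rows, cols, bands = cube.shape
    n_end = endmembers.shape[1]
    abundances = np.zeros((rows, cols, n_end))
    for i in range(rows):
        for j in range(cols):
            abundances[i, j], _ = nnls(endmembers, cube[i, j])
    return abundances
```

The per-pixel independence of this loop is exactly why the problem maps so well onto GPUs: each pixel's small least-squares solve can run on its own thread.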
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Huiqiang; Wu, Xizeng, E-mail: xwu@uabmc.edu, E-mail: tqxiao@sinap.ac.cn; Xiao, Tiqiao, E-mail: xwu@uabmc.edu, E-mail: tqxiao@sinap.ac.cn
Purpose: Propagation-based phase-contrast CT (PPCT) utilizes highly sensitive phase-contrast technology applied to x-ray microtomography. Performing phase retrieval on the acquired angular projections can enhance image contrast and enable quantitative imaging. In this work, the authors demonstrate the validity and advantages of a novel technique for high-resolution PPCT by using the generalized phase-attenuation duality (PAD) method of phase retrieval. Methods: A high-resolution angular projection data set of a fish head specimen was acquired with a monochromatic 60-keV x-ray beam. In one approach, the projection data were directly used for tomographic reconstruction. In two other approaches, the projection data were preprocessed by phase retrieval based on either the linearized PAD method or the generalized PAD method. The reconstructed images from all three approaches were then compared in terms of tissue contrast-to-noise ratio and spatial resolution. Results: The authors’ experimental results demonstrated the validity of the PPCT technique based on the generalized PAD-based method. In addition, the results show that the authors’ technique is superior to the direct PPCT technique as well as the linearized PAD-based PPCT technique in terms of their relative capabilities for tissue discrimination and characterization. Conclusions: This novel PPCT technique demonstrates great potential for biomedical imaging, especially for applications that require high spatial resolution and limited radiation exposure.
Hierarchies of Manakov-Santini Type by Means of Rota-Baxter and Other Identities
NASA Astrophysics Data System (ADS)
Szablikowski, Błażej
2016-02-01
The Lax-Sato approach to the hierarchies of Manakov-Santini type is formalized in order to extend it to a more general class of integrable systems. For this purpose some linear operators are introduced, which must satisfy some integrability conditions, one of them is the Rota-Baxter identity. The theory is illustrated by means of the algebra of Laurent series, the related hierarchies are classified and examples, also new, of Manakov-Santini type systems are constructed, including those that are related to the dispersionless modified Kadomtsev-Petviashvili equation and so called dispersionless r-th systems.
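The Rota-Baxter identity mentioned here can be checked concretely on the algebra of Laurent polynomials, where projection onto nonnegative powers is a standard Rota-Baxter operator (of weight -1 in the convention used below; sign and weight conventions vary in the literature, and the abstract does not fix one):

```python
from collections import defaultdict
from itertools import product
import random

# Laurent polynomials represented as {exponent: coefficient} dicts.
def mul(a, b):
    out = defaultdict(int)
    for (i, x), (j, y) in product(a.items(), b.items()):
        out[i + j] += x * y
    return dict(out)

def add(*ps):
    out = defaultdict(int)
    for p in ps:
        for i, x in p.items():
            out[i] += x
    return {i: x for i, x in out.items() if x != 0}

def neg(p):
    return {i: -x for i, x in p.items()}

def R(p):
    """Projection onto nonnegative powers -- a Rota-Baxter operator."""
    return {i: x for i, x in p.items() if i >= 0 and x != 0}

def rota_baxter_holds(a, b):
    # Weight -1 identity: R(a)R(b) = R(R(a)b + aR(b) - ab)
    lhs = mul(R(a), R(b))
    rhs = R(add(mul(R(a), b), mul(a, R(b)), neg(mul(a, b))))
    return add(lhs, neg(rhs)) == {}
```

The identity holds here because both the nonnegative and the strictly negative parts are subalgebras, which is the structural fact such splittings of Laurent series exploit.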
32 bit digital optical computer - A hardware update
NASA Technical Reports Server (NTRS)
Guilfoyle, Peter S.; Carter, James A., III; Stone, Richard V.; Pape, Dennis R.
1990-01-01
Such state-of-the-art devices as multielement linear laser diode arrays, multichannel acoustooptic modulators, optical relays, and avalanche photodiode arrays, are presently applied to the implementation of a 32-bit supercomputer's general-purpose optical central processing architecture. Shannon's theorem, Morozov's control operator method (in conjunction with combinatorial arithmetic), and DeMorgan's law have been used to design an architecture whose 100 MHz clock renders it fully competitive with emerging planar-semiconductor technology. Attention is given to the architecture's multichannel Bragg cells, thermal design and RF crosstalk considerations, and the first and second anamorphic relay legs.
MO-F-16A-02: Simulation of a Medical Linear Accelerator for Teaching Purposes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carlone, M; Lamey, M; Anderson, R
Purpose: Detailed functioning of linear accelerator physics is well known. Less well developed is the basic understanding of how the adjustment of the linear accelerator's electrical components affects the resulting radiation beam. Other than the text by Karzmark, there is very little literature devoted to the practical understanding of linear accelerator functionality targeted at the radiotherapy clinic level. The purpose of this work is to describe a simulation environment for medical linear accelerators, intended for teaching linear accelerator physics. Methods: Varian-type linacs were simulated. Klystron saturation and peak output were modelled analytically. The energy gain of an electron beam was modelled using load-line expressions. The bending magnet was assumed to be a perfect solenoid whose pass-through energy varied linearly with solenoid current. The dose rate calculated at depth in water was assumed to be a simple function of the target's beam current. The flattening filter was modelled as an attenuator with conical shape, and the time-averaged dose rate at a depth in water was determined by calculating kerma. Results: Fifteen analytical models were combined into a single model called SIMAC. Performance was verified systematically by adjusting typical linac control parameters. Increasing klystron pulse voltage increased dose rate to a peak, which then decreased as the beam energy was further increased due to the fixed pass-through energy of the bending magnet. Increasing accelerator beam current led to a higher dose per pulse. However, the energy of the electron beam decreased due to beam loading, and so the dose rate eventually reached a maximum and then decreased as beam current was further increased. Conclusion: SIMAC can realistically simulate the functionality of a linear accelerator. It is expected to have value as a teaching tool for both medical physicists and linear accelerator service personnel.
An empirical model of diagnostic x-ray attenuation under narrow-beam geometry
Mathieu, Kelsey B.; Kappadath, S. Cheenu; White, R. Allen; Atkinson, E. Neely; Cody, Dianna D.
2011-01-01
Purpose: The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. Methods: An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49–33.03 mm Al on a computed tomography (CT) scanner, 0.09–1.93 mm Al on two mammography systems, and 0.1–0.45 mm Cu and 0.49–14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. Results: The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R² > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. Conclusions: The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry). PMID:21928626
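The traditional semilogarithmic interpolation the authors compare against is easy to state; the paper's Lambert W interpolation formula is not reproduced in the abstract, so the sketch below shows only the baseline method:

```python
import math

def hvl_semilog(x1, t1, x2, t2, level=0.5):
    """Estimate the attenuator thickness giving transmission `level`
    (0.5 -> HVL, 0.25 -> QVL) by semilogarithmic (exponential)
    interpolation between two narrow-beam transmission measurements
    (x1, t1) and (x2, t2) that bracket it."""
    b = (math.log(t2) - math.log(t1)) / (x2 - x1)   # local slope of ln T
    return x1 + (math.log(level) - math.log(t1)) / b
```

For a truly monoenergetic beam, T(x) = exp(-mu*x) and this interpolation is exact; for polyenergetic beams with beam hardening it is only approximate, which is the gap the two-point Lambert W interpolation is designed to close.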
Koom, Woong Sub; Choi, Mi Yeon; Lee, Jeongshim; Park, Eun Jung; Kim, Ju Hye; Kim, Sun-Hyun; Kim, Yong Bae
2016-01-01
Purpose: The purpose of this study was to evaluate the efficacy of art therapy to control fatigue in cancer patients during the course of radiotherapy and its impact on quality of life (QoL). Materials and Methods: Fifty cancer patients receiving radiotherapy received weekly art therapy sessions using famous painting appreciation. Fatigue and QoL were assessed using the Brief Fatigue Inventory (BFI) Scale and the Functional Assessment of Chronic Illness Therapy-Fatigue (FACIT-F) at baseline before starting radiotherapy, every week for 4 weeks during radiotherapy, and at the end of radiotherapy. Mean changes of scores over time were analyzed using a generalized linear mixed model. Results: Of the 50 patients, 34 (68%) participated in 4 sessions of art therapy. Generalized linear mixed models testing for the effect of time on mean score changes showed no significant changes in scores from baseline for the BFI and FACIT-F. The mean BFI score and FACIT-F total score changed from 3.1 to 2.7 and from 110.7 to 109.2, respectively. Art therapy based on the appreciation of famous paintings led to increases in self-esteem by increasing self-realization and forming social relationships. Conclusion: Fatigue and QoL in cancer patients receiving art therapy did not deteriorate during the course of radiotherapy. Despite the single-arm pilot design and small number of participants, this study provides a strong initial demonstration that art therapy based on the appreciation of famous paintings is worthy of further study for fatigue and QoL improvement. Further, it can play an important role in routine practice in cancer patients during radiotherapy. PMID:27306778
A comparison of bilingual education and generalist teachers' approaches to scientific biliteracy
NASA Astrophysics Data System (ADS)
Garza, Esther
The purpose of this study was to determine if educators were capitalizing on bilingual learners' use of their biliterate abilities to acquire scientific meaning and discourse that would formulate a scientific biliterate identity. Mixed methods were used to explore teachers' use of biliteracy and Funds of Knowledge (Moll, L., Amanti, C., Neff, D., & Gonzalez, N., 1992; Gonzales, Moll, & Amanti, 2005) from the students' Latino heritage while conducting science inquiry. The research study explored four constructs that conceptualized scientific biliteracy: science literacy, science biliteracy, reading comprehension strategies, and students' cultural backgrounds. A total of 156 4th-5th grade bilingual and general education teachers in South Texas were surveyed using the Teacher Scientific Biliteracy Inventory (TSBI), and five teachers' science lessons were observed. Qualitative findings revealed that a variety of scientific biliteracy instructional strategies were frequently used in both bilingual and general education classrooms. The language used to deliver this instruction varied. A General Linear Model revealed that classroom assignment, bilingual or general education, had a significant effect on a teacher's instructional approach to employ scientific biliteracy. A simple linear regression found that the TSBI accounted for 17% of the variance on 4th grade reading benchmarks. Mixed methods results indicated that teachers were utilizing scientific biliteracy strategies in English, Spanish, or both languages. Household items and science experimentation at home were encouraged by teachers to incorporate the students' cultural backgrounds. Finally, science inquiry was conducted through a universal approach to science learning versus a multicultural approach to science learning.
PAN AIR modeling studies. [higher order panel method for aircraft design
NASA Technical Reports Server (NTRS)
Towne, M. C.; Strande, S. M.; Erickson, L. L.; Kroo, I. M.; Enomoto, F. Y.; Carmichael, R. L.; Mcpherson, K. F.
1983-01-01
PAN AIR is a computer program that predicts subsonic or supersonic linear potential flow about arbitrary configurations. The code's versatility and generality afford numerous possibilities for modeling flow problems. Although this generality provides great flexibility, it also means that studies are required to establish the dos and don'ts of modeling. The purpose of this paper is to describe and evaluate a variety of methods for modeling flows with PAN AIR. The areas discussed are effects of panel density, internal flow modeling, forebody modeling in subsonic flow, propeller slipstream modeling, effect of wake length, wing-tail-wake interaction, effect of trailing-edge paneling on the Kutta condition, well- and ill-posed boundary-value problems, and induced-drag calculations. These nine topics address problems that are of practical interest to the users of PAN AIR.
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.
Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi
2017-12-01
We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.
Nasari, Masoud M; Szyszkowicz, Mieczysław; Chen, Hong; Crouse, Daniel; Turner, Michelle C; Jerrett, Michael; Pope, C Arden; Hubbell, Bryan; Fann, Neal; Cohen, Aaron; Gapstur, Susan M; Diver, W Ryan; Stieb, David; Forouzanfar, Mohammad H; Kim, Sun-Young; Olives, Casey; Krewski, Daniel; Burnett, Richard T
2016-01-01
The effectiveness of regulatory actions designed to improve air quality is often assessed by predicting changes in public health resulting from their implementation. Risk of premature mortality from long-term exposure to ambient air pollution is the single most important contributor to such assessments and is estimated from observational studies generally assuming a log-linear, no-threshold association between ambient concentrations and death. There has been only limited assessment of this assumption in part because of a lack of methods to estimate the shape of the exposure-response function in very large study populations. In this paper, we propose a new class of variable coefficient risk functions capable of capturing a variety of potentially non-linear associations which are suitable for health impact assessment. We construct the class by defining transformations of concentration as the product of either a linear or log-linear function of concentration multiplied by a logistic weighting function. These risk functions can be estimated using hazard regression survival models with currently available computer software and can accommodate large population-based cohorts which are increasingly being used for this purpose. We illustrate our modeling approach with two large cohort studies of long-term concentrations of ambient air pollution and mortality: the American Cancer Society Cancer Prevention Study II (CPS II) cohort and the Canadian Census Health and Environment Cohort (CanCHEC). We then estimate the number of deaths attributable to changes in fine particulate matter concentrations over the 2000 to 2010 time period in both Canada and the USA using both linear and non-linear hazard function models.
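The transformation class described, a linear or log-linear function of concentration multiplied by a logistic weighting function, can be written down directly. The parameter values below are placeholders for illustration, not the fitted values from the CPS II or CanCHEC analyses:

```python
import math

def concentration_transform(z, mu, tau, loglinear=True):
    """f(z) = g(z) * w(z): g is a linear or log-linear function of
    concentration z, and w is a logistic weighting function centered at
    mu with scale tau (both parameter values here are hypothetical)."""
    g = math.log(1.0 + z) if loglinear else z
    w = 1.0 / (1.0 + math.exp(-(z - mu) / tau))
    return g * w

def hazard_ratio(z, theta, mu, tau):
    """Relative risk at concentration z versus zero, given a log-hazard
    coefficient theta on the transformed concentration."""
    return math.exp(theta * concentration_transform(z, mu, tau))
```

Varying mu and tau lets the fitted curve range from near-linear to strongly sublinear or threshold-like shapes, which is what makes the class suitable for testing the log-linear no-threshold assumption.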
Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes
Sánchez-Pérez, Inma; Ibern, Pere; Coderch, Jordi; Inoriza, José María
2016-01-01
Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia-Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were adjusted and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals. PMID:28316542
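The choice between OLS on log costs and a GLM with log link turns on retransformation: the exponential of the mean log cost underestimates the mean cost. A sketch of Duan's smearing correction for the log-OLS route (illustrative only, not the study's code):

```python
import numpy as np

def logols_predictions(log_y, X, X_new):
    """Fit OLS on log(cost), then retransform to the cost scale with
    Duan's nonparametric smearing factor; the naive exp() prediction
    is returned alongside for comparison."""
    beta, *_ = np.linalg.lstsq(X, log_y, rcond=None)
    resid = log_y - X @ beta
    smearing = np.mean(np.exp(resid))   # Duan's smearing estimator
    naive = np.exp(X_new @ beta)
    return naive * smearing, naive
```

A log-link GLM avoids this retransformation step entirely by modeling the mean cost directly, which is one axis of the Manning-Mullahy model-selection algorithm the study applies.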
Baseline-Subtraction-Free (BSF) Damage-Scattered Wave Extraction for Stiffened Isotropic Plates
NASA Technical Reports Server (NTRS)
He, Jiaze; Leser, Patrick E.; Leser, William P.
2017-01-01
Lamb waves enable long distance inspection of structures for health monitoring purposes. However, this capability is diminished when applied to complex structures where damage-scattered waves are often buried by scattering from various structural components or boundaries in the time-space domain. Here, a baseline-subtraction-free (BSF) inspection concept based on the Radon transform (RT) is proposed to identify and separate these scattered waves from those scattered by damage. The received time-space domain signals can be converted into the Radon domain, in which the scattered signals from structural components are suppressed into relatively small regions such that damage-scattered signals can be identified and extracted. In this study, a piezoelectric wafer and a linear scan via laser Doppler vibrometer (LDV) were used to excite and acquire the Lamb-wave signals in an aluminum plate with multiple stiffeners. Linear and inverse linear Radon transform algorithms were applied to the direct measurements. The results demonstrate the effectiveness of the Radon transform as a reliable extraction tool for damage-scattered waves in a stiffened aluminum plate and also suggest the possibility of generalizing this technique for application to a wide variety of complex, large-area structures.
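A minimal linear Radon (slant-stack) transform shows the mechanism: a linear event t = tau0 + p0*x in the time-space gather stacks coherently into a single peak at (tau0, p0), while events with other slopes smear out. This sketch is illustrative and is not the paper's implementation or LDV data:

```python
import numpy as np

def slant_stack(d, dt, dx, slownesses):
    """Linear Radon (tau-p) transform of a time-space gather d[t, x]:
    for each trial slowness p, shift each trace by p*x and stack.
    A linear event t = tau0 + p0*x maps to a peak near (tau0, p0)."""
    nt, nx = d.shape
    x = np.arange(nx) * dx
    out = np.zeros((nt, len(slownesses)))
    for k, p in enumerate(slownesses):
        for ix in range(nx):
            shift = int(round(p * x[ix] / dt))   # samples of moveout
            src = d[:, ix]
            if shift >= 0:
                out[:nt - shift, k] += src[shift:]
            else:
                out[-shift:, k] += src[:nt + shift]
    return out
```

In the Radon domain the stiffener reflections, which are approximately linear events, compress into small regions that can be muted, which is the suppression step the baseline-subtraction-free scheme relies on.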
Estimation of group means when adjusting for covariates in generalized linear models.
Qu, Yongming; Luo, Junxiang
2015-01-01
Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
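The discrepancy this abstract targets is easy to demonstrate with a logistic model: the response evaluated at the mean covariate differs from the mean response over the population. The coefficients and covariate distribution below are made up for illustration:

```python
import numpy as np

def expit(t):
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
x = rng.normal(1.0, 2.0, 100000)      # a baseline covariate
beta0, beta1 = -1.0, 1.5              # hypothetical logistic coefficients

# "Model-based" group mean at the mean covariate (what many packages report):
at_mean_x = expit(beta0 + beta1 * x.mean())

# Population-average group mean (mean of per-subject predicted responses):
mean_response = expit(beta0 + beta1 * x).mean()
```

Because expit is nonlinear, the two quantities differ (for an identity link they would coincide), which is the bias the proposed consistent group-mean estimator corrects.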
Recursive partitioned inversion of large (1500 x 1500) symmetric matrices
NASA Technical Reports Server (NTRS)
Putney, B. H.; Brownd, J. E.; Gomez, R. A.
1976-01-01
A recursive algorithm was designed to invert large, dense, symmetric, positive definite matrices using small amounts of computer core, i.e., a small fraction of the core needed to store the complete matrix. The described algorithm is a generalized Gaussian elimination technique. Other algorithms are also discussed for the Cholesky decomposition and step inversion techniques. The purpose of the inversion algorithm is to solve large linear systems of normal equations generated by working geodetic problems. The algorithm was incorporated into a computer program called SOLVE. In the past the SOLVE program has been used in obtaining solutions published as the Goddard earth models.
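The heart of such partitioned inversion is the 2x2 block formula built on the Schur complement. A recursive sketch of that idea (in Python rather than the era's FORTRAN, and not the SOLVE program itself):

```python
import numpy as np

def partitioned_inverse(A, block=64):
    """Invert a symmetric positive definite matrix recursively via the
    2x2 block formula, using the Schur complement S = D - B^T A11^{-1} B.
    Only blocks need to be held at once, not the full inverse workspace."""
    n = A.shape[0]
    if n <= block:
        return np.linalg.inv(A)
    k = n // 2
    A11, B, D = A[:k, :k], A[:k, k:], A[k:, k:]
    A11_inv = partitioned_inverse(A11, block)
    S = D - B.T @ A11_inv @ B            # Schur complement (also SPD)
    S_inv = partitioned_inverse(S, block)
    T = A11_inv @ B
    top_left = A11_inv + T @ S_inv @ T.T
    top_right = -T @ S_inv
    return np.block([[top_left, top_right], [top_right.T, S_inv]])
```

Working one block at a time is what lets such algorithms invert matrices far larger than the available core, at the cost of extra passes over the data.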
Large calculation of the flow over a hypersonic vehicle using a GPU
NASA Astrophysics Data System (ADS)
Elsen, Erich; LeGresley, Patrick; Darve, Eric
2008-12-01
Graphics processing units are capable of impressive computing performance, up to 518 Gflops peak. Various groups have been using these processors for general purpose computing; most efforts have focused on demonstrating relatively basic calculations, e.g. numerical linear algebra, or physical simulations for visualization purposes with limited accuracy. This paper describes the simulation of a hypersonic vehicle configuration with detailed geometry and accurate boundary conditions using the compressible Euler equations. To the authors' knowledge, this is the most sophisticated calculation of this kind in terms of complexity of the geometry, the physical model, the numerical methods employed, and the accuracy of the solution. The Navier-Stokes Stanford University Solver (NSSUS) was used for this purpose. NSSUS is a multi-block structured code with a provably stable and accurate numerical discretization which uses a vertex-based finite-difference method. A multi-grid scheme is used to accelerate the solution of the system. Based on a comparison of the Intel Core 2 Duo and NVIDIA 8800GTX, speed-ups of over 40× were demonstrated for simple test geometries and 20× for complex geometries.
Chow, Sy-Miin; Bendezú, Jason J.; Cole, Pamela M.; Ram, Nilam
2016-01-01
Several approaches currently exist for estimating the derivatives of observed data for model exploration purposes, including functional data analysis (FDA), generalized local linear approximation (GLLA), and generalized orthogonal local derivative approximation (GOLD). These derivative estimation procedures can be used in a two-stage process to fit mixed effects ordinary differential equation (ODE) models. While the performance and utility of these routines for estimating linear ODEs have been established, they have not yet been evaluated in the context of nonlinear ODEs with mixed effects. We compared properties of the GLLA and GOLD to an FDA-based two-stage approach denoted herein as functional ordinary differential equation with mixed effects (FODEmixed) in a Monte Carlo study using a nonlinear coupled oscillators model with mixed effects. Simulation results showed that overall, the FODEmixed outperformed both the GLLA and GOLD across all the embedding dimensions considered, but a novel use of a fourth-order GLLA approach combined with very high embedding dimensions yielded estimation results that almost paralleled those from the FODEmixed. We discuss the strengths and limitations of each approach and demonstrate how output from each stage of FODEmixed may be used to inform empirical modeling of young children’s self-regulation. PMID:27391255
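The GLLA idea can be sketched compactly, reconstructed from its published description (the embedding dimension, lag, and basis handling below are illustrative choices, not the authors' exact code): delay-embed the series and regress each embedded row on a local polynomial basis, whose coefficients estimate the derivatives.

```python
import math
import numpy as np

def glla_derivatives(x, embed=5, tau=1, dt=1.0, max_order=2):
    """Generalized local linear approximation (GLLA) sketch: estimate
    derivatives of a time series by projecting time-delay-embedded
    windows onto a local polynomial (Taylor) basis."""
    n = len(x) - (embed - 1) * tau
    # Each row of X is one embedded window of the series.
    X = np.column_stack([x[i * tau: i * tau + n] for i in range(embed)])
    offsets = (np.arange(embed) - (embed - 1) / 2.0) * tau * dt
    # Taylor basis: column k is offsets**k / k!, so the regression
    # coefficient on column k is the k-th derivative at the window center.
    L = np.column_stack([offsets ** k / math.factorial(k)
                         for k in range(max_order + 1)])
    W = L @ np.linalg.inv(L.T @ L)
    return X @ W   # column k holds the k-th derivative estimates
```

For a polynomial signal of degree at most max_order the estimates are exact, which gives a quick sanity check before applying the routine to noisy behavioral data.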
Social networking policies in nursing education.
Frazier, Blake; Culley, Joan M; Hein, Laura C; Williams, Amber; Tavakoli, Abbas S
2014-03-01
Social networking use has increased exponentially in the past few years. A literature review related to social networking and nursing revealed a research gap between nursing practice and education. Although information was available on the appropriate use of social networking sites, there was limited research on the use of social networking policies within nursing education. The purpose of this study was to identify the current use of social media by faculty and students, and the need for policies, within nursing education at one institution. A survey was developed and administered to nursing students (n = 273) and nursing faculty (n = 33). Inferential statistics included χ², the Fisher exact test, t tests, and the General Linear Model. Cronbach's α was used to assess the internal consistency of the social media scales. The χ² results indicated associations between group and several social media items. Results of t tests indicated significant differences between students and faculty on the average ratings of "policies are good" (P = .0127), "policies and discipline" (P = .0315), and "policy at the study school" (P = .0013). General Linear Model analyses revealed significant differences by class level for "friend" a patient with a bond, unprofessional posts, policy, and nursing. Results showed that students and faculty supported the development of a social networking policy.
Des Roches, Carrie A.; Vallila-Rohter, Sofia; Villard, Sarah; Tripodis, Yorghos; Caplan, David
2016-01-01
Purpose The current study examined treatment outcomes and generalization patterns following 2 sentence comprehension therapies: object manipulation (OM) and sentence-to-picture matching (SPM). Findings were interpreted within the framework of specific deficit and resource reduction accounts, which were extended in order to examine the nature of generalization following treatment of sentence comprehension deficits in aphasia. Method Forty-eight individuals with aphasia were enrolled in 1 of 8 potential treatment assignments that varied by task (OM, SPM), complexity of trained sentences (complex, simple), and syntactic movement (noun phrase, wh-movement). Comprehension of trained and untrained sentences was probed before and after treatment using stimuli that differed from the treatment stimuli. Results Linear mixed-model analyses demonstrated that, although both OM and SPM treatments were effective, OM resulted in greater improvement than SPM. Analyses of covariance revealed main effects of complexity in generalization; generalization from complex to simple linguistically related sentences was observed both across task and across movement. Conclusions Results are consistent with the complexity account of treatment efficacy, as generalization effects were consistently observed from complex to simpler structures. Furthermore, results provide support for resource reduction accounts that suggest that generalization can extend across linguistic boundaries, such as across movement type. PMID:27997950
ERIC Educational Resources Information Center
Tendhar, Chosang; Paretti, Marie C.; Jones, Brett D.
2017-01-01
This study had three purposes, and four hypotheses were tested. The three purposes were: (1) to use hierarchical linear modeling (HLM) to investigate whether students' perceptions of their engineering career intentions changed over time; (2) to use HLM to test the effects of gender, engineering identification (the degree to which an individual values a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yavari, M., E-mail: yavari@iaukashan.ac.ir
2016-06-15
We generalize the results of Nesterenko [13, 14] and Gogilidze and Surovtsev [15] for DNA structures. Using the generalized Hamiltonian formalism, we investigate solutions of the equilibrium shape equations for the linear free energy model.
Koda, Shin-ichi
2015-05-28
Existing studies have shown that, in special cases, some linear dynamical systems defined on a dendritic network are equivalent to systems defined on a set of one-dimensional networks; this transformation to the simple picture, which we call linear chain (LC) decomposition, has a significant advantage in understanding properties of dendrimers. In this paper, we expand the class of LC decomposable systems with some generalizations. In addition, we propose two general sufficient conditions for LC decomposability, with a procedure to systematically realize the LC decomposition. Some examples of LC decomposable linear dynamical systems are also presented with their graphs. The generalization of the LC decomposition is implemented in the following three aspects: (i) the type of linear operators; (ii) the shape of dendritic networks on which linear operators are defined; and (iii) the type of symmetry operations representing the symmetry of the systems. In generalization (iii), symmetry groups that represent the symmetry of dendritic systems are defined. The LC decomposition is realized by changing the basis of a linear operator defined on a dendritic network into bases of irreducible representations of the symmetry group. These results make it easier to utilize the LC decomposition in various cases, which may lead to a further understanding of the relation between the structure and functions of dendrimers in future studies.
Measured and predicted structural behavior of the HiMAT tailored composite wing
NASA Technical Reports Server (NTRS)
Nelson, Lawrence H.
1987-01-01
A series of load tests was conducted on the HiMAT tailored composite wing. Coupon tests were also run on a series of unbalanced laminates, including the ply configuration of the wing. The purpose was to compare the measured and predicted behavior of unbalanced laminates and, in the case of the wing, to compare the behavior of the full-scale structure with that of the coupons. Both linear and nonlinear finite element (NASTRAN) analyses were carried out on the wing, and both linear and nonlinear point-stress analyses were performed on the coupons. All test articles were instrumented with strain gages, and wing deflections were measured. The leading and trailing edges were found to have no effect on the response of the wing to applied loads. A decrease in the stiffness of the wing box was evident over the 27-test program. The measured load-strain behavior of the wing was found to be linear, in contrast to coupon tests of the same laminate, which were nonlinear. A linear NASTRAN analysis of the wing generally correlated more favorably with measurements than did a nonlinear analysis. An examination of the predicted deflections in the wing root region revealed an anomalous behavior of the structural model that cannot be explained. Both hysteresis and creep appear to be less significant in the wing tests than in the corresponding laminate coupon tests.
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
Classical and sequential limit analysis revisited
NASA Astrophysics Data System (ADS)
Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi
2018-04-01
Classical limit analysis applies to ideal plastic materials, within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity (in the absence of hardening and within a linearized geometrical framework), sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity, although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.
Failure of the lumbar pedicles under bending loading - biomed 2010.
Arregui-Dalmases, Carlos; Ash, Joseph H; Del Pozo, Eduardo; Kerrigan, Jason R; Crandall, Jeff
2010-01-01
The purpose of this study was to investigate the magnitude of bending moment that results in fracture of the pedicles when lumbar vertebrae are loaded in four-point bending. Nine human second lumbar vertebrae (L2) were harvested from donors aged 59-75 years. The specimens were potted and then subjected to quasi-static sagittal-plane four-point bending, which allowed for a constant bending moment applied over a 3.8 cm span centered on the vertebral pedicles until fracture. The failure bending moment calculated for the pedicles varied widely (30.7 ± 12.3 Nm) and was poorly correlated with subject age (y = -0.91x + 91.5, R² = -0.27). With increasing displacement, the bending moment applied to the pedicles increased, first linearly, followed by a non-linear portion, prior to specimen fracture. In general, the specimens failed at the interface of the pedicles and vertebral bodies, but failures were observed elsewhere as well. These data provide sufficient response and boundary condition information for finite element modeling and model validation.
Explicit methods in extended phase space for inseparable Hamiltonian problems
NASA Astrophysics Data System (ADS)
Pihajoki, Pauli
2015-03-01
We present a method for explicit leapfrog integration of inseparable Hamiltonian systems by means of an extended phase space. A suitably defined new Hamiltonian on the extended phase space leads to equations of motion that can be numerically integrated by standard symplectic leapfrog (splitting) methods. When the leapfrog is combined with coordinate mixing transformations, the resulting algorithm shows good long term stability and error behaviour. We extend the method to non-Hamiltonian problems as well, and investigate optimal methods of projecting the extended phase space back to original dimension. Finally, we apply the methods to a Hamiltonian problem of geodesics in a curved space, and a non-Hamiltonian problem of a forced non-linear oscillator. We compare the performance of the methods to a general purpose differential equation solver LSODE, and the implicit midpoint method, a symplectic one-step method. We find the extended phase space methods to compare favorably to both for the Hamiltonian problem, and to the implicit midpoint method in the case of the non-linear oscillator.
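The extended phase space construction described in the abstract can be sketched compactly: the state (q, p) is doubled to (q, p, x, y), a new Hamiltonian H~(q, p, x, y) = H(q, y) + H(x, p) is formed, and its two terms, each exactly integrable, are composed in a leapfrog splitting. A minimal sketch; the test Hamiltonian, step size, and simple averaging projection are illustrative choices, not taken from the paper:

```python
def extended_leapfrog(dHdq, dHdp, q0, p0, dt, nsteps):
    """Explicit leapfrog for an inseparable Hamiltonian H(q, p) via an
    extended phase space: the state is doubled to (q, p, x, y) and the
    new Hamiltonian H~ = H(q, y) + H(x, p) is split into two exactly
    integrable pieces, composed as A(dt/2) B(dt) A(dt/2)."""
    q, p = float(q0), float(p0)
    x, y = q, p                        # second copy starts equal to the first
    for _ in range(nsteps):
        # Half step of H_A = H(q, y): q and y are frozen, p and x advance.
        p -= 0.5 * dt * dHdq(q, y)
        x += 0.5 * dt * dHdp(q, y)
        # Full step of H_B = H(x, p): x and p are frozen, q and y advance.
        q += dt * dHdp(x, p)
        y -= dt * dHdq(x, p)
        # Second half step of H_A.
        p -= 0.5 * dt * dHdq(q, y)
        x += 0.5 * dt * dHdp(q, y)
        # (The paper's coordinate-mixing maps, omitted in this sketch,
        # keep the two copies from drifting apart over long runs.)
    return 0.5 * (q + x), 0.5 * (p + y)   # project back by averaging

# Inseparable test Hamiltonian H = (q^2 + 1)(p^2 + 1)/2 (illustrative).
H = lambda q, p: 0.5 * (q**2 + 1.0) * (p**2 + 1.0)
dHdq = lambda q, p: q * (p**2 + 1.0)
dHdp = lambda q, p: p * (q**2 + 1.0)

q1, p1 = extended_leapfrog(dHdq, dHdp, q0=1.0, p0=0.0, dt=1e-3, nsteps=5000)
energy_drift = abs(H(q1, p1) - H(1.0, 0.0))
```

Each sub-flow only ever reads the frozen half of the state, so every update is explicit even though H itself is inseparable; the bounded energy drift is the signature of the splitting's symplectic character in the extended space.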
NASA Technical Reports Server (NTRS)
Collins, R. J. (Principal Investigator); Mccown, F. P.; Stonis, L. P.; Petzel, G. J.; Everett, J. R.
1974-01-01
The author has identified the following significant results. ERTS-1 data give exploration geologists a new perspective for looking at the earth. The data are excellent for interpreting regional lithologic and structural relationships and quickly directing attention to areas of greatest exploration interest. Information derived from ERTS data useful for petroleum exploration includes: linear features, general lithologic distribution, identification of various anomalous features, some details of structures controlling hydrocarbon accumulation, overall structural relationships, and the regional context of the exploration province. Many anomalies (particularly geomorphic anomalies) correlate with known features of petroleum exploration interest. Linears interpreted from the imagery and checked in the field correlate with fractures. Bands 5 and 7 and color composite imagery acquired during the periods of maximum and minimum vegetation vigor are best for geologic interpretation. Preliminary analysis indicates that use of ERTS imagery can substantially reduce the cost of petroleum exploration in relatively unexplored areas.
A road map for multi-way calibration models.
Escandar, Graciela M; Olivieri, Alejandro C
2017-08-07
A large number of experimental applications of multi-way calibration are known, and a variety of chemometric models are available for the processing of multi-way data. While the main focus has been directed towards three-way data, due to the availability of various instrumental matrix measurements, a growing number of reports are being produced on higher-order signals of increasing complexity. The purpose of this review is to present a general scheme for selecting the appropriate data processing model, according to the properties exhibited by the multi-way data. In spite of the complexity of the multi-way instrumental measurements, simple criteria can be proposed for model selection, based on the presence and number of the so-called multi-linearity breaking modes (instrumental modes that break the low-rank multi-linearity of the multi-way arrays), and also on the existence of mutually dependent instrumental modes. Recent literature reports on multi-way calibration are reviewed, with emphasis on the models that were selected for data processing.
Relative multiplexing for minimising switching in linear-optical quantum computing
NASA Astrophysics Data System (ADS)
Gimeno-Segovia, Mercedes; Cable, Hugo; Mendoza, Gabriel J.; Shadbolt, Pete; Silverstone, Joshua W.; Carolan, Jacques; Thompson, Mark G.; O'Brien, Jeremy L.; Rudolph, Terry
2017-06-01
Many existing schemes for linear-optical quantum computing (LOQC) depend on multiplexing (MUX), which uses dynamic routing to enable near-deterministic gates and sources to be constructed using heralded, probabilistic primitives. MUXing accounts for the overwhelming majority of active switching demands in current LOQC architectures. In this manuscript we introduce relative multiplexing (RMUX), a general-purpose optimisation which can dramatically reduce the active switching requirements for MUX in LOQC, and thereby reduce hardware complexity and energy consumption, as well as relaxing demands on performance for various photonic components. We discuss the application of RMUX to the generation of entangled states from probabilistic single-photon sources, and argue that an order of magnitude improvement in the rate of generation of Bell states can be achieved. In addition, we apply RMUX to the proposal for percolation of a 3D cluster state by Gimeno-Segovia et al (2015 Phys. Rev. Lett. 115 020502), and we find that RMUX allows a 2.4× increase in loss tolerance for this architecture.
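The scale of the switching problem addressed here can be seen from the basic multiplexing arithmetic: a heralded source succeeding with probability p per attempt needs many parallel copies before a MUX can deliver a photon near-deterministically. A small illustrative calculation (not from the paper; RMUX reduces the switching this implies, not the success probability itself):

```python
def mux_success(p, n):
    """Probability that at least one of n independent heralded sources
    fires, so a multiplexer (MUX) can route a photon to the output.
    Illustrates the resource scaling behind MUX-based near-deterministic
    sources; this simple count says nothing about the active switching
    cost, which is what RMUX targets."""
    return 1.0 - (1.0 - p) ** n

# With p = 0.1 per attempt, 22 parallel sources are needed for >90% success.
needed = next(n for n in range(1, 200) if mux_success(0.1, n) > 0.9)
```

The required n grows like log(1 - target) / log(1 - p), so low-probability primitives translate directly into large switching networks, which is the hardware burden the abstract describes.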
Blood Flow Characterization According to Linear Wall Models of the Carotid Bifurcation
NASA Astrophysics Data System (ADS)
Williamson, Shobha; Rayz, Vitaliy; Berger, Stanley; Saloner, David
2004-11-01
Previous studies of the arterial wall include linearly isotropic, isotropic with residual stresses, and anisotropic models. This poses the question of how the results of each method differ when coupled with flow. Hence, the purpose of this study was to compare flow for these material models and subsequently determine if variations exist. Results show that displacement at the bifurcation and internal carotid bulb was noticeably larger in the orthotropic versus the isotropic model with subtle differences toward the inlet and outlets, which are fixed in space. In general, the orthotropic wall is further distended than the isotropic wall for the entire cycle. This apparent distention of the orthotropic wall clearly affects the flow. In diastole, the combination of slower flow and larger wall distention due to lumen pressure creates a sinuous velocity profile, particularly in the orthotropic model where the recirculation zone created displaces the core flow to a smaller area thereby increasing the velocity magnitudes nearly 60
Probabilistic boundary element method
NASA Technical Reports Server (NTRS)
Cruse, T. A.; Raveendra, S. T.
1989-01-01
The purpose of the Probabilistic Structural Analysis Method (PSAM) project is to develop structural analysis capabilities for the design analysis of advanced space propulsion system hardware. The boundary element method (BEM) is used as the basis of the Probabilistic Advanced Analysis Methods (PADAM) which is discussed. The probabilistic BEM code (PBEM) is used to obtain the structural response and sensitivity results to a set of random variables. As such, PBEM performs analogously to other structural analysis codes, such as finite element codes, in the PSAM system. For linear problems, unlike the finite element method (FEM), the BEM governing equations are written at the boundary of the body only; thus, the method eliminates the need to model the volume of the body. However, for general body force problems, a direct condensation of the governing equations to the boundary of the body is not possible, and therefore volume modeling is generally required.
Corresponding states law for a generalized Lennard-Jones potential.
Orea, P; Romero-Martínez, A; Basurto, E; Vargas, C A; Odriozola, G
2015-07-14
It was recently shown that vapor-liquid coexistence densities derived from Mie and Yukawa models collapse to define a single master curve when represented against the difference between the reduced second virial coefficient at the corresponding temperature and that at the critical point. In this work, we further test this proposal for another generalization of the Lennard-Jones pair potential. This is carried out for vapor-liquid coexistence densities, surface tension, and vapor pressure, along a temperature window set below the critical point. For this purpose, we perform molecular dynamics simulations by varying the potential softness parameter to produce from very short to intermediate attractive ranges. We observed all properties to collapse and yield master curves. Moreover, the vapor-liquid curve is found to share the exact shape of the Mie and attractive Yukawa. Furthermore, the surface tension and the logarithm of the vapor pressure are linear functions of this difference of reduced second virial coefficients.
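The reduced second virial coefficient that the master curves are plotted against follows from the standard statistical-mechanics integral B2(T) = -2π ∫ (e^(-u(r)/kT) - 1) r² dr. A sketch for a Mie (generalized Lennard-Jones) n-m potential in LJ reduced units; the quadrature grid, cutoffs, and temperatures are illustrative choices, not the paper's:

```python
import numpy as np

def b2_reduced(T, n=12, m=6):
    """Reduced second virial coefficient B2/sigma^3 for a Mie
    (generalized Lennard-Jones) n-m pair potential at reduced
    temperature T, from B2 = -2*pi * Int (exp(-u(r)/T) - 1) r^2 dr
    by trapezoidal quadrature."""
    # Mie prefactor that fixes the well depth at -1 in reduced units.
    C = (n / (n - m)) * (n / m) ** (m / (n - m))
    r = np.linspace(1e-3, 30.0, 200_000)    # r in units of sigma
    u = C * (r**(-n) - r**(-m))             # reduced pair potential
    f = (np.exp(-u / T) - 1.0) * r**2
    return float(-2.0 * np.pi * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

# B2* grows with temperature and changes sign near the Boyle point
# (T* ~ 3.42 for the 12-6 case); the master curves of the abstract plot
# coexistence properties against B2*(T) - B2*(Tc).
vals = [b2_reduced(T) for T in (1.0, 1.3, 2.0, 5.0)]
```

Because B2 depends on the potential only through this one integral, representing data against the difference of B2* values is what lets coexistence curves of different softness parameters collapse onto a single master curve.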
Modal forced vibration analysis of aerodynamically excited turbosystems
NASA Technical Reports Server (NTRS)
Elchuri, V.
1985-01-01
Theoretical aspects of a new capability to determine the vibratory response of turbosystems subjected to aerodynamic excitation are presented. Turbosystems such as advanced turbopropellers with highly swept blades, and axial-flow compressors and turbines can be analyzed using this capability. The capability has been developed and implemented in the April 1984 release of the general purpose finite element program NASTRAN. The dynamic response problem is addressed in terms of the normal modal coordinates of these tuned rotating cyclic structures. Both rigid and flexible hubs/disks are considered. Coriolis and centripetal accelerations, as well as differential stiffness effects are included. Generally non-uniform steady inflow fields and uniform flow fields arbitrarily inclined at small angles with respect to the axis of rotation of the turbosystem are considered sources of aerodynamic excitation. The spatial non-uniformities are considered to be small deviations from a principally uniform inflow. Subsonic and supersonic relative inflows are addressed, with provision for linearly interpolating transonic airloads.
Lie algebras and linear differential equations.
NASA Technical Reports Server (NTRS)
Brockett, R. W.; Rahimi, A.
1972-01-01
Certain symmetry properties possessed by the solutions of linear differential equations are examined. For this purpose, some basic ideas from the theory of finite dimensional linear systems are used together with the work of Wei and Norman on the use of Lie algebraic methods in differential equation theory.
Happell, Brenda; Byrne, Louise; Platania-Phung, Chris
2015-01-01
Recovery-oriented services are a goal for policy and practice in the Australian mental health service system. Evidence-based reform requires an instrument to measure knowledge of recovery concepts. The Recovery Knowledge Inventory (RKI) was designed for this purpose; however, its suitability and validity for student health professionals have not been evaluated. The purpose of the current article is to report the psychometric features of the RKI for measuring nursing students' views on recovery. The RKI, a self-report measure, consists of four scales: (I) Roles and Responsibilities, (II) Non-Linearity of the Recovery Process, (III) Roles of Self-Definition and Peers, and (IV) Expectations Regarding Recovery. Confirmatory and exploratory factor analyses of the baseline data (n = 167) were applied to assess validity and reliability. Exploratory factor analyses generally replicated the item structure suggested by the three main scales; however, more stringent analyses (confirmatory factor analysis) did not provide strong support for convergent validity. A refined RKI with 16 items had internal reliabilities of α = .75 for Roles and Responsibilities, α = .49 for Roles of Self-Definition and Peers, and α = .72 for Recovery as a Non-Linear Process. If the RKI is to be applied to nursing student populations, the conceptual underpinning of the instrument needs to be reworked, and new items should be generated to evaluate and improve scale validity and reliability.
Molenaar, Dylan; Tuerlinckx, Francis; van der Maas, Han L J
2015-01-01
A generalized linear modeling framework to the analysis of responses and response times is outlined. In this framework, referred to as bivariate generalized linear item response theory (B-GLIRT), separate generalized linear measurement models are specified for the responses and the response times that are subsequently linked by cross-relations. The cross-relations can take various forms. Here, we focus on cross-relations with a linear or interaction term for ability tests, and cross-relations with a curvilinear term for personality tests. In addition, we discuss how popular existing models from the psychometric literature are special cases in the B-GLIRT framework depending on restrictions in the cross-relation. This allows us to compare existing models conceptually and empirically. We discuss various extensions of the traditional models motivated by practical problems. We also illustrate the applicability of our approach using various real data examples, including data on personality and cognitive ability.
2011-01-01
Background Design of newly engineered microbial strains for biotechnological purposes would greatly benefit from the development of realistic mathematical models for the processes to be optimized. Such models can then be analyzed and, with the development and application of appropriate optimization techniques, one could identify the modifications that need to be made to the organism in order to achieve the desired biotechnological goal. As appropriate models to perform such an analysis are necessarily non-linear and typically non-convex, finding their global optimum is a challenging task. Canonical modeling techniques, such as Generalized Mass Action (GMA) models based on the power-law formalism, offer a possible solution to this problem because they have a mathematical structure that enables the development of specific algorithms for global optimization. Results Based on the GMA canonical representation, we have developed in previous works a highly efficient optimization algorithm and a set of related strategies for understanding the evolution of adaptive responses in cellular metabolism. Here, we explore the possibility of recasting kinetic non-linear models into an equivalent GMA model, so that global optimization on the recast GMA model can be performed. With this technique, optimization is greatly facilitated and the results are transposable to the original non-linear problem. This procedure is straightforward for a particular class of non-linear models known as Saturable and Cooperative (SC) models that extend the power-law formalism to deal with saturation and cooperativity. Conclusions Our results show that recasting non-linear kinetic models into GMA models is indeed an appropriate strategy that helps overcoming some of the numerical difficulties that arise during the global optimization task. PMID:21867520
Evaluation of a Nonlinear Finite Element Program - ABAQUS.
1983-03-15
... anisotropic properties.
* MATEXP - Linearly elastic thermal expansions with isotropic, orthotropic and anisotropic properties.
* MATELG - Linearly elastic materials for general sections (options available for beam and shell elements).
* MATEXG - Linearly elastic thermal expansions for general ...
* ... decomposition of a matrix
* Q-R algorithm
* Vector normalization, etc.
Obviously, by consolidating all the utility subroutines in a library, ABAQUS has
Linear discrete systems with memory: a generalization of the Langmuir model
NASA Astrophysics Data System (ADS)
Băleanu, Dumitru; Nigmatullin, Raoul R.
2013-10-01
In this manuscript we analyze a general solution of the linear nonlocal Langmuir model within time scale calculus. Several generalizations of the Langmuir model are presented together with their exact corresponding solutions. The physical meaning of the proposed models is investigated and their corresponding geometries are reported.
An approximate generalized linear model with random effects for informative missing data.
Follmann, D; Wu, M
1995-03-01
This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.
Linear and nonlinear dynamic analysis by boundary element method. Ph.D. Thesis, 1986 Final Report
NASA Technical Reports Server (NTRS)
Ahmad, Shahid
1991-01-01
An advanced implementation of the direct boundary element method (BEM) applicable to free-vibration, periodic (steady-state) vibration, and linear and nonlinear transient dynamic problems involving two- and three-dimensional isotropic solids of arbitrary shape is presented. Interior, exterior, and half-space problems can all be solved by the present formulation. For the free-vibration analysis, a new real-variable BEM formulation is presented which solves the free-vibration problem in the form of algebraic equations (formed from the static kernels) and needs only surface discretization. In the area of time-domain transient analysis, the BEM is well suited because it gives an implicit formulation. Although the integral formulations are elegant, because of the complexity of the formulation they have never been implemented in exact form. In the present work, linear and nonlinear time-domain transient analysis for three-dimensional solids has been implemented in a general and complete manner. The formulation and implementation of the nonlinear, transient, dynamic analysis presented here is the first ever in the field of boundary element analysis. Almost all existing formulations of the BEM in dynamics use constant variation of the variables in space and time, which is very unrealistic for engineering problems and, in some cases, leads to unacceptably inaccurate results. In the present work, linear and quadratic isoparametric boundary elements are used for discretization of geometry and functional variations in space, and higher-order variations in time are used. These methods of analysis are applicable to piecewise-homogeneous materials, so that not only can problems of layered media and soil-structure interaction be analyzed, but large problems can also be solved by the usual sub-structuring technique. The analyses have been incorporated in a versatile, general-purpose computer program.
Some numerical problems are solved and, through comparisons with available analytical and numerical results, the stability and high accuracy of these dynamic analysis techniques are established.
Resultant as the determinant of a Koszul complex
NASA Astrophysics Data System (ADS)
Anokhina, A. S.; Morozov, A. Yu.; Shakirov, Sh. R.
2009-09-01
The determinant is a very important characteristic of a linear map between vector spaces. Two generalizations of linear maps are intensively used in modern theory: linear complexes (nilpotent chains of linear maps) and nonlinear maps. The determinant of a complex and the resultant are then the corresponding generalizations of the determinant of a linear map. It turns out that these two quantities are related: the resultant of a nonlinear map is the determinant of the corresponding Koszul complex. We give an elementary introduction to these notions and relations, which will definitely play a role in the future development of theoretical physics.
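In the simplest nontrivial case, two univariate polynomials, the determinant realizing the resultant is the classical Sylvester determinant, which vanishes exactly when the polynomials share a root. A small numerical sketch of this special case (not the general Koszul-complex construction of the paper):

```python
import numpy as np

def sylvester_resultant(f, g):
    """Resultant of two univariate polynomials (coefficient lists from
    highest degree down to the constant term), computed as the
    determinant of the Sylvester matrix -- the simplest instance of a
    'determinant of a complex' computing a resultant."""
    m, n = len(f) - 1, len(g) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                    # n shifted copies of f
        S[i, i:i + m + 1] = f
    for i in range(m):                    # m shifted copies of g
        S[n + i, i:i + n + 1] = g
    return float(np.linalg.det(S))

# (x - 1)(x - 2) and (x - 1)(x + 3) share the root x = 1: resultant 0.
shared = sylvester_resultant([1.0, -3.0, 2.0], [1.0, 2.0, -3.0])
# (x - 1)(x - 2) and (x + 3)(x + 4) share no root: resultant 600.
nonzero = sylvester_resultant([1.0, -3.0, 2.0], [1.0, 7.0, 12.0])
```

For monic polynomials the Sylvester determinant equals the product of g evaluated at the roots of f, which makes the shared-root criterion immediate.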
Evaluation of the Coreless Linear Conduction Pump for Thermoelectromagnetic Pumps,
1991-08-01
Accession Number: 4466. Publication Date: Aug 01, 1991. Report Prepared for: SDIO/T/SL, Washington, DC 20301-7100. Descriptors, Keywords: Coreless Linear Conduction Pump; Thermoelectromagnetic Pumps. Record ID: 26727. SUMMARY: The purpose of this study was to evaluate the feasibility of the Coreless Linear Conduction Pump (CLCP) as a means of ...
Bisimulation equivalence of differential-algebraic systems
NASA Astrophysics Data System (ADS)
Megawati, Noorma Yulia; Schaft, Arjan van der
2018-01-01
In this paper, the notion of bisimulation relation for linear input-state-output systems is extended to general linear differential-algebraic (DAE) systems. Geometric control theory is used to derive a linear-algebraic characterisation of bisimulation relations, and an algorithm for computing the maximal bisimulation relation between two linear DAE systems. The general definition is specialised to the case where the matrix pencil sE - A is regular. Furthermore, by developing a one-sided version of bisimulation, characterisations of simulation and abstraction are obtained.
Using Linear and Quadratic Functions to Teach Number Patterns in Secondary School
ERIC Educational Resources Information Center
Kenan, Kok Xiao-Feng
2017-01-01
This paper outlines an approach to definitively find the general term in a number pattern, of either a linear or quadratic form, by using the general equation of a linear or quadratic function. This approach is governed by four principles: (1) identifying the position of the term (input) and the term itself (output); (2) recognising that each…
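The difference-table idea behind such number-pattern work can be sketched as follows; this is an illustrative routine added here, not material from the paper. Constant first differences signal a linear term an + b; constant second differences signal a quadratic term an² + bn + c.

```python
def general_term(seq):
    """Return coefficients of the general term of a linear (an + b) or
    quadratic (an^2 + bn + c) number pattern, indexing terms from n = 1."""
    d1 = [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]
    if len(set(d1)) == 1:            # constant first differences -> linear
        a = d1[0]
        b = seq[0] - a               # first term: a*1 + b
        return ("linear", a, b)
    d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]
    if len(set(d2)) == 1:            # constant second differences -> quadratic
        a = d2[0] / 2                # second difference of an^2 + bn + c is 2a
        b = d1[0] - 3 * a            # seq[1] - seq[0] = a(4 - 1) + b = 3a + b
        c = seq[0] - a - b           # first term: a + b + c
        return ("quadratic", a, b, c)
    raise ValueError("pattern is neither linear nor quadratic")
```

For example, the pattern 3, 5, 7, 9 yields the linear term 2n + 1, and 1, 4, 9, 16 yields the quadratic term n².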
Modelling of Asphalt Concrete Stiffness in the Linear Viscoelastic Region
NASA Astrophysics Data System (ADS)
Mazurek, Grzegorz; Iwański, Marek
2017-10-01
Stiffness modulus is a fundamental parameter used in the modelling of the viscoelastic behaviour of bituminous mixtures. On the basis of the master curve in the linear viscoelasticity range, the mechanical properties of asphalt concrete at different loading times and temperatures can be predicted. This paper discusses the construction of master curves using rheological mathematical models, i.e. the sigmoidal function model (MEPDG), the fractional model, and the model of Bahia and co-workers, in comparison to the results from mechanistic rheological models, i.e. the generalized Huet-Sayegh model, the generalized Maxwell model and the Burgers model. For the purposes of this analysis, the reference asphalt concrete mix (denoted as AC16W) intended for the binder course layer and for traffic category KR3 (5×10⁵
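For reference, the sigmoidal (MEPDG) master-curve form mentioned above can be sketched as follows. The fitting coefficients below are placeholder assumptions for illustration, not values fitted in the study.

```python
import math

def master_curve_modulus(t_r, delta=1.0, alpha=3.0, beta=-1.0, gamma=0.5):
    """MEPDG sigmoidal master curve:
    log10|E*| = delta + alpha / (1 + exp(beta + gamma * log10(t_r))),
    where t_r is the reduced loading time. delta is the lower asymptote and
    delta + alpha the upper asymptote of log10|E*|. Coefficients here are
    illustrative placeholders, not fitted values from the paper."""
    log_e = delta + alpha / (1.0 + math.exp(beta + gamma * math.log10(t_r)))
    return 10.0 ** log_e   # stiffness modulus, e.g. in MPa
```

With gamma > 0 the predicted stiffness decreases monotonically with reduced loading time, as expected for a viscoelastic bituminous mixture.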
Watkins, Yashika J.; Quinn, Lauretta T.; Ruggiero, Laurie; Quinn, Michael T.; Choi, Young-Ku
2013-01-01
Purpose The purpose of this study is to investigate the relationship among spiritual and religious beliefs and practices, social support, and diabetes self-care activities in African Americans with type 2 diabetes, hypothesizing that there would be a positive association. Method This cohort study used a cross-sectional design that focused on baseline data from a larger randomized control trial. Diabetes self-care activities (Summary of Diabetes Self-Care Activities; SDSCA) and sociodemographic characteristics were assessed, in addition to spiritual and religious beliefs and practices and social support using the Systems of Belief Inventory (SBI) subscale I (beliefs and practices) and subscale II (social support). Results There were 132 participants: most were female, middle-aged, obese, single, high school-educated, and not employed. Using Pearson correlation matrices, there were significant relationships between spiritual and religious beliefs and practices and general diet. Additional significant relationships were found for social support with general diet, specific diet, and foot care. Using multiple linear regression, social support was a significant predictor for general diet, specific diet, and foot care. Gender was a significant predictor for specific diet, and income was a significant predictor for blood glucose testing. Conclusions The findings of this study highlight the importance of spiritual and religious beliefs and practices and social support in diabetes self-care activities. Future research should focus on determining how providers integrate patients' beliefs and practices and social support into clinical practice and include those in behavior change interventions. PMID:23411653
A refined analysis of composite laminates. [theory of statics and dynamics
NASA Technical Reports Server (NTRS)
Srinivas, S.
1973-01-01
The purpose of this paper is to develop a sufficiently accurate analysis, which is much simpler than exact three-dimensional analysis, for statics and dynamics of composite laminates. The governing differential equations and boundary conditions are derived by following a variational approach. The displacements are assumed piecewise linear across the thickness and the effects of transverse shear deformations and rotary inertia are included. A procedure for obtaining the general solution of the above governing differential equations in the form of hyperbolic-trigonometric series is given. The accuracy of the present theory is assessed by obtaining results for free vibrations and flexure of simply supported rectangular laminates and comparing them with results from exact three-dimensional analysis.
Quantum corrections to the generalized Proca theory via a matter field
NASA Astrophysics Data System (ADS)
Amado, André; Haghani, Zahra; Mohammadi, Azadeh; Shahidi, Shahab
2017-09-01
We study the quantum corrections to the generalized Proca theory via matter loops. We consider two types of interactions, linear and nonlinear in the vector field. Calculating the one-loop correction to the vector field propagator, three- and four-point functions, we show that the non-linear interactions are harmless, although they renormalize the theory. The linear matter-vector field interactions introduce ghost degrees of freedom to the generalized Proca theory. Treating the theory as an effective theory, we calculate the energy scale up to which the theory remains healthy.
Transfer Student Success: Educationally Purposeful Activities Predictive of Undergraduate GPA
ERIC Educational Resources Information Center
Fauria, Renee M.; Fuller, Matthew B.
2015-01-01
Researchers evaluated the effects of Educationally Purposeful Activities (EPAs) on transfer and nontransfer students' cumulative GPAs. Hierarchical, linear, and multiple regression models yielded seven statistically significant educationally purposeful items that influenced undergraduate student GPAs. Statistically significant positive EPAs for…
Weidling, Patrick; Jaschinski, Wolfgang
2015-01-01
When presbyopic employees wear general-purpose progressive lenses, they have clear vision only with a lower gaze inclination to the computer monitor, given that the head assumes a comfortable inclination. Therefore, in the present intervention field study the monitor position was lowered, also with the aim of reducing musculoskeletal symptoms. A comparison group comprised users of lenses that do not restrict the field of clear vision. The lower monitor positions led the participants to lower their head inclination, which was linearly associated with a significant reduction in musculoskeletal symptoms. However, for progressive lenses a lower head inclination means a lower zone of clear vision, so clear vision of the complete monitor was not achieved; rather, the monitor should have been placed even lower. The procedures of this study may be useful for optimising the individual monitor position depending on the comfortable head and gaze inclination and the vertical zone of clear vision of progressive lenses. For users of general-purpose progressive lenses, it is suggested that low monitor positions allow for clear vision at the monitor and for a physiologically favourable head inclination. Employees may improve their workplace using a flyer providing ergonomic-optometric information.
Probabilistic Structural Analysis Program
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Chamis, Christos C.; Murthy, Pappu L. N.; Stefko, George L.; Riha, David S.; Thacker, Ben H.; Nagpal, Vinod K.; Mital, Subodh K.
2010-01-01
NASA/NESSUS 6.2c is a general-purpose, probabilistic analysis program that computes probability of failure and probabilistic sensitivity measures of engineered systems. Because NASA/NESSUS uses highly computationally efficient and accurate analysis techniques, probabilistic solutions can be obtained even for extremely large and complex models. Once the probabilistic response is quantified, the results can be used to support risk-informed decisions regarding reliability for safety-critical and one-of-a-kind systems, as well as for maintaining a level of quality while reducing manufacturing costs for larger-quantity products. NASA/NESSUS has been successfully applied to a diverse range of problems in aerospace, gas turbine engines, biomechanics, pipelines, defense, weaponry, and infrastructure. This program combines state-of-the-art probabilistic algorithms with general-purpose structural analysis and lifing methods to compute the probabilistic response and reliability of engineered structures. Uncertainties in load, material properties, geometry, boundary conditions, and initial conditions can be simulated. The structural analysis methods include non-linear finite-element methods, heat-transfer analysis, polymer/ceramic matrix composite analysis, monolithic (conventional metallic) materials life-prediction methodologies, boundary element methods, and user-written subroutines. Several probabilistic algorithms are available such as the advanced mean value method and the adaptive importance sampling method. NASA/NESSUS 6.2c is structured in a modular format with 15 elements.
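The kind of probability-of-failure computation described above can be caricatured by crude Monte Carlo sampling of a stress-strength limit state. The limit state g = R − S and the normal distributions below are illustrative assumptions for the sketch, not NESSUS's models or algorithms (which include the advanced mean value and adaptive importance sampling methods).

```python
import random

def probability_of_failure(n_samples=100_000, seed=1):
    """Crude Monte Carlo estimate of P(failure) for the stress-strength
    limit state g = R - S; failure occurs when g < 0. The two normal
    distributions are hypothetical, chosen only for illustration."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        strength = rng.gauss(500.0, 50.0)   # resistance R
        stress = rng.gauss(350.0, 40.0)     # load effect S
        if strength - stress < 0.0:
            failures += 1
    return failures / n_samples
```

For these assumed distributions, R − S is normal with mean 150 and standard deviation ≈ 64, so the true failure probability is roughly 1%; the sampling estimate should land near that value.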
A neural network based methodology to predict site-specific spectral acceleration values
NASA Astrophysics Data System (ADS)
Kamatchi, P.; Rajasankar, J.; Ramana, G. V.; Nagpal, A. K.
2010-12-01
A general neural network based methodology that has the potential to replace the computationally-intensive site-specific seismic analysis of structures is proposed in this paper. The basic framework of the methodology consists of a feed forward back propagation neural network algorithm with one hidden layer to represent the seismic potential of a region and soil amplification effects. The methodology is implemented and verified with parameters corresponding to Delhi city in India. For this purpose, strong ground motions are generated at bedrock level for a chosen site in Delhi due to earthquakes considered to originate from the central seismic gap of the Himalayan belt using necessary geological as well as geotechnical data. Surface level ground motions and corresponding site-specific response spectra are obtained by using a one-dimensional equivalent linear wave propagation model. Spectral acceleration values are considered as a target parameter to verify the performance of the methodology. Numerical studies carried out to validate the proposed methodology show that the errors in predicted spectral acceleration values are within acceptable limits for design purposes. The methodology is general in the sense that it can be applied to other seismically vulnerable regions and also can be updated by including more parameters depending on the state-of-the-art in the subject.
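The basic ingredient named above, a feed forward back propagation network with one hidden layer, can be sketched in a few lines. This toy version, trained here on a simple separable Boolean function, only illustrates the architecture and weight-update rule; it does not use the paper's seismic inputs or network configuration.

```python
import math, random

def train_one_hidden_layer(data, n_hidden=4, lr=1.0, epochs=5000, seed=0):
    """Minimal feed-forward net with one hidden layer trained by
    back-propagation (sigmoid units, squared-error loss)."""
    rng = random.Random(seed)
    n_in = len(data[0][0])
    w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(n_hidden + 1)]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))

    def forward(x):
        h = [sig(w[-1] + sum(wi * xi for wi, xi in zip(w, x))) for w in w1]
        y = sig(w2[-1] + sum(wi * hi for wi, hi in zip(w2, h)))
        return h, y

    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            dy = (y - t) * y * (1.0 - y)            # output-layer delta
            for j in range(n_hidden):               # back-propagate to hidden layer
                dh = dy * w2[j] * h[j] * (1.0 - h[j])
                for i in range(n_in):
                    w1[j][i] -= lr * dh * x[i]
                w1[j][-1] -= lr * dh                # hidden bias
                w2[j] -= lr * dy * h[j]
            w2[-1] -= lr * dy                       # output bias
    return lambda x: forward(x)[1]

# Toy training set: the (linearly separable) Boolean OR function.
or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
predict = train_one_hidden_layer(or_data)
```

In the paper's setting, the inputs would be parameters representing source and site conditions and the output a spectral acceleration value; here a Boolean toy target keeps the sketch self-contained.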
A General Accelerated Degradation Model Based on the Wiener Process.
Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning
2016-12-06
Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
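A minimal sketch of the time-scale-transformed Wiener degradation path underlying such models is given below, with the generic form X(t) = μΛ(t) + σB(Λ(t)) and Λ(t) = t^b. The parameter values and the simple moment estimate of the drift are illustrative assumptions, not the paper's inference procedure.

```python
import math, random

def simulate_degradation(mu, sigma, b, t_end=100.0, n_steps=1000, seed=7):
    """One sample path of a time-scale-transformed Wiener degradation
    process X(t) = mu*Lam(t) + sigma*B(Lam(t)) with Lam(t) = t**b.
    Generic sketch of the model family; parameters are illustrative."""
    rng = random.Random(seed)
    lam = lambda t: t ** b                 # nonlinear time-scale transformation
    ts = [i * t_end / n_steps for i in range(n_steps + 1)]
    x, path = 0.0, [0.0]
    for t0, t1 in zip(ts, ts[1:]):
        dlam = lam(t1) - lam(t0)           # increment of transformed time
        x += mu * dlam + sigma * math.sqrt(dlam) * rng.gauss(0.0, 1.0)
        path.append(x)
    return ts, path

ts, path = simulate_degradation(mu=0.5, sigma=0.2, b=0.8)
mu_hat = path[-1] / ts[-1] ** 0.8          # moment estimate of the drift mu
```

Because E[X(t)] = μΛ(t), dividing the final observation by Λ(t_end) gives a simple unbiased drift estimate; full maximum-likelihood inference as in the paper would use all increments.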
7 CFR 2902.48 - General purpose household cleaners.
Code of Federal Regulations, 2010 CFR
2010-01-01
7 CFR 2902.48 (2010-01-01): General purpose household cleaners. ... PROCUREMENT Designated Items § 2902.48 General purpose household cleaners. (a) Definition. Products designed... procurement preference for qualifying biobased general purpose household cleaners. By that date, Federal...
In search of average growth: describing within-year oral reading fluency growth across Grades 1-8.
Nese, Joseph F T; Biancarosa, Gina; Cummings, Kelli; Kennedy, Patrick; Alonzo, Julie; Tindal, Gerald
2013-10-01
Measures of oral reading fluency (ORF) are perhaps the most often used assessment to monitor student progress as part of a response to intervention (RTI) model. Rates of growth in research and aim lines in practice are used to characterize student growth; in either case, growth is generally defined as linear, increasing at a constant rate. Recent research suggests ORF growth follows a nonlinear trajectory, but limitations related to the datasets used in such studies, composed of only three testing occasions, curtail their ability to examine the true functional form of ORF growth. The purpose of this study was to model within-year ORF growth using up to eight testing occasions for 1448 students in Grades 1 to 8 to assess (a) the average growth trajectory for within-year ORF growth, (b) whether students vary significantly in within-year ORF growth, and (c) the extent to which findings are consistent across grades. Results demonstrated that for Grades 1 to 7, a quadratic growth model fit better than either linear or cubic growth models, and for Grade 8, there was no substantial, stable growth. Findings suggest that the expectation for linear growth currently used in practice may be unrealistic. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
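The linear-versus-quadratic model comparison at the heart of the study can be illustrated with ordinary least squares on toy decelerating scores. The numbers below are hypothetical, not the study's ORF data.

```python
def polyfit_sse(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations, returning the
    sum of squared errors. Small illustrative solver with partial pivoting,
    not a growth-modelling package."""
    n = degree + 1
    # Normal equations A c = r for coefficients c of 1, x, ..., x^degree.
    a = [[float(sum(x ** (i + j) for x in xs)) for j in range(n)] for i in range(n)]
    r = [float(sum(y * x ** i for x, y in zip(xs, ys))) for i in range(n)]
    for col in range(n):                       # Gaussian elimination
        piv = max(range(col, n), key=lambda k: abs(a[k][col]))
        a[col], a[piv] = a[piv], a[col]
        r[col], r[piv] = r[piv], r[col]
        for k in range(col + 1, n):
            f = a[k][col] / a[col][col]
            a[k] = [ak - f * ac for ak, ac in zip(a[k], a[col])]
            r[k] -= f * r[col]
    c = [0.0] * n
    for i in reversed(range(n)):               # back substitution
        c[i] = (r[i] - sum(a[i][j] * c[j] for j in range(i + 1, n))) / a[i][i]
    fit = lambda x: sum(cj * x ** j for j, cj in enumerate(c))
    return sum((fit(x) - y) ** 2 for x, y in zip(xs, ys))

# Decelerating "growth" scores over eight testing occasions (toy numbers).
occasions = list(range(1, 9))
scores = [20, 38, 52, 62, 68, 71, 72, 72]
```

For decelerating data like this, the quadratic fit attains a strictly smaller sum of squared errors than the linear fit, which is the qualitative pattern the study reports for Grades 1 to 7.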
Flühs, Dirk; Flühs, Andrea; Ebenau, Melanie; Eichmann, Marion
2015-01-01
Background Dosimetric measurements in small radiation fields with large gradients, such as eye plaque dosimetry with β or low-energy photon emitters, require dosimetrically almost water-equivalent detectors with volumes of <1 mm3 and linear responses over several orders of magnitude. Polyvinyltoluene-based scintillators fulfil these conditions. Hence, they are a standard for such applications. However, they show disadvantages with regard to certain material properties and their dosimetric behaviour towards low-energy photons. Purpose, Materials and Methods Polyethylene naphthalate, recently recognized as a scintillator, offers chemical, physical and basic dosimetric properties superior to polyvinyltoluene. Its general applicability as a clinical dosimeter, however, has not been shown yet. To prove this applicability, extensive measurements at several clinical photon and electron radiation sources, ranging from ophthalmic plaques to a linear accelerator, were performed. Results For all radiation qualities under investigation, covering a wide range of dose rates, a linearity of the detector response to the dose was shown. Conclusion Polyethylene naphthalate proved to be a suitable detector material for the dosimetry of ophthalmic plaques, including low-energy photon emitters and other small radiation fields. Due to superior properties, it has the potential to replace polyvinyltoluene as the standard scintillator for such applications. PMID:27171681
Vibrational spectroscopy via the Caldeira-Leggett model with anharmonic system potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gottwald, Fabian; Ivanov, Sergei D., E-mail: sergei.ivanov@uni-rostock.de; Kühn, Oliver
2016-04-28
The Caldeira-Leggett (CL) model, which describes a system bi-linearly coupled to a harmonic bath, has enjoyed popularity in condensed phase spectroscopy owing to its utmost simplicity. However, the applicability of the model to cases with anharmonic system potentials, as is required for the description of realistic systems in solution, is questionable due to the presence of the invertibility problem [F. Gottwald et al., J. Phys. Chem. Lett. 6, 2722 (2015)] unless the system itself resembles the CL model form. This might well be the case at surfaces or in the solid regime, which we here confirm for a particular example of an iodine molecule in the atomic argon environment under high pressure. For this purpose we extend the recently proposed Fourier method for parameterizing linear generalized Langevin dynamics [F. Gottwald et al., J. Chem. Phys. 142, 244110 (2015)] to the non-linear case based on the CL model and perform an extensive error analysis. In order to judge the applicability of this model in advance, we give practical empirical criteria and discuss the effect of the potential renormalization term. The obtained results provide evidence that the CL model can be used for describing a potentially broad class of systems.
Identification of single-input-single-output quantum linear systems
NASA Astrophysics Data System (ADS)
Levitt, Matthew; Guţă, Mădălin
2017-03-01
The purpose of this paper is to investigate system identification for single-input-single-output general (active or passive) quantum linear systems. For a given input we address the following questions: (1) Which parameters can be identified by measuring the output? (2) How can we construct a system realization from sufficient input-output data? We show that for time-dependent inputs, the systems which cannot be distinguished are related by symplectic transformations acting on the space of system modes. This complements a previous result of Guţă and Yamamoto [IEEE Trans. Autom. Control 61, 921 (2016), 10.1109/TAC.2015.2448491] for passive linear systems. In the regime of stationary quantum noise input, the output is completely determined by the power spectrum. We define the notion of global minimality for a given power spectrum, and characterize globally minimal systems as those with a fully mixed stationary state. We show that in the case of systems with a cascade realization, the power spectrum completely fixes the transfer function, so the system can be identified up to a symplectic transformation. We give a method for constructing a globally minimal subsystem direct from the power spectrum. Restricting to passive systems the analysis simplifies so that identifiability may be completely understood from the eigenvalues of a particular system matrix.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-24
... (NOA) for General Purpose Warehouse and Information Technology Center Construction (GPW/IT)--Tracy Site.... ACTION: Notice of Availability (NOA) for General Purpose Warehouse and Information Technology Center... FR 65300) announcing the publication of the General Purpose Warehouse and Information Technology...
ERIC Educational Resources Information Center
Xu, Xueli; von Davier, Matthias
2008-01-01
The general diagnostic model (GDM) utilizes located latent classes for modeling a multidimensional proficiency variable. In this paper, the GDM is extended by employing a log-linear model for multiple populations that assumes constraints on parameters across multiple groups. This constrained model is compared to log-linear models that assume…
Generalized Fluid System Simulation Program (GFSSP) - Version 6
NASA Technical Reports Server (NTRS)
Majumdar, Alok; LeClair, Andre; Moore, Ric; Schallhorn, Paul
2015-01-01
The Generalized Fluid System Simulation Program (GFSSP) is a finite-volume based general-purpose computer program for analyzing steady state and time-dependent flow rates, pressures, temperatures, and concentrations in a complex flow network. The program is capable of modeling real fluids with phase changes, compressibility, mixture thermodynamics, conjugate heat transfer between solid and fluid, fluid transients, pumps, compressors, flow control valves and external body forces such as gravity and centrifugal. The thermo-fluid system to be analyzed is discretized into nodes, branches, and conductors. The scalar properties such as pressure, temperature, and concentrations are calculated at nodes. Mass flow rates and heat transfer rates are computed in branches and conductors. The graphical user interface allows users to build their models using the 'point, drag, and click' method; the users can also run their models and post-process the results in the same environment. The integrated fluid library supplies thermodynamic and thermo-physical properties of 36 fluids, and 24 different resistance/source options are provided for modeling momentum sources or sinks in the branches. Users can introduce new physics, non-linear and time-dependent boundary conditions through user-subroutine.
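The node/branch discretization described above can be illustrated with a drastically simplified linear analogue: fixed boundary pressures, constant branch conductances, and a Gauss-Seidel sweep enforcing flow balance at every interior node. GFSSP itself solves nonlinear real-fluid equations; this sketch only shows the network bookkeeping, and all names and values are illustrative.

```python
def solve_pressures(conduct, fixed, n_iter=2000):
    """Steady-state node pressures in a linearized flow network: at each
    interior node the conductance-weighted flows must balance. Boundary
    nodes listed in `fixed` keep their prescribed pressures."""
    neighbors = {}
    for (i, j), c in conduct.items():
        neighbors.setdefault(i, []).append((j, c))
        neighbors.setdefault(j, []).append((i, c))
    p = {n: fixed.get(n, 0.0) for n in neighbors}
    for _ in range(n_iter):                    # Gauss-Seidel relaxation
        for n in neighbors:
            if n in fixed:
                continue
            p[n] = (sum(c * p[m] for m, c in neighbors[n])
                    / sum(c for _, c in neighbors[n]))
    return p

# Two branches in series: inlet held at 100, outlet at 0, one interior node.
p = solve_pressures({("in", "mid"): 2.0, ("mid", "out"): 1.0},
                    fixed={"in": 100.0, "out": 0.0})
```

For this series network the interior pressure settles where the flow entering through the high-conductance branch equals the flow leaving through the low-conductance one, i.e. at 200/3 ≈ 66.7.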
Psychosocial issues on-orbit: results from two space station programs
NASA Astrophysics Data System (ADS)
Kanas, N. A.; Salnitskiy, V. P.; Ritsher, J. B.; Gushin, V. I.; Weiss, D. S.; Saylor, S. A.; Marmar, C. R.
PURPOSE: Psychosocial issues affecting people working in isolated and confined environments such as spacecraft can jeopardize mental health and mission safety. Our team has completed two large NASA-funded studies involving missions to the Mir and International Space Stations where crewmembers were on-orbit for four to seven months. Combining these two datasets allows us to generalize across these two settings and maximize statistical power in testing our hypotheses. This paper presents results from three sets of hypotheses concerning possible changes in mood and social climate over time, displacement of negative emotions to outside monitoring personnel, and the task and support roles of the leader. METHODS: The combined sample of 216 participants included 13 American astronauts, 17 Russian cosmonauts, and 150 U.S. and 36 Russian mission control personnel. Subjects completed a weekly questionnaire that included items from the Profile of Mood States, the Group Environment Scale, and the Work Environment Scale, producing 20 subscale scores. The analytic strategy included piecewise linear regression and general linear modeling, and it accounted for the effects of multiple observations per person and multiple analyses. RESULTS: There was little evidence to suggest that universal changes in levels of mood and group climate occurred among astronauts and cosmonauts over time. Although a few individuals experienced decrements in the second half of the mission, the majority did not. However, there was evidence that subjects displaced negative emotions to outside
Inflation in a closed universe
NASA Astrophysics Data System (ADS)
Ratra, Bharat
2017-11-01
To derive a power spectrum for energy density inhomogeneities in a closed universe, we study a spatially-closed inflation-modified hot big bang model whose evolutionary history is divided into three epochs: an early slowly-rolling scalar field inflation epoch and the usual radiation and nonrelativistic matter epochs. (For our purposes it is not necessary to consider a final dark energy dominated epoch.) We derive general solutions of the relativistic linear perturbation equations in each epoch. The constants of integration in the inflation epoch solutions are determined from de Sitter invariant quantum-mechanical initial conditions in the Lorentzian section of the inflating closed de Sitter space derived from Hawking's prescription that the quantum state of the universe only include field configurations that are regular on the Euclidean (de Sitter) sphere section. The constants of integration in the radiation and matter epoch solutions are determined from joining conditions derived by requiring that the linear perturbation equations remain nonsingular at the transitions between epochs. The matter epoch power spectrum of gauge-invariant energy density inhomogeneities is not a power law, and depends on spatial wave number in the way expected for a generalization to the closed model of the standard flat-space scale-invariant power spectrum. The power spectrum we derive appears to differ from a number of other closed inflation model power spectra derived assuming different (presumably non de Sitter invariant) initial conditions.
NASA Astrophysics Data System (ADS)
Chowdhury, Aritra; Sevinsky, Christopher J.; Santamaria-Pang, Alberto; Yener, Bülent
2017-03-01
The cancer diagnostic workflow is typically performed by highly specialized and trained pathologists, for which analysis is expensive both in terms of time and money. This work focuses on grade classification in colon cancer. The analysis is performed over 3 protein markers; namely E-cadherin, beta actin and collagen IV. In addition, we also use a virtual Hematoxylin and Eosin (HE) stain. This study involves a comparison of various ways in which we can manipulate the information over the 4 different images of the tissue samples and come up with a coherent and unified response based on the data at our disposal. Pre-trained convolutional neural networks (CNNs) are the method of choice for feature extraction. The AlexNet architecture trained on the ImageNet database is used for this purpose. We extract a 4096-dimensional feature vector corresponding to the 6th layer in the network. A linear SVM is used to classify the data. The information from the 4 different images pertaining to a particular tissue sample is combined using the following techniques: soft voting, hard voting, multiplication, addition, linear combination, concatenation and multi-channel feature extraction. We observe that, in general, we obtain better results when we use a linear combination of the feature representations. We use 5-fold cross validation to perform the experiments. The best results are obtained when the various features are linearly combined together, resulting in a mean accuracy of 91.27%.
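Several of the fusion strategies listed above amount to simple vector arithmetic on the per-image feature vectors. The sketch below is a hedged reimplementation of that arithmetic, not the study's code; the toy dimensionality in the example stands in for the 4096-dimensional AlexNet features.

```python
def fuse(feature_sets, method="linear", weights=None):
    """Fuse per-stain feature vectors (e.g. one d-dimensional vector per
    marker image) into a single representation for a downstream linear
    classifier. Illustrative reimplementation of the fusion arithmetic."""
    if method == "concat":                    # concatenation: k vectors of d -> k*d
        return [v for fs in feature_sets for v in fs]
    if method == "add":                       # element-wise addition
        return [sum(vs) for vs in zip(*feature_sets)]
    if method == "multiply":                  # element-wise product
        out = [1.0] * len(feature_sets[0])
        for fs in feature_sets:
            out = [o * v for o, v in zip(out, fs)]
        return out
    if method == "linear":                    # weighted linear combination
        w = weights or [1.0 / len(feature_sets)] * len(feature_sets)
        return [sum(wi * v for wi, v in zip(w, vs)) for vs in zip(*feature_sets)]
    raise ValueError(method)
```

With equal weights the linear combination reduces to a simple average; the study's finding is that a (weighted) linear combination of the four per-stain representations fed to the linear SVM performed best.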
The Use of Linear Programming for Prediction.
ERIC Educational Resources Information Center
Schnittjer, Carl J.
The purpose of the study was to develop a linear programming model to be used for prediction, test the accuracy of the predictions, and compare the accuracy with that produced by curvilinear multiple regression analysis. (Author)
Zhao, Yingfeng; Liu, Sanyang
2016-01-01
We present a practical branch and bound algorithm for globally solving the generalized linear multiplicative programming problem with multiplicative constraints. To solve the problem, a relaxation programming problem which is equivalent to a linear program is proposed by utilizing a new two-phase relaxation technique. In the algorithm, lower and upper bounds are simultaneously obtained by solving some linear relaxation programming problems. Global convergence has been proved, and results of some sample examples and a small random experiment show that the proposed algorithm is feasible and efficient.
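The branch-and-bound pattern, bound each box, prune boxes that cannot beat the incumbent, split the rest, can be shown on a one-variable multiplicative objective. This toy interval-arithmetic version is far simpler than the paper's two-phase linear relaxation and is offered only as an illustration of the scheme.

```python
def branch_and_bound_min(terms, lo, hi, tol=1e-4):
    """Globally minimize f(x) = prod_k (a_k*x + b_k) over [lo, hi] by
    interval branch and bound: bound each box with interval arithmetic,
    prune boxes that cannot improve the incumbent, bisect the rest."""
    def obj(x):
        out = 1.0
        for a, b in terms:
            out *= a * x + b
        return out

    def lower_bound(l, h):
        plo = phi = 1.0
        for a, b in terms:                    # interval product of the terms
            tl, th = sorted((a * l + b, a * h + b))
            cands = (plo * tl, plo * th, phi * tl, phi * th)
            plo, phi = min(cands), max(cands)
        return plo

    best = min(obj(lo), obj(hi))              # incumbent from the endpoints
    boxes = [(lo, hi)]
    while boxes:
        l, h = boxes.pop()
        if lower_bound(l, h) >= best - tol or h - l < 1e-9:
            continue                          # box cannot improve the incumbent
        m = (l + h) / 2.0
        best = min(best, obj(m))              # incumbent update at the midpoint
        boxes += [(l, m), (m, h)]
    return best
```

For example, minimizing (x + 1)(2 − x) over [0, 2] gives 0 (attained at x = 2), and minimizing (x − 1)² written as a product of two identical linear terms gives 0 at x = 1.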
Park, Jee Won; Kim, Chun Ja; Kim, Yong Soon; Yoo, Moon Sook; Yoo, Hyera; Chae, Sun Mi; Ahn, Jeong Ah
2012-09-01
The purpose of this study was to evaluate the relationships among critical thinking disposition, general self-efficacy, leadership and clinical competence, and identify the factors influencing clinical competence in nursing students. In this descriptive study, 153 nursing students (from 2nd to 4th school year) of a university in South Korea were enrolled in December 2010. The instruments for this study were the Korean versions of the Critical Thinking Disposition Scale, General Self-Efficacy Scale, Leadership Inventory, and Clinical Competence Scale. Data were analyzed by descriptive statistics, t-test, MANOVA, Pearson correlation, and multiple linear regression with PASW 18.0 software. The mean scores (ranging from 1 to 5) in nursing students for critical thinking disposition, general self-efficacy, leadership, and clinical competence were 3.44, 3.51, 3.55, and 3.42, respectively. Positive correlations were found for clinical competence with critical thinking disposition, general self-efficacy, and leadership. The strongest predictor of clinical competence was leadership. In addition, leadership, nursing school year, and subjective academic achievement accounted for 34.5% of variance in clinical competence. This study revealed that developing leadership, critical thinking disposition, and self-efficacy in undergraduate nursing education is important to improve clinical competence of nursing students.
NASA Technical Reports Server (NTRS)
Beaton, K. H.; Holly, J. E.; Clement, G. R.; Wood, S. J.
2011-01-01
The neural mechanisms to resolve ambiguous tilt-translation motion have been hypothesized to be different for motion perception and eye movements. Previous studies have demonstrated differences in ocular and perceptual responses using a variety of motion paradigms, including Off-Vertical Axis Rotation (OVAR), Variable Radius Centrifugation (VRC), translation along a linear track, and tilt about an Earth-horizontal axis. While the linear acceleration across these motion paradigms is presumably equivalent, there are important differences in semicircular canal cues. The purpose of this study was to compare translation motion perception and horizontal slow phase velocity to quantify consistencies, or lack thereof, across four different motion paradigms. Twelve healthy subjects were exposed to sinusoidal interaural linear acceleration between 0.01 and 0.6 Hz at 1.7 m/s² (equivalent to 10° tilt) using OVAR, VRC, roll tilt, and lateral translation. During each trial, subjects verbally reported the amount of perceived peak-to-peak lateral translation and indicated the direction of motion with a joystick. Binocular eye movements were recorded using video-oculography. In general, the gain of translation perception (ratio of reported linear displacement to equivalent linear stimulus displacement) increased with stimulus frequency, while the phase did not significantly vary. However, translation perception was more pronounced during both VRC and lateral translation involving actual translation, whereas perceptions were less consistent and more variable during OVAR and roll tilt, which did not involve actual translation. For each motion paradigm, horizontal eye movements were negligible at low frequencies and showed phase lead relative to the linear stimulus. At higher frequencies, the gain of the eye movements increased and became more in phase with the acceleration stimulus.
While these results are consistent with the hypothesis that the neural computational strategies for motion perception and eye movements differ, they also indicate that the specific motion platform employed can have a significant effect on both the amplitude and phase of each type of response.
Liu, Lan; Jiang, Tao
2007-01-01
With the launch of the international HapMap project, the haplotype inference problem has attracted a great deal of attention in the computational biology community recently. In this paper, we study the question of how to efficiently infer haplotypes from genotypes of individuals related by a pedigree without mating loops, assuming that the hereditary process was free of mutations (i.e. the Mendelian law of inheritance) and recombinants. We model the haplotype inference problem as a system of linear equations as in [10] and present an (optimal) linear-time (i.e. O(mn) time) algorithm to generate a particular solution (a particular solution of any linear system is an assignment of numerical values to the variables in the system which satisfies the equations in the system) to the haplotype inference problem, where m is the number of loci (or markers) in a genotype and n is the number of individuals in the pedigree. Moreover, the algorithm also provides a general solution (a general solution of any linear system is denoted by the span of a basis in the solution space of its associated homogeneous system, offset from the origin by a vector, namely by any particular solution; a general solution for ZRHC is very useful in practice because it allows the end user to efficiently enumerate all solutions for ZRHC and perform tasks such as random sampling) in O(mn²) time, which is optimal because the size of a general solution could be as large as Θ(mn²). The key ingredients of our construction are (i) a fast consistency checking procedure for the system of linear equations introduced in [10], based on a careful investigation of the relationship between the equations, and (ii) a novel linear-time method for solving linear equations without invoking the Gaussian elimination method.
Although such a fast method for solving equations is not known for general systems of linear equations, we take advantage of the underlying loop-free pedigree graph and some special properties of the linear equations.
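The distinction between a particular and a general solution can be sketched on a toy system over GF(2), the field in which zero-recombinant haplotype constraints are typically expressed. The sketch below deliberately uses plain Gaussian elimination for clarity (the very method the paper avoids), and the example system is invented:

```python
# Sketch (NOT the paper's linear-time algorithm): particular and general
# solutions of A x = b over GF(2) via plain Gaussian elimination.
# The general solution = particular solution + any GF(2)-combination of
# the null-space basis vectors.

def solve_gf2(A, b):
    m, n = len(A), len(A[0])
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    pivots = []                                   # pivot column per row
    r = 0
    for c in range(n):
        pr = next((i for i in range(r, m) if M[i][c]), None)
        if pr is None:
            continue
        M[r], M[pr] = M[pr], M[r]
        for i in range(m):                        # reduce above and below
            if i != r and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    if any(M[i][n] for i in range(r, m)):
        return None, []                           # inconsistent system
    x = [0] * n                                   # particular solution,
    for i, c in enumerate(pivots):                # free variables set to 0
        x[c] = M[i][n]
    free = [c for c in range(n) if c not in pivots]
    basis = []                                    # null-space basis
    for f in free:
        v = [0] * n
        v[f] = 1
        for i, c in enumerate(pivots):
            v[c] = M[i][f]                        # over GF(2), -1 == 1
        basis.append(v)
    return x, basis
```

Enumerating all solutions then means adding every subset-XOR of the basis vectors to the particular solution, which is exactly the "random sampling over the general solution" use case described above.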
NASA Astrophysics Data System (ADS)
Wu, Bofeng; Huang, Chao-Guang
2018-04-01
The 1/r expansion in the distance to the source is applied to linearized f(R) gravity, and its multipole expansion in the radiation field with irreducible Cartesian tensors is presented. Then, the energy, momentum, and angular momentum in the gravitational waves are provided for linearized f(R) gravity. All of these results have two parts, which are associated with the tensor part and the scalar part in the multipole expansion of linearized f(R) gravity, respectively. The former is the same as that in General Relativity, and the latter, as the correction to the result in General Relativity, is caused by the massive scalar degree of freedom and plays an important role in distinguishing General Relativity from f(R) gravity.
Generalized Bezout's Theorem and its applications in coding theory
NASA Technical Reports Server (NTRS)
Berg, Gene A.; Feng, Gui-Liang; Rao, T. R. N.
1996-01-01
This paper presents a generalized Bezout theorem which can be used to determine a tighter lower bound of the number of distinct points of intersection of two or more curves for a large class of plane curves. A new approach to determine a lower bound on the minimum distance (and also the generalized Hamming weights) for algebraic-geometric codes defined from a class of plane curves is introduced, based on the generalized Bezout theorem. Examples of more efficient linear codes are constructed using the generalized Bezout theorem and the new approach. For d = 4, the linear codes constructed by the new construction are better than or equal to the known linear codes. For d greater than 5, these new codes are better than the known codes. The Klein code over GF(2(sup 3)) is also constructed.
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Akyuz, F. A.; Heer, E.
1972-01-01
This program, an extension of the linear equilibrium problem solver ELAS, is an updated and extended version of its earlier form (written in FORTRAN 2 for the IBM 7094 computer). A synchronized material property concept utilizing incremental time steps and the finite element matrix displacement approach has been adopted for the current analysis. A special option enables employment of constant time steps in the logarithmic scale, thereby reducing computational efforts resulting from accumulative material memory effects. A wide variety of structures with elastic or viscoelastic material properties can be analyzed by VISCEL. The program is written in FORTRAN 5 language for the Univac 1108 computer operating under the EXEC 8 system. Dynamic storage allocation is automatically effected by the program, and the user may request up to 195K core memory in a 260K Univac 1108/EXEC 8 machine. The physical program VISCEL, consisting of about 7200 instructions, has four distinct links (segments), and the compiled program occupies a maximum of about 11700 words decimal of core storage.
González-Alvarez, I; Fernández-Teruel, C; Garrigues, T M; Casabo, V G; Ruiz-García, A; Bermejo, M
2005-12-01
The purpose was to develop a general mathematical model for estimating passive permeability and efflux transport parameters from in vitro cell culture experiments. The procedure is applicable for linear and non-linear transport of drug with time, <10 or >10% of drug transport, negligible or relevant back flow, and would allow the adequate correction in the case of relevant mass balance problems. A compartmental kinetic approach was used and the transport barriers were described quantitatively in terms of apical and basolateral clearances. The method can be applied when sink conditions are not achieved and it allows the evaluation of the location of the transporter and its binding site. In this work it was possible to demonstrate, from a functional point of view, the higher efflux capacity of the TC7 clone and to identify the apical membrane as the main resistance for the xenobiotic transport. This methodology can be extremely useful as a complementary tool for molecular biology approaches in order to establish meaningful hypotheses about transport mechanisms.
Suprun, Elena V; Saveliev, Anatoly A; Evtugyn, Gennady A; Lisitsa, Alexander V; Bulko, Tatiana V; Shumyantseva, Victoria V; Archakov, Alexander I
2012-03-15
A novel direct antibody-free electrochemical approach for acute myocardial infarction (AMI) diagnosis has been developed. For this purpose, a combination of the electrochemical assay of plasma samples with chemometrics was proposed. Screen-printed carbon electrodes modified with didodecyldimethylammonium bromide were used for plasma characterization by cyclic voltammetry (CV) and square wave voltammetry (SWV). It was shown that the cathodic peak in voltammograms at about -250 mV vs. Ag/AgCl can be associated with AMI. In parallel tests, cardiac myoglobin and troponin I, the AMI biomarkers, were determined in each sample by RAMP immunoassay. The applicability of the electrochemical testing for AMI diagnostics was confirmed by statistical methods: generalized linear model (GLM), linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA), artificial neural net (multi-layer perceptron, MLP), and support vector machine (SVM), all of which were created to obtain the "True-False" distribution prediction, where "True" and "False" are, respectively, positive and negative decisions about an illness event. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Coubard, F.; Brédif, M.; Paparoditis, N.; Briottet, X.
2011-04-01
Terrestrial geolocalized images are nowadays widely used on the Internet, mainly in urban areas, through immersion services such as Google Street View. In the long run, we seek to enhance the visualization of these images; for that purpose, radiometric corrections must be performed to free them from the illumination conditions at the time of acquisition. Given the simultaneously acquired 3D geometric model of the scene, from LIDAR or vision techniques, we face an inverse problem where the illumination and the geometry of the scene are known and the reflectance of the scene is to be estimated. Our main contribution is the introduction of a symbolic ray-tracing rendering to generate parametric images, for quick evaluation and comparison with the acquired images. The proposed approach is then based on an iterative estimation of the reflectance parameters of the materials, using a single rendering pre-processing step. We validate the method on synthetic data with linear BRDF models and discuss the limitations of the proposed approach with more general non-linear BRDF models.
Hinz, Andreas; Kittel, Jörg; Karoff, Marthin; Daig, Isolde
2011-01-01
Anxiety and depression are often found in cardiac patients, but also in the general population. Therefore, evaluation of these symptoms in patients requires a comparison with norm values. The purpose of this study was to explore differences between cardiac patients and the general population in age dependency of anxiety and depression, and to discuss possible reasons for these differences. A sample of German cardiac patients (n = 2,696) and a sample of the German general population (n = 2,037) were tested using the Hospital Anxiety and Depression Scale (HADS). While we confirmed a linear age trend of anxiety and depression in the general population, we observed an inverted U-shaped age dependency in the patient sample. Young patients are especially affected by anxiety and depression. Five items of the HADS that mainly contributed to the age differences were identified. Formal characteristics of these 5 items could not explain the age differences. Concerning the meaning of the items, however, most of the items refer to worrying about the future. The relatively low rates of anxiety and depression in older patients (compared with the general population) indicate that adaptation processes took place, which should be taken into account in studies concerning the psychological status of patients. Young patients need special attention when dealing with mental distress. Copyright © 2011 S. Karger AG, Basel.
NASA Astrophysics Data System (ADS)
Wang, Xiao; Burghardt, Dirk
2018-05-01
This paper presents a new strategy for the generalization of discrete area features using a stroke-grouping method and polarization-transportation selection. The strokes are constructed from a refined proximity graph of the area features, with the refinement controlled by four constraints to meet different grouping requirements. Area features that belong to the same stroke are detected into the same group. The stroke-based strategy decomposes the generalization process into two sub-processes according to whether or not the area features are related to strokes. Area features belonging to the same stroke normally present a linear-like pattern, and in order to preserve this kind of pattern, typification is chosen as the operator to implement the generalization work. The remaining area features, which are not related by strokes, are still distributed randomly and discretely, and selection is chosen to conduct the generalization operation. For the purpose of retaining their original distribution characteristic, a Polarization Transportation (PT) method is introduced to implement the selection operation. Buildings and lakes are selected as representatives of artificial and natural area features, respectively, for the experiments. The generalized results indicate that by adopting the proposed strategy, the original distribution characteristics of the building and lake data can be preserved, and the visual perception is preserved as before.
Efficient polarimetric BRDF model.
Renhorn, Ingmar G E; Hallberg, Tomas; Boreman, Glenn D
2015-11-30
The purpose of the present manuscript is to present a polarimetric bidirectional reflectance distribution function (BRDF) model suitable for hyperspectral and polarimetric signature modelling. The model is based on a further development of a previously published four-parameter model that has been generalized in order to account for different types of surface structures (generalized Gaussian distribution). A generalization of the Lambertian diffuse model is presented. The pBRDF functions are normalized using numerical integration. Using directional-hemispherical reflectance (DHR) measurements, three of the four basic parameters can be determined for any wavelength. This considerably simplifies the development of multispectral polarimetric BRDF applications. The scattering parameter has to be determined from at least one BRDF measurement. The model deals with linearly polarized radiation; as in, e.g., the facet model, depolarization is not included. The model is very general and can inherently model extreme surfaces such as mirrors and Lambertian surfaces. The complex mixture of sources is described by the sum of two basic models, a generalized Gaussian/Fresnel model and a generalized Lambertian model. Although the physics-inspired model has some ad hoc features, its predictive power is impressive over a wide range of angles and scattering magnitudes. The model has been applied successfully to painted surfaces, both dull and glossy, and also to metallic bead-blasted surfaces. The simple and efficient model should be attractive for polarimetric simulations and polarimetric remote sensing.
NASA Astrophysics Data System (ADS)
Baldysz, Zofia; Nykiel, Grzegorz; Araszkiewicz, Andrzej; Figurski, Mariusz; Szafranek, Karolina
2016-09-01
The main purpose of this research was to acquire information about consistency of ZTD (zenith total delay) linear trends and seasonal components between two consecutive GPS reprocessing campaigns. The analysis concerned two sets of the ZTD time series which were estimated during EUREF (Reference Frame Sub-Commission for Europe) EPN (Permanent Network) reprocessing campaigns according to 2008 and 2015 MUT AC (Military University of Technology Analysis Centre) scenarios. Firstly, Lomb-Scargle periodograms were generated for 57 EPN stations to obtain a characterisation of oscillations occurring in the ZTD time series. Then, the values of seasonal components and linear trends were estimated using the LSE (least squares estimation) approach. The Mann-Kendall trend test was also carried out to verify the presence of linear long-term ZTD changes. Finally, differences in seasonal signals and linear trends between these two data sets were investigated. All these analyses were conducted for the ZTD time series of two lengths: a shortened 16-year series and a full 18-year one. In the case of spectral analysis, amplitudes of the annual and semi-annual periods were almost exactly the same for both reprocessing campaigns. Exceptions were found for only a few stations and they did not exceed 1 mm. The estimated trends were also similar. However, for the reprocessing performed in 2008, the trend values were usually higher. In general, shortening of the analysed time period by 2 years resulted in a decrease in the linear trend values of about 0.07 mm yr-1. This was confirmed by analyses based on two data sets.
Linear and Nonlinear Analysis of Brain Dynamics in Children with Cerebral Palsy
ERIC Educational Resources Information Center
Sajedi, Firoozeh; Ahmadlou, Mehran; Vameghi, Roshanak; Gharib, Masoud; Hemmati, Sahel
2013-01-01
This study was carried out to determine linear and nonlinear changes of brain dynamics and their relationships with the motor dysfunctions in CP children. For this purpose power of EEG frequency bands (as a linear analysis) and EEG fractality (as a nonlinear analysis) were computed in eyes-closed resting state and statistically compared between 26…
A Few New 2+1-Dimensional Nonlinear Dynamics and the Representation of Riemann Curvature Tensors
NASA Astrophysics Data System (ADS)
Wang, Yan; Zhang, Yufeng; Zhang, Xiangzhi
2016-09-01
We first introduce a linear stationary equation with a quadratic operator in ∂x and ∂y; then a linear evolution equation is given by N-order polynomials of eigenfunctions. As applications, by taking N=2, we derive a (2+1)-dimensional generalized linear heat equation with two constant parameters, associated with a symmetric space. When taking N=3, a pair of generalized Kadomtsev-Petviashvili equations with the same eigenvalues as in the case N=2 are generated. Similarly, a second-order flow associated with a homogeneous space is derived from the integrability condition of the two linear equations, which is a (2+1)-dimensional hyperbolic equation. When N=3, the third flow associated with the homogeneous space is generated, which is a pair of new generalized Kadomtsev-Petviashvili equations. Finally, as an application of a Hermitian symmetric space, we establish a pair of spectral problems to obtain a new (2+1)-dimensional generalized Schrödinger equation, which is expressed in terms of the Riemann curvature tensors.
Next Linear Collider Home Page
Welcome to the Next Linear Collider (NLC) Home Page, where you can learn about linear colliders in general and about this next-generation linear collider project's mission and design ideas.
Control of Distributed Parameter Systems
1990-08-01
...variant of the general Lotka-Volterra model for interspecific competition. The variant described the emergence of one subpopulation from another as a... A unified approximation framework for parameter estimation in general linear PDE models has been completed. This framework has provided the theoretical basis for a number of...
NASA Technical Reports Server (NTRS)
Rankin, C. C.
1988-01-01
A consistent linearization is provided for the element-dependent corotational formulation, providing the proper first and second variation of the strain energy. As a result, the warping problem that has plagued flat elements has been overcome, with beneficial effects carried over to linear solutions. True Newton quadratic convergence has been restored to the Structural Analysis of General Shells (STAGS) code for conservative loading using the full corotational implementation. Some implications for general finite element analysis are discussed, including what effect the automatic frame invariance provided by this work might have on the development of new, improved elements.
Pei, Soo-Chang; Ding, Jian-Jiun
2005-03-01
Prolate spheroidal wave functions (PSWFs) are known to be useful for analyzing the properties of the finite-extension Fourier transform (fi-FT). We extend the theory of PSWFs for the finite-extension fractional Fourier transform, the finite-extension linear canonical transform, and the finite-extension offset linear canonical transform. These finite transforms are more flexible than the fi-FT and can model much more generalized optical systems. We also illustrate how to use the generalized prolate spheroidal functions we derive to analyze the energy-preservation ratio, the self-imaging phenomenon, and the resonance phenomenon of the finite-sized one-stage or multiple-stage optical systems.
McNamara, C; Naddy, B; Rohan, D; Sexton, J
2003-10-01
The Monte Carlo computational system for stochastic modelling of dietary exposure to food chemicals and nutrients is presented. This system was developed through a European Commission-funded research project. It is accessible as a Web-based application service. The system allows and supports very significant complexity in the data sets used as the model input, but provides a simple, general purpose, linear kernel for model evaluation. Specific features of the system include the ability to enter (arbitrarily) complex mathematical or probabilistic expressions at each and every input data field, automatic bootstrapping on subjects and on subject food intake diaries, and custom kernels to apply brand information such as market share and loyalty to the calculation of food and chemical intake.
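The linear kernel described above can be sketched as a simple Monte Carlo loop: a subject's daily intake is the sum over foods of amount consumed times chemical concentration, with both factors drawn from distributions. Everything below (the food list, distribution choices, units, and function name) is an invented illustration, not the actual system's model:

```python
import random

# Hypothetical sketch of a Monte Carlo dietary-exposure kernel. Intake is
# the sum over foods of (grams eaten) x (chemical concentration, mg/g),
# with assumed (illustrative) distributions for both factors.
def simulate_daily_intake(n_subjects, foods, seed=0):
    """foods: list of (mean grams eaten per day, mean concentration mg/g)."""
    rng = random.Random(seed)
    intakes = []
    for _ in range(n_subjects):
        total_mg = 0.0
        for mean_g, mean_conc in foods:
            grams = max(0.0, rng.gauss(mean_g, 0.3 * mean_g))   # consumption
            conc = mean_conc * rng.lognormvariate(0.0, 0.25)    # concentration
            total_mg += grams * conc
        intakes.append(total_mg)
    return intakes

# Distribution summary for two hypothetical foods
intakes = simulate_daily_intake(1000, [(100.0, 0.002), (50.0, 0.01)])
p95 = sorted(intakes)[int(0.95 * len(intakes))]   # 95th-percentile intake
```

Bootstrapping on subjects or intake diaries, as the system supports, would wrap this loop in resampling of the input records.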
Seo, Bong-Kyung; Kim, Nam-Eun; Park, Kyong-Min; Park, Kye-Yeung; Park, Hoon-Ki
2017-01-01
Background: The purpose of this study was to evaluate serum lipid levels in Korean adults after consumption of different types of yogurt. Methods: Study subjects were 3,038 individuals (≥19 years of age) who participated in the 2012 Korean National Health and Nutrition Examination Survey. Yogurt intake was assessed with a food frequency questionnaire by using the 24-hour recall method. We conducted complex samples general linear analysis with adjustment for covariates. Results: The serum triglyceride levels in the group consuming viscous yogurt were lower than those in the group consuming non-viscous yogurt. Conclusion: Consumption of viscous yogurt is associated with low serum triglyceride levels in Korean adults. PMID:29026484
NASA Technical Reports Server (NTRS)
Elchuri, V.; Pamidi, P. R.
1985-01-01
This report is a supplemental NASTRAN document for a new capability to determine the vibratory response of turbosystems subjected to aerodynamic excitation. Supplements of NASTRAN Theoretical, User's, Programmer's, and Demonstration Manuals are included. Turbosystems such as advanced turbopropellers with highly swept blades, and axial-flow compressors and turbines can be analyzed using this capability, which has been developed and implemented in the April 1984 release of the general purpose finite element program NASTRAN. The dynamic response problem is addressed in terms of the normal modal coordinates of these tuned rotating cyclic structures. Both rigid and flexible hubs/disks are considered. Coriolis and centripetal accelerations, as well as differential stiffness effects are included. Generally nonuniform steady inflow fields and uniform flow fields arbitrarily inclined at small angles with respect to the axis of rotation of the turbosystem are considered as the sources of aerodynamic excitation. The spatial nonuniformities are considered to be small deviations from a principally uniform inflow. Subsonic relative inflows are addressed, with provision for linearly interpolating transonic airloads.
On the conversion of tritium units to mass fractions for hydrologic applications
Stonestrom, David A.; Andraski, Brian J.; Cooper, Clay A.; Mayers, Charles J.; Michel, Robert L.
2013-01-01
We develop a general equation for converting laboratory-reported tritium levels, expressed either as concentrations (tritium isotope number fractions) or mass-based specific activities, to mass fractions in aqueous systems. Assuming that all tritium is in the form of monotritiated water simplifies the derivation and is shown to be reasonable for most environmental settings encountered in practice. The general equation is nonlinear. For tritium concentrations c less than 4.5×10^12 tritium units (TU), i.e. specific tritium activities on the order of 10^11 Bq kg^-1, the mass fraction w of tritiated water is approximated to within 1 part per million by w ≈ c × 2.22293×10^-18, i.e. the conversion is linear for all practical purposes. Terrestrial abundances serve as a proxy for non-tritium isotopes in the absence of sample-specific data. Variation in the relative abundances of non-tritium isotopes in the terrestrial hydrosphere produces a minimum range for the mantissa of the conversion factor of [2.22287; 2.22300].
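Since virtually all environmental samples fall far below the stated threshold, the linear approximation quoted above is all that is needed in practice; a minimal sketch:

```python
# Linear conversion from tritium units (TU) to mass fraction of tritiated
# water, per the approximation quoted above (valid for c well below
# 4.5e12 TU, i.e. essentially all environmental samples).
TU_TO_MASS_FRACTION = 2.22293e-18  # mass fraction of HTO per TU

def tritium_mass_fraction(c_tu):
    """Mass fraction w of tritiated water for a concentration c in TU."""
    return c_tu * TU_TO_MASS_FRACTION
```

As a plausibility check on the constant: 1 TU is one tritium atom per 10^18 hydrogen atoms, each water molecule carries two hydrogens, and HTO (about 20 g/mol) is heavier than ordinary water (about 18 g/mol), so the factor should be roughly 2 × (20/18) × 10^-18 ≈ 2.22×10^-18, consistent with the quoted mantissa range.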
Stochastic nature of Landsat MSS data
NASA Technical Reports Server (NTRS)
Labovitz, M. L.; Masuoka, E. J.
1987-01-01
A multiple-series generalization of the ARIMA models is used to model Landsat MSS scan lines as sequences of vectors, each vector having four elements (bands). The purpose of this work is to investigate whether Landsat scan lines can be described by a general multiple-series linear stochastic model and whether the coefficients of such a model vary as a function of satellite system and target attributes. To accomplish this objective, an exploratory experimental design was set up incorporating six factors, four representing target attributes - location, cloud cover, row (within location), and column (within location) - and two representing system attributes - satellite number and detector bank. Each factor was included in the design at two levels and, with two replicates per treatment, 128 scan lines were analyzed. The results of the analysis suggest that a multiple AR(4) model is an adequate representation across all scan lines. Furthermore, the coefficients of the AR(4) model vary with location, particularly changes in physiography (slope regimes), and with percent cloud cover, but are insensitive to changes in system attributes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Ambra, P.; Vassilevski, P. S.
2014-05-30
Adaptive Algebraic Multigrid (or Multilevel) Methods (αAMG) are introduced to improve robustness and efficiency of classical algebraic multigrid methods in dealing with problems where no a-priori knowledge or assumptions on the near-null kernel of the underlying matrix are available. Recently we proposed an adaptive (bootstrap) AMG method, αAMG, aimed to obtain a composite solver with a desired convergence rate. Each new multigrid component relies on a current (general) smooth vector and exploits pairwise aggregation based on weighted matching in a matrix graph to define a new automatic, general-purpose coarsening process, which we refer to as "the compatible weighted matching". In this work, we present results that broaden the applicability of our method to different finite element discretizations of elliptic PDEs. In particular, we consider systems arising from displacement methods in linear elasticity problems and saddle-point systems that appear in the application of the mixed method to Darcy problems.
On HPM approximation for the perihelion precession angle in general relativity
NASA Astrophysics Data System (ADS)
Shchigolev, Victor; Bezbatko, Dmitrii
2017-03-01
In this paper, the homotopy perturbation method (HPM) is applied to calculating the perihelion precession angle of planetary orbits in General Relativity. The HPM is quite efficient and is practically well suited for use in many astrophysical and cosmological problems. For our purpose, we applied HPM to the approximate solutions for the orbits in order to calculate the perihelion shift. On the basis of the main idea of HPM, we construct the appropriate homotopy, which leads to the problem of solving a set of linear algebraic equations. As a result, we obtain a simple formula for the angle of precession that avoids any restrictions on the smallness of physical parameters. First of all, we consider the simple examples of the Schwarzschild metric and the Reissner-Nordström spacetime of a charged star, for which the approximate geodesic solutions are known. Furthermore, the implementation of HPM has allowed us to readily obtain the precession angle for orbits in the gravitational field of a Kiselev black hole.
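For context, the benchmark such approximations must reproduce in the Schwarzschild case is the standard first-order GR result, Δφ = 6πGM/(c²a(1−e²)) per orbit. A quick numerical check with rounded orbital elements for Mercury (the constants and elements below are textbook values, not taken from the paper):

```python
import math

# First-order GR perihelion advance per orbit:
#   dphi = 6*pi*G*M / (c^2 * a * (1 - e^2))
# Evaluated for Mercury with rounded, illustrative orbital elements.
GM_SUN = 1.32712e20      # Sun's gravitational parameter, m^3/s^2
C = 2.99792458e8         # speed of light, m/s

def perihelion_advance(a, e):
    """Advance per orbit (radians) for semi-major axis a (m), eccentricity e."""
    return 6.0 * math.pi * GM_SUN / (C**2 * a * (1.0 - e**2))

# Mercury: a ~ 5.791e10 m, e ~ 0.2056, orbital period ~ 87.97 days
per_orbit = perihelion_advance(5.791e10, 0.2056)
orbits_per_century = 100 * 365.25 / 87.97
arcsec_per_century = math.degrees(per_orbit) * 3600 * orbits_per_century
```

The result lands near the famous 43 arcseconds per century, the figure any HPM-based approximation of the Schwarzschild geodesics should recover.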
Brown, Jennifer L.; Sales, Jessica M.; Swartzendruber, Andrea L.; Eriksen, Michael D.; DiClemente, Ralph J.; Rose, Eve S.
2014-01-01
Background: Adolescents experience elevated depressive symptoms which health promotion interventions may reduce. Purpose: This study investigated whether HIV prevention trial participation decreased depressive symptoms among African-American female adolescents. Methods: Adolescents (N=701; M age = 17.6) first received a group-delivered HIV prevention intervention and then either 12 sexual health (intervention condition) or 12 general health (comparison condition) phone counseling contacts over 24 months. ACASI assessments were conducted at baseline, and at 6-, 12-, 18-, and 24-months post-baseline. Linear generalized estimating equations were used to detect percent relative change in depressive symptoms. Results: Participants reported a 2.7% decrease in depressive symptoms (p = 0.001) at each assessment. Intervention participants endorsed an additional 3.6% decrease in depressive symptoms (p = 0.058). Conclusions: Trial participation was associated with reduced depressive symptomatology, particularly among those receiving personalized sexual health counseling. HIV prevention interventions may benefit from incorporating additional content to address adolescents' mental health needs. PMID:24366521
Krylov Subspace Methods for Complex Non-Hermitian Linear Systems. Thesis
NASA Technical Reports Server (NTRS)
Freund, Roland W.
1991-01-01
We consider Krylov subspace methods for the solution of large sparse linear systems Ax = b with complex non-Hermitian coefficient matrices. Such linear systems arise in important applications, such as inverse scattering, numerical solution of time-dependent Schrodinger equations, underwater acoustics, eddy current computations, numerical computations in quantum chromodynamics, and numerical conformal mapping. Typically, the resulting coefficient matrices A exhibit special structures, such as complex symmetry, or they are shifted Hermitian matrices. In this paper, we first describe a Krylov subspace approach with iterates defined by a quasi-minimal residual property, the QMR method, for solving general complex non-Hermitian linear systems. Then, we study special Krylov subspace methods designed for the two families of complex symmetric and shifted Hermitian linear systems, respectively. We also include some results concerning the obvious approach to general complex linear systems: solving equivalent real linear systems for the real and imaginary parts of x. Finally, numerical experiments for linear systems arising from the complex Helmholtz equation are reported.
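The "obvious approach" mentioned at the end can be made concrete: A x = b with A = Ar + i·Ai, x = xr + i·xi, b = br + i·bi is equivalent to a real system of twice the dimension. The sketch below solves that real system with dense Gaussian elimination purely for illustration (the thesis's point is to use Krylov methods such as QMR on the large sparse systems instead); the small example matrix is invented:

```python
# "Equivalent real system" formulation of a complex linear system:
#   [ Ar  -Ai ] [xr]   [br]
#   [ Ai   Ar ] [xi] = [bi]
# Solved with naive dense Gauss-Jordan elimination for illustration only.

def solve_real(M, v):
    """Gauss-Jordan elimination with partial pivoting (dense, naive)."""
    n = len(M)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]  # augment with v
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # partial pivot
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def solve_complex(Ar, Ai, br, bi):
    """Solve (Ar + i*Ai) x = (br + i*bi) via the 2n x 2n real system."""
    n = len(Ar)
    M = [Ar[i] + [-a for a in Ai[i]] for i in range(n)] + \
        [Ai[i] + Ar[i] for i in range(n)]
    s = solve_real(M, br + bi)
    return [complex(s[i], s[n + i]) for i in range(n)]
```

Note the real equivalent destroys the complex-symmetric structure that the specialized Krylov methods in the thesis exploit, which is part of why the direct complex treatment is preferred.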
Wu, Jibo
2016-01-01
In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold to the whole parameter space. Its mean-squared error matrix is compared with the generalized restricted difference-based estimator. Finally, the performance of the new estimator is explained by a simulation study and a numerical example.
On Generalizations of Cochran’s Theorem and Projection Matrices.
1980-08-01
"Definiteness of the Estimated Dispersion Matrix in a Multivariate Linear Model," F. Pukelsheim and George P. H. Styan, May 1978. TECHNICAL REPORTS ... "...with applications to the analysis of covariance," Proc. Cambridge Philos. Soc., 30, pp. 178-191. Graybill, F. A. and Marsaglia, G. (1957). "Idempotent matrices and quadratic forms in the general linear hypothesis," Ann. Math. Statist., 28, pp. 678-686. Greub, W. (1975). Linear Algebra (4th ed...
24 CFR 902.1 - Purpose and general description.
Code of Federal Regulations, 2010 CFR
2010-04-01
... URBAN DEVELOPMENT PUBLIC HOUSING ASSESSMENT SYSTEM General Provisions § 902.1 Purpose and general description. (a) Purpose. The purpose of the Public Housing Assessment System (PHAS) is to improve the delivery of services in public housing and enhance trust in the public housing system among public housing...
Generalized massive optimal data compression
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin
2018-05-01
In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
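Score compression can be illustrated with a toy one-parameter Gaussian model: data d_i ~ N(θ·x_i, σ²) with known σ, so the single compressed statistic is the score t = Σ_i x_i(d_i − θ₀x_i)/σ² at a fiducial θ₀, and the Fisher information is F = Σ_i x_i²/σ². The model, numbers, and function names below are an invented illustration, not the paper's general derivation:

```python
# Toy score compression for Gaussian data with parameter-dependent mean:
#   d_i ~ N(theta * x_i, sigma^2), known sigma.
# The score at a fiducial theta0 compresses all N data points into one
# statistic that loses no Fisher information for this model.

def score_compress(d, x, theta0, sigma):
    """Score t = dlnL/dtheta evaluated at theta0."""
    return sum(xi * (di - theta0 * xi) for di, xi in zip(d, x)) / sigma**2

def fisher(x, sigma):
    """Fisher information F = sum_i x_i^2 / sigma^2 (theta-independent here)."""
    return sum(xi * xi for xi in x) / sigma**2

x = [1.0, 2.0, 3.0]          # illustrative "design" values
d = [1.5, 3.0, 4.5]          # noiseless data generated with theta = 1.5
t = score_compress(d, x, theta0=1.0, sigma=1.0)
theta_hat = 1.0 + t / fisher(x, 1.0)   # one Newton step from the fiducial
```

Because the mean is linear in θ here, the single Newton step from the compressed statistic recovers the generating parameter exactly; in the general non-linear case the paper discusses, the step is iterated or interpreted as a quasi-MLE.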
Use of general purpose graphics processing units with MODFLOW
Hughes, Joseph D.; White, Jeremy T.
2013-01-01
To evaluate the use of general-purpose graphics processing units (GPGPUs) to improve the performance of MODFLOW, an unstructured preconditioned conjugate gradient (UPCG) solver has been developed. The UPCG solver uses a compressed sparse row storage scheme and includes Jacobi, zero fill-in incomplete, and modified-incomplete lower-upper (LU) factorization, and generalized least-squares polynomial preconditioners. The UPCG solver also includes options for sequential and parallel solution on the central processing unit (CPU) using OpenMP. For simulations utilizing the GPGPU, all basic linear algebra operations are performed on the GPGPU; memory copies between the CPU and GPGPU occur prior to the first iteration of the UPCG solver and after satisfying head and flow criteria or exceeding a maximum number of iterations. The efficiency of the UPCG solver for GPGPU and CPU solutions is benchmarked using simulations of a synthetic, heterogeneous unconfined aquifer with tens of thousands to millions of active grid cells. Testing indicates GPGPU speedups on the order of 2 to 8, relative to the standard MODFLOW preconditioned conjugate gradient (PCG) solver, can be achieved when (1) memory copies between the CPU and GPGPU are optimized, (2) the percentage of time performing memory copies between the CPU and GPGPU is small relative to the calculation time, (3) high-performance GPGPU cards are utilized, and (4) CPU-GPGPU combinations are used to execute sequential operations that are difficult to parallelize. Furthermore, UPCG solver testing indicates GPGPU speedups exceed parallel CPU speedups achieved using OpenMP on multicore CPUs for preconditioners that can be easily parallelized.
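The Jacobi (diagonal) preconditioner is the simplest of the options listed above. A dense pure-Python sketch of Jacobi-preconditioned conjugate gradients follows; the actual UPCG solver operates on CSR-stored sparse systems with the linear algebra offloaded to the GPGPU, and the small SPD test matrix here is invented:

```python
# Sketch of preconditioned conjugate gradients with the Jacobi (diagonal)
# preconditioner M = diag(A). Dense and sequential for illustration; the
# UPCG solver works on compressed-sparse-row matrices, on CPU or GPGPU.
def jacobi_pcg(A, b, tol=1e-10, max_iter=200):
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                                    # residual b - A x (x = 0)
    z = [r[i] / A[i][i] for i in range(n)]      # apply M^-1 = diag(A)^-1
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if max(abs(ri) for ri in r) < tol:      # converged on residual
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

Every operation in the loop (matrix-vector product, dot products, vector updates, the diagonal solve) is a basic linear-algebra kernel, which is exactly why the abstract notes that the iteration maps cleanly onto a GPGPU once the matrix and vectors reside in device memory.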
22 CFR 309.1 - General purpose.
Code of Federal Regulations, 2010 CFR
2010-04-01
...-tax debts owed to Peace Corps and to the United States. ... 22 Foreign Relations 2 2010-04-01 2010-04-01 true General purpose. 309.1 Section 309.1 Foreign Relations PEACE CORPS DEBT COLLECTION General Provisions § 309.1 General purpose. This part prescribes the...
47 CFR 32.6124 - General purpose computers expense.
Code of Federal Regulations, 2013 CFR
2013-10-01
... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...
47 CFR 32.6124 - General purpose computers expense.
Code of Federal Regulations, 2014 CFR
2014-10-01
... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...
47 CFR 32.6124 - General purpose computers expense.
Code of Federal Regulations, 2011 CFR
2011-10-01
... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...
47 CFR 32.6124 - General purpose computers expense.
Code of Federal Regulations, 2012 CFR
2012-10-01
... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...
47 CFR 32.6124 - General purpose computers expense.
Code of Federal Regulations, 2010 CFR
2010-10-01
... is the physical operation of general purpose computers and the maintenance of operating systems. This... UNIFORM SYSTEM OF ACCOUNTS FOR TELECOMMUNICATIONS COMPANIES Instructions for Expense Accounts § 32.6124... application systems and databases for general purpose computers. (See also § 32.6720, General and...
Medina, Daniel C.; Findley, Sally E.; Guindo, Boubacar; Doumbia, Seydou
2007-01-01
Background Much of the developing world, particularly sub-Saharan Africa, exhibits high levels of morbidity and mortality associated with diarrhea, acute respiratory infection, and malaria. With the increasing awareness that the aforementioned infectious diseases impose an enormous burden on developing countries, public health programs therein could benefit from parsimonious general-purpose forecasting methods to enhance infectious disease intervention. Unfortunately, these disease time-series often i) suffer from non-stationarity; ii) exhibit large inter-annual plus seasonal fluctuations; and, iii) require disease-specific tailoring of forecasting methods. Methodology/Principal Findings In this longitudinal retrospective (01/1996–06/2004) investigation, diarrhea, acute respiratory infection of the lower tract, and malaria consultation time-series are fitted with a general-purpose econometric method, namely the multiplicative Holt-Winters, to produce contemporaneous on-line forecasts for the district of Niono, Mali. This method accommodates seasonal, as well as inter-annual, fluctuations and produces reasonably accurate median 2- and 3-month horizon forecasts for these non-stationary time-series, i.e., 92% of the 24 time-series forecasts generated (2 forecast horizons, 3 diseases, and 4 age categories = 24 time-series forecasts) have mean absolute percentage errors circa 25%. 
Conclusions/Significance The multiplicative Holt-Winters forecasting method: i) performs well across diseases with dramatically distinct transmission modes and hence it is a strong general-purpose forecasting method candidate for non-stationary epidemiological time-series; ii) obliquely captures prior non-linear interactions between climate and the aforementioned disease dynamics thus, obviating the need for more complex disease-specific climate-based parametric forecasting methods in the district of Niono; furthermore, iii) readily decomposes time-series into seasonal components thereby potentially assisting with programming of public health interventions, as well as monitoring of disease dynamics modification. Therefore, these forecasts could improve infectious diseases management in the district of Niono, Mali, and elsewhere in the Sahel. PMID:18030322
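The multiplicative Holt-Winters recursion used above is compact enough to sketch directly. The smoothing weights, seasonal period, and toy series below are illustrative defaults, not values fitted to the Niono consultation data.

```python
import numpy as np

def holt_winters_mult(y, m, alpha=0.3, beta=0.1, gamma=0.2, horizon=3):
    # Multiplicative Holt-Winters: y_t ~ (level + trend) * seasonal index
    level, trend = y[0], (y[m] - y[0]) / m
    season = list(y[:m] / np.mean(y[:m]))        # initial seasonal indices
    for t in range(m, len(y)):
        s = season[t % m]
        new_level = alpha * y[t] / s + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        season[t % m] = gamma * y[t] / new_level + (1 - gamma) * s
        level = new_level
    # h-step-ahead forecasts reuse the most recent seasonal indices
    return [(level + h * trend) * season[(len(y) + h - 1) % m]
            for h in range(1, horizon + 1)]

# Toy monthly series with an upward trend and a 12-month seasonal cycle
t = np.arange(60)
y = (50 + 0.5 * t) * (1 + 0.3 * np.sin(2 * np.pi * t / 12))
fc = holt_winters_mult(y, m=12)
print([round(v, 1) for v in fc])   # 1-, 2-, and 3-month horizon forecasts
```

The multiplicative seasonal term is what lets the method track inter-annual growth in the amplitude of seasonal epidemic peaks.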
Non-Linear Approach in Kinesiology Should Be Preferred to the Linear--A Case of Basketball.
Trninić, Marko; Jeličić, Mario; Papić, Vladan
2015-07-01
In kinesiology, medicine, biology and psychology, in which research focus is on dynamical self-organized systems, complex connections exist between variables. The non-linear nature of complex systems has been discussed and explained by the example of non-linear anthropometric predictors of performance in basketball. Previous studies interpreted relations between anthropometric features and measures of effectiveness in basketball by (a) using linear correlation models, and by (b) including all basketball athletes in the same sample of participants regardless of their playing position. In this paper the significance and character of linear and non-linear relations between simple anthropometric predictors (AP) and performance criteria consisting of situation-related measures of effectiveness (SE) in basketball were determined and evaluated. The sample of participants consisted of top-level junior basketball players divided in three groups according to their playing time (8 minutes and more per game) and playing position: guards (N = 42), forwards (N = 26) and centers (N = 40). Linear (general model) and non-linear (general model) regression models were calculated simultaneously and separately for each group. The conclusion is clear: non-linear regressions are frequently superior to linear correlations when interpreting the actual association logic among research variables.
NASA Astrophysics Data System (ADS)
Kaplan, Melike; Hosseini, Kamyar; Samadani, Farzan; Raza, Nauman
2018-07-01
A wide range of problems in different fields of the applied sciences, especially non-linear optics, is described by non-linear Schrödinger's equations (NLSEs). In the present paper, a specific type of NLSE known as the cubic-quintic non-linear Schrödinger's equation including an anti-cubic term has been studied. The generalized Kudryashov method, along with a symbolic computation package, has been employed to carry out this objective. As a consequence, a series of optical soliton solutions have formally been retrieved. It is corroborated that the generalized form of the Kudryashov method is a direct, effectual, and reliable technique to deal with various types of non-linear Schrödinger's equations.
Burgansky-Eliash, Zvia; Wollstein, Gadi; Chu, Tianjiao; Ramsey, Joseph D.; Glymour, Clark; Noecker, Robert J.; Ishikawa, Hiroshi; Schuman, Joel S.
2007-01-01
Purpose Machine-learning classifiers are trained computerized systems with the ability to detect the relationship between multiple input parameters and a diagnosis. The present study investigated whether the use of machine-learning classifiers improves optical coherence tomography (OCT) glaucoma detection. Methods Forty-seven patients with glaucoma (47 eyes) and 42 healthy subjects (42 eyes) were included in this cross-sectional study. Of the glaucoma patients, 27 had early disease (visual field mean deviation [MD] ≥ −6 dB) and 20 had advanced glaucoma (MD < −6 dB). Machine-learning classifiers were trained to discriminate between glaucomatous and healthy eyes using parameters derived from OCT output. The classifiers were trained with all 38 parameters as well as with only 8 parameters that correlated best with the visual field MD. Five classifiers were tested: linear discriminant analysis, support vector machine, recursive partitioning and regression tree, generalized linear model, and generalized additive model. For the last two classifiers, a backward feature selection was used to find the minimal number of parameters that resulted in the best and most simple prediction. The cross-validated receiver operating characteristic (ROC) curve and accuracies were calculated. Results The largest area under the ROC curve (AROC) for glaucoma detection was achieved with the support vector machine using eight parameters (0.981). The sensitivity at 80% and 95% specificity was 97.9% and 92.5%, respectively. This classifier also performed best when judged by cross-validated accuracy (0.966). The best classification between early glaucoma and advanced glaucoma was obtained with the generalized additive model using only three parameters (AROC = 0.854). Conclusions Automated machine classifiers of OCT data might be useful for enhancing the utility of this technology for detecting glaucomatous abnormality. PMID:16249492
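The study design above, training a classifier on multi-parameter imaging features and judging it by the area under the ROC curve, can be illustrated with synthetic data. The two-class Fisher linear discriminant and the Gaussian "OCT-like" features below are stand-ins, not the study's classifiers or parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat = 8   # echoing the 8-parameter subset used in the study
healthy = rng.normal(0.0, 1.0, size=(40, n_feat))
glaucoma = rng.normal(0.8, 1.0, size=(40, n_feat))  # shifted class means

X = np.vstack([healthy, glaucoma])
y = np.array([0] * 40 + [1] * 40)

# Fisher linear discriminant: w = S_w^{-1} (mu1 - mu0)
mu0, mu1 = healthy.mean(0), glaucoma.mean(0)
Sw = np.cov(healthy.T) + np.cov(glaucoma.T)
w = np.linalg.solve(Sw, mu1 - mu0)
scores = X @ w

# Area under the ROC curve via the rank-sum (Mann-Whitney) identity
ranks = scores.argsort().argsort() + 1
auc = (ranks[y == 1].sum() - 40 * 41 / 2) / (40 * 40)
print(round(auc, 2))
```

Cross-validation, as used in the study, would repeat this fit on held-out splits rather than scoring the training set.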
The microcomputer scientific software series 2: general linear model--regression.
Harold M. Rauscher
1983-01-01
The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
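The core quantities such a regression program reports, least-squares coefficient estimates, their confidence intervals, and the regression ANOVA decomposition, can be sketched with numpy. The data are illustrative, and for brevity the intervals use a normal critical value (1.96) where a real program like GLMR would use the t distribution.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
x1, x2 = rng.uniform(0, 10, n), rng.uniform(0, 5, n)
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), x1, x2])        # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # coefficient estimates
resid = y - X @ beta

# Regression ANOVA table pieces
df_model, df_resid = X.shape[1] - 1, n - X.shape[1]
ss_total = ((y - y.mean()) ** 2).sum()
ss_resid = (resid ** 2).sum()
ss_model = ss_total - ss_resid
F = (ss_model / df_model) / (ss_resid / df_resid)   # overall F statistic

# Approximate 95% confidence intervals for the coefficients
sigma2 = ss_resid / df_resid
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
ci = np.column_stack([beta - 1.96 * se, beta + 1.96 * se])
print(beta.round(2), F > 10)
```

A multicollinearity check, also mentioned above, would typically inspect the condition number of X or variance inflation factors.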
Knowledge to Manage the Knowledge Society
ERIC Educational Resources Information Center
Minati, Gianfranco
2012-01-01
Purpose: The purpose of this research is to make evident the inadequateness of concepts and language based on industrial knowledge still used in current practices by managers to cope with problems of the post-industrial societies characterised by non-linear process of emergence and acquisition of properties. The purpose is to allow management to…
Single-Scale Retinex Using Digital Signal Processors
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2005-01-01
The Retinex is an image enhancement algorithm that improves the brightness, contrast and sharpness of an image. It performs a non-linear spatial/spectral transform that provides simultaneous dynamic range compression and color constancy. It has been used for a wide variety of applications ranging from aviation safety to general purpose photography. Many potential applications require the use of Retinex processing at video frame rates. This is difficult to achieve with general purpose processors because the algorithm contains a large number of complex computations and data transfers. In addition, many of these applications also constrain the potential architectures to embedded processors to save power, weight and cost. Thus we have focused on digital signal processors (DSPs) and field programmable gate arrays (FPGAs) as potential solutions for real-time Retinex processing. In previous efforts we attained a 21 (full) frame per second (fps) processing rate for the single-scale monochromatic Retinex with a TMS320C6711 DSP operating at 150 MHz. This was achieved after several significant code improvements and optimizations. Since then we have migrated our design to the slightly more powerful TMS320C6713 DSP and the fixed point TMS320DM642 DSP. In this paper we briefly discuss the Retinex algorithm, the performance of the algorithm executing on the TMS320C6713 and the TMS320DM642, and compare the results with the TMS320C6711.
1 CFR 2.1 - Scope and purpose.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 1 General Provisions 1 2010-01-01 2010-01-01 false Scope and purpose. 2.1 Section 2.1 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER GENERAL GENERAL INFORMATION § 2.1 Scope and purpose. (a) This chapter sets forth the policies, procedures, and delegations under which the...
1 CFR 2.1 - Scope and purpose.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 1 General Provisions 1 2011-01-01 2011-01-01 false Scope and purpose. 2.1 Section 2.1 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER GENERAL GENERAL INFORMATION § 2.1 Scope and purpose. (a) This chapter sets forth the policies, procedures, and delegations under which the...
1 CFR 2.1 - Scope and purpose.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 1 General Provisions 1 2014-01-01 2012-01-01 true Scope and purpose. 2.1 Section 2.1 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER GENERAL GENERAL INFORMATION § 2.1 Scope and purpose. (a) This chapter sets forth the policies, procedures, and delegations under which the...
1 CFR 2.1 - Scope and purpose.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 1 General Provisions 1 2012-01-01 2012-01-01 false Scope and purpose. 2.1 Section 2.1 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER GENERAL GENERAL INFORMATION § 2.1 Scope and purpose. (a) This chapter sets forth the policies, procedures, and delegations under which the...
1 CFR 2.1 - Scope and purpose.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 1 General Provisions 1 2013-01-01 2012-01-01 true Scope and purpose. 2.1 Section 2.1 General Provisions ADMINISTRATIVE COMMITTEE OF THE FEDERAL REGISTER GENERAL GENERAL INFORMATION § 2.1 Scope and purpose. (a) This chapter sets forth the policies, procedures, and delegations under which the...
The Use of Shrinkage Techniques in the Estimation of Attrition Rates for Large Scale Manpower Models
1988-07-27
auto regressive model combined with a linear program that solves for the coefficients using MAD. But this success has diminished with time (Rowe... "Harrison-Stevens Forecasting and the Multiprocess Dynamic Linear Model", The American Statistician, v. 40, pp. 129-135, 1986. 8. Box, G. E. P. and... 1950. 40. McCullagh, P. and Nelder, J., Generalized Linear Models, Chapman and Hall, 1983. 41. McKenzie, E., General Exponential Smoothing and the
The Linear Mixing Approximation for Planetary Ices
NASA Astrophysics Data System (ADS)
Bethkenhagen, M.; Meyer, E. R.; Hamel, S.; Nettelmann, N.; French, M.; Scheibe, L.; Ticknor, C.; Collins, L. A.; Kress, J. D.; Fortney, J. J.; Redmer, R.
2017-12-01
We investigate the validity of the widely used linear mixing approximation for the equations of state (EOS) of planetary ices, which are thought to dominate the interior of the ice giant planets Uranus and Neptune. For that purpose we perform density functional theory molecular dynamics simulations using the VASP code.[1] In particular, we compute 1:1 binary mixtures of water, ammonia, and methane, as well as their 2:1:4 ternary mixture at pressure-temperature conditions typical for the interior of Uranus and Neptune.[2,3] In addition, a new ab initio EOS for methane is presented. The linear mixing approximation is verified for the conditions present inside Uranus ranging up to 10 Mbar based on the comprehensive EOS data set. We also calculate the diffusion coefficients for the ternary mixture along different Uranus interior profiles and compare them to the values of the pure compounds. We find that deviations of the linear mixing approximation from the real mixture are generally small; for the EOS they fall within about 4% uncertainty while the diffusion coefficients deviate by up to 20%. The EOS of planetary ices are applied to adiabatic models of Uranus. It turns out that a deep interior of almost pure ices is consistent with the gravity field data, in which case the planet becomes rather cold (T_core ≈ 4000 K). [1] G. Kresse and J. Hafner, Physical Review B 47, 558 (1993). [2] R. Redmer, T.R. Mattsson, N. Nettelmann and M. French, Icarus 211, 798 (2011). [3] N. Nettelmann, K. Wang, J. J. Fortney, S. Hamel, S. Yellamilli, M. Bethkenhagen and R. Redmer, Icarus 275, 107 (2016).
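The linear mixing rule being tested is simply additive volumes at fixed pressure and temperature. The sketch below applies it to the 2:1:4 water:ammonia:methane mixture by mole; the component densities are illustrative placeholders, not the paper's ab initio EOS values.

```python
# Linear (additive-volume) mixing: 1/rho_mix = sum_i f_i / rho_i,
# with mass fractions f_i summing to 1.
def linear_mix_density(mass_fracs, densities):
    return 1.0 / sum(f / r for f, r in zip(mass_fracs, densities))

# 2:1:4 water:ammonia:methane by mole, converted to mass fractions
molar_masses = {"H2O": 18.015, "NH3": 17.031, "CH4": 16.043}
moles = {"H2O": 2, "NH3": 1, "CH4": 4}
masses = {k: moles[k] * molar_masses[k] for k in moles}
total = sum(masses.values())
fracs = [masses[k] / total for k in ("H2O", "NH3", "CH4")]

# Placeholder pure-component densities at some common (P, T), in g/cm^3
rho = linear_mix_density(fracs, [2.5, 2.2, 1.8])
print(round(rho, 3))
```

The paper's test amounts to comparing such a linearly mixed density against the density of the fully interacting ternary simulation at the same (P, T).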
Relations between basic and specific motor abilities and player quality of young basketball players.
Marić, Kristijan; Katić, Ratko; Jelicić, Mario
2013-05-01
Subjects from 5 first league clubs from Herzegovina were tested with the purpose of determining the relations of basic and specific motor abilities, as well as the effect of specific abilities on player efficiency in young basketball players (cadets). A battery of 12 tests assessing basic motor abilities and 5 specific tests assessing basketball efficiency were used on a sample of 83 basketball players. Two significant canonical correlations, i.e., linear combinations, explained the relation between the set of twelve variables of basic motor space and five variables of situational motor abilities. Underlying the first canonical linear combination is the positive effect of the general motor factor, predominantly defined by jumping explosive power, movement speed of the arms, static strength of the arms and coordination, on specific basketball abilities: movement efficiency, the power of the overarm throw, shooting and passing precision, and the skill of handling the ball. The impact of the basic motor abilities of precision and balance on the specific abilities of passing and shooting precision and ball handling underlies the second linear combination. The results of regression correlation analysis between the variable set of specific motor abilities and game efficiency have shown that the ability of ball handling has the largest impact on player quality in basketball cadets, followed by shooting precision and passing precision, and the power of the overarm throw.
Kirchberger, Martin; Russo, Frank A.
2016-01-01
Dynamic range compression serves different purposes in the music and hearing-aid industries. In the music industry, it is used to make music louder and more attractive to normal-hearing listeners. In the hearing-aid industry, it is used to map the variable dynamic range of acoustic signals to the reduced dynamic range of hearing-impaired listeners. Hence, hearing-aided listeners will typically receive a dual dose of compression when listening to recorded music. The present study involved an acoustic analysis of dynamic range across a cross section of recorded music as well as a perceptual study comparing the efficacy of different compression schemes. The acoustic analysis revealed that the dynamic range of samples from popular genres, such as rock or rap, was generally smaller than the dynamic range of samples from classical genres, such as opera and orchestra. By comparison, the dynamic range of speech, based on recordings of monologues in quiet, was larger than the dynamic range of all music genres tested. The perceptual study compared the effect of the prescription rule NAL-NL2 with a semicompressive and a linear scheme. Music subjected to linear processing had the highest ratings for dynamics and quality, followed by the semicompressive and the NAL-NL2 setting. These findings advise against NAL-NL2 as a prescription rule for recorded music and recommend linear settings. PMID:26868955
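The two operations analyzed above, measuring a recording's dynamic range and applying compression, can be sketched numerically: here dynamic range is taken as the spread between loud and quiet short-term RMS levels in dB, and the compressor is a simple static gain curve. The signal, frame size, threshold, and ratio are illustrative choices, not the study's analysis settings.

```python
import numpy as np

def frame_levels_db(x, frame=1024):
    # Short-term RMS level per frame, in dB
    frames = x[: len(x) // frame * frame].reshape(-1, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1)) + 1e-12
    return 20 * np.log10(rms)

def compress_db(levels_db, threshold=-20.0, ratio=3.0):
    # Static compressor: levels above threshold are scaled down by `ratio`
    over = np.maximum(levels_db - threshold, 0.0)
    return levels_db - over * (1 - 1 / ratio)

rng = np.random.default_rng(3)
# Noise with a slowly rising envelope, standing in for a recording
x = rng.normal(0, 1, 64 * 1024) * np.linspace(0.05, 1.0, 64 * 1024)
lv = frame_levels_db(x)
dr_before = np.percentile(lv, 95) - np.percentile(lv, 10)
dr_after = np.percentile(compress_db(lv), 95) - np.percentile(compress_db(lv), 10)
print(dr_before > dr_after)   # compression narrows the level distribution
```

The "dual dose" problem described above corresponds to applying a second such gain curve, the hearing aid's, to material whose level distribution has already been narrowed in mastering.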
Peripheral generators of the vestibular evoked potentials (VsEPs) in the chick.
Weisleder, P; Jones, T A; Rubel, E W
1990-10-01
Electrophysiological activity in response to linear acceleration stimuli was recorded from young chickens by means of subcutaneous electrodes. This investigation had 2 purposes: (1) to establish the vestibular origin of the potentials; and (2) to investigate the contribution of each vestibular labyrinth to the response. The stimuli consisted of pulses of linear acceleration delivered by a mechanical vibrator (shaker). In the first set of experiments vestibular evoked potentials (VsEPs) were recorded prior to and 24 h after bilateral cochlea removal. In the second set of experiments responses were recorded before and after unilateral or bilateral intralabyrinthine injections of tetrodotoxin (TTX). Different groups of subjects were used for each experimental condition. The general morphology of the VsEPs was maintained after bilateral cochlea removal. Absolute latency of wave P2, the most prominent component of the response, was not significantly affected by the manipulation. Unilateral intralabyrinthine TTX injections consistently prolonged the latency and reduced the amplitude of wave P2. Following binaural TTX injections we were unable to elicit responses at the acceleration levels used in this study. The results from these experiments suggest that: (1) the activity recorded in response to linear acceleration stimuli is vestibular in origin; (2) when recorded from intact animals the evoked response is composed of activity from both vestibular systems; and (3) TTX consistently blocks the activity of the vestibular portion of the VIIIth cranial nerve.
Comparison of dynamical approximation schemes for non-linear gravitational clustering
NASA Technical Reports Server (NTRS)
Melott, Adrian L.
1994-01-01
We have recently conducted a controlled comparison of a number of approximations for gravitational clustering against the same n-body simulations. These include ordinary linear perturbation theory (Eulerian), the adhesion approximation, the frozen-flow approximation, the Zel'dovich approximation (describable as first-order Lagrangian perturbation theory), and its second-order generalization. In the last two cases we also created new versions of approximation by truncation, i.e., smoothing the initial conditions by various smoothing window shapes and varying their sizes. The primary tool for comparing simulations to approximation schemes was cross-correlation of the evolved mass density fields, testing the extent to which mass was moved to the right place. The Zel'dovich approximation, with initial convolution with a Gaussian e^(-k^2/k_G^2), where k_G is adjusted to be just into the nonlinear regime of the evolved model (details in text), worked extremely well. Its second-order generalization worked slightly better. All other schemes, including those proposed as generalizations of the Zel'dovich approximation created by adding forces, were in fact generally worse by this measure. By explicitly checking, we verified that the success of our best choice was a result of the best treatment of the phases of nonlinear Fourier components. Of all schemes tested, the adhesion approximation produced the most accurate nonlinear power spectrum and density distribution, but its phase errors suggest mass condensations were moved to slightly the wrong location. Due to its better reproduction of the mass density distribution function and power spectrum, it might be preferred for some uses. We recommend either n-body simulations or our modified versions of the Zel'dovich approximation, depending upon the purpose.
The theoretical implication is that pancaking is implicit in all cosmological gravitational clustering, at least from Gaussian initial conditions, even when subcondensations are present.
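The truncated Zel'dovich scheme described above can be sketched in one dimension: particles move ballistically along an initial displacement field that has first been smoothed with the Gaussian window e^(-k^2/k_G^2) in Fourier space. The grid size, initial spectrum, k_G, and growth factor below are illustrative, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 256
k = 2 * np.pi * np.fft.fftfreq(N)           # wavenumbers, grid spacing = 1
delta_k = np.fft.fft(rng.normal(0, 1, N))   # white-noise initial density field
window = np.exp(-k**2 / 0.5**2)             # Gaussian truncation, k_G = 0.5

# 1-D displacement field from the continuity equation: psi_k = i delta_k / k
with np.errstate(divide="ignore", invalid="ignore"):
    psi_k = np.where(k != 0, 1j * delta_k * window / k, 0.0)
psi = np.fft.ifft(psi_k).real

D = 1.0                                     # linear growth factor
positions = (np.arange(N) + D * psi) % N    # Zel'dovich mapping x = q + D*psi

# Evolved density from particle positions (counts per cell)
rho, _ = np.histogram(positions, bins=N, range=(0, N))
print(rho.mean())                           # mass conserved: 1 particle/cell
```

Cross-correlating a density field built this way against the n-body result is the comparison measure the abstract describes.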
Oh, Gye-Jeong; Yun, Kwi-Dug; Lee, Kwang-Min; Lim, Hyun-Pil
2010-01-01
PURPOSE The purpose of this study was to compare the linear sintering behavior of presintered zirconia blocks of various densities. The mechanical properties of the resulting sintered zirconia blocks were then analyzed. MATERIALS AND METHODS Three experimental groups of dental zirconia blocks, each with a different presintering density, were designed in the present study. Kavo Everest® ZS blanks (Kavo, Biberach, Germany) were used as a control group. The experimental group blocks were fabricated from commercial yttria-stabilized tetragonal zirconia powder (KZ-3YF (SD) Type A, KCM. Corporation, Nagoya, Japan). The biaxial flexural strengths, microhardnesses, and microstructures of the sintered blocks were then investigated. The linear sintering shrinkages of the blocks were calculated and compared. RESULTS Despite their different presintered densities, the sintered blocks of the control and experimental groups showed similar mechanical properties. However, the sintered blocks showed different linear sintering shrinkage rates depending on the density of the presintered blocks. As the density of the presintered block increased, the linear sintering shrinkage decreased. In the experimental blocks, the three sectioned pieces of each block showed different linear shrinkages depending on the area. The tops of the experimental blocks showed the lowest linear sintering shrinkage, whereas the bottoms of the experimental blocks showed the highest linear sintering shrinkage. CONCLUSION Within the limitations of this study, the density difference of the presintered zirconia block did not affect the mechanical properties of the sintered zirconia block, but affected the linear sintering shrinkage of the zirconia block. PMID:21165274
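The density-shrinkage relation reported above follows from mass conservation for isotropic sintering: the linear shrinkage needed to densify a block scales as the cube root of the density ratio, so a denser presintered block shrinks less. The densities below are illustrative, not the study's measurements.

```python
# For isotropic shrinkage, L_sint / L_pre = (rho_pre / rho_sint)^(1/3),
# so the linear shrinkage fraction is 1 - (rho_pre / rho_sint)^(1/3).
def linear_shrinkage(rho_pre, rho_sint):
    return 1.0 - (rho_pre / rho_sint) ** (1.0 / 3.0)

rho_sintered = 6.05                    # g/cm^3, near-full-density zirconia
for rho_pre in (2.8, 3.0, 3.2):        # increasing presintered density
    s = linear_shrinkage(rho_pre, rho_sintered)
    print(f"{rho_pre:.1f} g/cm^3 presintered -> {100 * s:.1f}% linear shrinkage")
```

The within-block top-to-bottom shrinkage gradient the study observed corresponds to a density gradient in the presintered block under this same relation.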
Huppert, Theodore J
2016-01-01
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique that uses low levels of light to measure changes in cerebral blood oxygenation levels. In the majority of NIRS functional brain studies, analysis of this data is based on a statistical comparison of hemodynamic levels between a baseline and task or between multiple task conditions by means of a linear regression model: the so-called general linear model. Although these methods are similar to their implementation in other fields, particularly for functional magnetic resonance imaging, the specific application of these methods in fNIRS research differs in several key ways related to the sources of noise and artifacts unique to fNIRS. In this brief communication, we discuss the application of linear regression models in fNIRS and the modifications needed to generalize these models in order to deal with structured (colored) noise due to systemic physiology and noise heteroscedasticity due to motion artifacts. The objective of this work is to present an overview of these noise properties in the context of the linear model as it applies to fNIRS data. This work is aimed at explaining these mathematical issues to the general fNIRS experimental researcher but is not intended to be a complete mathematical treatment of these concepts.
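One of the generalizations discussed above, handling serially correlated ("colored") noise in the linear model, is commonly done by AR(1) prewhitening: estimate the autocorrelation from ordinary-least-squares residuals, filter both sides of the model, and refit. The design, AR coefficient, and noise below are toy choices, not an fNIRS pipeline.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500
X = np.column_stack([np.ones(n), np.sin(np.arange(n) / 20.0)])  # design matrix
beta_true = np.array([0.5, 2.0])

e = np.zeros(n)                       # AR(1) noise with rho = 0.8
for t in range(1, n):
    e[t] = 0.8 * e[t - 1] + rng.normal(0, 0.3)
y = X @ beta_true + e

# Step 1: ordinary least squares, then estimate the AR(1) coefficient
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
r = y - X @ b_ols
rho = (r[1:] @ r[:-1]) / (r[:-1] @ r[:-1])

# Step 2: prewhiten (v_t -> v_t - rho * v_{t-1}) and refit
yw = y[1:] - rho * y[:-1]
Xw = X[1:] - rho * X[:-1]
b_gls, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
print(np.round(b_gls, 1))
```

Motion-artifact heteroscedasticity, the other issue raised above, is typically handled by iteratively reweighting the rows of this filtered model rather than treating all time points equally.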
Implementing general quantum measurements on linear optical and solid-state qubits
NASA Astrophysics Data System (ADS)
Ota, Yukihiro; Ashhab, Sahel; Nori, Franco
2013-03-01
We show a systematic construction for implementing general measurements on a single qubit, including both strong (or projection) and weak measurements. We mainly focus on linear optical qubits. The present approach is composed of simple and feasible elements, i.e., beam splitters, wave plates, and polarizing beam splitters. We show how the parameters characterizing the measurement operators are controlled by the linear optical elements. We also propose a method for the implementation of general measurements in solid-state qubits. Furthermore, we show an interesting application of the general measurements, i.e., entanglement amplification. YO is partially supported by the SPDR Program, RIKEN. SA and FN acknowledge ARO, NSF grant No. 0726909, JSPS-RFBR contract No. 12-02-92100, Grant-in-Aid for Scientific Research (S), MEXT Kakenhi on Quantum Cybernetics, and the JSPS via its FIRST program.
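The defining property of a general two-outcome qubit measurement, interpolating between weak and projective, is that its operators satisfy the completeness relation M0†M0 + M1†M1 = I. The angle parameterization below is a generic textbook form (projective at theta = 0, uninformative at theta = pi/2), not the paper's optical circuit.

```python
import numpy as np

theta = 0.3   # measurement strength parameter (illustrative)
M0 = np.diag([np.cos(theta / 2), np.sin(theta / 2)])
M1 = np.diag([np.sin(theta / 2), np.cos(theta / 2)])

# Completeness: outcome probabilities sum to 1 for every input state
completeness = M0.conj().T @ M0 + M1.conj().T @ M1
print(np.allclose(completeness, np.eye(2)))  # True

# Post-measurement state for outcome 0 on |+> = (|0> + |1>)/sqrt(2)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
p0 = np.linalg.norm(M0 @ plus) ** 2
post = M0 @ plus / np.sqrt(p0)
print(round(p0, 2))                          # 0.5 by symmetry of |+>
```

In the linear optical construction described above, the angles playing the role of theta are set by wave-plate and beam-splitter parameters.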
7 CFR 226.1 - General purpose and scope.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 226.1 Section 226.1 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS CHILD AND ADULT CARE FOOD PROGRAM General § 226.1 General purpose and scope. This part announces the...
7 CFR 225.1 - General purpose and scope.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 4 2010-01-01 2010-01-01 false General purpose and scope. 225.1 Section 225.1 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS SUMMER FOOD SERVICE PROGRAM General § 225.1 General purpose and scope. This part establishes the regulations...
Linear systems with structure group and their feedback invariants
NASA Technical Reports Server (NTRS)
Martin, C.; Hermann, R.
1977-01-01
A general method described by Hermann and Martin (1976) for the study of the feedback invariants of linear systems is considered. It is shown that this method, which makes use of ideas of topology and algebraic geometry, is very useful in the investigation of feedback problems for which the classical methods are not suitable. The transfer function as a curve in the Grassmannian is examined. The general concepts studied in the context of specific systems and applications are organized in terms of the theory of Lie groups and algebraic geometry. Attention is given to linear systems which have a structure group, linear mechanical systems, and feedback invariants. The investigation shows that Lie group techniques are powerful and useful tools for analysis of the feedback structure of linear systems.
Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models
ERIC Educational Resources Information Center
Wagler, Amy E.
2014-01-01
Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…
Estimation of Complex Generalized Linear Mixed Models for Measurement and Growth
ERIC Educational Resources Information Center
Jeon, Minjeong
2012-01-01
Maximum likelihood (ML) estimation of generalized linear mixed models (GLMMs) is technically challenging because of the intractable likelihoods that involve high dimensional integrations over random effects. The problem is magnified when the random effects have a crossed design and thus the data cannot be reduced to small independent clusters. A…
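In the simplest nested case, a random-intercept logistic GLMM, the intractable integral is one-dimensional per cluster and can be sketched with Gauss-Hermite quadrature (model and data below are illustrative; the crossed-design case the abstract describes does not reduce to independent clusters this way):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def cluster_loglik(y, x, beta, sigma, n_nodes=20):
    """Log marginal likelihood of one cluster for a logistic model with a
    random intercept b ~ N(0, sigma^2), integrated out by quadrature."""
    nodes, weights = hermgauss(n_nodes)
    b_vals = np.sqrt(2.0) * sigma * nodes       # change of variables for N(0, sigma^2)
    total = 0.0
    for w, b in zip(weights, b_vals):
        eta = beta * x + b
        p = 1.0 / (1.0 + np.exp(-eta))
        total += w * np.prod(np.where(y == 1, p, 1.0 - p))
    return np.log(total / np.sqrt(np.pi))

# One toy cluster of four binary responses.
y = np.array([1, 0, 1, 1])
x = np.array([0.5, -1.0, 1.5, 0.2])
ll = cluster_loglik(y, x, beta=0.8, sigma=1.0)
```

As `sigma` shrinks to zero, the quadrature collapses to the ordinary logistic log-likelihood, a useful sanity check.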
A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...
Modeling containment of large wildfires using generalized linear mixed-model analysis
Mark Finney; Isaac C. Grenfell; Charles W. McHugh
2009-01-01
Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...
Posterior propriety for hierarchical models with log-likelihoods that have norm bounds
Michalak, Sarah E.; Morris, Carl N.
2015-07-17
Statisticians often use improper priors to express ignorance or to provide good frequency properties, requiring that posterior propriety be verified. Our paper addresses generalized linear mixed models, GLMMs, when Level I parameters have Normal distributions, with many commonly-used hyperpriors. It provides easy-to-verify sufficient posterior propriety conditions based on dimensions, matrix ranks, and exponentiated norm bounds, ENBs, for the Level I likelihood. Since many familiar likelihoods have ENBs (often verifiable via log-concavity and MLE finiteness), our novel use of ENBs permits unification of posterior propriety results and posterior MGF/moment results for many useful Level I distributions, including those commonly used with multilevel generalized linear models, e.g., GLMMs and hierarchical generalized linear models, HGLMs. Furthermore, those who need to verify existence of posterior distributions or of posterior MGFs/moments for a multilevel generalized linear model given a proper or improper multivariate F prior as in Section 1 should find the required results in Sections 1 and 2 and Theorem 3 (GLMMs), Theorem 4 (HGLMs), or Theorem 5 (posterior MGFs/moments).
NASA Technical Reports Server (NTRS)
Wiggins, R. A.
1972-01-01
The discrete general linear inverse problem reduces to a set of m equations in n unknowns. There is generally no unique solution, but we can find k linear combinations of parameters for which restraints are determined. The parameter combinations are given by the eigenvectors of the coefficient matrix. The number k is determined by the ratio of the standard deviations of the observations to the allowable standard deviations in the resulting solution. Various linear combinations of the eigenvectors can be used to determine parameter resolution and information distribution among the observations. Thus we can determine where information comes from among the observations and exactly how it constrains the set of possible models. The application of such analyses to surface-wave and free-oscillation observations indicates that (1) phase, group, and amplitude observations for any particular mode provide basically the same type of information about the model; (2) observations of overtones can enhance the resolution considerably; and (3) the degree of resolution has generally been overestimated for many model determinations made from surface waves.
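The eigenvector analysis described here can be sketched with a singular value decomposition; the relative cutoff below plays the role of the standard-deviation ratio that fixes k, and the example matrix is ours:

```python
import numpy as np

def truncated_svd_solution(G, d, rel_cutoff):
    """Solve G m = d keeping only singular values above rel_cutoff * s_max.
    Returns the estimate, the parameter resolution matrix R = Vk Vk^T,
    and the number k of resolved parameter combinations."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    k = int(np.sum(s > rel_cutoff * s[0]))       # singular values are descending
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T
    m_est = Vk @ ((Uk.T @ d) / sk)               # generalized-inverse solution
    R = Vk @ Vk.T                                # R = I would mean perfect resolution
    return m_est, R, k

# Two nearly redundant equations for the first two parameters, one clean
# equation for the third: only k = 2 combinations survive the cutoff.
G = np.array([[1.0, 1.0,   0.0],
              [1.0, 1.001, 0.0],
              [0.0, 0.0,   1.0]])
d = np.array([2.0, 2.0, 3.0])
m_est, R, k = truncated_svd_solution(G, d, rel_cutoff=1e-2)
```

The diagonal of `R` shows that the third parameter is fully resolved while the first two are only resolved as an average, which is exactly the "parameter combination" picture in the abstract.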
Huang, Jian; Zhang, Cun-Hui
2013-01-01
The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including the generalized linear models. We study the estimation, prediction, selection and sparsity properties of the weighted ℓ1-penalized estimator in sparse, high-dimensional settings where the number of predictors p can be much larger than the sample size n. Adaptive Lasso is considered as a special case. A multistage method is developed to approximate concave regularized estimation by applying an adaptive Lasso recursively. We provide prediction and estimation oracle inequalities for single- and multi-stage estimators, a general selection consistency theorem, and an upper bound for the dimension of the Lasso estimator. Important models including the linear regression, logistic regression and log-linear models are used throughout to illustrate the applications of the general results. PMID:24348100
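A hedged sketch of the multistage idea for the squared-error special case: a weighted ℓ1 penalty solved by plain coordinate descent, reweighted recursively as in the adaptive Lasso. All tuning values and data below are illustrative, not from the article:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso(X, y, lam, w, n_sweeps=500):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * sum_j w_j |b_j|."""
    n, p = X.shape
    b = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0) / n
    r = y - X @ b
    for _ in range(n_sweeps):
        for j in range(p):
            r = r + X[:, j] * b[j]               # partial residual without feature j
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam * w[j]) / col_ss[j]
            r = r - X[:, j] * b[j]
    return b

def adaptive_lasso(X, y, lam, n_stages=3, eps=1e-3):
    """Multistage sketch: approximate concave penalties by recursive reweighting."""
    w = np.ones(X.shape[1])
    for _ in range(n_stages):
        b = weighted_lasso(X, y, lam, w)
        w = 1.0 / (np.abs(b) + eps)              # heavier penalty where |b_j| is small
    return b

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)
b_hat = adaptive_lasso(X, y, lam=0.1)
```

The reweighting step is what reduces the Lasso's shrinkage bias on large coefficients while pushing truly-zero coefficients harder toward zero.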
Martin-Collado, D; Byrne, T J; Visser, B; Amer, P R
2016-12-01
This study used simulation to evaluate the performance of alternative selection index configurations in the context of a breeding programme where a trait with a non-linear economic value is approaching an economic optimum. The simulation used a simple population structure that approximately mimics selection in dual purpose sheep flocks in New Zealand (NZ). In the NZ dual purpose sheep population, number of lambs born is a genetic trait that is approaching an economic optimum, while genetically correlated growth traits have linear economic values and are not approaching any optimum. The predominant view among theoretical livestock geneticists is that the optimal approach to select for nonlinear profit traits is to use a linear selection index and to update it regularly. However, there are some nonlinear index approaches that have not been evaluated. This study assessed the efficiency of the following four alternative selection index approaches in terms of genetic progress relative to each other: (i) a linear index, (ii) a linear index updated regularly, (iii) a nonlinear (quadratic) index, and (iv) a NLF index (nonlinear index below the optimum and then flat). The NLF approach does not reward or penalize animals for additional genetic merit beyond the trait optimum. It was found to be at least comparable in efficiency to the approach of regularly updating the linear index with short (15 year) and long (30 year) time frames. The relative efficiency of this approach was slightly reduced when the current average value of the nonlinear trait was close to the optimum. Finally, practical issues of industry application of indexes are considered and some potential practical benefits of efficient deployment of a NLF index in highly heterogeneous industries (breeds, flocks and production environments) such as in the NZ dual purpose sheep population are discussed. © 2016 Blackwell Verlag GmbH.
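The index shapes being compared can be caricatured in a few lines. All numbers are invented, and the "nonlinear below the optimum" branch of the NLF index is simplified here to a linear ramp:

```python
# Hypothetical optimum for the trait (e.g. number of lambs born).
OPTIMUM = 2.0

def linear_index(x, slope=10.0):
    """Linear index: merit keeps being rewarded past the optimum."""
    return slope * x

def quadratic_index(x, peak_value=20.0, curvature=5.0):
    """Nonlinear (quadratic) index: merit beyond the optimum is penalized."""
    return peak_value - curvature * (x - OPTIMUM) ** 2

def nlf_index(x, slope=10.0):
    """NLF index: rising below the optimum, then flat -- animals are neither
    rewarded nor penalized for genetic merit beyond the trait optimum."""
    return slope * min(x, OPTIMUM)
```

The flat branch is the distinguishing feature: unlike the quadratic index, the NLF index never pulls selection back below the optimum.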
The cell monolayer trajectory from the system state point of view.
Stys, Dalibor; Vanek, Jan; Nahlik, Tomas; Urban, Jan; Cisar, Petr
2011-10-01
Time-lapse microscopic movies are increasingly used to understand how cell states arise and to predict future cell behavior. Often, fluorescence and other types of labeling are not available or desirable, and cell-state definitions based on observable structures must be used. We present a methodology for cell behavior recognition and prediction based on analysis of short-term recurrent cell behavior. This approach has theoretical justification in non-linear dynamics theory. The methodology rests on general stochastic systems theory, which allows us to define the cell states, the trajectory, and the system itself. As a first step, we introduce a novel image-content descriptor for cell-state characterization, based on the information contribution (gain) of each image point. The linkage between the method and general systems theory is presented as a frame for interpreting cell behavior. We also discuss extended cell description, system theory, and methodology for future development. The methodology may be used for many practical purposes, ranging from advanced, medically relevant, precise cell-culture diagnostics to utilitarian cell recognition against a noisy or uneven image background. In addition, the results are theoretically justified.
Modal expansions in periodic photonic systems with material loss and dispersion
NASA Astrophysics Data System (ADS)
Wolff, Christian; Busch, Kurt; Mortensen, N. Asger
2018-03-01
We study band-structure properties of periodic optical systems composed of lossy and intrinsically dispersive materials. To this end, we develop an analytical framework based on adjoint modes of a lossy periodic electromagnetic system and show how the problem of linearly dependent eigenmodes in the presence of material dispersion can be overcome. We then formulate expressions for the band-structure derivative ∂ω/∂k (complex group velocity) and the local and total density of transverse optical states. Our exact expressions hold for 3D periodic arrays of materials with arbitrary dispersion properties and in general need to be evaluated numerically. They can be generalized to systems with two, one, or no directions of periodicity provided the fields are localized along nonperiodic directions. Possible applications are photonic crystals, metamaterials, metasurfaces composed of highly dispersive materials such as metals or lossless photonic crystals, and metamaterials or metasurfaces strongly coupled to resonant perturbations such as quantum dots or excitons in 2D materials. For illustration purposes, we analytically evaluate our expressions for some simple systems consisting of lossless dielectrics with one sharp Lorentzian material resonance added. By combining several Lorentz poles, this provides an avenue to perturbatively treat quite general material loss bands in photonic crystals.
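The illustrative material model mentioned at the end, a lossless background plus one or more sharp Lorentzian resonances, might look like the following sketch (all parameter values are made up for demonstration):

```python
import numpy as np

def lorentz_permittivity(omega, eps_inf=2.25, poles=((1.0, 0.3, 0.05),)):
    """eps(w) = eps_inf + sum_j f_j w_j^2 / (w_j^2 - w^2 - i gamma_j w),
    with poles given as (resonance w_j, strength f_j, damping gamma_j)."""
    omega = np.asarray(omega, dtype=complex)
    eps = np.full_like(omega, eps_inf)
    for w_j, f_j, g_j in poles:
        eps = eps + f_j * w_j ** 2 / (w_j ** 2 - omega ** 2 - 1j * g_j * omega)
    return eps

# Static limit: eps(0) = eps_inf + f, purely real (no loss at zero frequency);
# on resonance the imaginary (loss) part peaks.
eps_static = lorentz_permittivity(0.0)
eps_res = lorentz_permittivity(1.0)
```

Adding further `(w_j, f_j, gamma_j)` tuples to `poles` is the "combining several Lorentz poles" step that approximates broader loss bands.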
NASA Astrophysics Data System (ADS)
Bičák, Jiří; Schmidt, Josef
2016-01-01
The question of the uniqueness of energy-momentum tensors in the linearized general relativity and in the linear massive gravity is analyzed without using variational techniques. We start from a natural ansatz for the form of the tensor (for example, that it is a linear combination of the terms quadratic in the first derivatives), and require it to be conserved as a consequence of field equations. In the case of the linear gravity in a general gauge we find a four-parametric system of conserved second-rank tensors which contains a unique symmetric tensor. This turns out to be the linearized Landau-Lifshitz pseudotensor employed often in full general relativity. We elucidate the relation of the four-parametric system to the expression proposed recently by Butcher et al. "on physical grounds" in harmonic gauge, and we show that the results coincide in the case of high-frequency waves in vacuum after a suitable averaging. In the massive gravity we show how one can arrive at the expression which coincides with the "generalized linear symmetric Landau-Lifshitz" tensor. However, there exists another uniquely given simpler symmetric tensor which can be obtained by adding the divergence of a suitable superpotential to the canonical energy-momentum tensor following from the Fierz-Pauli action. In contrast to the symmetric tensor derived by the Belinfante procedure which involves the second derivatives of the field variables, this expression contains only the field and its first derivatives. It is simpler than the generalized Landau-Lifshitz tensor but both yield the same total quantities since they differ by the divergence of a superpotential. We also discuss the role of the gauge conditions in the proofs of the uniqueness. In the Appendix, the symbolic tensor manipulation software cadabra is briefly described. It is very effective in obtaining various results which would otherwise require lengthy calculations.
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model. © 2015 Wiley Periodicals, Inc.
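For orientation, a sketch of the pairwise generalized Born solvation energy in Still's form, which OBC-style models build on; the charges, effective radii, and distances below are invented, and real implementations compute the effective radii from the molecular geometry:

```python
import math

def gb_energy(charges, radii, dists, eps_in=1.0, eps_out=78.5):
    """E_GB = -0.5 * (1/eps_in - 1/eps_out) * sum_ij q_i q_j / f_GB(r_ij),
    with f_GB = sqrt(r^2 + R_i R_j exp(-r^2 / (4 R_i R_j))).
    The i == j terms (r = 0, f_GB = R_i) are the Born self-energies."""
    pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
    E = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(n):
            r2 = dists[i][j] ** 2
            RiRj = radii[i] * radii[j]
            f_gb = math.sqrt(r2 + RiRj * math.exp(-r2 / (4.0 * RiRj)))
            E += pref * charges[i] * charges[j] / f_gb
    return E

# Toy ion pair: two unit charges of opposite sign, 3 length units apart.
E_pair = gb_energy([1.0, -1.0], [1.5, 1.5], [[0.0, 3.0], [3.0, 0.0]])
```

The forces discussed in the abstract are the gradients of this energy with respect to the coordinates (including the geometry dependence of the effective radii), which is where linearization effects enter.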
Asymptotic aspect of derivations in Banach algebras.
Roh, Jaiok; Chang, Ick-Soon
2017-01-01
We prove that every approximate linear left derivation on a semisimple Banach algebra is continuous. Also, we consider linear derivations on Banach algebras and we first study the conditions for a linear derivation on a Banach algebra. Then we examine the functional inequalities related to a linear derivation and their stability. We finally take central linear derivations with radical ranges on semiprime Banach algebras and a continuous linear generalized left derivation on a semisimple Banach algebra.
Das, Sai Krupa; Mason, Shawn T; Vail, Taylor A; Rogers, Gail V; Livingston, Kara A; Whelan, Jillian G; Chin, Meghan K; Blanchard, Caroline M; Turgiss, Jennifer L; Roberts, Susan B
2018-01-01
Programs focused on employee well-being have gained momentum in recent years, but few have been rigorously evaluated. This study evaluates the effectiveness of an intervention designed to enhance vitality and purpose in life by assessing changes in employee quality of life (QoL) and health-related behaviors. A worksite-based randomized controlled trial. Twelve eligible worksites (8 randomized to the intervention group [IG] and 4 to the wait-listed control group [CG]). Employees (n = 240) at the randomized worksites. A 2.5-day group-based behavioral intervention. Rand Medical Outcomes Survey (MOS) 36-item Short-Form (SF-36) vitality and QoL measures, Ryff Purpose in Life Scale, Center for Epidemiologic Studies questionnaire for depression, MOS sleep, body weight, physical activity, diet quality, and blood measures for glucose and lipids (which were used to calculate a cardiometabolic risk score) obtained at baseline and 6 months. General linear mixed models were used to compare least squares means or prevalence differences in outcomes between IG and CG participants. As compared to CG, IG had a significantly higher mean 6-month change on the SF-36 vitality scale (P = .003) and scored in the highest categories for 5 of the remaining 7 SF-36 domains: general health (P = .014), mental health (P = .027), absence of role limitations due to physical problems (P = .026), and social functioning (P = .007). The IG also had greater improvements in purpose in life (P < .001) and sleep quality (index I, P = .024; index II, P = .021). No statistically significant changes were observed for weight, diet, physical activity, or cardiometabolic risk factors. An intensive 2.5-day intervention showed improvement in employee QoL and well-being over 6 months.
47 CFR 32.2124 - General purpose computers.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 2 2010-10-01 2010-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...
47 CFR 32.2124 - General purpose computers.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...
47 CFR 32.2124 - General purpose computers.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 47 Telecommunication 2 2014-10-01 2014-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...
47 CFR 32.2124 - General purpose computers.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 47 Telecommunication 2 2013-10-01 2013-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...
47 CFR 32.2124 - General purpose computers.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 2 2012-10-01 2012-10-01 false General purpose computers. 32.2124 Section 32.2124 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES UNIFORM... General purpose computers. (a) This account shall include the original cost of computers and peripheral...
Code of Federal Regulations, 2010 CFR
2010-07-01
... buildings, including land incidental thereto, suitable for the general use of Government agencies, including...) Special-purpose space is space in buildings, including land incidental thereto, wholly or predominantly utilized for the special purposes of an agency, and not generally suitable for general-purpose use...
Predicting dropout using student- and school-level factors: An ecological perspective.
Wood, Laura; Kiperman, Sarah; Esch, Rachel C; Leroux, Audrey J; Truscott, Stephen D
2017-03-01
High school dropout has been associated with negative outcomes, including increased rates of unemployment, incarceration, and mortality. Dropout rates vary significantly depending on individual and environmental factors. The purpose of our study was to use an ecological perspective to concurrently explore student- and school-level predictors associated with dropout for the purpose of better understanding how to prevent it. We used the Education Longitudinal Study of 2002 dataset. Participants included 14,106 sophomores across 684 public and private schools. We identified variables of interest based on previous research on dropout and implemented hierarchical generalized linear modeling. In the final model, significant student-level predictors included academic achievement, retention, sex, family socioeconomic status (SES), and extracurricular involvement. Significant school-level predictors included school SES and school size. Race/ethnicity, special education status, born in the United States, English as first language, school urbanicity, and school region did not significantly predict dropout after controlling for the aforementioned predictors. Implications for prevention and intervention efforts within a multitiered intervention model are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Parent-Implemented Enhanced Milieu Teaching with Preschool Children with Intellectual Disabilities
Kaiser, Ann P.; Roberts, Megan Y.
2013-01-01
Purpose: The purpose of this study was to compare the effects of Enhanced Milieu Teaching (EMT) implemented by parents and therapists versus therapists only on the language skills of preschool children with intellectual disabilities (ID), including children with Down syndrome and children with autism spectrum disorders (ASD). Method: Seventy-seven children were randomly assigned to two treatments (parent + therapist EMT or therapist only EMT) and received 36 intervention sessions. Children were assessed before, immediately after, 6 months after, and 12 months after intervention. Separate linear regressions were conducted for each standardized and observational measure at each time point. Results: Parents in the parent + therapist group demonstrated greater use of EMT strategies at home than untrained parents in the therapist only group, and these effects maintained over time. Effect sizes for observational measures ranged from d = .10 to d = 1.32 favoring the parent + therapist group, with the largest effect sizes found 12 months after intervention. Conclusion: Findings from this study generally indicate that there are benefits to training parents to implement naturalistic language intervention strategies with preschool children who have ID and significant language impairments. PMID:22744141
26 CFR 1.355-0 - Outline of sections.
Code of Federal Regulations, 2010 CFR
2010-04-01
.... (b) Independent business purpose. (1) Independent business purpose requirement. (2) Corporate business purpose. (3) Business purpose for distribution. (4) Business purpose as evidence of nondevice. (5... distribution of earnings and profits. (1) In general. (2) Device factors. (i) In general. (ii) Pro rata...
26 CFR 1.355-0 - Outline of sections.
Code of Federal Regulations, 2011 CFR
2011-04-01
.... (b) Independent business purpose. (1) Independent business purpose requirement. (2) Corporate business purpose. (3) Business purpose for distribution. (4) Business purpose as evidence of nondevice. (5... distribution of earnings and profits. (1) In general. (2) Device factors. (i) In general. (ii) Pro rata...
General linear methods and friends: Toward efficient solutions of multiphysics problems
NASA Astrophysics Data System (ADS)
Sandu, Adrian
2017-07-01
Time dependent multiphysics partial differential equations are of great practical importance as they model diverse phenomena that appear in mechanical and chemical engineering, aeronautics, astrophysics, meteorology and oceanography, financial modeling, environmental sciences, etc. There is no single best time discretization for the complex multiphysics systems of practical interest. We discuss "multimethod" approaches that combine different time steps and discretizations using the rigorous frameworks provided by Partitioned General Linear Methods and Generalized-Structure Additive Runge-Kutta Methods.
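As a toy illustration of the multimethod idea, far simpler than the partitioned general linear methods the abstract discusses, one can treat a stiff linear term implicitly and a nonstiff term explicitly within a single step (IMEX Euler):

```python
def imex_euler(y0, lam, g, h, n_steps):
    """Integrate y' = lam*y + g(y): the stiff linear part lam*y is taken
    implicitly, the nonstiff part g(y) explicitly."""
    y = y0
    for _ in range(n_steps):
        y = (y + h * g(y)) / (1.0 - h * lam)
    return y

# y' = -1000*y + 1000 has steady state y = 1.  With h = 0.01, explicit Euler
# would be unstable (amplification factor |1 + h*lam| = 9), but the IMEX step
# contracts toward the steady state at every step.
y_final = imex_euler(0.0, -1000.0, lambda y: 1000.0, h=0.01, n_steps=100)
```

Partitioned methods generalize exactly this kind of splitting, with rigorous order and stability theory for arbitrary component discretizations.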
An Application to the Prediction of LOD Change Based on General Regression Neural Network
NASA Astrophysics Data System (ADS)
Zhang, X. H.; Wang, Q. J.; Zhu, J. J.; Zhang, H.
2011-07-01
Traditional prediction of the LOD (length of day) change was based on linear models, such as the least-squares model and the autoregressive technique. Due to the complex non-linear features of the LOD variation, the performances of the linear model predictors are not fully satisfactory. This paper applies a non-linear neural network, the general regression neural network (GRNN) model, to forecast the LOD change, and the results are analyzed and compared with those obtained with the back-propagation neural network and other models. The comparison shows that the GRNN model is efficient and feasible for prediction of the LOD change.
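A GRNN is essentially a Nadaraya-Watson kernel regressor, which makes a minimal sketch short (the series below is a synthetic stand-in, not real LOD data):

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """GRNN prediction: Gaussian-kernel-weighted average of training targets,
    y(x) = sum_i y_i K(x, x_i) / sum_i K(x, x_i)."""
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return (K @ y_train) / K.sum(axis=1)

# Toy smooth non-linear series standing in for an LOD record.
x = np.linspace(0.0, 6.0, 60)
y = np.sin(x)
pred = grnn_predict(x, y, np.array([1.5, 3.0]), sigma=0.3)
```

The single smoothing parameter `sigma` is the GRNN's only tunable quantity, which is part of why the model is attractive for operational forecasting.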
NASA Astrophysics Data System (ADS)
Zielnica, J.; Ziółkowski, A.; Cempel, C.
2003-03-01
Design and theoretical and experimental investigation of vibroisolation pads with non-linear static and dynamic responses is the objective of the paper. The analytical investigations are based on non-linear finite element analysis, where the load-deflection response is traced against the shape and material properties of the analysed model of the vibroisolation pad. A new vibroisolation pad of antisymmetrical type was designed and analysed by the finite element method based on second-order theory (large displacements and strains), with material non-linearities assumed (Mooney-Rivlin model). The stability-loss phenomenon was used in the design of the vibroisolators, and it was proved that it is possible to design a vibroisolator in the form of a continuous pad with the non-linear static and dynamic response typical of vibroisolation purposes. The materials used for the vibroisolator are rubber, elastomers, and similar ones. The results of the theoretical investigations were examined experimentally: a series of models made of soft rubber were designed for test purposes. The experimental investigations of the vibroisolation models, under static and dynamic loads, confirmed the results of the FEM analysis.
26 CFR 1.355-0 - Outline of sections.
Code of Federal Regulations, 2014 CFR
2014-04-01
... distributed. (b) Independent business purpose. (1) Independent business purpose requirement. (2) Corporate business purpose. (3) Business purpose for distribution. (4) Business purpose as evidence of nondevice. (5... distribution of earnings and profits. (1) In general. (2) Device factors. (i) In general. (ii) Pro rata...
26 CFR 1.355-0 - Outline of sections.
Code of Federal Regulations, 2013 CFR
2013-04-01
... distributed. (b) Independent business purpose. (1) Independent business purpose requirement. (2) Corporate business purpose. (3) Business purpose for distribution. (4) Business purpose as evidence of nondevice. (5... distribution of earnings and profits. (1) In general. (2) Device factors. (i) In general. (ii) Pro rata...
26 CFR 1.355-0 - Outline of sections.
Code of Federal Regulations, 2012 CFR
2012-04-01
... distributed. (b) Independent business purpose. (1) Independent business purpose requirement. (2) Corporate business purpose. (3) Business purpose for distribution. (4) Business purpose as evidence of nondevice. (5... distribution of earnings and profits. (1) In general. (2) Device factors. (i) In general. (ii) Pro rata...
Jimenez, Daniel E; Cook, Benjamin Lê; Kim, Giyeon; Reynolds, Charles F; Alegría, Margarita; Coe-Odess, Sarah; Bartels, Stephen J
2015-07-01
The association of general medical illness and mental health service use among older adults from racial-ethnic minority groups is an important area of study given the disparities in mental health and general medical services and the low use of mental health services in this population. The purpose of this report is to describe the impact of comorbid general medical illness on mental health service use and expenditures among older adults and to evaluate disparities in mental health service use and expenditures in a racially-ethnically diverse sample of older adults with and without comorbid general medical illness. Data were obtained from the Medical Expenditure Panel Survey (years 2004-2011). The sample included 1,563 whites, 519 African Americans, and 642 Latinos (N=2,724) age ≥65 with probable mental illness. Two-part generalized linear models were used to estimate and compare mental health service use among adults with and without a comorbid general medical illness. Mental health service use was more likely for older adults with comorbid general medical illness than for those without it. Once mental health services were accessed, no differences in mental health expenditures were found. Comorbid general medical illness increased the likelihood of mental health service use by older whites and Latinos. However, the presence of comorbidity did not affect racial-ethnic disparities in mental health service use. This study highlighted the important role of comorbid general medical illness as a potential contributor to using mental health services and suggests intervention strategies to enhance engagement in mental health services by older adults from racial-ethnic minority groups.
TI-59 Programs for Multiple Regression.
1980-05-01
general linear hypothesis model of full rank [Graybill, 1961] can be written as Y = Xβ + ε, ε ~ N(0, σ²I), where Y is the n×1 vector of observations, X is the n×k design matrix, and β is the k×1 coefficient vector... a "reduced model" solution, and confidence intervals for linear functions of the coefficients can be obtained using (X'X)⁻¹ and σ̂², based on the t... PROGRAM DESCRIPTION: for the general linear hypothesis model Y = Xβ + ε, calculates
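In modern terms, the core computation of these calculator programs can be sketched as follows (the helper below is our illustration; the t critical value is supplied by the caller, much as the TI-59 user would look it up in a table):

```python
import numpy as np

def ols_with_ci(X, Y, c, t_crit):
    """Least squares for Y = X beta + eps, eps ~ N(0, sigma^2 I), plus a
    t-based confidence interval for the linear function c' beta."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ Y
    resid = Y - X @ beta
    s2 = resid @ resid / (n - k)            # unbiased estimate of sigma^2
    est = c @ beta
    se = np.sqrt(s2 * c @ XtX_inv @ c)      # standard error of c' beta
    return beta, (est - t_crit * se, est + t_crit * se)

# Noise-free toy data with intercept 1 and slope 2: the interval for the
# slope collapses to a point because the residual variance is (numerically) zero.
X = np.column_stack([np.ones(4), np.arange(4.0)])
Y = 1.0 + 2.0 * np.arange(4.0)
beta, ci = ols_with_ci(X, Y, c=np.array([0.0, 1.0]), t_crit=3.182)
```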
DOE Office of Scientific and Technical Information (OSTI.GOV)
CULLEN, D. E.
2005-02-21
Version 00. As distributed, the original evaluated data include cross sections represented in the form of a combination of resonance parameters and/or tabulated energy dependent cross sections, nominally at 0 Kelvin temperature. For use in applications this library has been processed into the form of temperature dependent cross sections at eight neutron reactor like temperatures, between 0 and 2100 Kelvin, in steps of 300 Kelvin. It has also been processed to five astrophysics like temperatures: 1, 10, 100 eV, 1 and 10 keV. For reference purposes, 300 Kelvin is approximately 1/40 eV, so that 1 eV is approximately 12,000 Kelvin. At each temperature the cross sections are tabulated and linearly interpolable in energy. POINT2004 contains all of the evaluations in the ENDF/B-VI general purpose library, which contains evaluations for 328 materials (isotopes or naturally occurring elemental mixtures of isotopes). No special purpose ENDF/B-VI libraries, such as fission products, thermal scattering, or photon interaction data, are included. The majority of these evaluations are complete, in the sense that they include all cross sections over the energy range 10⁻⁵ eV to at least 20 MeV. However, the following are only partial evaluations that either contain only single reactions and no total cross section (Mg24, K41, Ti46, Ti47, Ti48, Ti50 and Ni59), or do not include energy dependent cross sections above the resonance region (Ar40, Mo92, Mo98, Mo100, In115, Sn120, Sn122 and Sn124). The CCC-638/TART20002 code package is recommended for use with these data. Codes within TART can be used to display these data or to run calculations using these data.
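Since the processed cross sections are tabulated and linearly interpolable in energy, retrieving a value at an arbitrary energy is a one-dimensional linear interpolation; the grid and values below are fabricated for illustration, not taken from the library:

```python
import numpy as np

# Fabricated energy grid (eV) and cross sections (barns) at one temperature.
energies = np.array([1e-5, 1e-2, 1.0, 1e3, 2e7])
xsec     = np.array([120.0, 15.0, 4.0, 2.5, 1.8])

def sigma_at(E):
    """Cross section at energy E by linear interpolation on the tabulated grid."""
    return np.interp(E, energies, xsec)
```

In practice, processed libraries use grids dense enough that linear interpolation reproduces the underlying evaluation to within a stated tolerance.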
NASA Technical Reports Server (NTRS)
Young, Katherine C.; Sobieszczanski-Sobieski, Jaroslaw
1988-01-01
This project has two objectives. The first is to determine whether linear programming techniques can improve performance, relative to the feasible directions algorithm, when handling design optimization problems with a large number of design variables and constraints. The second is to determine whether replacing the constraints with a single constraint via the Kreisselmeier-Steinhauser (KS) function will reduce the total cost of optimization. Comparisons are made using solutions obtained with linear and non-linear methods. The results indicate that there is no cost saving in using the linear method, or in using the KS function to replace the constraints.
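The KS function itself is a smooth, conservative envelope of the constraint set: for constraints g_i(x) ≤ 0 it satisfies max_i g_i ≤ KS(g) ≤ max_i g_i + ln(m)/ρ. A minimal sketch (the aggregation parameter ρ is a user choice):

```python
import numpy as np

def ks(g, rho=50.0):
    """Kreisselmeier-Steinhauser aggregate: (1/rho) * ln(sum_i exp(rho * g_i)),
    computed with a max-shift for numerical stability."""
    g = np.asarray(g, dtype=float)
    gmax = g.max()
    return gmax + np.log(np.exp(rho * (g - gmax)).sum()) / rho

g = np.array([-0.5, -0.1, -0.3])    # three satisfied constraints
val = ks(g)
```

Larger ρ tightens the envelope toward the true maximum but makes the aggregate less smooth, which is the usual trade-off when a single KS constraint replaces many individual ones.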
A General Linear Model Approach to Adjusting the Cumulative GPA.
ERIC Educational Resources Information Center
Young, John W.
A general linear model (GLM), using least-squares techniques, was used to develop a criterion measure to replace freshman year grade point average (GPA) in college admission predictive validity studies. Problems with the use of GPA include those associated with the combination of grades from different courses and disciplines into a single measure,…
ERIC Educational Resources Information Center
Chen, Haiwen
2012-01-01
In this article, linear item response theory (IRT) observed-score equating is compared, under a generalized kernel equating framework, with Levine observed-score equating for the nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…
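For reference, the basic observed-score linear equating transformation that such methods build on can be sketched as follows (a generic illustration with hypothetical toy scores, not the article's data or its kernel/Levine machinery):

```python
from statistics import mean, pstdev

def linear_equate(x, scores_x, scores_y):
    """Observed-score linear equating: map score x from form X onto the
    form-Y scale by matching the two score distributions' means and
    standard deviations: l(x) = mu_Y + (sd_Y / sd_X) * (x - mu_X)."""
    mu_x, mu_y = mean(scores_x), mean(scores_y)
    sd_x, sd_y = pstdev(scores_x), pstdev(scores_y)
    return mu_y + (sd_y / sd_x) * (x - mu_x)

# Hypothetical score samples from two test forms.
form_x = [10, 12, 14, 16]
form_y = [20, 24, 28, 32]
```

A score at the mean of form X maps to the mean of form Y, and each standard-deviation unit on X maps to one on Y.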
Statistical inference for template aging
NASA Astrophysics Data System (ADS)
Schuckers, Michael E.
2006-04-01
A change in classification error rates for a biometric device is often referred to as template aging. Here we offer two methods for determining whether the effect of time is statistically significant. The first is the use of a generalized linear model to determine whether these error rates change linearly over time; this approach generalizes previous work assessing the impact of covariates using generalized linear models. The second approach uses likelihood ratio test methodology. The focus here is on statistical methods for estimation, not on the underlying cause of the change in error rates over time. These methodologies are applied to data from the National Institute of Standards and Technology Biometric Score Set Release 1, and the results of these applications are discussed.
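The likelihood-ratio approach can be illustrated for the simplest case of one extra parameter (e.g., a single linear time coefficient). The function below is a generic sketch, not the paper's implementation; it only turns two fitted log-likelihoods into an LR statistic and a chi-square(1) p-value:

```python
import math

def lr_test_df1(loglik_null, loglik_alt):
    """Likelihood-ratio test comparing a model without a time trend to one
    with a single extra parameter (df = 1). The p-value uses the identity
    P(chi2_1 > x) = erfc(sqrt(x / 2)), so no statistics library is needed."""
    lr = 2.0 * (loglik_alt - loglik_null)
    p_value = math.erfc(math.sqrt(max(lr, 0.0) / 2.0))
    return lr, p_value

# Hypothetical fitted log-likelihoods for the two nested models.
lr, p = lr_test_df1(-105.0, -103.0)
```

A small p-value would indicate that adding the time trend significantly improves the fit.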
Observed Score Linear Equating with Covariates
ERIC Educational Resources Information Center
Branberg, Kenny; Wiberg, Marie
2011-01-01
This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…
The Transformation App Redux: The Notion of Linearity
ERIC Educational Resources Information Center
Domenick, Anthony
2015-01-01
The notion of linearity is perhaps the most fundamental idea in algebraic thinking. It sets the transition to functions and culminates with the instantaneous rate of change in calculus. Despite its simplicity, this concept poses complexities to a considerable number of first semester college algebra students. The purpose of this observational…
Linear Programming for Vocational Education Planning. Interim Report.
ERIC Educational Resources Information Center
Young, Robert C.; And Others
The purpose of the paper is to define for potential users of vocational education management information systems a quantitative analysis technique and its utilization to facilitate more effective planning of vocational education programs. Defining linear programming (LP) as a management technique used to solve complex resource allocation problems…
Fatigue life estimation program for Part 23 airplanes, 'AFS.FOR'
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaul, S.K.
1993-12-31
The purpose of this paper is to introduce to the general aviation industry a computer program which estimates the safe fatigue life of any Federal Aviation Regulation (FAR) Part 23 airplane. The algorithm uses the methodology (Miner's Linear Cumulative Damage Theory) and the various data presented in the Federal Aviation Administration (FAA) Report No. AFS-120-73-2, dated May 1973. The program is written in FORTRAN 77 and is executable on a desktop personal computer. The program prompts the user for the needed input data and provides a variety of options for its intended use. The program is envisaged to be released through issuance of an FAA report, which will contain the appropriate comments, instructions, warnings and limitations.
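Miner's Linear Cumulative Damage Theory, on which the program is based, reduces to a one-line sum. The sketch below is a generic illustration (the load spectrum is hypothetical; the real program adds the FAA report's data and many options):

```python
def miners_damage(cycles_by_level):
    """Miner's linear cumulative damage: D = sum(n_i / N_i), where n_i is
    the number of applied cycles at stress level i and N_i the number of
    cycles to failure at that level. Failure is predicted when D >= 1."""
    return sum(n / N for n, N in cycles_by_level)

# Hypothetical load spectrum: (applied cycles, cycles to failure) per level.
spectrum = [(1.0e4, 1.0e5), (5.0e3, 2.0e4)]
D = miners_damage(spectrum)
```

A remaining-life estimate follows directly: the fraction of life consumed is D, so the structure has an estimated fraction 1 - D of its fatigue life left under the same spectrum.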
Overview of magnetic suspension research at Langley Research Center
NASA Technical Reports Server (NTRS)
Groom, Nelson J.
1992-01-01
An overview of research in small- and large-gap magnetic suspension systems at LaRC is presented. The overview is limited to systems which have been built as laboratory or engineering models. Small-gap system applications include the Annular Momentum Control Device (AMCD), a momentum storage device for the stabilization and control of spacecraft, and the Annular Suspension and Pointing System (ASPS), a general-purpose pointing mount designed to provide orientation, mechanical isolation, and fine pointing of space experiments. These devices are described, and control and linearization approaches for their magnetic suspension systems are discussed. Large-gap system applications at LaRC have been almost exclusively wind tunnel magnetic suspension systems. A brief description of these efforts is also presented.
Force-reflection and shared compliant control in operating telemanipulators with time delay
NASA Technical Reports Server (NTRS)
Kim, Won S.; Hannaford, Blake; Bejczy, Antal K.
1992-01-01
The performance of an advanced telemanipulation system in the presence of a wide range of time delays between a master control station and a slave robot is quantified. The contemplated applications include multiple satellite links to LEO, geosynchronous operation, spacecraft local area networks, and general-purpose computer-based short-distance designs. The results of high-precision peg-in-hole tasks performed by six test operators indicate that task performance decreased linearly with introduced time delays for both kinesthetic force feedback (KFF) and shared compliant control (SCC). The rate of this decrease was substantially improved with SCC compared to KFF. Task performance at delays above 1 s was not possible using KFF. SCC enabled task performance for such delays, which are realistic values for ground-controlled remote manipulation of telerobots in space.
Seismic waves in a self-gravitating planet
NASA Astrophysics Data System (ADS)
Brazda, Katharina; de Hoop, Maarten V.; Hörmann, Günther
2013-04-01
The elastic-gravitational equations describe the propagation of seismic waves including the effect of self-gravitation. We rigorously derive and analyze this system of partial differential equations and boundary conditions for a general, uniformly rotating, elastic, but aspherical, inhomogeneous, and anisotropic, fluid-solid earth model, under minimal assumptions concerning the smoothness of material parameters and geometry. For this purpose we first establish a consistent mathematical formulation of the low regularity planetary model within the framework of nonlinear continuum mechanics. Using calculus of variations in a Sobolev space setting, we then show how the weak form of the linearized elastic-gravitational equations directly arises from Hamilton's principle of stationary action. Finally we prove existence and uniqueness of weak solutions by the method of energy estimates and discuss additional regularity properties.
The influences of consumer characteristics on the amount of rice consumption
NASA Astrophysics Data System (ADS)
Supriana, T.; Pane, TC
2018-02-01
This study aimed to analyze the characteristics of rice consumers and the influence of those characteristics on the amount of rice consumed. The research areas were determined purposively in the sub-districts with the largest populations in Medan City. The analytical methods used were descriptive analysis and multiple linear regression. The results showed that consumers in the study areas have various characteristics with respect to age, income, family size, health, and education. Taken simultaneously, the characteristics of rice consumers have a significant effect on the amount of rice consumed; taken partially, age and the number of family members have a significant effect. The implication of this research is that different policies are needed for rice consumers in different income strata; rice policies cannot be generalized.
NASA Astrophysics Data System (ADS)
Arabahmadi, Ehsan; Ahmadi, Zabihollah; Rashidian, Bizhan
2018-06-01
A quantum theory describing the interaction of photons and plasmons in one- and two-dimensional arrays is presented. Ohmic losses and inter-band transitions are not considered. For this purpose we use a macroscopic approach and quantum-field-theory methods, including the S-matrix expansion and Feynman diagrams. Non-linear interactions are also studied, and ways of increasing the probability of such interactions, together with their applications, are discussed.
Finite-time H∞ filtering for non-linear stochastic systems
NASA Astrophysics Data System (ADS)
Hou, Mingzhe; Deng, Zongquan; Duan, Guangren
2016-09-01
This paper describes the robust H∞ filtering analysis and synthesis of general non-linear stochastic systems with finite settling time. We assume that the system dynamics are modelled by Itô-type stochastic differential equations in which the state and the measurement are corrupted by state-dependent noises and exogenous disturbances. A sufficient condition for non-linear stochastic systems to have finite-time H∞ performance with gain less than or equal to a prescribed positive number is established in terms of a certain Hamilton-Jacobi inequality. Based on this result, the existence of a finite-time H∞ filter is given for the general non-linear stochastic system by a second-order non-linear partial differential inequality, and the filter can be obtained by solving this inequality. The effectiveness of the obtained result is illustrated by a numerical example.
Convex set and linear mixing model
NASA Technical Reports Server (NTRS)
Xu, P.; Greeley, R.
1993-01-01
A major goal of optical remote sensing is to determine surface compositions of the earth and other planetary objects. For assessment of composition, single pixels in multi-spectral images usually record a mixture of the signals from various materials within the corresponding surface area. In this report, we introduce a closed and bounded convex set as a mathematical model for linear mixing. This model has a clear geometric implication because the closed and bounded convex set is a natural generalization of a triangle in n-space. The endmembers are extreme points of the convex set. Every point in the convex closure of the endmembers is a linear mixture of those endmembers, which is exactly how linear mixing is defined. With this model, some general criteria for selecting endmembers could be described. This model can lead to a better understanding of linear mixing models.
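The linear-mixing idea can be made concrete for the two-endmember case, where the least-squares abundance has a closed form. This is a minimal sketch with hypothetical two-band spectra, not the authors' code:

```python
def unmix_two(pixel, e1, e2):
    """Least-squares abundance f of endmember e1 in the linear mixture
    pixel ~ f*e1 + (1 - f)*e2. Rewriting the model as
    (pixel - e2) ~ f*(e1 - e2) gives f by projection, in closed form."""
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, e1, e2))
    den = sum((a - b) ** 2 for a, b in zip(e1, e2))
    return num / den

# Hypothetical two-band endmember spectra and a 70/30 mixed pixel.
grass = [0.1, 0.6]
soil = [0.4, 0.2]
mixed = [0.7 * g + 0.3 * s for g, s in zip(grass, soil)]
f = unmix_two(mixed, grass, soil)
```

Consistent with the convex-set picture in the abstract, a pixel lying in the convex closure of the endmembers yields an abundance between 0 and 1.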
Peterson, David; Stofleth, Jerome H.; Saul, Venner W.
2017-07-11
Linear shaped charges are described herein. In a general embodiment, the linear shaped charge has an explosive with an elongated arrowhead-shaped profile. The linear shaped charge also has an elongated v-shaped liner that is inset into a recess of the explosive. Another linear shaped charge includes an explosive that is shaped as a star-shaped prism. Liners are inset into crevices of the explosive, where the explosive acts as a tamper.
Genetic parameters for racing records in trotters using linear and generalized linear models.
Suontama, M; van der Werf, J H J; Juga, J; Ojala, M
2012-09-01
Heritability and repeatability and genetic and phenotypic correlations were estimated for trotting race records with linear and generalized linear models using 510,519 records on 17,792 Finnhorses and 513,161 records on 25,536 Standardbred trotters. Heritability and repeatability were estimated for single racing time and earnings traits with linear models, and logarithmic scale was used for racing time and fourth-root scale for earnings to correct for nonnormality. Generalized linear models with a gamma distribution were applied for single racing time and with a multinomial distribution for single earnings traits. In addition, genetic parameters for annual earnings were estimated with linear models on the observed and fourth-root scales. Racing success traits of single placings, winnings, breaking stride, and disqualifications were analyzed using generalized linear models with a binomial distribution. Estimates of heritability were greatest for racing time, which ranged from 0.32 to 0.34. Estimates of heritability were low for single earnings with all distributions, ranging from 0.01 to 0.09. Annual earnings were closer to normal distribution than single earnings. Heritability estimates were moderate for annual earnings on the fourth-root scale, 0.19 for Finnhorses and 0.27 for Standardbred trotters. Heritability estimates for binomial racing success variables ranged from 0.04 to 0.12, being greatest for winnings and least for breaking stride. Genetic correlations among racing traits were high, whereas phenotypic correlations were mainly low to moderate, except correlations between racing time and earnings were high. On the basis of a moderate heritability and moderate to high repeatability for racing time and annual earnings, selection of horses for these traits is effective when based on a few repeated records. Because of high genetic correlations, direct selection for racing time and annual earnings would also result in good genetic response in racing success.
Seasonal control skylight glazing panel with passive solar energy switching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, J.V.
1983-10-25
A substantially transparent one-piece glazing panel is provided for generally horizontal mounting in a skylight. The panel is comprised of a repeated pattern of two alternating and contiguous linear optical elements: the first optical element is an upstanding, generally right-triangular linear prism, and the second is an upward-facing plano-cylindrical lens in which the planar surface is reflectively opaque and lies generally in the same plane as the base of the triangular prism.
7 CFR 227.1 - General purpose and scope.
Code of Federal Regulations, 2013 CFR
2013-01-01
... Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE CHILD NUTRITION PROGRAMS NUTRITION EDUCATION AND TRAINING PROGRAM General § 227.1 General purpose and scope. The purpose of these regulations is to implement section 19 of the Child Nutrition Act...
21 CFR 864.4010 - General purpose reagent.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false General purpose reagent. 864.4010 Section 864.4010 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) MEDICAL DEVICES HEMATOLOGY AND PATHOLOGY DEVICES Specimen Preparation Reagents § 864.4010 General purpose...
NASA Astrophysics Data System (ADS)
Yamamoto, H.; Nakajima, K.; Zhang, K.; Nanai, S.
2015-12-01
Powerful numerical codes capable of modeling complex coupled physical and chemical processes have been developed for predicting the fate of CO2 in reservoirs as well as its potential impacts on groundwater and subsurface environments. However, they are often computationally demanding when solving highly non-linear models at sufficient spatial and temporal resolution. Geological heterogeneity and uncertainty further increase the challenges: two-phase flow simulations in heterogeneous media usually require much longer computational times than those in homogeneous media, and uncertainties in reservoir properties may necessitate stochastic simulations with multiple realizations. Recently, massively parallel supercomputers with thousands of processors have become available to the scientific and engineering communities. Such supercomputers may attract attention from geoscientists and reservoir engineers for solving large, non-linear models at higher resolution within a reasonable time. To make them a useful tool, however, several practical obstacles must be tackled so that general-purpose reservoir simulators can use large numbers of processors effectively. We have implemented massively parallel versions of two TOUGH2-family codes (the multi-phase flow simulator TOUGH2 and the chemically reactive transport simulator TOUGHREACT) on two different types (vector and scalar) of supercomputers with a thousand to tens of thousands of processors. After completing implementation and extensive tune-up on the supercomputers, computational performance was measured for three simulations with multi-million-cell grid models, including a simulation of the dissolution-diffusion-convection process, which requires high spatial and temporal resolution to capture the growth of small convective fingers of CO2-dissolved water into larger ones at reservoir scale.
The performance measurements confirmed that both simulators exhibit excellent scalability, showing almost linear speedup with the number of processors up to over ten thousand cores. This generally allows coupled multi-physics (THC) simulations on high-resolution geologic models with multi-million-cell grids to be performed in a practical time (e.g., less than a second per time step).
Two-dimensional models for the optical response of thin films
NASA Astrophysics Data System (ADS)
Li, Yilei; Heinz, Tony F.
2018-04-01
In this work, we present a systematic study of 2D optical models for the response of thin layers of material under excitation by normally incident light. The treatment, within the framework of classical optics, analyzes a thin film supported by a semi-infinite substrate, with both the thin layer and the substrate assumed to exhibit local, isotropic linear response. Starting from the conventional three-dimensional (3D) slab model of the system, we derive a two-dimensional (2D) sheet model for the thin film in which the optical response is described by a sheet optical conductivity. We develop criteria for the applicability of this 2D sheet model for a layer with an optical thickness far smaller than the wavelength of the light. We examine in detail atomically thin semi-metallic and semiconductor van-der-Waals layers and ultrathin metal films as representative examples. Excellent agreement of the 2D sheet model with the 3D slab model is demonstrated over a broad spectral range from the radio frequency limit to the near ultraviolet. A linearized version of system response for the 2D model is also presented for the case where the influence of the optically thin layer is sufficiently weak. Analytical expressions for the applicability and accuracy of the different optical models are derived, and the appropriateness of the linearized treatment for the materials is considered. We discuss the advantages, as well as limitations, of these models for the purpose of deducing the optical response function of the thin layer from experiment. We generalize the theory to take into account in-plane anisotropy, layered thin film structures, and more general substrates. Implications of the 2D model for the transmission of light by the thin film and for the implementation of half- and totally absorbing layers are discussed.
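One common form of the 2D sheet model at normal incidence is the modified Fresnel coefficients in which the sheet conductivity enters the interface conditions. The sketch below assumes that convention (amplitude coefficients for light incident from a medium of index n1 onto a sheet of conductivity sigma_s at the interface with a substrate of index n2); the paper's own conventions may differ in sign or normalization:

```python
Z0 = 376.730313668  # impedance of free space, in ohms

def sheet_coefficients(n1, n2, sigma_s):
    """Normal-incidence amplitude reflection and transmission for a 2D
    conducting sheet (sheet conductivity sigma_s, in siemens) at the
    interface between non-absorbing media of refractive index n1 and n2.
    Setting sigma_s = 0 recovers the bare Fresnel coefficients."""
    denom = n1 + n2 + Z0 * sigma_s
    r = (n1 - n2 - Z0 * sigma_s) / denom
    t = 2.0 * n1 / denom
    return r, t

# Sanity check: no sheet, air-to-glass interface.
r0, t0 = sheet_coefficients(1.0, 1.5, 0.0)
```

As a rough consistency check, a free-standing sheet with conductivity near the universal optical conductance of graphene (about 6.1e-5 S, an assumed value here) should transmit roughly 97.7% of the intensity.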
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1997-01-01
A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.
Efficient least angle regression for identification of linear-in-the-parameters models
Beach, Thomas H.; Rezgui, Yacine
2017-01-01
Least angle regression, as a promising model selection method, differentiates itself from conventional stepwise and stagewise methods, in that it is neither too greedy nor too slow. It is closely related to L1 norm optimization, which has the advantage of low prediction variance through sacrificing part of the model bias property in order to enhance model generalization capability. In this paper, we propose an efficient least angle regression algorithm for model selection for a large class of linear-in-the-parameters models with the purpose of accelerating the model selection process. The entire algorithm works completely in a recursive manner, where the correlations between model terms and residuals, the evolving directions and other pertinent variables are derived explicitly and updated successively at every subset selection step. The model coefficients are only computed when the algorithm finishes. Direct matrix inversions are thereby avoided. A detailed computational complexity analysis indicates that the proposed algorithm possesses significant computational efficiency, compared with the original approach in which the well-known efficient Cholesky decomposition is involved in solving least angle regression. Three artificial and real-world examples are employed to demonstrate the effectiveness, efficiency and numerical stability of the proposed algorithm. PMID:28293140
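The flavor of the correlation-with-residual bookkeeping can be conveyed by incremental forward-stagewise regression, a closely related but much simpler procedure (this is a generic sketch, not the paper's recursive least angle regression algorithm):

```python
def stagewise(X, y, step=0.01, iters=2000):
    """Incremental forward-stagewise regression: at each iteration, nudge
    the coefficient of the predictor most correlated with the current
    residual by a small fixed step. Least angle regression can be viewed
    as the limit of this procedure as the step size shrinks to zero."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    resid = list(y)
    for _ in range(iters):
        # correlation (unnormalized) of each column with the residual
        corr = [sum(X[i][j] * resid[i] for i in range(n)) for j in range(p)]
        j = max(range(p), key=lambda k: abs(corr[k]))
        s = step if corr[j] > 0 else -step
        beta[j] += s
        for i in range(n):
            resid[i] -= s * X[i][j]
    return beta

# Orthogonal toy design: y depends only on the first predictor.
X = [[1, 1], [1, -1], [-1, 1], [-1, -1]]
y = [2, 2, -2, -2]
beta = stagewise(X, y)
```

With orthogonal columns the second coefficient is never touched, since its correlation with the residual stays exactly zero.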
Existing methods for improving the accuracy of digital-to-analog converters
NASA Astrophysics Data System (ADS)
Eielsen, Arnfinn A.; Fleming, Andrew J.
2017-09-01
The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
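The effect of large periodic dithering can be demonstrated on a toy converter. Everything below is hypothetical (the 3-bit level-mismatch table, the triangular dither, the averaging period); it only illustrates why adding a multi-LSB dither before quantization and subtracting it afterwards averages the output across several mismatched levels:

```python
def quantize(v, levels):
    """Pick the nearest available output level (a static-mismatch DAC)."""
    return min(levels, key=lambda q: abs(q - v))

# Hypothetical 3-bit converter: ideal codes 0..7 plus small static level errors.
errors = [0.0, 0.04, -0.03, 0.05, -0.02, 0.03, -0.04, 0.0]
levels = [i + e for i, e in zip(range(8), errors)]

def dithered_output(v, amplitude=1.5, n=256):
    """Average the converter output over one period of a triangular dither.
    The dither is added before quantization and subtracted afterwards, so
    each sample is spread over several output levels and both quantization
    error and level mismatch are averaged down."""
    total = 0.0
    for k in range(n):
        phase = k / n
        d = amplitude * (4.0 * abs(phase - 0.5) - 1.0)  # triangle in [-A, A]
        total += quantize(v + d, levels) - d
    return total / n
```

Direct quantization of a mid-code input such as 3.5 lands on a single mismatched level (error near half an LSB), while the dithered average recovers the value to a small fraction of an LSB.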
TIME CALIBRATED OSCILLOSCOPE SWEEP
Owren, H.M.; Johnson, B.M.; Smith, V.L.
1958-04-22
A time calibrator for an electric signal displayed on an oscilloscope is described. In contrast to the conventional technique of using time-calibrated divisions on the face of the oscilloscope, this invention provides means for directly superimposing equally time-spaced markers upon a signal displayed upon an oscilloscope. More explicitly, the present invention includes generally a generator for developing a linear saw-tooth voltage and a circuit for combining a high-frequency sinusoidal voltage of suitable amplitude and frequency with the saw-tooth voltage to produce a resultant sweep deflection voltage whose wave shape is substantially linear with respect to time between equally time-spaced incremental plateau regions occurring once each cycle of the sinusoidal voltage. The foregoing sweep voltage, when applied to the horizontal deflection plates in combination with a signal to be observed applied to the vertical deflection plates of a cathode ray oscilloscope, produces an image on the viewing screen which is essentially a display of the signal to be observed with respect to time. Intensified spots, or certain other conspicuous indications corresponding to the equally time-spaced plateau regions of said sweep voltage, appear superimposed upon said displayed signal, and these indications are therefore suitable for direct time calibration purposes.
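The slope condition implied by the description can be sketched numerically. Assuming the sinusoid's peak slope is chosen to exactly cancel the ramp rate (my reading of the patent's "suitable amplitude and frequency"), the total slope is ramp_rate*(1 + cos(2*pi*f*t)), which vanishes once per cycle at equally spaced instants:

```python
import math

def sweep_voltage(t, ramp_rate=1.0, freq=10.0):
    """Composite sweep: a linear ramp plus a sinusoid whose peak slope
    exactly cancels the ramp rate. The total slope is
    ramp_rate * (1 + cos(2*pi*freq*t)), so a flat plateau occurs once per
    cycle, at the equally spaced instants t = (k + 1/2) / freq."""
    amp = ramp_rate / (2.0 * math.pi * freq)  # slope-cancellation amplitude
    return ramp_rate * t + amp * math.sin(2.0 * math.pi * freq * t)
```

Between plateaus the sweep is nearly linear in time, which is what makes the superimposed markers usable as a direct time calibration.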
Cyran, Krzysztof A.
2018-01-01
This work considers the problem of utilizing electroencephalographic signals in systems designed for monitoring and enhancing the performance of aircraft pilots. Systems with such capabilities are generally referred to as cognitive cockpits. This article describes the potential carried by such systems, especially in terms of increasing flight safety, and presents the neuropsychological background of the problem. The research focused mainly on discriminating between states of brain activity related to idle but focused anticipation of a visual cue and the reaction to it, and in particular on selecting a proper classification algorithm for such problems. For that purpose an experiment involving 10 subjects was planned and conducted. Experimental electroencephalographic data were acquired using an Emotiv EPOC+ headset. The proposed methodology involved the Common Spatial Pattern method, popular in biomedical signal processing, extraction of band-power features, and an extensive test of different classification algorithms: Linear Discriminant Analysis, k-nearest neighbors, Support Vector Machines with linear and radial basis function kernels, Random Forests, and Artificial Neural Networks. PMID:29849544
A First Assessment of Two-Beam Linear Colliders and Longer-Term Two-Beam R& D Issues at SLAC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loew, Greg
2001-06-05
The purpose of this document is to summarize the work that has been done at SLAC in the last three or four months to assess the possibilities of the two-beam linear colliders proposed by Ron Ruth, and to compare these colliders to the current NLC designs and their costs. The work is based on general discussions with C. Adolphsen, D. Burke, J. Irwin, J. Paterson, R. Ruth, T. Lavine and T. Raubenheimer, with considerable work done by the latter two. Given the complexities of these machines, the fact that the designs are far from complete, and that all cost estimates are still in a state of flux, it is clear that the conclusions drawn in this report cannot be cast in concrete. On the other hand, it does not seem too early to present the results that have been gathered so far, even if the facts contain significant uncertainties and the costs have large error bars. Now that R. Ruth has returned to SLAC, he will be able to add his point of view to the discussion. At this time, the conclusions presented here are the sole responsibility of the author.
Performance testing and results of the first Etec CORE-2564
NASA Astrophysics Data System (ADS)
Franks, C. Edward; Shikata, Asao; Baker, Catherine A.
1993-03-01
In order to write 64-megabit DRAM reticles, to prepare to write 256-megabit DRAM reticles, and in general to meet current and next-generation mask and reticle quality requirements, Hoya Micro Mask (HMM) installed the first CORE-2564 Laser Reticle Writer from Etec Systems, Inc. in 1991. The system was delivered as a CORE-2500XP and was subsequently upgraded to a 2564. The CORE (Custom Optical Reticle Engraver) system produces photomasks with an exposure strategy similar to that employed by an electron-beam system, but it uses a laser beam to deliver the photoresist exposure energy. Since then the 2564 has been tested by Etec's standard Acceptance Test Procedure and by several supplementary HMM techniques to ensure performance to all the Etec advertised specifications and certain additional HMM requirements that were more demanding and/or more thorough than the advertised specifications. The primary purpose of the HMM tests was to more closely duplicate mask usage. The performance aspects covered by the tests include registration accuracy and repeatability; linewidth accuracy, uniformity, and linearity; stripe butting; stripe and scan linearity; edge quality; system cleanliness; minimum geometry resolution; minimum address size; and plate loading accuracy and repeatability.
Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap
NASA Astrophysics Data System (ADS)
Spiwok, Vojtěch; Králová, Blanka
2011-12-01
Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and corresponding transition structures inaccessible by an unbiased simulation. This scheme allows essentially any parameter of the system to be used as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general-purpose mapping for dimensionality reduction, beyond the context of molecular modeling.
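The core metadynamics idea the abstract relies on, periodic deposition of repulsive Gaussian hills along a collective variable, can be sketched in one dimension. This is a generic illustration, not the authors' 3D Isomap-embedded setup; the double-well potential, hill height/width, and integration parameters are all invented placeholders.

```python
import numpy as np

def metadynamics_1d(force, x0, n_steps, dt=1e-3, hill_height=0.5,
                    hill_width=0.2, deposit_every=50, kT=1.0, seed=0):
    """Minimal 1D metadynamics sketch: overdamped Langevin dynamics plus
    periodic deposition of Gaussian hills at the visited collective-variable
    values. All parameters here are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    centers, x, traj = [], x0, []
    for step in range(n_steps):
        # Bias force: minus the derivative of the sum of deposited hills.
        bias_f = 0.0
        for c in centers:
            d = x - c
            bias_f += hill_height * d / hill_width**2 * np.exp(-d**2 / (2 * hill_width**2))
        # Euler-Maruyama step on the biased potential.
        x += dt * (force(x) + bias_f) + np.sqrt(2 * kT * dt) * rng.normal()
        traj.append(x)
        if step % deposit_every == 0:
            centers.append(x)
    return np.array(traj), np.array(centers)

# Double-well potential U(x) = (x^2 - 1)^2, so force = -dU/dx = -4x(x^2 - 1).
traj, hills = metadynamics_1d(lambda x: -4 * x * (x**2 - 1), x0=-1.0, n_steps=5000)
```

As the hills accumulate, the bias fills the starting well and pushes the walker over barriers, which is how the paper's simulation reaches conformations inaccessible to an unbiased run.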
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
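The "exploding" step the abstract mentions can be sketched concretely: each subject's survival record is split into one row per baseline-hazard piece, with a binary event indicator and a log-exposure offset, which is exactly the format a Poisson GLMM then consumes. A minimal sketch, assuming illustrative cut points; this is not the %PCFrailty macro itself.

```python
import math

def explode_survival(time, event, cuts):
    """Split one survival record into piecewise-exponential pieces.
    Returns (interval_index, event_indicator, log_exposure) triples, the
    row format needed to fit the frailty model as a Poisson GLMM with an
    offset. Cut points and variable names are illustrative assumptions."""
    rows = []
    start = 0.0
    for j, end in enumerate(cuts):
        if time <= start:
            break  # subject already failed or was censored before this piece
        exposure = min(time, end) - start
        died_here = 1 if (event == 1 and time <= end) else 0
        rows.append((j, died_here, math.log(exposure)))
        start = end
    return rows

# A subject who dies at t = 2.5, with pieces [0,1), [1,2), [2,3):
rows = explode_survival(2.5, 1, cuts=[1.0, 2.0, 3.0])
```

The subject contributes full exposure to the first two pieces (no event) and half a unit of exposure plus the event to the third, mirroring the data expansion "by the number of pieces" described above.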
1980-06-01
COMPUTERIZED GENERAL PURPOSE INFORMATION MANAGEMENT SYSTEM (SELGEM) TO MEDICALLY IMPORTANT ARTHROPODS (DIPTERA: CULICIDAE) Annual Report Terry L. Erwin June...GENERAL PURPOSE INFORMATION MANAGEMENT SYSTEM Annual--1 September 1979- (SELGEM) TO MEDICALLY IMPORTANT ARTHROPODS 30 May 1980 (DIPTERA: CULICIDAE) 6
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
ERIC Educational Resources Information Center
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…
ERIC Educational Resources Information Center
Li, Deping; Oranje, Andreas
2007-01-01
Two versions of a general method for approximating standard error of regression effect estimates within an IRT-based latent regression model are compared. The general method is based on Binder's (1983) approach, accounting for complex samples and finite populations by Taylor series linearization. In contrast, the current National Assessment of…
Parameter Recovery for the 1-P HGLLM with Non-Normally Distributed Level-3 Residuals
ERIC Educational Resources Information Center
Kara, Yusuf; Kamata, Akihito
2017-01-01
A multilevel Rasch model using a hierarchical generalized linear model is one approach to multilevel item response theory (IRT) modeling and is referred to as a one-parameter hierarchical generalized linear logistic model (1-P HGLLM). Although it has the flexibility to model nested structure of data with covariates, the model assumes the normality…
Extending local canonical correlation analysis to handle general linear contrasts for FMRI data.
Jin, Mingwu; Nandy, Rajesh; Curran, Tim; Cordes, Dietmar
2012-01-01
Local canonical correlation analysis (CCA) is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM), a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR) and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
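The GLM contrast test that the paper extends into the CCA formalism is the standard t-test on a linear combination c'beta of regression coefficients. The sketch below is that textbook computation, shown only to make the baseline concrete; it is not the authors' directional CCA statistic, and the design matrix is synthetic.

```python
import numpy as np

def glm_contrast_t(X, y, c):
    """t-statistic for the general linear contrast c'beta in y = X beta + e.
    Standard GLM machinery, included to illustrate what the CCA extension
    generalizes; not the authors' code."""
    n, p = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y                 # OLS estimate
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)         # unbiased noise variance
    se = np.sqrt(c @ XtX_inv @ c * sigma2)   # standard error of c'beta
    return (c @ beta) / se

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=50)
t = glm_contrast_t(X, y, c=np.array([0.0, 1.0, -1.0]))  # tests beta1 - beta2 = 0
```

Because the true coefficient difference is 2, the statistic is large; the point of the paper is to get this kind of contrast inference inside CCA without re-estimating the CCA solution per contrast.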
NASA Astrophysics Data System (ADS)
Fan, Zuhui
2000-01-01
The linear bias of the dark halos from a model under the Zeldovich approximation is derived and compared with the fitting formula of simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description for the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty of the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for the large-scale structure formation in the universe.
Application of General Regression Neural Network to the Prediction of LOD Change
NASA Astrophysics Data System (ADS)
Zhang, Xiao-Hong; Wang, Qi-Jie; Zhu, Jian-Jun; Zhang, Hao
2012-01-01
Traditional methods for predicting the change in length of day (LOD change) are mainly based on linear models, such as the least squares model and the autoregression model. However, the LOD change comprises complicated non-linear factors, and the prediction performance of the linear models is often not ideal. Thus, a non-linear neural network, the general regression neural network (GRNN), is applied to the prediction of the LOD change, and the result is compared with the predictions obtained from the BP (back propagation) neural network model and other models. The comparison shows that applying the GRNN to the prediction of the LOD change is effective and feasible.
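A GRNN (Specht's general regression neural network) is, computationally, a normalized radial-basis estimator: predictions are kernel-weighted averages of the training targets. The following is a generic sketch on a toy function, not the paper's LOD model; the smoothing parameter sigma is an illustrative choice.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """General regression neural network: a normalized radial-basis
    (Nadaraya-Watson) estimator. Generic sketch, not the paper's model."""
    X_train = np.atleast_2d(X_train)
    X_query = np.atleast_2d(X_query)
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)     # squared distances
        w = np.exp(-d2 / (2 * sigma**2))            # Gaussian kernel weights
        preds.append(np.sum(w * y_train) / np.sum(w))
    return np.array(preds)

# Toy nonlinear target: recover sin(x) at x = pi/2 from 40 samples.
X = np.linspace(0, 2 * np.pi, 40).reshape(-1, 1)
y = np.sin(X).ravel()
yhat = grnn_predict(X, y, [[np.pi / 2]], sigma=0.3)
```

The only free parameter is sigma, which is one reason GRNNs are attractive for short, noisy geophysical series like LOD change.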
Nonlinear Model Predictive Control for Cooperative Control and Estimation
NASA Astrophysics Data System (ADS)
Ru, Pengkai
Recent advances in computational power have made it possible to do expensive online computations for control systems. It is becoming more realistic to perform computationally intensive optimization schemes online on systems that are not intrinsically stable and/or have very small time constants. As one of the most important optimization-based control approaches, model predictive control (MPC) has attracted a lot of interest from the research community due to its natural ability to incorporate constraints into its control formulation. Linear MPC has been well researched and its stability can be guaranteed in the majority of its application scenarios. However, one issue that remains with linear MPC is that it completely ignores the system's inherent nonlinearities, thus giving a sub-optimal solution. On the other hand, if achievable, nonlinear MPC would naturally yield a globally optimal solution and take into account all the innate nonlinear characteristics. While an exact solution to a nonlinear MPC problem remains extremely computationally intensive, if not impossible, one might wonder if there is a middle ground between the two. This dissertation tries to strike that balance by employing a state representation technique, namely, the state-dependent coefficient (SDC) representation. This technique renders improved optimality compared to linear MPC while still keeping the problem tractable. In fact, the computational power required is bounded by only a constant factor of the completely linearized MPC. The purpose of this research is to provide a theoretical framework for the design of a specific kind of nonlinear MPC controller and its extension into a general cooperative scheme. The controller is designed and implemented on quadcopter systems.
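The SDC representation mentioned above rewrites nonlinear dynamics x' = f(x) in the pseudo-linear form x' = A(x)x, so that linear MPC machinery can be applied pointwise. A minimal sketch on an invented Duffing-like system; the factorization is illustrative (and, as is typical of SDC, not unique), not taken from the dissertation.

```python
import numpy as np

def f(x):
    # Duffing-like dynamics: x1' = x2, x2' = -x1 - x1^3 - 0.5*x2
    return np.array([x[1], -x[0] - x[0] ** 3 - 0.5 * x[1]])

def A_sdc(x):
    # One (non-unique) SDC factorization of f: A(x) @ x == f(x).
    # The cubic term -x1^3 is absorbed into the (2,1) entry as -x1^2 * x1.
    return np.array([[0.0, 1.0],
                     [-1.0 - x[0] ** 2, -0.5]])

x = np.array([0.7, -1.2])
assert np.allclose(A_sdc(x) @ x, f(x))  # the factorization reproduces f exactly
```

At each MPC step one freezes A at the current state and solves a linear problem, which is why the cost stays within a constant factor of linearized MPC.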
Hernández Alava, Mónica; Wailoo, Allan; Wolfe, Fred; Michaud, Kaleb
2014-10-01
Analysts frequently estimate health state utility values from other outcomes. Utility values like EQ-5D have characteristics that make standard statistical methods inappropriate. We have developed a bespoke, mixture model approach to directly estimate EQ-5D. An indirect method, "response mapping," first estimates the level on each of the 5 dimensions of the EQ-5D and then calculates the expected tariff score. These methods have never previously been compared. We use a large observational database from patients with rheumatoid arthritis (N = 100,398). Direct estimation of UK EQ-5D scores as a function of the Health Assessment Questionnaire (HAQ), pain, and age was performed with a limited dependent variable mixture model. Indirect modeling was undertaken with a set of generalized ordered probit models with expected tariff scores calculated mathematically. Linear regression was reported for comparison purposes. Impact on cost-effectiveness was demonstrated with an existing model. The linear model fits poorly, particularly at the extremes of the distribution. The bespoke mixture model and the indirect approaches improve fit over the entire range of EQ-5D. Mean average error is 10% and 5% lower compared with the linear model, respectively. Root mean squared error is 3% and 2% lower. The mixture model demonstrates superior performance to the indirect method across almost the entire range of pain and HAQ. These lead to differences in cost-effectiveness of up to 20%. There are limited data from patients in the most severe HAQ health states. Modeling of EQ-5D from clinical measures is best performed directly using the bespoke mixture model. This substantially outperforms the indirect method in this example. Linear models are inappropriate, suffer from systematic bias, and generate values outside the feasible range. © The Author(s) 2013.
A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Leonov, Arkady I.
2002-01-01
The present series of three consecutive papers develops a general theory for linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important for many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there are still reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e., different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for the different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations and creates a simple mathematical framework for both continuum and molecular theories of the thermo-rheologically complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long-time (discrete) and short-time (continuous) descriptions of relaxation behaviors for polymers in the rubbery and glassy regions.
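The discrete relaxation spectrum referred to above is conventionally written as a Prony series, G(t) = G_inf + sum_i g_i exp(-t/tau_i). A small numerical sketch with invented coefficients (not material data from the paper):

```python
import numpy as np

def relaxation_modulus(t, g_inf, g, tau):
    """Prony-series relaxation modulus G(t) = G_inf + sum_i g_i exp(-t/tau_i),
    the standard discrete-spectrum form of linear viscoelasticity.
    Coefficients below are illustrative, not fitted material data."""
    t = np.asarray(t, dtype=float)
    return g_inf + sum(gi * np.exp(-t / ti) for gi, ti in zip(g, tau))

# Two relaxation modes decaying toward the equilibrium modulus G_inf = 1.
t = np.array([0.0, 1.0, 10.0])
G = relaxation_modulus(t, g_inf=1.0, g=[2.0, 0.5], tau=[1.0, 10.0])
```

Thermo-rheological complexity, in these terms, means the tau_i shift differently with temperature in different parts of the spectrum, which is what the paper's interaction-matrix structure is built to model.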
General job stress: a unidimensional measure and its non-linear relations with outcome variables.
Yankelevich, Maya; Broadfoot, Alison; Gillespie, Jennifer Z; Gillespie, Michael A; Guidroz, Ashley
2012-04-01
This article aims to examine the non-linear relations between a general measure of job stress [Stress in General (SIG)] and two outcome variables: intentions to quit and job satisfaction. In so doing, we also re-examine the factor structure of the SIG and determine that, as a two-factor scale, it obscures non-linear relations with outcomes. Thus, in this research, we not only test for non-linear relations between stress and outcome variables but also present an updated version of the SIG scale. Using two distinct samples of working adults (sample 1, N = 589; sample 2, N = 4322), results indicate that a more parsimonious eight-item SIG has better model-data fit than the 15-item two-factor SIG and that the eight-item SIG has non-linear relations with job satisfaction and intentions to quit. Specifically, the revised SIG has an inverted curvilinear J-shaped relation with job satisfaction such that job satisfaction drops precipitously after a certain level of stress; the SIG has a J-shaped curvilinear relation with intentions to quit such that turnover intentions increase exponentially after a certain level of stress. Copyright © 2011 John Wiley & Sons, Ltd.
Meta-Analysis in Higher Education: An Illustrative Example Using Hierarchical Linear Modeling
ERIC Educational Resources Information Center
Denson, Nida; Seltzer, Michael H.
2011-01-01
The purpose of this article is to provide higher education researchers with an illustrative example of meta-analysis utilizing hierarchical linear modeling (HLM). This article demonstrates the step-by-step process of meta-analysis using a recently-published study examining the effects of curricular and co-curricular diversity activities on racial…
Lines of Eigenvectors and Solutions to Systems of Linear Differential Equations
ERIC Educational Resources Information Center
Rasmussen, Chris; Keynes, Michael
2003-01-01
The purpose of this paper is to describe an instructional sequence where students invent a method for locating lines of eigenvectors and corresponding solutions to systems of two first order linear ordinary differential equations with constant coefficients. The significance of this paper is two-fold. First, it represents an innovative alternative…
Statistical Methodology for the Analysis of Repeated Duration Data in Behavioral Studies
ERIC Educational Resources Information Center
Letué, Frédérique; Martinez, Marie-José; Samson, Adeline; Vilain, Anne; Vilain, Coriandre
2018-01-01
Purpose: Repeated duration data are frequently used in behavioral studies. Classical linear or log-linear mixed models are often inadequate to analyze such data, because they usually consist of nonnegative and skew-distributed variables. Therefore, we recommend use of a statistical methodology specific to duration data. Method: We propose a…
Using Cognitive Tutor Software in Learning Linear Algebra Word Concept
ERIC Educational Resources Information Center
Yang, Kai-Ju
2015-01-01
This paper reports on a study of twelve 10th grade students using Cognitive Tutor, a math software program, to learn linear algebra word concept. The study's purpose was to examine whether students' mathematics performance as it is related to using Cognitive Tutor provided evidence to support Koedlinger's (2002) four instructional principles used…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yadava, G; Imai, Y; Hsieh, J
2014-06-15
Purpose: Quantitative accuracy of the Iodine Hounsfield Unit (HU) in conventional single-kVp scanning is susceptible to the beam-hardening effect. Dual-energy CT has unique capabilities of quantification using monochromatic CT images, but this scanning mode requires the availability of a state-of-the-art CT scanner and, therefore, is limited in routine clinical practice. The purpose of this work was to develop a beam-hardening correction (BHC) for single-kVp CT that can linearize Iodine projections at any nominal energy, apply this approach to study Iodine response with respect to keV, and compare with dual-energy based monochromatic images obtained from material decomposition using 80 kVp and 140 kVp. Methods: Tissue characterization phantoms (Gammex Inc.), containing solid-Iodine inserts of different concentrations, were scanned using a GE multi-slice CT scanner at 80, 100, 120, and 140 kVp. A model-based BHC algorithm was developed in which Iodine was estimated using re-projection of the image volume and corrected through an iterative process. In the correction, the re-projected Iodine was linearized using a polynomial mapping between monochromatic path-lengths at various nominal energies (40 to 140 keV) and physically modeled polychromatic path-lengths. The beam-hardening-corrected 80 kVp and 140 kVp images (linearized approximately at the effective energy of the beam) were used for dual-energy material decomposition in a Water-Iodine basis pair, followed by generation of monochromatic images. Characterization of Iodine HU and noise in the images obtained from single-kVp with BHC at various nominal keV, and in corresponding dual-energy monochromatic images, was carried out. Results: The Iodine HU vs. keV responses from single-kVp with BHC and from dual-energy monochromatic images were found to be very similar, indicating that single-kVp data may be used to create a material-specific monochromatic equivalent using model-based projection linearization.
Conclusion: This approach may enable quantification of Iodine contrast enhancement and potential reduction in injected contrast without using dual-energy scanning. However, in general, dual-energy scanning has unique value in material characterization and quantification, and its value cannot be discounted. GE Healthcare Employee.
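The projection-linearization step at the heart of this abstract can be illustrated with a toy two-energy spectrum: a polychromatic projection is nonlinear in material thickness, and a fitted polynomial maps it onto the (exactly linear) monochromatic projection at a chosen keV. The attenuation coefficients, spectral weights, and polynomial degree below are all invented for illustration, not the authors' physics model.

```python
import numpy as np

thickness = np.linspace(0.0, 5.0, 50)          # cm of material
mu = {40.0: 0.8, 80.0: 0.3}                    # toy attenuation coefficients (1/cm)
weights = {40.0: 0.4, 80.0: 0.6}               # toy spectral weights

# Polychromatic projection: -log of the spectrum-weighted transmission.
trans = sum(w * np.exp(-mu[e] * thickness) for e, w in weights.items())
p_poly = -np.log(trans)

# Monochromatic target at 40 keV is exactly linear in thickness.
p_mono = mu[40.0] * thickness

# Fit the linearizing polynomial and apply it as the correction.
coeffs = np.polyfit(p_poly, p_mono, deg=3)
p_corrected = np.polyval(coeffs, p_poly)
```

After the mapping, the corrected projections track the monochromatic line closely, which is the sense in which single-kVp data can stand in for a monochromatic equivalent.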
NASA Technical Reports Server (NTRS)
Park, K. C.; Belvin, W. Keith
1990-01-01
A general form for the first-order representation of the continuous second-order linear structural-dynamics equations is introduced to derive a corresponding form of first-order continuous Kalman filtering equations. Time integration of the resulting equations is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete Kalman filtering equations involving only symmetric sparse N x N solution matrices.
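The starting point described in the abstract, casting the second-order structural dynamics M q'' + D q' + K q = f into a first-order system z' = A z + B f with z = [q, q'], can be sketched directly. The 2-DOF matrices below are an arbitrary illustration, not from the paper.

```python
import numpy as np

# Second-order structural model: M q'' + D q' + K q = f
M = np.array([[2.0, 0.0], [0.0, 1.0]])   # mass
D = np.array([[0.1, 0.0], [0.0, 0.1]])   # damping
K = np.array([[4.0, -1.0], [-1.0, 3.0]]) # stiffness

# One standard first-order form with state z = [q, q'].
Minv = np.linalg.inv(M)
n = M.shape[0]
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K, -Minv @ D]])
B = np.vstack([np.zeros((n, n)), Minv])
```

The paper's contribution is to choose the undetermined matrices in this representation so that the discrete Kalman filter recursion collapses back to symmetric, sparse N x N solution matrices instead of the dense 2N x 2N ones this naive form suggests.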
Second-order discrete Kalman filtering equations for control-structure interaction simulations
NASA Technical Reports Server (NTRS)
Park, K. C.; Belvin, W. Keith; Alvin, Kenneth F.
1991-01-01
A general form for the first-order representation of the continuous, second-order linear structural dynamics equations is introduced in order to derive a corresponding form of first-order Kalman filtering equations (KFE). Time integration of the resulting first-order KFE is carried out via a set of linear multistep integration formulas. It is shown that a judicious combined selection of computational paths and the undetermined matrices introduced in the general form of the first-order linear structural systems leads to a class of second-order discrete KFE involving only symmetric, N x N solution matrix.
Implementing Linear Algebra Related Algorithms on the TI-92+ Calculator.
ERIC Educational Resources Information Center
Alexopoulos, John; Abraham, Paul
2001-01-01
Demonstrates a less utilized feature of the TI-92+: its natural and powerful programming language. Shows how to implement several linear algebra related algorithms including the Gram-Schmidt process, Least Squares Approximations, Wronskians, Cholesky Decompositions, and Generalized Linear Least Square Approximations with QR Decompositions.…
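One of the algorithms the article implements on the TI-92+, the Gram-Schmidt process, is short enough to sketch in plain Python (here in its modified, numerically steadier variant rather than calculator BASIC):

```python
import math

def gram_schmidt(vectors):
    """Modified Gram-Schmidt orthonormalization: subtract each new vector's
    projections onto the basis built so far, then normalize. A plain-Python
    sketch of the algorithm the article programs on the calculator."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:  # b is already unit length
            proj = sum(wi * bi for wi, bi in zip(w, b))
            w = [wi - proj * bi for wi, bi in zip(w, b)]
        norm = math.sqrt(sum(wi * wi for wi in w))
        if norm > 1e-12:              # skip (nearly) linearly dependent vectors
            basis.append([wi / norm for wi in w])
    return basis

Q = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0]])
```

The resulting vectors are orthonormal, which is also the building block for the QR decompositions the article uses for least-squares approximations.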
Linear and Nonlinear Thinking: A Multidimensional Model and Measure
ERIC Educational Resources Information Center
Groves, Kevin S.; Vance, Charles M.
2015-01-01
Building upon previously developed and more general dual-process models, this paper provides empirical support for a multidimensional thinking style construct comprised of linear thinking and multiple dimensions of nonlinear thinking. A self-report assessment instrument (Linear/Nonlinear Thinking Style Profile; LNTSP) is presented and…
Linearization instability for generic gravity in AdS spacetime
NASA Astrophysics Data System (ADS)
Altas, Emel; Tekin, Bayram
2018-01-01
In general relativity, perturbation theory about a background solution fails if the background spacetime has a Killing symmetry and a compact spacelike Cauchy surface. This failure, dubbed as linearization instability, shows itself as non-integrability of the perturbative infinitesimal deformation to a finite deformation of the background. Namely, the linearized field equations have spurious solutions which cannot be obtained from the linearization of exact solutions. In practice, one can show the failure of the linear perturbation theory by showing that a certain quadratic (integral) constraint on the linearized solutions is not satisfied. For non-compact Cauchy surfaces, the situation is different and for example, Minkowski space having a non-compact Cauchy surface, is linearization stable. Here we study, the linearization instability in generic metric theories of gravity where Einstein's theory is modified with additional curvature terms. We show that, unlike the case of general relativity, for modified theories even in the non-compact Cauchy surface cases, there are some theories which show linearization instability about their anti-de Sitter backgrounds. Recent D dimensional critical and three dimensional chiral gravity theories are two such examples. This observation sheds light on the paradoxical behavior of vanishing conserved charges (mass, angular momenta) for non-vacuum solutions, such as black holes, in these theories.
41 CFR 109-38.5103 - Motor vehicle utilization standards.
Code of Federal Regulations, 2014 CFR
2014-01-01
... are established for DOE as objectives for those motor vehicles operated generally for those purposes for which acquired: (1) Sedans and station wagons, general purpose use—12,000 miles per year. (2) Light trucks (4×2's) and general purpose vehicles, one ton and under (less than 12,500 GVWR)—10,000...
41 CFR 109-38.5103 - Motor vehicle utilization standards.
Code of Federal Regulations, 2012 CFR
2012-01-01
... are established for DOE as objectives for those motor vehicles operated generally for those purposes for which acquired: (1) Sedans and station wagons, general purpose use—12,000 miles per year. (2) Light trucks (4×2's) and general purpose vehicles, one ton and under (less than 12,500 GVWR)—10,000...
76 FR 76954 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-09
... Bombs, 1000 BLU-117 2000lb General Purpose Bombs, 600 BLU-109 2000lb Hard Target Penetrator Bombs, and four BDU-50C inert bombs, fuzes, weapons integration, munitions trainers, personnel training and... kits, 3300 BLU-111 500lb General Purpose Bombs, 1000 BLU-117 2000lb General Purpose Bombs, 600 BLU-109...
41 CFR 60-741.40 - General purpose and applicability of the affirmative action program requirement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 41 Public Contracts and Property Management 1 2014-07-01 2014-07-01 false General purpose and... Property Management Other Provisions Relating to Public Contracts OFFICE OF FEDERAL CONTRACT COMPLIANCE... requirement. (a) General purpose. An affirmative action program is a management tool designed to ensure equal...
1982-07-01
GENERAL PURPOSE INFORMATION MANAGEMENT SYSTEM (SELGEM) TO MEDICALLY IMPORTANT ARTHROPODS (DIPTERA: CULICIDAE) Annual Report Terry L. Erwin July...APPLICATION OF A COMPUTERIZED GENERAL PURPOSE Annual Report INFORMATION MANAGEMENT SYSTEM (SELGEM) TO July 1981 to June 1982 MEDICALLY IMPORTANT ARTHROPODS
Monolithic ceramic analysis using the SCARE program
NASA Technical Reports Server (NTRS)
Manderscheid, Jane M.
1988-01-01
The Structural Ceramics Analysis and Reliability Evaluation (SCARE) computer program calculates the fast fracture reliability of monolithic ceramic components. The code is a post-processor to the MSC/NASTRAN general purpose finite element program. The SCARE program automatically accepts the MSC/NASTRAN output necessary to compute reliability. This includes element stresses, temperatures, volumes, and areas. The SCARE program computes two-parameter Weibull strength distributions from input fracture data for both volume and surface flaws. The distributions can then be used to calculate the reliability of geometrically complex components subjected to multiaxial stress states. Several fracture criteria and flaw types are available for selection by the user, including out-of-plane crack extension theories. The theoretical basis for the reliability calculations was proposed by Batdorf. These models combine linear elastic fracture mechanics (LEFM) with Weibull statistics to provide a mechanistic failure criterion. Other fracture theories included in SCARE are the normal stress averaging technique and the principle of independent action. The objective of this presentation is to summarize these theories, including their limitations and advantages, and to provide a general description of the SCARE program, along with example problems.
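The weakest-link reliability calculation underlying SCARE combines element stresses and volumes with a two-parameter Weibull strength distribution: R = exp(-sum_e V_e (sigma_e / sigma_0)^m). A minimal sketch with invented element data; real codes add the multiaxial fracture criteria and surface-flaw terms the abstract lists.

```python
import numpy as np

def weibull_survival(stresses, volumes, m, sigma0):
    """Two-parameter Weibull fast-fracture reliability for a discretized
    component: R = exp(-sum_e V_e * (sigma_e / sigma0)^m), the weakest-link
    form underlying codes like SCARE. Element data here are illustrative."""
    stresses = np.asarray(stresses, dtype=float)
    volumes = np.asarray(volumes, dtype=float)
    risk = np.sum(volumes * (stresses / sigma0) ** m)
    return np.exp(-risk)

# Two elements; doubling the stress in one element lowers reliability sharply
# because of the high Weibull modulus m.
R1 = weibull_survival([100.0, 120.0], [1.0, 1.0], m=10.0, sigma0=300.0)
R2 = weibull_survival([200.0, 120.0], [1.0, 1.0], m=10.0, sigma0=300.0)
```

The strong sensitivity to local stress (through the exponent m) is why the post-processor needs accurate per-element stresses, temperatures, and volumes from MSC/NASTRAN.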
Adaptation of MSC/NASTRAN to a supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gloudeman, J.F.; Hodge, J.C.
1982-01-01
MSC/NASTRAN is a large-scale general purpose digital computer program which solves a wide variety of engineering analysis problems by the finite element method. The program capabilities include static and dynamic structural analysis (linear and nonlinear), heat transfer, acoustics, electromagnetism and other types of field problems. It is used worldwide by large and small companies in such diverse fields as automotive, aerospace, civil engineering, shipbuilding, offshore oil, industrial equipment, chemical engineering, biomedical research, optics and government research. The paper presents the significant aspects of the adaptation of MSC/NASTRAN to the Cray-1. First, the general architecture and predominant functional use of MSC/NASTRAN are discussed to help explain the imperatives and the challenges of this undertaking. The key characteristics of the Cray-1 which influenced the decision to undertake this effort are then reviewed to help identify performance targets. An overview of the MSC/NASTRAN adaptation effort is then given to help define the scope of the project. Finally, some measures of MSC/NASTRAN's operational performance on the Cray-1 are given, along with a few guidelines to help avoid improper interpretation. 17 references.
Minimization of color shift generated in RGBW quad structure.
NASA Astrophysics Data System (ADS)
Kim, Hong Chul; Yun, Jae Kyeong; Baek, Heume-Il; Kim, Ki Duk; Oh, Eui Yeol; Chung, In Jae
2005-03-01
The purpose of RGBW Quad Structure Technology is to realize higher brightness than that of a normal panel (RGB stripe structure) by adding a white sub-pixel to the existing RGB stripe structure. However, there is a side effect called 'color shift' resulting from the increased brightness. This side effect degrades general color characteristics due to changes of 'Hue', 'Brightness' and 'Saturation' as compared with the existing RGB stripe structure. In particular, skin-tone colors show a tendency to get darker in contrast to the normal panel. We have tried to minimize 'color shift' through use of an LUT (Look Up Table) for linear arithmetic processing of input data, data bit expansion to 12-bit for minimizing arithmetic tolerance, and a brightness weight of the white sub-pixel on each R, G, B pixel. The objective of this study is to minimize and keep the Δu'v' value (commonly used to represent a color difference), the quantitative basis of the color difference between the RGB stripe structure and the RGBW quad structure, below the 0.01 level (existing 0.02 or higher) using the Macbeth ColorChecker, a general reference of color characteristics.
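The Δu'v' figure of merit used above is the Euclidean distance between two points in the CIE 1976 u'v' chromaticity plane. A one-line sketch with made-up chromaticity coordinates, just to make the 0.01 threshold concrete:

```python
import math

def delta_uv(u1, v1, u2, v2):
    """CIE 1976 u'v' chromaticity difference: sqrt((du')^2 + (dv')^2).
    Coordinates below are invented examples, not measured panel data."""
    return math.hypot(u1 - u2, v1 - v2)

# Hypothetical RGB-stripe vs RGBW-quad renderings of the same patch:
d = delta_uv(0.210, 0.480, 0.205, 0.474)
```

A pair of renderings this close (Δu'v' ≈ 0.008) would meet the study's 0.01 target.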
A General Tool for Evaluating High-Contrast Coronagraphic Telescope Performance Error Budgets
NASA Technical Reports Server (NTRS)
Marchen, Luis F.; Shaklan, Stuart B.
2009-01-01
This paper describes a general purpose Coronagraph Performance Error Budget (CPEB) tool that we have developed under the NASA Exoplanet Exploration Program. The CPEB automates many of the key steps required to evaluate the scattered starlight contrast in the dark hole of a space-based coronagraph. It operates in 3 steps: first, a CodeV or Zemax prescription is converted into a MACOS optical prescription. Second, a Matlab program calls ray-trace code that generates linear beam-walk and aberration sensitivity matrices for motions of the optical elements and line-of-sight pointing, with and without controlled coarse and fine-steering mirrors. Third, the sensitivity matrices are imported by macros into Excel 2007 where the error budget is created. Once created, the user specifies the quality of each optic from a predefined set of PSDs. The spreadsheet creates a nominal set of thermal and jitter motions and combines them with the sensitivity matrices to generate an error budget for the system. The user can easily modify the motion allocations to perform trade studies.
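The budget roll-up step described above can be sketched in miniature: a sensitivity matrix maps optic motions to contrast contributions, which are then combined by root-sum-square. All matrix values and motion allocations below are invented placeholders, not numbers from the CPEB.

```python
import numpy as np

# Invented sensitivity matrix: contrast contribution per nm of motion,
# one row per contrast term, one column per optic degree of freedom.
S = np.array([[2.0e-12, 5.0e-13],
              [1.0e-12, 8.0e-13]])
motions = np.array([0.5, 0.2])          # nm, thermal/jitter allocations

contributions = S @ motions             # per-term contrast contributions
total_contrast = np.sqrt(np.sum(contributions ** 2))  # RSS combination
```

Trade studies then amount to rescaling entries of `motions` and watching `total_contrast`, which is essentially what the spreadsheet layer of the tool automates.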
Study on sampling of continuous linear system based on generalized Fourier transform
NASA Astrophysics Data System (ADS)
Li, Huiguang
2003-09-01
In signals-and-systems research, a signal's spectrum and a system's frequency characteristic can be studied through the Fourier transform (FT) and Laplace transform (LT). However, some singular signals, such as the impulse function and the signum signal, are integrable in neither the Riemann nor the Lebesgue sense; in mathematics they are called generalized functions. This paper introduces a new definition, the generalized Fourier transform (GFT), and discusses generalized functions, the Fourier transform, and the Laplace transform within a unified framework. When a continuous linear system is sampled, the paper proposes a new method to judge whether the spectrum will overlap after the generalized Fourier transform. Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results apply to both ordinary sampling and non-Nyquist sampling, and they have practical significance for research on the discretization of continuous linear systems and on non-Nyquist sampling of signals and systems. In particular, the condition for ensuring controllability and observability of MIMO continuous systems given in references 13 and 14 is an example of an application of this paper's results.
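The paper's GFT-based overlap criterion is not reproduced here, but the phenomenon it guards against, spectral overlap (aliasing) when the sampling rate is too low, can be illustrated with ordinary sampling: a 12 Hz sinusoid sampled at 10 Hz is indistinguishable from a 2 Hz one. A self-contained check using a direct DFT:

```python
import math

fs = 10.0          # sampling rate, Hz (below Nyquist for a 12 Hz tone)
f_true = 12.0      # true tone frequency, Hz
n = 200            # number of samples (20 s of data, 0.05 Hz bin spacing)

samples = [math.sin(2 * math.pi * f_true * k / fs) for k in range(n)]

def dft_magnitude(x, k):
    """Magnitude of the k-th DFT bin of a real sequence x."""
    re = sum(x[t] * math.cos(2 * math.pi * k * t / len(x)) for t in range(len(x)))
    im = -sum(x[t] * math.sin(2 * math.pi * k * t / len(x)) for t in range(len(x)))
    return math.hypot(re, im)

# Search the 0..fs/2 half of the spectrum for the dominant bin.
peak_bin = max(range(n // 2), key=lambda k: dft_magnitude(samples, k))
print(f"apparent frequency: {peak_bin * fs / n} Hz")  # 2.0 Hz alias, not 12 Hz
```

The 12 Hz component folds back to |12 − fs| = 2 Hz, exactly the overlap the paper's sampling criterion is designed to detect before it happens.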
Fiber optic sensors and systems at the Federal University of Rio de Janeiro
NASA Astrophysics Data System (ADS)
Werneck, Marcelo M.; dos Santos, Paulo A. M.; Ferreira, Aldo P.; Maggi, Luis E.; de Carvalho, Carlos R., Jr.; Ribeiro, R. M.
1998-08-01
As widely known, fiber optics (FO) are being used in a large variety of sensors and systems, particularly for their small dimensions, low cost, large bandwidth, and favorable dielectric properties. These properties have allowed us to develop sensors and systems for general applications and, particularly, for biomedical engineering. The intravascular pressure sensor was designed for small dimensions and high bandwidth. The system is based on a light-intensity modulation technique and uses a 2 mm-diameter elastomer membrane as the sensor element and a pigtailed laser as a light source. The optical power output curve was linear for pressures within the range of 0 to 300 mmHg. The real-time optical biosensor uses the evanescent-field technique for monitoring Escherichia coli growth in culture media. The optical biosensor monitors interactions between the analyte (bacteria) and the evanescent field of an optical fiber passing through it. The FO-based high-voltage and current sensor is a measuring system designed for monitoring voltage and current in high-voltage transmission lines. The linearity of the system is better than 2% in both ranges of 0 to 25 kV and 0 to 1000 A. The optical flowmeter uses a cross-correlation technique that analyzes two light beams crossing the flow separated by a fixed distance. The x-ray image sensor uses a scintillating FO array, one FO for each image pixel, to form an image of the x-ray field. The systems described in this paper use general-purpose components, including optical fibers and optoelectronic devices, which are readily available and of low cost.
Research progress in fiber optic sensors and systems at the Federal University of Rio de Janeiro
NASA Astrophysics Data System (ADS)
Werneck, Marcelo M.; Ferreira, Aldo P.; Maggi, Luis E.; De Carvalho, C. C.; Ribeiro, R. M.
1999-02-01
As widely known, fiber optics (FO) are being used in a large variety of sensors and systems, particularly for their small dimensions, low cost, large bandwidth, and favorable dielectric properties. These properties have allowed us to develop sensors and systems for general applications and, particularly, for biomedical engineering. The intravascular pressure sensor was designed for small dimensions and high bandwidth. The system is based on a light-intensity modulation technique and uses a 2 mm-diameter elastomer membrane as the sensor element and a pigtailed laser as a light source. The optical power output curve was linear for pressures within the range of 0 to 300 mmHg. The real-time optical biosensor uses the evanescent-field technique for monitoring Escherichia coli growth in culture media. The optical biosensor monitors interactions between the analyte and the evanescent field of an optical fiber passing through it. The FO-based high-voltage and current sensor is a measuring system designed for monitoring voltage and current in high-voltage transmission lines. The linearity of the system is better than 2 percent in both ranges of 0 to 25 kV and 0 to 1000 A. The optical flowmeter uses a cross-correlation technique that analyzes two light beams crossing the flow separated by a fixed distance. The x-ray image sensor uses a scintillating FO array, one FO for each image pixel, to form an image of the x-ray field. The systems described in this paper use general-purpose components, including optical fibers and optoelectronic devices, which are readily available and of low cost.
Diminished autonomic neurocardiac function in patients with generalized anxiety disorder.
Kim, Kyungwook; Lee, Seul; Kim, Jong-Hoon
2016-01-01
Generalized anxiety disorder (GAD) is a chronic and highly prevalent disorder that is characterized by a number of autonomic nervous system symptoms. The purpose of this study was to investigate linear and nonlinear complexity measures of heart rate variability (HRV), which reflect autonomic regulation, and to evaluate the relationship between HRV parameters and the severity of anxiety in medication-free patients with GAD. Assessments of linear and nonlinear complexity measures of HRV were performed in 42 medication-free patients with GAD and 50 healthy control subjects. In addition, the severity of anxiety symptoms was assessed using the State-Trait Anxiety Inventory and Beck Anxiety Inventory. The values of the HRV measures of the groups were compared, and the correlations between the HRV measures and the severity of anxiety symptoms were assessed. The GAD group showed significantly lower values of the standard deviation of RR intervals and of the square root of the mean squared differences of successive normal sinus intervals compared to the control group (P < 0.01). The approximate entropy value, which is a nonlinear complexity indicator, was also significantly lower in the patient group than in the control group (P < 0.01). In correlation analysis, there were no significant correlations between HRV parameters and the severity of anxiety symptoms. The present study indicates that GAD is significantly associated with reduced HRV, suggesting that autonomic neurocardiac integrity is substantially impaired in patients with GAD. Future prospective studies are required to investigate the effects of pharmacological or non-pharmacological treatment on neuroautonomic modulation in patients with GAD.
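The two linear HRV measures compared above are commonly abbreviated SDNN (standard deviation of RR intervals) and RMSSD (root mean square of successive differences). A minimal sketch of both, on illustrative RR data rather than data from the study (the nonlinear approximate-entropy measure is not reproduced here):

```python
import math

def sdnn(rr_ms):
    """Standard deviation of RR intervals (SDNN), in ms (sample std)."""
    m = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - m) ** 2 for x in rr_ms) / (len(rr_ms) - 1))

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences (RMSSD), in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative RR-interval series (ms), not data from the study.
rr = [812, 790, 804, 825, 799, 810, 818, 795]
print(f"SDNN  = {sdnn(rr):.1f} ms")
print(f"RMSSD = {rmssd(rr):.1f} ms")
```

Lower values of both, as reported for the GAD group, indicate reduced beat-to-beat variability.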
Generalization of mixed multiscale finite element methods with applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C S
Many science and engineering problems exhibit scale disparity and high contrast. The small-scale features cannot be omitted in the physical models because they can affect the macroscopic behavior of the problems. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some type of model reduction technique is required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems as they produce solutions with certain conservative properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need for enrichment of coarse spaces. Two enrichment approaches are proposed: one is based on generalized multiscale finite element methods (GMsFEM), while the other is based on spectral element-based algebraic multigrid (rAMGe). The former, called mixed GMsFEM, is developed for both Darcy's flow and linear elasticity. Application of the algorithm to two-phase flow simulations is demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach is based on rAMGe. The algorithm differs from GMsFEM in that both the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability, convergence analysis, and exhaustive numerical experiments are carried out to validate the proposed enrichment approaches.
Schierz, Oliver; Dommel, Sandra; Hirsch, Christian; Reissmann, Daniel R
2014-09-01
Tooth wear is an increasing problem in a society where people are living longer. The purpose of this study was to assess the effect of age, sex, and location of teeth on the severity of tooth wear and to determine the prevalence of dentin exposure in the general population of Germany. Tooth wear was measured in casts of both jaws of 836 persons with a 6-point (0-5) ordinal rating scale. Linear random-intercept regression models with the covariates of age, sex, jaw, and tooth group (with the participant as a grouping variable) were computed to determine the association of these covariates with tooth wear of a single tooth. The mean tooth wear score across all age groups, both sexes, and all teeth was 2.9 (standard deviation, 0.8), and the prevalence of teeth with exposed dentin was 23.4%. The participants' age was correlated with the mean tooth wear scores (r=0.51). The tooth wear level among women was on average 0.15 units lower than among men, and tooth wear was on average 0.59 units higher for anterior teeth than for posterior teeth. Increased tooth wear in anterior teeth may be due to the initially predominant guidance by anterior teeth, with age-related linear progress in tooth wear. Occlusal tooth wear scores and dentin exposure increase with age. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semple, Scott; Harry, Vanessa N. MRCOG.; Parkin, David E.
2009-10-01
Purpose: To investigate the combination of pharmacokinetic and radiologic assessment of dynamic contrast-enhanced magnetic resonance imaging (MRI) as an early response indicator in women receiving chemoradiation for advanced cervical cancer. Methods and Materials: Twenty women with locally advanced cervical cancer were included in a prospective cohort study. Dynamic contrast-enhanced MRI was carried out before chemoradiation, after 2 weeks of therapy, and at the conclusion of therapy using a 1.5-T MRI scanner. Radiologic assessment of uptake parameters was obtained from the resultant intensity curves. Pharmacokinetic analysis using a multicompartment model was also performed. General linear modeling was used to combine radiologic and pharmacokinetic parameters, which were correlated with eventual response as determined by change in MRI tumor size and conventional clinical response. A subgroup of 11 women underwent repeat pretherapy MRI to test pharmacokinetic reproducibility. Results: Pretherapy radiologic parameters and pharmacokinetic K^trans correlated with response (p < 0.01). General linear modeling demonstrated that a combination of radiologic and pharmacokinetic assessments before therapy was able to predict more than 88% of the variance of response. Reproducibility of pharmacokinetic modeling was confirmed. Conclusions: A combination of radiologic assessment with pharmacokinetic modeling applied to dynamic MRI before the start of chemoradiation improves the predictive power of either by more than 20%. The potential improvements in therapy response prediction using this type of combined analysis of dynamic contrast-enhanced MRI may aid in the development of more individualized, effective therapy regimens for this patient group.
Use of generalized linear models and digital data in a forest inventory of Northern Utah
Moisen, Gretchen G.; Edwards, Thomas C.
1999-01-01
Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.
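The core modeling idea, relating a forest response to mapped predictors through a generalized linear model, can be sketched in miniature. The example below is not FIA's estimator: it fits a toy logistic GLM for forest presence versus a single standardized predictor (elevation) by gradient ascent on synthetic data with an assumed relationship:

```python
import math, random

random.seed(0)

def simulate(n):
    """Synthetic data, assumed relationship for illustration only:
    logit P(forest) = -0.5 + 1.5 * standardized elevation."""
    data = []
    for _ in range(n):
        elev = random.gauss(0.0, 1.0)
        p = 1.0 / (1.0 + math.exp(-(-0.5 + 1.5 * elev)))
        data.append((elev, 1 if random.random() < p else 0))
    return data

def fit_logistic(data, lr=0.5, epochs=1000):
    """Fit logit(p) = b0 + b1*x by batch gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += y - p          # score for the intercept
            g1 += (y - p) * x    # score for the slope
        b0 += lr * g0 / len(data)
        b1 += lr * g1 / len(data)
    return b0, b1

b0, b1 = fit_logistic(simulate(1000))
print(f"fitted intercept {b0:.2f}, slope {b1:.2f}")  # near -0.5 and 1.5
```

The paper's actual models use several predictors (elevation, aspect, slope, coordinates, AVHRR/TM cover types) and also GLMs for continuous volume responses, but the fitting principle is the same.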
Emergency department length of stay for ethanol intoxication encounters.
Klein, Lauren R; Driver, Brian E; Miner, James R; Martel, Marc L; Cole, Jon B
2017-12-08
Emergency Department (ED) encounters for ethanol intoxication are becoming increasingly common. The purpose of this study was to explore factors associated with ED length of stay (LOS) for ethanol intoxication encounters. This was a multi-center, retrospective, observational study of patients presenting to the ED for ethanol intoxication. Data were abstracted from the electronic medical record. To explore factors associated with ED LOS, we created a mixed-effects generalized linear model. We identified 18,664 eligible patients from 6 different EDs during the study period (2012-2016). The median age was 37 years, 69% were male, and the median ethanol concentration was 213 mg/dL. Median LOS was 348 min (range 43-1658 min). Using a mixed-effects generalized linear model, independent variables associated with a significant increase in ED LOS included use of parenteral sedation (beta=0.30, increase in LOS=34%) and laboratory testing (beta=0.21, increase in LOS=23%), as well as the hour of arrival to the ED, such that patients arriving to the ED during evening hours (between 18:00 and midnight) had up to an 86% increase in LOS. Variables not significantly associated with an increase in LOS included age, gender, ethanol concentration, psychiatric disposition, frequent use of the ED for ethanol intoxication, CT use, and daily ED volume. Variables such as diagnostic testing, treatments, and hour of arrival may influence ED LOS in patients with acute ethanol intoxication. Identification and further exploration of these factors may assist in developing hospital- and community-based improvements to modify LOS in this population. Copyright © 2017 Elsevier Inc. All rights reserved.
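Under a log link, which the quoted percentage effects imply, a coefficient beta translates into a multiplicative effect of exp(beta) on expected LOS. A quick check of the percentages quoted above (the small discrepancy for beta=0.30, which gives 35% rather than the reported 34%, presumably reflects rounding of the published coefficient):

```python
import math

def pct_increase(beta):
    """Percent change in the mean response implied by a log-link coefficient."""
    return 100.0 * (math.exp(beta) - 1.0)

for name, beta in [("parenteral sedation", 0.30), ("laboratory testing", 0.21)]:
    print(f"{name}: beta={beta} -> {pct_increase(beta):.0f}% longer LOS")
```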
Generalized Polynomial Chaos Based Uncertainty Quantification for Planning MRgLITT Procedures
Fahrenholtz, S.; Stafford, R. J.; Maier, F.; Hazle, J. D.; Fuentes, D.
2014-01-01
Purpose A generalized polynomial chaos (gPC) method is used to incorporate constitutive parameter uncertainties within the Pennes representation of bioheat transfer phenomena. The stochastic temperature predictions of the mathematical model are critically evaluated against MR thermometry data for planning MR-guided Laser Induced Thermal Therapies (MRgLITT). Methods Pennes bioheat transfer model coupled with a diffusion theory approximation of laser tissue interaction was implemented as the underlying deterministic kernel. A probabilistic sensitivity study was used to identify parameters that provide the most variance in temperature output. Confidence intervals of the temperature predictions are compared to MR temperature imaging (MRTI) obtained during phantom and in vivo canine (n=4) MRgLITT experiments. The gPC predictions were quantitatively compared to MRTI data using probabilistic linear and temporal profiles as well as 2-D 60 °C isotherms. Results Within the range of physically meaningful constitutive values relevant to the ablative temperature regime of MRgLITT, the sensitivity study indicated that the optical parameters, particularly the anisotropy factor, created the most variance in the stochastic model's output temperature prediction. Further, within the statistical sense considered, a nonlinear model of the temperature and damage dependent perfusion, absorption, and scattering is captured within the confidence intervals of the linear gPC method. Multivariate stochastic model predictions using parameters with the dominant sensitivities show good agreement with experimental MRTI data. Conclusions Given parameter uncertainties and mathematical modeling approximations of the Pennes bioheat model, the statistical framework demonstrates conservative estimates of the therapeutic heating and has potential for use as a computational prediction tool for thermal therapy planning. PMID:23692295
Koom, Woong Sub; Choi, Mi Yeon; Lee, Jeongshim; Park, Eun Jung; Kim, Ju Hye; Kim, Sun-Hyun; Kim, Yong Bae
2016-06-01
The purpose of this study was to evaluate the efficacy of art therapy for controlling fatigue in cancer patients during the course of radiotherapy and its impact on quality of life (QoL). Fifty cancer patients receiving radiotherapy received weekly art therapy sessions based on the appreciation of famous paintings. Fatigue and QoL were assessed using the Brief Fatigue Inventory (BFI) Scale and the Functional Assessment of Chronic Illness Therapy-Fatigue (FACIT-F) at baseline before starting radiotherapy, every week for 4 weeks during radiotherapy, and at the end of radiotherapy. Mean changes of scores over time were analyzed using a generalized linear mixed model. Of the 50 patients, 34 (68%) participated in 4 sessions of art therapy. Generalized linear mixed models testing for the effect of time on mean score changes showed no significant changes from baseline for the BFI and FACIT-F. The mean BFI score and FACIT-F total score changed from 3.1 to 2.7 and from 110.7 to 109.2, respectively. Art therapy based on the appreciation of famous paintings led to increases in self-esteem by increasing self-realization and forming social relationships. Fatigue and QoL in cancer patients receiving art therapy did not deteriorate during the period of radiotherapy. Despite the single-arm design, small number of participants, and pilot nature of the study, it provides a strong initial demonstration that art therapy based on the appreciation of famous paintings is worthy of further study for improving fatigue and QoL. Further, it can play an important role in routine practice for cancer patients during radiotherapy.
Raymond L. Czaplewski
1973-01-01
A generalized, non-linear population dynamics model of an ecosystem is used to investigate the direction of selective pressures upon a mutant by studying the competition between parent and mutant populations. The model has the advantages of considering selection as operating on the phenotype, of retaining the interaction of the mutant population with the ecosystem as a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pribram-Jones, Aurora; Grabowski, Paul E.; Burke, Kieron
We show that the van Leeuwen proof of linear-response time-dependent density functional theory (TDDFT) generalizes to thermal ensembles. This allows generalization to finite temperatures of the Gross-Kohn relation, the exchange-correlation kernel of TDDFT, and the fluctuation-dissipation theorem for DFT. Finally, this produces a natural method for generating new thermal exchange-correlation approximations.
ERIC Educational Resources Information Center
Bashaw, W. L., Ed.; Findley, Warren G., Ed.
This volume contains the five major addresses and subsequent discussion from the Symposium on the General Linear Models Approach to the Analysis of Experimental Data in Educational Research, which was held in 1967 in Athens, Georgia. The symposium was designed to produce systematic information, including new methodology, for dissemination to the…
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.; Raykov, Tenko; AL-Qataee, Abdullah Ali
2015-01-01
This article is concerned with developing a measure of general academic ability (GAA) for high school graduates who apply to colleges, as well as with the identification of optimal weights of the GAA indicators in a linear combination that yields a composite score with maximal reliability and maximal predictive validity, employing the framework of…
1993-01-31
[Garbled table-of-contents extract; recoverable section titles: Controllability and Observability; Separation of Learning and Control; Linearization via Transformation of Coordinates and Nonlinear Feedback; Main Result; Discussion; Basic Structure of a NLM; General Structure of NNLM; Linear System.]
A Constrained Linear Estimator for Multiple Regression
ERIC Educational Resources Information Center
Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.
2010-01-01
"Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…
Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process
NASA Astrophysics Data System (ADS)
Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas
2018-05-01
This note reports the development of a new method for linearizing Mössbauer spectra recorded with a sinusoidal drive velocity signal. Spectrum linearity is a critical parameter determining Mössbauer spectrometer accuracy. Measuring spectra with a sine velocity axis and subsequently linearizing them improves spectrum linearity over a wider frequency range of the drive signal, since harmonic motion is natural for velocity transducers. The data obtained demonstrate that linearized sine spectra have lower nonlinearity and line-width parameters than those measured using a traditional triangular velocity signal.
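One way to realize the linearization step, assuming ideal sinusoidal transducer motion and uniform time-channel acquisition over the rising half-period of the drive (details here are illustrative, not the authors' algorithm), is to interpolate the recorded counts onto an equally spaced velocity axis:

```python
import math

def linearize(counts, v_max):
    """Rebin a half-period sine-velocity spectrum onto a uniform velocity axis.

    counts[i] is the count in time-channel i; the drive velocity at channel i
    is assumed to be v_max*sin(pi*i/(n-1) - pi/2), sweeping -v_max..+v_max.
    Returns (uniform velocity grid, counts linearly interpolated onto it).
    """
    n = len(counts)
    v_time = [v_max * math.sin(math.pi * i / (n - 1) - math.pi / 2) for i in range(n)]
    v_lin = [-v_max + 2 * v_max * j / (n - 1) for j in range(n)]
    out = []
    k = 0
    for v in v_lin:
        # advance to the time-channel pair bracketing velocity v
        while k < n - 2 and v_time[k + 1] < v:
            k += 1
        frac = (v - v_time[k]) / (v_time[k + 1] - v_time[k])
        out.append(counts[k] + frac * (counts[k + 1] - counts[k]))
    return v_lin, out
```

A real spectrometer would calibrate the velocity scale against a known absorber rather than assume perfectly harmonic motion, and interpolation of counting data slightly correlates neighboring channels.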
Query construction, entropy, and generalization in neural-network models
NASA Astrophysics Data System (ADS)
Sollich, Peter
1994-05-01
We study query construction algorithms, which aim at improving the generalization ability of systems that learn from examples by choosing optimal, nonredundant training sets. We set up a general probabilistic framework for deriving such algorithms from the requirement of optimizing a suitable objective function; specifically, we consider the objective functions entropy (or information gain) and generalization error. For two learning scenarios, the high-low game and the linear perceptron, we evaluate the generalization performance obtained by applying the corresponding query construction algorithms and compare it to training on random examples. We find qualitative differences between the two scenarios due to the different structure of the underlying rules (nonlinear and ``noninvertible'' versus linear); in particular, for the linear perceptron, random examples lead to the same generalization ability as a sequence of queries in the limit of an infinite number of examples. We also investigate learning algorithms which are ill matched to the learning environment and find that, in this case, minimum entropy queries can in fact yield a lower generalization ability than random examples. Finally, we study the efficiency of single queries and its dependence on the learning history, i.e., on whether the previous training examples were generated randomly or by querying, and the difference between globally and locally optimal query construction.
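The high-low game mentioned above can be made concrete: the target rule is a threshold on [0, 1], the entropy-optimal query bisects the current version space (so the uncertainty interval halves with every query), while random examples shrink it only on the order of 1/n. A minimal sketch under these assumptions:

```python
import random

random.seed(1)
theta = random.random()  # hidden threshold; the rule labels x "high" iff x >= theta

def label(x):
    return x >= theta

def query_learning(n_queries):
    """Bisection queries: each query is the midpoint of the remaining interval."""
    lo, hi = 0.0, 1.0
    for _ in range(n_queries):
        mid = (lo + hi) / 2
        if label(mid):
            hi = mid
        else:
            lo = mid
    return hi - lo  # width of the remaining uncertainty interval

def random_learning(n_examples):
    """Random examples: keep the tightest bracketing pair seen so far."""
    lo, hi = 0.0, 1.0
    for _ in range(n_examples):
        x = random.random()
        if label(x):
            hi = min(hi, x)
        else:
            lo = max(lo, x)
    return hi - lo

print(f"after 20 queries: width {query_learning(20):.2e}")
print(f"after 20 random examples: width {random_learning(20):.2e}")
```

This exponential-versus-polynomial gap is exactly the kind of advantage that, per the abstract, disappears for the linear perceptron and can even reverse when the learning algorithm is ill matched to the environment.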
... morphea, linear scleroderma, and scleroderma en coup de sabre. Each type can be subdivided further and some ... described for morphea. Linear scleroderma en coup de sabre is the term generally applied when children have ...
77 FR 9899 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-21
... Medium Range Air-to-Air Missiles, 42 GBU-49 Enhanced PAVEWAY II 500 lb Bombs, 200 GBU-54 (2000 lb) Laser Joint Direct Attack Munitions (JDAM) Bombs, 642 BLU-111 (500 lb) General Purpose Bombs, 127 MK-82 (500 lb) General Purpose Bombs, 80 BLU-117 (2000 lb) General Purpose Bombs, 4 MK-84 (2000 lb) Inert...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-02
... Treatment Under the Generalized System of Preferences and for Other Purposes #0; #0; #0; Presidential... Modify Duty-Free Treatment Under the Generalized System of Preferences and for Other Purposes By the... competitive need limitations on the preferential treatment afforded under the GSP to eligible articles. 4...
The Effects of Academic Optimism on Elementary Reading Achievement
ERIC Educational Resources Information Center
Bevel, Raymona K.; Mitchell, Roxanne M.
2012-01-01
Purpose: The purpose of this paper is to explore the relationship between academic optimism (AO) and elementary reading achievement (RA). Design/methodology/approach: Using correlation and hierarchical linear regression, the authors examined school-level effects of AO on fifth grade reading achievement in 29 elementary schools in Alabama.…
Aims or Purposes of School Mediation in Spain
ERIC Educational Resources Information Center
Viana-Orta, María-Isabel
2013-01-01
Mediation continues to expand, both geographically and in terms of scope. Depending on its purpose, there are three main consolidated mediation models or schools worldwide: the Traditional-Linear Harvard model, which seeks to find an agreement between the parties; the Circular-Narrative model, which apart from the agreement also emphasizes…
Stochastic search, optimization and regression with energy applications
NASA Astrophysics Data System (ADS)
Hannah, Lauren A.
Designing clean energy systems will be an important task over the next few decades. One of the major roadblocks is a lack of mathematical tools for economically evaluating those energy systems. However, solutions to these mathematical problems are also of interest to the operations research and statistical communities in general. This thesis studies three problems that are of interest to the energy community itself or provide support for solution methods: R&D portfolio optimization, nonparametric regression, and stochastic search with an observable state variable. First, we consider the one-stage R&D portfolio optimization problem to avoid the sequential decision process associated with the multi-stage problem. The one-stage problem is still difficult because of a non-convex, combinatorial decision space and a non-convex objective function. We propose a heuristic solution method that uses marginal project values, which depend on the selected portfolio, to create a linear objective function. In conjunction with the 0-1 decision space, this new problem can be solved as a knapsack linear program. This method scales well to large decision spaces. We also propose an alternate, provably convergent algorithm that does not exploit problem structure. These methods are compared on a solid oxide fuel cell R&D portfolio problem. Next, we propose Dirichlet process mixtures of generalized linear models (DP-GLM), a new method of nonparametric regression that accommodates continuous and categorical inputs, and responses that can be modeled by a generalized linear model. We prove conditions for the asymptotic unbiasedness of the DP-GLM regression mean function estimate. We also give examples of when those conditions hold, including models for compactly supported continuous distributions and a model with continuous covariates and a categorical response.
We empirically analyze the properties of the DP-GLM and why it provides better results than existing Dirichlet process mixture regression models. We evaluate the DP-GLM on several data sets, comparing it to modern methods of nonparametric regression such as CART, Bayesian trees, and Gaussian processes. Compared to existing techniques, the DP-GLM provides a single model (and corresponding inference algorithms) that performs well in many regression settings. Finally, we study convex stochastic search problems where a noisy objective function value is observed after a decision is made. There are many stochastic search problems whose behavior depends on an exogenous state variable that affects the shape of the objective function. Currently, there is no general-purpose algorithm to solve this class of problems. We use nonparametric density estimation to take observations from the joint state-outcome distribution and use them to infer the optimal decision for a given query state. We propose two solution methods that depend on the problem characteristics: function-based and gradient-based optimization. We examine two weighting schemes, kernel-based weights and Dirichlet process-based weights, for use with the solution methods. The weights and solution methods are tested on a synthetic multi-product newsvendor problem and the hour-ahead wind commitment problem. Our results show that in some cases Dirichlet process weights offer substantial benefits over kernel-based weights and, more generally, that nonparametric estimation methods provide good solutions to otherwise intractable problems.
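The kernel-weighted approach to the state-dependent newsvendor problem can be sketched as follows: given past (state, demand) observations, the order for a query state is the weighted quantile of observed demands at the critical fractile cu/(cu+co), with Gaussian kernel weights measuring state proximity. The data, bandwidth, and demand model below are hypothetical:

```python
import math

def kernel_newsvendor(history, state, cu, co, bandwidth=1.0):
    """Order quantity for `state`: kernel-weighted demand quantile at the
    critical fractile cu/(cu+co).

    history: list of (observed_state, observed_demand) pairs.
    cu, co: per-unit underage (lost sale) and overage (holding) costs.
    """
    fractile = cu / (cu + co)
    weights = [math.exp(-((s - state) / bandwidth) ** 2 / 2) for s, _ in history]
    total = sum(weights)
    # Weighted quantile: sort demands, accumulate normalized weight.
    pairs = sorted(zip((d for _, d in history), weights))
    acc = 0.0
    for demand, w in pairs:
        acc += w / total
        if acc >= fractile:
            return demand
    return pairs[-1][0]

# Hypothetical demand that rises with the state variable (e.g. forecast wind).
history = [(s / 10, 50 + 10 * (s / 10) + (s % 3) - 1) for s in range(100)]
print(kernel_newsvendor(history, state=5.0, cu=3.0, co=1.0))
```

Because the weights localize the empirical demand distribution around the query state, the recommended order tracks the state variable, which is the point of conditioning the search on an observable state.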
Nikoloulopoulos, Aristidis K
2017-10-01
A bivariate copula mixed model has recently been proposed to synthesize diagnostic test accuracy studies, and it has been shown to be superior to the standard generalized linear mixed model in this context. Here, we employ trivariate vine copulas to extend the bivariate meta-analysis of diagnostic test accuracy studies by accounting for disease prevalence. Our vine copula mixed model includes the trivariate generalized linear mixed model as a special case and can also operate on the original scale of sensitivity, specificity, and disease prevalence. Our general methodology is illustrated by re-analyzing the data of two published meta-analyses. Our study suggests that there can be an improvement over the trivariate generalized linear mixed model in fit to data and makes the argument for moving to vine copula random effects models, especially because of their richness, including reflection-asymmetric tail dependence, and their computational feasibility despite being three-dimensional.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 23 Highways 1 2011-04-01 2011-04-01 false Purpose. 1.1 Section 1.1 Highways FEDERAL HIGHWAY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION GENERAL MANAGEMENT AND ADMINISTRATION GENERAL § 1.1 Purpose. The purpose of the regulations in this part is to implement and carry out the provisions of Federal law...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Purpose. 1002.1 Section 1002.1 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) OFFICIAL SEAL AND DISTINGUISHING FLAG General § 1002.1 Purpose. The purpose of this part is to describe the official seal and distinguishing flag of the Department of Energy, and to...
Generalized Clifford Algebras as Algebras in Suitable Symmetric Linear Gr-Categories
NASA Astrophysics Data System (ADS)
Cheng, Tao; Huang, Hua-Lin; Yang, Yuping
2016-01-01
By viewing Clifford algebras as algebras in suitable symmetric Gr-categories, Albuquerque and Majid were able to give a new derivation of some well-known results about Clifford algebras and to generalize them. Along the same line, Bulacu observed that Clifford algebras are weak Hopf algebras in the aforementioned categories and obtained other interesting properties. The aim of this paper is to study generalized Clifford algebras in a similar manner and to extend the results of Albuquerque, Majid, and Bulacu to the generalized setting. In particular, by taking full advantage of the gauge transformations in symmetric linear Gr-categories, we derive the decomposition theorem and provide categorical weak Hopf structures for generalized Clifford algebras in a simpler, more conceptual manner.
Eric J. Gustafson; L. Jay Roberts; Larry A. Leefers
2006-01-01
Forest management planners require analytical tools to assess the effects of alternative strategies on the sometimes disparate benefits from forests such as timber production and wildlife habitat. We assessed the spatial patterns of alternative management strategies by linking two models that were developed for different purposes. We used a linear programming model (...
The Integration of Teacher's Pedagogical Content Knowledge Components in Teaching Linear Equation
ERIC Educational Resources Information Center
Yusof, Yusminah Mohd.; Effandi, Zakaria
2015-01-01
This qualitative research aimed to explore the integration of the components of pedagogical content knowledge (PCK) in teaching Linear Equation with one unknown. For the purpose of the study, a single local case study with multiple participants was used. The selection of the participants was made based on various criteria: having more than 5 years…
ERIC Educational Resources Information Center
Novak, Melissa A.
2017-01-01
The purpose of this qualitative practitioner research study was to describe middle school algebra students' experiences of learning linear functions through kinesthetic movement. Participants were comprised of 8th grade algebra students. Practitioner research was used because I wanted to improve my teaching so students will have more success in…
Planning Student Flow with Linear Programming: A Tunisian Case Study.
ERIC Educational Resources Information Center
Bezeau, Lawrence
A student flow model in linear programming format, designed to plan the movement of students into secondary and university programs in Tunisia, is described. The purpose of the plan is to determine a sufficient number of graduating students that would flow back into the system as teachers or move into the labor market to meet fixed manpower…
Spin dynamics in storage rings and linear accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Irwin, J.
1994-12-01
The purpose of these lectures is to survey the subject of spin dynamics in accelerators: to give a sense of the underlying physics, the typical analytic and numeric methods used, and an overview of results achieved. Consideration will be limited to electrons and protons. Examples of experimental and theoretical results in both linear and circular machines are included.
ERIC Educational Resources Information Center
Liou, Pey-Yan; Ho, Hsin-Ning Jessie
2018-01-01
The purpose of this study is to examine students' perceptions of instructional practices in the classroom, and to further investigate the relationships among instructional practices, motivational beliefs and science achievement. Hierarchical linear modelling was utilised to examine the Trends in International Mathematics and Science Study 2007…
Typical Werner states satisfying all linear Bell inequalities with dichotomic measurements
NASA Astrophysics Data System (ADS)
Luo, Ming-Xing
2018-04-01
Quantum entanglement as a special resource inspires various distinct applications in quantum information processing. Unfortunately, it is NP-hard to detect general quantum entanglement using Bell testing. Our goal is to investigate quantum entanglement with white noise, which appears frequently in experiments and quantum simulations. Surprisingly, for almost all multipartite generalized Greenberger-Horne-Zeilinger states there are entangled noisy states that satisfy all linear Bell inequalities consisting of full correlations with dichotomic inputs and outputs for each local observer. This result shows the generic undetectability of mixed entangled states, in contrast to Gisin's theorem for pure bipartite entangled states in terms of Bell nonlocality. We further provide an accessible method to exhibit a nontrivial set of noisy entangled states with a small number of parties that satisfy all general linear Bell inequalities. These results imply that Bell inequalities of this special form are typically incomplete for detecting entanglement.
General purpose force doctrine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weltman, J.J.
In contemporary American strategic parlance, the general purpose forces have come to mean those forces intended for conflict situations other than nuclear war with the Soviet Union. As with all military forces, the general purpose forces are powerfully determined by prevailing conceptions of the problems they must meet and by institutional biases as to the proper way to deal with those problems. This paper deals with the strategic problems these forces are intended to meet, the various and often conflicting doctrines and organizational structures which have been generated in order to meet those problems, and the factors which will influence general purpose doctrine and structure in the future. This paper does not attempt to prescribe technological solutions to the needs of the general purpose forces. Rather, it attempts to display the doctrinal and institutional context within which new technologies must operate, and which will largely determine whether these technologies are accepted into the force structure or not.
NASA Astrophysics Data System (ADS)
Arshad, Muhammad; Lu, Dianchen; Wang, Jun
2017-07-01
In this paper, we extend the fractional reduced differential transform method (DTM) to the (N+1)-dimensional case, so that fractional-order partial differential equations (PDEs) can be solved effectively. The most distinctive aspect of this method is that no prescribed assumptions are required, the computational effort is greatly reduced, and round-off errors are avoided. We apply the proposed scheme to some initial value problems and obtain approximate numerical solutions of linear and nonlinear time-fractional PDEs, which shows that the method is highly accurate and simple to apply. The proposed technique is thus a powerful tool for solving fractional PDEs and fractional-order problems arising in engineering, physics, and related fields. Numerical results are obtained for verification and demonstration purposes using Mathematica software.
Development of an integrated aeroservoelastic analysis program and correlation with test data
NASA Technical Reports Server (NTRS)
Gupta, K. K.; Brenner, M. J.; Voelker, L. S.
1991-01-01
Details and results are presented for the general-purpose finite element STructural Analysis RoutineS (STARS) program, which performs a complete linear aeroelastic and aeroservoelastic analysis. The earlier version of the STARS computer program enabled effective finite element modeling as well as static, vibration, buckling, and dynamic response analysis of damped and undamped systems, including those with pre-stressed and spinning structures. Additions to the STARS program include aeroelastic modeling for flutter and divergence solutions, and hybrid control system augmentation for aeroservoelastic analysis. Numerical results for the X-29A aircraft pertaining to vibration, flutter-divergence, and open- and closed-loop aeroservoelastic controls analysis are compared with ground vibration, wind-tunnel, and flight-test results. The open- and closed-loop aeroservoelastic control analyses are based on a hybrid formulation representing the interaction of structural, aerodynamic, and flight-control dynamics.
Dogra, Shilpa; Al-Sahab, Ban; Manson, James; Tamim, Hala
2015-04-01
The purpose of the current study was to determine whether aging expectations (AE) are associated with physical activity participation and health among older adults of low socioeconomic status (SES). A cross-sectional analysis of a sample of 170 older adults (mean age 70.9 years) was conducted. Data on AE, physical activity, and health were collected using the 12-item Expectations Regarding Aging instrument, the Healthy Physical Activity Participation Questionnaire, and the Short Form-36, respectively. Adjusted linear regression models showed significant associations between AE and social functioning, energy/vitality, mental health, and self-rated general health, as well as physical activity. These results suggest that AE may help to better explain the established association between low SES, low physical activity uptake, and poor health outcomes among older adults.
Acquisition of gamma camera and physiological data by computer.
Hack, S N; Chang, M; Line, B R; Cooper, J A; Robeson, G H
1986-11-01
We have designed, implemented, and tested a new Research Data Acquisition System (RDAS) that permits a general purpose digital computer to acquire signals from both gamma camera sources and physiological signal sources concurrently. This system overcomes the limited multi-source, high-speed data acquisition capabilities found in most clinically oriented nuclear medicine computers. The RDAS can simultaneously input signals from up to four gamma camera sources with a throughput of 200 kHz per source and from up to eight physiological signal sources with an aggregate throughput of 50 kHz. Rigorous testing has found the RDAS to exhibit acceptable linearity and timing characteristics. In addition, flood images obtained by this system were compared with flood images acquired by a commercial nuclear medicine computer system. National Electrical Manufacturers Association performance standards of the flood images were found to be comparable.
Lee, Jeong Hyeon; Kang, Yun-Seong; Jeong, Yun-Jeong; Yoon, Young-Soon; Kwack, Won Gun; Oh, Jin Young
2016-01-01
Purpose. We aimed to determine the value of lung function measurement for predicting cardiovascular (CV) disease by evaluating the association between FEV1 (%) and CV risk factors in the general population. Materials and Methods. This was a cross-sectional, retrospective study of subjects above 18 years of age who underwent health examinations. The relationship between FEV1 (%) and the presence of carotid plaque and thickened carotid IMT (≥0.8 mm) was analyzed by multiple logistic regression, and the relationship between FEV1 (%), PWV (%), and serum uric acid was analyzed by multiple linear regression. Various factors were adjusted for using Model 1 and Model 2. Results. 1,003 subjects were enrolled in this study, and 96.7% (n = 970) of the subjects were men. In both models, the odds ratios for the presence of carotid plaque and thickened carotid IMT showed no consistent trend and no statistical significance. In the analysis of PWV (%) and uric acid, there was no significant relationship with FEV1 (%) in either model. Conclusion. FEV1 had no significant relationship with CV risk factors. The result suggests that FEV1 may have no association with CV risk factors, or may be insensitive for detecting the association, in a general population without airflow limitation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Platts, J.A.; Abraham, M.H.
The partitioning of organic compounds between air and foliage and between water and foliage is of considerable environmental interest. The purpose of this work is to show that partitioning into the cuticular matrix of one particular species can be satisfactorily modeled by general equations the authors have previously developed and, hence, that the same general equations could be used to model partitioning into other plant materials of the same or different species. The general equations are linear free energy relationships that employ descriptors for polarity/polarizability, hydrogen bond acidity and basicity, dispersive effects, and volume. They have been applied to the partition of 62 very varied organic compounds between cuticular matrix of the tomato fruit, Lycopersicon esculentum, and either air (MX_a) or water (MX_w). Values of log MX_a covering a range of 12.4 log units are correlated with a standard deviation of 0.232 log unit, and values of log MX_w covering a range of 7.6 log units are correlated with an SD of 0.236 log unit. Possibilities are discussed for the prediction of new air-plant cuticular matrix and water-plant cuticular matrix partition values on the basis of the equations developed.
Kassa, Beneberu Teferra; Haile, Anteneh Girma; Essa, John Abdu
2011-12-01
In order to assess and identify the determinants of sheep price and price variation across time, time-series data were collected from four selected markets in North Shewa, Northeastern Ethiopia on a weekly market-day basis for a period of 2 years. Data on animal characteristics and purpose of buying were collected weekly from 15-25 randomly selected animals, and a total of 7,976 transactions were recorded. A general linear model was used to identify factors influencing sheep price, and the results showed that sheep price (liveweight sheep price per kilogram taken as the dependent variable) is affected by animal characteristics such as weight, sex, age, condition, season, and color. In most markets, the purpose for which the animal was purchased did not significantly affect the price per kilogram, perhaps because the markets are similar in terms of buyers' purposes. The results suggest that coordinated fattening, breeding, and marketing programs would help producers take the greatest advantage of the preferred animal characteristics and selected festival markets. Finally, the study recommends coordinated action to enhance the benefits generated for all participating actors in the sheep value chain by raising sheep productivity, improving the capacity of sheep producers and agribusiness entrepreneurs to access and use the latest knowledge and technologies, and strengthening linkages among actors in the sheep value chain.
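A general linear model of this kind regresses price per kilogram on animal characteristics coded as continuous and dummy variables. The sketch below uses entirely hypothetical data and coefficients (the study's actual factors and estimates are not reproduced here); it only shows the mechanics of fitting such a model by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
weight = rng.uniform(15.0, 35.0, n)      # liveweight, kg
male = rng.integers(0, 2, n)             # sex dummy (1 = male)
festival = rng.integers(0, 2, n)         # festival-season dummy

# hypothetical data-generating process: heavier, male, festival-season
# animals fetch a higher price per kilogram
price = 3.0 + 0.05 * weight + 0.4 * male + 0.6 * festival + rng.normal(0, 0.2, n)

# general linear model: price ~ intercept + weight + sex + season
X = np.column_stack([np.ones(n), weight, male, festival])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
```

The fitted `beta` recovers the assumed effects; in the study, significance of each factor would then be judged from the coefficient standard errors.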
DOE Office of Scientific and Technical Information (OSTI.GOV)
CULLEN, D. E.
2001-06-13
Version 00 As distributed, the original evaluated data include cross sections represented in the form of a combination of resonance parameters and/or tabulated energy-dependent cross sections, nominally at 0 Kelvin temperature. For use in applications, these ENDF/B-VI, Release 7 data were processed into the form of temperature-dependent cross sections at eight temperatures between 0 and 2100 Kelvin, in steps of 300 Kelvin. At each temperature the cross sections are tabulated and linearly interpolable in energy. POINT2000 contains all of the evaluations in the ENDF/B-VI general purpose library, which contains evaluations for 324 materials (isotopes or naturally occurring elemental mixtures of isotopes). No special purpose ENDF/B-VI libraries, such as fission products, thermal scattering, or photon interaction data, are included. The majority of these evaluations are complete, in the sense that they include all cross sections over the energy range 10^-5 eV to at least 20 MeV. However, the following are only partial evaluations that either contain single reactions and no total cross section (Mg24, K41, Ti46, Ti47, Ti48, Ti50 and Ni59), or do not include energy-dependent cross sections above the resonance region (Ar40, Mo92, Mo98, Mo100, In115, Sn120, Sn122 and Sn124). The CCC-638/TART96 code package will soon be updated to TART2000, which is recommended for use with these data. Codes within TART2000 can be used to display these data or to run calculations using these data.
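"Tabulated and linearly interpolable in energy" means a cross section at any energy between grid points is obtained by straight-line interpolation of the tabulated values. The tiny sketch below uses made-up energies and cross sections (not actual ENDF/B-VI data) to show the lookup:

```python
import numpy as np

# hypothetical tabulated cross section (barns) on an energy grid (eV);
# in the processed files, values between points are linearly interpolable
energy = np.array([1e-5, 1.0, 1e3, 1e6, 2e7])
sigma  = np.array([50.0, 10.0, 4.0, 2.0, 1.5])

def cross_section(e):
    """Linear interpolation between tabulated points."""
    return np.interp(e, energy, sigma)
```

At a grid point the tabulated value is returned exactly; halfway between two points, the interpolated value is the midpoint of the two tabulated values.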
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grenon, Cedric; Lake, Kayll
We generalize the Swiss-cheese cosmologies so as to include nonzero linear momenta of the associated boundary surfaces. The evolution of mass scales in these generalized cosmologies is studied for a variety of background models without having to specify any details within the local inhomogeneities. We find that the final effective gravitational mass and size of the evolving inhomogeneities depend on their linear momenta, but these properties are essentially unaffected by the details of the background model.
NASA Technical Reports Server (NTRS)
Vohra, Yogesh K. (Inventor); McCauley, Thomas S. (Inventor)
1997-01-01
The deposition of high quality diamond films at high linear growth rates and substrate temperatures by microwave-plasma chemical vapor deposition (MPCVD) is disclosed. The linear growth rate achieved for this process is generally greater than 50 µm/hr for high quality films, as compared to rates of less than 5 µm/hr generally reported for MPCVD processes.
NASA Technical Reports Server (NTRS)
Ferencz, Donald C.; Viterna, Larry A.
1991-01-01
ALPS is a computer program which can be used to solve general linear programming (optimization) problems. ALPS was designed for those who have minimal linear programming (LP) knowledge and features a menu-driven scheme to guide the user through the process of creating and solving LP formulations. Once created, the problems can be edited and stored in standard DOS ASCII files to provide portability to various word processors or even other linear programming packages. Unlike many math-oriented LP solvers, ALPS contains an LP parser that reads through the LP formulation and reports several types of errors to the user. ALPS provides a large amount of solution data which is often useful in problem solving. In addition to pure linear programs, ALPS can solve integer, mixed-integer, and binary problems. Pure linear programs are solved with the revised simplex method. Integer or mixed-integer programs are solved initially with the revised simplex method and then completed using the branch-and-bound technique. Binary programs are solved with the method of implicit enumeration. This manual describes how to use ALPS to create, edit, and solve linear programming problems. Instructions for installing ALPS on a PC-compatible computer are included in the appendices, along with a general introduction to linear programming. A programmer's guide is also included for assistance in modifying and maintaining the program.
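The simplex method exploits the fact that an optimum of a linear program lies at a vertex of the feasible polytope; the revised simplex method walks between vertices efficiently. For a two-variable toy problem (not ALPS's input format, and not its algorithm) the same idea can be shown by enumerating the vertices directly:

```python
from itertools import combinations
import numpy as np

# maximize 3x + 2y  subject to  x + y <= 4, x <= 3, y <= 2, x >= 0, y >= 0
c = np.array([3.0, 2.0])
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 2.0, 0.0, 0.0])

best_val, best_x = -np.inf, None
for i, j in combinations(range(len(b)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                        # parallel boundaries: no vertex
    x = np.linalg.solve(M, b[[i, j]])   # intersection of two boundaries
    if np.all(A @ x <= b + 1e-9):       # keep only feasible vertices
        v = c @ x
        if v > best_val:
            best_val, best_x = v, x
```

Here the optimum is the vertex (3, 1) with objective value 11; simplex reaches the same vertex without enumerating all of them, which is what makes it practical at scale.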
GVE-Based Dynamics and Control for Formation Flying Spacecraft
NASA Technical Reports Server (NTRS)
Breger, Louis; How, Jonathan P.
2004-01-01
Formation flying is an enabling technology for many future space missions. This paper presents extensions to the equations of relative motion expressed in Keplerian orbital elements, including new initialization techniques for general formation configurations. A new linear time-varying form of the equations of relative motion is developed from Gauss Variational Equations and used in a model predictive controller. The linearizing assumptions for these equations are shown to be consistent with typical formation flying scenarios. Several linear, convex initialization techniques are presented, as well as a general, decentralized method for coordinating a tetrahedral formation using differential orbital elements. Control methods are validated using a commercial numerical propagator.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 17 Commodity and Securities Exchanges 1 2010-04-01 2010-04-01 false General policy considerations, purpose and scope of rules relating to open Commission meetings. 147.1 Section 147.1 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION OPEN COMMISSION MEETINGS § 147.1 General policy considerations, purpose and scope of rules...
Physics and control of wall turbulence for drag reduction.
Kim, John
2011-04-13
Turbulence physics responsible for high skin-friction drag in turbulent boundary layers is first reviewed. A self-sustaining process of near-wall turbulence structures is then discussed from the perspective of controlling this process for the purpose of skin-friction drag reduction. After recognizing that key parts of this self-sustaining process are linear, a linear systems approach to boundary-layer control is discussed. It is shown that singular-value decomposition analysis of the linear system allows us to examine different approaches to boundary-layer control without carrying out the expensive nonlinear simulations. Results from the linear analysis are consistent with those observed in full nonlinear simulations, thus demonstrating the validity of the linear analysis. Finally, fundamental performance limit expected of optimal control input is discussed.
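The singular-value decomposition analysis mentioned above can be sketched abstractly: for a linear input-output operator, the leading right singular vector is the input (e.g., control) direction that is most amplified, and the leading singular value is its gain. The operator below is a random stand-in, not a boundary-layer model:

```python
import numpy as np

# hypothetical linear input-output operator of the controlled system
rng = np.random.default_rng(2)
A = rng.normal(size=(6, 4))

U, s, Vt = np.linalg.svd(A, full_matrices=False)

worst_input = Vt[0]    # input direction with the largest amplification
gain = s[0]            # corresponding singular value (amplification factor)

# applying the operator to that direction yields exactly the leading gain
amplification = np.linalg.norm(A @ worst_input)
```

In the control context, examining how candidate controllers reduce the leading singular values indicates their effect on the most amplified disturbances without running a full nonlinear simulation.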
Alternate approaches to future electron-positron linear colliders
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loew, G.A.
1998-07-01
The purpose of this article is two-fold: to review the current international status of various design approaches to the next generation of e⁺e⁻ linear colliders, and, on the occasion of his 80th birthday, to celebrate Richard B. Neal's many contributions to the field of linear accelerators. As it turns out, combining these two tasks is a rather natural enterprise because of Neal's long professional involvement and insight into many of the problems and options which the international e⁺e⁻ linear collider community is currently studying to achieve a practical design for a future machine.
Feature extraction with deep neural networks by a generalized discriminant analysis.
Stuhlsatz, André; Lippel, Jens; Zielke, Thomas
2012-04-01
We present an approach to feature extraction that is a generalization of the classical linear discriminant analysis (LDA) on the basis of deep neural networks (DNNs). As for LDA, discriminative features generated from independent Gaussian class conditionals are assumed. This modeling has the advantages that the intrinsic dimensionality of the feature space is bounded by the number of classes and that the optimal discriminant function is linear. Unfortunately, linear transformations are insufficient to extract optimal discriminative features from arbitrarily distributed raw measurements. The generalized discriminant analysis (GerDA) proposed in this paper uses nonlinear transformations that are learnt by DNNs in a semisupervised fashion. We show that the feature extraction based on our approach displays excellent performance on real-world recognition and detection tasks, such as handwritten digit recognition and face detection. In a series of experiments, we evaluate GerDA features with respect to dimensionality reduction, visualization, classification, and detection. Moreover, we show that GerDA DNNs can preprocess truly high-dimensional input data to low-dimensional representations that facilitate accurate predictions even if simple linear predictors or measures of similarity are used.
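The classical linear case that GerDA generalizes can be shown directly: with Gaussian class conditionals sharing a covariance, Fisher's LDA direction is the solve of the within-class scatter against the mean difference, and with C classes at most C-1 discriminative features exist. This is a sketch of plain two-class LDA on synthetic data, not of the DNN-based GerDA:

```python
import numpy as np

rng = np.random.default_rng(3)
# two Gaussian classes with a shared covariance (the LDA modeling assumption)
x0 = rng.normal([0.0, 0.0], 1.0, size=(200, 2))
x1 = rng.normal([3.0, 1.0], 1.0, size=(200, 2))

m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
Sw = np.cov(x0, rowvar=False) + np.cov(x1, rowvar=False)  # within-class scatter
w = np.linalg.solve(Sw, m1 - m0)                          # Fisher direction
w /= np.linalg.norm(w)

# with two classes the discriminative feature space is one-dimensional
z0, z1 = x0 @ w, x1 @ w
separation = abs(z1.mean() - z0.mean()) / np.sqrt(0.5 * (z0.var() + z1.var()))
```

When the raw data are not Gaussian in the input space, this linear projection is insufficient; GerDA's contribution is learning a nonlinear map (via a DNN) under which the same Gaussian-conditional feature model holds.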
Teachers' Evaluations and Students' Achievement: A "Deviation from the Reference" Analysis
ERIC Educational Resources Information Center
Iacus, Stefano M.; Porro, Giuseppe
2011-01-01
Several studies show that teachers make use of grading practices to affect students' effort and achievement. Linearity is generally assumed in the grading equation, although it is everyone's experience that grading practices are frequently non-linear. Representing grading practices as linear can be misleading both from a descriptive and a…
Linear Logistic Test Modeling with R
ERIC Educational Resources Information Center
Baghaei, Purya; Kubinger, Klaus D.
2015-01-01
The present paper gives a general introduction to the linear logistic test model (Fischer, 1973), an extension of the Rasch model with linear constraints on item parameters, along with eRm (an R package to estimate different types of Rasch models; Mair, Hatzinger, & Mair, 2014) functions to estimate the model and interpret its parameters. The…
Operator Factorization and the Solution of Second-Order Linear Ordinary Differential Equations
ERIC Educational Resources Information Center
Robin, W.
2007-01-01
The theory and application of second-order linear ordinary differential equations is reviewed from the standpoint of the operator factorization approach to the solution of ordinary differential equations (ODE). Using the operator factorization approach, the general second-order linear ODE is solved, exactly, in quadratures and the resulting…
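As a brief illustration of the approach (for the constant-coefficient case only; the paper treats the general equation), factoring the differential operator reduces one second-order ODE to two first-order ODEs solved in sequence:

```latex
% constant-coefficient case: y'' + p y' + q y = 0; with D = d/dx, factor
% the operator using roots satisfying r_1 + r_2 = -p and r_1 r_2 = q:
(D - r_1)(D - r_2)\,y = 0.
% Set u = (D - r_2)y. Then (D - r_1)u = 0 gives
u(x) = C_1 e^{r_1 x},
% and y' - r_2 y = u is first order; the integrating factor e^{-r_2 x} yields
y(x) = e^{r_2 x}\!\left( C_2 + C_1 \int e^{(r_1 - r_2)x}\,dx \right).
```

Expanding the factored operator, (D - r_1)(D - r_2)y = y'' - (r_1 + r_2)y' + r_1 r_2 y, recovers the original equation, which confirms the root conditions.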
Derivation and definition of a linear aircraft model
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.
1988-01-01
A linear aircraft model for a rigid aircraft of constant mass flying over a flat, nonrotating earth is derived and defined. The derivation makes no assumptions of reference trajectory or vehicle symmetry. The linear system equations are derived and evaluated along a general trajectory and include both aircraft dynamics and observation variables.
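Evaluating the linear system along a trajectory amounts to computing the Jacobians A = ∂f/∂x and B = ∂f/∂u of the nonlinear dynamics at a trajectory point. The sketch below uses a deliberately toy two-state point-mass model, not the report's rigid-aircraft equations; the trim values are assumed for illustration.

```python
import numpy as np

def f(x, u):
    """Toy nonlinear dynamics (NOT the report's aircraft model):
    speed v and flight-path angle gamma driven by thrust and lift inputs."""
    v, gamma = x
    thrust, lift = u
    g = 9.81
    return np.array([thrust - g * np.sin(gamma),
                     (lift - g * np.cos(gamma)) / v])

def linearize(f, x0, u0, eps=1e-6):
    """Central-difference Jacobians A = df/dx, B = df/du at (x0, u0)."""
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    for i in range(m):
        du = np.zeros(m); du[i] = eps
        B[:, i] = (f(x0, u0 - du) * 0 + f(x0, u0 + du) - f(x0, u0 - du)) / (2 * eps)
    return A, B

x0 = np.array([100.0, 0.0])    # assumed trim: level flight at 100 m/s
u0 = np.array([0.0, 9.81])     # lift balances weight at trim
A, B = linearize(f, x0, u0)
```

At this trim point the analytic Jacobians are A = [[0, -g], [0, 0]] and B = [[1, 0], [0, 1/v]], so the finite-difference result can be checked term by term; no symmetry or reference-trajectory assumption enters the procedure.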
40 CFR 72.1 - Purpose and scope.
Code of Federal Regulations, 2013 CFR
2013-07-01
... REGULATION Acid Rain Program General Provisions § 72.1 Purpose and scope. (a) Purpose. The purpose of this... affected sources and affected units under the Acid Rain Program, pursuant to title IV of the Clean Air Act... regulations under this part set forth certain generally applicable provisions under the Acid Rain Program. The...
34 CFR 303.1 - Purpose of the early intervention program for infants and toddlers with disabilities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 34 Education 2 2010-07-01 2010-07-01 false Purpose of the early intervention program for infants... EDUCATION EARLY INTERVENTION PROGRAM FOR INFANTS AND TODDLERS WITH DISABILITIES General Purpose, Eligibility, and Other General Provisions § 303.1 Purpose of the early intervention program for infants and...
NASA Technical Reports Server (NTRS)
Holdaway, Daniel; Kent, James
2015-01-01
The linearity of a selection of common advection schemes is tested and examined with a view to their use in the tangent linear and adjoint versions of an atmospheric general circulation model. The schemes are tested within a simple offline one-dimensional periodic domain as well as using a simplified and complete configuration of the linearised version of NASA's Goddard Earth Observing System version 5 (GEOS-5). All schemes which prevent the development of negative values and preserve the shape of the solution are confirmed to have nonlinear behaviour. The piecewise parabolic method (PPM) with certain flux limiters, including that used by default in GEOS-5, is found to support linear growth near the shocks. This property can cause the rapid development of unrealistically large perturbations within the tangent linear and adjoint models. It is shown that these schemes with flux limiters should not be used within the linearised version of a transport scheme. The results from tests using GEOS-5 show that the current default scheme (a version of PPM) is not suitable for the tangent linear and adjoint model, and that using a linear third-order scheme for the linearised model produces better behaviour. Using the third-order scheme for the linearised model improves the correlations between the linear and non-linear perturbation trajectories for cloud liquid water and cloud liquid ice in GEOS-5.
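The linearity test used for such schemes can be demonstrated in a one-dimensional periodic domain: a scheme S is linear if S(u1 + u2) = S(u1) + S(u2). First-order upwind passes this test exactly, while a slope-limited scheme does not, because the limiter switches behavior based on the solution itself. This is a generic illustration (minmod limiter, assumed CFL number), not the GEOS-5 PPM scheme:

```python
import numpy as np

def upwind_step(u, c=0.5):
    """First-order upwind step (linear in u), periodic domain, CFL number c."""
    return u - c * (u - np.roll(u, 1))

def minmod(a, b):
    return np.where(a * b > 0, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

def limited_step(u, c=0.5):
    """Second-order MUSCL-type step with a minmod limiter (nonlinear in u)."""
    s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
    face = u + 0.5 * (1 - c) * s                        # upwind face values
    return u - c * (face - np.roll(face, 1))

rng = np.random.default_rng(4)
u1, u2 = rng.normal(size=64), rng.normal(size=64)

# a linear scheme satisfies S(u1+u2) = S(u1) + S(u2); a limited one does not
lin_err = np.max(np.abs(upwind_step(u1 + u2) - upwind_step(u1) - upwind_step(u2)))
nl_err = np.max(np.abs(limited_step(u1 + u2) - limited_step(u1) - limited_step(u2)))
```

The nonzero defect of the limited scheme is exactly the behavior that makes limiter-equipped schemes unsuitable inside a tangent linear or adjoint model.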
A general framework of noise suppression in material decomposition for dual-energy CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu
Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance–covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance–covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements.
On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors’ method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction and reduces noise level without noticeable resolution loss.
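The core of the Methods section above is least-squares estimation with smoothness regularization, weighted by the inverse variance of the decomposed images. A heavily simplified 1-D sketch of that idea (plain gradient descent, scalar weights standing in for the variance-covariance matrix; not the authors' implementation):

```python
# Minimize  sum_i w_i * (x_i - y_i)^2  +  lam * sum_i (x_{i+1} - x_i)^2,
# where y is the noisy decomposed signal and w_i plays the role of the
# inverse noise variance. The 2-D images and full covariance weighting of
# the paper are beyond this sketch.
def smooth_wls(y, w, lam=1.0, steps=2000, lr=0.05):
    n = len(y)
    x = list(y)
    for _ in range(steps):
        # Gradient of the weighted data-fidelity term.
        g = [2 * w[i] * (x[i] - y[i]) for i in range(n)]
        # Gradient of the smoothness penalty.
        for i in range(n - 1):
            d = 2 * lam * (x[i + 1] - x[i])
            g[i] -= d
            g[i + 1] += d
        x = [x[i] - lr * g[i] for i in range(n)]
    return x

noisy = [1.0, 1.4, 0.7, 1.2, 0.8, 1.3, 0.9, 1.1]
weights = [1.0] * len(noisy)       # uniform "inverse variance" here
x = smooth_wls(noisy, weights, lam=2.0)
rough = lambda v: sum((v[i + 1] - v[i]) ** 2 for i in range(len(v) - 1))
print(rough(x) < rough(noisy))     # True: the penalty suppresses noise
```

Raising `lam` trades fidelity to the noisy input for smoothness, which is the same noise/resolution trade-off the paper evaluates with line-pair and MTF measurements.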
The concept of collision strength and its applications
NASA Astrophysics Data System (ADS)
Chang, Yongbin
Collision strength, the measure of strength for a binary collision, has not been clearly defined. In practice, many physical arguments have been employed for the purpose and taken for granted. The scattering angle has been widely and intensively used as a measure of collision strength in plasma physics for years. The result is complication and unnecessary approximation in deriving some of the basic kinetic equations and in calculating some of the basic physical terms. The Boltzmann equation has a complicated five-fold integral collision term. Chandrasekhar and Spitzer's approaches to the linear Fokker-Planck coefficients involve several approximations. An effective variable-change technique has been developed in this dissertation as an alternative to the scattering angle as the measure of collision strength. By introducing the square of the reduced impulse or its equivalents as a collision strength variable, many plasma calculations have been simplified. The five-fold linear Boltzmann collision integral and linearized Boltzmann collision integral are simplified to three-fold integrals. The arbitrary-order linear Fokker-Planck coefficients are calculated and expressed in a uniform expression. The new theory provides a simple and exact method for describing the equilibrium plasma collision rate, and a precise calculation of the equilibrium relaxation time. It generalizes bimolecular collision reaction rate theory to a reaction rate theory for plasmas. A simple formula of high precision with a wide temperature range has been developed for electron-impact ionization rates for carbon atoms and ions. The universality of the concept of collision strength is emphasized.
This dissertation will show how Arrhenius' chemical reaction rate theory and Thomson's ionization theory can be unified as one single theory under the concept of collision strength, and how many important physical terms in different disciplines, such as activation energy in chemical reaction theory, ionization energy in Thomson's ionization theory, and the Coulomb logarithm in plasma physics, can be unified into a single one: the threshold value of collision strength. The collision strength, which is a measure of a transfer of momentum in units of energy, can be used to reconcile the differences between Descartes' opinion and Leibnitz's opinion about the "true" measure of a force. Like Newton's second law, which provides an instantaneous measure of a force, collision strength, as a cumulative measure of a force, can be regarded as part of a law of force in general.
The Use of Non-Standard Devices in Finite Element Analysis
NASA Technical Reports Server (NTRS)
Schur, Willi W.; Broduer, Steve (Technical Monitor)
2001-01-01
A general mathematical description of the response behavior of thin-skin pneumatic envelopes and many other membrane and cable structures produces under-constrained systems that pose severe difficulties for analysis. These systems are mobile, and the general mathematical description exposes the mobility. Yet the response behavior of special under-constrained structures under special loadings can be accurately predicted using a constrained mathematical description. The static response behavior of systems that are infinitesimally mobile, such as a non-slack membrane subtended from a rigid or elastic boundary frame, can be easily analyzed using such a general mathematical description as afforded by the non-linear finite element method with an implicit solution scheme, provided the incremental loading is guided through a suitable path. Similarly, if such structures are assembled with a structural lack of fit that provides suitable self-stress, then dynamic response behavior can be predicted by the non-linear finite element method and an implicit solution scheme. An explicit solution scheme is available for evolution problems. Such a scheme can be used via the method of dynamic relaxation to obtain the solution to a static problem. In some sense, pneumatic envelopes and many other compliant structures can be said to have a destiny under a specified loading system. What that means to the analyst is that what happens on the evolution path of the solution is irrelevant as long as equilibrium is achieved at destiny under full load and that equilibrium is stable in the vicinity of that load. The purpose of this paper is to alert practitioners to the fact that non-standard procedures in finite element analysis are useful and can be legitimate, although they burden their users with the requirement to use special caution. Some interesting findings that are useful to the US Scientific Balloon Program and that could not be obtained without non-standard techniques are presented.
Use of Conventional and Alternative Tobacco and Nicotine Products Among a Sample of Canadian Youth.
Czoli, Christine D; Hammond, David; Reid, Jessica L; Cole, Adam G; Leatherdale, Scott T
2015-07-01
The purpose of this study was to examine the use of conventional and alternative tobacco and nicotine products among secondary school students. Respondents were 44,163 grade 9-12 students who participated in Year 2 (2013-2014) of COMPASS, a cohort study of 89 purposefully sampled secondary schools in Ontario and Alberta, Canada. Past-month use of various tobacco and nicotine products was assessed, as well as correlates of use, using a generalized linear mixed effects model. Overall, 21.2% of the sample reported past-month use of any tobacco or nicotine product, with 7.2% reporting past-month use of e-cigarettes. E-cigarette users reported significantly greater prevalence of current use for all products. Students who were male, white, had more spending money, and had a history of tobacco use were more likely to report past-month use of e-cigarettes. Approximately one fifth of youth reported past-month use of a nicotine product, with e-cigarettes being the third most common product. Overall, the findings suggest a rapidly evolving nicotine market. Copyright © 2015 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKemmish, Laura K., E-mail: laura.mckemmish@gmail.com; Research School of Chemistry, Australian National University, Canberra
Algorithms for the efficient calculation of two-electron integrals in the newly developed mixed ramp-Gaussian basis sets are presented, alongside a Fortran90 implementation of these algorithms, RAMPITUP. These new basis sets have significant potential to (1) give some speed-up (estimated at up to 20% for large molecules in fully optimised code) to general-purpose Hartree-Fock (HF) and density functional theory quantum chemistry calculations, replacing all-Gaussian basis sets, and (2) give very large speed-ups for calculations of core-dependent properties, such as electron density at the nucleus, NMR parameters, relativistic corrections, and total energies, replacing the current use of Slater basis functions or very large specialised all-Gaussian basis sets for these purposes. This initial implementation already demonstrates roughly 10% speed-ups in HF/R-31G calculations compared to HF/6-31G calculations for large linear molecules, demonstrating the promise of this methodology, particularly for the second application. As well as the reduction in the total primitive number in R-31G compared to 6-31G, this timing advantage can be attributed to the significant reduction in the number of mathematically complex intermediate integrals after modelling each ramp-Gaussian basis-function-pair as a sum of ramps on a single atomic centre.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taneja, S; Fru, L Che; Desai, V
Purpose: It is now commonplace to handle treatments of hyperthyroidism using iodine-131 as an outpatient procedure due to lower costs and less stringent federal regulations. The Nuclear Regulatory Commission has recently updated release guidelines for these procedures, but there is still a large uncertainty in the dose to the public. Current guidelines to minimize dose to the public require patients to remain isolated after treatment. The purpose of this study was to use a low-cost common device, such as a cell phone, to estimate exposure emitted from a patient to the general public. Methods: Measurements were performed using an Apple iPhone 3GS and a Cs-137 irradiator. The charge-coupled device (CCD) camera on the phone was irradiated at exposure rates ranging from 0.1 mR/hr to 100 mR/hr, and 30-sec videos were taken during irradiation with the camera lens covered by electrical tape. Interactions were detected as white pixels on a black background in each video. Both single threshold (ST) and colony counting (CC) methods were performed using MATLAB®. Calibration curves were determined by comparing the total pixel intensity output from each method to the known exposure rate. Results: The calibration curve showed a linear relationship above 5 mR/hr for both analysis techniques. The number of events counted per unit exposure rate within the linear region was 19.5 ± 0.7 events/mR and 8.9 ± 0.4 events/mR for the ST and CC methods, respectively. Conclusion: Two algorithms were developed and show a linear relationship between photons detected by a CCD camera and low exposure rates, in the range of 5 mR/hr to 100 mR/hr. Future work aims to refine this model by investigating the dose-rate and energy dependencies of the camera response. This algorithm allows for quantitative monitoring of exposure from patients treated with iodine-131 using a simple device outside of the hospital.
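The single-threshold (ST) analysis and the linear calibration described above can be sketched as follows. The frames, pixel values, and the reuse of the 19.5 events/mR figure are synthetic stand-ins used only to illustrate the counting-and-fitting pipeline, not the authors' MATLAB code:

```python
import random

def count_events(frame, threshold=50):
    # Single-threshold (ST) method: count pixels brighter than the threshold.
    return sum(1 for row in frame for px in row if px > threshold)

def fit_line(xs, ys):
    # Ordinary least-squares slope/intercept for the calibration curve.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return slope, my - slope * mx

def synthetic_frame(rate, size=64, events_per_mr=19.5):
    # Dark frame with a hit count proportional to exposure rate;
    # hits land on distinct pixels so the count stays linear.
    frame = [[0] * size for _ in range(size)]
    for p in random.sample(range(size * size), int(events_per_mr * rate)):
        frame[p // size][p % size] = 255
    return frame

random.seed(0)
rates = [5, 10, 20, 50, 100]                       # mR/hr, linear region
counts = [count_events(synthetic_frame(r)) for r in rates]
slope, _ = fit_line(rates, counts)
print(round(slope, 1))  # 19.5 events/mR, by construction of the frames
```

Real frames would need the dose-rate and energy dependence the Conclusion mentions; here the slope simply recovers the rate built into the synthetic data.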
Correlates of Phonological Awareness in Preschoolers with Speech Sound Disorders
ERIC Educational Resources Information Center
Rvachew, Susan; Grawburg, Meghann
2006-01-01
Purpose: The purpose of this study was to examine the relationships among variables that may contribute to poor phonological awareness (PA) skills in preschool-aged children with speech sound disorders (SSD). Method: Ninety-five 4- and 5-year-old children with SSD were assessed during the spring of their prekindergarten year. Linear structural…
Tsou, Tsung-Shan
2007-03-30
This paper introduces an exploratory way to determine how variance relates to the mean in generalized linear models. This novel method employs the robust likelihood technique introduced by Royall and Tsou. A urinary data set collected by Ginsberg et al. and the fabric data set analysed by Lee and Nelder are considered to demonstrate the applicability and simplicity of the proposed technique. Application of the proposed method could easily reveal a mean-variance relationship that would generally be left unnoticed, or that would require more complex modelling to detect. Copyright (c) 2006 John Wiley & Sons, Ltd.
Code of Federal Regulations, 2010 CFR
2010-01-01
... securities from or through an affiliate of the member bank. (4) General purpose credit card transactions. (i... imposed in, a general purpose credit card issued by the member bank to the nonaffiliate. (ii) Definition. “General purpose credit card” means a credit card issued by a member bank that is widely accepted by...
Code of Federal Regulations, 2012 CFR
2012-01-01
... securities from or through an affiliate of the member bank. (4) General purpose credit card transactions. (i... imposed in, a general purpose credit card issued by the member bank to the nonaffiliate. (ii) Definition. “General purpose credit card” means a credit card issued by a member bank that is widely accepted by...
Code of Federal Regulations, 2013 CFR
2013-01-01
... securities from or through an affiliate of the member bank. (4) General purpose credit card transactions. (i... imposed in, a general purpose credit card issued by the member bank to the nonaffiliate. (ii) Definition. “General purpose credit card” means a credit card issued by a member bank that is widely accepted by...
Code of Federal Regulations, 2011 CFR
2011-01-01
... securities from or through an affiliate of the member bank. (4) General purpose credit card transactions. (i... imposed in, a general purpose credit card issued by the member bank to the nonaffiliate. (ii) Definition. “General purpose credit card” means a credit card issued by a member bank that is widely accepted by...
Code of Federal Regulations, 2014 CFR
2014-01-01
... securities from or through an affiliate of the member bank. (4) General purpose credit card transactions. (i... imposed in, a general purpose credit card issued by the member bank to the nonaffiliate. (ii) Definition. “General purpose credit card” means a credit card issued by a member bank that is widely accepted by...
ERIC Educational Resources Information Center
Puhan, Gautam; Moses, Tim P.; Yu, Lei; Dorans, Neil J.
2007-01-01
The purpose of the current study was to examine whether log-linear smoothing of observed score distributions in small samples results in more accurate differential item functioning (DIF) estimates under the simultaneous item bias test (SIBTEST) framework. Data from a teacher certification test were analyzed using White candidates in the reference…
ERIC Educational Resources Information Center
Liu, Xing
2008-01-01
The purpose of this study was to illustrate the use of Hierarchical Linear Models (HLM) to investigate the effects of school and children's attributes on children' reading achievement. In particular, this study was designed to: (1) develop the HLM models to determine the effects of school-level and child-level variables on children's reading…
Examining the Differences of Linear Systems between Finnish and Taiwanese Textbooks
ERIC Educational Resources Information Center
Yang, Der-Ching; Lin, Yung-Chi
2015-01-01
The purpose of this study was to examine the differences between Finnish and Taiwanese textbooks for grades 7 to 9 on the topic of solving systems of linear equations (simultaneous equations). The specific textbooks examined were TK in Taiwan and FL in Finland. The content analysis method was used to examine (a) the teaching sequence, (b)…
ERIC Educational Resources Information Center
Parker, Catherine Frieda
2010-01-01
A possible contributing factor to students' difficulty in learning advanced mathematics is the conflict between students' "natural" learning styles and the formal structure of mathematics, which is based on definitions, theorems, and proofs. Students' natural learning styles may be a function of their intuition and language skills. The purpose of…
Progress in linear optics, non-linear optics and surface alignment of liquid crystals
NASA Astrophysics Data System (ADS)
Ong, H. L.; Meyer, R. B.; Hurd, A. J.; Karn, A. J.; Arakelian, S. M.; Shen, Y. R.; Sanda, P. N.; Dove, D. B.; Jansen, S. A.; Hoffmann, R.
We first discuss the progress in linear optics, in particular, the formulation and application of geometrical-optics approximation and its generalization. We then discuss the progress in non-linear optics, in particular, the enhancement of a first-order Freedericksz transition and intrinsic optical bistability in homeotropic and parallel oriented nematic liquid crystal cells. Finally, we discuss the liquid crystal alignment and surface effects on field-induced Freedericksz transition.
Electromagnetic axial anomaly in a generalized linear sigma model
NASA Astrophysics Data System (ADS)
Fariborz, Amir H.; Jora, Renata
2017-06-01
We construct the electromagnetic anomaly effective term for a generalized linear sigma model with two chiral nonets, one with a quark-antiquark structure, the other one with a four-quark content. We compute in the leading order of this framework the decays into two photons of six pseudoscalars: π0(137), π0(1300), η(547), η(958), η(1295) and η(1760). Our results agree well with the available experimental data.
NASA Technical Reports Server (NTRS)
Middleton, W. D.; Lundry, J. L.
1975-01-01
An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.
NASA Astrophysics Data System (ADS)
Made Tirta, I.; Anggraeni, Dian
2018-04-01
Statistical models have been developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures, or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated. Therefore statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models, and various mixed-effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open-source software R, but they can only be accessed through a command-line interface (using scripts). On the other hand, most practising researchers rely heavily on menu-based or Graphical User Interface (GUI) software. Using the Shiny framework, we develop a standard pull-down-menu Web GUI that unifies most models for correlated responses. The Web GUI accommodates almost all needed features. It enables users to run and compare various models for repeated-measures data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web GUI and illustrates their use. In general, we find that GEE, GLMM and HGLM give very close results.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalligiannaki, Evangelia, E-mail: ekalligian@tem.uoc.gr; Harmandaris, Vagelis, E-mail: harman@uoc.gr; Institute of Applied and Computational Mathematics
Using the probabilistic language of conditional expectations, we reformulate the force matching method for coarse-graining of molecular systems as a projection onto spaces of coarse observables. A practical outcome of this probabilistic description is the link of the force matching method with thermodynamic integration. This connection provides a way to systematically construct a local mean force and to optimally approximate the potential of mean force through force matching. We introduce a generalized force matching condition for the local mean force in the sense that allows the approximation of the potential of mean force under both linear and non-linear coarse graining mappings (e.g., reaction coordinates, end-to-end length of chains). Furthermore, we study the equivalence of force matching with relative entropy minimization which we derive for general non-linear coarse graining maps. We present in detail the generalized force matching condition through applications to specific examples in molecular systems.
Voit, E O; Knapp, R G
1997-08-15
The linear-logistic regression model and Cox's proportional hazard model are widely used in epidemiology. Their successful application leaves no doubt that they are accurate reflections of observed disease processes and their associated risks or incidence rates. In spite of their prominence, it is not a priori evident why these models work. This article presents a derivation of the two models from the framework of canonical modeling. It begins with a general description of the dynamics between risk sources and disease development, formulates this description in the canonical representation of an S-system, and shows how the linear-logistic model and Cox's proportional hazard model follow naturally from this representation. The article interprets the model parameters in terms of epidemiological concepts as well as in terms of general systems theory and explains the assumptions and limitations generally accepted in the application of these epidemiological models.
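As a concrete reminder of one of the two models in question, the linear-logistic regression model makes the log-odds of disease a linear function of the risk factors, so each coefficient exponentiates to an odds ratio. A minimal sketch with hypothetical coefficients (not values from the article):

```python
import math

def logistic_risk(x, beta0, betas):
    # Linear-logistic model: log-odds of disease are linear in the risk factors.
    eta = beta0 + sum(b * xi for b, xi in zip(betas, x))
    return 1.0 / (1.0 + math.exp(-eta))

# Illustrative coefficients (hypothetical, not from the article):
beta0, betas = -2.0, [0.7, 0.3]
p1 = logistic_risk([1, 0], beta0, betas)   # risk factor 1 present
p0 = logistic_risk([0, 0], beta0, betas)   # baseline
odds = lambda p: p / (1 - p)
print(round(odds(p1) / odds(p0), 4))  # 2.0138, i.e. exp(0.7): the odds ratio
```

This is the epidemiological endpoint of the derivation; the article's contribution is showing how this form follows from an S-system description of the risk-disease dynamics.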
Generalized concurrence in boson sampling.
Chin, Seungbeom; Huh, Joonsuk
2018-04-17
A fundamental question in linear optical quantum computing is to understand the origin of the quantum supremacy in the physical system. It is found that the multimode linear optical transition amplitudes are calculated through the permanents of transition operator matrices, which is a hard problem for classical simulations (the boson sampling problem). We can understand this problem by considering a quantum measure that directly determines the runtime for computing the transition amplitudes. In this paper, we suggest a quantum measure named the "Fock state concurrence sum" C_S, which is the summation over all the members of "the generalized Fock state concurrence" (a measure analogous to the generalized concurrences of entanglement and coherence). By introducing generalized algorithms for computing the transition amplitudes of the Fock state boson sampling with an arbitrary number of photons per mode, we show that the minimal classical runtime for all the known algorithms directly depends on C_S. Therefore, we can state that the Fock state concurrence sum C_S behaves as a collective measure that controls the computational complexity of Fock state boson sampling. We expect that our observation on the role of the Fock state concurrence in the generalized algorithm for permanents would provide a unified viewpoint to interpret the quantum computing power of linear optics.
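The classical hardness referred to above comes from computing matrix permanents. A minimal sketch of Ryser's formula, the standard exponential-time classical algorithm for the permanent (illustrative; the paper's generalized algorithms for multi-photon modes are more involved):

```python
from itertools import combinations

def permanent(a):
    # Ryser's formula: perm(A) = (-1)^n * sum over nonempty column subsets S
    # of (-1)^|S| * prod_i (sum_{j in S} a[i][j]).  O(2^n * n^2) work,
    # which is why boson-sampling amplitudes are classically hard.
    n = len(a)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in a:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

eye = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
ones = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(permanent(eye))   # 1.0 (only the diagonal contributes)
print(permanent(ones))  # 6.0 = 3! permutations
```

Unlike the determinant, no sign cancellation is available, so no polynomial-time analogue of Gaussian elimination exists; the paper's claim is that C_S quantifies how much of this cost a given Fock input state actually incurs.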
Proceedings of the Non-Linear Aero Prediction Requirements Workshop
NASA Technical Reports Server (NTRS)
Logan, Michael J. (Editor)
1994-01-01
The purpose of the Non-Linear Aero Prediction Requirements Workshop, held at NASA Langley Research Center on 8-9 Dec. 1993, was to identify and articulate requirements for non-linear aero prediction capabilities during conceptual/preliminary design. The attendees included engineers from industry, government, and academia in a variety of aerospace disciplines, such as advanced design, aerodynamic performance analysis, aero methods development, flight controls, and experimental and theoretical aerodynamics. Presentations by industry and government organizations were followed by panel discussions. This report contains copies of the presentations and the results of the panel discussions.
Aspects of general higher-order gravities
NASA Astrophysics Data System (ADS)
Bueno, Pablo; Cano, Pablo A.; Min, Vincent S.; Visser, Manus R.
2017-02-01
We study several aspects of higher-order gravities constructed from general contractions of the Riemann tensor and the metric in arbitrary dimensions. First, we use the fast-linearization procedure presented in [P. Bueno and P. A. Cano, arXiv:1607.06463] to obtain the equations satisfied by the metric perturbation modes on a maximally symmetric background in the presence of matter and to classify L(Riemann) theories according to their spectrum. Then, we linearize all theories up to quartic order in curvature and use this result to construct quartic versions of Einsteinian cubic gravity. In addition, we show that the most general cubic gravity constructed in a dimension-independent way and which does not propagate the ghostlike spin-2 mode (but can propagate the scalar) is a linear combination of f(Lovelock) invariants, plus the Einsteinian cubic gravity term, plus a new ghost-free gravity term. Next, we construct the generalized Newton potential and the post-Newtonian parameter γ for general L(Riemann) gravities in arbitrary dimensions, unveiling some interesting differences with respect to the four-dimensional case. We also study the emission and propagation of gravitational radiation from sources for these theories in four dimensions, providing a generalized formula for the power emitted. Finally, we review Wald's formalism for general L(Riemann) theories and construct new explicit expressions for the relevant quantities involved. Many examples illustrate our calculations.
ERIC Educational Resources Information Center
Kane, Michael T.; Mroch, Andrew A.; Suh, Youngsuk; Ripkey, Douglas R.
2009-01-01
This paper analyzes five linear equating models for the "nonequivalent groups with anchor test" (NEAT) design with internal anchors (i.e., the anchor test is part of the full test). The analysis employs a two-dimensional framework. The first dimension contrasts two general approaches to developing the equating relationship. Under a "parameter…
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Purpose. 10.2 Section 10.2 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL COMMERCIAL MOBILE ALERT SYSTEM General Information § 10.2 Purpose. The rules in this part establish the requirements for participation in the voluntary Commercial...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Navigation and Navigable Waters COAST GUARD, DEPARTMENT OF HOMELAND SECURITY MARITIME SECURITY MARITIME SECURITY: GENERAL General § 101.100 Purpose. (a) The purpose of this subchapter is: (1) To implement portions of the maritime security regime required by the Maritime Transportation Security Act of 2002, as...
Modelling female fertility traits in beef cattle using linear and non-linear models.
Naya, H; Peñagaricano, F; Urioste, J I
2017-06-01
Female fertility traits are key components of the profitability of beef cattle production. However, these traits are difficult and expensive to measure, particularly under extensive pastoral conditions, and consequently, fertility records are in general scarce and somewhat incomplete. Moreover, fertility traits are usually dominated by the effects of the herd-year environment, and it is generally assumed that relatively small margins are left for genetic improvement. New ways of modelling genetic variation in these traits are needed. Inspired by the methodological developments made by Prof. Daniel Gianola and co-workers, we assayed linear (Gaussian), Poisson, probit (threshold), censored Poisson and censored Gaussian models on three different kinds of endpoints, namely calving success (CS), number of days from first calving (CD) and number of failed oestrus (FE). For models involving FE and CS, non-linear models outperformed their linear counterparts. For models derived from CD, linear versions displayed a better fit than the non-linear counterparts. Non-linear models showed consistently higher estimates of heritability and repeatability in all cases (h² < 0.08 and r < 0.13 for linear models; h² > 0.23 and r > 0.24 for non-linear models). While additive and permanent environment effects showed highly favourable correlations between all models (>0.789), consistency in selecting the 10% best sires showed important differences, mainly amongst the considered endpoints (FE, CS and CD). In consequence, endpoints should be considered as modelling different underlying genetic effects, with linear models more appropriate to describe CD and non-linear models better for FE and CS. © 2017 Blackwell Verlag GmbH.
77 FR 77043 - 36(b)(1) Arms Sales Notification
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-31
... MK-84 2000 lb General Purpose Bombs; 1,725 MK-82 500 lb General Purpose Bombs; 1,725 BLU-109 Bombs; 3,450 GBU-39 Small Diameter Bombs; 11,500 FMU-139 Fuses; 11,500 FMU-143 Fuses; and 11,500 FMU-152 Fuses... and 1,725 KMU-572 (GBU-38) for MK-82 warheads); 3,450 MK-84 2000 lb General Purpose Bombs; 1,725 MK-82...
Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong
2010-10-01
Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core(TM)2 Quad Q6600 CPU and a GeForce 8800GT GPU, with software support from OpenMP and CUDA. It was tested in three parallelization setups: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus one core of the CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setup (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setup (a), 16.8 in setup (b), and 20.0 in setup (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
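The idea behind load-prediction dynamic scheduling in the CPU-plus-GPU setup is to split each batch of work so that devices with different predicted throughputs finish at the same time. A toy sketch (the throughput numbers are loosely motivated by the reported speedups and are purely illustrative, not the authors' algorithm):

```python
def split_work(total, throughputs):
    # Give each device a share proportional to its predicted throughput,
    # so that all devices finish at (nearly) the same time.
    cap = sum(throughputs)
    shares = [int(total * t / cap) for t in throughputs]
    shares[0] += total - sum(shares)   # absorb the rounding remainder
    return shares

# Predicted throughputs (work units per unit time), loosely inspired by the
# reported speedups: 4 CPU cores (~3.9x serial) and one GPU (~16.8x serial).
cpu, gpu = 3.9, 16.8
work = 1600                            # e.g. the 1600 time steps above
shares = split_work(work, [cpu, gpu])
times = [s / t for s, t in zip(shares, [cpu, gpu])]
print(shares)                          # [302, 1298]: most work goes to the GPU
print(abs(times[0] - times[1]) < 2)    # True: both finish almost together
```

In a real scheduler the throughputs would be re-predicted from measured timings each iteration, so the split tracks the actual load; a fixed split is the degenerate static case.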
A normative study of the Italian printed word version of the free and cued selective reminding test.
Girtler, N; De Carli, F; Amore, M; Arnaldi, D; Bosia, L E; Bruzzaniti, C; Cappa, S F; Cocito, L; Colazzo, G; Ghio, L; Magi, E; Mancardi, G L; Nobili, F; Pardini, M; Picco, A; Rissotto, R; Serrati, C; Brugnolo, A
2015-07-01
According to the new research criteria for the diagnosis of Alzheimer's disease, episodic memory impairment that is not significantly improved by cueing is the core neuropsychological marker, even at a pre-dementia stage. The FCSRT assesses verbal learning and memory using semantic cues and is widely used in Europe. Standardization values for the Italian population are available for the colored picture version, but not for the 16-item printed word version. In this study, we present age- and education-adjusted normative data for the FCSRT-16, obtained using linear regression techniques and a generalized linear model, and critical values for classifying sub-test performance into equivalent scores. The following scores were derived from the performance of 194 normal subjects (MMSE score, range 27-30, mean 29.5 ± 0.5), stratified by decade of age (from 20 to 90), gender, and level of education (4 levels: 3-5, 6-8, 9-13, >13 years): immediate free recall (IFR), immediate total recall (ITR), recognition phase (RP), delayed free recall (DFR), delayed total recall (DTR), Index of Sensitivity of Cueing (ISC), and number of intrusions. This study confirms the effect of age and education, but not of gender, on immediate and delayed free and cued recall. The Italian version of the FCSRT-16 can be useful for both clinical and research purposes.
Metadynamics in the conformational space nonlinearly dimensionally reduced by Isomap.
Spiwok, Vojtěch; Králová, Blanka
2011-12-14
Atomic motions in molecules are not linear. This implies that nonlinear dimensionality reduction methods can outperform linear ones in the analysis of collective atomic motions. In addition, nonlinear collective motions can be used as potentially efficient guides for biased simulation techniques. Here we present a simulation with a bias potential acting in the directions of collective motions determined by a nonlinear dimensionality reduction method. Ad hoc generated conformations of trans,trans-1,2,4-trifluorocyclooctane were analyzed by the Isomap method to map these 72-dimensional coordinates to three dimensions, as described by Brown and co-workers [J. Chem. Phys. 129, 064118 (2008)]. Metadynamics employing the three-dimensional embeddings as collective variables was applied to explore all relevant conformations of the studied system and to calculate its conformational free energy surface. The method sampled all relevant conformations (boat, boat-chair, and crown) and the corresponding transition structures inaccessible by an unbiased simulation. This scheme allows essentially any parameter of the system to be used as a collective variable in biased simulations. Moreover, the scheme we used for mapping out-of-sample conformations from the 72D to 3D space can be used as a general-purpose mapping for dimensionality reduction, beyond the context of molecular modeling. © 2011 American Institute of Physics.
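The out-of-sample mapping in the paper follows the scheme of Brown and co-workers; as a rough illustration of the general idea of placing a new high-dimensional point into an existing low-dimensional embedding, here is a far simpler distance-weighted stand-in (not the paper's method; all data are made up):

```python
import math

# hypothetical "trained" landmarks and their low-dimensional embeddings
landmarks = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
coords = [(0.0,), (1.0,), (2.0,), (3.0,)]

def embed_out_of_sample(x, landmarks, coords, k=4, eps=1e-12):
    """Place a new point in the low-dimensional space as an inverse-distance
    weighted average of the embeddings of its k nearest landmarks."""
    nearest = sorted((math.dist(x, l), i) for i, l in enumerate(landmarks))[:k]
    weights = [1.0 / (d + eps) for d, _ in nearest]
    wsum = sum(weights)
    dim = len(coords[0])
    out = [0.0] * dim
    for w, (_, i) in zip(weights, nearest):
        for j in range(dim):
            out[j] += w * coords[i][j] / wsum
    return out
```

A point coinciding with a landmark recovers that landmark's embedding; a point equidistant from all landmarks lands at their centroid.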
Light field image denoising using a linear 4D frequency-hyperfan all-in-focus filter
NASA Astrophysics Data System (ADS)
Dansereau, Donald G.; Bongiorno, Daniel L.; Pizarro, Oscar; Williams, Stefan B.
2013-02-01
Imaging in low light is problematic as sensor noise can dominate imagery, and increasing illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with appropriately shaped passband we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenselet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan's performance scales with aperture count.
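A reduced 2D analogue can illustrate the frequency-domain masking at the heart of the method: in an epipolar slice of a light field, Lambertian scene energy falls inside a fan of slopes, so a fan-shaped passband rejects broadband noise. This numpy sketch is illustrative only and omits the hypercone intersection that defines the full 4D hyperfan:

```python
import numpy as np

def fan_denoise(epi, slope=1.0):
    """Apply a fan-shaped frequency-domain passband to a 2D 'epipolar'
    image: keep frequencies with |fu| <= slope * |fx| (plus DC), zero the
    rest. A 2D stand-in for the paper's 4D hyperfan filter."""
    F = np.fft.fft2(epi)
    fx = np.fft.fftfreq(epi.shape[0])[:, None]
    fu = np.fft.fftfreq(epi.shape[1])[None, :]
    mask = np.abs(fu) <= slope * np.abs(fx)
    mask[0, 0] = True                       # always keep the DC term
    return np.real(np.fft.ifft2(F * mask))

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))       # pure broadband noise input
out = fan_denoise(noise, slope=0.5)
```

Because white noise spreads energy uniformly over the frequency plane while the passband covers only a fraction of it, the filter reduces noise variance in proportion to the rejected area.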
NASA Astrophysics Data System (ADS)
Provencher, Stephen W.
1982-09-01
CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizer, and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizers, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be automatically chosen on the basis of an F-test and confidence region. The interpretation of the latter, and of error estimates based on the covariance matrix of the constrained regularized solution, are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
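The core of such regularized inversion can be sketched with plain Tikhonov regularization on a toy smoothing kernel. This is a minimal stand-in, not CONTIN itself: CONTIN's regularizer additionally encodes parsimony and statistical priors, and supports equality and inequality constraints.

```python
import numpy as np

rng = np.random.default_rng(1)

# Model ill-posed problem: recover x from b = A x + noise, where A is a
# severely smoothing (Gaussian-blur) kernel, a stand-in for the noisy
# linear operator equations CONTIN addresses.
n = 100
t = np.linspace(0.0, 1.0, n)
w = 0.1                                      # kernel width: strong smoothing
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * w ** 2)) / n
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-5 * rng.standard_normal(n)

def tikhonov(A, b, lam):
    """Minimize ||A x - b||^2 + lam^2 ||x||^2 via an augmented
    least-squares system."""
    m = A.shape[1]
    A_aug = np.vstack([A, lam * np.eye(m)])
    b_aug = np.concatenate([b, np.zeros(m)])
    return np.linalg.lstsq(A_aug, b_aug, rcond=None)[0]

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # unregularized: noise blows up
x_reg = tikhonov(A, b, lam=1e-4)
```

Even this crude penalty keeps the inversion error bounded, whereas the unregularized solution is dominated by amplified noise, which is precisely the "errors are unbounded" behaviour described in the abstract.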
Health Disparities Grants Funded by National Institute on Aging: Trends Between 2000 and 2010
Kim, Giyeon; DeCoster, Jamie; Huang, Chao-Hui; Parmelee, Patricia
2012-01-01
Purpose of the Study: The present study examined the characteristics of health disparities grants funded by National Institute on Aging (NIA) from 2000 to 2010. Objectives were (a) to examine longitudinal trends in health disparities–related grants funded by NIA and (b) to identify moderators of these trends. Design and Methods: Our primary data source was the National Institutes of Health Research Portfolio Online Reporting Tools Expenditures and Results (RePORTER) system. The RePORTER data were merged with data from the Carnegie Classification of Institutions of Higher Education. General linear models were used to examine the longitudinal trends and how these trends were associated with type of grant and institutional characteristics. Results: NIA funded 825 grants on health disparities between 2000 and 2010, expending approximately 330 million dollars. There was an overall linear increase over time in both the total number of grants and amount of funding, with an outlying spike during 2009. These trends were significantly influenced by several moderators including funding mechanism and type of institution. Implications: The findings highlight NIA’s current efforts to fund health disparities grants to reduce disparities among older adults. Gerontology researchers may find this information very useful for their future grant submissions. PMID:22454392
Bayes factors for the linear ballistic accumulator model of decision-making.
Evans, Nathan J; Brown, Scott D
2018-04-01
Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models, which assume that different parameters cause the observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute-force integration, we exploit general-purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
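The brute-force Monte-Carlo marginal-likelihood estimator at the heart of this approach can be sketched on a toy conjugate model, with a unit-variance Gaussian standing in for the expensive LBA likelihood that the paper offloads to the GPU:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.array([0.8, 1.2, 0.4, 1.5, 0.9])   # toy data, n = 5

def log_lik(mu, x):
    """Gaussian log-likelihood with unit variance, vectorized over an
    array of candidate means mu (stand-in for the LBA likelihood)."""
    sq = (x[None, :] - mu[:, None]) ** 2
    return -0.5 * sq.sum(axis=1) - 0.5 * x.size * np.log(2 * np.pi)

# Model 1: mu ~ N(0, 1).  Monte-Carlo marginal likelihood:
# p(x) = E_prior[p(x | mu)], estimated by averaging over prior draws.
mu_draws = rng.standard_normal(200_000)
ll = log_lik(mu_draws, x)
m = ll.max()
log_ml_mc = m + np.log(np.mean(np.exp(ll - m)))   # log-sum-exp for stability

# Model 0: mu = 0 fixed, so its marginal likelihood is just the likelihood.
log_ml_0 = log_lik(np.zeros(1), x)[0]

log_bf_10 = log_ml_mc - log_ml_0   # log Bayes factor, model 1 vs model 0
```

For this conjugate toy the marginal likelihood is available in closed form, so the stability of the Monte-Carlo approximation can be checked directly; for the LBA no such closed form exists, which is what motivates the massive GPU parallelism.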
Critical N = (1, 1) general massive supergravity
NASA Astrophysics Data System (ADS)
Deger, Nihat Sadik; Moutsopoulos, George; Rosseel, Jan
2018-04-01
In this paper we study the supermultiplet structure of N = (1, 1) General Massive Supergravity at non-critical and critical points of its parameter space. To do this, we first linearize the theory around its maximally supersymmetric AdS3 vacuum and obtain the full linearized Lagrangian, including fermionic terms. At generic values, the linearized modes can be organized as two massless and two massive multiplets, related by supersymmetry in the standard way. At critical points logarithmic modes appear, and we find that at three of these points some of the supersymmetry transformations are non-invertible in the logarithmic multiplets. However, at the fourth critical point there is a massive logarithmic multiplet with invertible supersymmetry transformations.
Linear frictional forces cause orbits to neither circularize nor precess
NASA Astrophysics Data System (ADS)
Hamilton, B.; Crescimanno, M.
2008-06-01
For the undamped Kepler potential the lack of precession has historically been understood in terms of the Runge-Lenz symmetry. For the damped Kepler problem this result may be understood in terms of the generalization of Poisson structure to damped systems suggested recently by Tarasov (2005 J. Phys. A: Math. Gen. 38 2145). In this generalized algebraic structure the orbit-averaged Runge-Lenz vector remains a constant in the linearly damped Kepler problem to leading order in the damping coefficient. Beyond Kepler, we prove that, for any potential proportional to a power of the radius, the orbit shape and precession angle remain constant to leading order in the linear friction coefficient.
NASA Technical Reports Server (NTRS)
Zimmerle, D.; Bernhard, R. J.
1985-01-01
An alternative method for performing singular boundary element integrals for applications in linear acoustics is discussed. The method separates the integral of the characteristic solution into a singular and a nonsingular part. The singular portion is integrated with a combination of analytic and numerical techniques, while the nonsingular portion is integrated with standard Gaussian quadrature. The method may be generalized to many types of subparametric elements. The integrals over elements containing the root node are considered, and the characteristic solution for linear acoustic problems is examined. The method may be generalized to most characteristic solutions.
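The splitting idea can be illustrated on a model 1D integral with a logarithmic kernel: the singular part is integrated analytically and the bounded remainder by Gauss-Legendre quadrature. This is a simplified analogue, not the boundary-element implementation itself:

```python
import numpy as np

def gauss01(n):
    """Gauss-Legendre nodes and weights mapped from [-1, 1] to [0, 1]."""
    xg, wg = np.polynomial.legendre.leggauss(n)
    return 0.5 * (xg + 1.0), 0.5 * wg

def integrate_log_singular(g, n=32):
    """I = int_0^1 ln(x) g(x) dx, split into an analytic singular part
    g(0) * int_0^1 ln(x) dx = -g(0), plus a bounded remainder
    ln(x) * (g(x) - g(0)) handled by standard Gaussian quadrature."""
    xs, ws = gauss01(n)
    return -g(0.0) + np.sum(ws * np.log(xs) * (g(xs) - g(0.0)))

g = lambda x: 1.0 + x                     # exact value: -1 - 1/4 = -1.25
xs, ws = gauss01(32)
naive = np.sum(ws * np.log(xs) * g(xs))   # quadrature applied blindly
split = integrate_log_singular(g)
```

Gaussian quadrature applied blindly to the singular integrand converges slowly; subtracting the analytically integrated singularity leaves a remainder that the same rule handles far more accurately.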
Jain, Amit; Kuhls-Gilcrist, Andrew T; Gupta, Sandesh K; Bednarek, Daniel R; Rudin, Stephen
2010-03-01
The MTF, NNPS, and DQE are standard linear system metrics used to characterize intrinsic detector performance. To evaluate total system performance for actual clinical conditions, generalized linear system metrics (GMTF, GNNPS and GDQE) that include the effect of the focal spot distribution, scattered radiation, and geometric unsharpness are more meaningful and appropriate. In this study, a two-dimensional (2D) generalized linear system analysis was carried out for a standard flat panel detector (FPD) (194-micron pixel pitch and 600-micron thick CsI) and a newly-developed, high-resolution, micro-angiographic fluoroscope (MAF) (35-micron pixel pitch and 300-micron thick CsI). Realistic clinical parameters and x-ray spectra were used. The 2D detector MTFs were calculated using the new Noise Response method and slanted edge method and 2D focal spot distribution measurements were done using a pin-hole assembly. The scatter fraction, generated for a uniform head equivalent phantom, was measured and the scatter MTF was simulated with a theoretical model. Different magnifications and scatter fractions were used to estimate the 2D GMTF, GNNPS and GDQE for both detectors. Results show spatial non-isotropy for the 2D generalized metrics which provide a quantitative description of the performance of the complete imaging system for both detectors. This generalized analysis demonstrated that the MAF and FPD have similar capabilities at lower spatial frequencies, but that the MAF has superior performance over the FPD at higher frequencies even when considering focal spot blurring and scatter. This 2D generalized performance analysis is a valuable tool to evaluate total system capabilities and to enable optimized design for specific imaging tasks.
Multilayer neural networks for reduced-rank approximation.
Diamantaras, K I; Kung, S Y
1994-01-01
This paper is developed in two parts. First, the authors formulate the solution to the general reduced-rank linear approximation problem, relaxing the invertibility assumption of the input autocorrelation matrix used by previous authors. The authors' treatment unifies linear regression, Wiener filtering, full rank approximation, auto-association networks, SVD and principal component analysis (PCA) as special cases. The authors' analysis also shows that two-layer linear neural networks with a reduced number of hidden units, trained with the least-squares error criterion, produce weights that correspond to the generalized singular value decomposition of the input-teacher cross-correlation matrix and the input data matrix. As a corollary, the linear two-layer backpropagation model with a reduced hidden layer extracts an arbitrary linear combination of the generalized singular vector components. Second, the authors investigate artificial neural network models for the solution of the related generalized eigenvalue problem. By introducing and utilizing the extended concept of deflation (originally proposed for the standard eigenvalue problem), the authors find that a sequential version of linear BP can extract the exact generalized eigenvector components. The advantage of this approach is that it is easier to update the model structure by adding one more unit or pruning one or more units when the application requires it. An alternative approach for extracting the exact components is to use a set of lateral connections among the hidden units, trained in such a way as to enforce orthogonality among the upper- and lower-layer weights. The authors call this the lateral orthogonalization network (LON) and show via theoretical analysis, and verify via simulation, that the network extracts the desired components. The advantage of the LON-based model is that it can be applied in a parallel fashion so that the components are extracted concurrently.
Finally, the authors show the application of their results to the solution of the identification problem of systems whose excitation has a non-invertible autocorrelation matrix. Previous identification methods usually rely on the invertibility assumption of the input autocorrelation; therefore they cannot be applied to this case.
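The closed-form target that such a two-layer linear network converges to can be sketched directly: fit the full least-squares map with a pseudoinverse (no invertibility assumption on the input autocorrelation), then project the fitted outputs onto their top-k left singular subspace. This is the standard reduced-rank regression construction; the variable shapes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((6, 500))                       # inputs (6-dim)
W_true = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))
Y = W_true @ X + 0.01 * rng.standard_normal((4, 500))   # teacher signals

def reduced_rank(X, Y, k):
    """Closed-form reduced-rank least squares: pseudoinverse-based full
    LS fit, then a rank-k projection of the fitted outputs (the solution
    a reduced-hidden-layer linear network approaches under LS training)."""
    W_full = Y @ np.linalg.pinv(X)          # full LS map, no invertibility needed
    Y_hat = W_full @ X                      # fitted outputs
    U, _, _ = np.linalg.svd(Y_hat, full_matrices=False)
    P = U[:, :k] @ U[:, :k].T               # rank-k output projector
    return P @ W_full

W2 = reduced_rank(X, Y, k=2)
```

Because the teacher map here is genuinely rank 2, the rank-2 solution captures essentially all of the signal while the rank-1 solution necessarily fits worse.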
Code of Federal Regulations, 2012 CFR
2012-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES General § 970.100 Purpose. (a) General... recognition that the deep seabed mining industry is still evolving and that more information must be developed...
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES General § 970.100 Purpose. (a) General... recognition that the deep seabed mining industry is still evolving and that more information must be developed...
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES General § 970.100 Purpose. (a) General... recognition that the deep seabed mining industry is still evolving and that more information must be developed...
7 CFR 254.1 - General purpose.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION ADMINISTRATION OF THE FOOD DISTRIBUTION PROGRAM FOR INDIAN HOUSEHOLDS IN OKLAHOMA § 254.1 General purpose. This part sets the requirement under which...
7 CFR 254.1 - General purpose.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE GENERAL REGULATIONS AND POLICIES-FOOD DISTRIBUTION ADMINISTRATION OF THE FOOD DISTRIBUTION PROGRAM FOR INDIAN HOUSEHOLDS IN OKLAHOMA § 254.1 General purpose. This part sets the requirement under which...
Code of Federal Regulations, 2010 CFR
2010-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES General § 970.100 Purpose. (a) General... recognition that the deep seabed mining industry is still evolving and that more information must be developed...
Code of Federal Regulations, 2011 CFR
2011-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR EXPLORATION LICENSES General § 970.100 Purpose. (a) General... recognition that the deep seabed mining industry is still evolving and that more information must be developed...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Moses; Qin, Hong; Davidson, Ronald C.
In an uncoupled linear lattice system, the Kapchinskij-Vladimirskij (KV) distribution formulated on the basis of the single-particle Courant-Snyder invariants has served as a fundamental theoretical basis for the analyses of the equilibrium, stability, and transport properties of high-intensity beams for the past several decades. Recent applications of high-intensity beams, however, require beam phase-space manipulations by intentionally introducing strong coupling. Here in this Letter, we report the full generalization of the KV model by including all of the linear (both external and space-charge) coupling forces, beam energy variations, and arbitrary emittance partition, which all form essential elements for phase-space manipulations. The new generalized KV model yields spatially uniform density profiles and corresponding linear self-field forces as desired. Finally, the corresponding matrix envelope equations and beam matrix for the generalized KV model provide important new theoretical tools for the detailed design and analysis of high-intensity beam manipulations, for which previous theoretical models are not easily applicable.
Generalized Predictive and Neural Generalized Predictive Control of Aerospace Systems
NASA Technical Reports Server (NTRS)
Kelkar, Atul G.
2000-01-01
The research work presented in this thesis addresses the problem of robust control of uncertain linear and nonlinear systems using the Neural network-based Generalized Predictive Control (NGPC) methodology. A brief overview of predictive control and its comparison with Linear Quadratic (LQ) control is given to emphasize the advantages and drawbacks of predictive control methods. It is shown that the Generalized Predictive Control (GPC) methodology overcomes the drawbacks associated with traditional LQ control as well as conventional predictive control methods. It is also shown that, in spite of its model-based nature, GPC has good robustness properties, being a special case of receding horizon control. The conditions for choosing the tuning parameters of GPC to ensure closed-loop stability are derived. A neural network-based GPC architecture is proposed for the control of linear and nonlinear uncertain systems. A methodology to account for parametric uncertainty in the system is proposed, using the on-line training capability of a multi-layer neural network. Several simulation examples and results from real-time experiments are given to demonstrate the effectiveness of the proposed methodology.
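The receding-horizon mechanism underlying GPC can be sketched for a scalar linear plant. This is a deliberately stripped-down stand-in (no noise model, and control amplitude is penalized instead of control increments), not the thesis's NGPC:

```python
import numpy as np

def predictive_gains(a, b, H, lam):
    """Unconstrained predictive-control gains for the scalar plant
    x[k+1] = a*x[k] + b*u[k]: build the free-response vector Phi and the
    forced-response (step-convolution) matrix Gamma over horizon H, then
    minimize ||y - r||^2 + lam*||u||^2 in closed form."""
    Phi = np.array([a ** (i + 1) for i in range(H)])   # free response
    Gamma = np.zeros((H, H))                           # forced response
    for i in range(H):
        for j in range(i + 1):
            Gamma[i, j] = a ** (i - j) * b
    K = np.linalg.solve(Gamma.T @ Gamma + lam * np.eye(H), Gamma.T)
    return Phi, K

a, b, H, lam, r = 0.9, 0.1, 10, 1e-3, 1.0
Phi, K = predictive_gains(a, b, H, lam)
x = 0.0
for _ in range(60):
    u_seq = K @ (r * np.ones(H) - Phi * x)   # optimal move sequence
    x = a * x + b * u_seq[0]                 # apply only the first move
```

Re-optimizing at every step while applying only the first control move is the receding-horizon principle the abstract credits for GPC's robustness.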
Chung, Moses; Qin, Hong; Davidson, Ronald C.; ...
2016-11-23
48 CFR 9905.502-60 - Illustrations.
Code of Federal Regulations, 2012 CFR
2012-10-01
... for the same purpose: (1) An educational institution normally allocates special test equipment costs directly to contracts. The costs of general purpose test equipment are normally included in the indirect... of general purpose test equipment costs from the indirect cost pool to the contract, in addition to...
44 CFR 10.1 - Background and purpose.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 44 Emergency Management and Assistance 1 2010-10-01 2010-10-01 false Background and purpose. 10.1 Section 10.1 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY GENERAL ENVIRONMENTAL CONSIDERATIONS General § 10.1 Background and purpose. (a) This part...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Purpose. 1016.1 Section 1016.1 Energy DEPARTMENT OF ENERGY (GENERAL PROVISIONS) SAFEGUARDING OF RESTRICTED DATA General Provisions § 1016.1 Purpose. The regulations in this part establish requirements for the safeguarding of Secret and Confidential Restricted Data...
Non-linear behavior of fiber composite laminates
NASA Technical Reports Server (NTRS)
Hashin, Z.; Bagchi, D.; Rosen, B. W.
1974-01-01
The non-linear behavior of fiber composite laminates which results from lamina non-linear characteristics was examined. The analysis uses a Ramberg-Osgood representation of the lamina transverse and shear stress strain curves in conjunction with deformation theory to describe the resultant laminate non-linear behavior. A laminate having an arbitrary number of oriented layers and subjected to a general state of membrane stress was treated. Parametric results and comparison with experimental data and prior theoretical results are presented.
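A Ramberg-Osgood lamina stress-strain representation of the kind used here can be sketched as follows. This uses one common normalized form of the law; all parameter values are hypothetical, not taken from the paper:

```python
import numpy as np

def ramberg_osgood_strain(sigma, E, sigma0, alpha, n):
    """Ramberg-Osgood stress-strain law in a common normalized form:
    eps = sigma/E * (1 + alpha * (sigma/sigma0)**(n-1)).
    Reduces to linear elasticity for alpha = 0 or small stress."""
    sigma = np.asarray(sigma, dtype=float)
    return sigma / E * (1.0 + alpha * np.abs(sigma / sigma0) ** (n - 1))

# illustrative (hypothetical) transverse-lamina-like values
E, sigma0, alpha, n = 10e9, 40e6, 3.0 / 7.0, 5
stress = np.linspace(0.0, 60e6, 7)
strain = ramberg_osgood_strain(stress, E, sigma0, alpha, n)
```

Feeding such per-lamina curves for the transverse and shear responses into a deformation-theory laminate analysis yields the non-linear laminate behavior described in the abstract.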
In-vivo detectability index: development and validation of an automated methodology
NASA Astrophysics Data System (ADS)
Smith, Taylor Brunton; Solomon, Justin; Samei, Ehsan
2017-03-01
The purpose of this study was to develop and validate a method to estimate patient-specific detectability indices directly from patients' CT images (i.e., "in vivo"). The method works by automatically extracting noise (NPS) and resolution (MTF) properties from each patient's CT series based on previously validated techniques. Patient images are thresholded to locate skin-air interfaces, which form edge-spread functions that are binned, differentiated, and Fourier transformed to yield the MTF. The NPS is likewise estimated from uniform areas of the image. These are combined with assumed task functions (reference task: 10 mm disk lesion with contrast of -15 HU) to compute detectability indices for a non-prewhitening matched-filter model observer predicting observer performance. The results were compared to those from a previous human detection study of 105 subtle, hypo-attenuating liver lesions, using a two-alternative forced-choice (2AFC) method over six dose levels with 16 readers. The in vivo detectability indices estimated for all patient images were compared to binary 2AFC outcomes with a generalized linear mixed-effects statistical model (probit link function, linear terms only, no interactions, random term for readers). The model showed that the in vivo detectability indices were strongly predictive of 2AFC outcomes (P < 0.05). A linear comparison between human detection accuracy and model-predicted detection accuracy (for like conditions) yielded Pearson and Spearman correlation coefficients of 0.86 and 0.87, respectively. These data provide evidence that the in vivo detectability index could be used to automatically estimate and track image quality in clinical operation.
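The non-prewhitening matched-filter index combines the extracted MTF and NPS with the task function. A 1D radial sketch of the standard NPW formula follows (the study evaluates 2D integrals; the spectra below are hypothetical placeholders, not the study's measurements):

```python
import numpy as np

def npw_dprime(task_w, mtf, nps, df):
    """Non-prewhitening (NPW) matched-filter detectability index on a 1D
    frequency grid:  d'^2 = [sum W^2 MTF^2 df]^2 / sum W^2 MTF^2 NPS df."""
    s2 = (task_w * mtf) ** 2
    return np.sqrt(np.sum(s2 * df) ** 2 / np.sum(s2 * nps * df))

f = np.linspace(0.0, 1.0, 200)     # spatial frequency axis (cycles/mm)
df = f[1] - f[0]
task = np.exp(-f / 0.2)            # hypothetical disk-like task spectrum
mtf = np.exp(-3.0 * f)             # hypothetical system MTF
nps = np.full_like(f, 1e-4)        # hypothetical white noise power
d = npw_dprime(task, mtf, nps, df)
```

With a flat MTF and white noise the expression collapses to the familiar signal-energy-over-noise form, a convenient sanity check on any implementation.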
The determination of third order linear models from a seventh order nonlinear jet engine model
NASA Technical Reports Server (NTRS)
Lalonde, Rick J.; Hartley, Tom T.; De Abreu-Garcia, J. Alex
1989-01-01
Results are presented that demonstrate how good reduced-order models can be obtained directly by recursive parameter identification using input/output (I/O) data of high-order nonlinear systems. Three different methods of obtaining a third-order linear model from a seventh-order nonlinear turbojet engine model are compared. The first method is to obtain a linear model from the original model and then reduce the linear model by standard reduction techniques such as residualization and balancing. The second method is to identify directly a third-order linear model by recursive least-squares parameter estimation using I/O data of the original model. The third method is to obtain a reduced-order model from the original model and then linearize the reduced model. Frequency responses are used as the performance measure to evaluate the reduced models. The reduced-order models along with their Bode plots are presented for comparison purposes.
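The second method, direct identification of a low-order model by recursive least-squares from I/O data, can be sketched as follows. A second-order ARX toy problem stands in for the seventh-order engine model:

```python
import numpy as np

rng = np.random.default_rng(3)

def rls_identify(u, y, order, lam=1.0):
    """Recursive least-squares estimation of ARX parameters in
    y[k] = a1*y[k-1] + ... + b1*u[k-1] + ..., updating the estimate
    theta and covariance P one sample at a time."""
    n_par = 2 * order
    theta = np.zeros(n_par)
    P = 1e6 * np.eye(n_par)                 # large initial covariance
    for k in range(order, len(y)):
        phi = np.concatenate([y[k - order:k][::-1], u[k - order:k][::-1]])
        Kg = P @ phi / (lam + phi @ P @ phi)
        theta += Kg * (y[k] - phi @ theta)
        P = (P - np.outer(Kg, phi @ P)) / lam
    return theta

# simulated I/O data from a known stable second-order plant
u = rng.standard_normal(2000)
y = np.zeros(2000)
for k in range(2, 2000):
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 1.0 * u[k - 1] + 0.5 * u[k - 2]
theta = rls_identify(u, y, order=2)
```

With noise-free data the recursion recovers the generating parameters; with data from a higher-order nonlinear system, as in the paper, the same recursion instead delivers the best low-order linear fit, which is then judged by its frequency response.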
Overview of epidemiologic studies of radiation and cancer risk based on medical series
DOE Office of Scientific and Technical Information (OSTI.GOV)
Howe, G.R.
1997-03-01
Epidemiologic studies of individuals exposed to ionizing radiation for medical reasons have made important contributions to understanding of the relationship between such radiation and subsequent cancer risk. In this paper the strengths and limitations of medical studies are considered and their future potential usefulness is discussed. Studies may be broadly classified into two types: those of individuals exposed for therapeutic purposes, such as the study of ankylosing spondylitis patients, and those of individuals exposed for diagnostic or examination purposes, such as those of tuberculosis patients routinely examined by chest fluoroscopy. In general, studies of therapeutic exposures tend to involve high doses of radiation given at high dose rates and in a relatively small number of fractions, whereas studies of diagnostic exposures tend to involve relatively low doses, low dose rates and many fractions. However, these generalizations are not always true: for example, in the fluoroscopy studies some patients received doses to organs such as breast and lung which were substantially higher than those experienced in the atomic bomb survivors study, and in a study of Israeli children treated with radiation for tinea capitis the average thyroid dose was reported to be low, only about 0.09 gray. These studies illustrate one of the most important advantages of medical series, namely the variety of such studies in terms of the characteristics of the radiation involved (linear energy transfer characteristics, dose range, dose rate, and fractionation), the organs exposed and hence potentially at risk, and the characteristics of those exposed to such radiation.
Credibility analysis of risk classes by generalized linear model
NASA Astrophysics Data System (ADS)
Erdemir, Ovgucan Karadag; Sucu, Meral
2016-06-01
In this paper, the generalized linear model (GLM) and credibility theory, which are frequently used in non-life insurance pricing, are combined for credibility analysis. Using the full credibility standard, the GLM is associated with the limited fluctuation credibility approach. Comparison criteria such as asymptotic variance and credibility probability are used to analyze the credibility of risk classes. An application is performed using one-year claim frequency data from a Turkish insurance company, and the results for credible risk classes are interpreted.
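The limited fluctuation machinery referenced here can be sketched with the classical square-root rule for Poisson claim frequency. These are standard actuarial formulas; the probability and tolerance values below are illustrative:

```python
from math import sqrt
from statistics import NormalDist

def full_credibility_n(p=0.90, k=0.05):
    """Limited-fluctuation full-credibility standard for Poisson claim
    frequency: the expected claim count n_full such that the observed
    frequency lies within 100k% of its mean with probability p."""
    z = NormalDist().inv_cdf((1 + p) / 2)
    return (z / k) ** 2

def credibility_premium(n, x_bar, mu, p=0.90, k=0.05):
    """Partial-credibility estimate Z*x_bar + (1-Z)*mu using the
    square-root rule Z = min(1, sqrt(n / n_full))."""
    Z = min(1.0, sqrt(n / full_credibility_n(p, k)))
    return Z * x_bar + (1 - Z) * mu, Z
```

For p = 0.90 and k = 0.05 the standard is about 1,082 expected claims; a risk class with a quarter of that exposure receives credibility Z = 0.5, blending its own experience equally with the collective mean.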
A general theory of linear cosmological perturbations: bimetric theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagos, Macarena; Ferreira, Pedro G., E-mail: m.lagos13@imperial.ac.uk, E-mail: p.ferreira1@physics.ox.ac.uk
2017-01-01
We implement the method developed in [1] to construct the most general parametrised action for linear cosmological perturbations of bimetric theories of gravity. Specifically, we consider perturbations around a homogeneous and isotropic background, and identify the complete form of the action invariant under diffeomorphism transformations, as well as the number of free parameters characterising this cosmological class of theories. We discuss, in detail, the case without derivative interactions, and compare our results with those found in massive bigravity.
Linear spin-2 fields in most general backgrounds
NASA Astrophysics Data System (ADS)
Bernard, Laura; Deffayet, Cédric; Schmidt-May, Angnis; von Strauss, Mikael
2016-04-01
We derive the full perturbative equations of motion for the most general background solutions in ghost-free bimetric theory in its metric formulation. Clever field redefinitions at the level of fluctuations enable us to circumvent the problem of varying a square-root matrix appearing in the theory. This greatly simplifies the expressions for the linear variation of the bimetric interaction terms. We show that these field redefinitions exist and are uniquely invertible if and only if the variation of the square-root matrix itself has a unique solution, which is a requirement for the linearized theory to be well defined. As an application of our results we examine the constraint structure of ghost-free bimetric theory at the level of linear equations of motion for the first time. We identify a scalar combination of equations which is responsible for the absence of the Boulware-Deser ghost mode in the theory. The bimetric scalar constraint is in general not manifestly covariant in its nature. However, in the massive gravity limit the constraint assumes a covariant form when one of the interaction parameters is set to zero. For that case our analysis provides an alternative and almost trivial proof of the absence of the Boulware-Deser ghost. Our findings generalize previous results in the metric formulation of massive gravity and also agree with studies of its vielbein version.
Non-linear regime of the Generalized Minimal Massive Gravity in critical points
NASA Astrophysics Data System (ADS)
Setare, M. R.; Adami, H.
2016-03-01
The Generalized Minimal Massive Gravity (GMMG) theory is realized by adding the CS deformation term, the higher derivative deformation term, and an extra term to pure Einstein gravity with a negative cosmological constant. In the present paper we obtain exact solutions to the GMMG field equations in the non-linear regime of the model. The GMMG model about AdS_3 space is conjectured to be dual to a 2-dimensional CFT. We study the theory at the critical points corresponding to the central charges c_-=0 or c_+=0, in the non-linear regime. We show that AdS_3 wave solutions are present, and have logarithmic form at the critical points. We then study the AdS_3 non-linear deformation solution. Furthermore, we obtain the logarithmic deformation of the extremal BTZ black hole. After that, using the Abbott-Deser-Tekin method, we calculate the energy and angular momentum of these types of black hole solutions.
ERIC Educational Resources Information Center
Malin, Heather; Han, Hyemin; Liauw, Indrawati
2017-01-01
This study investigated the effects of internal and demographic variables on civic development in late adolescence using the construct "civic purpose." We conducted surveys on civic engagement with 480 high school seniors, and surveyed them again 2 years later. Using multivariate regression and linear mixed models, we tested the main…
ERIC Educational Resources Information Center
Matzke, Orville R.
The purpose of this study was to formulate a linear programming model to simulate a foundation type support program and to apply this model to a state support program for the public elementary and secondary school districts in the State of Iowa. The model was successful in producing optimal solutions to five objective functions proposed for…
ERIC Educational Resources Information Center
KANTASEWI, NIPHON
The purpose of the study was to compare the effectiveness of (1) lecture presentations, (2) linear program use in class with and without discussion, and (3) linear programs used outside of class with in-class problems or discussion. The 126 college students enrolled in a bacteriology course were randomly assigned to three groups. In a succeeding…
Mayhew, Jerry L; Smith, Abbie E; Arabas, Jana L; Roberts, B Scott
2010-10-01
The purpose of this study was to determine the degree of upper-body strength gained by college women who are underweight and those who are obese using different modes of resistance training. Women who were underweight (UWW, n = 93, weight = 49.3 ± 4.5 kg) and women who were obese (OBW, n = 73, weight = 94.0 ± 15.1 kg) were selected from a larger cohort based on body mass index (UWW ≤ 18.5 kg·m⁻²; OBW ≥ 30 kg·m⁻²). Subjects elected to train with free weights (FW, n = 38), a supine vertical bench press machine (n = 52), or a seated horizontal bench press machine (n = 76) using similar linear periodization resistance training programs 3× per week for 12 weeks. Each participant was assessed for upper-body strength using FWs (general) and machine weight (specific) 1 repetition maximum bench press before and after training. Increases in general and mode-specific strength were significantly greater for OBW (5.2 ± 5.1 and 9.6 ± 5.1 kg, respectively) than for UWW (3.5 ± 4.1 and 7.2 ± 5.2 kg, respectively). General strength gains were not significantly different among the training modes. Mode-specific gains were significantly greater (p < 0.05) than general strength gains for all groups. In conclusion, various resistance training modes may produce comparable increases in general strength but will register greater gains if measured using the specific mode employed for training, regardless of the weight category of the individual.
Working With the Wave Equation in Aeroacoustics: The Pleasures of Generalized Functions
NASA Technical Reports Server (NTRS)
Farassat, F.; Brentner, Kenneth S.; Dunn, Mark H.
2007-01-01
The theme of this paper is the applications of generalized function (GF) theory to the wave equation in aeroacoustics. We start with a tutorial on GFs with particular emphasis on viewing functions as continuous linear functionals. We next define operations on GFs. The operation of interest to us in this paper is generalized differentiation. We give many applications of generalized differentiation, particularly for the wave equation. We discuss the use of GFs in finding Green's functions and some subtleties that only GF theory can clarify without ambiguities. We show how the knowledge of the Green's function of an operator L in a given domain D can allow us to solve a whole range of problems with operator L for domains situated within D by the imbedding method. We will show how we can use the imbedding method to find the Kirchhoff formulas for stationary and moving surfaces with ease and elegance, without the use of the four-dimensional Green's theorem, which is commonly done. Other subjects covered are why the derivatives in conservation laws should be viewed as generalized derivatives, and what the consequences of doing this are. In particular we show how we can imbed a problem in a larger domain for the identical differential equation for which the Green's function is known. The primary purpose of this paper is to convince the readers that GF theory is absolutely essential in aeroacoustics because of its powerful operational properties. Furthermore, learning the subject and using it can be fun.
NASA Astrophysics Data System (ADS)
van Berkel, M.; Kobayashi, T.; Igami, H.; Vandersteen, G.; Hogeweij, G. M. D.; Tanaka, K.; Tamura, N.; Zwart, H. J.; Kubo, S.; Ito, S.; Tsuchiya, H.; de Baar, M. R.; LHD Experiment Group
2017-12-01
A new methodology to analyze non-linear components in perturbative transport experiments is introduced. The methodology has been experimentally validated in the Large Helical Device for the electron heat transport channel. Electron cyclotron resonance heating with different modulation frequencies by two gyrotrons has been used to directly quantify the amplitude of the non-linear component at the inter-modulation frequencies. The measurements show significant quadratic non-linear contributions and also the absence of cubic and higher order components. The non-linear component is analyzed using the Volterra series, which is the non-linear generalization of transfer functions. This allows us to study the radial distribution of the non-linearity of the plasma and to reconstruct linear profiles where the measurements were not distorted by non-linearities. The reconstructed linear profiles are significantly different from the measured profiles, demonstrating the significant impact that non-linearity can have.
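The inter-modulation idea described above can be illustrated with a toy two-tone probe of a weakly quadratic response. All numbers below are invented for illustration; this is not the LHD analysis itself. A quadratic term mixes the two drive frequencies and produces spectral lines at f1 - f2 and f1 + f2, while a cubic term would instead populate combinations such as 2·f1 ± f2.

```python
import numpy as np

# Hypothetical two-tone probe of a weakly nonlinear response y = x + eps*x^2.
fs = 1000.0                       # sampling rate [Hz]
t = np.arange(0, 10.0, 1.0 / fs)  # 10 s record -> 0.1 Hz resolution
f1, f2 = 13.0, 17.0               # modulation frequencies [Hz]
eps = 0.05                        # strength of the quadratic nonlinearity

x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
y = x + eps * x**2                # quadratic response

spec = np.abs(np.fft.rfft(y)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

def amplitude(f):
    """Spectral amplitude at frequency f (nearest bin); a cosine of
    amplitude A shows up here as A/2 under this normalization."""
    return spec[np.argmin(np.abs(freqs - f))]

# quadratic mixing products at |f1 - f2| and f1 + f2, each of amplitude eps
print(amplitude(f2 - f1), amplitude(f1 + f2))  # both ~ eps/2 = 0.025
```

Absence of energy at the cubic combination frequencies (e.g. 2·f1 + f2) is what allows one to conclude, as in the abstract, that the nonlinearity is predominantly quadratic.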
Living environment and mobility of older adults.
Cress, M Elaine; Orini, Stefania; Kinsler, Laura
2011-01-01
Older adults often elect to move into smaller living environments. Smaller living space and the addition of services provided by a retirement community (RC) may make living easier for the individual, but it may also reduce the amount of daily physical activity and ultimately reduce functional ability. With home size as an independent variable, the primary purpose of this study was to evaluate daily physical activity and physical function of community dwellers (CD; n = 31) as compared to residents of an RC (n = 30). In this cross-sectional study design, assessments included: the Continuous Scale Physical Functional Performance-10 test, with a possible range of 0-100 and higher scores reflecting better function; the Step Activity Monitor (StepWatch 3.1); a physical activity questionnaire; and the area of the home (in square meters). Groups were compared by one-way ANOVA. A general linear regression model was used to predict the number of steps per day at home. The level of significance was p < 0.05. Of the 61 volunteers (mean age: 79 ± 6.3 years; range: 65-94 years), the RC living space (68 ± 37.7 m²) was 62% smaller than the CD living space (182.8 ± 77.9 m²; p = 0.001). After correcting for age, RC residents took fewer total steps per day excluding exercise (p = 0.03) and had lower function (p = 0.005) than the CD. On average, RC residents take 3,000 fewer steps per day and have approximately 60% of the living space of a CD. Home size and physical function were the primary predictors of the number of steps taken at home in the general linear regression analysis. Copyright © 2010 S. Karger AG, Basel.
2012-01-01
Background This study used a social capital framework to examine the relationship between a set of potential protective ('health assets') factors and the wellbeing of 15-year-old adolescents living in Spain and England. The overall purpose of the study was to compare the consistency of these relationships between countries and to investigate their respective relative importance. Methods Data were drawn from the 2002 English and Spanish components of the WHO Health Behaviour in School-Aged Children (HBSC) survey. A total of 3,591 respondents (1,884 in Spain; 1,707 in England) aged 15, drawn from random samples of students in 215 and 80 schools respectively, were included in the study. A series of univariate, bivariate and multivariate (general linear modelling and decision tree) analyses were used to establish the relationships. Results Results showed that the wellbeing of Spanish and English adolescents is similar and good. Three measures of social capital and 2 measures of social support were found to be important factors in the general linear model, namely family autonomy and control; family and school sense of belonging; and social support at home and school. However, there were differences in how the subcomponents of social capital manifest themselves in each country: feelings of autonomy and control were more important in England, and social support factors in Spain. Conclusions There is some evidence to suggest that social capital (and its related concept of social support) does travel and is applicable to young people living in Spain and England. Given the different constellation of assets found in each country, it is not possible to define exactly the precise formula for applying social capital across cultures. This should more appropriately be defined at the programme planning stage. PMID:22353283
An empirical model of diagnostic x-ray attenuation under narrow-beam geometry.
Mathieu, Kelsey B; Kappadath, S Cheenu; White, R Allen; Atkinson, E Neely; Cody, Dianna D
2011-08-01
The purpose of this study was to develop and validate a mathematical model to describe narrow-beam attenuation of kilovoltage x-ray beams for the intended applications of half-value layer (HVL) and quarter-value layer (QVL) estimations, patient organ shielding, and computer modeling. An empirical model, which uses the Lambert W function and represents a generalized Lambert-Beer law, was developed. To validate this model, transmission of diagnostic energy x-ray beams was measured over a wide range of attenuator thicknesses [0.49-33.03 mm Al on a computed tomography (CT) scanner, 0.09-1.93 mm Al on two mammography systems, and 0.1-0.45 mm Cu and 0.49-14.87 mm Al using general radiography]. Exposure measurements were acquired under narrow-beam geometry using standard methods, including the appropriate ionization chamber, for each radiographic system. Nonlinear regression was used to find the best-fit curve of the proposed Lambert W model to each measured transmission versus attenuator thickness data set. In addition to validating the Lambert W model, we also assessed the performance of two-point Lambert W interpolation compared to traditional methods for estimating the HVL and QVL [i.e., semilogarithmic (exponential) and linear interpolation]. The Lambert W model was validated for modeling attenuation versus attenuator thickness with respect to the data collected in this study (R² > 0.99). Furthermore, Lambert W interpolation was more accurate and less sensitive to the choice of interpolation points used to estimate the HVL and/or QVL than the traditional methods of semilogarithmic and linear interpolation. The proposed Lambert W model accurately describes attenuation of both monoenergetic radiation and (kilovoltage) polyenergetic beams (under narrow-beam geometry).
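The two traditional interpolation schemes the Lambert W approach is benchmarked against can be sketched on synthetic monoenergetic data. The Lambert W model itself is not reproduced here, since its exact functional form is not given in this summary; for a truly exponential beam, semilogarithmic interpolation recovers the HVL exactly, while linear interpolation overestimates it.

```python
import numpy as np

# Traditional HVL estimation from two narrow-beam transmission measurements
# that bracket 50% transmission.
def hvl_semilog(x1, t1, x2, t2):
    """Thickness where transmission reaches 0.5, assuming T(x) = exp(-mu*x)
    locally, i.e. a straight line in (x, ln T) through the two points."""
    slope = (np.log(t2) - np.log(t1)) / (x2 - x1)
    return x1 + (np.log(0.5) - np.log(t1)) / slope

def hvl_linear(x1, t1, x2, t2):
    """Same, but linear interpolation in (x, T); biased for curved data."""
    return x1 + (0.5 - t1) * (x2 - x1) / (t2 - t1)

# synthetic monoenergetic beam, mu = 0.3 /mm -> exact HVL = ln(2)/mu
mu = 0.3
x1, x2 = 1.0, 4.0
t1, t2 = np.exp(-mu * x1), np.exp(-mu * x2)
print(hvl_semilog(x1, t1, x2, t2), np.log(2) / mu)  # semilog is exact here
```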
Gao, Jinghong; Sun, Yunzong; Liu, Qiyong; Zhou, Maigeng; Lu, Yaogui; Li, Liping
2015-02-01
Few multi-city studies have been conducted to explore the regional level definition of heat wave and examine the association between extreme high temperature and mortality in developing countries. The purpose of the present study was to investigate the impact of extreme high temperature on mortality and to explore the local definition of heat wave in five Chinese cities. We first used a distributed lag non-linear model to characterize the effects of daily mean temperature on non-accidental mortality. We then employed a generalized additive model to explore the city-specific definition of heat wave. Finally, we performed a comparative analysis to evaluate the effectiveness of the definition. For each city, we found a positive non-linear association between extreme high temperature and mortality, with the highest effects appearing within 3 days of extreme heat event onset. Specifically, we defined individual heat waves of Beijing and Tianjin as being two or more consecutive days with daily mean temperatures exceeding 30.2 °C and 29.5 °C, respectively, and Nanjing, Shanghai and Changsha heat waves as ≥3 consecutive days with daily mean temperatures higher than 32.9 °C, 32.3 °C and 34.5 °C, respectively. Comparative analysis generally supported the definition. We found extreme high temperatures were associated with increased mortality, after a short lag period, when temperatures exceeded obvious threshold levels. The city-specific definition of heat wave developed in our study may provide guidance for the establishment and implementation of early heat-health response systems for local government to deal with the projected negative health outcomes due to heat waves. Copyright © 2014 Elsevier B.V. All rights reserved.
Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables
ERIC Educational Resources Information Center
Henson, Robert A.; Templin, Jonathan L.; Willse, John T.
2009-01-01
This paper uses log-linear models with latent variables (Hagenaars, in "Loglinear Models with Latent Variables," 1993) to define a family of cognitive diagnosis models. In doing so, the relationship between many common models is explicitly defined and discussed. In addition, because the log-linear model with latent variables is a general model for…
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS General § 971.100 Purpose. The...
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS General § 971.100 Purpose. The...
Code of Federal Regulations, 2012 CFR
2012-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS General § 971.100 Purpose. The...
Code of Federal Regulations, 2011 CFR
2011-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS General § 971.100 Purpose. The...
Code of Federal Regulations, 2010 CFR
2010-01-01
... AND ATMOSPHERIC ADMINISTRATION, DEPARTMENT OF COMMERCE GENERAL REGULATIONS OF THE ENVIRONMENTAL DATA SERVICE DEEP SEABED MINING REGULATIONS FOR COMMERCIAL RECOVERY PERMITS General § 971.100 Purpose. The...
47 CFR 4.1 - Scope, basis and purpose.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Scope, basis and purpose. 4.1 Section 4.1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL DISRUPTIONS TO COMMUNICATIONS General § 4.1 Scope, basis and purpose. In this part, the Federal Communications Commission is setting forth requirements...
47 CFR 4.1 - Scope, basis and purpose.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 47 Telecommunication 1 2012-10-01 2012-10-01 false Scope, basis and purpose. 4.1 Section 4.1 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL DISRUPTIONS TO COMMUNICATIONS General § 4.1 Scope, basis and purpose. In this part, the Federal Communications Commission is setting forth requirements...
42 CFR 456.241 - Purpose and general description.
Code of Federal Regulations, 2011 CFR
2011-10-01
... health care. (b) Medical care evaluation studies— (1) Emphasize identification and analysis of patterns... Ur Plan: Medical Care Evaluation Studies § 456.241 Purpose and general description. (a) The purpose of medical care evaluation studies is to promote the most effective and efficient use of available...
42 CFR 456.241 - Purpose and general description.
Code of Federal Regulations, 2010 CFR
2010-10-01
... health care. (b) Medical care evaluation studies— (1) Emphasize identification and analysis of patterns... Ur Plan: Medical Care Evaluation Studies § 456.241 Purpose and general description. (a) The purpose of medical care evaluation studies is to promote the most effective and efficient use of available...
Nonlinear and linear wave equations for propagation in media with frequency power law losses
NASA Astrophysics Data System (ADS)
Szabo, Thomas L.
2003-10-01
The Burgers, KZK, and Westervelt wave equations used for simulating wave propagation in nonlinear media are based on absorption that has a quadratic dependence on frequency. Unfortunately, most lossy media, such as tissue, follow a more general frequency power law. The authors' first research involved measurements of loss and dispersion associated with a modification to Blackstock's solution to the linear thermoviscous wave equation [J. Acoust. Soc. Am. 41, 1312 (1967)]. A second paper by Blackstock [J. Acoust. Soc. Am. 77, 2050 (1985)] showed the loss term in the Burgers equation for plane waves could be modified for other known instances of loss. The authors' work eventually led to comprehensive time-domain convolutional operators that accounted for both dispersion and general frequency power law absorption [Szabo, J. Acoust. Soc. Am. 96, 491 (1994)]. Versions of appropriate loss terms were developed to extend the standard three nonlinear wave equations to these more general losses. Extensive experimental data has verified the predicted phase velocity dispersion for different power exponents for the linear case. Other groups are now working on methods suitable for solving wave equations numerically for these types of loss directly in the time domain for both linear and nonlinear media.
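The effect of a frequency power-law loss α(f) = α₀|f|^y can be illustrated by filtering a pulse in the frequency domain. All parameter values below are invented for illustration, and the causal dispersion (phase) term that the convolutional operators supply is deliberately omitted to keep the sketch short.

```python
import numpy as np

# Apply exp(-alpha0 * |f|**y * z) to a propagating pulse in the frequency
# domain. alpha0, y_exp, z are made-up values; no dispersion term included.
fs = 10e6                          # sampling rate [Hz]
t = np.arange(0, 20e-6, 1 / fs)    # 200-sample record
pulse = np.exp(-((t - 5e-6) ** 2) / (0.5e-6) ** 2) * np.sin(2 * np.pi * 2e6 * t)

y_exp = 1.1                        # tissue-like power-law exponent
alpha0 = 5e-7                      # loss coefficient (units absorb f**y)
z = 0.05                           # propagation distance [m]

F = np.fft.rfft(pulse)
f = np.fft.rfftfreq(len(pulse), 1 / fs)
attenuated = np.fft.irfft(F * np.exp(-alpha0 * np.abs(f) ** y_exp * z),
                          n=len(pulse))

# higher-frequency content decays fastest, so the pulse loses amplitude
print(np.max(np.abs(attenuated)) < np.max(np.abs(pulse)))
```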
Parametrizing linear generalized Langevin dynamics from explicit molecular dynamics simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gottwald, Fabian; Karsten, Sven; Ivanov, Sergei D., E-mail: sergei.ivanov@uni-rostock.de
2015-06-28
Fundamental understanding of complex dynamics in many-particle systems on the atomistic level is of utmost importance. Often the systems of interest are of macroscopic size but can be partitioned into a few important degrees of freedom which are treated most accurately and others which constitute a thermal bath. In this respect, the linear generalized Langevin equation, which can be rigorously derived by means of a linear projection technique, attracts particular attention. Within this framework, a complicated interaction with the bath can be reduced to a single memory kernel. This memory kernel in turn is parametrized for the particular system studied, usually by means of time-domain methods based on explicit molecular dynamics data. Here, we discuss that this task is more naturally achieved in the frequency domain and develop a Fourier-based parametrization method that outperforms its time-domain analogues. Very surprisingly, the widely used rigid bond method turns out to be inappropriate in general. Importantly, we show that the rigid bond approach leads to a systematic overestimation of relaxation times, unless the system under study consists of a harmonic bath bi-linearly coupled to the relevant degrees of freedom.
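The frequency-domain idea can be illustrated on a synthetic correlation function. For the memory equation dC/dt = -∫₀ᵗ K(s) C(t-s) ds, the one-sided Fourier transform gives K̂(ω) = C(0)/Ĉ(ω) - iω, so the kernel spectrum can be read off directly from the correlation function's transform. This is a sketch of that relation, not the paper's actual parametrization scheme; for an exponential C(t) = exp(-γt) the exact kernel transform is the constant γ (a Markovian, delta-like kernel).

```python
import numpy as np

gamma = 2.0
dt = 1e-3
t = np.arange(0, 20.0, dt)
C = np.exp(-gamma * t)            # synthetic (e.g. velocity) autocorrelation

def one_sided_ft(signal, times, omega):
    """Trapezoidal approximation of the one-sided transform
    ∫_0^∞ exp(-i*omega*t) signal(t) dt."""
    vals = signal * np.exp(-1j * omega * times)
    dt_ = times[1] - times[0]
    return (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1]) * dt_

# K_hat(omega) = C(0)/C_hat(omega) - i*omega; here it should equal gamma
for omega in (0.0, 1.0, 5.0):
    K_hat = C[0] / one_sided_ft(C, t, omega) - 1j * omega
    print(omega, K_hat)           # each ~ gamma + 0j
```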
Brittle failure of rock: A review and general linear criterion
NASA Astrophysics Data System (ADS)
Labuz, Joseph F.; Zeng, Feitao; Makhnenko, Roman; Li, Yuan
2018-07-01
A failure criterion typically is phenomenological since few models exist to theoretically derive the mathematical function. Indeed, a successful failure criterion is a generalization of experimental data obtained from strength tests on specimens subjected to known stress states. For isotropic rock that exhibits a pressure dependence on strength, a popular failure criterion is a linear equation in major and minor principal stresses, independent of the intermediate principal stress. A general linear failure criterion called Paul-Mohr-Coulomb (PMC) contains all three principal stresses with three material constants: friction angles for axisymmetric compression ϕc and extension ϕe and isotropic tensile strength V0. PMC provides a framework to describe a nonlinear failure surface by a set of planes "hugging" the curved surface. Brittle failure of rock is reviewed and multiaxial test methods are summarized. Equations are presented to implement PMC for fitting strength data and determining the three material parameters. A piecewise linear approximation to a nonlinear failure surface is illustrated by fitting two planes with six material parameters to form either a 6- to 12-sided pyramid or a 6- to 12- to 6-sided pyramid. The particular nature of the failure surface is dictated by the experimental data.
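Since PMC is linear in the three principal stresses, the data-fitting step reduces to least squares on a plane A·σ₁ + B·σ₂ + C·σ₃ = 1. The sketch below uses synthetic strength data generated from an assumed plane; the mapping from (A, B, C) back to the friction angles ϕc, ϕe and the tensile strength V0 is not reproduced here.

```python
import numpy as np

# Fit a planar failure criterion A*s1 + B*s2 + C*s3 = 1 to strength data
# by least squares. The "measured" failure stresses are synthetic: pick
# s2, s3, solve the assumed plane for s1, then perturb with noise.
rng = np.random.default_rng(0)
A_true, B_true, C_true = 0.010, 0.004, 0.002

s2 = rng.uniform(0, 50, 40)
s3 = rng.uniform(0, 30, 40)
s1 = (1 - B_true * s2 - C_true * s3) / A_true + rng.normal(0, 1.0, 40)

S = np.column_stack([s1, s2, s3])            # one row per strength test
coef, *_ = np.linalg.lstsq(S, np.ones(len(s1)), rcond=None)
print(coef)   # ~ [0.010, 0.004, 0.002]
```

Fitting two such planes to different stress ranges gives the piecewise linear approximation of a curved failure surface described in the abstract.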
A general theory of linear cosmological perturbations: scalar-tensor and vector-tensor theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagos, Macarena; Baker, Tessa; Ferreira, Pedro G.
We present a method for parametrizing linear cosmological perturbations of theories of gravity, around homogeneous and isotropic backgrounds. The method is sufficiently general and systematic that it can be applied to theories with any degrees of freedom (DoFs) and arbitrary gauge symmetries. In this paper, we focus on scalar-tensor and vector-tensor theories, invariant under linear coordinate transformations. In the case of scalar-tensor theories, we use our framework to recover the simple parametrizations of linearized Horndeski and "Beyond Horndeski" theories, and also find higher-derivative corrections. In the case of vector-tensor theories, we first construct the most general quadratic action for perturbations that leads to second-order equations of motion, which propagates two scalar DoFs. Then we specialize to the case in which the vector field is time-like (à la Einstein-Aether gravity), where the theory only propagates one scalar DoF. As a result, we identify the complete forms of the quadratic actions for perturbations, and the number of free parameters that need to be defined, to cosmologically characterize these two broad classes of theories.
Variable Selection with Prior Information for Generalized Linear Models via the Prior LASSO Method.
Jiang, Yuan; He, Yunxiao; Zhang, Heping
LASSO is a popular statistical tool often used in conjunction with generalized linear models that can simultaneously select variables and estimate parameters. When there are many variables of interest, as in current biological and biomedical studies, the power of LASSO can be limited. Fortunately, a wealth of biological and biomedical data has already been collected, and it may contain useful information about the importance of certain variables. This paper proposes an extension of LASSO, namely, prior LASSO (pLASSO), to incorporate that prior information into penalized generalized linear models. The goal is achieved by adding to the LASSO criterion function an additional measure of the discrepancy between the prior information and the model. For linear regression, the whole solution path of the pLASSO estimator can be found with a procedure similar to the Least Angle Regression (LARS). Asymptotic theories and simulation results show that pLASSO provides significant improvement over LASSO when the prior information is relatively accurate. When the prior information is less reliable, pLASSO shows great robustness to the misspecification. We illustrate the application of pLASSO using a real data set from a genome-wide association study.
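A minimal sketch of the pLASSO idea for linear regression, solved by proximal gradient descent (ISTA). The discrepancy measure is assumed here to be a quadratic penalty pulling coefficients toward prior values; the paper's actual discrepancy measure may differ.

```python
import numpy as np

# Sketch objective (assumed form, for illustration):
#   (1/2n)||y - X b||^2 + (eta/2)||b - b_prior||^2 + lam * ||b||_1
# i.e. LASSO plus a quadratic discrepancy toward prior coefficient values.
def soft(v, thr):
    """Soft-thresholding operator (proximal map of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - thr, 0.0)

def plasso(X, y, b_prior, lam=0.1, eta=0.5, iters=2000):
    n, p = X.shape
    L = np.linalg.eigvalsh(X.T @ X / n).max() + eta   # Lipschitz constant
    step = 1.0 / L
    b = np.zeros(p)
    for _ in range(iters):
        grad = -X.T @ (y - X @ b) / n + eta * (b - b_prior)
        b = soft(b - step * grad, lam * step)
    return b

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
b_true = np.array([2.0, -1.5, 0, 0, 0, 0, 0, 0, 0, 0])
y = X @ b_true + rng.normal(0, 0.5, 200)
b_hat = plasso(X, y, b_prior=b_true)   # accurate prior information
print(np.round(b_hat, 2))              # close to b_true, zeros stay sparse
```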
Gpufit: An open-source toolkit for GPU-accelerated curve fitting.
Przybylski, Adrian; Thiel, Björn; Keller-Findeisen, Jan; Stock, Bernd; Bates, Mark
2017-11-16
We present a general purpose, open-source software library for estimation of non-linear parameters by the Levenberg-Marquardt algorithm. The software, Gpufit, runs on a Graphics Processing Unit (GPU) and executes computations in parallel, resulting in a significant gain in performance. We measured a speed increase of up to 42 times when comparing Gpufit with an identical CPU-based algorithm, with no loss of precision or accuracy. Gpufit is designed such that it is easily incorporated into existing applications or adapted for new ones. Multiple software interfaces, including bindings to C, Python, and Matlab, ensure that Gpufit is accessible from most programming environments. The full source code is published as an open source software repository, making its function transparent to the user and facilitating future improvements and extensions. As a demonstration, we used Gpufit to accelerate an existing scientific image analysis package, yielding significantly improved processing times for super-resolution fluorescence microscopy datasets.
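The algorithm Gpufit parallelizes can be sketched on the CPU with a bare-bones Levenberg-Marquardt loop, here for a 1-D Gaussian model. This is an illustration of the method, not Gpufit's API; model, data, and starting values are all synthetic.

```python
import numpy as np

# Levenberg-Marquardt for f(x) = a * exp(-(x - m)^2 / (2 s^2)).
def gaussian(x, p):
    a, m, s = p
    return a * np.exp(-((x - m) ** 2) / (2 * s ** 2))

def jacobian(x, p):
    """Partial derivatives of the model w.r.t. (a, m, s)."""
    a, m, s = p
    g = np.exp(-((x - m) ** 2) / (2 * s ** 2))
    return np.column_stack([g,
                            a * g * (x - m) / s ** 2,
                            a * g * (x - m) ** 2 / s ** 3])

def levmar(x, y, p, lam=1e-3, iters=100):
    for _ in range(iters):
        r = y - gaussian(x, p)
        J = jacobian(x, p)
        H = J.T @ J + lam * np.eye(len(p))   # damped normal equations
        step = np.linalg.solve(H, J.T @ r)
        if np.sum((y - gaussian(x, p + step)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.3     # accept: reduce damping
        else:
            lam *= 2.0                       # reject: increase damping
    return p

x = np.linspace(-5, 5, 101)
y = gaussian(x, [2.0, 0.5, 1.2])             # noise-free synthetic data
p_fit = levmar(x, y, np.array([1.0, 0.0, 1.0]))
print(p_fit)   # ~ [2.0, 0.5, 1.2]
```

Gpufit's speed-up comes from running many such independent fits (e.g. one per detected molecule) concurrently on the GPU.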
The use of the Space Shuttle for land remote sensing
NASA Technical Reports Server (NTRS)
Thome, P. G.
1982-01-01
The use of the Space Shuttle for land remote sensing will grow significantly during the 1980's. The main use will be for general land cover and geological mapping purposes by worldwide users employing specialized sensors such as high resolution film systems, synthetic aperture radars, and multispectral visible/IR electronic linear array scanners. Because these types of sensors have low Space Shuttle load factors, the users' preference will be for shared flights. With this strong preference, and given the present prognosis for Space Shuttle flight frequency as a function of orbit inclination, the strongest demand will be for 57 deg orbits. However, significant use will be made of lower inclination orbits. Compared with free-flying satellites, Space Shuttle mission investment requirements will be significantly lower. The use of the Space Shuttle for testing R and D land remote sensors will replace free-flying satellites for most test programs.
Calculation of Dose for Skyshine Radiation From a 45 MeV Electron LINAC
NASA Astrophysics Data System (ADS)
Hori, M.; Hikoji, M.; Takahashi, H.; Takahashi, K.; Kitaichi, M.; Sawamura, S.; Nojiri, I.
1996-11-01
Dose estimation for skyshine plays an important role in the evaluation of the environment around nuclear facilities. We performed calculations of the skyshine radiation from the Hokkaido University 45 MeV linear accelerator using a general purpose user's version of the EGS4 Monte Carlo code. To verify the accuracy of the code, the simulation results were compared with our experimental results, in which a gated counting method was used to measure low-level pulsed leakage radiation. In the experiment, measurements were carried out up to 600 m away from the LINAC. The simulation results are consistent with the experimental values at distances between 100 and 400 m from the LINAC. However, the agreement within 100 m of the LINAC is not as good because of the simplified geometrical modeling in the simulation. These results indicate that this version of the code is useful for skyshine calculations.
Salivary Cortisol and Cold Pain Sensitivity in Female Twins
Godfrey, Kathryn M; Strachan, Eric; Dansie, Elizabeth; Crofford, Leslie J; Buchwald, Dedra; Goldberg, Jack; Poeschla, Brian; Succop, Annemarie; Noonan, Carolyn; Afari, Niloofar
2013-01-01
Background There is a dearth of knowledge about the link between cortisol and pain sensitivity. Purpose We examined the association of salivary cortisol with indices of cold pain sensitivity in 198 female twins and explored the role of familial confounding. Methods Three-day saliva samples were collected for cortisol levels and a cold pressor test was used to collect pain ratings and time to threshold and tolerance. Linear regression modeling with generalized estimating equations examined the overall and within-pair associations. Results Lower diurnal variation of cortisol was associated with higher pain ratings at threshold (p = 0.02) and tolerance (p < 0.01). The relationship of diurnal variation with pain ratings at threshold and tolerance was minimally influenced by familial factors (i.e., genetics and common environment). Conclusions Understanding the genetic and non-genetic mechanisms underlying the link between HPA axis dysregulation and pain sensitivity may help to prevent chronic pain development and maintenance. PMID:23955075
DSP Implementation of the Retinex Image Enhancement Algorithm
NASA Technical Reports Server (NTRS)
Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn
2004-01-01
The Retinex is a general-purpose image enhancement algorithm that is used to produce good visual representations of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement and color constancy. A real-time, video frame rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing contains a relatively large number of complex computations; thus, achieving real-time performance with current technologies requires specialized hardware and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating point DSP. NTSC video is captured using a dedicated frame-grabber card, Retinex processed, and displayed on a standard monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and also describe our future plans for using alternative architectures.
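The simplest, single-scale form of the transform, R = log(I) - log(G ∗ I), where G ∗ I is a Gaussian-blurred (surround) estimate of the illumination, can be sketched as follows. The flight algorithm referred to above is a multiscale version with color restoration; the parameters and test image here are arbitrary.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1-D Gaussian kernel of half-width `radius`."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur: 1-D convolutions along rows, then columns."""
    k = gaussian_kernel(sigma, int(3 * sigma))
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def retinex(img, sigma=5.0, eps=1.0):
    """Single-scale retinex: log image minus log of its blurred surround."""
    return np.log(img + eps) - np.log(blur(img, sigma) + eps)

# synthetic image: dim and bright halves carrying the same step pattern
img = np.tile(np.where(np.arange(64) % 8 < 4, 40.0, 60.0), (64, 1))
img[:, 32:] *= 4                     # bright half
out = retinex(img)
print(out.shape)
```

Because the output is a log ratio against the local surround, the step pattern ends up with similar contrast in the dim and bright halves, which is the local contrast enhancement and lightness constancy the abstract describes.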
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weiland, T.; Bartsch, M.; Becker, U.
1997-02-01
MAFIA Version 4.0 is an almost completely new version of the general purpose electromagnetic simulator known for 13 years. The major improvements concern the new graphical user interface based on state-of-the-art technology as well as a series of new solvers for new physics problems. MAFIA now covers heat distribution, electro-quasistatics, S-parameters in frequency domain, particle beam tracking in linear accelerators, acoustics and even elastodynamics. The solvers that were available in earlier versions have also been improved and/or extended, as for example the complex eigenmode solver and the 2D-3D coupled PIC solvers. Time domain solvers have new waveguide boundary conditions with an extremely low reflection even near cutoff frequency, concentrated elements are available as well as a variety of signal processing options. Probably the most valuable additions are the recursive sub-grid capabilities that enable modeling of very small details in large structures.