Design sensitivity analysis of nonlinear structural response
NASA Technical Reports Server (NTRS)
Cardoso, J. B.; Arora, J. S.
1987-01-01
A unified theory of design sensitivity analysis of linear and nonlinear structures for shape, nonshape and material selection problems is described. The concepts of reference volume and adjoint structure are used to develop the unified viewpoint. A general formula for design sensitivity analysis is derived. Simple analytical linear and nonlinear examples are used to interpret various terms of the formula and demonstrate its use.
Hu, Shiang; Yao, Dezhong; Valdes-Sosa, Pedro A
2018-01-01
The choice of reference for the electroencephalogram (EEG) is a long-standing unsolved issue resulting in inconsistent usage and endless debate. Currently, the average reference (AR) and the reference electrode standardization technique (REST) are the two primary, apparently irreconcilable contenders. We propose a theoretical framework to resolve this reference issue by formulating both (a) estimation of potentials at infinity, and (b) determination of the reference, as a unified Bayesian linear inverse problem, which can be solved by maximum a posteriori estimation. We find that AR and REST are very particular cases of this unified framework: AR results from a biophysically non-informative prior, while REST utilizes a prior based on the EEG generative model. To allow for simultaneous denoising and reference estimation, we develop regularized versions of AR and REST, named rAR and rREST, respectively. Both depend on a regularization parameter that is the noise-to-signal variance ratio. Traditional and new estimators are evaluated within this framework, by both simulations and analysis of real resting EEGs. Toward this end, we leverage the MRI and EEG data of 89 subjects who participated in the Cuban Human Brain Mapping Project. Artificial EEGs generated with a known ground truth show that the relative error in estimating the EEG potentials at infinity is lowest for rREST. They also reveal that realistic volume conductor models improve the performance of REST and rREST. Importantly, for practical applications, it is shown that an average lead field gives results comparable to those of the individual lead field. Finally, it is shown that selecting the regularization parameter with Generalized Cross-Validation (GCV) is close to the "oracle" choice based on the ground truth. When evaluated on the real 89 resting-state EEGs, rREST consistently yields the lowest GCV. This study provides a novel perspective on the EEG reference problem by means of a unified inverse solution framework. It may allow additional principled theoretical formulations and numerical evaluations of performance. PMID:29780302
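A toy illustration of the inverse-problem view taken in this abstract: the sketch below treats the recorded EEG as reference-free potentials seen through a centering operator and recovers them by ridge regression, with the regularization weight standing in for the noise-to-signal variance ratio. The operator, data, and lam value are illustrative assumptions; the paper's actual rAR/rREST estimators involve lead-field-based priors and differ in detail.

```python
import numpy as np

def average_reference(v):
    """Re-reference EEG data (channels x samples) to the average reference."""
    return v - v.mean(axis=0, keepdims=True)

def regularized_reference(v, lam):
    """Ridge-style estimate of reference-free potentials (toy sketch only).

    Assumed model: the recorded data v are the true potentials seen through
    the centering operator H = I - (1/n) 11^T, and lam plays the role of the
    noise-to-signal variance ratio.
    """
    n = v.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ v)

v = np.random.default_rng(0).normal(size=(32, 1000))   # fake 32-channel EEG
print(average_reference(v).mean(axis=0).max())          # ~0: channel mean removed
print(regularized_reference(v, lam=0.1).shape)
```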
A methodology for design of a linear referencing system for surface transportation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vonderohe, A.; Hepworth, T.
1997-06-01
The transportation community has recently placed significant emphasis on development of data models, procedural standards, and policies for management of linearly-referenced data. There is an Intelligent Transportation Systems initiative underway to create a spatial datum for location referencing in one, two, and three dimensions. Most recently, a call was made for development of a unified linear reference system to support public, private, and military surface transportation needs. A methodology for design of the linear referencing system was developed from geodetic engineering principles and techniques used for designing geodetic control networks. The method is founded upon the law of propagation of random error and the statistical analysis of systems of redundant measurements, used to produce best estimates for unknown parameters. A complete mathematical development is provided. Example adjustments of linear distance measurement systems are included. The classical orders of design are discussed with regard to the linear referencing system. A simple design example is provided. A linear referencing system designed and analyzed with this method will not only be assured of meeting the accuracy requirements of users, it will have the potential for supporting delivery of error estimates along with the results of spatial analytical queries. Modeling considerations, alternative measurement methods, implementation strategies, maintenance issues, and further research needs are discussed. Recommendations are made for further advancement of the unified linear referencing system concept.
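To make the adjustment machinery concrete, here is a minimal sketch of a linear parametric least-squares adjustment of redundant distance measurements along a route, including the error propagation that yields standard deviations for the estimated positions. All numbers (observations, a priori standard deviations, network layout) are illustrative assumptions, not values from the report.

```python
import numpy as np

# Toy linear parametric adjustment: three redundant distance observations
# along a route with two unknown point positions x1, x2 (datum fixed at 0).
A = np.array([[1.0, 0.0],    # observes x1 - 0
              [-1.0, 1.0],   # observes x2 - x1
              [0.0, 1.0]])   # observes x2 - 0
l = np.array([100.02, 49.97, 150.03])    # observed distances (m)
sigma = np.array([0.02, 0.02, 0.03])     # a priori standard deviations (m)
W = np.diag(1.0 / sigma**2)              # weight matrix

N = A.T @ W @ A                          # normal equations
x_hat = np.linalg.solve(N, A.T @ W @ l)  # best estimates of x1, x2
Qx = np.linalg.inv(N)                    # propagated error (cofactor) matrix
print("estimates (m):", x_hat.round(3))
print("std devs (m): ", np.sqrt(np.diag(Qx)).round(4))
```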
A numerical technique for linear elliptic partial differential equations in polygonal domains.
Hashemzadeh, P; Fokas, A S; Smitheman, S A
2015-03-08
Integral representations for the solution of linear elliptic partial differential equations (PDEs) can be obtained using Green's theorem. However, these representations involve both the Dirichlet and the Neumann values on the boundary, and for a well-posed boundary-value problem (BVP) one of these functions is unknown. A new transform method for solving BVPs for linear and integrable nonlinear PDEs, usually referred to as the unified transform (or the Fokas transform), was introduced by the second author in the late 1990s. For linear elliptic PDEs, this method can be considered as the analogue of the Green's function approach, but now formulated in the complex Fourier plane instead of the physical plane. It employs two global relations, also formulated in the Fourier plane, which couple the Dirichlet and the Neumann boundary values. These relations can be used to characterize the unknown boundary values in terms of the given boundary data, yielding an elegant approach for determining the Dirichlet-to-Neumann map. The numerical implementation of the unified transform can be considered as the counterpart in the Fourier plane of the well-known boundary integral method, which is formulated in the physical plane. For this implementation, one must choose (i) a suitable basis for expanding the unknown functions and (ii) an appropriate set of complex values, which we refer to as collocation points, at which to evaluate the global relations. Here, by employing a variety of examples we present simple guidelines for how the above choices can be made. Furthermore, we provide concrete rules for choosing the collocation points so that the condition number of the matrix of the associated linear system remains low.
Trees, B-series and G-symplectic methods
NASA Astrophysics Data System (ADS)
Butcher, J. C.
2017-07-01
The order conditions for Runge-Kutta methods are intimately connected with the graphs known as rooted trees. The conditions can be expressed in terms of Taylor expansions written as weighted sums of elementary differentials, that is as B-series. Polish notation provides a unifying structure for representing many of the quantities appearing in this theory. Applications include the analysis of general linear methods with special reference to G-symplectic methods. A new order 6 method has recently been constructed.
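As a concrete instance of the tree-indexed order conditions mentioned in this abstract, the sketch below checks the eight rooted-tree conditions through order 4 for the classical Runge-Kutta tableau. The bracket labels are an informal rendering of the trees, not Butcher's exact notation.

```python
import numpy as np

# Classical RK4 Butcher tableau: each rooted tree of order p contributes one
# algebraic condition on (A, b, c); the method has order 4 iff all conditions
# up to p = 4 hold.
A = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
b = np.array([1.0, 2.0, 2.0, 1.0]) / 6.0
c = A.sum(axis=1)

conditions = {
    "tau (order 1)":          b.sum() - 1.0,
    "[tau] (order 2)":        b @ c - 1.0 / 2,
    "[tau,tau] (order 3)":    b @ c**2 - 1.0 / 3,
    "[[tau]] (order 3)":      b @ (A @ c) - 1.0 / 6,
    "[tau,tau,tau] (4)":      b @ c**3 - 1.0 / 4,
    "[[tau],tau] (4)":        b @ (c * (A @ c)) - 1.0 / 8,
    "[[tau,tau]] (4)":        b @ (A @ c**2) - 1.0 / 12,
    "[[[tau]]] (4)":          b @ (A @ (A @ c)) - 1.0 / 24,
}
for tree, residual in conditions.items():
    print(f"{tree}: residual = {residual:.2e}")   # all ~0 for RK4
```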
The Unified Levelling Network of Sarawak and its Adjustment
NASA Astrophysics Data System (ADS)
Som, Z. A. M.; Yazid, A. M.; Ming, T. K.; Yazid, N. M.
2016-09-01
The height reference network of Sarawak has seen major improvement over the past two decades. The most significant improvement was the establishment of an extended precise levelling network, which is now able to connect all three major datum points (Pulau Lakei, Original Miri and Bintulu Datum) by following the major accessible routes across Sarawak. This means the levelling network in Sarawak has now been inter-connected and unified. Having such a unified network makes it possible, for the first time, to perform a common single least squares adjustment. The least squares adjustment of this unified levelling network was attempted in order to compute the heights of all Bench Marks established in the entire levelling network. The adjustment was done using the MoreFix levelling adjustment package developed at FGHT UTM. The computational procedure adopted is linear parametric adjustment by minimum constraint. Since Sarawak has three separate datums, three separate adjustments were implemented by utilizing the Pulau Lakei, Original Miri and Bintulu Datums respectively. Results of the MoreFix unified adjustment agreed very well with the adjustment repeated using Starnet. Further, the results were compared with the solution given by Jupem, and they are in good agreement as well. The height differences analysed were within 10 mm for the case of minimum constraint at the Pulau Lakei datum, with much better agreement in the case of the Original Miri Datum.
Unified implementation of the reference architecture : concept of operations.
DOT National Transportation Integrated Search
2015-10-19
This document describes the Concept of Operations (ConOps) for the Unified Implementation of the Reference Architecture, located in Southeast Michigan, which supports connected vehicle research and development. This ConOps describes the current state...
Physics of Alfvén waves and energetic particles in burning plasmas
NASA Astrophysics Data System (ADS)
Chen, Liu; Zonca, Fulvio
2016-01-01
Dynamics of shear Alfvén waves and energetic particles are crucial to the performance of burning fusion plasmas. This article reviews the linear as well as nonlinear physics of shear Alfvén waves and their self-consistent interaction with energetic particles in tokamak fusion devices. More specifically, the review of the linear physics deals with wave spectral properties and collective excitations by energetic particles via wave-particle resonances. The nonlinear physics deals with nonlinear wave-wave interactions as well as nonlinear wave-energetic particle interactions. Both the linear and the nonlinear physics demonstrate the qualitatively important roles played by realistic equilibrium nonuniformities, magnetic field geometries, and the specific radial mode structures in determining the instability evolution, saturation, and, ultimately, energetic-particle transport. These topics are presented within a single unified theoretical framework, where experimental observations and numerical simulation results are referred to in order to elucidate concepts and physics processes.
Rapid iterative reanalysis for automated design
NASA Technical Reports Server (NTRS)
Bhatia, K. G.
1973-01-01
A method for iterative reanalysis in automated structural design is presented for a finite-element analysis using the direct stiffness approach. A basic feature of the method is that the generalized stiffness and inertia matrices are expressed as functions of structural design parameters, and these generalized matrices are expanded in Taylor series about the initial design. Only the linear terms are retained in the expansions. The method is approximate because it uses static condensation, modal reduction, and the linear Taylor series expansions. The exact linear representation of the expansions of the generalized matrices is also described, and a basis for the present method is established. Results of applications of the present method to the recalculation of the natural frequencies of two simple platelike structural models are presented and compared with results obtained by using a commonly applied analysis procedure, which served as the reference method. In general, the results are in good agreement. A comparison of the computer times required by the present method and the reference method indicated that the present method required substantially less time for reanalysis. Although the results presented are for relatively small-order problems, the present method will become more efficient relative to the reference method as the problem size increases. An extension of the present method to static reanalysis is described, and a basis for unifying the static and dynamic reanalysis procedures is presented.
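The following sketch illustrates the core idea of this abstract on a two-degree-of-freedom toy model: expand the stiffness matrix in a linear Taylor series about the initial design and reuse the expansion to re-estimate natural frequencies for a perturbed design. The spring-chain model, coefficients, and perturbation are invented for illustration; the report's method additionally uses static condensation and modal reduction.

```python
import numpy as np

C = np.array([1.0e3, 0.5e3])   # illustrative stiffness coefficients

def stiffness(t):
    """2-DOF spring chain whose spring rates vary as k_i = C_i * t_i**3."""
    k1, k2 = C * t**3
    return np.array([[k1 + k2, -k2], [-k2, k2]])

t0 = np.array([1.0, 1.0])      # initial design
K0 = stiffness(t0)
# Design derivatives dK/dt_i at t0, using dk_i/dt_i = 3 * C_i * t_i**2.
dK1 = 3 * C[0] * t0[0]**2 * np.array([[1.0, 0.0], [0.0, 0.0]])
dK2 = 3 * C[1] * t0[1]**2 * np.array([[1.0, -1.0], [-1.0, 1.0]])

def reanalysis_frequencies(dt):
    """Keep only the linear Taylor terms of K about the initial design."""
    K_lin = K0 + dK1 * dt[0] + dK2 * dt[1]
    return np.sqrt(np.linalg.eigvalsh(K_lin))   # rad/s (unit masses assumed)

dt = np.array([0.10, -0.05])   # moderate design change
print("approx:", reanalysis_frequencies(dt).round(2))
print("exact :", np.sqrt(np.linalg.eigvalsh(stiffness(t0 + dt))).round(2))
```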
Control of Distributed Parameter Systems
1990-08-01
variant of the general Lotka-Volterra model for interspecific competition. The variant described the emergence of one subpopulation from another as a... A unified approximation framework for parameter estimation in general linear PDE models has been completed. This framework has provided the theoretical basis for a number of
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-30
... California State Implementation Plan Revisions, Monterey Bay Unified Air Pollution Control District AGENCY... to the Monterey Bay Unified Air Pollution Control District (MBUAPCD) portion of the California State... Environmental protection, Air pollution control, Incorporation by reference, Intergovernmental relations...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-16
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... revisions to the San Joaquin Valley Unified Air Pollution Control District (SJVUAPCD) portion of the... CFR Part 52 Environmental protection, Air pollution control, Incorporation by reference...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-10
... the California State Implementation Plan, Monterey Bay Unified Air Pollution Control District AGENCY... approve revisions to the Monterey Bay Unified Air Pollution Control District (MBUAPCD) portion of the... Part 52 Environmental protection, Air pollution control, Incorporation by reference, Intergovernmental...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-06
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... revisions to the San Joaquin Valley Unified Air Pollution Control District (SJVUAPCD) portion of the... pollution control, Incorporation by reference, Intergovernmental relations, Nitrogen dioxide, Ozone...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-31
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... revisions to the San Joaquin Valley Unified Air Pollution Control District (SJVUAPCD) portion of the... protection, Air pollution control, Incorporation by reference, Intergovernmental relations, Nitrogen dioxide...
NASA Astrophysics Data System (ADS)
Wu, Xiaoping; Abbondanza, Claudio; Altamimi, Zuheir; Chin, T. Mike; Collilieux, Xavier; Gross, Richard S.; Heflin, Michael B.; Jiang, Yan; Parker, Jay W.
2015-05-01
The current International Terrestrial Reference Frame is based on a piecewise linear site motion model and realized by reference epoch coordinates and velocities for a global set of stations. Although linear motions due to tectonic plates and glacial isostatic adjustment dominate geodetic signals, at today's millimeter precisions, nonlinear motions due to earthquakes, volcanic activities, ice mass losses, sea level rise, hydrological changes, and other processes become significant. Monitoring these (sometimes rapid) changes demands consistent and precise realization of the terrestrial reference frame (TRF) quasi-instantaneously. Here, we use a Kalman filter and smoother approach to combine time series from four space geodetic techniques to realize an experimental TRF through weekly time series of geocentric coordinates. In addition to secular, periodic, and stochastic components for station coordinates, the Kalman filter state variables also include daily Earth orientation parameters and transformation parameters from input data frames to the combined TRF. Local tie measurements among colocated stations are used at their known or nominal epochs of observation, with comotion constraints applied to almost all colocated stations. The filter/smoother approach unifies different geodetic time series in a single geocentric frame. Fragmented and multitechnique tracking records at colocation sites are bridged together to form longer and coherent motion time series. While the time series approach to TRF reflects the reality of a changing Earth more closely than the linear approximation model, the filter/smoother is computationally powerful and flexible to facilitate incorporation of other data types and more advanced characterization of stochastic behavior of geodetic time series.
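A minimal sketch of the filtering idea in this abstract: a Kalman filter whose state holds one station coordinate and its velocity, run over a synthetic weekly time series so that the secular trend and departures from it are estimated jointly. The process and measurement noise levels, time step, and synthetic trend are assumed values; the actual combination involves four techniques, Earth orientation parameters, and frame transformation parameters.

```python
import numpy as np

dt, q, r = 7.0, 1e-6, 4.0                 # days per step, noise levels (assumed)
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity state transition
Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                  [dt**2 / 2, dt]])       # random-walk-in-velocity process noise
H = np.array([[1.0, 0.0]])                # only the coordinate is observed

x, P = np.zeros(2), np.eye(2) * 1e4       # diffuse initial state
rng = np.random.default_rng(0)
for week in range(520):                   # ten years of weekly solutions
    z = 3.0 * (week * dt) / 365.25 + rng.normal(0.0, 2.0)  # mm; 3 mm/yr trend
    x, P = F @ x, F @ P @ F.T + Q                          # predict
    S = H @ P @ H.T + r
    K = P @ H.T / S                                        # Kalman gain
    x = x + (K * (z - H @ x)).ravel()                      # update state
    P = (np.eye(2) - K @ H) @ P                            # update covariance
print(f"estimated velocity: {x[1] * 365.25:.2f} mm/yr (truth 3.00)")
```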
Stability and Performance Metrics for Adaptive Flight Control
NASA Technical Reports Server (NTRS)
Stepanyan, Vahram; Krishnakumar, Kalmanje; Nguyen, Nhan; VanEykeren, Luarens
2009-01-01
This paper addresses the problem of verifying adaptive control techniques for enabling safe flight in the presence of adverse conditions. Since adaptive systems are non-linear by design, the existing control verification metrics are not applicable to adaptive controllers. Moreover, these systems are in general highly uncertain. Hence, the system's characteristics cannot be evaluated by relying on the available dynamical models. This necessitates the development of control verification metrics based on the system's input-output information. From this point of view, a set of metrics is introduced that compares the uncertain aircraft's input-output behavior under the action of an adaptive controller to that of a closed-loop linear reference model to be followed by the aircraft. This reference model is constructed for each specific maneuver using the exact aerodynamic and mass properties of the aircraft to meet the stability and performance requirements commonly accepted in flight control. The proposed metrics are unified in the sense that they are model independent and not restricted to any specific adaptive control method. As an example, we present simulation results for a wing-damaged generic transport aircraft with several existing adaptive controllers.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... Determination To Stay and Defer Sanctions, San Joaquin Valley Unified Air Pollution Control District AGENCY... on a proposed approval of revisions to the San Joaquin Valley Unified Air Pollution Control District... Part 52 Environmental protection, Air pollution control, Incorporation by reference, Intergovernmental...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-05
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... approve revisions to the San Joaquin Valley Unified Air Pollution Control District (SJVUAPCD) portion of... 1994. 11. ``Integrated Pollution Prevention and Control (IPPC) Reference Document on Best Available...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-12
... Determination to Stay and Defer Sanctions, San Joaquin Valley Unified Air Pollution Control District AGENCY... on a proposed approval of revisions to the San Joaquin Valley Unified Air Pollution Control District... Part 52 Environmental protection, Air pollution control, Incorporation by reference, Intergovernmental...
libdrdc: software standards library
NASA Astrophysics Data System (ADS)
Erickson, David; Peng, Tie
2008-04-01
This paper presents the libdrdc software standards library, including internal nomenclature, definitions, units of measure, coordinate reference frames, and representations for use in autonomous systems research. This library is a configurable, portable C-function wrapped C++ / Object Oriented C library developed to be independent of software middleware, system architecture, processor, or operating system. It is designed to use the Automatically Tuned Linear Algebra Software (ATLAS) and Basic Linear Algebra Subprograms (BLAS) and to port to firmware and software. The library goal is to unify data collection and representation for various microcontrollers and Central Processing Unit (CPU) cores and to provide a common Application Binary Interface (ABI) for research projects at all scales. The library supports multi-platform development and currently works on Windows, Unix, GNU/Linux, and Real-Time Executive for Multiprocessor Systems (RTEMS). This library is made available under the LGPL version 2.1 license.
Unified Framework for Deriving Simultaneous Equation Algorithms for Water Distribution Networks
The known formulations for steady state hydraulics within looped water distribution networks are re-derived in terms of linear and non-linear transformations of the original set of partly linear and partly non-linear equations that express conservation of mass and energy. All of ...
Adaptive feedback synchronization of a unified chaotic system
NASA Astrophysics Data System (ADS)
Lu, Junan; Wu, Xiaoqun; Han, Xiuping; Lü, Jinhu
2004-08-01
This Letter further improves and extends the work of Wang et al. [Phys. Lett. A 312 (2003) 34]. In detail, linear feedback synchronization and adaptive feedback synchronization with only one controller are discussed here for a unified chaotic system. This unified system contains the well-known Lorenz and Chen systems as special cases. Two chaotic synchronization theorems are established. Numerical simulations are also given to show the effectiveness of these methods.
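For intuition, the sketch below integrates two copies of the unified chaotic system (whose parameter alpha interpolates between the Lorenz and Chen systems) and couples them through a single linear feedback term on the second state. The gain, initial conditions, and forward-Euler settings are heuristic choices for this sketch, not the Letter's derived conditions; with these values the synchronization error decays numerically.

```python
import numpy as np

def unified(s, alpha):
    """Unified chaotic system (alpha=0: Lorenz; alpha=1: Chen)."""
    x, y, z = s
    return np.array([(25 * alpha + 10) * (y - x),
                     (28 - 35 * alpha) * x - x * z + (29 * alpha - 1) * y,
                     x * y - (8 + alpha) / 3 * z])

alpha, k, dt = 0.5, 20.0, 1e-3        # heuristic gain and step size
d = np.array([1.0, 1.0, 1.0])         # drive system state
r = np.array([-3.0, 2.0, 7.0])        # response system state
for _ in range(int(40 / dt)):
    fd, fr = unified(d, alpha), unified(r, alpha)
    fr[1] -= k * (r[1] - d[1])        # single linear feedback controller on y
    d, r = d + dt * fd, r + dt * fr   # forward-Euler step
print("synchronization error:", np.linalg.norm(r - d))
```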
Relevance, textual unity, and politeness in writing about science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kreml, N.M.P.
1992-01-01
The question of whether there are social implications of linguistic choices in unifying a text is investigated empirically by this study, which accounts for the interpretation of implicatures in conversation and written texts. It considers Relevance Theory (Sperber and Wilson 1988, Blakemore 1987, Blass 1990) to be the explanation of the unity of the text, as opposed to semantic theories of cohesion (Halliday and Hasan 1976) or pragmatic theories of coherence (van Dijk 1977). This study presents a model of three types of textual unifiers: overt (referring specifically to the text), embedded (referring to intra- and extra-textual information), and inference (not referring to the text at all). It hypothesizes that different genres are characterized by the predominance of different types of textual unifiers, and that readers will prefer those texts that rely on inferential unifiers which emphasize the reader's ability to participate in creating the meaning of the text. Eighteen texts of 275 words each are selected from three genres: scientific magazines, introductory science textbooks, and essays on science. The texts are found to vary significantly by genre in the type of textual unifier used. An Overtness Index expresses the ratio of the marked forms: science textbooks have more Overt unifiers (such as connective phrases) and thus a high Overtness Index; essays rely more on Inference unifiers (not represented by words) and thus have a low Overtness Index. The texts are submitted to 188 readers, and a significantly high number of all types of readers prefer the texts with the lower Overtness Indices, the essays. Thus a low Overtness Index is one feature of texts preferred by readers, supporting the hypotheses that genres of texts vary in the type of unifier used and that readers prefer texts that allow them to participate in constructing the meaning of the text.
NASA Astrophysics Data System (ADS)
Kota, Venkata Reddy; Vinnakoti, Sudheer
2017-12-01
Maintaining power quality (PQ) is very important in today's increasingly competitive world. With new equipment and devices, new challenges are also being put before power system operators. The Unified Power Quality Conditioner (UPQC) is proposed to mitigate many power quality problems and to improve the performance of the power system. In this paper, a UPQC with a fuzzy logic controller for capacitor voltage balancing is proposed, using Synchronous Reference Frame (SRF) based control with a Modified Phase Locked Loop (MPLL). The proposed controller with SRF-MPLL based control is tested under non-linear and unbalanced load conditions. The system is developed in Matlab/Simulink and its performance is analyzed under various conditions such as non-linear loads, unbalanced loads and polluted supply voltage, including voltage sags/swells. Active and reactive power flow in the system, power factor and %THD of voltages and currents before and after compensation are also analyzed in this work. Results prove the applicability of the proposed scheme for power quality improvement. It is observed that the fuzzy controller gives better performance than the PI controller, with faster capacitor voltage balancing, and also improves the dynamic performance of the system.
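To illustrate the SRF step this abstract relies on, the sketch below applies a Park (abc to dq) transform at the PLL angle to a three-phase current containing a fifth harmonic: the fundamental becomes a DC value on one axis, while the harmonic becomes ripple that a low-pass filter can remove when forming reference currents. The waveform, harmonic content, and the amplitude-invariant 2/3 convention are illustrative assumptions.

```python
import numpy as np

def park(ia, ib, ic, theta):
    """abc -> dq transform (amplitude-invariant 2/3 convention)."""
    d = 2 / 3 * (ia * np.cos(theta)
                 + ib * np.cos(theta - 2 * np.pi / 3)
                 + ic * np.cos(theta + 2 * np.pi / 3))
    q = -2 / 3 * (ia * np.sin(theta)
                  + ib * np.sin(theta - 2 * np.pi / 3)
                  + ic * np.sin(theta + 2 * np.pi / 3))
    return d, q

t = np.linspace(0.0, 0.1, 5000)
w = 2 * np.pi * 50                         # 50 Hz fundamental
theta = w * t                              # rotation angle from the (M)PLL
phases = (0.0, -2 * np.pi / 3, 2 * np.pi / 3)
ia, ib, ic = (np.sin(w * t + p) + 0.2 * np.sin(5 * (w * t + p)) for p in phases)
d, q = park(ia, ib, ic, theta)
# The fundamental maps to a DC value on the q axis in this convention;
# the 5th harmonic appears as ripple to be filtered out of the references.
print(f"mean q: {q.mean():.3f}, ripple std: {q.std():.3f}")
```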
A Unified Mathematical Definition of Classical Information Retrieval.
ERIC Educational Resources Information Center
Dominich, Sandor
2000-01-01
Presents a unified mathematical definition for the classical models of information retrieval and identifies a mathematical structure behind relevance feedback. Highlights include vector information retrieval; probabilistic information retrieval; and similarity information retrieval. (Contains 118 references.) (Author/LRW)
A Unified Introduction to Ordinary Differential Equations
ERIC Educational Resources Information Center
Lutzer, Carl V.
2006-01-01
This article describes how a presentation from the point of view of differential operators can be used to (partially) unify the myriad techniques in an introductory course in ordinary differential equations by providing students with a powerful, flexible paradigm that extends into (or from) linear algebra. (Contains 1 footnote.)
Study on sampling of continuous linear system based on generalized Fourier transform
NASA Astrophysics Data System (ADS)
Li, Huiguang
2003-09-01
In the study of signals and systems, a signal's spectrum and a system's frequency characteristic can be discussed through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals such as the impulse function and the signum signal do not satisfy Riemann integration or Lebesgue integration; in mathematics they are called generalized functions. This paper introduces a new definition, the Generalized Fourier Transform (GFT), and discusses generalized functions, the Fourier Transform and the Laplace Transform under a unified frame. When a continuous linear system is sampled, this paper proposes a new method to judge whether the spectrum will overlap after the generalized Fourier transform (GFT). Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results can be used for ordinary sampling and non-Nyquist sampling. The results also have practical significance for research on the discretization of continuous linear systems and on non-Nyquist sampling of signals and systems. In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is an application example of the results of this paper.
NASA Technical Reports Server (NTRS)
McGowan, David M.; Anderson, Melvin S.
1998-01-01
The analytical formulation of curved-plate non-linear equilibrium equations that include transverse-shear-deformation effects is presented. A unified set of non-linear strains that contains terms from both physical and tensorial strain measures is used. Using several simplifying assumptions, linearized stability equations are derived that describe the response of the plate just after bifurcation buckling occurs. These equations are then modified to allow the plate reference surface to be located at a distance z_c from the centroid surface, which is convenient for modeling stiffened-plate assemblies. The implementation of the new theory into the VICONOPT buckling and vibration analysis and optimum design program is described. Either classical plate theory (CPT) or first-order shear-deformation plate theory (SDPT) may be selected in VICONOPT. Comparisons of numerical results for several example problems with different loading states are made. Results from the new curved-plate analysis compare well with closed-form solution results and with results from known example problems in the literature. Finally, a design-optimization study of two different cylindrical shells subject to uniform axial compression is presented.
ERIC Educational Resources Information Center
Zandieh, Michelle; Ellis, Jessica; Rasmussen, Chris
2017-01-01
As part of a larger study of student understanding of concepts in linear algebra, we interviewed 10 university linear algebra students as to their conceptions of functions from high school algebra and linear transformation from their study of linear algebra. An overarching goal of this study was to examine how linear algebra students see linear…
A Unified Analysis of Japanese Aspect Marker "te iru."
ERIC Educational Resources Information Center
Shinzato, Rumiko
1993-01-01
Following Jacobson's 1990 work, this study is another attempt to offer a unified analysis of the Japanese aspect marker "te iru" that touches upon Gestalt psychologists' ideas of figure/ground opposition, Langacker's cognitive grammar, and Kunihiro's cognitive analysis. (Contains 34 references.) (LB)
Kernel-imbedded Gaussian processes for disease classification using microarray gene expression data
Zhao, Xin; Cheung, Leo Wang-Kit
2007-01-01
Background: Designing appropriate machine learning methods for identifying genes that have significant discriminating power for disease outcomes has become more and more important for our understanding of diseases at the genomic level. Although many machine learning methods have been developed and applied to the area of microarray gene expression data analysis, the majority of them are based on linear models, which are not necessarily appropriate for the underlying connection between the target disease and its associated explanatory genes. Linear model based methods also tend to bring in false positive significant features more easily. Furthermore, linear model based algorithms often involve calculating the inverse of a matrix that is possibly singular when the number of potentially important genes is relatively large, leading to problems of numerical instability. To overcome these limitations, a few non-linear methods have recently been introduced to the area. Many of the existing non-linear methods have a couple of critical problems, the model selection problem and the model parameter tuning problem, that remain unsolved or even untouched. In general, a unified framework that allows model parameters of both linear and non-linear models to be easily tuned is always preferred in real-world applications. Kernel-induced learning methods form a class of approaches that show promising potential to achieve this goal. Results: A hierarchical statistical model named kernel-imbedded Gaussian process (KIGP) is developed under a unified Bayesian framework for binary disease classification problems using microarray gene expression data. In particular, based on a probit regression setting, an adaptive algorithm with a cascading structure is designed to find the appropriate kernel, to discover the potentially significant genes, and to make the optimal class prediction accordingly. A Gibbs sampler is built as the core of the algorithm to make Bayesian inferences. Simulation studies showed that, even without any knowledge of the underlying generative model, the KIGP performed very close to the theoretical Bayesian bound not only in the case with a linear Bayesian classifier but also in the case with a very non-linear Bayesian classifier. This sheds light on its broader usability for microarray data analysis problems, especially those for which linear methods work poorly. The KIGP was also applied to four published microarray datasets, and the results showed that the KIGP performed better than, or at least as well as, the referenced state-of-the-art methods in all of these cases. Conclusion: Mathematically built on the kernel-induced feature space concept under a Bayesian framework, the KIGP method presented in this paper provides a unified machine learning approach to explore both the linear and the possibly non-linear underlying relationship between the target features of a given binary disease classification problem and the related explanatory gene expression data. More importantly, it incorporates model parameter tuning into the framework. The model selection problem is addressed in the form of selecting a proper kernel type. The KIGP method also gives Bayesian probabilistic predictions for disease classification. These properties and features are beneficial to most real-world applications. The algorithm is naturally robust in numerical computation. The simulation studies and the published data studies demonstrated that the proposed KIGP performs satisfactorily and consistently.
PMID:17328811
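The sketch below conveys, in miniature, why a kernel-induced feature space unifies the linear and non-linear cases: the same ridge-style fit handles both, and choosing the kernel plays the role of model selection. The toy data, kernels, and regularization weight are assumptions for illustration; the paper's KIGP additionally uses a probit model with Gibbs sampling.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(120, 2))
y = np.sign(X[:, 0]**2 + X[:, 1]**2 - 1.5)     # non-linear ground truth

def kernel(A, B, kind):
    if kind == "linear":
        return A @ B.T
    sq = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-sq / 2.0)                   # Gaussian (RBF) kernel

for kind in ("linear", "rbf"):
    K = kernel(X, X, kind)
    alpha = np.linalg.solve(K + 0.1 * np.eye(len(X)), y)  # kernel ridge fit
    acc = np.mean(np.sign(K @ alpha) == y)
    print(f"{kind:6s} kernel: training accuracy = {acc:.2f}")
```

With a circular decision boundary like this one, the linear kernel performs near chance while the RBF kernel fits well, which is the kind of linear/non-linear gap the unified kernel framework is designed to bridge.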
Wang, Juan; Nishikawa, Robert M; Yang, Yongyi
2016-01-01
In computer-aided detection of microcalcifications (MCs), the detection accuracy is often compromised by frequent occurrence of false positives (FPs), which can be attributed to a number of factors, including imaging noise, inhomogeneity in tissue background, linear structures, and artifacts in mammograms. In this study, the authors investigated a unified classification approach for combating the adverse effects of these heterogeneous factors for accurate MC detection. To accommodate FPs caused by different factors in a mammogram image, the authors developed a classification model to which the input features were adapted according to the image context at a detection location. For this purpose, the input features were defined in two groups, of which one group was derived from the image intensity pattern in a local neighborhood of a detection location, and the other group was used to characterize how a MC is different from its structural background. Owing to the distinctive effect of linear structures in the detector response, the authors introduced a dummy variable into the unified classifier model, which allowed the input features to be adapted according to the image context at a detection location (i.e., presence or absence of linear structures). To suppress the effect of inhomogeneity in tissue background, the input features were extracted from different domains aimed for enhancing MCs in a mammogram image. To demonstrate the flexibility of the proposed approach, the authors implemented the unified classifier model by two widely used machine learning algorithms, namely, a support vector machine (SVM) classifier and an Adaboost classifier. In the experiment, the proposed approach was tested for two representative MC detectors in the literature [difference-of-Gaussians (DoG) detector and SVM detector]. The detection performance was assessed using free-response receiver operating characteristic (FROC) analysis on a set of 141 screen-film mammogram (SFM) images (66 cases) and a set of 188 full-field digital mammogram (FFDM) images (95 cases). The FROC analysis results show that the proposed unified classification approach can significantly improve the detection accuracy of two MC detectors on both SFM and FFDM images. Despite the difference in performance between the two detectors, the unified classifiers can reduce their FP rate to a similar level in the output of the two detectors. In particular, with true-positive rate at 85%, the FP rate on SFM images for the DoG detector was reduced from 1.16 to 0.33 clusters/image (unified SVM) and 0.36 clusters/image (unified Adaboost), respectively; similarly, for the SVM detector, the FP rate was reduced from 0.45 clusters/image to 0.30 clusters/image (unified SVM) and 0.25 clusters/image (unified Adaboost), respectively. Similar FP reduction results were also achieved on FFDM images for the two MC detectors. The proposed unified classification approach can be effective for discriminating MCs from FPs caused by different factors (such as MC-like noise patterns and linear structures) in MC detection. The framework is general and can be applicable for further improving the detection accuracy of existing MC detectors.
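A small illustration of the dummy-variable device described above: augmenting a feature vector x with an indicator (standing in for "linear structure present") and the interactions between the indicator and x lets one unified linear classifier apply context-dependent weights. The data, weights, and plain logistic-regression learner are invented for this sketch; the authors used SVM and Adaboost classifiers on mammography features.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 400
ctx = rng.integers(0, 2, n)               # 1 if a linear structure is present
x = rng.normal(size=(n, 3))
w_plain = np.array([1.0, -1.0, 0.5])      # "true" weights without structures
w_struct = np.array([-1.0, 1.0, 2.0])     # "true" weights with structures
logits = np.where(ctx == 1, x @ w_struct, x @ w_plain)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# One unified design matrix: base features, interactions, and the dummy itself.
Z = np.column_stack([x, ctx[:, None] * x, ctx])
w = np.zeros(Z.shape[1])
for _ in range(2000):                     # plain gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-Z @ w))
    w += 0.5 * Z.T @ (y - p) / n
print("context-off weights:", w[:3].round(2))
print("context-on weights :", (w[:3] + w[3:6]).round(2))
```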
A unified RANS–LES model: Computational development, accuracy and cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gopalan, Harish, E-mail: hgopalan@uwyo.edu; Heinz, Stefan, E-mail: heinz@uwyo.edu; Stöllinger, Michael K., E-mail: MStoell@uwyo.edu
2013-09-15
Large eddy simulation (LES) is computationally extremely expensive for the investigation of wall-bounded turbulent flows at high Reynolds numbers. A way to reduce the computational cost of LES by orders of magnitude is to combine LES equations with Reynolds-averaged Navier–Stokes (RANS) equations used in the near-wall region. A large variety of such hybrid RANS–LES methods are currently in use, raising the question of which hybrid RANS–LES method represents the optimal approach. The properties of an optimal hybrid RANS–LES model are formulated here by taking reference to fundamental properties of fluid flow equations. It is shown that unified RANS–LES models derived from an underlying stochastic turbulence model have the properties of optimal hybrid RANS–LES models. The rest of the paper is organized in two parts. First, a priori and a posteriori analyses of channel flow data are used to find the optimal computational formulation of the theoretically derived unified RANS–LES model and to show that this computational model, which is referred to as the linear unified model (LUM), does also have all the properties of an optimal hybrid RANS–LES model. Second, a posteriori analyses of channel flow data are used to study the accuracy and cost features of the LUM. The following conclusions are obtained. (i) Compared to RANS, which require evidence for their predictions, the LUM has the significant advantage that the quality of predictions is relatively independent of the RANS model applied. (ii) Compared to LES, the significant advantage of the LUM is a cost reduction of high-Reynolds number simulations by a factor of 0.07 Re^0.46. For coarse grids, the LUM has a significant accuracy advantage over corresponding LES. (iii) Compared to other usually applied hybrid RANS–LES models, it is shown that the LUM provides significantly improved predictions.
Radio morphology and parent population of X-ray selected BL Lacertae objects
NASA Technical Reports Server (NTRS)
Laurent-Muehleisen, S. A.; Kollgaard, R. I.; Moellenbrock, G. A.; Feigelson, E. D.
1993-01-01
High-dynamic range (typically 1700:1) radio maps of 15 X-ray BL Lac (XBL) objects from the HEAO-1 Large Area Sky Survey are presented. Morphological characteristics of these sources are compared with Fanaroff-Riley (FR) class I radio galaxies in the context of unified schemes, with reference to one-sided kiloparsec-scale emission. Evidence that cluster membership of XBLs is significantly higher than previously thought is also presented. It is shown that the extended radio powers, X-ray emission, core-to-lobe ratios, and linear sizes of the radio selected BL Lac (RBL) and XBL populations are consistent with an FR I radio galaxy parent population. A source list and VLA observing log and map parameters are provided.
A unified perspective on robot control - The energy Lyapunov function approach
NASA Technical Reports Server (NTRS)
Wen, John T.
1990-01-01
A unified framework for the stability analysis of robot tracking control is presented. By using an energy-motivated Lyapunov function candidate, closed-loop stability is shown for a large family of control laws sharing a common structure of proportional and derivative feedback and a model-based feedforward. The feedforward can be zero, partial or complete linearized dynamics, partial or complete nonlinear dynamics, or linearized or nonlinear dynamics with parameter adaptation. As a result, the dichotomous approaches to the robot control problem based on open-loop linearization and nonlinear Lyapunov analysis are both included in this treatment. Furthermore, quantitative estimates of the trade-offs between different schemes in terms of tracking performance, steady-state error, domain of convergence, real-time computation load and required a priori model information are derived.
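A minimal sketch of the proportional-derivative-plus-feedforward structure named above, on a single-link arm regulating to a setpoint; the energy-motivated Lyapunov argument says the closed loop settles at the target. All physical parameters and gains are illustrative assumptions, not values from the paper.

```python
import numpy as np

m, l, g, b = 1.0, 0.5, 9.81, 0.1      # mass, length, gravity, joint damping
kp, kd = 25.0, 5.0                    # proportional and derivative gains
q_des = np.radians(45.0)              # setpoint

q, qd, dt = 0.0, 0.0, 1e-3
for _ in range(int(5.0 / dt)):
    # PD feedback plus gravity feedforward evaluated at the desired posture.
    tau = kp * (q_des - q) - kd * qd + m * g * l * np.sin(q_des)
    qdd = (tau - b * qd - m * g * l * np.sin(q)) / (m * l**2)
    qd += dt * qdd
    q += dt * qd
print(f"final angle: {np.degrees(q):.2f} deg (target 45)")
```

At the setpoint the feedforward exactly cancels gravity, so the PD terms vanish there and the arm rests at the target, which is the closed-loop equilibrium the Lyapunov analysis certifies.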
32 CFR 151.4 - Procedures and responsibilities.
Code of Federal Regulations, 2010 CFR
2010-07-01
... country for personnel assigned to foreign areas. (c) Designated commanding officer. Formal invocation of... geographical areas for which a unified command exists, the commander shall designate within each country the “Commanding Officer” referred to in the Senate Resolution (§ 151.6). (2) In areas where a unified command does...
Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video
NASA Astrophysics Data System (ADS)
Li, Honggui
2017-09-01
This paper proposes a unified one-dimensional (1-D) coding framework for image and video, which depends on a deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain compact inputs for the deep artificial neural network. Second, for the purpose of best reconstructing the original image patches, the deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. In the 1-D representation setting, the DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating the different categories of videos into the inputs of the patch clustering algorithm. Finally, the results of simulation experiments show that the proposed methods can simultaneously achieve a higher compression ratio and a higher peak signal-to-noise ratio than the state-of-the-art methods in the situation of low-bitrate transmission.
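The sketch below mimics the pipeline's first two stages on random data: cluster flattened patches with plain k-means, then give each cluster a rank-one linear autoencoder, computed in closed form via the SVD (to which an optimal linear autoencoder is equivalent), producing a 1-D code per patch. The patch size, cluster count, and data are assumptions; the paper trains an improved K-means variant and a deep linear autoencoder.

```python
import numpy as np

rng = np.random.default_rng(1)
patches = rng.normal(size=(500, 64))       # 500 flattened 8x8 patches (fake)

def kmeans(X, k, iters=25):
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers)**2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans(patches, k=4)
for j in range(4):
    if not (labels == j).any():
        continue
    Xc = patches[labels == j] - centers[j]
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    code = Xc @ Vt[0]                          # 1-D code for each patch
    recon = np.outer(code, Vt[0]) + centers[j] # linear "decoder"
    err = np.linalg.norm(recon - (Xc + centers[j])) / np.linalg.norm(Xc + centers[j])
    print(f"cluster {j}: {len(Xc)} patches, relative recon error {err:.2f}")
```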
ERIC Educational Resources Information Center
Abode, Philip Sanmi
2005-01-01
The purpose of this study was to understand the nature of the relationship between organizational strategy and district performance among California's largest unified school districts. Organizational strategy was measured using planned and realized strategies (independent variables). Realized strategy is also referred to as strategic orientation.…
A unified frame of predicting side effects of drugs by using linear neighborhood similarity.
Zhang, Wen; Yue, Xiang; Liu, Feng; Chen, Yanlin; Tu, Shikui; Zhang, Xining
2017-12-14
Drug side effects are one of the main concerns in drug discovery, and they gain wide attention. Investigating drug side effects is of great importance, and computational prediction can help to guide wet experiments. As far as we know, a great number of computational methods have been proposed for side effect prediction. The assumption that similar drugs may induce the same side effects is usually employed for modeling, and how to calculate drug-drug similarity is critical in side effect prediction. In this paper, we present a novel measure of drug-drug similarity named "linear neighborhood similarity", which is calculated in a drug feature space by exploring the linear neighborhood relationship. Then, we transfer the similarity from the feature space into the side effect space, and predict drug side effects by propagating known side effect information through a similarity-based graph. Under a unified frame based on the linear neighborhood similarity, we propose the method "LNSM" and its extension "LNSM-SMI" to predict side effects of new drugs, and propose the method "LNSM-MSE" to predict unobserved side effects of approved drugs. We evaluate the performance of LNSM and LNSM-SMI in predicting side effects of new drugs, and evaluate the performance of LNSM-MSE in predicting missing side effects of approved drugs. The results demonstrate that the linear neighborhood similarity can improve the performance of side effect prediction, and that the linear neighborhood similarity-based methods can outperform existing side effect prediction methods. More importantly, the proposed methods can predict side effects of new drugs as well as unobserved side effects of approved drugs under a unified frame.
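A simplified sketch of the linear-neighborhood idea: represent each drug as a non-negative, sum-to-one combination of its k nearest neighbors in feature space, then propagate known side-effect labels through the resulting similarity graph. The weight computation here (unconstrained ridge weights, clipped and renormalized) and all data are simplifying assumptions, not the authors' exact optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))            # 30 drugs, 8 chemical features (fake)
Y = (rng.random((30, 5)) < 0.2) * 1.0   # known drug / side-effect associations
k = 5                                   # neighborhood size

W = np.zeros((30, 30))                  # linear-neighborhood similarity matrix
for i in range(30):
    dists = np.linalg.norm(X - X[i], axis=1)
    nbrs = np.argsort(dists)[1:k + 1]   # k nearest neighbors, excluding self
    G = X[nbrs] @ X[nbrs].T + 1e-3 * np.eye(k)   # ridge-regularized Gram matrix
    w = np.linalg.solve(G, X[nbrs] @ X[i])       # reconstruction weights
    w = np.clip(w, 0.0, None)
    W[i, nbrs] = w / (w.sum() + 1e-12)           # non-negative, sum to one

alpha = 0.5                             # propagation trade-off parameter
F = np.linalg.solve(np.eye(30) - alpha * W, (1 - alpha) * Y)
print("propagated side-effect score matrix:", F.shape)
```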
Recurrence theorems: A unified account
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wallace, David, E-mail: david.wallace@balliol.ox.ac.uk
I discuss classical and quantum recurrence theorems in a unified manner, treating both as generalisations of the fact that a system with a finite state space only has so many places to go. Along the way, I prove versions of the recurrence theorem applicable to dynamics on linear and metric spaces and make some comments about applications of the classical recurrence theorem in the foundations of statistical mechanics.
NASA Astrophysics Data System (ADS)
Nguyen, Van-Dung; Wu, Ling; Noels, Ludovic
2017-03-01
This work provides a unified treatment of the arbitrary kinds of microscopic boundary conditions usually considered in the multi-scale computational homogenization method for nonlinear multi-physics problems. An efficient procedure is developed to enforce the multi-point linear constraints arising from the microscopic boundary condition, either by direct constraint elimination or by Lagrange multiplier elimination. The macroscopic tangent operators are computed in an efficient way from a linear system with multiple right-hand sides, whose left-hand-side matrix is the stiffness matrix of the microscopic linearized system at the converged solution. The number of right-hand-side vectors equals the number of macroscopic kinematic variables used to formulate the microscopic boundary condition. As the resolution of the microscopic linearized system often follows a direct factorization procedure, the computation of the macroscopic tangent operators can then be performed using this factorized matrix at a reduced computational time.
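The computational pattern in the last two sentences is easy to show in isolation: factor the converged stiffness matrix once, then solve one right-hand side per macroscopic kinematic variable by back-substitution. The matrices below are random stand-ins for an actual finite-element system, and the dimensions are assumptions.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n, n_macro = 200, 6                     # microscopic DOFs, macro variables
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)             # SPD stand-in for the stiffness matrix
B = rng.normal(size=(n, n_macro))       # sensitivities w.r.t. macro variables

lu, piv = lu_factor(K)                  # factor once (reused from the Newton solve)
S = lu_solve((lu, piv), B)              # one cheap back-substitution per column
C_macro = B.T @ S                       # condensed macroscopic tangent
print(C_macro.shape)                    # (6, 6)
```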
NASA Astrophysics Data System (ADS)
Fukuda, Jun'ichi; Johnson, Kaj M.
2010-06-01
We present a unified theoretical framework and solution method for probabilistic, Bayesian inversions of crustal deformation data. The inversions involve multiple data sets with unknown relative weights, model parameters that are related linearly or non-linearly through theoretical models to observations, prior information on model parameters and regularization priors to stabilize underdetermined problems. To efficiently handle non-linear inversions in which some of the model parameters are linearly related to the observations, this method combines both analytical least-squares solutions and a Monte Carlo sampling technique. In this method, model parameters that are linearly and non-linearly related to observations, relative weights of multiple data sets and relative weights of prior information and regularization priors are determined in a unified Bayesian framework. In this paper, we define the mixed linear-non-linear inverse problem, outline the theoretical basis for the method, provide a step-by-step algorithm for the inversion, validate the inversion method using synthetic data and apply the method to two real data sets. We apply the method to inversions of multiple geodetic data sets with unknown relative data weights for interseismic fault slip and locking depth. We also apply the method to the problem of estimating the spatial distribution of coseismic slip on faults with unknown fault geometry, relative data weights and smoothing regularization weight.
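A toy sketch of the mixed linear/non-linear idea: for each sampled value of the non-linear parameter (a locking depth in an ad-hoc profile model), the linear parameter (slip rate) is solved analytically by least squares, and the non-linear parameter is scored by the resulting misfit. The forward model, noise level, and grid are invented for illustration; the paper uses elastic dislocation models and Monte Carlo sampling rather than a grid.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-50.0, 50.0, 40)                 # station positions (km)
shape = lambda d: np.arctan(x / d) / np.pi       # ad-hoc profile for depth d
obs = 20.0 * shape(15.0) + rng.normal(0.0, 0.5, x.size)  # synthetic data (mm/yr)

best = (np.inf, None, None)
for d in np.linspace(5.0, 30.0, 101):    # sweep the non-linear parameter
    G = shape(d)
    s = (G @ obs) / (G @ G)              # linear parameter: analytic least squares
    misfit = np.sum((obs - s * G)**2)
    if misfit < best[0]:
        best = (misfit, d, s)
print(f"depth = {best[1]:.1f} km, rate = {best[2]:.1f} mm/yr (truth: 15, 20)")
```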
A quasi-likelihood approach to non-negative matrix factorization
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
A unified approach to non-negative matrix factorization based on the theory of generalized linear models is proposed. This approach embeds a variety of statistical models, including the exponential family, within a single theoretical framework and provides a unified view of such factorizations from the perspective of quasi-likelihood. Using this framework, a family of algorithms for handling signal-dependent noise is developed and its convergence proven using the Expectation-Maximization algorithm. In addition, a measure to evaluate the goodness-of-fit of the resulting factorization is described. The proposed methods allow modeling of non-linear effects via appropriate link functions and are illustrated using an application in biomedical signal processing. PMID:27348511
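One member of the family this abstract unifies is NMF under a Poisson-type (Kullback-Leibler) quasi-likelihood, whose classical multiplicative updates are sketched below on random count data. The matrix sizes, rank, and iteration count are illustrative assumptions.

```python
import numpy as np

# Multiplicative-update NMF minimizing the generalized KL divergence,
# the objective implied by a Poisson quasi-likelihood.
rng = np.random.default_rng(3)
V = rng.poisson(5, size=(40, 60)).astype(float) + 1e-9   # counts, kept positive
r = 4                                                    # factorization rank
W = rng.random((40, r))
H = rng.random((r, 60))
for _ in range(200):
    WH = W @ H
    H *= (W.T @ (V / WH)) / W.sum(axis=0)[:, None]       # update H
    WH = W @ H
    W *= ((V / WH) @ H.T) / H.sum(axis=1)[None, :]       # update W
div = np.sum(V * np.log(V / (W @ H)) - V + W @ H)        # goodness of fit
print(f"KL divergence after 200 updates: {div:.2f}")
```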
User's manual for UCAP: Unified Counter-Rotation Aero-Acoustics Program
NASA Technical Reports Server (NTRS)
Culver, E. M.; Mccolgan, C. J.
1993-01-01
This is the user's manual for the Unified Counter-rotation Aeroacoustics Program (UCAP), the counter-rotation derivative of the UAAP (Unified Aero-Acoustic Program). The purpose of this program is to predict steady and unsteady air loading on the blades and the noise produced by a counter-rotation Prop-Fan. The aerodynamic method is based on linear potential theory with corrections for nonlinearity associated with axial flux induction, vortex lift on the blades, and rotor-to-rotor interference. The theory for acoustics and the theory for individual blade loading and wakes are derived in Unified Aeroacoustics Analysis for High Speed Turboprop Aerodynamics and Noise, Volume 1 (NASA CR-4329). This user's manual also includes a brief explanation of the theory used for the modelling of counter-rotation.
User's manual for UCAP: Unified Counter-Rotation Aero-Acoustics Program
NASA Astrophysics Data System (ADS)
Culver, E. M.; McColgan, C. J.
1993-04-01
This is the user's manual for the Unified Counter-rotation Aeroacoustics Program (UCAP), the counter-rotation derivative of the UAAP (Unified Aero-Acoustic Program). The purpose of this program is to predict steady and unsteady air loading on the blades and the noise produced by a counter-rotation Prop-Fan. The aerodynamic method is based on linear potential theory with corrections for nonlinearity associated with axial flux induction, vortex lift on the blades, and rotor-to-rotor interference. The theory for acoustics and the theory for individual blade loading and wakes are derived in Unified Aeroacoustics Analysis for High Speed Turboprop Aerodynamics and Noise, Volume 1 (NASA CR-4329). This user's manual also includes a brief explanation of the theory used for the modelling of counter-rotation.
Generalized Multilevel Structural Equation Modeling
ERIC Educational Resources Information Center
Rabe-Hesketh, Sophia; Skrondal, Anders; Pickles, Andrew
2004-01-01
A unifying framework for generalized multilevel structural equation modeling is introduced. The models in the framework, called generalized linear latent and mixed models (GLLAMM), combine features of generalized linear mixed models (GLMM) and structural equation models (SEM) and consist of a response model and a structural model for the latent…
Unified powered flight guidance
NASA Technical Reports Server (NTRS)
Brand, T. J.; Brown, D. W.; Higgins, J. P.
1973-01-01
A complete revision of the orbiter powered flight guidance scheme is presented. A unified approach to powered flight guidance was taken to accommodate all phases of exo-atmospheric orbiter powered flight, from ascent through deorbit. The guidance scheme was changed from the previous modified version of the Lambert Aim Point Maneuver Mode used in Apollo to one that employs linear tangent guidance concepts. This document replaces the previous ascent phase equation document.
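For reference, the linear tangent steering concept named in this abstract commands a thrust angle whose tangent varies linearly in time; the sketch below just tabulates such a profile with invented constants, to show the shape of the command history.

```python
import numpy as np

tan_theta0, c = np.tan(np.radians(60.0)), 0.004   # illustrative constants
for t in np.linspace(0.0, 300.0, 7):              # seconds since ignition
    pitch = np.degrees(np.arctan(tan_theta0 - c * t))
    print(f"t = {t:5.1f} s   commanded pitch = {pitch:5.1f} deg")
```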
Dharmalingam, Rajasekaran; Dash, Subhransu Sekhar; Senthilnathan, Karthikrajan; Mayilvaganan, Arun Bhaskar; Chinnamuthu, Subramani
2014-01-01
This paper deals with the performance of a unified power quality conditioner (UPQC) based on the current source converter (CSC) topology. The UPQC is used to mitigate power quality problems such as harmonics and sag. The shunt and series active filters perform simultaneous elimination of current and voltage problems. Power is exchanged through a common DC link, maintaining constant real power exchange; the DC link is connected through a reactor. The real power for compensating the power quality problems is supplied by a photovoltaic system. The reference current and voltage generation for the shunt and series converters is based on a phase locked loop and synchronous reference frame theory. The proposed UPQC-CSC design has superior performance in mitigating power quality problems. PMID:25013854
Building a Unified Information Network.
ERIC Educational Resources Information Center
Avram, Henriette D.
1988-01-01
Discusses cooperative efforts between research organizations and libraries to create a national information network. Topics discussed include the Linked System Project (LSP); technical processing versus reference and research functions; Open Systems Interconnection (OSI) Reference Model; the National Science Foundation Network (NSFNET); and…
Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy
NASA Technical Reports Server (NTRS)
Ford, G. E.
1986-01-01
To characterize and quantify the performance of the Landsat Thematic Mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated, and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons of existing methods for the design of linear transformations for dimensionality reduction are presented. These methods include the discrete Karhunen-Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), the Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Versions of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three-dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates, as expected.
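A minimal sketch of the first method named above, the discrete Karhunen-Loeve (principal component) transform, applied to simulated six-band pixel vectors: the cumulative eigenvalue fractions show how much variance the leading three components retain, mirroring the finding that the six reflective bands need only a three-dimensional feature space. The correlated random data stand in for real TM bands.

```python
import numpy as np

rng = np.random.default_rng(4)
mix = rng.normal(size=(6, 3))            # 6 bands driven by 3 latent factors
pixels = rng.normal(size=(10000, 3)) @ mix.T + rng.normal(0.0, 0.1, (10000, 6))

C = np.cov(pixels, rowvar=False)         # 6x6 band covariance matrix
evals, evecs = np.linalg.eigh(C)
order = np.argsort(evals)[::-1]          # sort eigenpairs, largest first
evals, evecs = evals[order], evecs[:, order]
reduced = (pixels - pixels.mean(0)) @ evecs[:, :3]   # 3-D feature space
print("cumulative eigenvalue fraction:", (evals.cumsum() / evals.sum()).round(4))
```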
Unification of the general non-linear sigma model and the Virasoro master equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boer, J. de; Halpern, M.B.
1997-06-01
The Virasoro master equation describes a large set of conformal field theories known as the affine-Virasoro constructions, in the operator algebra (affine Lie algebra) of the WZW model, while the Einstein equations of the general non-linear sigma model describe another large set of conformal field theories. This talk summarizes recent work which unifies these two sets of conformal field theories, together with a presumably large class of new conformal field theories. The basic idea is to consider spin-two operators of the form L_{ij} ∂x^i ∂x^j in the background of a general sigma model. The requirement that these operators satisfy the Virasoro algebra leads to a set of equations called the unified Einstein-Virasoro master equation, in which the spin-two spacetime field L_{ij} couples to the usual spacetime fields of the sigma model. The one-loop form of this unified system is presented, and some of its algebraic and geometric properties are discussed.
Framework Design of Unified Cross-Authentication Based on the Fourth Platform Integrated Payment
NASA Astrophysics Data System (ADS)
Yong, Xu; Yujin, He
The essay advances a unified authentication scheme based on the fourth-platform integrated payment. The research aims at improving the compatibility of authentication in electronic business and providing a reference for the establishment of a credit system, by seeking a way to carry out standard unified authentication on an integrated payment platform. The essay introduces the concept of the fourth integrated payment platform and finally puts forward its overall structure and components. The main contribution of the essay is the design of the credit system of the fourth integrated payment platform and the PKI/CA structure design.
Simultaneous analysis and design
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1984-01-01
Optimization techniques are increasingly being used for performing nonlinear structural analysis. The development of element by element (EBE) preconditioned conjugate gradient (CG) techniques is expected to extend this trend to linear analysis. Under these circumstances the structural design problem can be viewed as a nested optimization problem. There are computational benefits to treating this nested problem as a large single optimization problem. The response variables (such as displacements) and the structural parameters are all treated as design variables in a unified formulation which performs simultaneously the design and analysis. Two examples are used for demonstration. A seventy-two bar truss is optimized subject to linear stress constraints and a wing box structure is optimized subject to nonlinear collapse constraints. Both examples show substantial computational savings with the unified approach as compared to the traditional nested approach.
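The computational idea behind the unified (simultaneous analysis and design) formulation is easiest to see on a one-bar toy problem; the sketch below treats both the cross-sectional area and the displacement as optimization variables, with equilibrium imposed as an equality constraint rather than solved in a nested analysis step. All constants are hypothetical, and this is our illustration of the idea, not the paper's truss or wing-box examples.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative constants (hypothetical units): modulus, length, load, stress limit.
E, L, F, sigma_max = 70e9, 1.0, 1e5, 100e6

# Unified (SAND) formulation: the design variable A and the response
# variable u are optimized together; equilibrium K(A) u = F becomes an
# equality constraint instead of a nested analysis.
def weight(x):
    A, u = x
    return A * L  # proportional to structural mass

constraints = [
    {"type": "eq",   "fun": lambda x: (E * x[0] / L) * x[1] - F},  # equilibrium
    {"type": "ineq", "fun": lambda x: sigma_max - E * x[1] / L},   # stress limit
]

res = minimize(weight, x0=[1e-3, 1e-3], bounds=[(1e-6, None), (0.0, None)],
               constraints=constraints, method="SLSQP")
A_opt, u_opt = res.x
print(f"A* = {A_opt:.6e} m^2, u* = {u_opt:.6e} m, stress = {E*u_opt/L:.3e} Pa")
```

At the optimum the stress constraint is active, so A* = F/sigma_max, which the solver recovers without ever performing a separate structural analysis inside the loop.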
Definition and Proposed Realization of the International Height Reference System (IHRS)
NASA Astrophysics Data System (ADS)
Ihde, Johannes; Sánchez, Laura; Barzaghi, Riccardo; Drewes, Hermann; Foerste, Christoph; Gruber, Thomas; Liebsch, Gunter; Marti, Urs; Pail, Roland; Sideris, Michael
2017-05-01
Studying, understanding and modelling global change require geodetic reference frames with an order of accuracy higher than the magnitude of the effects to be actually studied and with high consistency and reliability worldwide. The International Association of Geodesy, which is responsible for providing a precise geodetic infrastructure for monitoring the Earth system, promotes the implementation of an integrated global geodetic reference frame that provides a reliable frame for consistent analysis and modelling of global phenomena and processes affecting the Earth's gravity field, the Earth's surface geometry and the Earth's rotation. The definition, realization, maintenance and wide utilization of the International Terrestrial Reference System guarantee a globally unified geometric reference frame with an accuracy at the millimetre level. An equivalent high-precision global physical reference frame that supports the reliable description of changes in the Earth's gravity field (such as sea level variations, mass displacements, processes associated with geophysical fluids) is missing. This paper addresses the theoretical foundations supporting the implementation of such a physical reference surface in terms of an International Height Reference System and provides guidance for the coming activities required for the practical and sustainable realization of this system. Based on conceptual approaches of physical geodesy, the requirements for a unified global height reference system are derived. In accordance with current practice, its realization as the International Height Reference Frame is designed. Further steps for the implementation are also proposed.
A unified model for transfer alignment at random misalignment angles based on second-order EKF
NASA Astrophysics Data System (ADS)
Cui, Xiao; Mei, Chunbo; Qin, Yongyuan; Yan, Gongmin; Liu, Zhenbo
2017-04-01
In the transfer alignment process of inertial navigation systems (INSs), the conventional linear error model based on the small-misalignment-angle assumption cannot be applied to large misalignment situations. Furthermore, the nonlinear model based on large misalignment angles suffers from redundant computation with nonlinear filters. This paper presents a unified model for transfer alignment suitable for arbitrary misalignment angles. The alignment problem is transformed into an estimation of the relative attitude between the master INS (MINS) and the slave INS (SINS), by decomposing the attitude matrix of the latter. Based on the Rodrigues parameters, a unified alignment model in the inertial frame, with a linear state-space equation and a second-order nonlinear measurement equation, is established without making any assumptions about the misalignment angles. Furthermore, we employ Taylor series expansions of the second-order nonlinear measurement equation to implement the second-order extended Kalman filter (EKF2). Monte-Carlo simulations demonstrate that the initial alignment can be fulfilled within 10 s, with higher accuracy and much smaller computational cost compared with the traditional unscented Kalman filter (UKF) at large misalignment angles.
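For readers unfamiliar with the EKF2, the sketch below implements a generic second-order measurement update for a scalar measurement, retaining the Hessian terms that a first-order EKF drops. The toy measurement function, state and noise values are hypothetical; this is a minimal sketch of the filter class, not the paper's Rodrigues-parameter model.

```python
import numpy as np

def ekf2_update(x, P, z, h, H_jac, H_hess, R):
    """One second-order EKF update for a scalar measurement z = h(x) + v.

    The predicted measurement and innovation variance keep the second-order
    Taylor (Hessian) corrections that a first-order EKF neglects.
    """
    z_pred = h(x) + 0.5 * np.trace(H_hess @ P)
    H = H_jac(x).reshape(1, -1)
    S = float(H @ P @ H.T) + R + 0.5 * np.trace(H_hess @ P @ H_hess @ P)
    K = (P @ H.T) / S                      # Kalman gain (column vector)
    x_new = x + (K * (z - z_pred)).ravel()
    P_new = P - K @ H @ P
    return x_new, P_new

# Toy measurement h(x) = ||x||^2, whose Hessian is the constant 2*I.
h = lambda x: float(x @ x)
H_jac = lambda x: 2.0 * x
H_hess = 2.0 * np.eye(2)

x, P = np.array([1.0, 0.5]), 0.1 * np.eye(2)
x, P = ekf2_update(x, P, z=1.4, h=h, H_jac=H_jac, H_hess=H_hess, R=0.01)
print(x, np.diag(P))
```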
Defocusing effects of lensless ghost imaging and ghost diffraction with partially coherent sources
NASA Astrophysics Data System (ADS)
Zhou, Shuang-Xi; Sheng, Wei; Bi, Yu-Bo; Luo, Chun-Ling
2018-04-01
The defocusing effect is inevitable and degrades the image quality in the conventional optical imaging process significantly due to the close confinement of the imaging lens. Based on classical optical coherence theory and linear algebra, we develop a unified formula to describe the defocusing effects of both lensless ghost imaging (LGI) and lensless ghost diffraction (LGD) systems with a partially coherent source. Numerical examples are given to illustrate the influence of defocusing length on the quality of LGI and LGD. We find that the defocusing effects of the test and reference paths in the LGI or LGD systems are entirely different, while the LGD system is more robust against defocusing than the LGI system. Specifically, we find that the imaging process for LGD systems can be viewed as pinhole imaging, which may find applications in ultra-short-wave band imaging without imaging lenses, e.g. x-ray diffraction and γ-ray imaging.
Optimization-Based Robust Nonlinear Control
2006-08-01
ABSTRACT New control algorithms were developed for robust stabilization of nonlinear dynamical systems. Novel, linear matrix inequality-based synthesis...was to further advance optimization-based robust nonlinear control design, for general nonlinear systems (especially in discrete time), for linear...Teel, IEEE Transactions on Control Systems Technology, vol. 14, no. 3, pp. 398-407, May 2006. 3. "A unified framework for input-to-state stability in
An Evaluation of Feature Learning Methods for High Resolution Image Classification
NASA Astrophysics Data System (ADS)
Tokarczyk, P.; Montoya, J.; Schindler, K.
2012-07-01
Automatic image classification is one of the fundamental problems of remote sensing research. The classification problem is even more challenging in high-resolution images of urban areas, where the objects are small and heterogeneous. Two questions arise, namely which features to extract from the raw sensor data to capture the local radiometry and image structure at each pixel or segment, and which classification method to apply to the feature vectors. While classifiers are nowadays well understood, selecting the right features remains a largely empirical process. Here we concentrate on the features. Several methods are evaluated which allow one to learn suitable features from unlabelled image data by analysing the image statistics. In a comparative study, we evaluate unsupervised feature learning with different linear and non-linear learning methods, including principal component analysis (PCA) and deep belief networks (DBN). We also compare these automatically learned features with popular choices of ad-hoc features including raw intensity values, standard combinations like the NDVI, a few PCA channels, and texture filters. The comparison is done in a unified framework using the same images, the target classes, reference data and a Random Forest classifier.
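A minimal version of the evaluation protocol, assuming synthetic data in place of the aerial images: learn features without labels (here PCA), then feed them to a Random Forest classifier, the same pairing the paper uses as its unified comparison framework. The array sizes and class structure below are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for per-pixel raw feature vectors (e.g. small image
# patches flattened to vectors) with integer class labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 75))
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # toy class structure

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Unsupervised feature learning with PCA, then a Random Forest classifier.
pca = PCA(n_components=10).fit(X_tr)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(pca.transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(pca.transform(X_te))))
```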
Unified web-based network management based on distributed object orientated software agents
NASA Astrophysics Data System (ADS)
Djalalian, Amir; Mukhtar, Rami; Zukerman, Moshe
2002-09-01
This paper presents an architecture that provides a unified web interface to managed network devices that support CORBA, OSI or Internet-based network management protocols. A client gains access to managed devices through a web browser, which is used to issue management operations and receive event notifications. The proposed architecture is compatible with both the OSI Management Reference Model and CORBA. The steps required for designing the building blocks of such an architecture are identified.
A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application
Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang
2018-01-01
Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF-based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF)-based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF-based hybrid-correction scheme. PMID:29373549
Fu, Wei; Nijhoff, Frank W
2017-07-01
A unified framework is presented for the solution structure of three-dimensional discrete integrable systems, including the lattice AKP, BKP and CKP equations. This is done through the so-called direct linearizing transform, which establishes a general class of integral transforms between solutions. As a particular application, novel soliton-type solutions for the lattice CKP equation are obtained.
ERIC Educational Resources Information Center
Grenier-Boley, Nicolas
2014-01-01
Certain mathematical concepts were not introduced to solve a specific open problem but rather to solve different problems with the same tools in an economic formal way or to unify several approaches: such concepts, as some of those of linear algebra, are presumably difficult to introduce to students as they are potentially interwoven with many…
NASA Astrophysics Data System (ADS)
Prudden, R.; Arribas, A.; Tomlinson, J.; Robinson, N.
2017-12-01
The Unified Model is a numerical model of the atmosphere used at the UK Met Office (and numerous partner organisations, including the Korean Meteorological Agency, the Australian Bureau of Meteorology and the US Air Force) for both weather and climate applications. Specifically, dynamical models such as the Unified Model are now a central part of weather forecasting. Starting from basic physical laws, these models make it possible to predict events such as storms before they have even begun to form. The Unified Model can be simply described as having two components: one component solves the Navier-Stokes equations (usually referred to as the "dynamics"); the other solves relevant sub-grid physical processes (usually referred to as the "physics"). Running weather forecasts requires substantial computing resources - for example, the UK Met Office operates the largest operational High Performance Computer in Europe - and roughly 50% of the cost of a typical simulation is spent in the "dynamics" and 50% in the "physics". There is therefore a strong incentive to reduce the cost of weather forecasts, and Machine Learning is a possible option because, once a machine learning model has been trained, it is often much faster to run than a full simulation. This is the motivation for a technique called model emulation, the idea being to build a fast statistical model which closely approximates a far more expensive simulation. In this paper we discuss the use of Machine Learning as an emulator to replace the "physics" component of the Unified Model. Various approaches and options will be presented, and the implications for further model development, operational running of forecasting systems, development of data assimilation schemes, and development of ensemble prediction techniques will be discussed.
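A toy version of model emulation, under the assumption that the "physics" can be treated as a mapping from state profiles to tendencies: train a regressor once on input-output pairs, then evaluate it cheaply in place of the expensive computation. The data-generating function and network sizes below are invented for illustration; they are not the Unified Model's parameterizations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in: map coarse atmospheric state profiles to the tendency
# the "physics" would produce (here a made-up smooth nonlinear function).
rng = np.random.default_rng(0)
state = rng.normal(size=(20000, 8))                  # inputs to the physics
tendency = np.tanh(state @ rng.normal(size=(8, 3)))  # outputs to emulate

X_tr, X_te, y_tr, y_te = train_test_split(state, tendency, random_state=0)

# Train once (expensive), then evaluate cheaply inside the forecast loop.
emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                        random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", emulator.score(X_te, y_te))
```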
Unified Pairwise Spatial Relations: An Application to Graphical Symbol Retrieval
NASA Astrophysics Data System (ADS)
Santosh, K. C.; Wendling, Laurent; Lamiroy, Bart
In this paper, we present a novel unifying concept of pairwise spatial relations. We develop two-way directional relations with respect to a unique point set, based on the topology of the studied objects, thus avoiding the problems related to erroneous choices of reference objects while preserving symmetry. The method is robust to any type of image configuration, since the directional relations are topologically guided. A prototype automatic graphical symbol retrieval system is presented in order to establish the expressiveness of the approach.
Nonlinearization and waves in bounded media: old wine in a new bottle
NASA Astrophysics Data System (ADS)
Mortell, Michael P.; Seymour, Brian R.
2017-02-01
We consider problems such as a standing wave in a closed straight tube, a self-sustained oscillation, damped resonance, evolution of resonance and resonance between concentric spheres. These nonlinear problems, and other similar ones, have been solved by a variety of techniques when it is seen that linear theory fails. The unifying approach given here is to initially set up the appropriate linear difference equation, where the difference is the linear travel time. When the linear travel time is replaced by a corrected nonlinear travel time, the nonlinear difference equation yields the required solution.
24 CFR 3285.4 - Incorporation by reference (IBR).
Code of Federal Regulations, 2010 CFR
2010-04-01
... purchase from the Structural Engineering Institute/American Society of Civil Engineers (SEI/ASCE), 1801... for Engineering Purposes (Unified Soil Classification System), 2000, IBR approved for the table at...
Coriolis effect in optics: unified geometric phase and spin-Hall effect.
Bliokh, Konstantin Y; Gorodetski, Yuri; Kleiner, Vladimir; Hasman, Erez
2008-07-18
We examine the spin-orbit coupling effects that appear when a wave carrying intrinsic angular momentum interacts with a medium. The Berry phase is shown to be a manifestation of the Coriolis effect in a noninertial reference frame attached to the wave. In the most general case, when both the direction of propagation and the state of the wave are varied, the phase is given by a simple expression that unifies the spin redirection Berry phase and the Pancharatnam-Berry phase. The theory is supported by the experiment demonstrating the spin-orbit coupling of electromagnetic waves via a surface plasmon nanostructure. The measurements verify the unified geometric phase, demonstrated by the observed polarization-dependent shift (spin-Hall effect) of the waves.
Projective formulation of Maggi's method for nonholonomic systems analysis
NASA Astrophysics Data System (ADS)
Blajer, Wojciech
1992-04-01
A projective interpretation of Maggi's approach to the dynamic analysis of nonholonomic systems is presented. Both linear and nonlinear constraint cases are treated in a unified fashion, using the language of vector spaces and tensor algebra.
Robust nonlinear control of vectored thrust aircraft
NASA Technical Reports Server (NTRS)
Doyle, John C.; Murray, Richard; Morris, John
1993-01-01
An interdisciplinary program in robust control for nonlinear systems with applications to a variety of engineering problems is outlined. Major emphasis will be placed on flight control, with both experimental and analytical studies. This program builds on recent new results in control theory for stability, stabilization, robust stability, robust performance, synthesis, and model reduction in a unified framework using Linear Fractional Transformations (LFTs), Linear Matrix Inequalities (LMIs), and the structured singular value μ. Most of these new advances have been accomplished by the Caltech controls group independently or in collaboration with researchers in other institutions. These recent results offer a new and remarkably unified framework for all aspects of robust control, but what is particularly important for this program is that they also have important implications for system identification and control of nonlinear systems. This combines well with Caltech's expertise in nonlinear control theory, both in geometric methods and methods for systems with constraints and saturations.
A Unified Global Reference Frame of Vertical Crustal Movements by Satellite Laser Ranging.
Zhu, Xinhui; Wang, Ren; Sun, Fuping; Wang, Jinling
2016-02-08
Crustal movement is one of the main factors influencing the change of the Earth system, especially in its vertical direction, which affects people's daily life through the frequent occurrence of earthquakes, geological disasters, and so on. In order to better study and apply the vertical crustal movement, as well as its changes, the foundation and prerequisite are to devise and establish its reference frame; in particular, a unified global reference frame is required. Since SLR (satellite laser ranging) is one of the most accurate space techniques for monitoring geocentric motion and can directly measure the ground station's geocentric coordinates and velocities relative to the centre of the Earth's mass, we proposed to take the vertical velocity of the SLR technique in the ITRF2008 framework as the reference frame of vertical crustal motion, which we defined as the SLR vertical reference frame (SVRF). The systematic bias between other velocity fields and the SVRF was resolved by using the GPS (Global Positioning System) and VLBI (very long baseline interferometry) velocity observations, and the unification of other velocity fields with the SVRF was realized as well. The results show that it is feasible and suitable to take the SVRF as a reference frame, which has both geophysical meaning and geodetic observations, so we recommend taking the SLR vertical velocity under ITRF2008 as the global reference frame of vertical crustal movement.
A Unified Global Reference Frame of Vertical Crustal Movements by Satellite Laser Ranging
Zhu, Xinhui; Wang, Ren; Sun, Fuping; Wang, Jinling
2016-01-01
Crustal movement is one of the main factors influencing the change of the Earth system, especially in its vertical direction, which affects people’s daily life through the frequent occurrence of earthquakes, geological disasters, and so on. In order to better study and apply the vertical crustal movement, as well as its changes, the foundation and prerequisite are to devise and establish its reference frame; in particular, a unified global reference frame is required. Since SLR (satellite laser ranging) is one of the most accurate space techniques for monitoring geocentric motion and can directly measure the ground station’s geocentric coordinates and velocities relative to the centre of the Earth’s mass, we proposed to take the vertical velocity of the SLR technique in the ITRF2008 framework as the reference frame of vertical crustal motion, which we defined as the SLR vertical reference frame (SVRF). The systematic bias between other velocity fields and the SVRF was resolved by using the GPS (Global Positioning System) and VLBI (very long baseline interferometry) velocity observations, and the unification of other velocity fields with the SVRF was realized as well. The results show that it is feasible and suitable to take the SVRF as a reference frame, which has both geophysical meaning and geodetic observations, so we recommend taking the SLR vertical velocity under ITRF2008 as the global reference frame of vertical crustal movement. PMID:26867197
Unified Theory for Decoding the Signals from X-Ray Fluorescence and X-Ray Diffraction of Mixtures.
Chung, Frank H
2017-05-01
For research and development or for solving technical problems, we often need to know the chemical composition of an unknown mixture, which is coded and stored in the signals of its X-ray fluorescence (XRF) and X-ray diffraction (XRD). X-ray fluorescence gives chemical elements, whereas XRD gives chemical compounds. The major problem in XRF and XRD analyses is the complex matrix effect. The conventional technique to deal with the matrix effect is to construct empirical calibration lines with standards for each element or compound sought, which is tedious and time-consuming. A unified theory of quantitative XRF analysis is presented here. The idea is to cancel the matrix effect mathematically. It turns out that the decoding equation for quantitative XRF analysis is identical to that for quantitative XRD analysis although the physics of XRD and XRF are fundamentally different. The XRD work has been published and practiced worldwide. The unified theory derives a new intensity-concentration equation of XRF, which is free from the matrix effect and valid for a wide range of concentrations. The linear decoding equation establishes a constant slope for each element sought, hence eliminating the work on calibration lines. The simple linear decoding equation has been verified by 18 experiments.
Why and How. The Future of the Central Questions of Consciousness
Havlík, Marek; Kozáková, Eva; Horáček, Jiří
2017-01-01
In this review, we deal with the two central questions of consciousness, how and why, and we outline their possible future development. The question how refers to the empirical endeavor to reveal the neural correlates and mechanisms that form consciousness. On the other hand, the question why generally refers to the “hard problem” of consciousness, which claims that empirical science will always fail to provide a satisfactory answer to the question of why there is conscious experience at all. Unfortunately, the hard problem of consciousness will probably never completely disappear because it will always have its most committed supporters. However, there is a good chance that its weight and importance will be highly reduced by empirically tackling consciousness in the near future. We expect that future empirical endeavor of consciousness will be based on a unifying brain theory and will answer the question as to what is the function of conscious experience, which will in turn replace the implications of the hard problem. The candidate of such a unifying brain theory is predictive coding, which will have to explain both perceptual consciousness and conscious mind-wandering in order to become the truly unifying theory of brain functioning. PMID:29075226
Linear transformation and oscillation criteria for Hamiltonian systems
NASA Astrophysics Data System (ADS)
Zheng, Zhaowen
2007-08-01
Using a linear transformation similar to the Kummer transformation, some new oscillation criteria for linear Hamiltonian systems are established. These results generalize and improve the oscillation criteria due to I.S. Kumari and S. Umanaheswaram [I. Sowjaya Kumari, S. Umanaheswaram, Oscillation criteria for linear matrix Hamiltonian systems, J. Differential Equations 165 (2000) 174-198], Q. Yang et al. [Q. Yang, R. Mathsen, S. Zhu, Oscillation theorems for self-adjoint matrix Hamiltonian systems, J. Differential Equations 190 (2003) 306-329], and S. Chen and Z. Zheng [Shaozhu Chen, Zhaowen Zheng, Oscillation criteria of Yan type for linear Hamiltonian systems, Comput. Math. Appl. 46 (2003) 855-862]. These criteria also unify many of known criteria in literature and simplify the proofs.
Zhou, Xiang
2017-12-01
Linear mixed models (LMMs) are among the most commonly used tools for genetic association studies. However, the standard method for estimating variance components in LMMs, the restricted maximum likelihood estimation method (REML), suffers from several important drawbacks: REML requires individual-level genotypes and phenotypes from all samples in the study, is computationally slow, and produces downward-biased estimates in case-control studies. To remedy these drawbacks, we present an alternative framework for variance component estimation, which we refer to as MQS. MQS is based on the method of moments (MoM) and the minimal norm quadratic unbiased estimation (MINQUE) criterion, and brings two seemingly unrelated methods, the renowned Haseman-Elston (HE) regression and the recent LD score regression (LDSC), into the same unified statistical framework. With this new framework, we provide an alternative but mathematically equivalent form of HE that allows for the use of summary statistics. We provide an exact estimation form of LDSC to yield unbiased and statistically more efficient estimates. A key feature of our method is its ability to pair marginal z-scores computed using all samples with SNP correlation information computed using a small random subset of individuals (or individuals from a proper reference panel), while being capable of producing estimates that can be almost as accurate as if both quantities were computed using the full data. As a result, our method produces unbiased and statistically efficient estimates and makes use of summary statistics, while being computationally efficient for large data sets. Using simulations and applications to 37 phenotypes from 8 real data sets, we illustrate the benefits of our method for estimating and partitioning SNP heritability in population studies as well as for heritability estimation in family studies. Our method is implemented in the GEMMA software package, freely available at www.xzlab.org/software.html.
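Classic Haseman-Elston regression, the simplest member of the framework described above, can be written in a few lines: regress phenotype cross-products on off-diagonal entries of the genetic relatedness matrix. The simulation below is a minimal sketch with invented sample sizes, not the MQS estimator or GEMMA itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, h2_true = 1000, 500, 0.5

# Simulate standardized genotypes, SNP effects, and phenotype y = g + e.
Z = rng.normal(size=(n, p))
Z = (Z - Z.mean(0)) / Z.std(0)
beta = rng.normal(scale=np.sqrt(h2_true / p), size=p)
y = Z @ beta + rng.normal(scale=np.sqrt(1 - h2_true), size=n)
y = (y - y.mean()) / y.std()

# Genetic relatedness matrix and Haseman-Elston regression: least-squares
# regression of the cross-products y_i*y_j on the off-diagonal K_ij.
K = Z @ Z.T / p
iu = np.triu_indices(n, k=1)
h2_hat = (y[iu[0]] * y[iu[1]] @ K[iu]) / (K[iu] @ K[iu])
print(f"HE estimate of SNP heritability: {h2_hat:.3f} (truth {h2_true})")
```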
Roux-Rouquié, Magali; Caritey, Nicolas; Gaubert, Laurent; Rosenthal-Sabroux, Camille
2004-07-01
One of the main issues in Systems Biology is dealing with semantic data integration. Previously, we examined the requirements for a reference conceptual model to guide semantic integration based on systemic principles. In the present paper, we examine the usefulness of the Unified Modelling Language (UML) for describing and specifying biological systems and processes. This yields unambiguous representations of biological systems that are suitable for translation into mathematical and computational formalisms, enabling analysis, simulation and prediction of the behaviour of these systems.
Unified quantum no-go theorems and transforming of quantum pure states in a restricted set
NASA Astrophysics Data System (ADS)
Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong; Wang, Xiaojun
2017-12-01
The linear superposition principle in quantum mechanics is essential for several no-go theorems such as the no-cloning theorem, the no-deleting theorem and the no-superposing theorem. In this paper, we investigate general quantum transformations forbidden or permitted by the superposition principle for various goals. First, we prove a no-encoding theorem that forbids linearly superposing an unknown pure state and a fixed pure state in a Hilbert space of finite dimension. The new theorem is further extended to multiple copies of an unknown state as input states. These generalized results of the no-encoding theorem include the no-cloning theorem, the no-deleting theorem and the no-superposing theorem as special cases. Second, we provide a unified scheme for presenting perfect and imperfect quantum tasks (cloning and deleting) in a one-shot manner. This scheme may lead to fruitful results that are completely characterized by the linear independence of the representative vectors of the input pure states. Upper bounds on the efficiency are also proved. Third, we generalize a recent superposing scheme of unknown states with a fixed overlap into new schemes in which multiple copies of an unknown state serve as input states.
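Since the abstract leans on the role of linearity in these no-go results, a compact reminder may help: the textbook two-state argument below shows why a single linear (unitary) map cannot clone arbitrary superpositions. It is a standard derivation, not the paper's own proof.

```latex
% Sketch: linearity forbids a universal cloner U with
% U(|psi>|0>) = |psi>|psi> for every state |psi>.
\begin{align*}
U(\lvert 0\rangle \lvert 0\rangle) &= \lvert 0\rangle \lvert 0\rangle, \qquad
U(\lvert 1\rangle \lvert 0\rangle) = \lvert 1\rangle \lvert 1\rangle,\\
\lvert \psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(\lvert 0\rangle + \lvert 1\rangle\bigr)
\;\Longrightarrow\;
U(\lvert \psi\rangle \lvert 0\rangle)
&= \tfrac{1}{\sqrt{2}}\bigl(\lvert 00\rangle + \lvert 11\rangle\bigr)\\
&\neq \lvert \psi\rangle \lvert \psi\rangle
= \tfrac{1}{2}\bigl(\lvert 00\rangle + \lvert 01\rangle
  + \lvert 10\rangle + \lvert 11\rangle\bigr).
\end{align*}
```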
NASA Technical Reports Server (NTRS)
McGowan, David M.
1999-01-01
The analytical formulation of curved-plate non-linear equilibrium equations including transverse-shear-deformation effects is presented. A unified set of non-linear strains that contains terms from both physical and tensorial strain measures is used. Linearized, perturbed equilibrium equations (stability equations) that describe the response of the plate just after buckling occurs are derived. These equations are then modified to allow the plate reference surface to be located a distance z_c from the centroidal surface. The implementation of the new theory into the VICONOPT exact buckling and vibration analysis and optimum design computer program is described. The terms of the plate stiffness matrix using both classical plate theory (CPT) and first-order shear-deformation plate theory (SDPT) are presented. The effects of in-plane transverse and in-plane shear loads are included in the in-plane stability equations. Numerical results for several example problems with different loading states are presented. Comparisons of analyses using both physical and tensorial strain measures as well as CPT and SDPT are made. The computational effort required by the new analysis is compared to that of the analysis currently in the VICONOPT program. The effects of including terms related to in-plane transverse and in-plane shear loadings in the in-plane stability equations are also examined. Finally, results of a design-optimization study of two different cylindrical shells subject to uniform axial compression are presented.
Shuttle unified navigation filter, revision 1
NASA Technical Reports Server (NTRS)
Muller, E. S., Jr.
1973-01-01
Equations designed to meet the navigation requirements of the separate shuttle mission phases are presented in a series of reports entitled, Space Shuttle GN and C Equation Document. The development of these equations is based on performance studies carried out for each particular mission phase. Although navigation equations have been documented separately for each mission phase, a single unified navigation filter design is embodied in these separate designs. The purpose of this document is to present the shuttle navigation equations in the form in which they would most likely be coded: as the single unified navigation filter used in each mission phase. This document will then serve as a single general reference for the navigation equations, replacing each of the individual mission phase navigation documents (which may still be used as a description of a particular navigation phase).
Secondary School Mathematics Curriculum Improvement Study Information Bulletin 7.
ERIC Educational Resources Information Center
Secondary School Mathematics Curriculum Improvement Study, New York, NY.
The background, objectives, and design of Secondary School Mathematics Curriculum Improvement Study (SSMCIS) are summarized. Details are given of the content of the text series, "Unified Modern Mathematics," in the areas of algebra, geometry, linear algebra, probability and statistics, analysis (calculus), logic, and computer…
NASA Technical Reports Server (NTRS)
Yao, Tse-Min; Choi, Kyung K.
1987-01-01
An automatic regridding method and a three-dimensional shape design parameterization technique were constructed and integrated into a unified theory of shape design sensitivity analysis. An algorithm was developed for general shape design sensitivity analysis of three-dimensional elastic solids. Numerical implementation of this shape design sensitivity analysis method was carried out using the finite element code ANSYS. The unified theory of shape design sensitivity analysis uses the material derivative of continuum mechanics with a design velocity field that represents shape change effects over the structural design. Automatic regridding methods were developed by generating a domain velocity field with the boundary displacement method. Shape design parameterization for three-dimensional surface design problems was illustrated using a Bezier surface with boundary perturbations that depend linearly on the perturbation of design parameters. A linearization method of optimization, LINRM, was used to obtain optimum shapes. Three examples from different engineering disciplines were investigated to demonstrate the accuracy and versatility of this shape design sensitivity analysis method.
NASA Astrophysics Data System (ADS)
Arendt, V.; Shalchi, A.
2018-06-01
We explore numerically the transport of energetic particles in a turbulent magnetic field configuration. A test-particle code is employed to compute running diffusion coefficients as well as particle distribution functions in the different directions of space. Our numerical findings are compared with models commonly used in diffusion theory such as Gaussian distribution functions and solutions of the cosmic ray Fokker-Planck equation. Furthermore, we compare the running diffusion coefficients across the mean magnetic field with solutions obtained from the time-dependent version of the unified non-linear transport theory. In most cases we find that particle distribution functions are indeed of Gaussian form as long as a two-component turbulence model is employed. For turbulence setups with reduced dimensionality, however, the Gaussian distribution can no longer be obtained. It is also shown that the unified non-linear transport theory agrees with simulated perpendicular diffusion coefficients as long as the pure two-dimensional model is excluded.
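Running diffusion coefficients of the kind compared in this work are computed from trajectory ensembles as d_xx(t) = <(Δx)^2>/(2t); the sketch below does this for a plain random walk (a stand-in for the test-particle trajectories, not the paper's turbulence model) and checks the Gaussian variance relation.

```python
import numpy as np

# Unbiased random walk as a stand-in for test-particle motion across the
# mean field; the running diffusion coefficient is d_xx(t) = <x^2>/(2t).
rng = np.random.default_rng(0)
n_particles, n_steps, dt = 5000, 2000, 1.0

steps = rng.normal(scale=np.sqrt(dt), size=(n_particles, n_steps))
x = np.cumsum(steps, axis=1)                 # trajectories x(t)
t = dt * np.arange(1, n_steps + 1)

d_xx = (x**2).mean(axis=0) / (2.0 * t)       # running diffusion coefficient
print("late-time d_xx ~", d_xx[-10:].mean()) # approaches 1/2 for this walk

# Gaussian check: the final displacement variance should match 2 * d_xx * T,
# as done when testing simulated distributions against Gaussian models.
print("var(x_T) =", x[:, -1].var(), "vs 2*d_xx*T =", 2 * d_xx[-1] * t[-1])
```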
NASA Technical Reports Server (NTRS)
Gupta, R. N.; Rodkiewicz, C. M.
1975-01-01
The numerical results are obtained for heat transfer, skin friction, and viscous-interaction-induced pressure for a step-wise accelerated flat plate in hypersonic flow. In the unified approach used here, results are presented for both weak- and strong-interaction problems without employing any linearization scheme. With the numerical method used in this work, an accurate prediction of wall shear can be made for problems with plate velocity changes of 1% or larger. The results obtained indicate that the transient contribution to the induced pressure for helium is greater than that for air.
He, Jinwei; Ge, Miao; Wang, Congxia; Jiang, Naigui; Zhang, Mingxin; Yun, Pujun
2014-07-01
The aim of this study was to provide a scientific basis for a unified standard of the reference value of vital capacity (VC) of healthy subjects from 6 to 84 years old in China. The normal reference value of VC was correlated with seven geographical factors: altitude (X1), annual duration of sunshine (X2), annual mean air temperature (X3), annual mean relative humidity (X4), annual precipitation amount (X5), annual air temperature range (X6) and annual mean wind speed (X7). Predictive models were established by five different linear and nonlinear methods, and the best models were selected by t-test. The geographical distribution map of VC in different age groups was interpolated by Kriging's method using ArcGIS software. It was found that the correlation of VC and geographical factors in China was quite significant, especially for both males and females aged from 6 to 45. The best models were built for different age groups. The geographical distribution map shows the spatial variations of VC in China precisely. The VC of healthy subjects can be simulated by the best model, or acquired from the geographical distribution map, provided the geographical factors for that city or county of China are known.
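One of the candidate methods, multiple linear regression of VC on the seven factors, is sketched below with synthetic data; the coefficients, sample size and units are invented, and the point is only the fit-then-predict workflow the study describes for locations without measurements.

```python
import numpy as np

# Hypothetical rows: one site per row with the seven geographical factors
# X1..X7 (standardized) and a mean vital capacity in mL (synthetic numbers).
rng = np.random.default_rng(0)
G = rng.normal(size=(200, 7))
vc = (3500 + G @ np.array([-120, 15, 40, -10, 5, -25, 8])
      + rng.normal(scale=50, size=200))

# Ordinary least squares: VC = b0 + b1*X1 + ... + b7*X7.
A = np.column_stack([np.ones(len(G)), G])
coef, *_ = np.linalg.lstsq(A, vc, rcond=None)
print("intercept:", round(coef[0], 1))
print("factor coefficients:", np.round(coef[1:], 1))

# Predict VC for a new location from its geographical factors alone.
new_site = rng.normal(size=7)
print("predicted VC:", round(coef[0] + new_site @ coef[1:], 1), "mL")
```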
A Unified Model for BDS Wide Area and Local Area Augmentation Positioning Based on Raw Observations.
Tu, Rui; Zhang, Rui; Lu, Cuixian; Zhang, Pengfei; Liu, Jinhai; Lu, Xiaochun
2017-03-03
In this study, a unified model for BeiDou Navigation Satellite System (BDS) wide area and local area augmentation positioning based on raw observations has been proposed. Applying this model, both the Real-Time Kinematic (RTK) and Precise Point Positioning (PPP) service can be realized by performing different corrections at the user end. This algorithm was assessed and validated with the BDS data collected at four regional stations from Day of Year (DOY) 080 to 083 of 2016. When the users are located within the local reference network, the fast and high precision RTK service can be achieved using the regional observation corrections, revealing a convergence time of about several seconds and a precision of about 2-3 cm. For the users out of the regional reference network, the global broadcast State-Space Represented (SSR) corrections can be utilized to realize the global PPP service which shows a convergence time of about 25 min for achieving an accuracy of 10 cm. With this unified model, it can not only integrate the Network RTK (NRTK) and PPP into a seamless positioning service, but also recover the ionosphere Vertical Total Electronic Content (VTEC) and Differential Code Bias (DCB) values that are useful for the ionosphere monitoring and modeling.
A Unified Model for BDS Wide Area and Local Area Augmentation Positioning Based on Raw Observations
Tu, Rui; Zhang, Rui; Lu, Cuixian; Zhang, Pengfei; Liu, Jinhai; Lu, Xiaochun
2017-01-01
In this study, a unified model for BeiDou Navigation Satellite System (BDS) wide area and local area augmentation positioning based on raw observations has been proposed. Applying this model, both the Real-Time Kinematic (RTK) and Precise Point Positioning (PPP) service can be realized by performing different corrections at the user end. This algorithm was assessed and validated with the BDS data collected at four regional stations from Day of Year (DOY) 080 to 083 of 2016. When the users are located within the local reference network, the fast and high precision RTK service can be achieved using the regional observation corrections, revealing a convergence time of about several seconds and a precision of about 2–3 cm. For the users out of the regional reference network, the global broadcast State-Space Represented (SSR) corrections can be utilized to realize the global PPP service which shows a convergence time of about 25 min for achieving an accuracy of 10 cm. With this unified model, it can not only integrate the Network RTK (NRTK) and PPP into a seamless positioning service, but also recover the ionosphere Vertical Total Electronic Content (VTEC) and Differential Code Bias (DCB) values that are useful for the ionosphere monitoring and modeling. PMID:28273814
NASA Technical Reports Server (NTRS)
Banks, H. T.; Silcox, R. J.; Keeling, S. L.; Wang, C.
1989-01-01
A unified treatment of the linear quadratic tracking (LQT) problem, in which a control system's dynamics are modeled by a linear evolution equation with a nonhomogeneous component that is linearly dependent on the control function u, is presented; the treatment proceeds from the theoretical formulation to a numerical approximation framework. Attention is given to two categories of LQT problems in an infinite time interval: the finite energy and the finite average energy. The behavior of the optimal solution for finite time-interval problems as the length of the interval tends to infinity is discussed. Also presented are the formulations and properties of LQT problems in a finite time interval.
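For the finite-dimensional, time-invariant special case, the infinite-horizon LQ solution is routine to compute; the sketch below solves the algebraic Riccati equation for an invented two-state system. The tracking and infinite-dimensional aspects treated in the paper are not reproduced here.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Time-invariant special case of the infinite-horizon LQ problem:
# minimize the integral of x'Qx + u'Ru subject to dx/dt = Ax + Bu.
A = np.array([[0.0, 1.0], [0.0, -0.5]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve the algebraic Riccati equation and form the optimal feedback gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # optimal control u = -K x
print("gain K:", K)
print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
```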
Solution of the determinantal assignment problem using the Grassmann matrices
NASA Astrophysics Data System (ADS)
Karcanias, Nicos; Leventides, John
2016-02-01
The paper provides a direct solution to the determinantal assignment problem (DAP) which unifies all frequency assignment problems of linear control theory. The current approach is based on the solvability of the exterior equation v_1 ∧ v_2 ∧ ⋯ ∧ v_m = z, where z ∈ Λ^m(U) and U is an n-dimensional vector space over ℝ, which is an integral part of the solution of DAP. New criteria for the existence of a solution, and for its computation, are given, based on the properties of structured matrices referred to as Grassmann matrices. The solvability of this exterior equation is referred to as decomposability of z, and it is in turn characterised by the set of quadratic Plücker relations (QPRs) describing the Grassmann variety of the corresponding projective space. Alternative new tests for decomposability of the multi-vector z are given in terms of the rank properties of the Grassmann matrix, Φ(z), of the vector z, which is constructed from the coordinates of z. It is shown that the exterior equation is solvable (z is decomposable), if and only if dim N_r{Φ(z)} = m, where N_r{·} denotes the right null space; the solution space for a decomposable z is the space N_r{Φ(z)}. This provides an alternative linear algebra characterisation of the decomposability problem and of the Grassmann variety to that defined by the QPRs. Further properties of the Grassmann matrices are explored by defining the Hodge-Grassmann matrix as the dual of the Grassmann matrix. The connections of the Hodge-Grassmann matrix to the solution of exterior equations are examined, and an alternative new characterisation of decomposability is given in terms of the dimension of its image space. The framework based on the Grassmann matrices provides the means for the development of a new computational method for the solutions of the exact DAP (when such solutions exist), as well as for computing approximate solutions when exact solutions do not exist.
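In the smallest nontrivial case, the bivectors of a four-dimensional space, the Grassmann variety is cut out by a single QPR, and decomposability can be checked numerically; the sketch below verifies that relation on a bivector built as an explicit wedge product versus a random coordinate vector. The helper names are ours, not the paper's notation.

```python
import numpy as np
from itertools import combinations

def wedge2(u, v):
    """Plucker coordinates z_ij (i < j) of the bivector u ^ v in R^4."""
    return np.array([u[i] * v[j] - u[j] * v[i]
                     for i, j in combinations(range(4), 2)])

def plucker_residual(z):
    """The single QPR for bivectors in R^4: zero iff z is decomposable."""
    z12, z13, z14, z23, z24, z34 = z
    return z12 * z34 - z13 * z24 + z14 * z23

rng = np.random.default_rng(0)
u, v = rng.normal(size=4), rng.normal(size=4)
print(plucker_residual(wedge2(u, v)))        # ~0: decomposable by construction
print(plucker_residual(rng.normal(size=6)))  # generically nonzero
```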
Browning, Brian L.; Browning, Sharon R.
2009-01-01
We present methods for imputing data for ungenotyped markers and for inferring haplotype phase in large data sets of unrelated individuals and parent-offspring trios. Our methods make use of known haplotype phase when it is available, and our methods are computationally efficient so that the full information in large reference panels with thousands of individuals is utilized. We demonstrate that substantial gains in imputation accuracy accrue with increasingly large reference panel sizes, particularly when imputing low-frequency variants, and that unphased reference panels can provide highly accurate genotype imputation. We place our methodology in a unified framework that enables the simultaneous use of unphased and phased data from trios and unrelated individuals in a single analysis. For unrelated individuals, our imputation methods produce well-calibrated posterior genotype probabilities and highly accurate allele-frequency estimates. For trios, our haplotype-inference method is four orders of magnitude faster than the gold-standard PHASE program and has excellent accuracy. Our methods enable genotype imputation to be performed with unphased trio or unrelated reference panels, thus accounting for haplotype-phase uncertainty in the reference panel. We present a useful measure of imputation accuracy, allelic R2, and show that this measure can be estimated accurately from posterior genotype probabilities. Our methods are implemented in version 3.0 of the BEAGLE software package. PMID:19200528
NASA Astrophysics Data System (ADS)
Liu, Fei; Tong, Huan; Ma, Rui; Ou-Yang, Zhong-can
2010-12-01
A formal apparatus is developed to unify derivations of the linear response theory and a variety of transient fluctuation relations for continuous diffusion processes from a backward point of view. The basis is a perturbed Kolmogorov backward equation and the path integral representation of its solution. We find that these exact transient relations could be interpreted as a consequence of a generalized Chapman-Kolmogorov equation, which intrinsically arises from the Markovian characteristic of diffusion processes.
GREIT: a unified approach to 2D linear EIT reconstruction of lung images.
Adler, Andy; Arnold, John H; Bayford, Richard; Borsic, Andrea; Brown, Brian; Dixon, Paul; Faes, Theo J C; Frerichs, Inéz; Gagnon, Hervé; Gärber, Yvo; Grychtol, Bartłomiej; Hahn, Günter; Lionheart, William R B; Malik, Anjum; Patterson, Robert P; Stocks, Janet; Tizzard, Andrew; Weiler, Norbert; Wolf, Gerhard K
2009-06-01
Electrical impedance tomography (EIT) is an attractive method for clinically monitoring patients during mechanical ventilation, because it can provide a non-invasive continuous image of pulmonary impedance which indicates the distribution of ventilation. However, most clinical and physiological research in lung EIT is done using older and proprietary algorithms; this is an obstacle to interpretation of EIT images because the reconstructed images are not well characterized. To address this issue, we develop a consensus linear reconstruction algorithm for lung EIT, called GREIT (Graz consensus Reconstruction algorithm for EIT). This paper describes the unified approach to linear image reconstruction developed for GREIT. The framework for the linear reconstruction algorithm consists of (1) detailed finite element models of a representative adult and neonatal thorax, (2) consensus on the performance figures of merit for EIT image reconstruction and (3) a systematic approach to optimize a linear reconstruction matrix to desired performance measures. Consensus figures of merit, in order of importance, are (a) uniform amplitude response, (b) small and uniform position error, (c) small ringing artefacts, (d) uniform resolution, (e) limited shape deformation and (f) high resolution. Such figures of merit must be attained while maintaining small noise amplification and small sensitivity to electrode and boundary movement. This approach represents the consensus of a large and representative group of experts in EIT algorithm design and clinical applications for pulmonary monitoring. All software and data to implement and test the algorithm have been made available under an open source license which allows free research and commercial use.
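The essence of a "linear reconstruction matrix" can be sketched generically: precompute R from a sensitivity matrix, after which each image frame costs one matrix-vector product. The version below uses plain Tikhonov regularization on random stand-in data; GREIT itself instead optimizes R against the figures of merit listed above, so this is only an illustration of the algorithm class.

```python
import numpy as np

# Generic one-step linear reconstruction x = R v for difference EIT:
# R is precomputed from a stand-in sensitivity (Jacobian) matrix J.
rng = np.random.default_rng(0)
n_meas, n_pix = 208, 1024                 # e.g. 16-electrode protocol sizes
J = rng.normal(size=(n_meas, n_pix))      # stand-in sensitivity matrix

lam = 0.1                                 # regularization hyperparameter
R = np.linalg.solve(J.T @ J + lam**2 * np.eye(n_pix), J.T)

x_true = np.zeros(n_pix)
x_true[100:110] = 1.0                             # localized impedance change
v = J @ x_true + 0.01 * rng.normal(size=n_meas)   # measured voltage change
x_rec = R @ v                                     # one multiply per frame
print("reconstruction correlation:",
      np.corrcoef(x_true, x_rec)[0, 1].round(3))
```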
Survey of meshless and generalized finite element methods: A unified approach
NASA Astrophysics Data System (ADS)
Babuška, Ivo; Banerjee, Uday; Osborn, John E.
In the past few years meshless methods for numerically solving partial differential equations have come into the focus of interest, especially in the engineering community. This class of methods was essentially stimulated by difficulties related to mesh generation. Mesh generation is delicate in many situations, for instance, when the domain has complicated geometry; when the mesh changes with time, as in crack propagation, and remeshing is required at each time step; when a Lagrangian formulation is employed, especially with nonlinear PDEs. In addition, the need for flexibility in the selection of approximating functions (e.g., the flexibility to use non-polynomial approximating functions), has played a significant role in the development of meshless methods. There are many recent papers, and two books, on meshless methods; most of them are of an engineering character, without any mathematical analysis. In this paper we address meshless methods and the closely related generalized finite element methods for solving linear elliptic equations, using variational principles. We give a unified mathematical theory with proofs, briefly address implementational aspects, present illustrative numerical examples, and provide a list of references to the current literature. The aim of the paper is to provide a survey of a part of this new field, with emphasis on mathematics. We present proofs of essential theorems because we feel these proofs are essential for the understanding of the mathematical aspects of meshless methods, which has approximation theory as a major ingredient. As always, any new field is stimulated by and related to older ideas. This will be visible in our paper.
NASA Astrophysics Data System (ADS)
Liu, Zhijun; Zhang, Liangpei; Liu, Zhenmin; Jiao, Hongbo; Chen, Liqun
2008-12-01
In order to manage the internal resources of the Gulf of Tonkin and integrate multiple-source spatial data, a unified regional plan management system must be established. Data fusion and integration research must be carried out because several difficulties arise in the course of the system's establishment: the various planning and project data formats differ, data standards are not unified, the temporal character of the data is strong, and the spatial references are inconsistent. In this article ArcGIS Engine is introduced as the development platform, and key technologies are researched, such as multiple-source data transformation and fusion, the fusion and integration of remote sensing data and DEMs, and plan and project data integration. Practice shows that the system significantly improves the working efficiency of the Guangxi Gulf of Tonkin Economic Zone Management Committee and remarkably promotes the planning and construction work of the economic zone.
NASA Technical Reports Server (NTRS)
Davidson, Paul; Pineda, Evan J.; Heinrich, Christian; Waas, Anthony M.
2013-01-01
The open hole tensile and compressive strengths are important design parameters in qualifying fiber reinforced laminates for a wide variety of structural applications in the aerospace industry. In this paper, we present a unified model that can be used for predicting both these strengths (tensile and compressive) using the same set of coupon level, material property data. As a prelude to the unified computational model that follows, simplified approaches, referred to as "zeroth order", "first order", etc. with increasing levels of fidelity are first presented. The results and methods presented are practical and validated against experimental data. They serve as an introductory step in establishing a virtual building block, bottom-up approach to designing future airframe structures with composite materials. The results are useful for aerospace design engineers, particularly those that deal with airframe design.
NASA Astrophysics Data System (ADS)
Made Tirta, I.; Anggraeni, Dian
2018-04-01
Statistical models have developed rapidly in various directions to accommodate various types of data. Data collected from longitudinal, repeated-measures or clustered designs (whether continuous, binary, count, or ordinal) are likely to be correlated. Therefore, statistical models for independent responses, such as the Generalized Linear Model (GLM) and the Generalized Additive Model (GAM), are not appropriate. Several models are available for correlated responses, including GEEs (Generalized Estimating Equations) for marginal models and various mixed-effect models such as GLMM (Generalized Linear Mixed Models) and HGLM (Hierarchical Generalized Linear Models) for subject-specific models. These models are available in the free open-source software R, but they can only be accessed through a command-line interface (using scripts). On the other hand, most practical researchers rely very much on menu-based or Graphical User Interfaces (GUIs). We develop, using the Shiny framework, a standard pull-down-menu Web-GUI that unifies most models for correlated responses. The Web-GUI accommodates almost all needed features. It enables users to carry out and compare various models for repeated-measures data (GEE, GLMM, HGLM, GEE for nominal responses) much more easily through online menus. This paper discusses the features of the Web-GUI and illustrates their use. In general, we find that GEE, GLMM and HGLM gave very close results.
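The Shiny GUI wraps R packages, but the marginal-model side is easy to illustrate in Python as well; the sketch below fits a GEE with an exchangeable working correlation to synthetic clustered binary data using statsmodels. The data-generating numbers are hypothetical, and this is our illustration, not the paper's tool.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic clustered binary responses: 100 subjects, 5 repeated measures.
rng = np.random.default_rng(0)
n_subj, n_rep = 100, 5
subj = np.repeat(np.arange(n_subj), n_rep)
x = rng.normal(size=n_subj * n_rep)
u = np.repeat(rng.normal(scale=0.7, size=n_subj), n_rep)  # within-subject effect
p = 1 / (1 + np.exp(-(0.5 + 1.0 * x + u)))
df = pd.DataFrame({"y": rng.binomial(1, p), "x": x, "id": subj})

# Marginal model via GEE with an exchangeable working correlation.
model = sm.GEE.from_formula("y ~ x", groups="id", data=df,
                            family=sm.families.Binomial(),
                            cov_struct=sm.cov_struct.Exchangeable())
print(model.fit().summary())
```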
Insights into linearized rotor dynamics, Part 2
NASA Astrophysics Data System (ADS)
Adams, M. L.
1987-01-01
This paper builds upon its 1981 namesake to extend and propose ideas which focus on some unique problems at the current center of interest in rotor vibration technology. These problems pertain to the ongoing extension of the linearized rotor-bearing model to include other rotor-stator interactive forces such as seals and turbomachinery stages. A unified linear model is proposed and contains an axiom which requires the coefficient matrix of the highest order term, in an interactive force model, to be symmetric. The paper ends on a fundamental question, namely, the potential weakness inherent in the whole idea of mechanical impedance modeling of rotor-stator interactive fluid flow fields.
Jena Reference Air Set (JRAS): a multi-point scale anchor for isotope measurements of CO2 in air
NASA Astrophysics Data System (ADS)
Wendeberg, M.; Richter, J. M.; Rothe, M.; Brand, W. A.
2013-03-01
The need for a unifying scale anchor for isotopes of CO2 in air was brought to light at the 11th WMO/IAEA Meeting of Experts on Carbon Dioxide in Tokyo in 2001. During discussions about persistent discrepancies in isotope measurements between the world's leading laboratories, it was concluded that a unifying scale anchor for Vienna Pee Dee Belemnite (VPDB) of CO2 in air was desperately needed. Ten years later, at the 2011 Meeting of Experts on Carbon Dioxide in Wellington, it was recommended that the Jena Reference Air Set (JRAS) become the official scale anchor for isotope measurements of CO2 in air (Brailsford, 2012). The source of CO2 used for JRAS is two calcites. After releasing CO2 by reaction with phosphoric acid, the gases are mixed into CO2-free air. This procedure ensures both isotopic stability and longevity of the CO2. That the reference CO2 is generated from calcites and supplied as an air mixture is unique to JRAS. This is done to ensure that any measurement bias arising from the extraction procedure is eliminated. As every laboratory has its own procedure for extracting the CO2, this is of paramount importance if the local scales are to be unified with a common anchor. For a period of four years, JRAS has been evaluated through the IMECC1 program, which made it possible to distribute sets of JRAS gases to 13 laboratories worldwide. A summary of data from the six laboratories that have reported the full set of results is given here, along with a description of the production and maintenance of the JRAS scale anchors. 1 IMECC refers to the EU project "Infrastructure for Measurements of the European Carbon Cycle" (http://imecc.ipsl.jussieu.fr/).
Development and evaluation of a suite of isotope reference gases for methane in air
NASA Astrophysics Data System (ADS)
Sperlich, Peter; Uitslag, Nelly A. M.; Richter, Jürgen M.; Rothe, Michael; Geilmann, Heike; van der Veen, Carina; Röckmann, Thomas; Blunier, Thomas; Brand, Willi A.
2016-08-01
Measurements from multiple laboratories have to be related to unifying and traceable reference material in order to be comparable. However, such fundamental reference materials are not available for isotope ratios in atmospheric methane, which has led to misinterpretations of combined data sets in the past. We developed a method to produce a suite of synthetic CH4-in-air standard gases that can be used to unify methane isotope ratio measurements of laboratories in the atmospheric monitoring community. To this end, we calibrated a suite of pure methane gases of different methanogenic origin against the international reference materials that define the VSMOW (Vienna Standard Mean Ocean Water) and VPDB (Vienna Pee Dee Belemnite) isotope scales. The isotope ratios of our pure methane gases range between -320 and +40 ‰ for δ2H-CH4 and between -70 and -40 ‰ for δ13C-CH4, enveloping the isotope ratios of tropospheric methane (about -85 and -47 ‰ for δ2H-CH4 and δ13C-CH4 respectively). Estimated uncertainties, including the full traceability chain, are < 1.5 ‰ and < 0.2 ‰ for δ2H and δ13C calibrations respectively. Aliquots of the calibrated pure methane gases have been diluted with methane-free air to atmospheric methane levels and filled into 5 L glass flasks. The synthetic CH4-in-air standards comprise atmospheric oxygen/nitrogen ratios as well as argon, krypton and nitrous oxide mole fractions to prevent gas-specific measurement artefacts. The resulting synthetic CH4-in-air standards are referred to as JRAS-M16 (Jena Reference Air Set - Methane 2016) and will be available to the atmospheric monitoring community. JRAS-M16 may be used as a unifying isotope scale anchor for isotope ratio measurements in atmospheric methane, so that data sets can be merged into a consistent global data frame.
Tensor scale: An analytic approach with efficient computation and applications
Xu, Ziyue; Saha, Punam K.; Dasgupta, Soura
2015-01-01
Scale is a widely used notion in computer vision and image understanding that evolved in the form of scale-space theory, where the key idea is to represent and analyze an image at various resolutions. Recently, we introduced a notion of local morphometric scale referred to as “tensor scale” using an ellipsoidal model that yields a unified representation of structure size, orientation and anisotropy. In the previous work, tensor scale was described using a 2-D algorithmic approach, and a precise analytic definition was missing. Also, the application of tensor scale in 3-D using the previous framework is not practical due to high computational complexity. In this paper, an analytic definition of tensor scale is formulated for n-dimensional (n-D) images that captures local structure size, orientation and anisotropy. An efficient computational solution in 2- and 3-D using several novel differential geometric approaches is presented, and the accuracy of the results is experimentally examined. A matrix representation of tensor scale is also derived, facilitating several operations including tensor field smoothing to capture larger contextual knowledge. Finally, the applications of tensor scale in image filtering and n-linear interpolation are presented, and their performance is examined in comparison with respective state-of-the-art methods. Specifically, the performance of tensor scale based image filtering is compared with gradient and Weickert’s structure tensor based diffusive filtering algorithms, and the performance of tensor scale based n-linear interpolation is evaluated in comparison with standard n-linear and windowed-sinc interpolation methods. PMID:26236148
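As an illustration only (not the authors' analytic formulation), the idea of summarizing local structure orientation and anisotropy with an ellipse model can be sketched via the eigensystem of a 2-D second-moment matrix:

```python
import numpy as np

# Toy illustration (not the paper's algorithm): summarize local structure
# orientation and anisotropy by the eigensystem of a 2-D second-moment
# (structure-tensor-like) matrix built from image gradients.

def local_orientation_anisotropy(patch: np.ndarray):
    gy, gx = np.gradient(patch.astype(float))
    # Second-moment matrix averaged over the patch.
    J = np.array([[np.mean(gx * gx), np.mean(gx * gy)],
                  [np.mean(gx * gy), np.mean(gy * gy)]])
    evals, evecs = np.linalg.eigh(J)          # ascending eigenvalues
    lam_small, lam_large = evals
    orientation = np.arctan2(evecs[1, 0], evecs[0, 0])  # low-gradient axis
    anisotropy = 1.0 - lam_small / lam_large if lam_large > 0 else 0.0
    return orientation, anisotropy

# A synthetic patch with horizontal stripes: gradients point vertically, so
# the structure is elongated horizontally and strongly anisotropic.
patch = np.tile(np.sin(np.linspace(0, 4 * np.pi, 32))[:, None], (1, 32))
print(local_orientation_anisotropy(patch))
```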
Numerical solution of the general coupled nonlinear Schrödinger equations on unbounded domains.
Li, Hongwei; Guo, Yue
2017-12-01
The numerical solution of the general coupled nonlinear Schrödinger equations on unbounded domains is considered by applying the artificial boundary method in this paper. In order to design the local absorbing boundary conditions for the coupled nonlinear Schrödinger equations, we generalize the unified approach previously proposed [J. Zhang et al., Phys. Rev. E 78, 026709 (2008)]. Based on the methodology underlying the unified approach, the original problem is split into two parts, the linear and nonlinear terms; we then derive a one-way operator that approximates the linear term and makes the wave outgoing, and finally we combine the one-way operator with the nonlinear term to obtain the local absorbing boundary conditions. We then reduce the original problem to an initial boundary value problem on the bounded domain, which can be solved by the finite difference method. The stability of the reduced problem is also analyzed by introducing some auxiliary variables. Ample numerical examples are presented to verify the accuracy and effectiveness of our proposed method.
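The paper's local absorbing boundary conditions are not reproduced here; the sketch below only illustrates the linear/nonlinear splitting idea on a single cubic Schrödinger equation, with a Crank-Nicolson linear step on a truncated domain (hypothetical parameters, plain Dirichlet ends):

```python
import numpy as np

# Illustrative linear/nonlinear splitting for i u_t = -u_xx + g|u|^2 u on a
# truncated domain with homogeneous Dirichlet ends. This sketches only the
# splitting idea behind the unified approach; it does not implement the
# paper's local absorbing boundary conditions.

n, L, dt, g = 400, 40.0, 1e-3, 1.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
u = np.exp(-x**2 + 2j * x)                 # hypothetical initial wave packet

# Crank-Nicolson matrices for the linear step i u_t = -u_xx.
lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
       + np.diag(np.ones(n - 1), -1)) / dx**2
A = np.eye(n) - 0.5j * dt * lap
B = np.eye(n) + 0.5j * dt * lap

for _ in range(200):
    u *= np.exp(-1j * g * np.abs(u)**2 * dt)   # nonlinear phase step
    u = np.linalg.solve(A, B @ u)              # linear Crank-Nicolson step

print(np.sum(np.abs(u)**2) * dx)               # mass, approximately conserved
```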
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Yongbin; White, R. D.
In the calculation of the linearized Boltzmann collision operator for an inverse-square force law interaction (Coulomb interaction), F(r) = κ/r², we found that the widely used scattering angle cutoff θ ≥ θ_min is an incorrect practice, since the divergence still exists after the cutoff has been made. When the correct velocity change cutoff |v′ − v| ≥ δ_min is employed, the scattering angle can be integrated. A unified linearized Boltzmann collision operator for both inverse-square force law and rigid-sphere interactions is obtained. Like many other unified quantities such as transition moments, Fokker-Planck expansion coefficients and energy exchange rates obtained recently [Y. B. Chang and L. A. Viehland, AIP Adv. 1, 032128 (2011)], the difference between the two kinds of interactions is characterized by a parameter, γ, which is 1 for rigid-sphere interactions and −3 for inverse-square force law interactions. When the cutoff is removed by setting δ_min = 0, Hilbert's well-known kernel for rigid-sphere interactions is recovered for γ = 1.
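For reference, the textbook kinematic relation connecting the two cutoff variables in a binary collision can be stated as follows (a hedged aside, not quoted from the paper):

```latex
% Velocity change of a particle of mass m (reduced mass \mu, relative speed g)
% in a binary collision with scattering angle \theta, for the force law
% F(r) = \kappa / r^2:
\[
  \lvert \mathbf{v}' - \mathbf{v} \rvert \;=\; \frac{2\mu}{m}\, g\,
  \sin\!\left(\frac{\theta}{2}\right),
\]
% so a cutoff on the scattering angle and a cutoff on the velocity change
% coincide only at fixed relative speed g; integrated over all relative
% speeds, the two prescriptions differ, which is the distinction the
% abstract draws.
```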
Semiotics, Information Science, Documents and Computers.
ERIC Educational Resources Information Center
Warner, Julian
1990-01-01
Discusses the relationship and value of semiotics to the established domains of information science. Highlights include documentation; computer operations; the language of computing; automata theory; linguistics; speech and writing; and the written language as a unifying principle for the document and the computer. (93 references) (LRW)
24 CFR 3285.4 - Incorporation by reference (IBR).
Code of Federal Regulations, 2012 CFR
2012-04-01
...-6600, fax number (253) 565-7265. (1) PS1-95, Construction and Industrial Plywood (with typical APA... for Engineering Purposes (Unified Soil Classification System), 2000, IBR approved for the table at... purchase from the Structural Engineering Institute/American Society of Civil Engineers (SEI/ASCE), 1801...
24 CFR 3285.4 - Incorporation by reference (IBR).
Code of Federal Regulations, 2011 CFR
2011-04-01
...-6600, fax number (253) 565-7265. (1) PS1-95, Construction and Industrial Plywood (with typical APA... for Engineering Purposes (Unified Soil Classification System), 2000, IBR approved for the table at... purchase from the Structural Engineering Institute/American Society of Civil Engineers (SEI/ASCE), 1801...
A unified material decomposition framework for quantitative dual- and triple-energy CT imaging.
Zhao, Wei; Vernekohl, Don; Han, Fei; Han, Bin; Peng, Hao; Yang, Yong; Xing, Lei; Min, James K
2018-04-21
Many clinical applications depend critically on the accurate differentiation and classification of different types of materials in patient anatomy. This work introduces a unified framework for accurate nonlinear material decomposition and applies it, for the first time, to the concept of triple-energy CT (TECT) for enhanced material differentiation and classification, as well as to dual-energy CT (DECT). We express the polychromatic projection as a linear combination of line integrals of material-selective images. The material decomposition is then turned into a problem of minimizing the least-squares difference between measured and estimated CT projections. The optimization problem is solved iteratively by updating the line integrals. The proposed technique is evaluated by using several numerical phantom measurements under different scanning protocols. The triple-energy data acquisition is implemented at the scales of micro-CT and clinical CT imaging with a commercial "TwinBeam" dual-source DECT configuration and a fast kV-switching DECT configuration. Material decomposition and quantitative comparison with a photon counting detector, and in the presence of a bow-tie filter, are also performed. The proposed method provides quantitative material- and energy-selective images for realistic configurations of both DECT and TECT measurements. Compared to the polychromatic kV CT images, virtual monochromatic images show superior image quality. For the mouse phantom, quantitative measurements show that the differences between gadodiamide and iodine concentrations obtained using TECT and idealized photon counting CT (PCCT) are smaller than 8 and 1 mg/mL, respectively. TECT outperforms DECT for multicontrast CT imaging and is robust with respect to spectrum estimation. For the thorax phantom, the differences between the concentrations of the contrast map and the corresponding true reference values are smaller than 7 mg/mL for all of the realistic configurations. A unified framework for both DECT and TECT imaging has been established for the accurate extraction of material compositions using currently available commercial DECT configurations. The novel technique promises to provide an urgently needed solution for several CT-based diagnostic and therapy applications, especially for the diagnosis of cardiovascular and abdominal diseases where multicontrast imaging is involved. © 2018 American Association of Physicists in Medicine.
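A schematic sketch of the least-squares decomposition step for a single detector ray, with hypothetical two-material spectra and attenuation values (illustrating the formulation, not the authors' implementation):

```python
import numpy as np
from scipy.optimize import least_squares

# Schematic sketch of nonlinear material decomposition for one detector ray:
# estimate material line integrals A (two basis materials here) from
# polychromatic projections measured under several spectra. All spectra and
# attenuation values are hypothetical placeholders, not calibrated data.

E = np.array([40.0, 60.0, 80.0, 100.0])         # keV sample points (reference)
spectra = np.array([[0.5, 0.3, 0.15, 0.05],     # low-kV spectrum weights
                    [0.1, 0.3, 0.35, 0.25],     # high-kV spectrum weights
                    [0.2, 0.3, 0.30, 0.20]])    # third spectrum (TECT)
mu = np.array([[0.35, 0.25, 0.20, 0.18],        # material 1: mu(E), 1/cm
               [0.90, 0.45, 0.30, 0.25]])       # material 2: mu(E), 1/cm

def forward(A):
    # Polychromatic projection p_s = -ln( sum_k w_sk exp(-sum_m mu_mk A_m) )
    att = np.exp(-mu.T @ A)                     # transmission per energy bin
    return -np.log(spectra @ att)

A_true = np.array([2.0, 0.5])                   # cm of each basis material
p_meas = forward(A_true)                        # noiseless "measurement"

sol = least_squares(lambda A: forward(A) - p_meas, x0=np.array([1.0, 1.0]))
print(sol.x)                                    # recovers ~[2.0, 0.5]
```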
A unified development of several techniques for the representation of random vectors and data sets
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1973-01-01
Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
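A brief numerical sketch of the minimum-mean-squared-error property described above, using the eigenvectors of the sample covariance (random toy data):

```python
import numpy as np

# Numerical sketch of the minimum-MSE property described above: projecting
# centered data onto the leading eigenvectors of the sample covariance
# minimizes the mean squared reconstruction error over all orthonormal
# k-term expansions (Karhunen-Loeve / principal components). Toy data only.

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10)) @ rng.normal(size=(10, 10))  # correlated data
Xc = X - X.mean(axis=0)

cov = Xc.T @ Xc / (len(Xc) - 1)
evals, evecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
Uk = evecs[:, -3:]                     # top-3 eigenvectors

X_hat = Xc @ Uk @ Uk.T                 # k-term reconstruction
mse = np.mean(np.sum((Xc - X_hat) ** 2, axis=1))
# The per-vector MSE equals the sum of the discarded eigenvalues
# (up to the 1/(n-1) sample-covariance convention):
print(mse, evals[:-3].sum())
```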
Unification Theory of Optimal Life Histories and Linear Demographic Models in Internal Stochasticity
Oizumi, Ryo
2014-01-01
Life history of organisms is exposed to uncertainty generated by internal and external stochasticities. Internal stochasticity is generated by the randomness in each individual life history, such as randomness in food intake, genetic character and size growth rate, whereas external stochasticity is due to the environment. For instance, it is known that external stochasticity tends to affect population growth rate negatively. It has been shown in a recent theoretical study, using a path-integral formulation in structured linear demographic models, that internal stochasticity can affect population growth rate positively or negatively. However, internal stochasticity has not been the main subject of research. Taking account of the effect of internal stochasticity on the population growth rate, the fittest organism has the optimal control of life history affected by the stochasticity in the habitat. The study of this control is known as the optimal life schedule problem. In order to analyze the optimal control under internal stochasticity, we need to make use of “Stochastic Control Theory” in the optimal life schedule problem. There is, however, no such theory unifying optimal life history and internal stochasticity. This study focuses on an extension of optimal life schedule problems to unify the control theory of internal stochasticity with linear demographic models. First, we show the relationship between general age-states linear demographic models and stochastic control theory via several mathematical formulations, such as path-integral, integral equation, and transition matrix. Secondly, we apply our theory to a two-resource utilization model for two different breeding systems: semelparity and iteroparity. Finally, we show that the diversity of resources is important for species in a specific case. Our study shows that this unification theory can address risk hedges of life history in general age-states linear demographic models. PMID:24945258
NASA Astrophysics Data System (ADS)
Brown, Judith Ann
2000-10-01
Nine high school biology teachers from rural, suburban, and urban school settings were interviewed about what idea, topic, or concept, if any, they use to unify their high school biology curriculum. Professional scientists and educational organizations have proposed that high school biology teachers use "biological evolution" as a unifying concept of their curriculum. Interviews, concept maps, and classroom syllabi and outlines were provided as data for these nine case studies. Each teacher was asked what topics were included in their curriculum to determine if a wide enough content was taught to warrant unification. The teachers' responses were compared to content and concepts listed in the National Science Content Standards, the Washington State Essential Academic Learning Requirements for Science, and a paper (Hurd, Bybee, Kahle, and Yager, 1980) that proposed what and how topics should be taught in the high school biology class by the year 2000. The nine teachers were asked to draw a concept map showing how these topics were interrelated and what concept, if any, "unified" them. A unifying concept is defined as a concept introduced early in the year and referred back to whenever new topics are introduced, illustrating how the new topic is related to the previous ones through this unifying concept. Seven of the nine teachers did use at least one unifying concept. Two use evolution or natural selection, two use the web of life, two use the characteristics of life, and one uses scientific inquiry. The data collected in this study indicate high school teachers use concepts such as the web of life and the characteristics of life because they believe they are easily understood by their students and less controversial than the theory of evolution. This study also presents evidence that school setting has minimal influence on what concept teachers use to unify their curriculum and that teachers have a significant amount of academic freedom to choose what they want to teach and how they want to teach it. The data suggest that teachers choose their unifying concept based on their personal beliefs of what their students will accept and on their past interactions with parents, students, and administrators when various unifying concepts were used.
A unified view on weakly correlated recurrent networks
Grytskyy, Dmytro; Tetzlaff, Tom; Diesmann, Markus; Helias, Moritz
2013-01-01
The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances in the spiking activity raises the question how these models relate to each other. In particular it is hard to distinguish between generic properties of covariances and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire (LIF) model, and the Hawkes process. We show that linear approximation maps each of these models to either of two classes of linear rate models (LRM), including the Ornstein–Uhlenbeck process (OUP) as a special case. The distinction between both classes is the location of additive noise in the rate dynamics, which is located on the output side for spiking models and on the input side for the binary model. Both classes allow closed form solutions for the covariance. For output noise it separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the situation with synaptic conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for the calculation of population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking into account fluctuations in the linearization procedure increases the accuracy of the effective theory and we explain the class dependent differences between covariances in the time and the frequency domain. Finally we show that the oscillatory instability emerging in networks of LIF models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the population power spectra. PMID:24151463
Coplen, Tyler B.; Brand, Willi A.; Assonov, Sergey S.
2010-01-01
Measurements of δ(13C) determined on CO2 with an isotope-ratio mass spectrometer (IRMS) must be corrected for the amount of 17O in the CO2. For data consistency, this must be done using identical methods by different laboratories. This report aims at unifying data treatment for CO2 IRMS by proposing (i) a unified set of numerical values, and (ii) a unified correction algorithm, based on a simple, linear approximation formula. Because the oxygen of natural CO2 is derived mostly from the global water pool, it is recommended that a value of 0.528 be employed for the factor λ, which relates differences in 17O and 18O abundances. With the currently accepted N(13C)/N(12C) of 0.011 180(28) in VPDB (Vienna Peedee belemnite), reevaluation of data yields a value of 0.000 393(1) for the oxygen isotope ratio N(17O)/N(16O) of the evolved CO2. The ratio of these quantities, a ratio of isotope ratios, is essential for the 17O abundance correction: [N(17O)/N(16O)]/[N(13C)/N(12C)] = 0.035 16(8). The equation δ(13C) ≈ 45δ(VPDB-CO2) + 2(17R/13R)·[45δ(VPDB-CO2) − λ·46δ(VPDB-CO2)], where 45δ and 46δ denote the measured ion-current ratio deltas and 17R/13R the ratio of isotope ratios above, closely approximates δ(13C) values with less than 0.010 ‰ deviation for normal oxygen-bearing materials and no more than 0.026 ‰ in extreme cases. Other materials containing oxygen of non-mass-dependent isotope composition require a more specific data treatment. A similar linear approximation is also suggested for δ(18O). The linear approximations are easy to implement in a data spreadsheet, and also help in generating a simplified uncertainty budget.
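A minimal spreadsheet-style sketch of the linear approximation above, using the recommended constants λ = 0.528 and 17R/13R = 0.035 16; the input delta values in the example are hypothetical:

```python
# Linear approximation of the 17O correction described above, with the
# recommended constants: lambda = 0.528 and the ratio of isotope ratios
# (17R/13R) = 0.03516. Inputs are the raw ion-current ratio deltas 45-delta
# and 46-delta versus VPDB-CO2, in per mil.

LAMBDA = 0.528
R17_OVER_R13 = 0.03516

def delta13C_approx(d45: float, d46: float) -> float:
    """delta13C ~= d45 + 2*(17R/13R)*(d45 - lambda*d46), all in per mil."""
    return d45 + 2.0 * R17_OVER_R13 * (d45 - LAMBDA * d46)

# Hypothetical example values for a natural CO2 sample:
print(delta13C_approx(d45=-10.0, d46=-8.0))
```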
Lexical Problems in Large Distributed Information Systems.
ERIC Educational Resources Information Center
Berkovich, Simon Ya; Shneiderman, Ben
1980-01-01
Suggests a unified concept of a lexical subsystem as part of an information system to deal with lexical problems in local and network environments. The linguistic and control functions of the lexical subsystems in solving problems for large computer systems are described, and references are included. (Author/BK)
NASA Astrophysics Data System (ADS)
Motta Dias, M. H.; Jansen, K. M. B.; Luinge, J. W.; Bersee, H. E. N.; Benedictus, R.
2016-06-01
The influence of fiber-matrix adhesion on the linear viscoelastic creep behavior of 'as received' and 'surface modified' carbon fiber (AR-CF and SM-CF, respectively) reinforced polyphenylene sulfide (PPS) composite materials was investigated. Short-term tensile creep tests were performed on ±45° specimens under six different isothermal conditions: 40, 50, 60, 65, 70 and 75 °C. Physical aging effects were evaluated in both systems using the short-term test method established by Struik. The results showed that the shapes of the curves were affected neither by physical aging nor by the test temperature, thus allowing superposition to be made. A unified model was proposed with a single physical-aging- and temperature-dependent shift factor, a_{T,te}. It was suggested that the surface treatment carried out on SM-CF/PPS had two major effects on the creep response of CF/PPS composites at a reference temperature of 40 °C: a lowering of the initial compliance of about 25 % and a slowing down of the creep response of about 1.1 decades.
A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.
Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem
2018-06-12
Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain state-of-the-art predictive performance, but theoretical analysis of their behavior remains underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as special cases of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
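As a hedged illustration of the closed forms discussed above, the Kronecker kernel ridge regression solution can be computed with the eigendecomposition ("vec") trick; the kernels and labels below are random toy data, not any benchmark from the paper:

```python
import numpy as np

# Sketch of Kronecker kernel ridge regression via the eigendecomposition
# ("vec") trick: solve (Kv (x) Ku + lam*I) vec(W) = vec(Y) without ever
# forming the Kronecker product. Toy PSD kernels stand in for real ones.

rng = np.random.default_rng(1)
m, n, lam = 30, 20, 0.1
Au, Av = rng.normal(size=(m, m)), rng.normal(size=(n, n))
Ku, Kv = Au @ Au.T, Av @ Av.T                 # toy PSD kernel matrices
Y = rng.normal(size=(m, n))                   # labels for all object pairs

su, U = np.linalg.eigh(Ku)
sv, V = np.linalg.eigh(Kv)
W = U @ ((U.T @ Y @ V) / (np.outer(su, sv) + lam)) @ V.T  # dual coefficients
F = Ku @ W @ Kv                               # fitted values on training pairs

# Sanity check against the explicit Kronecker system (toy sizes only):
w_vec = np.linalg.solve(np.kron(Kv, Ku) + lam * np.eye(m * n), Y.flatten("F"))
print(np.allclose(W.flatten("F"), w_vec))     # True
```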
A Fully Associative, Non-Linear Kinematic, Unified Viscoplastic Model for Titanium Based Matrices
NASA Technical Reports Server (NTRS)
Arnold, S. M.; Saleeb, A. F.; Castelli, M. G.
1994-01-01
Specific forms for both the Gibbs and complementary dissipation potentials are chosen such that a complete (i.e., fully associative) potential-based multiaxial unified viscoplastic model is obtained. This model possesses one tensorial internal state variable that is associated with dislocation substructure, with an evolutionary law that has nonlinear kinematic hardening and both thermal and strain-induced recovery mechanisms. A unique aspect of the present model is the inclusion of non-linear hardening through the use of a compliance operator, derived from the Gibbs potential, in the evolution law for the back stress. This non-linear tensorial operator is significant in that it allows both the flow and evolutionary laws to be fully associative (and therefore easily integrated) and greatly influences the multiaxial response under non-proportional loading paths. In addition to this nonlinear compliance operator, a new consistent, potential-preserving, internal strain unloading criterion has been introduced to prevent abnormalities in the predicted stress-strain curves, which are present with nonlinear hardening formulations, during unloading and reversed loading of the external variables. Specification of an experimental program for the complete determination of the material functions and parameters for characterizing a metallic matrix, e.g., TIMETAL 21S, is given. The experiments utilized are tensile, creep, and step creep tests. Finally, a comparison of this model and a commonly used Bodner-Partom model is made on the basis of predictive accuracy and numerical efficiency.
On Similarity Coefficients for 2x2 Tables and Correction for Chance
ERIC Educational Resources Information Center
Warrens, Matthijs J.
2008-01-01
This paper studies correction for chance in coefficients that are linear functions of the observed proportion of agreement. The paper unifies and extends various results on correction for chance in the literature. A specific class of coefficients is used to illustrate the results derived in this paper. Coefficients in this class, e.g. the simple…
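A small sketch of the chance-correction recipe for a 2×2 table, in the kappa-like form S = (p_o − p_e)/(1 − p_e); the table counts are hypothetical:

```python
# Sketch of correction for chance for a 2x2 table coefficient that is a
# linear function of the observed proportion of agreement. With observed
# agreement p_o and chance-expected agreement p_e (from the marginals),
# the corrected coefficient is (p_o - p_e) / (1 - p_e), as in Cohen's kappa.

def chance_corrected(a: int, b: int, c: int, d: int) -> float:
    n = a + b + c + d
    p_o = (a + d) / n                                     # observed agreement
    p_e = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical 2x2 table: a=40 both present, b=10, c=5, d=45 both absent.
print(chance_corrected(40, 10, 5, 45))
```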
Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.
ERIC Educational Resources Information Center
Goetschel, Roy; Voxman, William
1987-01-01
Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
A Unified Approach to Optimization
2014-10-02
employee scheduling, ad placement, Latin squares, disjunctions of linear systems, temporal modeling with interval variables, and traveling salesman problems ... integrating technologies. A key to integrated modeling is to formulate a problem with high-level metaconstraints, which are inspired by the "global constraints" of constraint programming and convey problem substructure to the solver. This contrasts with the atomistic modeling style of mixed integer programming (MIP) and satisfiability (SAT) solvers
In Search of Optimal Cognitive Diagnostic Model(s) for ESL Grammar Test Data
ERIC Educational Resources Information Center
Yi, Yeon-Sook
2017-01-01
This study compares five cognitive diagnostic models in search of the optimal one(s) for English as a Second Language grammar test data. Using a unified modeling framework that can represent specific models with proper constraints, the article first fit the full model (the log-linear cognitive diagnostic model, LCDM) and investigated which model…
Business Intelligence: Applying the Unified Theory of Acceptance and Use of Technology
ERIC Educational Resources Information Center
Pope, Angela D.
2014-01-01
The purpose of this study was to explore the variables that affect an individual's intention to use business intelligence technology in organizations. Constructs in the study were social influence, performance expectancy, effort expectancy, and behavioral intention. Social influence refers to verbal comments from executives and coworkers that…
Affiliate Stigma among Caregivers of People with Intellectual Disability or Mental Illness
ERIC Educational Resources Information Center
Mak, Winnie W. S.; Cheung, Rebecca Y. M.
2008-01-01
Background: Affiliate stigma refers to the extent of self-stigmatization among associates of the targeted minorities. Given that previous studies on caregiver stigma were mostly qualitative in nature, a conceptually based, unified, quantitative instrument to measure affiliate stigma is still lacking. Materials and Methods: Two hundred and ten…
A Unified Approach to Electron Counting in Main-Group Clusters
ERIC Educational Resources Information Center
McGrady, John E.
2004-01-01
An extensive review of traditional approaches to teaching electron counting is presented. The electron-precise clusters are usually taken as a reference point for rationalizing the structures of their electron-rich counterparts, which are characterized by valence electron counts greater than 5n.
Bilingual Program Application for Continuation Proposal: Compton Unified School District.
ERIC Educational Resources Information Center
Compton City Schools, CA.
This document contains the continuation proposal for the fourth grade Compton bilingual education program. A review of the third year is included with details on process evaluation, project personnel and duties, new vocabulary developed by the project for lexical references, and inservice training of teachers. Information concerning the proposed…
The Bachelor’s Degree in Military Arts and Science: A Foundation for Key Leader Development
2016-06-10
8 Headquarters, Department of the Army, Army Regulation 350-1, Army Training and Leader Development (Washington, DC: HQ DA G3/5/7, 19 September...Units and Developing Leaders (Washington, DC: HQ DA G3/5/7, 23 August 2012). 10 Headquarters, Department of the Army, Army Doctrine Reference...Publication 3, Unified Land Operations (Washington, DC: HQ DA G3/5/7, 16 May 2012). 11 Headquarters, Department of the Army, Army Doctrine Reference
Requirements for data integration platforms in biomedical research networks: a reference model.
Ganzinger, Matthias; Knaup, Petra
2015-01-01
Biomedical research networks need to integrate research data among their members and with external partners. To support such data sharing activities, an adequate information technology infrastructure is necessary. To facilitate the establishment of such an infrastructure, we developed a reference model for the requirements. The reference model consists of five reference goals and 15 reference requirements. Using the Unified Modeling Language, the goals and requirements are set into relation to each other. In addition, all goals and requirements are described textually in tables. This reference model can be used by research networks as a basis for a resource efficient acquisition of their project specific requirements. Furthermore, a concrete instance of the reference model is described for a research network on liver cancer. The reference model is transferred into a requirements model of the specific network. Based on this concrete requirements model, a service-oriented information technology architecture is derived and also described in this paper.
Liu, Meiqin; Zhang, Senlin
2008-10-01
A unified neural network model termed the standard neural network model (SNNM) is proposed. Based on the robust L2 gain (i.e., robust H∞ performance) analysis of the SNNM with external disturbances, a state-feedback control law is designed for the SNNM to stabilize the closed-loop system and eliminate the effect of external disturbances. The control design constraints are shown to be a set of linear matrix inequalities (LMIs), which can be easily solved by various convex optimization algorithms (e.g., interior-point algorithms) to determine the control law. Most discrete-time recurrent neural networks (RNNs) and discrete-time nonlinear systems modelled by neural networks or Takagi-Sugeno (T-S) fuzzy models can be transformed into SNNMs, so that robust H∞ performance analysis or robust H∞ controller synthesis can be carried out in a unified SNNM framework. Finally, some examples are presented to illustrate the wide applicability of SNNMs to nonlinear systems, and the proposed approach is compared with related methods reported in the literature.
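For readers unfamiliar with LMI-based synthesis, the stabilization core of such a design can be written schematically as follows (the paper adds robust H∞ performance constraints on top of this):

```latex
% Schematic stabilization core of LMI-based state-feedback synthesis for a
% discrete-time system x_{k+1} = A x_k + B u_k (the paper's design adds
% robust H-infinity performance constraints on top of this):
\[
  \text{find } Q = Q^{\top} \succ 0,\; Y \quad \text{such that} \quad
  \begin{bmatrix} Q & (AQ + BY)^{\top} \\ AQ + BY & Q \end{bmatrix} \succ 0,
  \qquad K = Y Q^{-1},
\]
% which, by a Schur complement, is the Lyapunov-type inequality
% (A + BK)\,Q\,(A + BK)^{\top} - Q \prec 0, so A + BK is Schur stable.
```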
Palacios-Flores, Kim; García-Sotelo, Jair; Castillo, Alejandra; Uribe, Carina; Aguilar, Luis; Morales, Lucía; Gómez-Romero, Laura; Reyes, José; Garciarubio, Alejandro; Boege, Margareta; Dávila, Guillermo
2018-01-01
We present a conceptually simple, sensitive, precise, and essentially nonstatistical solution for the analysis of genome variation in haploid organisms. The generation of a Perfect Match Genomic Landscape (PMGL), which computes intergenome identity with single nucleotide resolution, reveals signatures of variation wherever a query genome differs from a reference genome. Such signatures encode the precise location of different types of variants, including single nucleotide variants, deletions, insertions, and amplifications, effectively introducing the concept of a general signature of variation. The precise nature of variants is then resolved through the generation of targeted alignments between specific sets of sequence reads and known regions of the reference genome. Thus, the perfect match logic decouples the identification of the location of variants from the characterization of their nature, providing a unified framework for the detection of genome variation. We assessed the performance of the PMGL strategy via simulation experiments. We determined the variation profiles of natural genomes and of a synthetic chromosome, both in the context of haploid yeast strains. Our approach uncovered variants that have previously escaped detection. Moreover, our strategy is ideally suited for further refining high-quality reference genomes. The source codes for the automated PMGL pipeline have been deposited in a public repository. PMID:29367403
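A toy sketch of the perfect-match logic (illustrative only: the published pipeline indexes sequence reads rather than whole query genomes, and the sequences below are made up):

```python
# Toy illustration of the perfect-match logic described above: slide every
# k-mer of a query sequence over a reference and record, per reference
# position, the depth of exact k-mer matches. Dips in this landscape mark
# signatures of variation. Real PMGL pipelines index reads, not genomes.

from collections import defaultdict

def pm_landscape(reference: str, query: str, k: int = 5):
    index = defaultdict(list)                  # k-mer -> reference positions
    for i in range(len(reference) - k + 1):
        index[reference[i:i + k]].append(i)
    depth = [0] * len(reference)
    for j in range(len(query) - k + 1):        # every query k-mer
        for pos in index.get(query[j:j + k], []):
            for p in range(pos, pos + k):      # covers k reference bases
                depth[p] += 1
    return depth

ref = "ACGTACGGTACCTTAGCAT"
qry = "ACGTACGATACCTTAGCAT"                    # single-nucleotide variant
print(pm_landscape(ref, qry))                  # depth dips around the variant
```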
NASA Technical Reports Server (NTRS)
McGowan, David Michael
1997-01-01
The analytical formulation of curved-plate non-linear equilibrium equations including transverse-shear-deformation effects is presented. The formulation uses the principle of virtual work. A unified set of non-linear strains that contains terms from both physical and tensorial strain measures is used. Linearized, perturbed equilibrium equations (stability equations) that describe the response of the plate just after buckling occurs are then derived after the application of several simplifying assumptions. These equations are then modified to allow the reference surface of the plate to be located at a distance z_c from the centroidal surface. The implementation of the new theory into the VICONOPT exact buckling and vibration analysis and optimum design computer program is described as well. The terms of the plate stiffness matrix using both Classical Plate Theory (CPT) and first-order Shear-Deformation Plate Theory (SDPT) are presented. The necessary steps to include the effects of in-plane transverse and in-plane shear loads in the in-plane stability equations are also outlined. Numerical results are presented using the newly implemented capability. Comparisons of results for several example problems with different loading states are made. Comparisons of analyses using both physical and tensorial strain measures as well as CPT and SDPT are also made. Results comparing the computational effort required by the new analysis to that of the analysis currently in the VICONOPT program are presented. The effects of including terms related to in-plane transverse and in-plane shear loadings in the in-plane stability equations are also examined. Finally, results of a design-optimization study of two different cylindrical shells subject to uniform axial compression are presented.
NASA Astrophysics Data System (ADS)
Taousser, Fatima; Defoort, Michael; Djemai, Mohamed
2016-01-01
This paper investigates the consensus problem for linear multi-agent systems with fixed communication topology in the presence of intermittent communication, using time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens on a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to the intermittent information transmissions. Time-scale theory provides a powerful tool to combine the continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
Coordinate references for the indoor/outdoor seamless positioning
NASA Astrophysics Data System (ADS)
Ruan, Ling; Zhang, Ling; Long, Yi; Cheng, Fei
2018-05-01
Indoor positioning technologies are being developed rapidly, and seamless positioning that connects indoor and outdoor space is a new trend. Indoor and outdoor positioning do not use the same coordinate system, and different indoor positioning scenes use different local coordinate reference systems. A specific and unified coordinate reference frame is needed as the spatial basis and premise for seamless positioning applications. Trajectory analysis integrating indoor and outdoor segments also requires a uniform coordinate reference. However, a coordinate reference frame for seamless positioning that can be applied to various complex scenarios has long been lacking. In this paper, we propose a universal coordinate reference frame for indoor/outdoor seamless positioning. The research focuses on analyzing and classifying indoor positioning scenes and puts forward methods for establishing coordinate reference systems and for coordinate transformation in each scene. Finally, experiments verified the feasibility of the calibration method.
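One routine ingredient of such a unified frame is estimating the transformation between a local indoor frame and a global frame from surveyed control points; the sketch below fits a 2-D similarity (Helmert-style) transform by least squares, with hypothetical coordinates:

```python
import numpy as np

# Sketch of one ingredient of a unified coordinate reference: estimate a 2-D
# similarity (Helmert-style) transform mapping a local indoor frame into a
# global frame from surveyed control points. Parameters a, b encode scale and
# rotation; tx, ty encode translation. All coordinates are hypothetical.

local = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 5.0], [0.0, 5.0]])
glob = np.array([[100.0, 200.0], [107.1, 207.1], [103.5, 210.6], [96.5, 203.5]])

# Model: X = a*x - b*y + tx,  Y = b*x + a*y + ty  ->  linear in (a, b, tx, ty).
rows = []
for (x, y) in local:
    rows.append([x, -y, 1.0, 0.0])   # equation for the global X coordinate
    rows.append([y, x, 0.0, 1.0])    # equation for the global Y coordinate
A = np.array(rows)
obs = glob.reshape(-1)               # [X1, Y1, X2, Y2, ...]
params, *_ = np.linalg.lstsq(A, obs, rcond=None)
a, b, tx, ty = params

scale = np.hypot(a, b)               # recovered scale factor
theta = np.degrees(np.arctan2(b, a)) # recovered rotation angle
print(scale, theta, tx, ty)          # ~1.0, ~45 deg, ~100, ~200
```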
NASA Technical Reports Server (NTRS)
Yedavalli, R. K.
1992-01-01
The problem of analyzing and designing controllers for linear systems subject to real parameter uncertainty is considered. An elegant, unified theory for robust eigenvalue placement is presented for a class of D-regions defined by algebraic inequalities by extending the nominal matrix root clustering theory of Gutman and Jury (1981) to linear uncertain time systems. The author presents explicit conditions for matrix root clustering for different D-regions and establishes the relationship between the eigenvalue migration range and the parameter range. The bounds are all obtained by one-shot computation in the matrix domain and do not need any frequency sweeping or parameter gridding. The method uses the generalized Lyapunov theory for getting the bounds.
Papadimitriou, Konstantinos I.; Liu, Shih-Chii; Indiveri, Giacomo; Drakakis, Emmanuel M.
2014-01-01
The field of neuromorphic silicon synapse circuits is revisited and a parsimonious mathematical framework able to describe the dynamics of this class of log-domain circuits in the aggregate and in a systematic manner is proposed. Starting from the Bernoulli Cell Formalism (BCF), originally formulated for the modular synthesis and analysis of externally linear, time-invariant logarithmic filters, and by means of the identification of new types of Bernoulli Cell (BC) operators presented here, a generalized formalism (GBCF) is established. The expanded formalism covers two new possible and practical combinations of a MOS transistor (MOST) and a linear capacitor. The corresponding mathematical relations codifying each case are presented and discussed through the tutorial treatment of three well-known transistor-level examples of log-domain neuromorphic silicon synapses. The proposed mathematical tool unifies past analysis approaches of the same circuits under a common theoretical framework. The speed advantage of the proposed mathematical framework as an analysis tool is also demonstrated by a compelling comparative circuit analysis example of high order, where the GBCF and another well-known log-domain circuit analysis method are used for the determination of the input-output transfer function of the high (4th) order topology. PMID:25653579
Lift and drag in three-dimensional steady viscous and compressible flow
NASA Astrophysics Data System (ADS)
Liu, L. Q.; Wu, J. Z.; Su, W. D.; Kang, L. L.
2017-11-01
In a recent paper, Liu, Zhu, and Wu ["Lift and drag in two-dimensional steady viscous and compressible flow," J. Fluid Mech. 784, 304-341 (2015)] present a force theory for a body in a two-dimensional, viscous, compressible, and steady flow. In this companion paper, we do the same for three-dimensional flows. Using the fundamental solution of the linearized Navier-Stokes equations, we improve the force formula for incompressible flows originally derived by Goldstein in 1931 and summarized by Milne-Thomson in 1968, both being far from complete, to its perfect final form, which is further proved to be universally true from subsonic to supersonic flows. We call this result the unified force theorem, which states that the forces are always determined by the vector circulation Γϕ of longitudinal velocity and the scalar inflow Qψ of transverse velocity. Since this theorem is not directly observable either experimentally or computationally, a testable version is also derived, which, however, holds only in the linear far field. We name this version the testable unified force formula. After that, a general principle to increase the lift-drag ratio is proposed.
Conceptualizing the Suicide-Alcohol Relationship.
ERIC Educational Resources Information Center
Rogers, James R.
Despite the strong empirical evidence linking alcohol use across varying levels to suicidal behavior, the field lacks a unifying theoretical framework in this area. The concept of alcohol-induced myopia has been proposed to explain the varied effects of alcohol on the behaviors of individuals who drink. The term "alcohol myopia" refers to its…
Science Teachers' Proficiency Levels and Patterns of TPACK in a Practical Context
ERIC Educational Resources Information Center
Yeh, Yi-Fen; Lin, Tzu-Chiang; Hsu, Ying-Shao; Wu, Hisn-Kai; Hwang, Fu-Kwun
2015-01-01
Technological pedagogical content knowledge-practical (TPACK-P) refers to a unified body of knowledge that teachers develop from and for actual teaching practices with information communication technologies (ICT). This study attempted to unveil the longitudinal and multidimensional development of knowledge that teachers possess by interviewing 40…
Teacher Bilingual Instruction and Educational Malpractice: California Teachers Association v. Davis.
ERIC Educational Resources Information Center
DeMitchell, Todd A.
2000-01-01
As a policy pronouncement, California's Proposition 227 mandates a duty of care that educators owe their students. Failure to teach primarily in English creates a private cause of action against an educator that overcomes legal and policy concerns of "Peter W. v. San Francisco Unified School District." (Contains 57 notes and references.)
Applications of the Functional Writing Model in Technical and Professional Writing.
ERIC Educational Resources Information Center
Brostoff, Anita
The functional writing model is a method by which students learn to devise and organize a written argument. Salient features of functional writing include the organizing idea (a component that logically unifies a paragraph or sequence of paragraphs), the reader's frame of reference, forecasting (prediction of the sequence by which the organizing…
Unifying Psychology and Experiential Education: Toward an Integrated Understanding of "Why" It Works
ERIC Educational Resources Information Center
Houge Mackenzie, Susan; Son, Julie S.; Hollenhorst, Steve
2014-01-01
This article examines the significance of psychology to experiential education (EE) and critiques EE models that have developed in isolation from larger psychological theories and developments. Following a review of literature and current issues, select areas of psychology are explored with reference to experiential learning processes. The state…
Techniques for Single System Integration of Elastic Simulation Features
NASA Astrophysics Data System (ADS)
Mitchell, Nathan M.
Techniques for simulating the behavior of elastic objects have matured considerably over the last several decades, tackling diverse problems from non-linear models for incompressibility to accurate self-collisions. Alongside these contributions, advances in parallel hardware design and algorithms have made simulation more efficient and affordable than ever before. However, prior research often has had to commit to design choices that compromise certain simulation features to better optimize others, resulting in a fragmented landscape of solutions. For complex, real-world tasks, such as virtual surgery, a holistic approach is desirable, where complex behavior, performance, and ease of modeling are supported equally. This dissertation caters to this goal in the form of several interconnected threads of investigation, each of which contributes a piece of an unified solution. First, it will be demonstrated how various non-linear materials can be combined with lattice deformers to yield simulations with behavioral richness and a high potential for parallelism. This potential will be exploited to show how a hybrid solver approach based on large macroblocks can accelerate the convergence of these deformers. Further extensions of the lattice concept with non-manifold topology will allow for efficient processing of self-collisions and topology change. Finally, these concepts will be explored in the context of a case study on virtual plastic surgery, demonstrating a real-world problem space where these ideas can be combined to build an expressive authoring tool, allowing surgeons to record procedures digitally for future reference or education.
NASA Technical Reports Server (NTRS)
Chung, Ching-Luan
1990-01-01
The term trajectory planning has been used to refer to the process of determining the time history of each joint variable corresponding to a specified trajectory of the end effector. The trajectory planning problem was traditionally solved as a purely kinematic problem. The drawback is that there is no guarantee that the actuators can deliver the effort necessary to track the planned trajectory. To overcome this limitation, a motion planning approach which addresses the kinematics, dynamics, and feedback control of a manipulator in a unified framework was developed. Actuator constraints are taken into account explicitly and a priori in the synthesis of the feedback control law. Therefore, the result of applying the motion planning approach described is not only the determination of the entire set of joint trajectories but also a complete specification of the feedback control strategy which would yield these joint trajectories without violating actuator constraints. The effectiveness of the unified motion planning approach is demonstrated on two problems which are of practical interest in manipulator robotics.
NASA Astrophysics Data System (ADS)
Pérez-Moreno, Javier; Clays, Koen; Kuzyk, Mark G.
2010-05-01
We present a procedure for the modeling of the dispersion of the nonlinear optical response of complex molecular structures that is based strictly on the results of experimental characterization. We show how, under some general conditions, the use of the Thomas-Kuhn sum rules leads to successful modeling of the nonlinear response of complex molecular structures.
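For reference, the generalized Thomas-Kuhn sum rules invoked above take the standard form below (a textbook statement, not a formula quoted from the paper):

```latex
% Generalized Thomas-(Reiche-)Kuhn sum rules for an N-electron system, with
% transition moments x_{mn} = <m|x|n> and state energies E_n:
\[
  \sum_{n=0}^{\infty} \left( E_n - \frac{E_m + E_p}{2} \right) x_{mn} x_{np}
  \;=\; \frac{\hbar^{2} N}{2 m_e}\, \delta_{mp},
\]
% whose diagonal case (m = p = 0) is the familiar oscillator-strength sum
% rule; constraining transition moments this way underlies the dispersion
% modeling described in the abstract.
```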
Scientific Activities Pursuant to the Provisions of AFOSR Grant 79-0018.
1984-01-01
controllability implies stabilizability in the case of autonomous finite-dimensional linear systems, we are not surprised to find control ... "Current Status of the Control Theory of Single Space Dimension Hyperbolic Systems" was presented at the NASA JPL Symposium on Control and Stabilization ... theory of hyperbolic systems, including controllability, stabilization, control canonical form theory, etc. To allow a unified and not
Natural Science of the Great Plains as it Relates to the American Indian: A Syllabus and Sourcebook.
ERIC Educational Resources Information Center
Bluemle, Mary E.
Providing an Indian Studies field course in natural science, this dissertation includes: a sourcebook of pertinent reference materials; reservation specific sample lesson plans; natural science roadlogs; a syllabus designed to stress natural science processes and to serve as a unifying factor for field work, lecture, and course discussions.…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-14
... section 4 of TSCA was to support ATSDR's Substance Specific Applied Research Program, a program ... SUPPLEMENTARY INFORMATION: I. Does this Action Apply to Me? This action is directed to the... Office of Air and Radiation (OAR), along with EPA's Office of Research and Development (ORD), referred...
ERIC Educational Resources Information Center
Fagan, Judy Condit
2001-01-01
Discusses the need for libraries to routinely redesign their Web sites, and presents a case study that describes how a Perl-driven database at Southern Illinois University's library improved Web site organization and patron access, simplified revisions, and allowed staff unfamiliar with HTML to update content. (Contains 56 references.) (Author/LRW)
How Less Is Truly More: Merging Library Support Services
ERIC Educational Resources Information Center
Skellen, Kendra; Kyrychenko, Alex
2016-01-01
In the summer of 2010, to provide a "one-stop shop" service point to Woodruff Library patrons, the Circulation, Reference, and Learning Commons (LC) desks merged into the unified Library Service Desk (LSD) under Access Services. Last year, due to organizational changes in the library and IT, and anticipated support needs of the new LC…
USDA-ARS?s Scientific Manuscript database
We often refer to the American Society of Agronomy (ASA) as being both a scientific and professional society. Membership within the organization includes a wide range of people from diverse regions and cultures of the world working with complex and diverse cropping systems. Yet members are unified...
Toward a Unified Modeling of Learner's Growth Process and Flow Theory
ERIC Educational Resources Information Center
Challco, Geiser C.; Andrade, Fernando R. H.; Borges, Simone S.; Bittencourt, Ig I.; Isotani, Seiji
2016-01-01
Flow is the affective state in which a learner is so engaged and involved in an activity that nothing else seems to matter. In this sense, to help students in the skill development and knowledge acquisition (referred to as learners' growth process) under optimal conditions, the instructional designers should create learning scenarios that favor…
NASA Technical Reports Server (NTRS)
Hanson, D. B.
1991-01-01
A unified theory for the aerodynamics and noise of advanced turboprops is presented. Aerodynamic topics include calculation of performance, blade load distribution, and non-uniform wake flow fields. Blade loading can be steady or unsteady due to fixed distortion, counter-rotating wakes, or blade vibration. The aerodynamic theory is based on the pressure potential method and is therefore basically linear. However, nonlinear effects associated with finite axial induction and blade vortex flow are included via approximate methods. Acoustic topics include radiation of noise caused by blade thickness, steady loading (including vortex lift), and unsteady loading. Shielding of the fuselage by its boundary layer and by the wing is treated in separate analyses that are compatible but not integrated with the aeroacoustic theory for rotating blades.
Dirac relaxation of the Israel junction conditions: Unified Randall-Sundrum brane theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, Aharon; Gurwich, Ilya
2006-08-15
Following Dirac's brane variation prescription, the brane must not be deformed during the variation process, or else the linearity of the variation may be lost. Alternatively, the variation of the brane is done, in a special Dirac frame, by varying the bulk coordinate system itself. Imposing appropriate Dirac-style boundary conditions on the constrained 'sandwiched' gravitational action, we show how the Israel junction conditions get relaxed, but remarkably, all solutions of the original Israel equations are still respected. The Israel junction conditions are traded, in the Z₂-symmetric case, for a generalized Regge-Teitelboim type equation (plus a local conservation law), and in the generic Z₂-asymmetric case, for a pair of coupled Regge-Teitelboim equations. The Randall-Sundrum model and its derivatives, such as the Dvali-Gabadadze-Porrati and the Collins-Holdom models, get generalized accordingly. Furthermore, Randall-Sundrum and Regge-Teitelboim brane theories now appear to be two different faces of one and the same unified brane theory. Within the framework of unified brane cosmology, we examine the dark matter/energy interpretation of the effective energy/momentum deviations from general relativity.
An Expert System for the Evaluation of Cost Models
1990-09-01
contrast to the condition of equal error variance, called homoscedasticity. (Reference: Applied Linear Regression Models by John Neter, page 423) ... normal. (Reference: Applied Linear Regression Models by John Neter, page 125) ... Autocorrelation ... over time. Error terms correlated over time are said to be autocorrelated or serially correlated. (Reference: Applied Linear Regression Models by John Neter)
Towards Optimal Connectivity on Multi-layered Networks.
Chen, Chen; He, Jingrui; Bliss, Nadya; Tong, Hanghang
2017-10-01
Networks are prevalent in many high-impact domains. Moreover, cross-domain interactions are frequently observed in many applications, which naturally form dependencies between different networks. Such highly coupled network systems are referred to as multi-layered networks, and have been used to characterize various complex systems, including critical infrastructure networks, cyber-physical systems, collaboration platforms, biological systems and many more. Different from single-layered networks, where the functionality of nodes is mainly affected by within-layer connections, multi-layered networks are more vulnerable to disturbance, as the impact can be amplified through cross-layer dependencies, leading to cascading failure of the entire system. To manipulate the connectivity in multi-layered networks, some recent methods have been proposed based on two-layered networks with specific types of connectivity measures. In this paper, we address the above challenges in multiple dimensions. First, we propose a family of connectivity measures (SUBLINE) that unifies a wide range of classic network connectivity measures. Second, we reveal that the connectivity measures in the SUBLINE family enjoy the diminishing returns property, which guarantees a near-optimal solution with linear complexity for the connectivity optimization problem. Finally, we evaluate our proposed algorithm on real data sets to demonstrate its effectiveness and efficiency.
Clouser, K D; Gert, B
1990-04-01
The authors use the term "principlism" to refer to the practice of using "principles" to replace both moral theory and particular moral rules and ideals in dealing with the moral problems that arise in medical practice. The authors argue that these "principles" do not function as claimed, and that their use is misleading both practically and theoretically. The "principles" are in fact not guides to action, but rather they are merely names for a collection of sometimes superficially related matters for consideration when dealing with a moral problem. The "principles" lack any systematic relationship to each other, and they often conflict with each other. These conflicts are unresolvable, since there is no unified moral theory from which they are all derived. For comparison the authors sketch the advantages of using a unified moral theory.
An alternative model of free fall
NASA Astrophysics Data System (ADS)
Lattery, Mark
2018-03-01
In Two World Systems (Galileo 1632/1661 Dialogues Concerning Two New Sciences (New York: Prometheus)), Galileo attempted to unify terrestrial and celestial motions using the Aristotelian principle of circularity. The result was a model of free fall that correctly predicts the linear increase of the velocity of an object released from rest near the surface of the Earth. This historical episode provides an opportunity to communicate the nature of science to students.
An efficient direct solver for rarefied gas flows with arbitrary statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diaz, Manuel A., E-mail: f99543083@ntu.edu.tw; Yang, Jaw-Yen, E-mail: yangjy@iam.ntu.edu.tw; Center of Advanced Study in Theoretical Science, National Taiwan University, Taipei 10167, Taiwan
2016-01-15
A new numerical methodology associated with a unified treatment is presented to solve the Boltzmann–BGK equation of gas dynamics for the classical and quantum gases described by the Bose–Einstein and Fermi–Dirac statistics. Utilizing a class of globally-stiffly-accurate implicit–explicit Runge–Kutta schemes for the temporal evolution, associated with the discrete ordinate method for the quadratures in momentum space and the weighted essentially non-oscillatory method for the spatial discretization, the proposed scheme is asymptotic-preserving and requires neither a non-linear solver nor knowledge of the fugacity and temperature to capture the flow structures in the hydrodynamic (Euler) limit. The proposed treatment overcomes the limitations found in the work by Yang and Muljadi (2011) [33] due to the non-linear nature of the quantum relations, and can be applied to studying the dynamics of a gas with internal degrees of freedom, with correct values of the ratio of specific heats, for all Knudsen numbers and energy wave lengths. The present methodology is numerically validated within the unified treatment by the one-dimensional shock tube problem and the two-dimensional Riemann problems for gases of arbitrary statistics. Descriptions of ideal quantum gases including rotational degrees of freedom have been successfully achieved under the proposed methodology.
Amoroso, N; Errico, R; Bruno, S; Chincarini, A; Garuccio, E; Sensi, F; Tangaro, S; Tateo, A; Bellotti, R
2015-11-21
In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches, atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data-driven template, resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol-inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer's Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice_ADNI = 0.929 ± 0.003 and Dice_OASIS = 0.869 ± 0.002). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
NASA Astrophysics Data System (ADS)
Lucio Rapoport, Diego
2013-04-01
We present a unified principle for science that surmounts dualism, in terms of torsion fields and non-orientable surfaces, notably the Klein bottle and its logic, the Möbius strip and the projective plane. We apply it to the complex numbers and cosmology; to non-linear systems, integrating the issue of hyperbolic divergences with the change of orientability; to the biomechanics of vision and the mammal heart; to the morphogenesis of crustal shapes on Earth in connection to the wavefronts of gravitation, elasticity and electromagnetism; to pattern recognition of artificial images and visual recognition; to neurology and the topographic maps of the sensorium; and to perception, in particular of music. We develop it in terms of the fundamental 2:1 resonance inherent to the Möbius strip and the Klein bottle, the minimal-surface representation of the wavefronts, and the non-dual Klein bottle logic inherent to pattern recognition and to the harmonic functions and vector fields that lie at the basis of geophysics and physics at large. We discuss the relation between the topographic maps of the sensorium and the issue of turning the visual world inside-out as a general principle for cognition, topological chemistry, cell biology and biological morphogenesis, in particular in embryology.
NASA Astrophysics Data System (ADS)
Amoroso, N.; Errico, R.; Bruno, S.; Chincarini, A.; Garuccio, E.; Sensi, F.; Tangaro, S.; Tateo, A.; Bellotti, R.; the Alzheimer's Disease Neuroimaging Initiative
2015-11-01
In this study we present a novel fully automated Hippocampal Unified Multi-Atlas-Networks (HUMAN) algorithm for the segmentation of the hippocampus in structural magnetic resonance imaging. In multi-atlas approaches atlas selection is of crucial importance for the accuracy of the segmentation. Here we present an optimized method based on the definition of a small peri-hippocampal region to target the atlas learning with linear and non-linear embedded manifolds. All atlases were co-registered to a data driven template resulting in a computationally efficient method that requires only one test registration. The optimal atlases identified were used to train dedicated artificial neural networks whose labels were then propagated and fused to obtain the final segmentation. To quantify data heterogeneity and protocol inherent effects, HUMAN was tested on two independent data sets provided by the Alzheimer’s Disease Neuroimaging Initiative and the Open Access Series of Imaging Studies. HUMAN is accurate and achieves state-of-the-art performance (Dice_ADNI = 0.929 ± 0.003 and Dice_OASIS = 0.869 ± 0.002). It is also a robust method that remains stable when applied to the whole hippocampus or to sub-regions (patches). HUMAN also compares favorably with a basic multi-atlas approach and a benchmark segmentation tool such as FreeSurfer.
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS) and their constrained counterparts) are established through their respective objective functions and the higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights in designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent of relative error in estimating the trace and a lower reduced χ² value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when the signal-to-noise ratio (SNR) is low.
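For orientation, the snippet below sketches one member of the LS family discussed here, a weighted linear least squares (WLLS) tensor fit with the standard signal-squared weights; the full Newton-type NLS/CNLS algorithms proposed in the paper are more involved, so this is only a reference point, not the authors' method.

```python
import numpy as np

def wlls_tensor_fit(bvals, bvecs, signals):
    """WLLS diffusion tensor fit: ln S = ln S0 - b * g^T D g, weights ~ S^2."""
    g = np.asarray(bvecs)                       # (N, 3) unit gradient directions
    b = np.asarray(bvals, dtype=float)          # (N,) b-values
    # Design matrix for [ln S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    X = np.column_stack([
        np.ones_like(b),
        -b * g[:, 0] ** 2, -b * g[:, 1] ** 2, -b * g[:, 2] ** 2,
        -2 * b * g[:, 0] * g[:, 1],
        -2 * b * g[:, 0] * g[:, 2],
        -2 * b * g[:, 1] * g[:, 2],
    ])
    y = np.log(signals)
    W = np.diag(np.asarray(signals) ** 2)       # standard WLLS weights
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    D = np.array([[beta[1], beta[4], beta[5]],
                  [beta[4], beta[2], beta[6]],
                  [beta[5], beta[6], beta[3]]])
    return np.exp(beta[0]), D                   # S0 estimate and tensor

# Usage: S0, D = wlls_tensor_fit(bvals, bvecs, signals);
# the eigenvalues of D give the principal diffusivities.
```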
Factors Affecting Students' Acceptance of Tablet PCs: A Study in Italian High Schools
ERIC Educational Resources Information Center
Cacciamani, Stefano; Villani, Daniela; Bonanomi, Andrea; Carissoli, Claudia; Olivari, Maria Giulia; Morganti, Laura; Riva, Giuseppe; Confalonieri, Emanuela
2018-01-01
To maximize the advantages of the tablet personal computer (TPC) at school, this technology needs to be accepted by students as a new tool for learning. With reference to the Technology Acceptance Model and the Unified Theory of Acceptance and Use of Technology, the aims of this study were (a) to analyze factors influencing high school students'…
NASA Astrophysics Data System (ADS)
Wagner, Manfred Hermann; Rolón-Garrido, Víctor Hugo
2015-04-01
An extended interchain tube pressure model for polymer melts and concentrated solutions is presented, based on the idea that the pressures exerted by a polymer chain on the walls of an anisotropic confinement are anisotropic (M. Doi and S. F. Edwards, The Theory of Polymer Dynamics, Oxford University Press, New York, 1986). In a tube model with variable tube diameter, chain stretch and tube diameter reduction are related, and at deformation rates larger than the inverse Rouse time τR, the chain is stretched and its confining tube becomes increasingly anisotropic. Tube diameter reduction leads to an interchain pressure in the lateral direction of the tube, which is proportional to the 3rd power of stretch (G. Marrucci and G. Ianniruberto, Macromolecules 37, 3934-3942, 2004). In the extended interchain tube pressure (EIP) model, it is assumed that chain stretch is balanced by the interchain tube pressure in the lateral direction, and by a spring force in the longitudinal direction of the tube, which is linear in stretch. The scaling relations established for the relaxation modulus of concentrated solutions of polystyrene in oligomeric styrene (M. H. Wagner, Rheol. Acta 53, 765-777, 2014; M. H. Wagner, J. Non-Newtonian Fluid Mech., http://dx.doi.org/10.1016/j.jnnfm.2014.09.017, 2014) are applied to the solutions of polystyrene (PS) in diethyl phthalate (DEP) investigated by Bhattacharjee et al. (P. K. Bhattacharjee et al., Macromolecules 35, 10131-10148, 2002) and Acharya et al. (M. V. Acharya et al., AIP Conference Proceedings 1027, 391-393, 2008). The scaling relies on the difference ΔTg between the glass-transition temperature of the melt and that of the solutions. ΔTg can be inferred from the reported zero-shear viscosities, and the BSW spectra of the solutions are obtained from the BSW spectrum of the reference melt with good accuracy. Predictions of the EIP model are compared to the steady-state elongational viscosity data of the PS/DEP solutions. Except for a possible influence of solvent quality, the linear and nonlinear viscoelasticity of entangled polystyrene solutions can thus be obtained from the linear-viscoelastic characteristics of a reference polymer melt and the shift of the glass transition temperature between melt and solution.
In quest of a systematic framework for unifying and defining nanoscience
2009-01-01
This article proposes a systematic framework for unifying and defining nanoscience based on historic first principles and step logic that led to a “central paradigm” (i.e., unifying framework) for traditional elemental/small-molecule chemistry. As such, a nanomaterials classification roadmap is proposed, which divides all nanomatter into Category I: discrete, well-defined and Category II: statistical, undefined nanoparticles. We consider only Category I, well-defined nanoparticles which are >90% monodisperse as a function of Critical Nanoscale Design Parameters (CNDPs) defined according to: (a) size, (b) shape, (c) surface chemistry, (d) flexibility, and (e) elemental composition. Classified as either hard (H) (i.e., inorganic-based) or soft (S) (i.e., organic-based) categories, these nanoparticles were found to manifest pervasive atom mimicry features that included: (1) a dominance of zero-dimensional (0D) core–shell nanoarchitectures, (2) the ability to self-assemble or chemically bond as discrete, quantized nanounits, and (3) well-defined nanoscale valencies and stoichiometries reminiscent of atom-based elements. These discrete nanoparticle categories are referred to as hard or soft particle nanoelements. Many examples describing chemical bonding/assembly of these nanoelements have been reported in the literature. We refer to these hard:hard (H-n:H-n), soft:soft (S-n:S-n), or hard:soft (H-n:S-n) nanoelement combinations as nanocompounds. Due to their quantized features, many nanoelement and nanocompound categories are reported to exhibit well-defined nanoperiodic property patterns. These periodic property patterns are dependent on their quantized nanofeatures (CNDPs) and dramatically influence intrinsic physicochemical properties (i.e., melting points, reactivity/self-assembly, sterics, and nanoencapsulation), as well as important functional/performance properties (i.e., magnetic, photonic, electronic, and toxicologic properties). We propose this perspective as a modest first step toward more clearly defining synthetic nanochemistry as well as providing a systematic framework for unifying nanoscience. With further progress, one should anticipate the evolution of future nanoperiodic table(s) suitable for predicting important risk/benefit boundaries in the field of nanoscience. PMID:21170133
Generalized quantum no-go theorems of pure states
NASA Astrophysics Data System (ADS)
Li, Hui-Ran; Luo, Ming-Xing; Lai, Hong
2018-07-01
Various results of the no-cloning theorem, no-deleting theorem and no-superposing theorem in quantum mechanics have been proved using the superposition principle and the linearity of quantum operations. In this paper, we investigate general transformations forbidden by quantum mechanics in order to unify these theorems. First, we prove that no useful information can be created from an unknown pure state that is randomly chosen from a Hilbert space according to the Haar measure. Second, we propose a unified no-go theorem based on a generalized no-superposing result. The new theorem includes the no-cloning theorem, no-anticloning theorem, no-partial-erasure theorem, no-splitting theorem, no-superposing theorem or no-encoding theorem as a special case. Moreover, it implies various new results. Third, we extend the new theorem into another form that includes the no-deleting theorem as a special case.
Standard representation and unified stability analysis for dynamic artificial neural network models.
Kim, Kwang-Ki K; Patrón, Ernesto Ríos; Braatz, Richard D
2018-02-01
An overview is provided of dynamic artificial neural network models (DANNs) for nonlinear dynamical system identification and control problems, and convex stability conditions are proposed that are less conservative than past results. The three most popular classes of dynamic artificial neural network models are described, with their mathematical representations and architectures, followed by transformations based on their block diagrams that are convenient for stability and performance analyses. Classes of nonlinear dynamical systems that are universally approximated by such models are characterized, including rigorous upper bounds on the approximation errors. A unified framework and linear matrix inequality-based stability conditions are described for different classes of dynamic artificial neural network models that take additional information into account, such as local slope restrictions and whether the nonlinearities within the DANNs are odd. A theoretical example demonstrates the reduced conservatism obtained with the proposed conditions. Copyright © 2017. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Lagos, Macarena; Bellini, Emilio; Noller, Johannes; Ferreira, Pedro G.; Baker, Tessa
2018-03-01
We analyse cosmological perturbations around a homogeneous and isotropic background for scalar-tensor, vector-tensor and bimetric theories of gravity. Building on previous results, we propose a unified view of the effective parameters of all these theories. Based on this structure, we explore the viable space of parameters for each family of models by imposing the absence of ghosts and gradient instabilities. We then focus on the quasistatic regime and confirm that all these theories can be approximated by the phenomenological two-parameter model described by an effective Newton's constant and the gravitational slip. Within the quasistatic regime we pinpoint signatures which can distinguish between the broad classes of models (scalar-tensor, vector-tensor or bimetric). Finally, we present the equations of motion for our unified approach in such a way that they can be implemented in Einstein-Boltzmann solvers.
NASA Astrophysics Data System (ADS)
Gonzalez-Ayala, Julian; Calvo Hernández, A.; Roco, J. M. M.
2016-07-01
The main unified energetic properties of low dissipation heat engines and refrigerator engines allow for both endoreversible and irreversible configurations. This is accomplished by means of the constraints imposed on the characteristic global operation time, or on the contact times between the working system and the external heat baths, as modulated by the dissipation symmetries. A suited unified figure of merit (which becomes the power output for heat engines) is analyzed and the influence of the symmetries on the optimum performance is discussed. The obtained results, independent of any heat transfer law, are compared with those obtained from Carnot-like heat models where specific heat transfer laws are needed. Thus, it is shown that only the inverse phenomenological law, often used in linear irreversible thermodynamics, correctly reproduces all optimized values for both the efficiency and the coefficient of performance.
Unified nonlinear analysis for nonhomogeneous anisotropic beams with closed cross sections
NASA Technical Reports Server (NTRS)
Atilgan, Ali R.; Hodges, Dewey H.
1991-01-01
A unified methodology for geometrically nonlinear analysis of nonhomogeneous, anisotropic beams is presented. A 2D cross-sectional analysis and a nonlinear 1D global deformation analysis are derived from the common framework of a 3D, geometrically nonlinear theory of elasticity. The only restrictions are that the strain and local rotation are small compared to unity and that warping displacements are small relative to the cross-sectional dimensions. It is concluded that the warping solutions can be affected by large deformation and that this could alter the incremental stiffness of the section. It is shown that sectional constants derived from the published, linear analysis can be used in the present nonlinear, 1D analysis governing the global deformation of the beam, which is based on intrinsic equations for nonlinear beam behavior. Excellent correlation is obtained with published experimental results for both isotropic and anisotropic beams undergoing large deflections.
General Multivariate Linear Modeling of Surface Shapes Using SurfStat
Chung, Moo K.; Worsley, Keith J.; Nacewicz, Brendon M.; Dalton, Kim M.; Davidson, Richard J.
2010-01-01
Although there are many imaging studies on traditional ROI-based amygdala volumetry, there are very few studies on modeling amygdala shape variations. This paper presents a unified computational and statistical framework for modeling amygdala shape variations in a clinical population. The weighted spherical harmonic representation is used to parameterize, smooth, and normalize amygdala surfaces. The representation is subsequently used as an input for multivariate linear models accounting for nuisance covariates such as age and brain size difference, using the SurfStat package, which completely avoids the complexity of specifying design matrices. The methodology has been applied for quantifying abnormal local amygdala shape variations in 22 high functioning autistic subjects. PMID:20620211
Convergence Results on Iteration Algorithms to Linear Systems
Wang, Zhuande; Yang, Chuansheng; Yuan, Yubo
2014-01-01
To solve large-scale linear systems, backward and Jacobi iteration algorithms are employed. Convergence is the most important issue. In this paper, a unified backward iterative matrix is proposed. It shows that some well-known iterative algorithms can be deduced from it. The most important result is that the convergence results have been proved. Firstly, the spectral radius of the Jacobi iterative matrix is positive and that of the backward iterative matrix is strongly positive (larger than a positive constant). Secondly, the two iterations have the same convergence results (convergence or divergence simultaneously). Finally, some numerical experiments show that the proposed algorithms are correct and have the merit of backward methods. PMID:24991640
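A minimal sketch of the Jacobi side of this picture, with the spectral-radius convergence test stated in the abstract made explicit (illustrative code, not the paper's unified backward iterative matrix):

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Jacobi iteration x_{k+1} = D^{-1}(b - (L+U) x_k); it converges iff the
    spectral radius of the iteration matrix M = -D^{-1}(L+U) is below one."""
    D = np.diag(np.diag(A))
    R = A - D                                   # L + U
    M = np.linalg.solve(D, -R)                  # Jacobi iteration matrix
    rho = max(abs(np.linalg.eigvals(M)))        # spectral radius
    assert rho < 1, f"Jacobi diverges here: spectral radius {rho:.3f} >= 1"
    x = np.zeros_like(b) if x0 is None else x0
    for _ in range(max_iter):
        x_new = np.linalg.solve(D, b - R @ x)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

A = np.array([[4.0, 1.0], [2.0, 5.0]])          # strictly diagonally dominant
print(jacobi(A, np.array([1.0, 2.0])))          # -> approximately [1/6, 1/3]
```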
Time-Dependent Thermal Transport Theory.
Biele, Robert; D'Agosta, Roberto; Rubio, Angel
2015-07-31
Understanding thermal transport in nanoscale systems presents important challenges to both theory and experiment. In particular, the concept of local temperature at the nanoscale appears difficult to justify. Here, we propose a theoretical approach in which we replace the temperature gradient with controllable external blackbody radiations. The theory recovers known physical results, for example, the linear relation between the thermal current and the temperature difference of two blackbodies. Furthermore, our theory is not limited to the linear regime; it goes beyond it, accounting for nonlinear effects and transient phenomena. Since the present theory is general and can be adapted to describe both electron and phonon dynamics, it provides a first step toward a unified formalism for investigating thermal and electronic transport.
Self-stigma among concealable minorities in Hong Kong: conceptualization and unified measurement.
Mak, Winnie W S; Cheung, Rebecca Y M
2010-04-01
Self-stigma refers to the internalized stigma that individuals may have toward themselves as a result of their minority status. Not only can self-stigma dampen the mental health of individuals, it can deter them from seeking professional help lest disclosing their minority status lead to being shunned by service providers. No unified instrument has been developed to measure self-stigma consistently across different concealable minority groups. The present study presents findings from 4 studies on the development and validation of the Self-Stigma Scale, conducted in Hong Kong with community samples of mental health consumers, recent immigrants from Mainland China, and sexual minorities. Following a series of validation procedures, a 9-item Self-Stigma Scale-Short Form was developed. Initial support for its reliability and construct validity (convergent and criterion validities) was found among the 3 stigmatized groups. The utility of this unified measure is to establish an empirical basis upon which the self-stigma of different concealable minority groups can be assessed along the same dimensions. Health-care professionals could use this short scale to assess potential self-stigmatization among concealable minorities, which may hamper their treatment process as well as their overall well-being.
Unified Aerosol Microphysics for NWP
2013-09-30
it may be treated as a generic variable such as when it is processed by advection, or it may be used specifically like dust in ice nucleation...interactions. We shifted instead to a winter-time passage of a low pressure system across North Africa and the Mediterranean Sea (Figure 1). The strong...MODIS multispectral albedo data, MODIS land surface data, and the NRL DSD for SW Asia and E Asia a multi-variate, non-linear classification was
Impact of Linear Programming on Computer Development.
1985-06-01
soon see. It all really began when Dal Hitchcock, an advisor to General Rawlings, the Air Comptroller, and Marshall Wood, an expert on military...unifying principles. Of course, I thought first to try to adapt the Leontief Input-Output Model. But Marshall and I also talked about certain...still with the Ford Motor Company. I told him about my presentation to General Rawlings on the possibility of a "program Integrator" for planning and
Transducer Workshop (12th) Held at Melbourne, Florida on 7-9 June 1983.
1983-06-01
applications, since above 200°F (93°C) the heat treatment history of the Manganin changes and the linearization scheme is compromised. (*) The specific products referenced and manufacturers' addresses are given in the bibliography as a separate part. ...pulsating (dynamic) flow, even when readings are averaged over a period of the... ...ment with the Unified Approach to the Engineering of Measuring Systems on...
A Maximal Element Theorem in FWC-Spaces and Its Applications
Hu, Qingwen; Miao, Yulin
2014-01-01
A maximal element theorem is proved in finite weakly convex spaces (FWC-spaces, in short), which have no linear, convex, or topological structure. Using the maximal element theorem, we develop new existence theorems of solutions to the variational relation problem, the generalized equilibrium problem, the equilibrium problem with lower and upper bounds, and the minimax problem in FWC-spaces. The results presented in this paper unify and extend some known results in the literature. PMID:24782672
Bioinspired Concepts: Unified Theory for Complex Biological and Engineering Systems
2006-01-01
i.e., data flows of finite size arrive at the system randomly. For such a system, we propose a modified dual scheduling algorithm that stabilizes ...demon. We compute the efficiency of the controller over finite and infinite time intervals, and since the controller is optimal, this yields hard limits...and highly optimized tolerance. PNAS, 102, 2005. 51. G. N. Nair and R. J. Evans. Stabilizability of stochastic linear systems with finite feedback
Sulica, Lucian
2011-06-01
Hoarseness is the colloquial expression for dysphonia; these terms are often used interchangeably in medicine to refer to altered voice quality. Hoarseness may be both a symptom and a sign of dysfunction of the phonatory apparatus. It is never a diagnosis, despite having a corresponding International Classification of Diseases code and sometimes serving as such for purposes of administrative convenience. The same anatomical and physiological features that make the vocal folds uniquely suited for the high-speed vibration necessary for sound production render them exquisitely sensitive to a wide range of abnormalities. The breadth of pathologic conditions that can cause hoarseness makes a unified overview a challenge; hoarseness is simply not a homogeneous category after the initial laryngoscopy. Moreover, the available literature predominantly focuses on specific diagnoses rather than on hoarseness as a whole, so scant published data exist to support an evidence-based approach. Nevertheless, certain unifying principles exist.
The grand unified photon spectrum: A coherent view of the diffuse extragalactic background radiation
NASA Technical Reports Server (NTRS)
Ressell, M. Ted; Turner, Michael S.
1989-01-01
The spectrum of diffuse extragalactic background radiation (DEBRA) at wavelengths from 10^5 to 10^(-24) cm is presented in a coherent fashion. Each wavelength region, from the radio to ultra-high energy photons and cosmic rays, is treated both separately and as part of the grand unified photon spectrum (GUPS). A discussion of, and references to, the relevant literature for each wavelength region is included. This review should provide a useful tool for those interested in diffuse backgrounds, the epoch of galaxy formation, astrophysical/cosmological constraints to particle properties, exotic early Universe processes, and many other astrophysical and cosmological enterprises. As a worked example, researchers derive the cosmological constraints to an unstable-neutrino species (with arbitrary branching ratio to a radiative decay mode) that follow from the GUPS.
From Combat to Collaboration: The Labor-Management Partnership in San José Unified School District
ERIC Educational Resources Information Center
Knudson, Joel; Castro, Marina; Blum, Jarah
2017-01-01
It started with a cup of coffee. In the wake of an intense contract negotiation, and against the backdrop of a district bankruptcy, multiple teacher strikes, and a wave of mistrust that veterans of the era still refer to as "rock bottom," the San José superintendent and the San José Teachers Association president decided to chart a…
ERIC Educational Resources Information Center
Knudson, Joel; Castro, Marina; Blum, Jarah
2017-01-01
In the wake of an intense contract negotiation, and against the backdrop of a district bankruptcy, multiple teacher strikes, and a wave of mistrust that veterans of the era still refer to as "rock bottom," the San José superintendent and the San José Teachers Association president decided to chart a different path forward. This report is…
SU-G-BRB-14: Uncertainty of Radiochromic Film Based Relative Dose Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Devic, S; Tomic, N; DeBlois, F
2016-06-15
Purpose: Due to its inherently non-linear dose response, measurement of relative dose distributions with radiochromic film requires measurement of absolute dose using a calibration curve, following a previously established reference dosimetry protocol. On the other hand, a functional form that converts the inherently non-linear dose response curve of the radiochromic film dosimetry system into a linear one has been proposed recently [Devic et al, Med. Phys. 39, 4850-4857 (2012)]. However, the question remains what the uncertainty of relative dose measured in this way would be. Methods: If the relative dose distribution is determined via the reference dosimetry system (conversion of the response into absolute dose by using the calibration curve), the total uncertainty of the relative dose is calculated by summing in quadrature the total uncertainties of the doses measured at a given point and at the reference point. On the other hand, if the relative dose is determined using the linearization method, the new response variable is calculated as ζ = a(netOD)^n/ln(netOD). In this case, the total uncertainty in relative dose is calculated by summing in quadrature the uncertainties of the new response function (σζ) at a given point and at the reference point. Results: Except at very low doses, where the measurement uncertainty dominates, the total relative dose uncertainty is less than 1% for the linear response method, compared to an almost 2% uncertainty level for the reference dosimetry method. The result is not surprising, having in mind that the total uncertainty of the reference dose method is dominated by the fitting uncertainty, which is mitigated in the case of the linearization method. Conclusion: Linearization of the radiochromic film dose response provides a convenient and more precise method for relative dose measurements, as it does not require reference dosimetry and creation of a calibration curve. However, the linearity of the newly introduced function must be verified. Dave Lewis is inventor and runs a consulting company for radiochromic films.
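A small sketch of the two quantities involved, with placeholder fit parameters a and n (the abstract gives only the functional form, not their values), and a quadrature sum for the relative-dose uncertainty; the paper's exact propagation may differ in detail.

```python
import numpy as np

def zeta(net_od, a=1.0, n=1.1):
    """Linearized response ζ = a·(netOD)^n / ln(netOD) from the abstract;
    a and n here are placeholder fit parameters, not published values.
    For netOD < 1 the logarithm is negative, but signs cancel in ratios."""
    net_od = np.asarray(net_od, dtype=float)
    return a * net_od ** n / np.log(net_od)

def relative_dose_uncertainty(zeta_point, sigma_point, zeta_ref, sigma_ref):
    """Relative uncertainty of ζ(point)/ζ(reference): the two response
    uncertainties summed in quadrature, as described in Methods."""
    return np.sqrt((sigma_point / zeta_point) ** 2 + (sigma_ref / zeta_ref) ** 2)

z_point, z_ref = zeta(0.2), zeta(0.4)     # netOD at measurement/reference points
print(relative_dose_uncertainty(z_point, 0.01, z_ref, 0.01))
```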
Multimodal Deep Autoencoder for Human Pose Recovery.
Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng
2015-12-01
Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieval process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationship is inherently non-linear, which limits the recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with a multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct a hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error is reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.
Non-linear Analysis of Scalp EEG by Using Bispectra: The Effect of the Reference Choice
Chella, Federico; D'Andrea, Antea; Basti, Alessio; Pizzella, Vittorio; Marzetti, Laura
2017-01-01
Bispectral analysis is a signal processing technique that makes it possible to capture the non-linear and non-Gaussian properties of the EEG signals. It has found various applications in EEG research and clinical practice, including the assessment of anesthetic depth, the identification of epileptic seizures, and more recently, the evaluation of non-linear cross-frequency brain functional connectivity. However, the validity and reliability of the indices drawn from bispectral analysis of EEG signals are potentially biased by the use of a non-neutral EEG reference. The present study aims at investigating the effects of the reference choice on the analysis of the non-linear features of EEG signals through bicoherence, as well as on the estimation of cross-frequency EEG connectivity through two different non-linear measures, i.e., the cross-bicoherence and the antisymmetric cross-bicoherence. To this end, four commonly used reference schemes were considered: the vertex electrode (Cz), the digitally linked mastoids, the average reference, and the Reference Electrode Standardization Technique (REST). The reference effects were assessed both in simulations and in a real EEG experiment. The simulations allowed us to investigate: (i) the effects of the electrode density on the performance of the above references in the estimation of bispectral measures; and (ii) the effects of the head model accuracy on the performance of the REST. For real data, the EEG signals recorded from 10 subjects during eyes open resting state were examined, and the distortions induced by the reference choice in the patterns of alpha-beta bicoherence, cross-bicoherence, and antisymmetric cross-bicoherence were assessed. The results showed significant differences in the findings depending on the chosen reference, with the REST providing superior performance than all the other references in approximating the ideal neutral reference. In conclusion, this study highlights the importance of considering the effects of the reference choice in the interpretation and comparison of the results of bispectral analysis of scalp EEG. PMID:28559790
Focal points and principal solutions of linear Hamiltonian systems revisited
NASA Astrophysics Data System (ADS)
Šepitka, Peter; Šimon Hilscher, Roman
2018-05-01
In this paper we present a novel view on the principal (and antiprincipal) solutions of linear Hamiltonian systems, as well as on the focal points of their conjoined bases. We present a new and unified theory of principal (and antiprincipal) solutions at a finite point and at infinity, and apply it to obtain a new representation of the multiplicities of right and left proper focal points of conjoined bases. We show that these multiplicities can be characterized by the abnormality of the system in a neighborhood of the given point and by the rank of the associated T-matrix from the theory of principal (and antiprincipal) solutions. We also derive some additional important results concerning the representation of T-matrices and associated normalized conjoined bases. The results in this paper are new even for completely controllable linear Hamiltonian systems. We also discuss other potential applications of our main results, in particular in the singular Sturmian theory.
Generalized massive optimal data compression
NASA Astrophysics Data System (ADS)
Alsing, Justin; Wandelt, Benjamin
2018-05-01
In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
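A minimal sketch of score compression for the Gaussian case with a parameter-dependent mean and fixed covariance (a simplifying assumption; the paper also treats a parameter-dependent covariance and the non-Gaussian case):

```python
import numpy as np

def score_compression(data, mean, cov, dmean_dtheta):
    """One summary per parameter: t_a = (dmu/dtheta_a)^T C^{-1} (d - mu),
    i.e. the score of a Gaussian likelihood with fixed covariance."""
    Cinv = np.linalg.inv(cov)
    r = data - mean
    return np.array([dm @ Cinv @ r for dm in dmean_dtheta])

# Example: N = 100 data points, n = 2 parameters (offset, slope) in the mean.
x = np.linspace(0.0, 1.0, 100)
mu = 2.0 + 1.5 * x                              # mean at fiducial parameters
C = 0.1 * np.eye(100)
d = mu + np.random.default_rng(0).multivariate_normal(np.zeros(100), C)
t = score_compression(d, mu, C, [np.ones_like(x), x])   # 100 numbers -> 2
print(t)
```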
Optimal weighted combinatorial forecasting model of QT dispersion of ECGs in Chinese adults.
Wen, Zhang; Miao, Ge; Xinlei, Liu; Minyi, Cen
2016-07-01
This study aims to provide a scientific basis for unifying the reference value standard of QT dispersion of ECGs in Chinese adults. Three predictive models, a regression model, a principal component model, and an artificial neural network model, are combined to establish the optimal weighted combination model. The optimal weighted combination model and the single models are verified and compared. The optimal weighted combinatorial model reduces the prediction risk of any single model and improves prediction precision. The geographical distribution of the reference value of QT dispersion of Chinese adults was mapped precisely by using kriging methods. Once the geographical factors of a particular area are obtained, the reference value of QT dispersion of Chinese adults in that area can be estimated with the optimal weighted combinatorial model, and the reference value of QT dispersion of Chinese adults anywhere in China can be read from the geographical distribution map.
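One common way to realize such an optimal weighted combination is to choose weights that minimize squared error on held-out data subject to summing to one; the sketch below shows this construction (an illustrative choice, not necessarily the exact weighting scheme used in the study).

```python
import numpy as np

def combination_weights(predictions, truth):
    """Least-squares combination weights constrained to sum to one,
    obtained by solving the KKT system of the equality-constrained problem."""
    P = np.asarray(predictions, dtype=float)    # (n_models, n_samples)
    n = P.shape[0]
    A = np.block([[P @ P.T, np.ones((n, 1))],
                  [np.ones((1, n)), np.zeros((1, 1))]])
    rhs = np.concatenate([P @ truth, [1.0]])
    return np.linalg.solve(A, rhs)[:n]

truth = np.array([42.0, 44.0, 40.0, 43.0])      # observed QT dispersion (ms)
preds = np.array([[41.0, 45.0, 39.0, 42.0],     # regression model
                  [43.0, 44.5, 41.0, 44.0],     # principal component model
                  [42.5, 43.0, 40.5, 42.5]])    # artificial neural network
w = combination_weights(preds, truth)
print(w, w @ preds)                             # weights and combined forecast
```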
NASA Astrophysics Data System (ADS)
Paszkiewicz, Zbigniew; Picard, Willy
Performance management (PM) is a key function of virtual organization (VO) management. A large set of PM indicators has been proposed and evaluated within the context of virtual breeding environments (VBEs). However, it is currently difficult to describe and select suitable PM indicators because of the lack of a common vocabulary and taxonomies of PM indicators. Therefore, there is a need for a framework unifying concepts in the domain of VO PM. In this paper, a reference model for VO PM is presented in the context of service-oriented VBEs. The proposed reference model comprises both a set of terms that can be used to describe key performance indicators and a set of taxonomies reflecting various aspects of PM. The proposed reference model is a first attempt and a work in progress that should not be considered exhaustive.
ANOS1: a unified nomenclature for Kallmann syndrome 1 gene (KAL1) and anosmin-1
de Castro, Fernando; Seal, Ruth
2017-01-01
It is accepted that confusion regarding the description of genetic variants occurs when researchers do not use standard nomenclature. The Human Genome Organization Gene Nomenclature Committee contacted a panel of consultants, all working on the KAL1 gene, to propose an update of the nomenclature of the gene, as there was a convention in the literature of using the ‘KAL1’ symbol when referring to the gene, but using the name ‘anosmin-1’ when referring to the protein. The new name, ANOS1, reflects the protein name and is more transferable across species. PMID:27899353
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating the time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in the form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of the time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
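For reference, the matrix measure ingredient is easy to state: the logarithmic 2-norm μ2(A) is the largest eigenvalue of (A + Aᵀ)/2, and a classical delay-independent sufficient condition for ẋ(t) = Ax(t) + Bx(t − τ) is μ2(A) + ‖B‖2 < 0. The sketch below illustrates that textbook check, not the paper's sharper margin estimate.

```python
import numpy as np

def mu2(A):
    """Logarithmic norm (matrix measure) induced by the 2-norm:
    mu_2(A) = max eigenvalue of the symmetric part (A + A^T)/2."""
    return np.linalg.eigvalsh((A + A.T) / 2.0).max()

# Classical delay-independent sufficient condition for
# x'(t) = A x(t) + B x(t - tau): mu2(A) + ||B||_2 < 0 for all tau >= 0.
A = np.array([[-3.0, 1.0], [0.0, -4.0]])
B = np.array([[0.5, 0.0], [0.2, 0.5]])
print(mu2(A) + np.linalg.norm(B, 2) < 0)        # True: stable for any delay
```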
Yock, Adam D; Rao, Arvind; Dong, Lei; Beadle, Beth M; Garden, Adam S; Kudchadker, Rajat J; Court, Laurence E
2014-05-01
The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6%-23.8%) and 14.6% (range: -7.3%-27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8%-40.3%) and 13.1% (range: -1.5%-52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1%-20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models. These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
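The sketch below illustrates a power-law fit of relative tumor volume against treatment day in log-log space; the variable names and the exact regression form are assumptions for illustration, not the authors' GLM specification.

```python
import numpy as np

def fit_power_model(days, volumes, v0):
    """Fit relative volume V(t)/V0 ~ a * t**b in log-log space (t in days > 0)."""
    t = np.asarray(days, dtype=float)
    y = np.log(np.asarray(volumes, dtype=float) / v0)
    b, log_a = np.polyfit(np.log(t), y, 1)      # slope b, intercept ln(a)
    return np.exp(log_a), b

days = np.arange(1, 31)                         # treatment days
v0 = 40.0                                       # initial tumor volume (cm^3)
vols = v0 * 0.97 ** days                        # synthetic daily shrinkage
a, b = fit_power_model(days, vols, v0)
print(a, b, v0 * a * days[-1] ** b)             # fit and day-30 prediction
```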
1990-06-01
on simple railgun accelerators and homopolar generators. Complex rotating flux compressors would drastically improve the performance of EM launchers...velocities. If this is the direction of improvement, then energies stored in the electric trains built with linear electric motors in Japan and Western...laboratories which had power supplies already built for other programs (homopolar generators in conjunction with an inductor and an opening switch
Discontinuous Galerkin methods for Hamiltonian ODEs and PDEs
NASA Astrophysics Data System (ADS)
Tang, Wensheng; Sun, Yajuan; Cai, Wenjun
2017-02-01
In this article, we present a unified framework of discontinuous Galerkin (DG) discretizations for Hamiltonian ODEs and PDEs. We show that with appropriate numerical fluxes the numerical algorithms deduced from DG discretizations can be combined with the symplectic methods in time to derive the multi-symplectic PRK schemes. The resulting numerical discretizations are applied to the linear and nonlinear Schrödinger equations. Some conservative properties of the numerical schemes are investigated and confirmed in the numerical experiments.
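The time-integration ingredient can be illustrated with the implicit midpoint rule, the simplest symplectic Runge-Kutta scheme, applied to a harmonic oscillator (a toy stand-in; the paper's multi-symplectic PRK schemes arise from the DG fluxes themselves).

```python
import numpy as np

def implicit_midpoint(z0, H_grad, J, dt, steps):
    """Implicit midpoint rule z_{n+1} = z_n + dt * J * grad H((z_n+z_{n+1})/2),
    a symplectic scheme; the implicit stage is solved by fixed-point iteration."""
    z = np.array(z0, dtype=float)
    out = [z.copy()]
    for _ in range(steps):
        z_new = z.copy()
        for _ in range(50):                     # fixed-point iteration
            z_new = z + dt * J @ H_grad(0.5 * (z + z_new))
        z = z_new
        out.append(z.copy())
    return np.array(out)

# Harmonic oscillator H = (q^2 + p^2)/2 with z = (q, p): grad H = z.
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
traj = implicit_midpoint([1.0, 0.0], lambda z: z, J, 0.1, 200)
print(traj[-1] @ traj[-1] / 2)                  # energy stays at 0.5
```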
Observers for Systems with Nonlinearities Satisfying an Incremental Quadratic Inequality
NASA Technical Reports Server (NTRS)
Acikmese, Ahmet Behcet; Corless, Martin
2004-01-01
We consider the problem of state estimation for nonlinear time-varying systems whose nonlinearities satisfy an incremental quadratic inequality. These observer results unify earlier results in the literature and extend them to some additional classes of nonlinearities. Observers are presented which guarantee that the state estimation error exponentially converges to zero. Observer design involves solving linear matrix inequalities for the observer gain matrices. The results are illustrated by application to a simple model of an underwater vehicle.
NASA Astrophysics Data System (ADS)
Hau, Jan-Niklas; Oberlack, Martin; Chagelishvili, George
2017-04-01
We present a unifying solution framework for the linearized compressible equations for two-dimensional linearly sheared unbounded flows using the Lie symmetry analysis. The full set of symmetries that are admitted by the underlying system of equations is employed to systematically derive the one- and two-dimensional optimal systems of subalgebras, whose connected group reductions lead to three distinct invariant ansatz functions for the governing sets of partial differential equations (PDEs). The purpose of this analysis is threefold and explicitly we show that (i) there are three invariant solutions that stem from the optimal system. These include a general ansatz function with two free parameters, as well as the ansatz functions of the Kelvin mode and the modal approach. Specifically, the first approach unifies these well-known ansatz functions. By considering two limiting cases of the free parameters and related algebraic transformations, the general ansatz function is reduced to either of them. This fact also proves the existence of a link between the Kelvin mode and modal ansatz functions, as these appear to be the limiting cases of the general one. (ii) The Lie algebra associated with the Lie group admitted by the PDEs governing the compressible dynamics is a subalgebra associated with the group admitted by the equations governing the incompressible dynamics, which allows an additional (scaling) symmetry. Hence, any consequences drawn from the compressible case equally hold for the incompressible counterpart. (iii) In any of the systems of ordinary differential equations, derived by the three ansatz functions in the compressible case, the linearized potential vorticity is a conserved quantity that allows us to analyze vortex and wave mode perturbations separately.
Assessing the Utility of Work Team Theory in a Unified Command Environment at Catastrophic Incidents
2005-03-01
between agencies that potentially affects command post (CP) interactions. All of the foregoing factors contribute to a turbulent management environment...requiring special strategy consideration and IMT preparation. "Conflict refers to a process of social interaction involving a struggle over...from interactions. These schemas can be grouped as cultural norms perpetuated generationally from seasoned officers to raw recruits, and shared by
NASA Astrophysics Data System (ADS)
Sun, Yuxing
2018-05-01
In this paper, a grey prediction model is used to predict carbon emissions in Hebei province, and an impact analysis model based on TermCo2 is established. At the same time, we review the CGE literature and study how to construct the scenarios, select the key parameters, and carry out sensitivity analysis of the application scenarios, for the industry's reference.
Beyond naïve cue combination: salience and social cues in early word learning.
Yurovsky, Daniel; Frank, Michael C
2017-03-01
Children learn their earliest words through social interaction, but it is unknown how much they rely on social information. Some theories argue that word learning is fundamentally social from its outset, with even the youngest infants understanding intentions and using them to infer a social partner's target of reference. In contrast, other theories argue that early word learning is largely a perceptual process in which young children map words onto salient objects. One way of unifying these accounts is to model word learning as weighted cue combination, in which children attend to many potential cues to reference, but only gradually learn the correct weight to assign each cue. We tested four predictions of this kind of naïve cue combination account, using an eye-tracking paradigm that combines social word teaching and two-alternative forced-choice testing. None of the predictions were supported. We thus propose an alternative unifying account: children are sensitive to social information early, but their ability to gather and deploy this information is constrained by domain-general cognitive processes. Developmental changes in children's use of social cues emerge not from learning the predictive power of social cues, but from the gradual development of attention, memory, and speed of information processing. © 2015 John Wiley & Sons Ltd.
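The naive cue-combination account being tested can be caricatured in a few lines: each cue votes for a referent with a learned weight, and choice probabilities follow the normalized weighted votes (the weights below are hypothetical; the paper tests, and rejects, predictions of this family of models).

```python
import numpy as np

cues = {"salience": 0, "gaze": 1}       # each cue points at an object index
weights = np.array([0.7, 0.3])          # hypothetical learned cue weights

def choice_probs(cue_targets, w, n_objects=2):
    """Probability of choosing each object = normalized weighted cue votes;
    in a full model the weights would be updated from prediction error."""
    p = np.zeros(n_objects)
    for weight, obj in zip(w, cue_targets.values()):
        p[obj] += weight
    return p / p.sum()

print(choice_probs(cues, weights))      # -> [0.7 0.3]
```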
Mitre, Sandra Minardi; Andrade, Eli Iola Gurgel; Cotta, Rosângela Minardi Mitre
2013-07-01
Rehabilitation centers emerged and became legitimized within the biomedical model and, since the implementation of attendance, namely the operational guidelines of the national policy of humanization in care and management of the Unified Health System (SUS), have been seeking changes to ensure humanized access and the resolution of health problems. The aim of this study was to analyze attendance in the Rehabilitation Centers of Reference (CRR) of SUS in Belo Horizonte (MG), from the perspectives of professionals and patients. Using a qualitative approach, the research was carried out from August 9 to December 27, 2010, in three CRRs. For data collection, focus groups were conducted with 21 professionals and interviews with 30 patients. This study showed that the current biomedical model, in the view of professionals, restricts their activities in attendance, limiting the participation and autonomy of patients. Attendance has prompted reflection and questioning by broadening the vision and governability of the teams. The results reveal the need to equip teams for the construction of innovative practices through ongoing education and the creation of protected spaces for reflection and discussion.
Beyond Naïve Cue Combination: Salience and Social Cues in Early Word Learning
Yurovsky, Daniel
2015-01-01
Children learn their earliest words through social interaction, but it is unknown how much they rely on social information. Some theories argue that word learning is fundamentally social from its outset, with even the youngest infants understanding intentions and using them to infer a social partner’s target of reference. In contrast, other theories argue that early word learning is largely a perceptual process in which young children map words onto salient objects. One way of unifying these accounts is to model word learning as weighted cue-combination, in which children attend to many potential cues to reference, but only gradually learn the correct weight to assign each cue. We tested four predictions of this kind of naïve cue-combination account, using an eye-tracking paradigm that combines social word-teaching and two-alternative forced-choice testing. None of the predictions were supported. We thus propose an alternative unifying account: children are sensitive to social information early, but their ability to gather and deploy this information is constrained by domain-general cognitive processes. Developmental changes in children’s use of social cues emerge not from learning the predictive power of social cues, but from the gradual development of attention, memory, and speed of information processing. PMID:26575408
Characterization of intermittency in renewal processes: Application to earthquakes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akimoto, Takuma; Hasumi, Tomohiro; Aizawa, Yoji
2010-03-15
We construct a one-dimensional piecewise linear intermittent map from the interevent time distribution for a given renewal process. Then, we characterize intermittency by the asymptotic behavior near the indifferent fixed point in the piecewise linear intermittent map. Thus, we provide a framework for a unified characterization of intermittency and also present the Lyapunov exponent for renewal processes. This method is applied to the occurrence of earthquakes using the Japan Meteorological Agency and the National Earthquake Information Center catalogs. By analyzing the return map of interevent times, we find that interevent times are not independent and identically distributed random variables, but that the conditional probability distribution functions in the tail obey the Weibull distribution.
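The return-map diagnostic is simple to reproduce on synthetic data: for an i.i.d. renewal process, consecutive interevent times are uncorrelated, so persistent lag-1 structure in a real catalog signals the dependence reported here (the Weibull shape and scale below are placeholders).

```python
import numpy as np

rng = np.random.default_rng(0)
k, lam = 0.7, 1.0                       # hypothetical Weibull shape and scale
tau = lam * rng.weibull(k, 10_000)      # synthetic i.i.d. interevent times

# Return map (tau_i, tau_{i+1}): for an i.i.d. renewal process the lag-1
# correlation vanishes; a persistent nonzero value indicates dependence.
r = np.corrcoef(tau[:-1], tau[1:])[0, 1]
print(f"lag-1 correlation: {r:.4f}")    # close to zero for i.i.d. data
```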
Three Tier Unified Process Model for Requirement Negotiations and Stakeholder Collaborations
NASA Astrophysics Data System (ADS)
Niazi, Muhammad Ashraf Khan; Abbas, Muhammad; Shahzad, Muhammad
2012-11-01
This research paper is focused on a pragmatic qualitative analysis of various models and approaches to requirements negotiation (a sub-process of the requirements management plan, which is an output of scope management's collect requirements process) and studies stakeholder collaboration methodologies (i.e., from within the communication management knowledge area). The experiential analysis encompasses two tiers: the first tier refers to the weighted scoring model, while the second tier focuses on the development of SWOT matrices, on the basis of the findings of the weighted scoring model, for selecting an appropriate requirements negotiation model. Finally, the results are presented with the help of statistical pie charts. On the basis of the results for prevalent models and approaches of negotiation, a unified approach for requirements negotiation and stakeholder collaboration is proposed, where the collaboration methodologies are embedded into the selected requirements negotiation model as internal parameters of the proposed process, alongside some external required parameters such as MBTI and opportunity analysis.
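Tier one, the weighted scoring model, amounts to a weighted sum of criterion ratings per candidate negotiation model; the sketch below uses invented criteria, weights, ratings, and candidate names purely for illustration.

```python
# Weighted scoring sketch: score each candidate requirements-negotiation
# model against weighted criteria; all names and numbers are illustrative.
criteria_weights = {"tool_support": 0.3, "scalability": 0.25,
                    "stakeholder_fit": 0.25, "maturity": 0.2}

candidates = {
    "ModelA": {"tool_support": 4, "scalability": 3, "stakeholder_fit": 5, "maturity": 4},
    "ModelB": {"tool_support": 5, "scalability": 4, "stakeholder_fit": 4, "maturity": 3},
}

scores = {name: sum(w * ratings[c] for c, w in criteria_weights.items())
          for name, ratings in candidates.items()}
print(scores, max(scores, key=scores.get))   # per-model scores and the winner
```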
Liu, Yawei; Zhang, Xianren
2014-10-07
In this paper, we apply the molecular dynamics simulation method to study the stability of surface nanobubbles in both pure fluids and gas-liquid mixtures. First, we demonstrate with molecular simulations, for the first time, that surface nanobubbles can be stabilized in superheated or gas-supersaturated liquid by contact line pinning caused by surface heterogeneity. Then, a unified mechanism for nanobubble stability is put forward: stabilizing nanobubbles requires both contact line pinning and supersaturation. In this mechanism, supersaturation refers to superheating for pure fluids and to gas supersaturation or superheating for gas-liquid mixtures, both of which exert the same effect on nanobubble stability. As the level of supersaturation increases, we find a Wenzel or Cassie wetting state for undersaturated and saturated fluids, stable nanobubbles at moderate supersaturation with decreasing curvature radius and contact angle, and finally a liquid-to-vapor phase transition at high supersaturation.
Exploring natural supersymmetry at the LHC
NASA Astrophysics Data System (ADS)
Nasir, Fariha
This dissertation demonstrates how a variety of supersymmetric grand unified theories can resolve the little hierarchy problem in the minimal supersymmetric standard model and also explain the observed deviation in the anomalous magnetic moment of the muon. The origin of the little hierarchy problem lies in the sensitive manner in which the Z boson mass depends on parameters that can be much larger than its mass. Large values of these parameters imply that a large fine tuning is required to obtain the correct Z boson mass. With large fine tuning supersymmetry appears unnatural which is why models that attempt to resolve this problem are referred to as natural SUSY models. We show that a possible way to exhibit natural supersymmetry is to assume non-universal gauginos in a class of supersymmetric grand unified models. We further show that considering non-universal gauginos in a class of supersymmetric models can help explain the apparent anomaly in the magnetic moment of the muon.
Some characteristics of supernetworks based on unified hybrid network theory framework
NASA Astrophysics Data System (ADS)
Liu, Qiang; Fang, Jin-Qing; Li, Yong
Compared with single complex networks, supernetworks are closer to the real world in some ways and have recently become a research hot spot in network science. Some progress has been made in the research of supernetworks, but the theoretical methods and complex-network characteristics of supernetwork models still need further exploration. In this paper, we propose three kinds of supernetwork models with three layers, based on the unified hybrid network theory framework (UHNTF), and introduce preferential and random linking, respectively, between the upper and lower layers. We then compare the topological characteristics of single networks with those of the supernetwork models. In order to analyze the influence of interlayer edges on network characteristics, the cross-degree is defined as a new important parameter. Several interesting new phenomena are found, and the results imply that this supernetwork model has reference value and application potential.
Explicit reference governor for linear systems
NASA Astrophysics Data System (ADS)
Garone, Emanuele; Nicotra, Marco; Ntogramatzidis, Lorenzo
2018-06-01
The explicit reference governor is a constrained control scheme that was originally introduced for generic nonlinear systems. This paper presents two explicit reference governor strategies that are specifically tailored to the constrained control of linear time-invariant systems subject to linear constraints. Both strategies are based on the idea of maintaining the system states within an invariant set which is entirely contained in the constraints. This invariant set can be constructed by exploiting either the Lyapunov inequality or modal decomposition. To improve performance, we show that the two strategies can be combined by choosing at each time instant the least restrictive set. Numerical simulations illustrate that the proposed scheme achieves performance comparable to that of optimisation-based reference governors.
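As a concrete illustration of the Lyapunov-inequality strategy, here is a hedged Python sketch (numpy and scipy assumed) for a toy prestabilized double integrator; the system matrices, gains, and the single position constraint are illustrative choices, not from the paper. The applied reference moves toward the desired one at a rate proportional to the gap between the largest constraint-admissible Lyapunov level set and the current Lyapunov function value.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Pre-stabilized double integrator: xdot = A x + B v, where v is the
# applied reference; gains and constraint are illustrative choices.
k_p, k_d = 1.0, 1.5
A = np.array([[0.0, 1.0], [-k_p, -k_d]])
B = np.array([0.0, k_p])

c = np.array([1.0, 0.0])   # constraint: position x1 <= x_max
x_max = 1.0

# Quadratic Lyapunov function V(e) = e^T P e with A^T P + P A = -I.
P = solve_continuous_lyapunov(A.T, -np.eye(2))
cPc = c @ np.linalg.solve(P, c)

def threshold(v):
    # Largest level set around the equilibrium xv = [v, 0] that stays
    # inside the half-space c^T x <= x_max.
    return (x_max - v) ** 2 / cPc

r, v = 0.95, 0.0            # desired and applied references
x = np.zeros(2)
dt = 1e-3
for _ in range(20_000):
    e = x - np.array([v, 0.0])
    rho = max(threshold(v) - e @ P @ e, 0.0)  # dynamic safety margin
    v += dt * rho * np.sign(r - v)            # steer v toward r
    x += dt * (A @ x + B * v)
print(f"position {x[0]:.3f}, applied reference {v:.3f} (limit {x_max})")
```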
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sham, Sam; Walker, Kevin P.
The expected service life of the Next Generation Nuclear Plant is 60 years. Structural analyses of the Intermediate Heat Exchanger (IHX) will require the development of unified viscoplastic constitutive models that address the material behavior of Alloy 617, a construction material of choice, over a wide range of strain rates. Many unified constitutive models employ a yield stress state variable which is used to account for cyclic hardening and softening of the material. For low stress values below the yield stress state variable these constitutive models predict that no inelastic deformation takes place, which is contrary to experimental results. The ability to model creep deformation at low stresses for the IHX application is very important as the IHX operational stresses are restricted to very small values due to the low creep strengths at elevated temperatures and long design lifetime. This paper presents some preliminary work in modeling the unified viscoplastic constitutive behavior of Alloy 617 which accounts for the long-term, low-stress creep behavior and the hysteretic behavior of the material at elevated temperatures. The preliminary model is presented in one-dimensional form for ease of understanding, but the intent of the present work is to produce a three-dimensional model suitable for inclusion in the user subroutines UMAT and USERPL of the ABAQUS and ANSYS nonlinear finite element codes. Further experiments and constitutive modeling efforts are planned to model the material behavior of Alloy 617 in more detail.
An improved artifact removal in exposure fusion with local linear constraints
NASA Astrophysics Data System (ADS)
Zhang, Hai; Yu, Mali
2018-04-01
In exposure fusion, it is challenging to remove artifacts caused by camera motion and moving objects in the scene. An improved artifact removal method is proposed in this paper, which performs local linear adjustment during the artifact removal process. After determining a reference image, we first perform high-dynamic-range (HDR) deghosting to generate an intermediate image stack from the input image stack. Then, a linear Intensity Mapping Function (IMF) is extracted in each window, based on the intensities of the intermediate and reference images and on the intensity mean and variance of the reference image. Finally, with the extracted local linear constraints, we reconstruct a target image stack, which can be directly used for fusing a single HDR-like image. Experiments demonstrate that the proposed method is robust and effective in removing artifacts, especially in the saturated regions of the reference image.
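A minimal Python sketch of the windowed linear-mapping idea follows (numpy assumed). The per-window slope and intercept are derived from window statistics in guided-filter style; this is an illustrative analogue of the paper's IMF extraction rather than the authors' exact formulation, and the window size and regularizer eps are assumptions.

```python
import numpy as np

def local_linear_imf(inter, ref, win=8, eps=1e-4):
    """Fit a linear intensity mapping ref ~ a*inter + b in each window.

    Minimal sketch of the windowed linear-constraint idea; a real
    implementation would overlap windows and blend the coefficients.
    """
    h, w = ref.shape
    out = np.empty_like(ref, dtype=float)
    for i in range(0, h, win):
        for j in range(0, w, win):
            I = inter[i:i+win, j:j+win].astype(float)
            R = ref[i:i+win, j:j+win].astype(float)
            cov = ((I - I.mean()) * (R - R.mean())).mean()
            a = cov / (I.var() + eps)          # slope from window statistics
            b = R.mean() - a * I.mean()        # intercept
            out[i:i+win, j:j+win] = a * I + b  # linearly adjusted window
    return out

# Toy usage with random arrays standing in for one exposure pair.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
inter = 0.5 * ref + 0.1 + 0.01 * rng.standard_normal((64, 64))
adj = local_linear_imf(inter, ref)
print("mean abs error after adjustment:", np.abs(adj - ref).mean())
```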
A Unified Probabilistic Framework for Dose–Response Assessment of Human Health Effects
Slob, Wout
2015-01-01
Background When chemical health hazards have been identified, probabilistic dose–response assessment (“hazard characterization”) quantifies uncertainty and/or variability in toxicity as a function of human exposure. Existing probabilistic approaches differ for different types of endpoints or modes-of-action, lacking a unifying framework. Objectives We developed a unified framework for probabilistic dose–response assessment. Methods We established a framework based on four principles: a) individual and population dose responses are distinct; b) dose–response relationships for all (including quantal) endpoints can be recast as relating to an underlying continuous measure of response at the individual level; c) for effects relevant to humans, “effect metrics” can be specified to define “toxicologically equivalent” sizes for this underlying individual response; and d) dose–response assessment requires making adjustments and accounting for uncertainty and variability. We then derived a step-by-step probabilistic approach for dose–response assessment of animal toxicology data similar to how nonprobabilistic reference doses are derived, illustrating the approach with example non-cancer and cancer datasets. Results Probabilistically derived exposure limits are based on estimating a “target human dose” (HDMI), which requires risk management–informed choices for the magnitude (M) of individual effect being protected against, the remaining incidence (I) of individuals with effects ≥ M in the population, and the percent confidence. In the example datasets, probabilistically derived 90% confidence intervals for HDMI values span a 40- to 60-fold range, where I = 1% of the population experiences ≥ M = 1%–10% effect sizes. Conclusions Although some implementation challenges remain, this unified probabilistic framework can provide substantially more complete and transparent characterization of chemical hazards and support better-informed risk management decisions. Citation Chiu WA, Slob W. 2015. A unified probabilistic framework for dose–response assessment of human health effects. Environ Health Perspect 123:1241–1254; http://dx.doi.org/10.1289/ehp.1409385 PMID:26006063
Global plate motion frames: Toward a unified model
NASA Astrophysics Data System (ADS)
Torsvik, Trond H.; Müller, R. Dietmar; van der Voo, Rob; Steinberger, Bernhard; Gaina, Carmen
2008-09-01
Plate tectonics constitutes our primary framework for understanding how the Earth works over geological timescales. High-resolution mapping of relative plate motions based on marine geophysical data has followed the discovery of geomagnetic reversals, mid-ocean ridges, transform faults, and seafloor spreading, cementing the plate tectonic paradigm. However, so-called "absolute plate motions," describing how the fragments of the outer shell of the Earth have moved relative to a reference system such as the Earth's mantle, are still poorly understood. Accurate absolute plate motion models are essential surface boundary conditions for mantle convection models as well as for understanding past ocean circulation and climate as continent-ocean distributions change with time. A fundamental problem with deciphering absolute plate motions is that the Earth's rotation axis and the averaged magnetic dipole axis are not necessarily fixed to the mantle reference system. Absolute plate motion models based on volcanic hot spot tracks are largely confined to the last 130 Ma and ideally would require knowledge about the motions within the convecting mantle. In contrast, models based on paleomagnetic data reflect plate motion relative to the magnetic dipole axis for most of Earth's history but cannot provide paleolongitudes because of the axial symmetry of the Earth's magnetic dipole field. We analyze four different reference frames (paleomagnetic, African fixed hot spot, African moving hot spot, and global moving hot spot), discuss their uncertainties, and develop a unifying approach for connecting a hot spot track system and a paleomagnetic absolute plate reference system into a "hybrid" model for the time period from the assembly of Pangea (˜320 Ma) to the present. For the last 100 Ma we use a moving hot spot reference frame that takes mantle convection into account, and we connect this to a pre-100 Ma global paleomagnetic frame adjusted 5° in longitude to smooth the reference frame transition. Using plate driving force arguments and the mapping of reconstructed large igneous provinces to core-mantle boundary topography, we argue that continental paleolongitudes can be constrained with reasonable confidence.
Electrical transport and low-frequency noise in chemical vapor deposited single-layer MoS2 devices.
Sharma, Deepak; Amani, Matin; Motayed, Abhishek; Shah, Pankaj B; Birdwell, A Glen; Najmaei, Sina; Ajayan, Pulickel M; Lou, Jun; Dubey, Madan; Li, Qiliang; Davydov, Albert V
2014-04-18
We have studied temperature-dependent (77-300 K) electrical characteristics and low-frequency noise (LFN) in chemical vapor deposited (CVD) single-layer molybdenum disulfide (MoS2) based back-gated field-effect transistors (FETs). Electrical characterization and LFN measurements were conducted on MoS2 FETs with Al2O3 top-surface passivation. We also studied the effect of top-surface passivation etching on the electrical characteristics of the device. Significant decrease in channel current and transconductance was observed in these devices after the Al2O3 passivation etching. For passivated devices, the two-terminal resistance variation with temperature showed a good fit to the activation energy model, whereas for the etched devices the trend indicated a hopping transport mechanism. A significant increase in the normalized drain current noise power spectral density (PSD) was observed after the etching of the top passivation layer. The observed channel current noise was explained using a standard unified model incorporating carrier number fluctuation and correlated surface mobility fluctuation mechanisms. Detailed analysis of the gate-referred noise voltage PSD indicated the presence of different trapping states in passivated devices when compared to the etched devices. Etched devices showed weak temperature dependence of the channel current noise, whereas passivated devices exhibited near-linear temperature dependence.
Radiation from violently accelerated bodies
NASA Astrophysics Data System (ADS)
Gerlach, Ulrich H.
2001-11-01
A determination is made of the radiation emitted by a linearly uniformly accelerated uncharged dipole transmitter. It is found that, first of all, the radiation rate is given by the familiar Larmor formula, but it is augmented by an amount which becomes dominant for sufficiently high acceleration. For an accelerated dipole oscillator, the criterion is that the center of mass motion become relativistic within one oscillation period. The augmented formula and the measurements which it summarizes presuppose an expanding inertial observation frame. A static inertial reference frame will not do. Secondly, it is found that the radiation measured in the expanding inertial frame is received with 100% fidelity. There is no blueshift or redshift due to the accelerative motion of the transmitter. Finally, it is found that a pair of coherently radiating oscillators accelerating (into opposite directions) in their respective causally disjoint Rindler-coordinatized sectors produces an interference pattern in the expanding inertial frame. Like the pattern of a Young double slit interferometer, this Rindler interferometer pattern has a fringe spacing which is inversely proportional to the proper separation and the proper frequency of the accelerated sources. The interferometer, as well as the augmented Larmor formula, provide a unifying perspective. It joins adjacent Rindler-coordinatized neighborhoods into a single spacetime arena for scattering and radiation from accelerated bodies.
2013-01-01
Background The measurement of the Erythrocyte Sedimentation Rate (ESR) value is a standard procedure performed during a typical blood test. In order to formulate a unified standard of establishing reference ESR values, this paper presents a novel prediction model in which local normal ESR values and corresponding geographical factors are used to predict reference ESR values using multi-layer feed-forward artificial neural networks (ANN). Methods and findings Local normal ESR values were obtained from hospital data, while geographical factors that include altitude, sunshine hours, relative humidity, temperature and precipitation were obtained from the National Geographical Data Information Centre in China. The results show that predicted values are statistically in agreement with measured values. Model results exhibit significant agreement between training data and test data. Consequently, the model is used to predict the unseen local reference ESR values. Conclusions Reference ESR values can be established with geographical factors by using artificial intelligence techniques. ANN is an effective method for simulating and predicting reference ESR values because of its ability to model nonlinear and complex relationships. PMID:23497145
Yang, Qingsheng; Mwenda, Kevin M; Ge, Miao
2013-03-12
The measurement of the Erythrocyte Sedimentation Rate (ESR) value is a standard procedure performed during a typical blood test. In order to formulate a unified standard of establishing reference ESR values, this paper presents a novel prediction model in which local normal ESR values and corresponding geographical factors are used to predict reference ESR values using multi-layer feed-forward artificial neural networks (ANN). Local normal ESR values were obtained from hospital data, while geographical factors that include altitude, sunshine hours, relative humidity, temperature and precipitation were obtained from the National Geographical Data Information Centre in China. The results show that predicted values are statistically in agreement with measured values. Model results exhibit significant agreement between training data and test data. Consequently, the model is used to predict the unseen local reference ESR values. Reference ESR values can be established with geographical factors by using artificial intelligence techniques. ANN is an effective method for simulating and predicting reference ESR values because of its ability to model nonlinear and complex relationships.
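For readers who want to try the approach, the sketch below trains a small multi-layer feed-forward network on synthetic stand-in data built from the five geographical factors named above; the generating function, network size, and all numbers are placeholders, not the study's data or architecture. It assumes scikit-learn is available.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in data: five geographical factors -> reference ESR value.
rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.uniform(0, 4000, n),      # altitude (m)
    rng.uniform(1000, 3000, n),   # sunshine hours
    rng.uniform(20, 90, n),       # relative humidity (%)
    rng.uniform(-5, 25, n),       # mean temperature (deg C)
    rng.uniform(100, 2000, n),    # precipitation (mm)
])
# Hypothetical nonlinear dependence used only to generate training targets.
y = (15 - 0.002 * X[:, 0] + 0.05 * X[:, 3] + 0.0005 * X[:, 4]
     + 0.3 * np.sin(X[:, 2] / 10) + rng.normal(0, 0.5, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_tr)

# Multi-layer feed-forward network, as in the abstract (sizes are guesses).
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
ann.fit(scaler.transform(X_tr), y_tr)
print("test R^2:", ann.score(scaler.transform(X_te), y_te))
```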
KMgene: a unified R package for gene-based association analysis for complex traits.
Yan, Qi; Fang, Zhou; Chen, Wei; Stegle, Oliver
2018-02-09
In this report, we introduce an R package KMgene for performing gene-based association tests for familial, multivariate or longitudinal traits using kernel machine (KM) regression under a generalized linear mixed model (GLMM) framework. Extensive simulations were performed to evaluate the validity of the approaches implemented in KMgene. http://cran.r-project.org/web/packages/KMgene. qi.yan@chp.edu or wei.chen@chp.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2018. Published by Oxford University Press.
Unified Theory for Aircraft Handling Qualities and Adverse Aircraft-Pilot Coupling
NASA Technical Reports Server (NTRS)
Hess, R. A.
1997-01-01
A unified theory for aircraft handling qualities and adverse aircraft-pilot coupling or pilot-induced oscillations is introduced. The theory is based on a structural model of the human pilot. A methodology is presented for the prediction of (1) handling qualities levels; (2) pilot-induced oscillation rating levels; and (3) a frequency range in which pilot-induced oscillations are likely to occur. Although the dynamics of the force-feel system of the cockpit inceptor is included, the methodology will not account for effects attributable to control sensitivity and is limited to single-axis tasks and, at present, to linear vehicle models. The theory is derived from the feedback topology of the structural model and an examination of flight test results for 32 aircraft configurations simulated by the U.S. Air Force/CALSPAN NT-33A and Total In-Flight Simulator variable stability aircraft. An extension to nonlinear vehicle dynamics such as that encountered with actuator saturation is discussed.
User's design handbook for a Standardized Control Module (SCM) for DC to DC Converters, volume 2
NASA Technical Reports Server (NTRS)
Lee, F. C.
1980-01-01
A unified design procedure is presented for selecting the key SCM control parameters for an arbitrarily given power stage configuration and parameter values, such that all regulator performance specifications can be met and optimized concurrently in a single design attempt. All key results and performance indices, for buck, boost, and buck/boost switching regulators which are relevant to SCM design considerations are included to facilitate frequent references.
[The planning of resource support of secondary medical care in hospital].
Kungurov, N V; Zil'berberg, N V
2010-01-01
The Ural Institute of Dermatovenerology and Immunopathology developed and implemented software for the personalized total recording of medical services and pharmaceuticals. The Institute also presents software including a listing of medical services; a software module for calculating the financial costs of implementing full standards of secondary medical care in cases of chronic dermatopathy; a reference book of standards of direct specific costs for laboratory and physiotherapy services; and reference books of pharmaceuticals, testing systems, and consumables. The unified information system of management recording is an effective technique for substantiating the costs of implementing standards of medical care, including high-tech care, taking into account the results of the total calculation of provided medical services.
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION.
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-06-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression.
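The one-step local linear approximation (LLA) is straightforward to sketch: fit an initial lasso, compute weights from the derivative of the folded concave penalty (SCAD here), then solve one weighted lasso. The Python code below uses a plain proximal-gradient solver; the dimensions, tuning parameter, and the SCAD constant a = 3.7 are illustrative defaults, not settings from the paper.

```python
import numpy as np

def scad_deriv(beta, lam, a=3.7):
    """Derivative of the SCAD penalty, the weight used by one-step LLA."""
    b = np.abs(beta)
    return np.where(b <= lam, lam, np.maximum(a * lam - b, 0.0) / (a - 1.0))

def weighted_lasso(X, y, w, n_iter=2000):
    """Proximal gradient for 0.5/n * ||y - Xb||^2 + sum_j w_j |b_j|."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n     # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        g = X.T @ (X @ b - y) / n
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # soft threshold
    return b

# Sparse linear regression example (dimensions and signal are illustrative).
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p); beta_true[:3] = [3.0, -2.0, 1.5]
y = X @ beta_true + 0.5 * rng.standard_normal(n)

lam = 0.2
b_lasso = weighted_lasso(X, y, np.full(p, lam))          # lasso initializer
b_lla = weighted_lasso(X, y, scad_deriv(b_lasso, lam))   # one-step LLA
print("support recovered:", np.nonzero(np.abs(b_lla) > 1e-3)[0])
```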
NASA Astrophysics Data System (ADS)
Barra, Adriano; Contucci, Pierluigi; Sandell, Rickard; Vernia, Cecilia
2014-02-01
How does immigrant integration in a country change with immigration density? Guided by a statistical mechanics perspective, we propose a novel approach to this problem. The analysis focuses on classical integration quantifiers such as the percentage of jobs (temporary and permanent) given to immigrants, mixed marriages, and newborns with parents of mixed origin. We find that the average values of different quantifiers may exhibit either linear or non-linear growth with immigrant density, and we suggest that social action, a concept identified by Max Weber, causes the observed non-linearity. Using the statistical mechanics notion of interaction to quantitatively emulate social action, a unified mathematical model for integration is proposed and shown to explain both growth behaviors observed. A linear theory, by ignoring the possibility of interaction effects, would instead underestimate the quantifiers by up to 30% when immigrant densities are low, and overestimate them by as much when densities are high. The capacity to quantitatively isolate different types of integration mechanisms makes our framework a suitable tool in the quest for more efficient integration policies.
STRONG ORACLE OPTIMALITY OF FOLDED CONCAVE PENALIZED ESTIMATION
Fan, Jianqing; Xue, Lingzhou; Zou, Hui
2014-01-01
Folded concave penalization methods have been shown to enjoy the strong oracle property for high-dimensional sparse estimation. However, a folded concave penalization problem usually has multiple local solutions and the oracle property is established only for one of the unknown local solutions. A challenging fundamental issue still remains that it is not clear whether the local optimum computed by a given optimization algorithm possesses those nice theoretical properties. To close this important theoretical gap in over a decade, we provide a unified theory to show explicitly how to obtain the oracle solution via the local linear approximation algorithm. For a folded concave penalized estimation problem, we show that as long as the problem is localizable and the oracle estimator is well behaved, we can obtain the oracle estimator by using the one-step local linear approximation. In addition, once the oracle estimator is obtained, the local linear approximation algorithm converges, namely it produces the same estimator in the next iteration. The general theory is demonstrated by using four classical sparse estimation problems, i.e., sparse linear regression, sparse logistic regression, sparse precision matrix estimation and sparse quantile regression. PMID:25598560
Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi
2012-01-01
The objective of the present study was to assess the applicability of the orthogonal projections to latent structures (OPLS) statistical model versus traditional linear regression in investigating the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation during the first week of admission and again six months later. All data were first analyzed using simple linear regression and then considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression results used for identifying TCD predictors of stroke prognosis were confirmed by the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single-vessel involvement, as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.
Effects of non-tidal atmospheric loading on a Kalman filter-based terrestrial reference frame
NASA Astrophysics Data System (ADS)
Abbondanza, C.; Altamimi, Z.; Chin, T. M.; Collilieux, X.; Dach, R.; Heflin, M. B.; Gross, R. S.; König, R.; Lemoine, F. G.; MacMillan, D. S.; Parker, J. W.; van Dam, T. M.; Wu, X.
2013-12-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS global networks used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, the effect of non-tidal atmospheric loading (NTAL) corrections on the TRF is assessed adopting a Remove/Restore approach: (i) Focusing on the a-posteriori approach, the NTAL model derived from the National Center for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations. (ii) Adopting a Kalman-filter based approach, a linear TRF is estimated combining the 4 SG solutions free from NTAL displacements. (iii) Linear fits to the NTAL displacements removed at step (i) are restored to the linear reference frame estimated at (ii). The velocity fields of the (standard) linear reference frame in which the NTAL model has not been removed and the one in which the model has been removed/restored are compared and discussed.
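A toy numerical illustration of the three-step Remove/Restore recipe, with numpy: the modeled loading displacement is removed from a synthetic position series, a linear frame (position and velocity) is fitted to the cleaned data, and the linear fit of the removed signal is restored. The series, noise level, and seasonal amplitude are invented for illustration, not taken from the analysis above.

```python
import numpy as np

# Toy daily position series (mm) with a seasonal loading signal.
rng = np.random.default_rng(2)
t = np.arange(3650) / 365.25                    # time in years
loading = 3.0 * np.sin(2 * np.pi * t)           # modeled NTAL displacement
series = 1.5 * t + loading + rng.normal(0, 1, t.size)  # trend + NTAL + noise

# (i) Remove: subtract the loading model from the observed series.
removed = series - loading

# (ii) Estimate the linear frame (velocity + offset) on the cleaned data.
vel, pos0 = np.polyfit(t, removed, 1)

# (iii) Restore: add back the linear fit of the removed loading signal.
lvel, lpos0 = np.polyfit(t, loading, 1)
print("estimated velocity (mm/yr):", vel + lvel)
print("velocity from raw series  :", np.polyfit(t, series, 1)[0])
```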
NASA Astrophysics Data System (ADS)
Grombein, T.; Seitz, K.; Heck, B.
2013-12-01
In general, national height reference systems are related to individual vertical datums defined by specific tide gauges. The discrepancy between these vertical datums causes height system biases on the order of 1-2 m at a global scale. Continental height systems can be connected by spirit leveling and gravity measurements along the leveling lines, as performed for the definition of the European Vertical Reference Frame. In order to unify intercontinental height systems, an indirect connection is needed. For this purpose, global geopotential models derived from recent satellite missions like GOCE provide an important contribution. However, to achieve a highly precise solution, a combination with local terrestrial gravity data is indispensable. Such combinations result in the solution of a Geodetic Boundary Value Problem (GBVP). In contrast to previous studies, mostly related to the traditional (scalar) free GBVP, the present paper discusses the use of the fixed GBVP for height system unification, where gravity disturbances instead of gravity anomalies are applied as boundary values. The basic idea of our approach is a conversion of measured gravity anomalies to gravity disturbances, in which unknown datum parameters occur that can be associated with height system biases. In this way, the fixed GBVP can be extended by datum parameters for each datum zone. By evaluating the GBVP at GNSS/leveling benchmarks, the unknown datum parameters can be estimated in a least squares adjustment. Besides the developed theory, we present numerical results of a case study based on the spherical fixed GBVP and boundary values simulated by use of the global geopotential model EGM2008. In a further step, the impact of approximations such as linearization as well as topographic and ellipsoidal effects is taken into account by suitable reduction and correction terms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ren, S; Tianjin University, Tianjin; Hara, W
Purpose: MRI has a number of advantages over CT as a primary modality for radiation treatment planning (RTP). However, one key bottleneck problem still remains, which is the lack of electron density information in MRI. In this work, a reliable method to map electron density is developed by leveraging the differential contrast of multi-parametric MRI. Methods: We propose a probabilistic Bayesian approach for electron density mapping based on T1 and T2-weighted MRI, using multiple patients as atlases. For each voxel, we compute two conditional probabilities: (1) electron density given its image intensity on T1 and T2-weighted MR images, and (2) electron density given its geometric location in a reference anatomy. The two sources of information (image intensity and spatial location) are combined into a unifying posterior probability density function using the Bayesian formalism. The mean value of the posterior probability density function provides the estimated electron density. Results: We evaluated the method on 10 head and neck patients and performed leave-one-out cross validation (9 patients as atlases and remaining 1 as test). The proposed method significantly reduced the errors in electron density estimation, with a mean absolute HU error of 138, compared with 193 for the T1-weighted intensity approach and 261 without density correction. For bone detection (HU>200), the proposed method had an accuracy of 84% and a sensitivity of 73% at specificity of 90% (AUC = 87%). In comparison, the AUC for bone detection is 73% and 50% using the intensity approach and without density correction, respectively. Conclusion: The proposed unifying method provides accurate electron density estimation and bone detection based on multi-parametric MRI of the head with highly heterogeneous anatomy. This could allow for accurate dose calculation and reference image generation for patient setup in MRI-based radiation treatment planning.
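Under a Gaussian stand-in for the two per-voxel conditionals, combining them in the Bayesian formalism reduces to a precision-weighted average; the sketch below illustrates this for a single voxel. The HU means and variances are invented numbers, and a real implementation would estimate both conditionals from the atlas patients.

```python
import numpy as np

def posterior_density(mu_int, var_int, mu_loc, var_loc):
    """Fuse intensity-based and location-based estimates of electron
    density (in HU) by multiplying Gaussian conditionals; the posterior
    mean is the precision-weighted average. A schematic Gaussian stand-in
    for the per-voxel conditional densities described above."""
    w_int, w_loc = 1.0 / var_int, 1.0 / var_loc
    mean = (w_int * mu_int + w_loc * mu_loc) / (w_int + w_loc)
    var = 1.0 / (w_int + w_loc)
    return mean, var

# One voxel: intensity statistics suggest soft tissue, the atlas location
# suggests bone; the fused estimate falls between, favoring the tighter term.
mean, var = posterior_density(mu_int=40.0, var_int=150.0**2,
                              mu_loc=700.0, var_loc=250.0**2)
print(f"posterior HU estimate: {mean:.0f} +/- {np.sqrt(var):.0f}")
```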
On the classification of elliptic foliations induced by real quadratic fields with center
NASA Astrophysics Data System (ADS)
Puchuri, Liliana; Bueno, Orestes
2016-12-01
Related to the study of Hilbert's infinitesimal problem is the problem of determining the existence and estimating the number of limit cycles of linear perturbations of Hamiltonian fields. A classification of the elliptic foliations in the projective plane induced by quadratic fields with center has already been studied by several authors. In this work, we devise a unified proof of the classification of elliptic foliations induced by quadratic fields with center. This technique involves using a formula due to Cerveau & Lins Neto to calculate the genus of the generic fiber of a first integral of foliations of these kinds. Furthermore, we show that these foliations induce several examples of linear families of foliations which are not bimeromorphically equivalent to certain remarkable examples given by Lins Neto.
Assessing Aircraft Susceptibility to Nonlinear Aircraft-Pilot Coupling/Pilot-Induced Oscillations
NASA Technical Reports Server (NTRS)
Hess, R.A.; Stout, P. W.
1997-01-01
A unified approach for assessing aircraft susceptibility to aircraft-pilot coupling (or pilot-induced oscillations) which was previously reported in the literature and applied to linear systems is extended to nonlinear systems, with emphasis upon vehicles with actuator rate saturation. The linear methodology provided a tool for predicting: (1) handling qualities levels, (2) pilot-induced oscillation rating levels and (3) a frequency range in which pilot-induced oscillations are likely to occur. The extension to nonlinear systems provides a methodology for predicting the latter two quantities. Eight examples are presented to illustrate the use of the technique. The dearth of experimental flight-test data involving systematic variation and assessment of the effects of actuator rate limits presently prevents a more thorough evaluation of the methodology.
McConkey, R; Dowling, S; Hassan, D; Menke, S
2013-10-01
Although the promotion of social inclusion through sports has received increased attention with other disadvantaged groups, this is not the case for children and adults with intellectual disability who experience marked social isolation. The study evaluated the outcomes from one sports programme with particular reference to the processes that were perceived to enhance social inclusion. The Youth Unified Sports programme of Special Olympics combines players with intellectual disabilities (called athletes) and those without intellectual disabilities (called partners) of similar skill level in the same sports teams for training and competition. Alongside the development of sporting skills, the programme offers athletes a platform to socialise with peers and to take part in the life of their community. Unified football and basketball teams from five countries--Germany, Hungary, Poland, Serbia and Ukraine--participated. Individual and group interviews were held with athletes, partners, coaches, parents and community leaders: totalling around 40 informants per country. Qualitative data analysis identified four thematic processes that were perceived by informants across all countries and the two sports to facilitate social inclusion of athletes. These were: (1) the personal development of athletes and partners; (2) the creation of inclusive and equal bonds; (3) the promotion of positive perceptions of athletes; and (4) building alliances within local communities. Unified Sports does provide a vehicle for promoting the social inclusion of people with intellectual disabilities that is theoretically credible in terms of social capital scholarship and which contains lessons for advancing social inclusion in other contexts. Nonetheless, certain limitations are identified that require further consideration to enhance athletes' social inclusion in the wider community. © 2012 The Authors. Journal of Intellectual Disability Research © 2012 John Wiley & Sons Ltd, MENCAP & IASSID.
A Unified Probabilistic Framework for Dose-Response Assessment of Human Health Effects.
Chiu, Weihsueh A; Slob, Wout
2015-12-01
When chemical health hazards have been identified, probabilistic dose-response assessment ("hazard characterization") quantifies uncertainty and/or variability in toxicity as a function of human exposure. Existing probabilistic approaches differ for different types of endpoints or modes-of-action, lacking a unifying framework. We developed a unified framework for probabilistic dose-response assessment. We established a framework based on four principles: a) individual and population dose responses are distinct; b) dose-response relationships for all (including quantal) endpoints can be recast as relating to an underlying continuous measure of response at the individual level; c) for effects relevant to humans, "effect metrics" can be specified to define "toxicologically equivalent" sizes for this underlying individual response; and d) dose-response assessment requires making adjustments and accounting for uncertainty and variability. We then derived a step-by-step probabilistic approach for dose-response assessment of animal toxicology data similar to how nonprobabilistic reference doses are derived, illustrating the approach with example non-cancer and cancer datasets. Probabilistically derived exposure limits are based on estimating a "target human dose" (HDMI), which requires risk management-informed choices for the magnitude (M) of individual effect being protected against, the remaining incidence (I) of individuals with effects ≥ M in the population, and the percent confidence. In the example datasets, probabilistically derived 90% confidence intervals for HDMI values span a 40- to 60-fold range, where I = 1% of the population experiences ≥ M = 1%-10% effect sizes. Although some implementation challenges remain, this unified probabilistic framework can provide substantially more complete and transparent characterization of chemical hazards and support better-informed risk management decisions.
Chemical library subset selection algorithms: a unified derivation using spatial statistics.
Hamprecht, Fred A; Thiel, Walter; van Gunsteren, Wilfred F
2002-01-01
If similar compounds have similar activity, rational subset selection becomes superior to random selection in screening for pharmacological lead discovery programs. Traditional approaches to this experimental design problem fall into two classes: (i) a linear or quadratic response function is assumed; (ii) some space-filling criterion is optimized. The assumptions underlying the first approach are clear but not always defendable; the second approach yields more intuitive designs but lacks a clear theoretical foundation. We model activity in a bioassay as the realization of a stochastic process and use the best linear unbiased estimator to construct spatial sampling designs that optimize the integrated mean square prediction error, the maximum mean square prediction error, or the entropy. We argue that our approach constitutes a unifying framework encompassing most proposed techniques as limiting cases and sheds light on their underlying assumptions. In particular, vector quantization is obtained, in dimensions up to eight, in the limiting case of very smooth response surfaces for the integrated mean square error criterion. Closest packing is obtained for very rough surfaces under the integrated mean square error and entropy criteria. We suggest using either the integrated mean square prediction error or the entropy as optimization criteria rather than approximations thereof and propose a scheme for direct iterative minimization of the integrated mean square prediction error. Finally, we discuss how the quality of chemical descriptors manifests itself and clarify the assumptions underlying the selection of diverse or representative subsets.
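A small Python sketch of direct minimization of the integrated mean square prediction error: model the assay response as a Gaussian process over descriptor space and greedily add the candidate that most reduces the average kriging variance. The squared-exponential kernel, its length scale, and the candidate set are assumptions for illustration; the paper's scheme is iterative rather than greedy.

```python
import numpy as np

def kernel(A, B, length=0.3):
    """Squared-exponential covariance; the smoothness is an assumption."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length**2)

def greedy_imse(cands, k, noise=1e-6):
    """Greedily pick k points minimizing the average posterior
    prediction variance (a discretized integrated MSE) over cands."""
    chosen = []
    for _ in range(k):
        best, best_score = None, np.inf
        for j in range(len(cands)):
            if j in chosen:
                continue
            S = cands[chosen + [j]]
            K = kernel(S, S) + noise * np.eye(len(S))
            Kc = kernel(cands, S)
            # Posterior variance at every candidate under the kriging model.
            var = 1.0 - np.einsum('ij,jk,ik->i', Kc, np.linalg.inv(K), Kc)
            if var.mean() < best_score:
                best, best_score = j, var.mean()
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
descriptors = rng.random((60, 2))   # toy 2-D chemical descriptor space
print("selected subset:", greedy_imse(descriptors, k=5))
```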
The modified unified interaction model: incorporation of dose-dependent localised recombination.
Lavon, A; Eliyahu, I; Oster, L; Horowitz, Y S
2015-02-01
The unified interaction model (UNIM) was developed to simulate thermoluminescence (TL) linear/supralinear dose-response and the dependence of the supralinearity on ionisation density, i.e. particle type and energy. Before the development of the UNIM, this behaviour had eluded all types of TL modelling including conduction band/valence band (CB/VB) kinetic models. The dependence of the supralinearity on photon energy was explained in the UNIM as due to the increasing role of geminate (localised recombination) with decreasing photon/electron energy. Recently, the Ben Gurion University group has incorporated the concept of trapping centre/luminescent centre (TC/LC) spatially correlated complexes and localised/delocalised recombination into the CB/VB kinetic modelling of the LiF:Mg,Ti system. Track structure considerations are used to describe the relative population of the TC/LC complexes by an electron-hole or by an electron-only as a function of both photon/electron energy and dose. The latter dependence was not included in the original UNIM formulation, a significant over-simplification that is herein corrected. The modified version, the M-UNIM, is then applied to the simulation of the linear/supralinear dose-response characteristics of composite peak 5 in the TL glow curve of LiF:Mg,Ti at two representative average photon/electron energies of 500 and 8 keV. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Linear Elastic Waves - Series: Cambridge Texts in Applied Mathematics (No. 26)
NASA Astrophysics Data System (ADS)
Harris, John G.
2001-10-01
Wave propagation and scattering are among the most fundamental processes that we use to comprehend the world around us. While these processes are often very complex, one way to begin to understand them is to study wave propagation in the linear approximation. This is a book describing such propagation using, as a context, the equations of elasticity. Two unifying themes are used. The first is that an understanding of plane wave interactions is fundamental to understanding more complex wave interactions. The second is that waves are best understood in an asymptotic approximation where they are free of the complications of their excitation and are governed primarily by their propagation environments. The topics covered include reflection, refraction, the propagation of interfacial waves, integral representations, radiation and diffraction, and propagation in closed and open waveguides. Linear Elastic Waves is an advanced-level textbook directed at applied mathematicians, seismologists, and engineers. Aimed at beginning graduate students, it includes examples and exercises and has applications in a wide range of disciplines.
Robust Stabilization of Uncertain Systems Based on Energy Dissipation Concepts
NASA Technical Reports Server (NTRS)
Gupta, Sandeep
1996-01-01
Robust stability conditions obtained through generalization of the notion of energy dissipation in physical systems are discussed in this report. Linear time-invariant (LTI) systems which dissipate energy corresponding to quadratic power functions are characterized in the time-domain and the frequency-domain, in terms of linear matrix inequalities (LMIs) and algebraic Riccati equations (AREs). A novel characterization of strictly dissipative LTI systems is introduced in this report. Sufficient conditions in terms of dissipativity and strict dissipativity are presented for (1) stability of the feedback interconnection of dissipative LTI systems, (2) stability of dissipative LTI systems with memoryless feedback nonlinearities, and (3) quadratic stability of uncertain linear systems. It is demonstrated that the framework of dissipative LTI systems investigated in this report unifies and extends small gain, passivity, and sector conditions for stability. Techniques for selecting power functions for characterization of uncertain plants and robust controller synthesis based on these stability results are introduced. A spring-mass-damper example is used to illustrate the application of these methods for robust controller synthesis.
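The LMI characterization can be checked numerically. The sketch below tests passivity of a toy LTI system via the standard KYP-type inequality; it assumes the cvxpy package (any SDP solver would do), and the system matrices are illustrative values chosen so that the LMI is feasible.

```python
import numpy as np
import cvxpy as cp  # assumed dependency; any SDP solver would do

# A stable SISO system (values are illustrative, not from the report).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.5]])

# Passivity (quadratic supply rate s(u, y) = u^T y) holds iff the
# KYP-type LMI below is feasible for some P > 0.
P = cp.Variable((2, 2), symmetric=True)
M = cp.bmat([[A.T @ P + P @ A, P @ B - C.T],
             [B.T @ P - C,     -(D + D.T)]])
prob = cp.Problem(cp.Minimize(0),
                  [P >> 1e-6 * np.eye(2), M << 0])
prob.solve()
print("passive LTI system:", prob.status == cp.OPTIMAL)
```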
The stability cycle—A universal pathway for the stability of films over topography
NASA Astrophysics Data System (ADS)
Schörner, Mario; Aksel, Nuri
2018-01-01
In the present study on the linear stability of gravity-driven Newtonian films flowing over inclined topographies, we consider a fundamental question: Is there a universal principle, being valid to describe the parametric evolution of the flow's stability chart for variations of different system parameters? For this sake, we first screened all experimental and numerical stability charts available in the literature. In a second step, we performed experiments to fill the gaps which remained. Variations of the fluid's viscosity and the topography's specific shape, amplitude, wavelength, tip width, and inclination were considered. That way, we identified a set of six characteristic patterns of stability charts to be sufficient to describe and unify all results on the linear stability of Newtonian films flowing over undulated inclines. We unveiled a universal pathway—the stability cycle—along which the linear stability charts of all considered Newtonian films flowing down periodically corrugated inclines evolved when the system parameters were changed.
A geometric approach to failure detection and identification in linear systems
NASA Technical Reports Server (NTRS)
Massoumnia, M. A.
1986-01-01
Using concepts of (C,A)-invariant and unobservability (complementary observability) subspaces, a geometric formulation of the failure detection and identification filter problem is stated. Using these geometric concepts, it is shown that it is possible to design a causal linear time-invariant processor that can be used to detect and uniquely identify a component failure in a linear time-invariant system, assuming: (1) the components can fail simultaneously, or (2) the components can fail only one at a time. In addition, a geometric formulation of Beard's failure detection filter problem is stated. This new formulation completely clarifies the concepts of output separability and mutual detectability introduced by Beard and also exploits the dual relationship between a restricted version of the failure detection and identification problem and the control decoupling problem. Moreover, the frequency domain interpretation of the results is used to relate the concepts of failure-sensitive observers with the generalized parity relations introduced by Chow. This interpretation unifies the various failure detection and identification concepts and design procedures.
NASA Astrophysics Data System (ADS)
Schuch, Dieter
2014-04-01
Theoretical physics seems to be in a kind of schizophrenic state. Many phenomena in the observable macroscopic world obey nonlinear evolution equations, whereas the microscopic world is governed by quantum mechanics, a fundamental theory that is supposedly linear. In order to combine these two worlds in a common formalism, at least one of them must sacrifice one of its dogmas. I claim that linearity in quantum mechanics is not as essential as it apparently seems since quantum mechanics can be reformulated in terms of nonlinear Riccati equations. In a first step, it will be shown where complex Riccati equations appear in time-dependent quantum mechanics and how they can be treated and compared with similar space-dependent Riccati equations in supersymmetric quantum mechanics. Furthermore, the time-independent Schrödinger equation can also be rewritten as a complex Riccati equation. Finally, it will be shown that (real and complex) Riccati equations also appear in many other fields of physics, like statistical thermodynamics and cosmology.
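The time-independent case is easy to verify numerically: substituting the logarithmic derivative y = psi'/psi into the Schrödinger equation yields the Riccati equation y' + y^2 = (2m/ħ^2)(V - E). The Python sketch below checks this identity for the harmonic-oscillator ground state with ħ = m = ω = 1, where y = -x exactly; the grid and range are arbitrary choices.

```python
import numpy as np

# Harmonic oscillator ground state with hbar = m = omega = 1:
# psi(x) ~ exp(-x^2/2), E = 1/2, V(x) = x^2/2.
x = np.linspace(-3, 3, 601)
psi = np.exp(-x**2 / 2)

# Logarithmic derivative y = psi'/psi turns the linear Schroedinger
# equation into the nonlinear Riccati equation y' + y^2 = 2 (V - E).
y = np.gradient(psi, x) / psi
lhs = np.gradient(y, x) + y**2
rhs = 2 * (x**2 / 2 - 0.5)

# Small residual (finite-difference error only); edges are excluded.
print("max |lhs - rhs|:", np.max(np.abs(lhs - rhs)[5:-5]))
```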
40 CFR 1065.307 - Linearity verification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... different flow rates. Use a gravimetric reference measurement (such as a scale, balance, or mass comparator... the gas-division system to divide the span gas with purified air or nitrogen. Select gas divisions... PM balance, m_max refers to the typical mass of a PM filter. (ii) For linearity verification of...
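Linearity verifications of this kind compare least-squares regression statistics of measured versus reference values (intercept a0, slope a1, standard error of the estimate, and r²) against a table of criteria. The Python sketch below computes those four statistics; the ten reference quantities and simulated responses are placeholders, and the pass/fail thresholds depend on the particular measurement system and are not reproduced here.

```python
import numpy as np

# Ten reference quantities (e.g., from a gas divider or reference masses)
# and the instrument's mean responses at each point; values illustrative.
ref = np.linspace(10.0, 100.0, 10)
meas = ref * 1.002 - 0.3 + np.random.default_rng(0).normal(0, 0.2, 10)

# Least-squares slope a1 and intercept a0 of measured vs. reference.
a1, a0 = np.polyfit(ref, meas, 1)
fit = a0 + a1 * ref
see = np.sqrt(np.sum((meas - fit) ** 2) / (len(ref) - 2))  # std. error of estimate
r2 = 1 - np.sum((meas - fit) ** 2) / np.sum((meas - meas.mean()) ** 2)

print(f"a0={a0:.3f}, a1={a1:.4f}, SEE={see:.3f}, r^2={r2:.5f}")
# These statistics are then checked against the applicable criteria table.
```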
It’s More Than Stamp Collecting: How Genome Sequencing Can Unify Biological Research
Richards, Stephen
2015-01-01
The availability of reference genome sequences, especially the human reference, has revolutionized the study of biology. However, whilst the genomes of some species have been fully sequenced, a wide range of biological problems still cannot be effectively studied for lack of genome sequence information. Here, I identify neglected areas of biology and describe how both targeted species sequencing and more broad taxonomic surveys of the tree of life can address important biological questions. I enumerate the significant benefits that would accrue from sequencing a broader range of taxa, as well as discuss the technical advances in sequencing and assembly methods that would allow for wide-ranging application of whole-genome analysis. Finally, I suggest that in addition to “Big Science” survey initiatives to sequence the tree of life, a modified infrastructure-funding paradigm would better support reference genome sequence generation for research communities most in need. PMID:26003218
It's more than stamp collecting: how genome sequencing can unify biological research.
Richards, Stephen
2015-07-01
The availability of reference genome sequences, especially the human reference, has revolutionized the study of biology. However, while the genomes of some species have been fully sequenced, a wide range of biological problems still cannot be effectively studied for lack of genome sequence information. Here, I identify neglected areas of biology and describe how both targeted species sequencing and more broad taxonomic surveys of the tree of life can address important biological questions. I enumerate the significant benefits that would accrue from sequencing a broader range of taxa, as well as discuss the technical advances in sequencing and assembly methods that would allow for wide-ranging application of whole-genome analysis. Finally, I suggest that in addition to 'big science' survey initiatives to sequence the tree of life, a modified infrastructure-funding paradigm would better support reference genome sequence generation for research communities most in need. Copyright © 2015 Elsevier Ltd. All rights reserved.
The International Human Epigenome Consortium Data Portal.
Bujold, David; Morais, David Anderson de Lima; Gauthier, Carol; Côté, Catherine; Caron, Maxime; Kwan, Tony; Chen, Kuang Chung; Laperle, Jonathan; Markovits, Alexei Nordell; Pastinen, Tomi; Caron, Bryan; Veilleux, Alain; Jacques, Pierre-Étienne; Bourque, Guillaume
2016-11-23
The International Human Epigenome Consortium (IHEC) coordinates the production of reference epigenome maps through the characterization of the regulome, methylome, and transcriptome from a wide range of tissues and cell types. To define conventions ensuring the compatibility of datasets and establish an infrastructure enabling data integration, analysis, and sharing, we developed the IHEC Data Portal (http://epigenomesportal.ca/ihec). The portal provides access to >7,000 reference epigenomic datasets, generated from >600 tissues, which have been contributed by seven international consortia: ENCODE, NIH Roadmap, CEEHRC, Blueprint, DEEP, AMED-CREST, and KNIH. The portal enhances the utility of these reference maps by facilitating the discovery, visualization, analysis, download, and sharing of epigenomics data. The IHEC Data Portal is the official source to navigate through IHEC datasets and represents a strategy for unifying the distributed data produced by international research consortia. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.
Adaptive control in the presence of unmodeled dynamics. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Rohrs, C. E.
1982-01-01
Stability and robustness properties of a wide class of adaptive control algorithms in the presence of unmodeled dynamics and output disturbances were investigated. The class of adaptive algorithms considered are those commonly referred to as model reference adaptive control algorithms, self-tuning controllers, and dead beat adaptive controllers, developed for both continuous-time systems and discrete-time systems. A unified analytical approach was developed to examine the class of existing adaptive algorithms. It was discovered that all existing algorithms contain an infinite gain operator in the dynamic system that defines command reference errors and parameter errors; it is argued that such an infinite gain operator appears to be generic to all adaptive algorithms, whether they exhibit explicit or implicit parameter identification. It is concluded that none of the adaptive algorithms considered can be used with confidence in a practical control system design, because instability will set in with a high probability.
A Neurobehavioral Model of Flexible Spatial Language Behaviors
Lipinski, John; Schneegans, Sebastian; Sandamirskaya, Yulia; Spencer, John P.; Schöner, Gregor
2012-01-01
We propose a neural dynamic model that specifies how low-level visual processes can be integrated with higher level cognition to achieve flexible spatial language behaviors. This model uses real-world visual input that is linked to relational spatial descriptions through a neural mechanism for reference frame transformations. We demonstrate that the system can extract spatial relations from visual scenes, select items based on relational spatial descriptions, and perform reference object selection in a single unified architecture. We further show that the performance of the system is consistent with behavioral data in humans by simulating results from 2 independent empirical studies, 1 spatial term rating task and 1 study of reference object selection behavior. The architecture we present thereby achieves a high degree of task flexibility under realistic stimulus conditions. At the same time, it also provides a detailed neural grounding for complex behavioral and cognitive processes. PMID:21517224
Tomalia, Donald A; Khanna, Shiv N
2016-02-24
Development of a central paradigm is undoubtedly the single most influential force responsible for advancing Dalton's 19th century atomic/molecular chemistry concepts to the current maturity enjoyed by traditional chemistry. A similar central dogma for guiding and unifying nanoscience has been missing. This review traces the origins, evolution, and current status of such a critical nanoperiodic concept/framework for defining and unifying nanoscience. Based on parallel efforts and a mutual consensus now shared by both chemists and physicists, a nanoperiodic/systematic framework concept has emerged. This concept is based on the well-documented existence of discrete, nanoscale collections of traditional inorganic/organic atoms referred to as hard and soft superatoms (i.e., nanoelement categories). These nanometric entities are widely recognized to exhibit nanoscale atom mimicry features reminiscent of traditional picoscale atoms. All unique superatom/nanoelement physicochemical features are derived from quantized structural control defined by six critical nanoscale design parameters (CNDPs), namely, size, shape, surface chemistry, flexibility/rigidity, architecture, and elemental composition. These CNDPs determine all intrinsic superatom properties, their combining behavior to form stoichiometric nanocompounds/assemblies as well as to exhibit nanoperiodic properties leading to new nanoperiodic rules and predictive Mendeleev-like nanoperiodic tables, and they portend possible extension of these principles to larger quantized building blocks including meta-atoms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Guang; Fan, Jiwen; Xu, Kuan-Man
2015-06-01
Arakawa and Wu (2013, hereafter referred to as AW13) recently developed a formal approach to a unified parameterization of atmospheric convection for high-resolution numerical models. The work is based on ideas formulated by Arakawa et al. (2011). It lays the foundation for a new parameterization pathway in the era of high-resolution numerical modeling of the atmosphere. The key parameter in this approach is the convective cloud fraction σ. In conventional parameterization, it is assumed that σ ≪ 1. This assumption is no longer valid when the horizontal resolution of numerical models approaches a few to a few tens of kilometers, since in such situations the convective cloud fraction can be comparable to unity. Therefore, they argue that the conventional approach to parameterizing convective transport must include a factor 1 − σ in order to unify the parameterization for the full range of model resolutions, so that it is scale-aware and valid for large convective cloud fractions. While AW13's approach provides important guidance for future convective parameterization development, in this note we intend to show that the conventional approach already has this scale-awareness factor 1 − σ built in, although not recognized for the last forty years. Therefore, it should work well even in situations of large convective cloud fractions in high-resolution numerical models.
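The role of the 1 − σ factor can be seen directly in the two-value (updraft/environment) decomposition of the eddy flux, sketched below in Python; the updraft and environment values are invented for illustration, and the flux expression is the standard top-hat identity rather than either paper's full scheme.

```python
import numpy as np

# Top-hat decomposition of the eddy flux of a scalar h between convective
# updrafts (area fraction sigma) and the environment:
# w'h' = sigma * (1 - sigma) * (w_c - w_e) * (h_c - h_e).
def eddy_flux(sigma, w_c, w_e, h_c, h_e):
    return sigma * (1 - sigma) * (w_c - w_e) * (h_c - h_e)

for sigma in (0.05, 0.3, 0.6, 0.9):
    print(sigma, eddy_flux(sigma, w_c=5.0, w_e=-0.1, h_c=345.0, h_e=340.0))
# The flux vanishes as sigma -> 1: the behavior the 1 - sigma factor encodes.
```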
NASA Technical Reports Server (NTRS)
Hanson, D. B.; Mccolgan, C. J.; Ladden, R. M.; Klatte, R. J.
1991-01-01
Results of the program for the generation of a computer prediction code for noise of advanced single-rotation turboprops (prop-fans) such as the SR3 model are presented. The code is based on a linearized theory developed at Hamilton Standard in which aerodynamics and acoustics are treated as a unified process. Both steady and unsteady blade loading are treated. Capabilities include prediction of steady airload distributions and associated aerodynamic performance, unsteady blade pressure response to gust interaction or blade vibration, noise fields associated with thickness and steady and unsteady loading, and wake velocity fields associated with steady loading. The code was developed on the Hamilton Standard IBM computer and has now been installed on the Cray XMP at NASA-Lewis. The work had its genesis in the frequency domain acoustic theory developed at Hamilton Standard in the late 1970s. It was found that the method used for near field noise predictions could be adapted as a lifting surface theory for aerodynamic work via the pressure potential technique that was used for both wings and ducted turbomachinery. In the first realization of the theory for propellers, the blade loading was represented in a quasi-vortex lattice form. This was upgraded to true lifting surface loading. Originally, it was believed that a purely linear approach for both aerodynamics and noise would be adequate. However, two sources of nonlinearity in the steady aerodynamics became apparent and were found to be a significant factor at takeoff conditions. The first is related to the fact that the steady axial induced velocity may be of the same order of magnitude as the flight speed, and the second is the formation of leading edge vortices, which increase lift and redistribute loading. The discovery and properties of prop-fan leading edge vortices were reported in two papers. The Unified AeroAcoustic Program (UAAP) capabilities are demonstrated and the theory verified by comparing the predictions with data from tests at NASA-Lewis. Steady aerodynamic performance, unsteady blade loading, wakes, noise, and wing and boundary layer shielding are examined.
An inherent curvature-compensated voltage reference using non-linearity of gate coupling coefficient
NASA Astrophysics Data System (ADS)
Hande, Vinayak; Shojaei Baghini, Maryam
2015-08-01
A novel current-mode voltage reference circuit that is capable of generating a sub-1 V output voltage is presented. The proposed architecture exhibits an inherent curvature compensation ability. The curvature compensation is achieved by utilizing the non-linear behavior of the gate coupling coefficient to compensate the non-linear temperature dependence of the base-emitter voltage. We have also utilized developments in the CMOS process to reduce power and area consumption. The proposed voltage reference is analyzed theoretically and compared with other existing methods. The circuit is designed and simulated in 180 nm mixed-mode UMC CMOS technology, which gives a reference level of 246 mV. The minimum required supply voltage is 1 V with a maximum current draw of 9.24 μA. A temperature coefficient of 9 ppm/°C is achieved over the -25 to 125 °C temperature range. The reference voltage varies by ±11 mV across process corners. The reference circuit shows a line sensitivity of 0.9 mV/V with an area consumption of 100 × 110 μm².
Tirado-Ramos, Alfredo; Hu, Jingkun; Lee, K.P.
2002-01-01
Supplement 23 to DICOM (Digital Imaging and Communications for Medicine), Structured Reporting, is a specification that supports a semantically rich representation of image and waveform content, enabling experts to share image and related patient information. DICOM SR supports the representation of textual and coded data linked to images and waveforms. Nevertheless, the medical information technology community needs models that work as bridges between the DICOM relational model and open object-oriented technologies. The authors assert that representations of the DICOM Structured Reporting standard, using object-oriented modeling languages such as the Unified Modeling Language, can provide a high-level reference view of the semantically rich framework of DICOM and its complex structures. They have produced an object-oriented model to represent the DICOM SR standard and have derived XML-exchangeable representations of this model using World Wide Web Consortium specifications. They expect the model to benefit developers and system architects who are interested in developing applications that are compliant with the DICOM SR specification. PMID:11751804
An Integrative Account of Constraints on Cross-Situational Learning
Yurovsky, Daniel; Frank, Michael C.
2015-01-01
Word-object co-occurrence statistics are a powerful information source for vocabulary learning, but there is considerable debate about how learners actually use them. While some theories hold that learners accumulate graded, statistical evidence about multiple referents for each word, others suggest that they track only a single candidate referent. In two large-scale experiments, we show that neither account is sufficient: Cross-situational learning involves elements of both. Further, the empirical data are captured by a computational model that formalizes how memory and attention interact with co-occurrence tracking. Together, the data and model unify opposing positions in a complex debate and underscore the value of understanding the interaction between computational and algorithmic levels of explanation. PMID:26302052
2013-01-01
Fragment from a spectral-element modeling report (equations and figure captions garbled in extraction). Recoverable content: the one-dimensional Lagrange polynomials h_i(ξ) are defined via the derivative of the Nth-order Legendre polynomial at the Legendre-Gauss-Lobatto (LGL) points; a figure shows detail of one interface patch in the northern hemisphere, where high-order LGL points are added to the linear grid; the final grid is composed of quadrilateral elements built on this structure.
FREQ: A computational package for multivariable system loop-shaping procedures
NASA Technical Reports Server (NTRS)
Giesy, Daniel P.; Armstrong, Ernest S.
1989-01-01
Many approaches in the field of linear, multivariable time-invariant systems analysis and controller synthesis employ loop-shaping procedures wherein design parameters are chosen to shape frequency-response singular value plots of selected transfer matrices. A software package, FREQ, is documented for computing within one unified framework many of the most used multivariable transfer matrices for both continuous and discrete systems. The matrices are evaluated at user-selected frequency values, and their singular values are plotted against frequency. Example computations are presented to demonstrate the use of the FREQ code.
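The core computation described, singular values of a transfer matrix over a frequency grid, can be sketched in a few lines. FREQ itself is NASA software; the snippet below is only an illustration of the underlying calculation, with placeholder state-space matrices:

```python
import numpy as np

# State-space model G(s) = C (sI - A)^{-1} B + D; the matrices below are
# illustrative placeholders, not an example from the FREQ documentation.
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

freqs = np.logspace(-2, 2, 200)           # rad/s
sv = []
for w in freqs:
    G = C @ np.linalg.solve(1j * w * np.eye(2) - A, B) + D
    sv.append(np.linalg.svd(G, compute_uv=False))
sv = np.array(sv)                         # singular values vs. frequency
print(sv.max(), sv.min())
```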
Assessing the formability of metallic sheets by means of localized and diffuse necking models
NASA Astrophysics Data System (ADS)
Comşa, Dan-Sorin; Lǎzǎrescu, Lucian; Banabic, Dorel
2016-10-01
The main objective of the paper is to elaborate a unified framework that allows the theoretical assessment of sheet metal formability. Hill's localized necking model and the Extended Maximum Force Criterion proposed by Mattiasson, Sigvant, and Larsson have been selected for this purpose. Both models are thoroughly described together with their solution procedures. A comparison of the theoretical predictions with experimental data referring to the formability of a DP600 steel sheet is also presented.
NASA Astrophysics Data System (ADS)
Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord
2017-04-01
This article establishes a new family of methods to perform temperature interpolation of nuclear interactions cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T - namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin ,Tmax ]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of 238U total cross section over the temperature range [ 300 K , 3000 K ] with only 9 reference temperatures.
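The L2-optimal coefficients described above reduce to a small linear system: a Gram matrix of inner products between reference-temperature kernels, solved against the inner products with the target-temperature kernel. A minimal numpy sketch, using a Gaussian whose width grows with √T as a stand-in for the true Doppler kernel (an illustrative assumption, not the kernel used in the article):

```python
import numpy as np

def kernel(x, T):
    # Illustrative stand-in: Gaussian of width ~ sqrt(T); the article uses
    # the actual Doppler broadening kernel, which is not reproduced here.
    s = np.sqrt(T)
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2 * np.pi))

x = np.linspace(-50, 50, 4001)
T_refs = [300.0, 900.0, 2100.0, 3000.0]   # reference temperatures (K)
T = 1500.0                                 # target temperature

K = np.array([kernel(x, Tj) for Tj in T_refs])   # kernels at the T_j
G = K @ K.T                                      # Gram matrix <k_i, k_j>
b = K @ kernel(x, T)                             # right-hand side <k_i, k_T>
c = np.linalg.solve(G, b)                        # L2-optimal coefficients

# Reconstruct the kernel at T as a linear combination of reference kernels
k_hat = c @ K
print(c, np.max(np.abs(k_hat - kernel(x, T))))
```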
Structured sparse linear graph embedding.
Wang, Haixian
2012-03-01
Subspace learning is a core issue in pattern recognition and machine learning. Linear graph embedding (LGE) is a general framework for subspace learning. In this paper, we propose a structured sparse extension to LGE (SSLGE) by introducing a structured sparsity-inducing norm into LGE. Specifically, SSLGE casts the projection bases learning into a regression-type optimization problem, and the structured sparsity regularization is then applied to the regression coefficients. The regularization selects a subset of features and meanwhile encodes high-order information reflecting a priori structural information of the data. The SSLGE technique provides a unified framework for discovering structured sparse subspaces. Computationally, by using a variational equality and the Procrustes transformation, SSLGE is efficiently solved with closed-form updates. Experimental results on face images show the effectiveness of the proposed method. Copyright © 2011 Elsevier Ltd. All rights reserved.
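The paper's closed-form Procrustes updates are not reproduced here; as a generic illustration of a structured sparsity penalty applied to regression coefficients, the sketch below runs proximal-gradient steps with a row-wise (l2,1) norm, which zeroes out whole feature groups:

```python
import numpy as np

def group_soft_threshold(W, thresh):
    """Row-wise (group) shrinkage: the proximal operator of the l2,1 norm."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - thresh / np.maximum(norms, 1e-12))
    return scale * W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))          # data (samples x features)
Y = rng.normal(size=(100, 5))           # regression targets (e.g. embeddings)
W = np.zeros((30, 5))                   # projection bases / coefficients
lam, step = 0.5, 1e-3

for _ in range(500):                    # proximal gradient iterations
    grad = X.T @ (X @ W - Y)
    W = group_soft_threshold(W - step * grad, step * lam)

print("selected features:", np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-8))
```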
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Hua-Sheng
2013-09-15
A unified, fast, and effective approach is developed for numerical calculation of the well-known plasma dispersion function with extensions from Maxwellian distribution to almost arbitrary distribution functions, such as the δ, flat top, triangular, κ or Lorentzian, slowing down, and incomplete Maxwellian distributions. The singularity and analytic continuation problems are also solved generally. Given that the usual conclusion γ ∝ ∂f₀/∂v is only a rough approximation when discussing the distribution function effects on Landau damping, this approach provides a useful tool for rigorous calculations of the linear wave and instability properties of plasma for general distribution functions. The results are also verified via a linear initial value simulation approach. Intuitive visualizations of the generalized plasma dispersion function are also provided.
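For the Maxwellian special case, the plasma dispersion function has the well-known closed form Z(ζ) = i√π w(ζ), with w the Faddeeva function; a minimal check with scipy (the paper's generalized machinery for arbitrary distributions is not reproduced here):

```python
import numpy as np
from scipy.special import wofz

def Z(zeta):
    """Maxwellian plasma dispersion function via the Faddeeva function w."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def Zprime(zeta):
    """Derivative, from the identity Z'(zeta) = -2 (1 + zeta Z(zeta))."""
    return -2.0 * (1.0 + zeta * Z(zeta))

# Small-argument check: Z(0) = i*sqrt(pi), since w(0) = 1
print(Z(0.0), 1j * np.sqrt(np.pi))
```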
Dai, James Y.; Hughes, James P.
2012-01-01
The meta-analytic approach to evaluating surrogate end points assesses the predictiveness of treatment effect on the surrogate toward treatment effect on the clinical end point based on multiple clinical trials. Definition and estimation of the correlation of treatment effects were developed in linear mixed models and later extended to binary or failure time outcomes on a case-by-case basis. In a general regression setting that covers nonnormal outcomes, we discuss in this paper several metrics that are useful in the meta-analytic evaluation of surrogacy. We propose a unified 3-step procedure to assess these metrics in settings with binary end points, time-to-event outcomes, or repeated measures. First, the joint distribution of estimated treatment effects is ascertained by an estimating equation approach; second, the restricted maximum likelihood method is used to estimate the means and the variance components of the random treatment effects; finally, confidence intervals are constructed by a parametric bootstrap procedure. The proposed method is evaluated by simulations and applications to 2 clinical trials. PMID:22394448
NASA Astrophysics Data System (ADS)
Yan, Jiawei; Ke, Youqi
In realistic nanoelectronics, disordered impurities/defects are inevitable and play important roles in electron transport. However, due to the lack of an effective quantum transport method, the important effects of disorders remain poorly understood. Here, we report a generalized non-equilibrium vertex correction (NVC) method with coherent potential approximation to treat the disorder effects in quantum transport simulation. With this generalized NVC method, any averaged product of two single-particle Green's functions can be obtained by solving a set of simple linear equations. As a result, the averaged non-equilibrium density matrix and various important transport properties, including averaged current, disorder-induced current fluctuation and the averaged shot noise, can all be efficiently computed in a unified scheme. Moreover, a generalized form of the conditionally averaged non-equilibrium Green's function is derived for incorporation with density functional theory to enable first-principles simulation. We prove that the non-equilibrium coherent potential equals the non-equilibrium vertex correction. Our approach provides a unified, efficient and self-consistent method for simulating non-equilibrium quantum transport through disordered nanoelectronics. Shanghaitech start-up fund.
The mass-action law based algorithms for quantitative econo-green bio-research.
Chou, Ting-Chao
2011-05-01
The relationship between dose and effect is not random, but rather governed by the unified theory based on the median-effect equation (MEE) of the mass-action law. Rearrangement of MEE yields the mathematical form of the Michaelis-Menten, Hill, Henderson-Hasselbalch and Scatchard equations of biochemistry and biophysics, and the median-effect plot allows linearization of all dose-effect curves regardless of potency and shape. The "median" is the universal common-link and reference-point from 1st-order to higher-order dynamics, and from single entities to multiple entities; it thus allows the "all for one and one for all" unity theory to "integrate" simple and complex systems. Its applications include the construction of a dose-effect curve with a theoretical minimum of only two data points if they are accurately determined; quantification of synergism or antagonism at all dose and effect levels; low-dose risk assessment for carcinogens, toxic substances or radiation; and the determination of competitiveness and exclusivity for receptor binding. Since the MEE algorithm reduces the required number of data points for small-scale experimentation, and yields quantitative bioinformatics, it points toward deterministic, efficient, low-cost biomedical research and drug discovery, and ethical planning for clinical trials. It is concluded that the contemporary biomedical sciences would greatly benefit from the mass-action law based "Green Revolution".
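The median-effect plot's linearization can be written out directly: taking logarithms of fa/fu = (D/Dm)^m gives log(fa/(1-fa)) = m log D - m log Dm, so the slope m and the median-effect dose Dm follow from a straight-line fit. A minimal sketch with invented dose-effect data:

```python
import numpy as np

# Illustrative dose-effect data (dose D, fraction affected fa); invented here.
D  = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
fa = np.array([0.12, 0.26, 0.50, 0.74, 0.90])

# Median-effect plot: log(fa/fu) vs log(D) is linear with slope m and
# intercept -m*log10(Dm), per the median-effect equation fa/fu = (D/Dm)^m.
y = np.log10(fa / (1.0 - fa))
m, b = np.polyfit(np.log10(D), y, 1)
Dm = 10 ** (-b / m)
print(f"m = {m:.2f}, Dm = {Dm:.2f}")   # Dm is the median-effect dose
```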
Sparse representation of whole-brain fMRI signals for identification of functional networks.
Lv, Jinglei; Jiang, Xi; Li, Xiang; Zhu, Dajiang; Chen, Hanbo; Zhang, Tuo; Zhang, Shu; Hu, Xintao; Han, Junwei; Huang, Heng; Zhang, Jing; Guo, Lei; Liu, Tianming
2015-02-01
There have been several recent studies that used sparse representation for fMRI signal analysis and activation detection based on the assumption that each voxel's fMRI signal is linearly composed of sparse components. Previous studies have employed sparse coding to model functional networks in various modalities and scales. These prior contributions inspired the exploration of whether/how sparse representation can be used to identify functional networks in a voxel-wise way and on the whole brain scale. This paper presents a novel, alternative methodology of identifying multiple functional networks via sparse representation of whole-brain task-based fMRI signals. Our basic idea is that all fMRI signals within the whole brain of one subject are aggregated into a big data matrix, which is then factorized into an over-complete dictionary basis matrix and a reference weight matrix via an effective online dictionary learning algorithm. Our extensive experimental results have shown that this novel methodology can uncover multiple functional networks that can be well characterized and interpreted in spatial, temporal and frequency domains based on current brain science knowledge. Importantly, these well-characterized functional network components are quite reproducible in different brains. In general, our methods offer a novel, effective and unified solution to multiple fMRI data analysis tasks including activation detection, de-activation detection, and functional network identification. Copyright © 2014 Elsevier B.V. All rights reserved.
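The factorization described, aggregating signals into a matrix and learning an over-complete dictionary with an online algorithm, can be sketched with scikit-learn's online dictionary learner. The solver and settings in the paper differ; the data and parameters below are illustrative only:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 200))   # stand-in fMRI: 5000 voxels x 200 time points

# Learn an over-complete set of temporal atoms; each voxel's time series is
# coded as a sparse combination of them.
dl = MiniBatchDictionaryLearning(n_components=50, alpha=1.0,
                                 batch_size=256, random_state=0)
codes = dl.fit_transform(X)        # (voxels x atoms): spatial loading maps
atoms = dl.components_             # (atoms x time): network time courses

# Each column of `codes` is a whole-brain spatial map for one temporal atom,
# i.e. a candidate functional network component.
print(codes.shape, atoms.shape)
```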
2002-01-01
This report presents detailed information on age- and gender-related differences in the anatomical and physiological characteristics of reference individuals. These reference values provide needed input to prospective dosimetry calculations for radiation protection purposes for both workers and members of the general public. The purpose of this report is to consolidate and unify, in one publication, important new information on reference anatomical and physiological values that has become available since Publication 23 was published by the ICRP in 1975. There are two aspects of this work. The first is to revise and extend the information in Publication 23 as appropriate. The second is to provide additional information on individual variation among grossly normal individuals resulting from differences in age, gender, race, or other factors. This publication collects, unifies, and expands the updated ICRP reference values for the purpose of providing a comprehensive and consistent set of age- and gender-specific reference values for anatomical and physiological features of the human body pertinent to radiation dosimetry. The reference values given in this report are based on: (a) anatomical and physiological information not published before by the ICRP; (b) recent ICRP publications containing reference value information; and (c) information in Publication 23 that is still considered valid and appropriate for radiation protection purposes. Moving from the past emphasis on 'Reference Man', the new report presents a series of reference values for both male and female subjects of six different ages: newborn, 1 year, 5 years, 10 years, 15 years, and adult. In selecting reference values, the Commission has used data on Western Europeans and North Americans because these populations have been well studied with respect to anatomy, body composition, and physiology. When appropriate, comparisons are made between the chosen reference values and data from several Asian populations. The first section of the report provides summary tables of all the anatomical and physiological parameters given as reference values in this publication. These results give a comprehensive view of reference values for an individual as influenced by age and gender. The second section describes characteristics of dosimetric importance for the embryo and fetus. Information is provided on the development of the total body and the timing of appearance and development of the various organ systems. Reference values are provided on the mass of the total body and selected organs and tissues, as well as a number of physiological parameters. The third section deals with reference values of important anatomical and physiological characteristics of reference individuals from birth to adulthood. This section begins with details on the growth and composition of the total body in males and females. It then describes and quantifies anatomical and physiological characteristics of various organ systems and changes in these characteristics during growth, maturity, and pregnancy. Reference values are specified for characteristics of dosimetric importance. The final section gives a brief summary of the elemental composition of individuals. Focusing on the elements of dosimetric importance, information is presented on the body content of 13 elements: calcium, carbon, chlorine, hydrogen, iodine, iron, magnesium, nitrogen, oxygen, potassium, sodium, sulphur, and phosphorus.
Rubert, Josep; James, Kevin J; Mañes, Jordi; Soler, Carla
2012-02-03
Recent developments in mass spectrometers have created a paradoxical situation; different mass spectrometers are available, each of them with their specific strengths and drawbacks. Hybrid instruments try to unify several advantages in one instrument. In this study, two widely used hybrid instruments were compared: the hybrid quadrupole-linear ion trap mass spectrometer (QTRAP®) and the hybrid linear ion trap-high resolution mass spectrometer (LTQ-Orbitrap®). Both instruments were applied to detect the presence of 18 selected mycotoxins in baby food. Analytical parameters were validated according to 2002/657/CE. Limits of quantification (LOQs) obtained by the QTRAP® instrument ranged from 0.45 to 45 μg kg⁻¹, while the lower limits of quantification (LLOQs) obtained by the LTQ-Orbitrap® were 7-70 μg kg⁻¹. The correlation coefficients (r) in both cases were greater than 0.989. These values highlighted that the two instruments are complementary for the analysis of mycotoxins in baby food; while the QTRAP® achieved the best sensitivity and selectivity, the LTQ-Orbitrap® allowed the identification of non-target and unknown compounds. Copyright © 2011 Elsevier B.V. All rights reserved.
Use of Linear and Circular Polarization: The Secret LCD Screen and 3D Cinema
NASA Astrophysics Data System (ADS)
Richtberg, Stefan; Girwidz, Raimund
2017-10-01
References to everyday life are important for teaching physics. Discussing polarization phenomena, liquid crystal displays (LCDs) and 3D cinemas provide such references. In this paper, we describe experiments to support students' understanding of linearly polarized light as well as the phenomenon of inverted colors using a secret LCD screen. Moreover, we explain how 3D glasses work (when using polarizers) and introduce some experiments to point out why 3D cinemas use circularly polarized light instead of linearly polarized light. When using linearly polarized light, viewers must keep their heads level all the time. Using circularly polarized light, this is not necessary.
Use of Linear and Circular Polarization: The Secret LCD Screen and 3D Cinema
ERIC Educational Resources Information Center
Richtberg, Stefan; Girwidz, Raimund
2017-01-01
References to everyday life are important for teaching physics. Discussing polarization phenomena, liquid crystal displays (LCDs) and 3D cinemas provide such references. In this paper we describe experiments to support students' understanding of linearly polarized light as well as the phenomenon of inverted colors using a secret LCD screen.…
A Thermodynamic Theory Of Solid Viscoelasticity. Part 1: Linear Viscoelasticity.
NASA Technical Reports Server (NTRS)
Freed, Alan D.; Leonov, Arkady I.
2002-01-01
The present series of three consecutive papers develops a general theory for linear and finite solid viscoelasticity. Because the most important objects for nonlinear studies are rubber-like materials, the general approach is specified in a form convenient for solving problems important for the many industries that involve rubber-like materials. General linear and nonlinear theories for non-isothermal deformations of viscoelastic solids are developed based on the quasi-linear approach of non-equilibrium thermodynamics. In this, the first paper of the series, we analyze non-isothermal linear viscoelasticity, which is applicable in a range of small strains not only to all synthetic polymers and bio-polymers but also to some non-polymeric materials. Although the linear case seems to be well developed, there are still reasons to implement a thermodynamic derivation of constitutive equations for solid-like, non-isothermal, linear viscoelasticity. The most important is the thermodynamic modeling of thermo-rheological complexity, i.e. different temperature dependences of relaxation parameters in various parts of the relaxation spectrum. A special structure of interaction matrices is established for the different physical mechanisms contributing to the normal relaxation modes. This structure seems to be in accord with observations, and creates a simple mathematical framework for both continuum and molecular theories of thermo-rheologically complex relaxation phenomena. Finally, a unified approach is briefly discussed that, in principle, allows combining both the long-time (discrete) and short-time (continuous) descriptions of relaxation behaviors for polymers in the rubbery and glassy regions.
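As a concrete instance of the linear viscoelasticity discussed above, the relaxation modulus is often represented by a discrete Prony series, G(t) = G∞ + Σ g_i exp(-t/τ_i), whose modes play the role of the relaxation spectrum. A minimal sketch with invented moduli and relaxation times:

```python
import numpy as np

def relaxation_modulus(t, G_inf, g, tau):
    """Prony-series relaxation modulus G(t) = G_inf + sum_i g_i exp(-t/tau_i).

    The moduli g_i and relaxation times tau_i form a discrete relaxation
    spectrum; the values below are invented for illustration.
    """
    t = np.asarray(t, float)[:, None]
    return G_inf + np.sum(g * np.exp(-t / tau), axis=1)

g   = np.array([5.0, 2.0, 0.5])     # MPa, per-mode stiffness contributions
tau = np.array([0.01, 0.1, 1.0])    # s, per-mode relaxation times
t   = np.logspace(-3, 1, 9)
print(relaxation_modulus(t, G_inf=1.0, g=g, tau=tau))
```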
Ducru, Pablo; Josey, Colin; Dibert, Karia; ...
2017-01-25
This paper establishes a new family of methods to perform temperature interpolation of nuclear interactions cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (Tj). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus here performed by reconstructing the kernel of the operation at a given temperature T by means of linear combination of kernels at reference temperatures (Tj). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (Tj) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [Tmin, Tmax]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them upon isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
Detecting nonlinear dynamics of functional connectivity
NASA Astrophysics Data System (ADS)
LaConte, Stephen M.; Peltier, Scott J.; Kadah, Yasser; Ngan, Shing-Chung; Deshpande, Gopikrishna; Hu, Xiaoping
2004-04-01
Functional magnetic resonance imaging (fMRI) is a technique that is sensitive to correlates of neuronal activity. The application of fMRI to measure functional connectivity of related brain regions across hemispheres (e.g. left and right motor cortices) has great potential for revealing fundamental physiological brain processes. Primarily, functional connectivity has been characterized by linear correlations in resting-state data, which may not provide a complete description of its temporal properties. In this work, we broaden the measure of functional connectivity to study not only linear correlations, but also those arising from deterministic, non-linear dynamics. Here the delta-epsilon approach is extended and applied to fMRI time series. The method of delays is used to reconstruct the joint system defined by a reference pixel and a candidate pixel. The crux of this technique relies on determining whether the candidate pixel provides additional information concerning the time evolution of the reference. As in many correlation-based connectivity studies, we fix the reference pixel. Every brain location is then used as a candidate pixel to estimate the spatial pattern of deterministic coupling with the reference. Our results indicate that measured connectivity is often emphasized in the motor cortex contra-lateral to the reference pixel, demonstrating the suitability of this approach for functional connectivity studies. In addition, discrepancies with traditional correlation analysis provide initial evidence for non-linear dynamical properties of resting-state fMRI data. Consequently, the non-linear characterization provided from our approach may provide a more complete description of the underlying physiology and brain function measured by this type of data.
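The method of delays underlying this analysis is easy to state: a scalar series is embedded as vectors of lagged copies, and coupling is then assessed in the reconstructed state space. A minimal embedding sketch (the dimension and lag are arbitrary choices, not those of the study):

```python
import numpy as np

def delay_embed(x, dim, lag):
    """Method of delays: map series x to vectors (x[t], x[t+lag], ...)."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

t = np.linspace(0, 40 * np.pi, 4000)
x = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)

E = delay_embed(x, dim=3, lag=25)   # reconstructed state vectors
print(E.shape)
```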
ERIC Educational Resources Information Center
Lamothe, Alain R.
2013-01-01
This paper reports the results from a quantitative study examining the strength of linear relationships between Laurentian University students and faculty members and the J. N. Desmarais Library's reference and monograph e-book collections. The number of full-text items accessed, searches performed, and undergraduate, graduate, and faculty…
Microcomputer Scheduling of Reference Desk Staff.
ERIC Educational Resources Information Center
Cornick, Donna; Owen, Willy
1988-01-01
Presents a model that can accommodate staff preferences when determining a reference desk schedule using a microcomputer, the Lotus 1-2-3 spreadsheet software, and the linear programming software LP83. (eight references) (MES)
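As a toy version of preference-based desk scheduling, a linear program can minimize total "dispreference" subject to coverage constraints. A sketch with scipy rather than LP83; the staff, hours, and costs are invented:

```python
import numpy as np
from scipy.optimize import linprog

# Two librarians x three desk hours; cost = how much each dislikes each hour.
cost = np.array([[1.0, 4.0, 2.0],      # librarian A
                 [3.0, 1.0, 5.0]])     # librarian B
n_staff, n_hours = cost.shape
c = cost.ravel()                       # decision variables x[i, j], flattened

# Each hour must be covered by exactly one person.
A_eq = np.zeros((n_hours, n_staff * n_hours))
for j in range(n_hours):
    for i in range(n_staff):
        A_eq[j, i * n_hours + j] = 1.0
b_eq = np.ones(n_hours)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * c.size,
              method="highs")
print(res.x.reshape(n_staff, n_hours))  # this toy LP yields a 0/1 assignment
```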
Zhang, Yi-Fan; Tian, Yu; Zhou, Tian-Shu; Araki, Kenji; Li, Jing-Song
2016-01-01
The broad adoption of clinical decision support systems within clinical practice has been hampered mainly by the difficulty in expressing domain knowledge and patient data in a unified formalism. This paper presents a semantic-based approach to the unified representation of healthcare domain knowledge and patient data for practical clinical decision making applications. A four-phase knowledge engineering cycle is implemented to develop a semantic healthcare knowledge base based on an HL7 reference information model, including an ontology to model domain knowledge and patient data and an expression repository to encode clinical decision making rules and queries. A semantic clinical decision support system is designed to provide patient-specific healthcare recommendations based on the knowledge base and patient data. The proposed solution is evaluated in the case study of type 2 diabetes mellitus inpatient management. The knowledge base is successfully instantiated with relevant domain knowledge and testing patient data. Ontology-level evaluation confirms model validity. Application-level evaluation of diagnostic accuracy reaches a sensitivity of 97.5%, a specificity of 100%, and a precision of 98%; an acceptance rate of 97.3% is given by domain experts for the recommended care plan orders. The proposed solution has been successfully validated in the case study as providing clinical decision support at a high accuracy and acceptance rate. The evaluation results demonstrate the technical feasibility and application prospect of our approach. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
A unified and efficient framework for court-net sports video analysis using 3D camera modeling
NASA Astrophysics Data System (ADS)
Han, Jungong; de With, Peter H. N.
2007-01-01
The extensive amount of video data stored on available media (hard and optical disks) necessitates video content analysis, which is a cornerstone for different user-friendly applications, such as smart video retrieval and intelligent video summarization. This paper aims at finding a unified and efficient framework for court-net sports video analysis. We concentrate on techniques that are generally applicable to more than one sports type to come to a unified approach. To this end, our framework employs the concept of multi-level analysis, where a novel 3-D camera modeling is utilized to bridge the gap between the object-level and the scene-level analysis. The new 3-D camera modeling is based on collecting feature points from two planes, which are perpendicular to each other, so that a true 3-D reference is obtained. Another important contribution is a new tracking algorithm for the objects (i.e. players). The algorithm can track up to four players simultaneously. The complete system contributes to summarization by various forms of information, of which the most important are the moving trajectory and real speed of each player, as well as 3-D height information of objects and the semantic event segments in a game. We illustrate the performance of the proposed system by evaluating it for a variety of court-net sports videos containing badminton, tennis and volleyball, and we show that the feature detection performance is above 92% and event detection about 90%.
NASA Astrophysics Data System (ADS)
Donges, Jonathan; Heitzig, Jobst; Beronov, Boyan; Wiedermann, Marc; Runge, Jakob; Feng, Qing Yi; Tupikina, Liubov; Stolbova, Veronika; Donner, Reik; Marwan, Norbert; Dijkstra, Henk; Kurths, Jürgen
2016-04-01
We introduce the pyunicorn (Pythonic unified complex network and recurrence analysis toolbox) open source software package for applying and combining modern methods of data analysis and modeling from complex network theory and nonlinear time series analysis. pyunicorn is a fully object-oriented and easily parallelizable package written in the language Python. It allows for the construction of functional networks such as climate networks in climatology or functional brain networks in neuroscience representing the structure of statistical interrelationships in large data sets of time series and, subsequently, investigating this structure using advanced methods of complex network theory such as measures and models for spatial networks, networks of interacting networks, node-weighted statistics, or network surrogates. Additionally, pyunicorn provides insights into the nonlinear dynamics of complex systems as recorded in uni- and multivariate time series from a non-traditional perspective by means of recurrence quantification analysis, recurrence networks, visibility graphs, and construction of surrogate time series. The range of possible applications of the library is outlined, drawing on several examples mainly from the field of climatology. pyunicorn is available online at https://github.com/pik-copan/pyunicorn. Reference: J.F. Donges, J. Heitzig, B. Beronov, M. Wiedermann, J. Runge, Q.-Y. Feng, L. Tupikina, V. Stolbova, R.V. Donner, N. Marwan, H.A. Dijkstra, and J. Kurths, Unified functional network and nonlinear time series analysis for complex systems science: The pyunicorn package, Chaos 25, 113101 (2015), DOI: 10.1063/1.4934554, Preprint: arxiv.org:1507.01571 [physics.data-an].
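One of the core objects pyunicorn builds, the recurrence structure of a time series, takes only a few lines in plain numpy. The sketch below does not use the pyunicorn API itself; the threshold and data are arbitrary:

```python
import numpy as np

def recurrence_matrix(x, eps):
    """Binary recurrence matrix R[i, j] = 1 if |x_i - x_j| < eps."""
    d = np.abs(x[:, None] - x[None, :])
    return (d < eps).astype(int)

t = np.linspace(0, 8 * np.pi, 400)
x = np.sin(t)
R = recurrence_matrix(x, eps=0.1)
print("recurrence rate:", R.mean())    # fraction of recurrent pairs
```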
Unifying dynamical and structural stability of equilibria
NASA Astrophysics Data System (ADS)
Arnoldi, Jean-François; Haegeman, Bart
2016-09-01
We exhibit a fundamental relationship between measures of dynamical and structural stability of linear dynamical systems, e.g. linearized models in the vicinity of equilibria. We show that dynamical stability, quantified via the response to external perturbations (i.e. perturbation of dynamical variables), coincides with the minimal internal perturbation (i.e. perturbations of interactions between variables) able to render the system unstable. First, by reformulating a result of control theory, we explain that harmonic external perturbations reflect the spectral sensitivity of the Jacobian matrix at the equilibrium, with respect to constant changes of its coefficients. However, for this equivalence to hold, imaginary changes of the Jacobian's coefficients have to be allowed. The connection with dynamical stability is thus lost for real dynamical systems. We show that this issue can be avoided, thus recovering the fundamental link between dynamical and structural stability, by considering stochastic noise as external and internal perturbations. More precisely, we demonstrate that a linear system's response to white-noise perturbations directly reflects the intensity of internal white-noise disturbance that it can accommodate before becoming stochastically unstable.
Unifying dynamical and structural stability of equilibria.
Arnoldi, Jean-François; Haegeman, Bart
2016-09-01
We exhibit a fundamental relationship between measures of dynamical and structural stability of linear dynamical systems, e.g. linearized models in the vicinity of equilibria. We show that dynamical stability, quantified via the response to external perturbations (i.e. perturbation of dynamical variables), coincides with the minimal internal perturbation (i.e. perturbations of interactions between variables) able to render the system unstable. First, by reformulating a result of control theory, we explain that harmonic external perturbations reflect the spectral sensitivity of the Jacobian matrix at the equilibrium, with respect to constant changes of its coefficients. However, for this equivalence to hold, imaginary changes of the Jacobian's coefficients have to be allowed. The connection with dynamical stability is thus lost for real dynamical systems. We show that this issue can be avoided, thus recovering the fundamental link between dynamical and structural stability, by considering stochastic noise as external and internal perturbations. More precisely, we demonstrate that a linear system's response to white-noise perturbations directly reflects the intensity of internal white-noise disturbance that it can accommodate before becoming stochastically unstable.
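The white-noise response used above as a stability measure is computable from a Lyapunov equation: for a stable Jacobian A, the stationary covariance P of dx = Ax dt + dW solves AP + PAᵀ + I = 0. A minimal check with scipy, using an arbitrary stable matrix:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-1.0, 2.0],
              [ 0.0, -0.5]])          # arbitrary stable Jacobian (example)

# Stationary covariance of dx = A x dt + dW: solves A P + P A^T = -I.
P = solve_continuous_lyapunov(A, -np.eye(2))

# A scalar measure of the response to white-noise forcing; it grows without
# bound as the system approaches instability.
print("variance response:", np.trace(P))
```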
A new look at the robust control of discrete-time Markov jump linear systems
NASA Astrophysics Data System (ADS)
Todorov, M. G.; Fragoso, M. D.
2016-03-01
In this paper, we make a foray into the role played by a set of four operators in the study of robust H2 and mixed H2/H∞ control problems for discrete-time Markov jump linear systems. These operators appear in the study of mean square stability for this class of systems. By means of new linear matrix inequality (LMI) characterisations of controllers, which include slack variables that, to some extent, separate the robustness and performance objectives, we introduce four alternative approaches to the design of controllers which are robustly stabilising and at the same time provide a guaranteed level of H2 performance. Since each operator provides a different degree of conservatism, the results are unified in the form of an iterative LMI technique for designing robust H2 controllers, whose convergence is attained in a finite number of steps. The method yields a new way of computing mixed H2/H∞ controllers, whose conservatism decreases with iteration. Two numerical examples illustrate the applicability of the proposed results for the control of a small unmanned aerial vehicle, and for an underactuated robotic arm.
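The paper's robust H2 and mixed H2/H∞ LMI characterisations are not reproduced here; as a minimal illustration of the LMI machinery involved, the sketch below certifies stability of a single linear mode via the standard Lyapunov LMI, using cvxpy with an invented matrix:

```python
import cvxpy as cp
import numpy as np

A = np.array([[-1.0, 2.0],
              [ 0.0, -0.5]])        # invented stable example matrix

n = A.shape[0]
P = cp.Variable((n, n), symmetric=True)
constraints = [P >> np.eye(n),                      # P positive definite
               A.T @ P + P @ A << -np.eye(n)]       # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)   # "optimal" iff a quadratic Lyapunov certificate exists
```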
A comparative study of minimum norm inverse methods for MEG imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
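The Tikhonov-regularized minimum norm estimate referred to above has a closed form: with lead field L, data b, and regularization λ, the source estimate is x̂ = Lᵀ(LLᵀ + λI)⁻¹b. A minimal sketch with invented dimensions and λ:

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(64, 5000))       # lead field: sensors x source voxels
x_true = np.zeros(5000)
x_true[123] = 1.0                     # a single active source (toy example)
b = L @ x_true + 0.01 * rng.normal(size=64)

lam = 1.0                             # Tikhonov regularization parameter
# Minimum-norm solution: x = L^T (L L^T + lam I)^{-1} b
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(64), b)
print("peak estimated voxel:", np.argmax(np.abs(x_hat)))
```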
Genetic mixed linear models for twin survival data.
Ha, Il Do; Lee, Youngjo; Pawitan, Yudi
2007-07-01
Twin studies are useful for assessing the relative importance of genetic or heritable component from the environmental component. In this paper we develop a methodology to study the heritability of age-at-onset or lifespan traits, with application to analysis of twin survival data. Due to limited period of observation, the data can be left truncated and right censored (LTRC). Under the LTRC setting we propose a genetic mixed linear model, which allows general fixed predictors and random components to capture genetic and environmental effects. Inferences are based upon the hierarchical-likelihood (h-likelihood), which provides a statistically efficient and unified framework for various mixed-effect models. We also propose a simple and fast computation method for dealing with large data sets. The method is illustrated by the survival data from the Swedish Twin Registry. Finally, a simulation study is carried out to evaluate its performance.
Targeted ENO schemes with tailored resolution property for hyperbolic conservation laws
NASA Astrophysics Data System (ADS)
Fu, Lin; Hu, Xiangyu Y.; Adams, Nikolaus A.
2017-11-01
In this paper, we extend the range of targeted ENO (TENO) schemes (Fu et al. (2016) [18]) by proposing an eighth-order TENO8 scheme. A general formulation to construct the high-order undivided difference τK within the weighting strategy is proposed. With the underlying scale-separation strategy, sixth-order accuracy for τK in smooth solution regions is designed for good performance and robustness. Furthermore, a unified framework to optimize independently the dispersion and dissipation properties of high-order finite-difference schemes is proposed. The new framework enables tailoring of dispersion and dissipation as a function of wavenumber. The optimal linear scheme has minimum dispersion error and a dissipation error that satisfies a dispersion-dissipation relation. Employing the optimal linear scheme, a sixth-order TENO8-opt scheme is constructed. A set of benchmark cases involving strong discontinuities and broadband fluctuations is computed to demonstrate the high-resolution properties of the new schemes.
Color Sparse Representations for Image Processing: Review, Models, and Prospects.
Barthélemy, Quentin; Larue, Anthony; Mars, Jérôme I
2015-11-01
Sparse representations have been extended to deal with color images composed of three channels. A review of dictionary-learning-based sparse representations for color images is made here, detailing the differences between the models, and comparing their results on real and simulated data. These models are considered in a unifying framework that is based on the degrees of freedom of the linear filtering/transformation of the color channels. Moreover, this allows it to be shown that the scalar quaternionic linear model is equivalent to constrained matrix-based color filtering, which highlights the filtering implicitly applied through this model. Based on this reformulation, a new color filtering model is introduced, using unconstrained filters. In this model, spatial morphologies of color images are encoded by atoms, and colors are encoded by color filters. Color variability is no longer captured by increasing the dictionary size, but with color filters; this gives an efficient color representation.
Fast algorithms for Quadrature by Expansion I: Globally valid expansions
NASA Astrophysics Data System (ADS)
Rachh, Manas; Klöckner, Andreas; O'Neil, Michael
2017-09-01
The use of integral equation methods for the efficient numerical solution of PDE boundary value problems requires two main tools: quadrature rules for the evaluation of layer potential integral operators with singular kernels, and fast algorithms for solving the resulting dense linear systems. Classically, these tools were developed separately. In this work, we present a unified numerical scheme based on coupling Quadrature by Expansion, a recent quadrature method, to a customized Fast Multipole Method (FMM) for the Helmholtz equation in two dimensions. The method allows the evaluation of layer potentials in linear-time complexity, anywhere in space, with a uniform, user-chosen level of accuracy as a black-box computational method. Providing this capability requires geometric and algorithmic considerations beyond the needs of standard FMMs as well as careful consideration of the accuracy of multipole translations. We illustrate the speed and accuracy of our method with various numerical examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Múnera, Héctor A., E-mail: hmunera@hotmail.com; Retired professor, Department of Physics, Universidad Nacional de Colombia, Bogotá, Colombia, South America
2016-07-07
It is postulated that there exists a fundamental energy-like fluid, which occupies the flat three-dimensional Euclidean space that contains our universe, and obeys the two basic laws of classical physics: conservation of linear momentum, and conservation of total energy; the fluid is described by the classical wave equation (CWE), which was Schrödinger's first candidate to develop his quantum theory. Novel solutions for the CWE discovered twenty years ago are nonharmonic, inherently quantized, and universal in the sense of scale invariance, thus leading to quantization at all scales of the universe, from galactic clusters to the sub-quark world, and yielding a unified Lorentz-invariant quantum theory ab initio. Quingal solutions are isomorphic under both neo-Galilean and Lorentz transformations, and exhibit another remarkable property: intrinsic instability for large values of ℓ (a quantum number), thus limiting the size of each system at a given scale. Instability and scale-invariance together lead to nested structures observed in our solar system; instability may explain the small number of rows in the chemical periodic table, and the nuclear instability of nuclides beyond lead and bismuth. Quingal functions lend a mathematical basis for Boscovich's unified force (which is compatible with many pieces of evidence collected over the past century), and also yield a simple geometrical solution for the classical three-body problem, which is a useful model for electronic orbits in simple diatomic molecules. A testable prediction for the helicoidal-type force is suggested.
MBAT: a scalable informatics system for unifying digital atlasing workflows.
Lee, Daren; Ruffins, Seth; Ng, Queenie; Sane, Nikhil; Anderson, Steve; Toga, Arthur
2010-12-22
Digital atlases provide a common semantic and spatial coordinate system that can be leveraged to compare, contrast, and correlate data from disparate sources. As the quality and amount of biological data continue to advance and grow, searching, referencing, and comparing this data with a researcher's own data is essential. However, the integration process is cumbersome and time-consuming due to misaligned data, implicitly defined associations, and incompatible data sources. This work addresses these challenges by providing a unified and adaptable environment to accelerate the workflow to gather, align, and analyze the data. The MouseBIRN Atlasing Toolkit (MBAT) project was developed as a cross-platform, free open-source application that unifies and accelerates the digital atlas workflow. A tiered, plug-in architecture was designed for the neuroinformatics and genomics goals of the project to provide a modular and extensible design. MBAT provides the ability to use a single query to search and retrieve data from multiple data sources, align image data using the user's preferred registration method, composite data from multiple sources in a common space, and link relevant informatics information to the current view of the data or atlas. The workspaces leverage tool plug-ins to extend and allow future extensions of the basic workspace functionality. A wide variety of tool plug-ins were developed that integrate pre-existing as well as newly created technology into each workspace. Novel atlasing features were also developed, such as supporting multiple label sets, dynamic selection and grouping of labels, and synchronized, context-driven display of ontological data. MBAT empowers researchers to discover correlations among disparate data by providing a unified environment for bringing together distributed reference resources, a user's image data, and biological atlases into the same spatial or semantic context. Through its extensible tiered plug-in architecture, MBAT allows researchers to customize all platform components to quickly achieve personalized workflows.
Axial linear patellar displacement: a new measurement of patellofemoral congruence.
Urch, Scott E; Tritle, Benjamin A; Shelbourne, K Donald; Gray, Tinker
2009-05-01
The tools for measuring the congruence angle with digital radiography software can be difficult to use; therefore, the authors sought to develop a new, easy, and reliable method for measuring patellofemoral congruence. The authors hypothesized that the linear displacement measurement would correlate well with the congruence angle measurement. Cohort study (diagnosis); Level of evidence, 2. On Merchant view radiographs obtained digitally, the authors measured the congruence angle and a new linear displacement measurement on preoperative and postoperative radiographs of 31 patients who suffered unilateral patellar dislocations and 100 uninjured subjects. The linear displacement measurement was obtained by drawing a reference line across the medial and lateral trochlear facets. Perpendicular lines were drawn from the depth of the sulcus through the reference line and from the apex of the posterior tip of the patella through the reference line. The distance between the perpendicular lines was the linear displacement measurement. The measurements were obtained twice at different sittings. The observer was blinded to the previous measurements to establish reliability. Measurements were compared to determine whether the linear displacement measurement correlated with the congruence angle. Intraobserver reliability was above r(2) = .90 for all measurements. In patients with patellar dislocations, the mean congruence angle preoperatively was 33.5 degrees, compared with 12.1 mm for linear displacement (r(2) = .92). The mean congruence angle postoperatively was 11.2 degrees, compared with 4.0 mm for linear displacement (r(2) = .89). For normal subjects, the mean congruence angle was -3 degrees and the mean linear displacement was 0.2 mm. The linear displacement measurement was found to correlate with congruence angle measurements and may be an easy and useful tool for clinicians to evaluate patellofemoral congruence objectively.
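The measurement itself is simple projection geometry: the sulcus depth and the patellar apex are each projected perpendicularly onto the trochlear facet reference line, and the gap between the two projections is the linear displacement. A sketch with invented landmark coordinates:

```python
import numpy as np

def linear_displacement(facet_a, facet_b, sulcus, apex):
    """Distance along the facet reference line between the perpendicular
    projections of the sulcus depth and the patellar apex (toy geometry)."""
    a = np.asarray(facet_a, float)
    u = np.asarray(facet_b, float) - a
    u = u / np.linalg.norm(u)                      # unit vector along line
    s = np.dot(np.asarray(sulcus, float) - a, u)   # projection of sulcus
    p = np.dot(np.asarray(apex, float) - a, u)     # projection of apex
    return p - s                                   # signed displacement (mm)

# Invented landmark coordinates in mm, not taken from the study's radiographs
print(linear_displacement((0, 0), (40, 2), (18, -8), (22, -12)))
```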
NASA Astrophysics Data System (ADS)
Abbondanza, Claudio; Altamimi, Zuheir; Chin, Toshio; Collilieux, Xavier; Dach, Rolf; Gross, Richard; Heflin, Michael; König, Rolf; Lemoine, Frank; Macmillan, Dan; Parker, Jay; van Dam, Tonie; Wu, Xiaoping
2014-05-01
The International Terrestrial Reference Frame (ITRF) adopts a piece-wise linear model to parameterize regularized station positions and velocities. The space-geodetic (SG) solutions from VLBI, SLR, GPS and DORIS used as input in the ITRF combination process account for tidal loading deformations, but ignore the non-tidal part. As a result, the non-linear signal observed in the time series of SG-derived station positions in part reflects non-tidal loading displacements not introduced in the SG data reduction. In this analysis, we assess the impact of non-tidal atmospheric loading (NTAL) corrections on the TRF computation. Focusing on the a-posteriori approach, (i) the NTAL model derived from the National Centre for Environmental Prediction (NCEP) surface pressure is removed from the SINEX files of the SG solutions used as inputs to the TRF determinations; (ii) adopting a Kalman-filter based approach, two distinct linear TRFs are estimated combining the 4 SG solutions with (corrected TRF solution) and without the NTAL displacements (standard TRF solution). Linear fits (offset and atmospheric velocity) of the NTAL displacements removed during step (i) are estimated accounting for the station position discontinuities introduced in the SG solutions and adopting different weighting strategies. The NTAL-derived (atmospheric) velocity fields are compared to those obtained from the TRF reductions during step (ii). The consistency between the atmospheric and the TRF-derived velocity fields is examined. We show how the presence of station position discontinuities in SG solutions degrades the agreement between the velocity fields and compare the effect of different weighting structure adopted while estimating the linear fits to the NTAL displacements. Finally, we evaluate the effect of restoring the atmospheric velocities determined through the linear fits of the NTAL displacements to the single-technique linear reference frames obtained by stacking the standard SG SINEX files. Differences between the velocity fields obtained restoring the NTAL displacements and the standard stacked linear reference frames are discussed.
Reference Models for Multi-Layer Tissue Structures
2016-09-01
Keywords: simulation, finite element analysis. Physiologically realistic, fully specimen-specific, nonlinear reference models. Tasks: finite element analysis of non-linear mechanics of cadaver … models. Tasks: finite element analysis of non-linear mechanics of multi-layer tissue regions of human subjects. Deliverables: partially subject- and specimen-specific models.
Patient Turnover: A Concept Analysis.
VanFosson, Christopher A; Yoder, Linda H; Jones, Terry L
Patient turnover influences the quality and safety of patient care. However, variations in the conceptual underpinnings of patient turnover limit the understanding of the phenomenon. A concept analysis was completed to clarify the role of patient turnover in relation to outcomes in the acute care hospital setting. The defining attributes, antecedents, consequences, and empirical referents of patient turnover were proposed. Nursing leaders should account for patient turnover in workload and staffing calculations. Further research is needed to clarify the influence of patient turnover on the quality and safety of nursing care using a unified understanding of the phenomenon.
NASA Technical Reports Server (NTRS)
Rosner, D. E.; Gokoglu, S. A.; Israel, R.
1982-01-01
A multiparameter correlation approach to the study of particle deposition rates in engineering applications is discussed with reference to two specific examples, one dealing with thermophoretically augmented small particle convective diffusion and the other involving larger particle inertial impaction. The validity of the correlations proposed here is demonstrated through rigorous computations including all relevant phenomena and interactions. Such representations are shown to minimize apparent differences between various geometric, flow, and physicochemical parameters, allowing many apparently different physicochemical situations to be described in a unified way.
A General Symbolic Method with Physical Applications
NASA Astrophysics Data System (ADS)
Smith, Gregory M.
2000-06-01
A solution to the problem of unifying the General Relativistic and Quantum Theoretical formalisms is given which introduces a new non-axiomatic symbolic method and an algebraic generalization of the Calculus to non-finite symbolisms without reference to the concept of a limit. An essential feature of the non-axiomatic method is the inadequacy of any (finite) statements: Identifying this aspect of the theory with the "existence of an external physical reality" both allows for the consistency of the method with the results of experiments and avoids the so-called "measurement problem" of quantum theory.
Continuous excitation chlorophyll fluorescence parameters: a review for practitioners.
Banks, Jonathan M
2017-08-01
This review introduces, defines and critically reviews a number of chlorophyll fluorescence parameters with specific reference to those derived from continuous excitation chlorophyll fluorescence. A number of common issues and criticisms are addressed. The parameters fluorescence origin (F0) and the performance indices (PI) are discussed as examples. This review attempts to unify definitions for the wide range of parameters available for measuring plant vitality, facilitating their calculation and use. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
24 CFR 578.11 - Unified Funding Agency.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Unified Funding Agency. 578.11... of Care § 578.11 Unified Funding Agency. (a) Becoming a Unified Funding Agency. To become designated as the Unified Funding Agency (UFA) for a Continuum, a collaborative applicant must be selected by...
Establishing a conceptual framework for handoffs using communication theory.
Mohorek, Matthew; Webb, Travis P
2015-01-01
A significant consequence of the 2003 Accreditation Council for Graduate Medical Education duty hour restrictions has been the dramatic increase in patient care handoffs. Ineffective handoffs have been identified as the third most common cause of medical error. However, research into health care handoffs lacks a unifying foundational structure. We sought to identify a conceptual framework that could be used to critically analyze handoffs. A scholarly review focusing on communication theory as a possible conceptual framework for handoffs was conducted. A PubMed search of published handoff research was also performed, and the literature was analyzed and matched to the most relevant theory for health care handoff models. The Shannon-Weaver Linear Model of Communication was identified as the most appropriate conceptual framework for health care handoffs. The Linear Model describes communication as a linear process. A source encodes a message into a signal, the signal is sent through a channel, and the signal is decoded back into a message at the destination, all in the presence of internal and external noise. The Linear Model identifies 3 separate instances in handoff communication where error occurs: the transmitter (message encoding), channel, and receiver (signal decoding). The Linear Model of Communication is a suitable conceptual framework for handoff research and provides a structured approach for describing handoff variables. We propose the Linear Model should be used as a foundation for further research into interventions to improve health care handoffs. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
Li, Wei
2016-06-01
This paper considers a unified geometric projection approach for: 1) decomposing a general system of cooperative agents coupled via Laplacian matrices or stochastic matrices and 2) deriving a centroid-subsystem and many shape-subsystems, where each shape-subsystem has distinct properties (e.g., preservation of the formation and stability of the original system, a sufficiently simple structure with explicit formation evolution of the agents, and decoupling from the centroid-subsystem) that facilitate subsequent analyses. In particular, this paper provides an additional merit of the approach: when the coupling topology of the agents is adjusted, as frequently occurs in system design (e.g., adding or removing an edge, moving an edge to a new place, or changing the weight of an edge), the corresponding new shape-subsystems can be derived by a few simple computations from the old shape-subsystems alone, without referring to the original system, which provides further convenience for analysis and flexibility of choice. Finally, such fast recalculations of new subsystems under topology adjustments are demonstrated with examples.
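As a rough illustration of the projection idea (our own toy construction, not the paper's exact decomposition), the dynamics x' = -Lx of Laplacian-coupled agents split into a centroid coordinate along the all-ones vector and shape coordinates in its orthogonal complement, and a single edge-weight change updates the shape-subsystem matrix by a rank-one term:

```python
import numpy as np
from scipy.linalg import null_space

L = np.array([[ 1.0, -1.0,  0.0],      # path-graph Laplacian, 3 agents
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
n = L.shape[0]
ones = np.ones((n, 1)) / np.sqrt(n)
Q = null_space(ones.T)                 # orthonormal basis of span(1)^perp

S = Q.T @ L @ Q                        # shape-subsystem matrix
print("1^T L =", (ones.T @ L).round(12))  # centroid decouples: d(mean x)/dt = 0
print("shape-subsystem matrix:\n", S.round(3))

# adjusting the weight of edge (1,2) by dw updates S without re-deriving L
e, dw = np.array([1.0, -1.0, 0.0]), 0.5
S_new = S + dw * (Q.T @ np.outer(e, e) @ Q)
```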
Tirado-Ramos, Alfredo; Hu, Jingkun; Lee, K P
2002-01-01
Supplement 23 to DICOM (Digital Imaging and Communications in Medicine), Structured Reporting, is a specification that supports a semantically rich representation of image and waveform content, enabling experts to share image and related patient information. DICOM SR supports the representation of textual and coded data linked to images and waveforms. Nevertheless, the medical information technology community needs models that work as bridges between the DICOM relational model and open object-oriented technologies. The authors assert that representations of the DICOM Structured Reporting standard, using object-oriented modeling languages such as the Unified Modeling Language, can provide a high-level reference view of the semantically rich framework of DICOM and its complex structures. They have produced an object-oriented model to represent the DICOM SR standard and have derived XML-exchangeable representations of this model using World Wide Web Consortium specifications. They expect the model to benefit developers and system architects who are interested in developing applications that are compliant with the DICOM SR specification.
Theory of Remote Image Formation
NASA Astrophysics Data System (ADS)
Blahut, Richard E.
2004-11-01
In many applications, images, such as ultrasonic or X-ray signals, are recorded and then analyzed with digital or optical processors in order to extract information. Such processing requires the development of algorithms of great precision and sophistication. This book presents a unified treatment of the mathematical methods that underpin the various algorithms used in remote image formation. The author begins with a review of transform and filter theory. He then discusses two- and three-dimensional Fourier transform theory, the ambiguity function, image construction and reconstruction, tomography, baseband surveillance systems, and passive systems (where the signal source might be an earthquake or a galaxy). Information-theoretic methods in image formation are also covered, as are phase errors and phase noise. Throughout the book, practical applications illustrate theoretical concepts, and there are many homework problems. The book is aimed at graduate students of electrical engineering and computer science, and at practitioners in industry. It illustrates theoretical concepts with reference to practical applications and provides insights into the design parameters of real systems.
Bahia, Ligia
2008-01-01
Tracking the set of trends and changes in the relations between the public and private sectors, this article analyses the effects of the rise in the rates of return of health plan operators and health insurance companies in 2007. Special attention is given to the segmentation of the system, to complaints about the naturalization of inequitable access to health services, and to the depreciation of the original concepts of the Unified Health System. The study also gathers information on the production of knowledge about supplementary care, with the intent to systemize the bases and methodological approaches adopted by a selected sub-group of scientific papers. Finally, the article develops conjectures and hypotheses about possible associations between the growth and stability of the health plan and insurance market and the nature of scientific production on this issue, taking into consideration the contradictions between the political and economic circuit in which the health plan and insurance companies operate and the universality of the Brazilian Health System.
Neither real nor fictitious but 'as if real'? A political ontology of the state.
Hay, Colin
2014-09-01
The state is one of a series of concepts (capitalism, patriarchy and class being others) which pose a particular kind of ontological difficulty and provoke a particular kind of ontological controversy - for it is far from self-evident that the object or entity to which they refer is in any obvious sense 'real'. In this paper I make the case for developing a distinct political ontology of the state which builds from such a reflection. In the process, I argue that the state is neither real nor fictitious, but 'as if real' - a conceptual abstraction whose value is best seen as an open analytical question. Thus understood, the state possesses no agency per se, though it serves to define and construct a series of contexts within which political agency is both authorized (in the name of the state) and enacted (by those thereby authorized). The state is thus revealed as a dynamic institutional complex whose unity is at best partial, the constantly evolving outcome of unifying tendencies and dis-unifying counter-tendencies. © London School of Economics and Political Science 2014.
Sun, Lei; Jin, Hong-Yu; Tian, Run-Tao; Wang, Ming-Juan; Liu, Li-Na; Ye, Liu-Ping; Zuo, Tian-Tian; Ma, Shuang-Cheng
2017-01-01
Analysis of related substances in pharmaceutical chemicals and of multiple components in traditional Chinese medicines requires large numbers of reference substances to identify chromatographic peaks accurately, but reference substances are costly. The relative retention (RR) method has therefore been widely adopted in pharmacopoeias and in the literature for characterizing the HPLC behavior of reference substances that are unavailable. The problem is that the RR is difficult to reproduce on different columns, owing to the error between the measured and predicted retention times (tR) in some cases. It is therefore useful to develop a simple alternative method for accurate prediction of tR. In the present study, based on the thermodynamic theory of HPLC, a method named linear calibration using two reference substances (LCTRS) was proposed. The method includes three steps: two-point prediction, validation by multiple-point regression, and sequential matching. The tR of a compound on an HPLC column can be calculated from standard retention times through a linear relationship. The method was validated on two medicines on 30 columns. It was demonstrated that the LCTRS method is simple, yet more accurate and more robust on different HPLC columns than the RR method. Hence, quality standards using the LCTRS method are easy to reproduce in different laboratories, with a lower cost of reference substances.
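The two-point prediction step admits a very short sketch (all retention times below are invented for illustration): the retention times of the two reference substances, measured on the standard column and on the user's column, fix a linear map that predicts every other compound's tR on the user's column.

```python
# standard-column retention times (min); values are illustrative only
t_std = {"ref1": 6.20, "ref2": 18.40, "analyte": 11.75}
t_loc = {"ref1": 5.90, "ref2": 17.10}     # measured on the user's column

slope = (t_loc["ref2"] - t_loc["ref1"]) / (t_std["ref2"] - t_std["ref1"])
intercept = t_loc["ref1"] - slope * t_std["ref1"]

t_pred = slope * t_std["analyte"] + intercept   # two-point prediction
print(f"predicted tR of analyte on this column: {t_pred:.2f} min")
```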
Investigating the unification of LOFAR-detected powerful AGN in the Boötes field
NASA Astrophysics Data System (ADS)
Morabito, Leah K.; Williams, W. L.; Duncan, Kenneth J.; Röttgering, H. J. A.; Miley, George; Saxena, Aayush; Barthel, Peter; Best, P. N.; Bruggen, M.; Brunetti, G.; Chyży, K. T.; Engels, D.; Hardcastle, M. J.; Harwood, J. J.; Jarvis, Matt J.; Mahony, E. K.; Prandoni, I.; Shimwell, T. W.; Shulevski, A.; Tasse, C.
2017-08-01
Low radio frequency surveys are important for testing unified models of radio-loud quasars and radio galaxies. Intrinsically similar sources that are randomly oriented on the sky will have different projected linear sizes. Measuring the projected linear sizes of these sources provides an indication of their orientation. Steep-spectrum isotropic radio emission allows for orientation-free sample selection at low radio frequencies. We use a new radio survey of the Boötes field at 150 MHz made with the Low-Frequency Array (LOFAR) to select a sample of radio sources. We identify 60 radio sources with powers P > 10^25.5 W Hz^-1 at 150 MHz using cross-matched multiwavelength information from the AGN and Galaxy Evolution Survey, which provides spectroscopic redshifts and photometric identification of 16 quasars and 44 radio galaxies. When considering the radio spectral slope only, we find that radio sources with steep spectra have projected linear sizes that are on average 4.4 ± 1.4 times larger than those with flat spectra. The projected linear sizes of radio galaxies are on average 3.1 ± 1.0 times larger than those of quasars (2.0 ± 0.3 after correcting for redshift evolution). Combining these results with three previous surveys, we find that the projected linear sizes of radio galaxies and quasars depend on redshift but not on power. The projected linear size ratio does not correlate with either parameter. The LOFAR data are consistent within the uncertainties with theoretical predictions of the correlation between the quasar fraction and linear size ratio, based on an orientation-based unification scheme.
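The geometric core of the argument can be checked with a few lines of Monte Carlo (a sketch under the standard unification assumptions; the 45 degree cone half-angle is our illustrative choice, not a fitted value): randomly oriented sources of common intrinsic size appear as quasars when viewed within the cone, and their mean projected size is smaller than that of the radio galaxies.

```python
import numpy as np

rng = np.random.default_rng(0)
cos_theta = rng.uniform(0.0, 1.0, 1_000_000)   # isotropic orientations
theta = np.arccos(cos_theta)
theta_c = np.deg2rad(45.0)                     # assumed cone half-angle

proj = np.sin(theta)                 # projected / intrinsic linear size
quasar = theta < theta_c             # viewed close to the jet axis
ratio = proj[~quasar].mean() / proj[quasar].mean()
print(f"quasar fraction: {quasar.mean():.2f}")
print(f"mean projected-size ratio (RG / quasar): {ratio:.2f}")
```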
The concept of collision strength and its applications
NASA Astrophysics Data System (ADS)
Chang, Yongbin
Collision strength, the measure of strength for a binary collision, has not been clearly defined. In practice, many physical arguments have been employed for the purpose and taken for granted. The scattering angle has been widely and intensively used as a measure of collision strength in plasma physics for years. The result is complication and unnecessary approximation in deriving some of the basic kinetic equations and in calculating some of the basic physical terms. The Boltzmann equation has a complicated five-fold integral collision term. Chandrasekhar's and Spitzer's approaches to the linear Fokker-Planck coefficients involve several approximations. An effective variable-change technique has been developed in this dissertation as an alternative to the scattering angle as the measure of collision strength. By introducing the square of the reduced impulse or its equivalents as a collision strength variable, many plasma calculations have been simplified. The five-fold linear Boltzmann collision integral and the linearized Boltzmann collision integral are simplified to three-fold integrals. The linear Fokker-Planck coefficients of arbitrary order are calculated and expressed in a single unified form. The new theory provides a simple and exact method for describing the equilibrium plasma collision rate, and a precise calculation of the equilibrium relaxation time. It generalizes bimolecular collision reaction rate theory to a reaction rate theory for plasmas. A simple formula of high precision with a wide temperature range has been developed for electron impact ionization rates for carbon atoms and ions. The universality of the concept of collision strength is emphasized. This dissertation shows how Arrhenius' chemical reaction rate theory and Thomson's ionization theory can be unified as a single theory under the concept of collision strength, and how many important physical terms in different disciplines, such as the activation energy in chemical reaction theory, the ionization energy in Thomson's ionization theory, and the Coulomb logarithm in plasma physics, can be unified into a single one: the threshold value of collision strength. The collision strength, which is a measure of the transfer of momentum in units of energy, can be used to reconcile the difference between Descartes' and Leibniz's opinions about the "true" measure of a force. Like Newton's second law, which provides an instantaneous measure of a force, collision strength, as a cumulative measure of a force, can be regarded as part of a law of force in general.
Unified continuum damage model for matrix cracking in composite rotor blades
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pollayi, Hemaraju; Harursampath, Dineshkumar
This paper deals with modeling of the first damage mode, matrix micro-cracking, in helicopter rotor/wind turbine blades and how it affects the overall cross-sectional stiffness. The helicopter/wind turbine rotor system operates in a highly dynamic and unsteady environment, leading to severe vibratory loads in the system. Repeated exposure to this loading condition can induce damage in the composite rotor blades. These rotor/turbine blades are generally made of fiber-reinforced laminated composites and exhibit various competing modes of damage such as matrix micro-cracking, delamination, and fiber breakage. There is a need to study the behavior of the composite rotor system under various key damage modes in composite materials for developing a Structural Health Monitoring (SHM) system. Each blade is modeled as a beam based on geometrically non-linear 3-D elasticity theory. Each blade thus splits into 2-D analyses of cross-sections and non-linear 1-D analyses along the beam reference curves. Two different tools are used here for the complete 3-D analysis: VABS for the 2-D cross-sectional analysis and GEBT for the 1-D beam analysis. Physically based failure models for the matrix in compression and tension loading are used in the present work. Matrix cracking is detected using two failure criteria, matrix failure in compression and matrix failure in tension, which are based on the recovered field. A strain variable is set which drives the damage variable for matrix cracking, and this damage variable is used to estimate the reduced cross-sectional stiffness. The matrix micro-cracking analysis is performed using two different approaches: (i) element-wise and (ii) node-wise. The procedure presented in this paper is implemented in VABS as a matrix micro-cracking modeling module. Three examples are presented to investigate the matrix failure model; they illustrate the effect of matrix cracking on cross-sectional stiffness under varying applied cyclic load.
Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration
Klein, Arno; Andersson, Jesper; Ardekani, Babak A.; Ashburner, John; Avants, Brian; Chiang, Ming-Chang; Christensen, Gary E.; Collins, D. Louis; Gee, James; Hellier, Pierre; Song, Joo Hyun; Jenkinson, Mark; Lepage, Claude; Rueckert, Daniel; Thompson, Paul; Vercauteren, Tom; Woods, Roger P.; Mann, J. John; Parsey, Ramin V.
2009-01-01
All fields of neuroscience that employ brain imaging need to communicate their results with reference to anatomical regions. In particular, comparative morphometry and group analysis of functional and physiological data require coregistration of brains to establish correspondences across brain structures. It is well established that linear registration of one brain to another is inadequate for aligning brain structures, so numerous algorithms have emerged to nonlinearly register brains to one another. This study is the largest evaluation of nonlinear deformation algorithms applied to brain image registration ever conducted. Fourteen algorithms from laboratories around the world are evaluated using 8 different error measures. More than 45,000 registrations between 80 manually labeled brains were performed by algorithms including: AIR, ANIMAL, ART, Diffeomorphic Demons, FNIRT, IRTK, JRD-fluid, ROMEO, SICLE, SyN, and four different SPM5 algorithms (“SPM2-type” and regular Normalization, Unified Segmentation, and the DARTEL Toolbox). All of these registrations were preceded by linear registration between the same image pairs using FLIRT. One of the most significant findings of this study is that the relative performances of the registration methods under comparison appear to be little affected by the choice of subject population, labeling protocol, and type of overlap measure. This is important because it suggests that the findings are generalizable to new subject populations that are labeled or evaluated using different labeling protocols. Furthermore, we ranked the 14 methods according to three completely independent analyses (permutation tests, one-way ANOVA tests, and indifference-zone ranking) and derived three almost identical top rankings of the methods. ART, SyN, IRTK, and SPM's DARTEL Toolbox gave the best results according to overlap and distance measures, with ART and SyN delivering the most consistently high accuracy across subjects and label sets. Updates will be published on the http://www.mindboggle.info/papers/ website. PMID:19195496
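For readers unfamiliar with the overlap measures such evaluations rest on, here is a minimal sketch of one of them, a Dice-style target overlap between two labeled volumes (the arrays and the background convention are our assumptions, not the study's exact implementation):

```python
import numpy as np

def mean_dice(labels_a, labels_b, background=0):
    """Mean Dice overlap across labels present in both volumes."""
    scores = []
    for lab in np.intersect1d(np.unique(labels_a), np.unique(labels_b)):
        if lab == background:
            continue
        a, b = labels_a == lab, labels_b == lab
        scores.append(2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum()))
    return float(np.mean(scores))

rng = np.random.default_rng(1)
a = rng.integers(0, 4, size=(32, 32, 32))      # toy labeled volume
b = np.roll(a, shift=1, axis=0)                # mimic slight misregistration
print(f"mean Dice overlap: {mean_dice(a, b):.3f}")
```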
Linear FBG Temperature Sensor Interrogation with Fabry-Perot ITU Multi-wavelength Reference.
Park, Hyoung-Jun; Song, Minho
2008-10-29
The equidistantly spaced multi-passbands of a Fabry-Perot ITU filter are used as an efficient multi-wavelength reference for fiber Bragg grating sensor demodulation. To compensate for the nonlinear wavelength tuning effect in the FBG sensor demodulator, a polynomial fitting algorithm was applied to the temporal peaks of the wavelength-scanned ITU filter. The fitted wavelength values are assigned to the peak locations of the FBG sensor reflections, obtaining constant accuracy, regardless of the wavelength scan range and frequency. A linearity error of about 0.18% against a reference thermocouple thermometer was obtained with the suggested method.
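The calibration step lends itself to a short sketch (all numbers and the toy scan nonlinearity are invented for illustration): the arrival times of the ITU passband peaks within one scan are fitted against the known grid wavelengths, and FBG peak times are then converted through the same polynomial.

```python
import numpy as np

itu_wavelengths = 1550.0 + 0.8 * np.arange(10)        # known ITU grid (nm)
# toy nonlinear scan: wavelength(t) = 1550 + 7.5 t + 0.4 t^2
itu_peak_times = np.array(
    [max(np.roots([0.4, 7.5, 1550.0 - w]).real) for w in itu_wavelengths])

coeff = np.polyfit(itu_peak_times, itu_wavelengths, deg=3)  # calibration fit

fbg_peak_time = 0.55                       # measured FBG peak time (toy)
fbg_wavelength = np.polyval(coeff, fbg_peak_time)
print(f"FBG wavelength: {fbg_wavelength:.3f} nm")
```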
NASA Astrophysics Data System (ADS)
Proskurov, S.; Darbyshire, O. R.; Karabasov, S. A.
2017-12-01
The present work discusses modifications to the stochastic Fast Random Particle Mesh (FRPM) method featuring both tonal and broadband noise sources. The technique relies on combining the vortex-shedding-resolved flow available from an Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulation with the fine-scale turbulence FRPM solution generated via stochastic velocity fluctuations in the context of vortex sound theory. In contrast to the existing literature, our method encompasses a unified treatment of broadband and tonal acoustic noise sources at the source level, thus accounting for linear source interference as well as possible non-linear source interaction effects. Once the sound sources are determined, the Acoustic Perturbation Equations (APE-4) are solved in the time domain for the sound propagation. Results of the method's application to two aerofoil benchmark cases, with both sharp and blunt trailing edges, are presented. In each case, the importance of the individual linear and non-linear noise sources was investigated. Several new key features related to the unsteady implementation of the method were tested and incorporated. Encouraging results have been obtained for the benchmark test cases using the new technique, which is believed to be potentially applicable to other airframe noise problems where both tonal and broadband parts are important.
A Textbook for a First Course in Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Zingg, D. W.; Pulliam, T. H.; Nixon, David (Technical Monitor)
1999-01-01
This paper describes and discusses the textbook, Fundamentals of Computational Fluid Dynamics by Lomax, Pulliam, and Zingg, which is intended for a graduate level first course in computational fluid dynamics. This textbook emphasizes fundamental concepts in developing, analyzing, and understanding numerical methods for the partial differential equations governing the physics of fluid flow. Its underlying philosophy is that the theory of linear algebra and the attendant eigenanalysis of linear systems provides a mathematical framework to describe and unify most numerical methods in common use in the field of fluid dynamics. Two linear model equations, the linear convection and diffusion equations, are used to illustrate concepts throughout. Emphasis is on the semi-discrete approach, in which the governing partial differential equations (PDEs) are reduced to systems of ordinary differential equations (ODEs) through a discretization of the spatial derivatives. The ordinary differential equations are then reduced to ordinary difference equations (OΔEs) using a time-marching method. This methodology, using the progression from PDEs through ODEs to OΔEs, together with the use of the eigensystems of tridiagonal matrices and the theory of OΔEs, gives the book its distinctiveness and provides a sound basis for a deep understanding of fundamental concepts in computational fluid dynamics.
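A compact sketch of the semi-discrete approach on the book's first model equation (our own minimal example, not the book's code): second-order central differences reduce the linear convection equation u_t + a u_x = 0 to a system of ODEs, and an RK4 time march then turns those ODEs into ordinary difference equations.

```python
import numpy as np

n, a = 100, 1.0
dx = 1.0 / n
x = np.arange(n) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)               # initial pulse

def rhs(u):                                       # periodic central difference
    return -a * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

dt = 0.4 * dx / a                                 # stable step for RK4
for _ in range(int(0.25 / dt)):                   # march to t ~ 0.25
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    u += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

print("pulse peak near x =", round(float(x[np.argmax(u)]), 2))  # ~0.75
```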
rnaQUAST: a quality assessment tool for de novo transcriptome assemblies.
Bushmanova, Elena; Antipov, Dmitry; Lapidus, Alla; Suvorov, Vladimir; Prjibelski, Andrey D
2016-07-15
The ability to generate large RNA-Seq datasets has created a demand for both de novo and reference-based transcriptome assemblers. However, while many transcriptome assemblers are now available, there is still no unified quality assessment tool for RNA-Seq assemblies. We present rnaQUAST, a tool for evaluating RNA-Seq assembly quality and benchmarking transcriptome assemblers using a reference genome and gene database. rnaQUAST calculates various metrics that demonstrate the completeness and correctness levels of the assembled transcripts, and outputs them in a user-friendly report. rnaQUAST is implemented in Python and is freely available at http://bioinf.spbau.ru/en/rnaquast. Contact: ap@bioinf.spbau.ru. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard
Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton
2017-01-01
The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input “data.” It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is the transformed empirical ROC curve at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values, so the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze two real cancer diagnostic examples as an illustration. PMID:28469385
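Our simplified reading of the least-squares idea, under the binormal assumption (synthetic data; this is a sketch, not the authors' exact estimator): probit-transformed empirical sensitivities are linear in the probit of the empirical false-positive rates, so the ROC intercept and slope come from ordinary least squares and the AUC follows in closed form.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 400)       # test values, non-diseased
diseased = rng.normal(1.2, 1.0, 400)      # test values, diseased

thresholds = np.quantile(healthy, np.linspace(0.05, 0.95, 19))
fpr = np.array([(healthy > c).mean() for c in thresholds])
tpr = np.array([(diseased > c).mean() for c in thresholds])
eps = 0.5 / healthy.size                  # guard the probit at 0 and 1
fpr, tpr = np.clip(fpr, eps, 1 - eps), np.clip(tpr, eps, 1 - eps)

# least squares on the probit scale: Phi^-1(TPR) = a + b * Phi^-1(FPR)
X = np.column_stack([np.ones_like(fpr), norm.ppf(fpr)])
a, b = np.linalg.lstsq(X, norm.ppf(tpr), rcond=None)[0]
print(f"AUC estimate: {norm.cdf(a / np.sqrt(1.0 + b ** 2)):.3f}")  # ~0.80
```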
Verifying Stability of Dynamic Soft-Computing Systems
NASA Technical Reports Server (NTRS)
Wen, Wu; Napolitano, Marcello; Callahan, John
1997-01-01
Soft computing is a general term for algorithms that learn from human knowledge and mimic human skills. Examples of such algorithms are fuzzy inference systems and neural networks. Many applications, especially in control engineering, have demonstrated their appropriateness in building intelligent systems that are flexible and robust. Although recent research has shown that a certain class of neuro-fuzzy controllers can be proven bounded and stable, these results are implementation dependent and difficult to apply to the design and validation process. Many practitioners adopt a trial-and-error approach to system validation or resort to exhaustive testing using prototypes. In this paper, we describe our ongoing research toward establishing the necessary theoretical foundations as well as building practical tools for the verification and validation of soft-computing systems. A unified model for general neuro-fuzzy systems is adopted. Classic non-linear system control theory and recent results on its application to neuro-fuzzy systems are incorporated and applied to the unified model. It is hoped that general tools can be developed to help the designer visualize and manipulate the regions of stability and boundedness, much the same way Bode plots and root-locus plots have helped conventional control design and validation.
NASA Astrophysics Data System (ADS)
Rouffaud, R.; Levassort, F.; Hladky-Hennion, A.-C.
2017-02-01
Piezoelectric single crystals (PSCs) are increasingly used in the manufacture of ultrasonic transducers, in particular for linear arrays or single-element transducers. Among these PSCs, according to their microstructure and poling direction, some exhibit mm2 symmetry. The analytical expression of the electromechanical coupling coefficient for a vibration mode along the poling direction of a piezoelectric rectangular bar resonator is established. It is based on mode coupling theory and the fundamental energy-ratio definition of electromechanical coupling coefficients. This unified formula for mm2 symmetry class materials is obtained as a function of an aspect ratio (G), where the two extreme cases correspond to a thin plate (with a vibration mode characterized by the thickness coupling factor, kt) and a thin bar (characterized by k33'). To optimize the k33' value related to the thin bar design, a rotation of the crystallographic axes in the plane orthogonal to the poling direction is performed to choose the highest value for a PIN-PMN-PT single crystal. Finally, finite element calculations are performed to deduce resonance frequencies and coupling coefficients over a large range of G values, confirming the developed analytical relations.
Petit, Caroline; Samson, Adeline; Morita, Satoshi; Ursino, Moreno; Guedj, Jérémie; Jullien, Vincent; Comets, Emmanuelle; Zohar, Sarah
2018-06-01
The number of trials conducted and the number of patients per trial are typically small in paediatric clinical studies. This is due to ethical constraints and the complexity of the medical process for treating children. While incorporating prior knowledge from adults may be extremely valuable, this must be done carefully. In this paper, we propose a unified method for designing and analysing dose-finding trials in paediatrics, while bridging information from adults. The dose-range is calculated under three extrapolation options, linear, allometry and maturation adjustment, using adult pharmacokinetic data. To do this, it is assumed that target exposures are the same in both populations. The working model and prior distribution parameters of the dose-toxicity and dose-efficacy relationships are obtained using early-phase adult toxicity and efficacy data at several dose levels. Priors are integrated into the dose-finding process through Bayesian model selection or adaptive priors. This calibrates the model to adjust for misspecification, if the adult and paediatric data are very different. We performed a simulation study which indicates that incorporating prior adult information in this way may improve dose selection in children.
A separate universe view of the asymmetric sky
NASA Astrophysics Data System (ADS)
Kobayashi, Takeshi; Cortês, Marina; Liddle, Andrew R.
2015-05-01
We provide a unified description of the hemispherical asymmetry in the cosmic microwave background generated by the mechanism proposed by Erickcek, Kamionkowski, and Carroll, using a δN formalism that consistently accounts for the asymmetry-generating mode throughout. We derive a general form for the power spectrum which explicitly exhibits the broken translational invariance. This can be directly compared to cosmic microwave background observables, including the observed quadrupole and f_NL values, automatically incorporating the Grishchuk-Zel'dovich effect. Our calculation unifies and extends previous calculations in the literature, in particular giving the full dependence of observables on the phase of our location in the super-horizon mode that generates the asymmetry. We demonstrate how the apparently different results obtained by previous authors arise as different limiting cases. We confirm the existence of non-linear contributions to the microwave background quadrupole from the super-horizon mode identified by Erickcek et al. and further explored by Kanno et al., and show that those contributions are always significant in parameter regimes capable of explaining the observed asymmetry. We indicate example parameter values capable of explaining the observed power asymmetry without violating other observational bounds.
Models of Neuronal Stimulus-Response Functions: Elaboration, Estimation, and Evaluation
Meyer, Arne F.; Williamson, Ross S.; Linden, Jennifer F.; Sahani, Maneesh
2017-01-01
Rich, dynamic, and dense sensory stimuli are encoded within the nervous system by the time-varying activity of many individual neurons. A fundamental approach to understanding the nature of the encoded representation is to characterize the function that relates the moment-by-moment firing of a neuron to the recent history of a complex sensory input. This review provides a unifying and critical survey of the techniques that have been brought to bear on this effort thus far—ranging from the classical linear receptive field model to modern approaches incorporating normalization and other nonlinearities. We address separately the structure of the models; the criteria and algorithms used to identify the model parameters; and the role of regularizing terms or “priors.” In each case we consider benefits or drawbacks of various proposals, providing examples for when these methods work and when they may fail. Emphasis is placed on key concepts rather than mathematical details, so as to make the discussion accessible to readers from outside the field. Finally, we review ways in which the agreement between an assumed model and the neuron's response may be quantified. Re-implemented and unified code for many of the methods are made freely available. PMID:28127278
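As a concrete instance of the classical end of that spectrum, here is a minimal ridge-regularized linear receptive-field fit on synthetic data (all signals are invented; the exponential nonlinearity is used only to generate spikes):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 5000, 40                                  # time bins, filter length
stim = rng.normal(size=(T, d))                   # stimulus history per bin
w_true = np.exp(-np.arange(d) / 8.0) * np.sin(np.arange(d) / 3.0)
spikes = rng.poisson(np.exp(0.3 * stim @ w_true))   # synthetic spike counts

lam = 10.0                                       # ridge "prior" strength
w_hat = np.linalg.solve(stim.T @ stim + lam * np.eye(d), stim.T @ spikes)
print("correlation with true filter:",
      round(float(np.corrcoef(w_hat, w_true)[0, 1]), 3))
```

The regularization term lam plays the role of the "priors" discussed in the review: it trades a small bias for a large reduction in estimator variance.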
NASA Astrophysics Data System (ADS)
Jaiswal, Priyank; Dasgupta, Rahul
2010-05-01
We demonstrate that imaging of 2-D multichannel land seismic data can be effectively accomplished by a combination of reflection traveltime tomography and pre-stack depth migration (PSDM); we refer to the combined process as "the unified imaging". The unified imaging comprises cyclic runs of joint reflection and direct-arrival inversion and pre-stack depth migration. From one cycle to another, both the inversion and the migration provide mutual feedback guided by the geological interpretation. The unified imaging is implemented in two broad stages. The first stage is similar to conventional imaging, except that it makes significant use of the velocity model from the inversion of the direct arrivals for both datuming and stacking velocity analysis. The first stage ends with an initial interval velocity model (from the stacking velocity analysis) and a corresponding depth-migrated image. The second stage updates the velocity model and the depth image from the first stage in a cyclic manner; a single cycle comprises a single run of reflection traveltime inversion followed by PSDM. Interfaces used in the inversion are interpretations of the PSDM image from the previous cycle, and the velocity model used in PSDM is from the joint inversion in the current cycle. Additionally, in every cycle, interpreted horizons in the stacked data are inverted as zero-offset reflections to constrain the interfaces; the velocity model is held stationary for the zero-offset inversion. A congruency factor, j, which measures the discrepancy between interfaces from the interpretation of the PSDM image and their corresponding counterparts from the inversion of the zero-offset reflections within assigned uncertainties, is computed in every cycle. A value of unity for j indicates that the images from the inversion and the migration are equivalent; at this point the unified imaging is said to have converged and is halted. We apply the unified imaging to 2-D multichannel seismic data from the Naga Thrust and Fold Belt (NTFB), India, where several exploratory wells targeting sub-thrust leads in the footwall have failed in the last decade. This failure is speculated to be due to incorrect depth images, which are in turn attributed to incorrect velocity models developed using conventional methods. The 2-D seismic data in this study were acquired perpendicular to the trend of the NTFB, where the outcropping hanging wall has a topographic culmination. The acquisition style is split-spread, with 30 m shot and receiver spacing and a nominal fold of 90. The data are recorded with a sample interval of 2 ms. Overall, the data have a moderate signal-to-noise ratio and a broad frequency bandwidth of 8-80 Hz. The seismic line contains the failed exploratory well in its central part. The final results from the unified imaging (both the depth image and the corresponding velocity model) suggest the presence of a triangle zone, which was previously undiscovered. Conventional imaging had falsely portrayed the triangle zone as a structural high, which was interpreted as an anticline. As a result, the exploratory well, meant to target the anticline, met with pressure changes which were neither expected nor explained. The unified imaging results not only explain the observations in the well but also reveal new leads in the region. The velocity model from the unified imaging was also found to be adequate for frequency-domain full-waveform imaging of the hanging wall. Results from waveform inversion are further corroborated by the geological interpretation of the exploratory well.
GUDM: Automatic Generation of Unified Datasets for Learning and Reasoning in Healthcare.
Ali, Rahman; Siddiqi, Muhammad Hameed; Idris, Muhammad; Ali, Taqdir; Hussain, Shujaat; Huh, Eui-Nam; Kang, Byeong Ho; Lee, Sungyoung
2015-07-02
A wide array of biomedical data are generated and made available to healthcare experts. However, due to the diverse nature of the data, it is difficult to predict outcomes from them. It is therefore necessary to combine these diverse data sources into a single unified dataset. This paper proposes a global unified data model (GUDM) to provide a global unified data structure for all data sources and to generate a unified dataset via a "data modeler" tool. The proposed tool implements a user-centric, priority-based approach which can easily resolve the problems of unified data modeling and of overlapping attributes across multiple datasets. The tool is illustrated using sample diabetes mellitus data. The diverse data sources used to generate the unified dataset for diabetes mellitus include clinical trial information, a social media interaction dataset, and physical activity data collected using different sensors. To demonstrate the significance of the unified dataset, we adopted a well-known rough set theory based rule creation process to create rules from the unified dataset. The evaluation of the tool on six different sets of locally created diverse datasets shows that the tool, on average, reduces the time efforts of the experts and knowledge engineer by 94.1% while creating unified datasets.
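The priority-resolution step can be caricatured in a few lines (field names and values are invented; this is our illustration of the idea, not the GUDM implementation): when sources disagree on an overlapping attribute, the source the user ranked higher wins.

```python
def unify(records, priority):
    """records: {source: {attribute: value}}; priority: highest first."""
    unified = {}
    for source in reversed(priority):            # apply low priority first,
        unified.update(records.get(source, {}))  # higher priority overwrites
    return unified

records = {
    "clinical_trial": {"glucose": 148, "hba1c": 7.1},
    "sensor":         {"glucose": 151, "steps": 5400},
    "social_media":   {"mood": "tired", "steps": 5000},
}
print(unify(records, ["clinical_trial", "sensor", "social_media"]))
# glucose comes from the trial, steps from the sensor, mood from social media
```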
GUDM: Automatic Generation of Unified Datasets for Learning and Reasoning in Healthcare
Ali, Rahman; Siddiqi, Muhammad Hameed; Idris, Muhammad; Ali, Taqdir; Hussain, Shujaat; Huh, Eui-Nam; Kang, Byeong Ho; Lee, Sungyoung
2015-01-01
A wide array of biomedical data are generated and made available to healthcare experts. However, due to the diverse nature of the data, it is difficult to predict outcomes from them. It is therefore necessary to combine these diverse data sources into a single unified dataset. This paper proposes a global unified data model (GUDM) to provide a global unified data structure for all data sources and to generate a unified dataset via a “data modeler” tool. The proposed tool implements a user-centric, priority-based approach which can easily resolve the problems of unified data modeling and of overlapping attributes across multiple datasets. The tool is illustrated using sample diabetes mellitus data. The diverse data sources used to generate the unified dataset for diabetes mellitus include clinical trial information, a social media interaction dataset, and physical activity data collected using different sensors. To demonstrate the significance of the unified dataset, we adopted a well-known rough set theory based rule creation process to create rules from the unified dataset. The evaluation of the tool on six different sets of locally created diverse datasets shows that the tool, on average, reduces the time efforts of the experts and knowledge engineer by 94.1% while creating unified datasets. PMID:26147731
A unifying model of concurrent spatial and temporal modularity in muscle activity.
Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien
2014-02-01
Modularity in the central nervous system (CNS), i.e., the brain capability to generate a wide repertoire of movements by combining a small number of building blocks ("modules"), is thought to underlie the control of movement. Numerous studies reported evidence for such a modular organization by identifying invariant muscle activation patterns across various tasks. However, previous studies relied on decompositions differing in both the nature and dimensionality of the identified modules. Here, we derive a single framework that encompasses all influential models of muscle activation modularity. We introduce a new model (named space-by-time decomposition) that factorizes muscle activations into concurrent spatial and temporal modules. To infer these modules, we develop an algorithm, referred to as sample-based nonnegative matrix trifactorization (sNM3F). We test the space-by-time decomposition on a comprehensive electromyographic dataset recorded during execution of arm pointing movements and show that it provides a low-dimensional yet accurate, highly flexible and task-relevant representation of muscle patterns. The extracted modules have a well characterized functional meaning and implement an efficient trade-off between replication of the original muscle patterns and task discriminability. Furthermore, they are compatible with the modules extracted from existing models, such as synchronous synergies and temporal primitives, and generalize time-varying synergies. Our results indicate the effectiveness of a simultaneous but separate condensation of spatial and temporal dimensions of muscle patterns. The space-by-time decomposition accommodates a unified view of the hierarchical mapping from task parameters to coordinated muscle activations, which could be employed as a reference framework for studying compositional motor control.
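The generative form of the space-by-time model is easy to state in code (a schematic of the model only; sNM3F itself alternates nonnegative least-squares updates of the three factors, which we do not reproduce here):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, P, Q = 50, 12, 3, 2              # time bins, muscles, temporal/spatial modules
Wt = np.abs(rng.normal(size=(T, P)))   # temporal modules (columns)
Ws = np.abs(rng.normal(size=(Q, N)))   # spatial modules (rows)
A = np.abs(rng.normal(size=(P, Q)))    # single-trial mixing coefficients

M = Wt @ A @ Ws                        # one trial's muscle pattern, T x N
print("reconstructed pattern shape:", M.shape)
```

Only the small coefficient matrix A varies across trials, which is what makes the representation low-dimensional yet task-discriminative.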
The proper weighting function for retrieving temperatures from satellite measured radiances
NASA Technical Reports Server (NTRS)
Arking, A.
1976-01-01
One class of methods for converting satellite-measured radiances into atmospheric temperature profiles involves a linearization of the radiative transfer equation, $\delta R = \sum_{i=1}^{s} W_i \, \delta T_i$, where $\delta T_i$ is the deviation of the temperature in layer i from that of a reference atmosphere, $\delta R$ is the difference in the radiance at satellite altitude from the corresponding radiance for the reference atmosphere, and $W_i$ is the discrete (or vector) form of the T-weighting (i.e., temperature-weighting) function W(P), where P is pressure. The top layer of the atmosphere corresponds to i = 1, the bottom layer to i = s - 1, and i = s refers to the surface. Linearization in temperature (or some function of temperature) is at the heart of all linear or matrix methods. The weighting function that should be used is developed.
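A toy inversion of the linearized relation (our illustration; the weighting-function kernel and the regularization below are invented) recovers layer temperature deviations from a set of channel radiance perturbations:

```python
import numpy as np

rng = np.random.default_rng(0)
layers, channels = 10, 15
p = np.linspace(0.05, 1.0, layers)                  # pressure coordinate
peaks = np.linspace(0.1, 0.95, channels)
W = np.exp(-(((p[None, :] - peaks[:, None]) / 0.15) ** 2))  # channels x layers

dT_true = np.sin(3.0 * np.pi * p)                   # synthetic deviations (K)
dR = W @ dT_true + rng.normal(0.0, 0.01, channels)  # noisy radiance residuals

lam = 0.1                                           # Tikhonov regularization
dT = np.linalg.solve(W.T @ W + lam * np.eye(layers), W.T @ dR)
print("max retrieval error (K):", round(float(np.abs(dT - dT_true).max()), 2))
```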
Compound Identification Using Penalized Linear Regression on Metabolomics
Liu, Ruiqi; Wu, Dongfeng; Zhang, Xiang; Kim, Seongho
2014-01-01
Compound identification is often achieved by matching the experimental mass spectra to the mass spectra stored in a reference library, based on mass spectral similarity. Because the number of compounds in the reference library is much larger than the number of mass-to-charge ratio (m/z) values, the data are high dimensional and suffer from singularity. For this reason, penalized linear regressions such as ridge regression and the lasso are used instead of ordinary least squares regression. Furthermore, two-step approaches using the dot product and Pearson’s correlation along with the penalized linear regression are proposed in this study. PMID:27212894
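A small synthetic sketch of the penalized matching idea (library and spectrum are invented; we use scikit-learn's lasso as one of the penalties discussed): the experimental spectrum is regressed on a library whose columns are reference spectra, and compounds with large coefficients are the candidate identifications.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_mz, n_compounds = 200, 1000            # far more compounds than m/z bins
X = rng.exponential(1.0, (n_mz, n_compounds))
X *= rng.random((n_mz, n_compounds)) < 0.05   # sparse reference spectra

y = 0.9 * X[:, 17] + 0.4 * X[:, 512]     # "experimental" mixture spectrum
y += rng.normal(0.0, 0.01, n_mz)

fit = Lasso(alpha=0.01, positive=True).fit(X, y)   # L1 handles the singularity
print("top candidate compounds:", np.argsort(fit.coef_)[::-1][:3])
```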
Kiefer, Markus; Ansorge, Ulrich; Haynes, John-Dylan; Hamker, Fred; Mattler, Uwe; Verleger, Rolf; Niedeggen, Michael
2011-01-01
Psychological and neuroscience approaches have promoted much progress in elucidating the cognitive and neural mechanisms that underlie phenomenal visual awareness during the last decades. In this article, we provide an overview of the latest research investigating important phenomena in conscious and unconscious vision. We identify general principles to characterize conscious and unconscious visual perception, which may serve as important building blocks for a unified model to explain the plethora of findings. We argue that in particular the integration of principles from both conscious and unconscious vision is advantageous and provides critical constraints for developing adequate theoretical models. Based on the principles identified in our review, we outline essential components of a unified model of conscious and unconscious visual perception. We propose that awareness refers to consolidated visual representations, which are accessible to the entire brain and therefore globally available. However, visual awareness not only depends on consolidation within the visual system, but is additionally the result of a post-sensory gating process, which is mediated by higher-level cognitive control mechanisms. We further propose that amplification of visual representations by attentional sensitization is not exclusive to the domain of conscious perception, but also applies to visual stimuli, which remain unconscious. Conscious and unconscious processing modes are highly interdependent with influences in both directions. We therefore argue that exactly this interdependence renders a unified model of conscious and unconscious visual perception valuable. Computational modeling jointly with focused experimental research could lead to a better understanding of the plethora of empirical phenomena in consciousness research. PMID:22253669
NASA Astrophysics Data System (ADS)
Nakano, T.; Oogane, M.; Furuichi, T.; Ando, Y.
2018-04-01
The automotive industry requires magnetic sensors exhibiting highly linear output within a dynamic range as wide as ±1 kOe. A simple model predicts that the magneto-conductance (G-H) curve in a magnetic tunnel junction (MTJ) is perfectly linear, whereas the magneto-resistance (R-H) curve inevitably contains a finite nonlinearity. We prepared two kinds of MTJs using in-plane or perpendicularly magnetized synthetic antiferromagnetic (i-SAF or p-SAF) reference layers and investigated their sensor performance. In the MTJ with the i-SAF reference layer, the G-H curve did not necessarily show smaller nonlinearities than those of the R-H curve with different dynamic ranges. This is because the magnetizations of the i-SAF reference layer start to rotate at a magnetic field even smaller than the switching field (Hsw) measured by a magnetometer, which significantly affects the tunnel magnetoresistance (TMR) effect. In the MTJ with the p-SAF reference layer, the G-H curve showed much smaller nonlinearities than those of the R-H curve, thanks to a large Hsw value of the p-SAF reference layer. We achieved a nonlinearity of 0.08% FS (full scale) in the G-H curve with a dynamic range of ±1 kOe, satisfying our target for automotive applications. This demonstrated that a reference layer exhibiting a large Hsw value is indispensable in order to achieve a highly linear G-H curve.
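The contrast between the two readouts follows from one line of algebra (our sketch of the simple model mentioned above, with α an assumed field sensitivity): if the conductance responds linearly in field, G(H) = G_0(1 + αH), then the resistance

\[ R(H) = \frac{1}{G(H)} = R_0 \left( 1 - \alpha H + \alpha^2 H^2 - \cdots \right), \]

so a finite quadratic term survives in the R-H curve even when the G-H response is perfectly linear.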
Vandenplas, Jérémie; Colinet, Frederic G; Gengler, Nicolas
2014-09-30
A condition to predict unbiased estimated breeding values by best linear unbiased prediction is to use simultaneously all available data. However, this condition is not often fully met. For example, in dairy cattle, internal (i.e. local) populations lead to evaluations based only on internal records while widely used foreign sires have been selected using internally unavailable external records. In such cases, internal genetic evaluations may be less accurate and biased. Because external records are unavailable, methods were developed to combine external information that summarizes these records, i.e. external estimated breeding values and associated reliabilities, with internal records to improve accuracy of internal genetic evaluations. Two issues of these methods concern double-counting of contributions due to relationships and due to records. These issues could be worse if external information came from several evaluations, at least partially based on the same records, and combined into a single internal evaluation. Based on a Bayesian approach, the aim of this research was to develop a unified method to integrate and blend simultaneously several sources of information into an internal genetic evaluation by avoiding double-counting of contributions due to relationships and due to records. This research resulted in equations that integrate and blend simultaneously several sources of information and avoid double-counting of contributions due to relationships and due to records. The performance of the developed equations was evaluated using simulated and real datasets. The results showed that the developed equations integrated and blended several sources of information well into a genetic evaluation. The developed equations also avoided double-counting of contributions due to relationships and due to records. Furthermore, because all available external sources of information were correctly propagated, relatives of external animals benefited from the integrated information and, therefore, more reliable estimated breeding values were obtained. The proposed unified method integrated and blended several sources of information well into a genetic evaluation by avoiding double-counting of contributions due to relationships and due to records. The unified method can also be extended to other types of situations such as single-step genomic or multi-trait evaluations, combining information across different traits.
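A one-animal caricature of the blending arithmetic (ours, not the paper's mixed-model equations; numbers invented): estimates are precision-weighted, and information the external source shares with the internal records is subtracted once so those records do not count twice.

```python
def blend(internal, external, shared_info=0.0):
    """internal/external: (EBV, information); shared_info: information
    from records common to both sources, removed from the external side."""
    b_int, p_int = internal
    b_ext, p_ext = external
    p_ext_net = max(p_ext - shared_info, 0.0)   # avoid double-counting
    return (p_int * b_int + p_ext_net * b_ext) / (p_int + p_ext_net)

print(f"blended EBV: {blend((1.2, 4.0), (0.8, 9.0), shared_info=2.0):.2f}")
```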
ERIC Educational Resources Information Center
Wawro, Megan; Rasmussen, Chris; Zandieh, Michelle; Sweeney, George Franklin; Larson, Christine
2012-01-01
In this paper we present an innovative instructional sequence for an introductory linear algebra course that supports students' reinvention of the concepts of span, linear dependence, and linear independence. Referred to as the Magic Carpet Ride sequence, the problems begin with an imaginary scenario that allows students to build rich imagery and…
A policy analysis of teamwork as a proposal for healthcare humanization: implications for nursing.
da Silva, R N; de Freitas, F D da S; de Araújo, F P; Ferreira, M de A
2016-12-01
To analyse the implications of the political devices of the Brazilian National Humanization Policy, Singular Therapeutic Project and Reference Team and Matrix Support, for nursing as a professional discipline. The Brazilian Unified Health System, SUS-Brazil, has as its principles regarding health care: universal access at all levels of care; equality and non-discrimination; integrality; community participation; and political and administrative decentralization, regionalization, and hierarchization. The National Humanization Policy is a public health policy that serves as the methodological apparatus for the application of the SUS-Brazil principles. Reference Teams refers to inter- and transdisciplinary/professional teams. These team approaches are associated with increased quality of care. Qualitative lexical content policy analysis of the official documents for the Brazilian National Humanization Policy. The Reference Team model that is used to carry out Singular Therapeutic Projects leads to discussion of disciplinary boundaries in the context of health care. The Brazilian National Humanization Policy demands inclusion of various kinds of knowledge and networking. Research is needed to elucidate the nature of nursing care and its distinctive character in relation to the work objectives of other professional disciplines. © 2016 International Council of Nurses.
Barbara, Joanna E; Castro-Perez, Jose M
2011-10-30
Electrophilic reactive metabolite screening by liquid chromatography/mass spectrometry (LC/MS) is commonly performed during drug discovery and early-stage drug development. Accurate mass spectrometry has excellent utility in this application, but sophisticated data processing strategies are essential to extract useful information. Herein, a unified approach to glutathione (GSH) trapped reactive metabolite screening with high-resolution LC/TOF MS(E) analysis and drug-conjugate-specific in silico data processing was applied to rapid analysis of test compounds without the need for stable- or radio-isotope-labeled trapping agents. Accurate mass defect filtering (MDF) with a C-heteroatom dealkylation algorithm dynamic with mass range was compared to linear MDF and shown to minimize false positive results. MS(E) data-filtering, time-alignment and data mining post-acquisition enabled detection of 53 GSH conjugates overall formed from 5 drugs. Automated comparison of sample and control data in conjunction with the mass defect filter enabled detection of several conjugates that were not evident with mass defect filtering alone. High- and low-energy MS(E) data were time-aligned to generate in silico product ion spectra which were successfully applied to structural elucidation of detected GSH conjugates. Pseudo neutral loss and precursor ion chromatograms derived post-acquisition demonstrated 50.9% potential coverage, at best, of the detected conjugates by any individual precursor or neutral loss scan type. In contrast with commonly applied neutral loss and precursor-based techniques, the unified method has the advantage of applicability across different classes of GSH conjugates. The unified method was also successfully applied to cyanide trapping analysis and has potential for application to alternate trapping agents. Copyright © 2011 John Wiley & Sons, Ltd.
Torque ripple reduction of brushless DC motor based on adaptive input-output feedback linearization.
Shirvani Boroujeni, M; Markadeh, G R Arab; Soltani, J
2017-09-01
Torque ripple reduction of brushless DC motors (BLDCs) is an interesting subject in variable-speed AC drives. In this paper, a mathematical expression for the torque ripple harmonics is first obtained. Then, for a non-ideal BLDC motor with known harmonic contents of the back-EMF, the calculation of the desired reference current amplitudes required to eliminate selected harmonics of the torque ripple is reviewed. In order to inject the reference harmonic currents into the motor windings, an Adaptive Input-Output Feedback Linearization (AIOFBL) control is proposed, which generates the reference voltages for a three-phase voltage source inverter in the stationary reference frame. Experimental results are presented to show the capability and validity of the proposed control method and are compared with the results of vector control in the Multi-Reference Frame (MRF) and of the Pseudo-Vector Control (P-VC) method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Linear FBG Temperature Sensor Interrogation with Fabry-Perot ITU Multi-wavelength Reference
Park, Hyoung-Jun; Song, Minho
2008-01-01
The equidistantly spaced multi-passbands of a Fabry-Perot ITU filter are used as an efficient multi-wavelength reference for fiber Bragg grating sensor demodulation. To compensate for the nonlinear wavelength tuning effect in the FBG sensor demodulator, a polynomial fitting algorithm was applied to the temporal peaks of the wavelength-scanned ITU filter. The fitted wavelength values are assigned to the peak locations of the FBG sensor reflections, obtaining constant accuracy, regardless of the wavelength scan range and frequency. A linearity error of about 0.18% against a reference thermocouple thermometer was obtained with the suggested method. PMID:27873898
ERIC Educational Resources Information Center
Schmitt, M. A.; And Others
1994-01-01
Compares traditional manure application planning techniques calculated to meet agronomic nutrient needs on a field-by-field basis with plans developed using computer-assisted linear programming optimization methods. Linear programming provided the most economical and environmentally sound manure application strategy. (Contains 15 references.) (MDH)
3D inelastic analysis methods for hot section components
NASA Technical Reports Server (NTRS)
Dame, L. T.; Chen, P. C.; Hartle, M. S.; Huang, H. T.
1985-01-01
The objective is to develop analytical tools capable of economically evaluating the cyclic time dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. Three models were developed. A simple model performs time dependent inelastic analysis using the power law creep equation. The second model is the classical model of Professors Walter Haisler and David Allen of Texas A and M University. The third model is the unified model of Bodner, Partom, et al. All models were customized for linear variation of loads and temperatures with all material properties and constitutive models being temperature dependent.
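The first model reduces to a one-line update per time step. A sketch (creep constants and load history are invented; stress varies linearly between time points, as in the text):

```python
import numpy as np

A, n = 1.0e-15, 4.0                      # illustrative power-law creep constants
times = np.linspace(0.0, 1000.0, 1001)   # hours
# stress ramps linearly 100 -> 250 -> 150 MPa over the mission
sigma = np.interp(times, [0.0, 500.0, 1000.0], [100.0, 250.0, 150.0])

eps = 0.0
for i in range(1, times.size):
    dt = times[i] - times[i - 1]
    s_mid = 0.5 * (sigma[i] + sigma[i - 1])  # midpoint of the linear segment
    eps += A * s_mid ** n * dt               # de/dt = A * sigma^n
print(f"accumulated creep strain after {times[-1]:.0f} h: {eps:.3e}")
```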
NASA Technical Reports Server (NTRS)
Bennett, David P.
1988-01-01
Cosmic strings are linear topological defects which are predicted by some grand unified theories to form during a spontaneous symmetry breaking phase transition in the early universe. They are the basis for the only theories of galaxy formation based on fundamental physics aside from quantum fluctuations from inflation. In contrast to inflation, they can also be observed directly through gravitational lensing and their characteristic microwave background anisotropy. It was recently discovered that the details of cosmic string evolution are very different from the so-called standard model that was assumed in most of the string-induced galaxy formation calculations. Therefore, the details of galaxy formation in the cosmic string models are currently very uncertain.
Cognitive Foundry v. 3.0 (OSS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Basilico, Justin; Dixon, Kevin; McClain, Jonathan
2009-11-18
The Cognitive Foundry is a unified collection of tools designed for research and applications that use cognitive modeling, machine learning, or pattern recognition. The software library contains design patterns, interface definitions, and default implementations of reusable software components and algorithms designed to support a wide variety of research and development needs. The library contains three main software packages: the Common package that contains basic utilities and linear algebraic methods, the Cognitive Framework package that contains tools to assist in implementing and analyzing theories of cognition, and the Machine Learning package that provides general algorithms and methods for populating Cognitive Framework components from domain-relevant data.
NASA Technical Reports Server (NTRS)
Turner, L. R.
1960-01-01
The problem of solving systems of nonlinear equations has been relatively neglected in the mathematical literature, especially in the textbooks, in comparison to the corresponding linear problem. Moreover, treatments that have an appearance of generality fail to discuss the nature of the solutions and the possible pitfalls of the methods suggested. Probably it is unrealistic to expect that a unified and comprehensive treatment of the subject will evolve, owing to the great variety of situations possible, especially in the applied field where some requirement of human or mechanical efficiency is always present. Therefore we attempt here simply to pose the problem and to describe and partially appraise the methods of solution currently in favor.
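Among the methods such treatments appraise, Newton's iteration for a system F(x) = 0 is the canonical one; a minimal sketch with an invented example system:

    import numpy as np

    def newton_system(F, J, x0, tol=1e-10, max_iter=50):
        """Solve F(x) = 0 by Newton's method with a supplied Jacobian J."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            dx = np.linalg.solve(J(x), -F(x))
            x += dx
            if np.linalg.norm(dx) < tol:
                return x
        raise RuntimeError("Newton iteration did not converge")

    # Example: intersection of a circle and a parabola.
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[1] - x[0]**2])
    J = lambda x: np.array([[2 * x[0], 2 * x[1]], [-2 * x[0], 1.0]])
    print(newton_system(F, J, [1.0, 1.0]))

As the report cautions, success depends on the starting point and on the conditioning of the Jacobian; no single recipe covers every applied situation.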
NASA Technical Reports Server (NTRS)
Mcknight, R. L.
1985-01-01
A series of interdisciplinary modeling and analysis techniques that were specialized to address three specific hot section components are presented. These techniques will incorporate data as well as theoretical methods from many diverse areas, including cycle and performance analysis, heat transfer analysis, linear and nonlinear stress analysis, and mission analysis. Building on the proven techniques already available in these fields, the new methods developed will be integrated into computer codes to provide an accurate and unified approach to analyzing combustor burner liners, hollow air-cooled turbine blades, and air-cooled turbine vanes. For these components, the methods developed will predict temperature, deformation, stress and strain histories throughout a complete flight mission.
NASA Astrophysics Data System (ADS)
Doronin, Alexander; Meglinski, Igor
2017-02-01
This report considers the development of a unified Monte Carlo (MC)-based computational model for simulating the propagation of Laguerre-Gaussian (LG) beams in turbid tissue-like scattering media. With the primary goal of proving the concept of using complex light for tissue diagnosis, we explore the propagation of LG beams in comparison with Gaussian beams for both linear and circular polarization. MC simulations of radially and azimuthally polarized LG beams in turbid media have been performed; classic phenomena such as preservation of the orbital angular momentum, optical memory and helicity flip are observed, and a detailed comparison is presented and discussed.
Strictly stable high order difference approximations for computational aeroacoustics
NASA Astrophysics Data System (ADS)
Müller, Bernhard; Johansson, Stefan
2005-09-01
High order finite difference approximations with improved accuracy and stability properties have been developed for computational aeroacoustics (CAA). One of our new difference operators corresponds to Tam and Webb's DRP scheme in the interior, but is modified near the boundaries to be strictly stable. A unified formulation of the nonlinear and linearized Euler equations is used, which can be extended to the Navier-Stokes equations. The approach has been verified for 1D, 2D and axisymmetric test problems. We have simulated the sound propagation from a rocket launch before lift-off. To cite this article: B. Müller, S. Johansson, C. R. Mecanique 333 (2005).
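To make the flavor of such operators concrete, here is the standard sixth-order 7-point central difference for d/dx on a periodic grid; Tam and Webb's DRP scheme keeps this 7-point form but substitutes dispersion-optimized coefficients, and the strictly stable boundary closures that are the paper's contribution are not shown.

    import numpy as np

    def ddx_central6(u, dx):
        """Sixth-order 7-point central difference of d/dx on a periodic grid.
        The DRP scheme uses the same stencil width with dispersion-optimized
        coefficients in place of these classical ones."""
        a1, a2, a3 = 3.0 / 4.0, -3.0 / 20.0, 1.0 / 60.0
        return (a1 * (np.roll(u, -1) - np.roll(u, 1))
                + a2 * (np.roll(u, -2) - np.roll(u, 2))
                + a3 * (np.roll(u, -3) - np.roll(u, 3))) / dx

    x = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
    err = np.max(np.abs(ddx_central6(np.sin(x), x[1] - x[0]) - np.cos(x)))
    print(f"max error: {err:.2e}")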
Time Hierarchies and Model Reduction in Canonical Non-linear Models
Löwe, Hannes; Kremling, Andreas; Marin-Sanguino, Alberto
2016-01-01
The time-scale hierarchies of a very general class of models in differential equations is analyzed. Classical methods for model reduction and time-scale analysis have been adapted to this formalism and a complementary method is proposed. A unified theoretical treatment shows how the structure of the system can be much better understood by inspection of two sets of singular values: one related to the stoichiometric structure of the system and another to its kinetics. The methods are exemplified first through a toy model, then a large synthetic network and finally with numeric simulations of three classical benchmark models of real biological systems. PMID:27708665
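The two sets of singular values the authors propose inspecting can be mimicked in a few lines; the toy network and rate constants below are assumptions for illustration, not the paper's benchmarks.

    import numpy as np

    # Toy network A -> B -> C: stoichiometric matrix N (species x reactions);
    # a zero singular value signals a conserved moiety (total mass here).
    N = np.array([[-1,  0],
                  [ 1, -1],
                  [ 0,  1]])
    print("stoichiometric singular values:",
          np.linalg.svd(N, compute_uv=False))

    # Jacobian of the kinetics at a reference state; widely separated rate
    # constants (assumed) produce the time-scale hierarchy.
    k1, k2 = 100.0, 0.1
    J = np.array([[-k1, 0.0], [k1, -k2]])   # d[A, B]/dt linearized
    print("kinetic singular values:", np.linalg.svd(J, compute_uv=False))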
On the Wind Generation of Water Waves
NASA Astrophysics Data System (ADS)
Bühler, Oliver; Shatah, Jalal; Walsh, Samuel; Zeng, Chongchun
2016-11-01
In this work, we consider the mathematical theory of wind generated water waves. This entails determining the stability properties of the family of laminar flow solutions to the two-phase interface Euler equation. We present a rigorous derivation of the linearized evolution equations about an arbitrary steady solution, and, using this, we give a complete proof of the instability criterion of M iles [16]. Our analysis is valid even in the presence of surface tension and a vortex sheet (discontinuity in the tangential velocity across the air-sea interface). We are thus able to give a unified equation connecting the Kelvin-Helmholtz and quasi-laminar models of wave generation.
Global non-linear effect of temperature on economic production.
Burke, Marshall; Hsiang, Solomon M; Miguel, Edward
2015-11-12
Growing evidence demonstrates that climatic conditions can have a profound impact on the functioning of modern human societies, but effects on economic activity appear inconsistent. Fundamental productive elements of modern economies, such as workers and crops, exhibit highly non-linear responses to local temperature even in wealthy countries. In contrast, aggregate macroeconomic productivity of entire wealthy countries is reported not to respond to temperature, while poor countries respond only linearly. Resolving this conflict between micro and macro observations is critical to understanding the role of wealth in coupled human-natural systems and to anticipating the global impact of climate change. Here we unify these seemingly contradictory results by accounting for non-linearity at the macro scale. We show that overall economic productivity is non-linear in temperature for all countries, with productivity peaking at an annual average temperature of 13 °C and declining strongly at higher temperatures. The relationship is globally generalizable, unchanged since 1960, and apparent for agricultural and non-agricultural activity in both rich and poor countries. These results provide the first evidence that economic activity in all regions is coupled to the global climate and establish a new empirical foundation for modelling economic loss in response to climate change, with important implications. If future adaptation mimics past adaptation, unmitigated warming is expected to reshape the global economy by reducing average global incomes roughly 23% by 2100 and widening global income inequality, relative to scenarios without climate change. In contrast to prior estimates, expected global losses are approximately linear in global mean temperature, with median losses many times larger than leading models indicate.
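The headline non-linearity is a quadratic response whose optimum falls out of the fitted coefficients as -b1/(2*b2); a sketch on synthetic data with the peak planted near 13 °C (this does not reproduce the paper's panel regressions):

    import numpy as np

    rng = np.random.default_rng(0)
    temp = rng.uniform(0, 30, 500)                  # annual mean temp, deg C
    growth = 0.026 * temp - 0.001 * temp**2 + rng.normal(0, 0.02, 500)

    b2, b1, b0 = np.polyfit(temp, growth, deg=2)
    print(f"estimated optimum: {-b1 / (2 * b2):.1f} deg C")   # ~13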
NASA Astrophysics Data System (ADS)
Nichols, Albert L., III; Calef, Daniel F.
A new method to solve the reference HNC equations is developed to treat systems with both asymmetric short-range and long-range interactions. This method is motivated by the work of Patey and co-workers and uses Lado's free-energy minimizing optimization criteria for the reference HNC approximation. The properties of several fluids composed of linear triatomic molecules with various dipole moments or hard-sphere molecules with different-length dipoles are investigated.
NASA Astrophysics Data System (ADS)
Wu, Bing-Fei; Ma, Li-Shan; Perng, Jau-Woei
This study analyzes absolute stability in P and PD type fuzzy logic control systems with both certain and uncertain linear plants. The stability analysis includes the reference input, the actuator gain and interval plant parameters. For certain linear plants, the stability (i.e. the stable equilibria of the error) in P and PD types is analyzed with the Popov or linearization methods under various reference inputs and actuator gains. The steady state errors of fuzzy control systems are also addressed in the parameter plane. The parametric robust Popov criterion for parametric absolute stability based on Lur'e systems is also applied to the stability analysis of P type fuzzy control systems with uncertain plants. The PD type fuzzy logic controller in our approach is a single-input fuzzy logic controller and is transformed into the P type for analysis. Unlike previous works, the absolute stability analysis of fuzzy control systems is given with respect to a non-zero reference input and an uncertain linear plant using the parametric robust Popov criterion. Moreover, a fuzzy current controlled RC circuit is designed with PSPICE models. Both numerical and PSPICE simulations are provided to verify the analytical results. Furthermore, the oscillation mechanism in fuzzy control systems is examined from the viewpoint of various equilibrium points in the simulation example. Finally, comparisons are given to show the effectiveness of the analysis method.
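The Popov test applied here can be checked numerically: with a linear part G(s) and a sector nonlinearity in [0, k], absolute stability follows if some q >= 0 gives Re[(1 + jwq)G(jw)] + 1/k > 0 for all w. A sketch on a frequency grid with an illustrative plant (not the paper's fuzzy-controlled RC circuit):

    import numpy as np

    def popov_holds(num, den, k, q, omegas):
        """Check Re[(1 + j*w*q) * G(jw)] + 1/k > 0 on a frequency grid."""
        jw = 1j * omegas
        G = np.polyval(num, jw) / np.polyval(den, jw)
        return np.all(np.real((1 + jw * q) * G) + 1.0 / k > 0)

    omegas = np.logspace(-3, 3, 2000)
    # Illustrative plant G(s) = 1 / (s^2 + 2s + 1)
    print(popov_holds([1.0], [1.0, 2.0, 1.0], k=5.0, q=1.0, omegas=omegas))

A grid check is of course only evidence, not a proof; the criterion itself is a frequency-domain inequality over all w.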
Generalized concurrence in boson sampling.
Chin, Seungbeom; Huh, Joonsuk
2018-04-17
A fundamental question in linear optical quantum computing is to understand the origin of the quantum supremacy in the physical system. It is found that the multimode linear optical transition amplitudes are calculated through the permanents of transition operator matrices, which is a hard problem for classical simulations (the boson sampling problem). We can understand this problem by considering a quantum measure that directly determines the runtime for computing the transition amplitudes. In this paper, we suggest a quantum measure named the "Fock state concurrence sum" C_S, which is the summation over all the members of "the generalized Fock state concurrence" (a measure analogous to the generalized concurrences of entanglement and coherence). By introducing generalized algorithms for computing the transition amplitudes of Fock state boson sampling with an arbitrary number of photons per mode, we show that the minimal classical runtime for all the known algorithms directly depends on C_S. Therefore, we can state that the Fock state concurrence sum C_S behaves as a collective measure that controls the computational complexity of Fock state boson sampling. We expect that our observation on the role of the Fock state concurrence in the generalized algorithm for permanents will provide a unified viewpoint for interpreting the quantum computing power of linear optics.
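The classical hardness at issue is that the amplitudes reduce to matrix permanents; evaluating one with Ryser's O(2^n n^2) inclusion-exclusion formula makes the cost tangible. This is the generic permanent, not the paper's generalized multi-photon algorithm.

    from itertools import combinations

    def permanent_ryser(A):
        """Permanent of an n x n matrix by Ryser's inclusion-exclusion
        formula, O(2^n * n^2) time."""
        n = len(A)
        total = 0.0
        for r in range(1, n + 1):
            for cols in combinations(range(n), r):
                prod = 1.0
                for row in A:
                    prod *= sum(row[j] for j in cols)
                total += (-1) ** r * prod
        return (-1) ** n * total

    print(permanent_ryser([[1, 2], [3, 4]]))   # 1*4 + 2*3 = 10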
Structured penalties for functional linear models-partially empirical eigenvectors for regression.
Randolph, Timothy W; Harezlak, Jaroslaw; Feng, Ziding
2012-01-01
One of the challenges with functional data is incorporating geometric structure, or local correlation, into the analysis. This structure is inherent in the output from an increasing number of biomedical technologies, and a functional linear model is often used to estimate the relationship between the predictor functions and scalar responses. Common approaches to the problem of estimating a coefficient function typically involve two stages: regularization and estimation. Regularization is usually done via dimension reduction, projecting onto a predefined span of basis functions or a reduced set of eigenvectors (principal components). In contrast, we present a unified approach that directly incorporates geometric structure into the estimation process by exploiting the joint eigenproperties of the predictors and a linear penalty operator. In this sense, the components in the regression are 'partially empirical' and the framework is provided by the generalized singular value decomposition (GSVD). The form of the penalized estimation is not new, but the GSVD clarifies the process and informs the choice of penalty by making explicit the joint influence of the penalty and predictors on the bias, variance and performance of the estimated coefficient function. Laboratory spectroscopy data and simulations are used to illustrate the concepts.
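The penalized estimator under discussion has the familiar closed form beta = (X'X + lambda*L'L)^(-1) X'y for a penalty operator L; a numpy sketch with a second-difference penalty and invented data (the paper's contribution is the GSVD analysis of exactly this form, not the formula itself):

    import numpy as np

    def penalized_fit(X, y, lam):
        """Functional linear model fit with a second-difference penalty L."""
        p = X.shape[1]
        L = np.diff(np.eye(p), n=2, axis=0)       # discrete second derivative
        return np.linalg.solve(X.T @ X + lam * L.T @ L, X.T @ y)

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 30))                 # 50 curves on a 30-point grid
    true_beta = np.sin(np.linspace(0, np.pi, 30)) # smooth coefficient function
    y = X @ true_beta + rng.normal(0, 0.1, 50)
    beta_hat = penalized_fit(X, y, lam=1.0)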
NASA Astrophysics Data System (ADS)
Hsieh, Chang-Yu; Cao, Jianshu
2018-01-01
We use the "generalized hierarchical equation of motion" proposed in Paper I [C.-Y. Hsieh and J. Cao, J. Chem. Phys. 148, 014103 (2018)] to study decoherence in a system coupled to a spin bath. The present methodology allows a systematic incorporation of higher-order anharmonic effects of the bath in dynamical calculations. We investigate the leading order corrections to the linear response approximations for spin bath models. Two kinds of spin-based environments are considered: (1) a bath of spins discretized from a continuous spectral density and (2) a bath of localized nuclear or electron spins. The main difference resides with how the bath frequency and the system-bath coupling parameters are distributed in an environment. When discretized from a continuous spectral density, the system-bath coupling typically scales as ˜1 /√{NB } where NB is the number of bath spins. This scaling suppresses the non-Gaussian characteristics of the spin bath and justifies the linear response approximations in the thermodynamic limit. For the nuclear/electron spin bath models, system-bath couplings are directly deduced from spin-spin interactions and do not necessarily obey the 1 /√{NB } scaling. It is not always possible to justify the linear response approximations in this case. Furthermore, if the spin-spin Hamiltonian is highly symmetrical, there exist additional constraints that generate highly non-Markovian and persistent dynamics that is beyond the linear response treatments.
Research on key technologies of data processing in internet of things
NASA Astrophysics Data System (ADS)
Zhu, Yangqing; Liang, Peiying
2017-08-01
Data in the Internet of Things (IoT) are polymorphic, heterogeneous, and large in volume, and must be processed in real time. Traditional structured, static batch processing methods no longer meet these requirements. This paper studies a middleware that integrates heterogeneous IoT data, converting different data formats into a unified format. A data processing model of the IoT based on the Storm stream computing architecture is designed, and existing Internet security technology is integrated to build a security system for IoT data processing, which provides a reference for the efficient transmission and processing of IoT data.
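A toy sketch of the normalization step such middleware performs; the device formats and field names are invented:

    import json

    def normalize(record):
        """Map heterogeneous sensor payloads to one unified format (a sketch;
        real IoT middleware would register a parser per device type)."""
        if isinstance(record, bytes):                 # raw binary frame
            device_id, value = record.decode().split(";")
            return {"device": device_id, "value": float(value)}
        if isinstance(record, str):                   # JSON text
            msg = json.loads(record)
            return {"device": msg["id"], "value": float(msg["reading"])}
        return {"device": record["device"], "value": float(record["value"])}

    print(normalize(b"sensor-7;21.5"))
    print(normalize('{"id": "sensor-8", "reading": "19.0"}'))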
From behavior to neural dynamics: An integrated theory of attention
Buschman, Timothy J.; Kastner, Sabine
2015-01-01
The brain has a limited capacity and therefore needs mechanisms to selectively enhance the information most relevant to one’s current behavior. We refer to these mechanisms as ‘attention’. Attention acts by increasing the strength of selected neural representations and preferentially routing them through the brain’s large-scale network. This is a critical component of cognition and therefore has been a central topic in cognitive neuroscience. Here we review a diverse literature that has studied attention at the level of behavior, networks, circuits and neurons. We then integrate these disparate results into a unified theory of attention. PMID:26447577
2016-01-01
The widespread use of ultrasonography places it in a key position for use in the risk stratification of thyroid nodules. The French proposal is a five-tier system, our version of a thyroid imaging reporting and database system (TI-RADS), which includes a standardized vocabulary and report and a quantified risk assessment. It allows the selection of the nodules that should be referred for fine-needle aspiration biopsies. Effort should be directed towards merging the different risk stratification systems utilized around the world and testing this unified system with multi-center studies. PMID:26324117
3 CFR - Unified Command Plan 2011
Code of Federal Regulations, 2012 CFR
2012-01-01
Presidential Documents, Other Presidential Documents. Memorandum of April 6, 2011: Unified Command Plan 2011. Memorandum for the ... implementation of the revised Unified Command Plan. Consistent with title 10, United States Code, section 161(b) ...
WebGIS based community services architecture by griddization managements and crowdsourcing services
NASA Astrophysics Data System (ADS)
Wang, Haiyin; Wan, Jianhua; Zeng, Zhe; Zhou, Shengchuan
2016-11-01
With the fast economic development of cities, rapid urbanization, and population surges in China, social community service mechanisms need to be rationalized and policy standards need to be unified, which results in various types of conflicts and challenges for government community services. Based on WebGIS technology, this article presents a community service architecture built on griddized management and crowdsourcing services. The WebGIS service architecture includes two parts: the cloud part and the mobile part. The cloud part refers to community service centres, which can instantaneously respond to an emergency, visualize the scene of the emergency, and analyse the data from the emergency. The mobile part refers to the mobile terminal, which can call the centre, report events, collect data and verify feedback. This WebGIS-based community service system for Huangdao District of Qingdao was awarded as one of the "2015 national innovation of social governance typical cases".
A Standard Nomenclature for Referencing and Authentication of Pluripotent Stem Cells.
Kurtz, Andreas; Seltmann, Stefanie; Bairoch, Amos; Bittner, Marie-Sophie; Bruce, Kevin; Capes-Davis, Amanda; Clarke, Laura; Crook, Jeremy M; Daheron, Laurence; Dewender, Johannes; Faulconbridge, Adam; Fujibuchi, Wataru; Gutteridge, Alexander; Hei, Derek J; Kim, Yong-Ou; Kim, Jung-Hyun; Kolb-Kokocinski, Anja; Lekschas, Fritz; Lomax, Geoffrey P; Loring, Jeanne F; Ludwig, Tenneille; Mah, Nancy; Matsui, Tohru; Müller, Robert; Parkinson, Helen; Sheldon, Michael; Smith, Kelly; Stachelscheid, Harald; Stacey, Glyn; Streeter, Ian; Veiga, Anna; Xu, Ren-He
2018-01-09
Unambiguous cell line authentication is essential to avoid loss of association between data and cells. The risk for loss of references increases with the rapidity that new human pluripotent stem cell (hPSC) lines are generated, exchanged, and implemented. Ideally, a single name should be used as a generally applied reference for each cell line to access and unify cell-related information across publications, cell banks, cell registries, and databases and to ensure scientific reproducibility. We discuss the needs and requirements for such a unique identifier and implement a standard nomenclature for hPSCs, which can be automatically generated and registered by the human pluripotent stem cell registry (hPSCreg). To avoid ambiguities in PSC-line referencing, we strongly urge publishers to demand registration and use of the standard name when publishing research based on hPSC lines. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
Issues and Advances in Understanding Landslide-Generated Tsunamis: Toward a Unified Model
NASA Astrophysics Data System (ADS)
Geist, E. L.; Locat, J.; Lee, H. J.; Lynett, P. J.; Parsons, T.; Kayen, R. E.; Hart, P. E.
2008-12-01
The physics of tsunamis generated from submarine landslides is highly complex, involving a cross-disciplinary exchange in geophysics. In the 10 years following the devastating Papua New Guinea tsunami, there have been significant advances in understanding landslide-generated tsunamis. However, persistent issues still remain related to submarine landslide dynamics that may be addressed with collection of new marine geologic and geophysical observations. We review critical elements of landslide tsunamis in the hope of developing a unified model that encompasses all stages of the process from triggering to tsunami runup. Because the majority of non-volcanogenic landslides that generate tsunamis are triggered seismically, advances in understanding inertial displacements and changes in strength and rheologic properties in response to strong-ground motion need to be included in a unified model. For example, interaction between compliant marine sediments and multi-direction ground motion results in greater permanent plastic displacements than predicted by traditional rigid-block analysis. When considering the coupling of the overlying water layer in the generation of tsunamis, the post-failure dynamics of landslides is important since the overall rate of seafloor deformation for landslides is less than or comparable to the phase speed of tsunami waves. As such, the rheologic and mechanical behavior of the slide material needs to be well understood. For clayey and silty debris flows, a non-linear (Herschel-Bulkley) and bilinear rheology have recently been developed to explain observed runout distances and deposit thicknesses. An additional complexity to this rheology is the inclusion of hydrate-laden sediment that commonly occurs along continental slopes. Although it has been proposed in the past that gas hydrate dissociation may provide potential failure planes for slide movement, it is unclear how zones of rigid hydrate-bearing sediment surrounded by a more viscoplastic matrix affect the overall rheologic behavior during slide dynamics. For more rigid materials, such as carbonate and volcanic rocks, models are being developed that encompass the initial fracturing and eventual disintegration associated with debris avalanches. Lastly, the physics dictating the hydrodynamics of landslide-generated tsunamis is equally complex. The effects of non-linearity and dispersion are not necessarily negligible for landslides (in contrast to most earthquake-generated tsunamis), indicating that numerical implementation of the non-linear Boussinesq equations is often needed. Moreover, we show that for near-field landslide tsunamis propagating across the continental shelf, bottom friction (bottom boundary layer turbulence) and wave breaking can be important energy sinks. Detailed geophysical surveys can dissect landslide complexes to determine the geometry of individual events and help estimate rheological properties of the flowing mass, whereas cores in landslide provinces can determine the mechanical properties and pore-pressure distribution for pre- and post-failure sediment. This information is critical toward developing well-documented case histories for validating physics-based landslide tsunami models.
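For reference, the Herschel-Bulkley law mentioned for clayey and silty debris flows is compact: tau = tau_y + K*(shear rate)^n above the yield stress tau_y. A sketch with invented parameters:

    import numpy as np

    def herschel_bulkley(gamma_dot, tau_y=100.0, K=8.0, n=0.4):
        """Shear stress (Pa) of a Herschel-Bulkley fluid; below the yield
        stress tau_y the material does not flow. Parameters are illustrative,
        not fitted debris-flow values."""
        return tau_y + K * np.asarray(gamma_dot) ** n

    print(herschel_bulkley([0.1, 1.0, 10.0, 100.0]))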
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-06
... Determination To Defer Sanctions, San Joaquin Valley Unified Air Pollution Control District AGENCY... Valley Unified Air Pollution Control District (SJVUAPCD or District) portion of the California State...), we finalized a limited approval and limited disapproval of San Joaquin Valley Unified Air Pollution...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-15
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... approval and limited disapproval of revisions to the San Joaquin Valley Unified Air Pollution Control... Valley Unified Air Pollution Control District (SJVUAPCD) Rule 4682, Polystyrene, Polyethylene, and...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-03
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... revisions to the San Joaquin Valley Unified Air Pollution Control District (SJVUAPCD) portion of the... State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District Rule 4692...
Teachers' Evaluations and Students' Achievement: A "Deviation from the Reference" Analysis
ERIC Educational Resources Information Center
Iacus, Stefano M.; Porro, Giuseppe
2011-01-01
Several studies show that teachers make use of grading practices to affect students' effort and achievement. Generally linearity is assumed in the grading equation, while it is everyone's experience that grading practices are frequently non-linear. Representing grading practices as linear can be misleading both from a descriptive and a…
Derivation and definition of a linear aircraft model
NASA Technical Reports Server (NTRS)
Duke, Eugene L.; Antoniewicz, Robert F.; Krambeer, Keith D.
1988-01-01
A linear aircraft model for a rigid aircraft of constant mass flying over a flat, nonrotating earth is derived and defined. The derivation makes no assumptions of reference trajectory or vehicle symmetry. The linear system equations are derived and evaluated along a general trajectory and include both aircraft dynamics and observation variables.
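The generic operation such a derivation systematizes is evaluating A = df/dx and B = df/du along a trajectory point of the nonlinear dynamics xdot = f(x, u); a finite-difference sketch (the toy dynamics are an assumption, not the report's aircraft model):

    import numpy as np

    def linearize(f, x0, u0, eps=1e-6):
        """Finite-difference Jacobians of xdot = f(x, u) about (x0, u0)."""
        f0 = f(x0, u0)
        A = np.column_stack([(f(x0 + eps * e, u0) - f0) / eps
                             for e in np.eye(len(x0))])
        B = np.column_stack([(f(x0, u0 + eps * e) - f0) / eps
                             for e in np.eye(len(u0))])
        return A, B

    # Toy point-mass longitudinal dynamics (illustrative only).
    def f(x, u):
        v, gamma = x                       # speed, flight path angle
        thrust = u[0]
        return np.array([thrust - 0.02 * v**2 - 9.81 * np.sin(gamma),
                         (0.001 * v**2 - 9.81 * np.cos(gamma)) / max(v, 1.0)])

    A, B = linearize(f, np.array([100.0, 0.0]), np.array([200.0]))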
Zuo, Shan; Song, Yongduan; Lewis, Frank L; Davoudi, Ali
2017-01-04
This paper studies the output containment control of linear heterogeneous multi-agent systems, where the system dynamics and even the state dimensions can generally be different. Since the states can have different dimensions, standard results from state containment control do not apply. Therefore, the control objective is to guarantee the convergence of the output of each follower to the dynamic convex hull spanned by the outputs of leaders. This can be achieved by making certain output containment errors go to zero asymptotically. Based on this formulation, two different control protocols, namely, full-state feedback and static output-feedback, are designed based on internal model principles. Sufficient local conditions for the existence of the proposed control protocols are developed in terms of stabilizing the local followers' dynamics and satisfying a certain H∞ criterion. Unified design procedures to solve the proposed two control protocols are presented by formulation and solution of certain local state-feedback and static output-feedback problems, respectively. Numerical simulations are given to validate the proposed control protocols.
Gregg, Robert D; Lenzi, Tommaso; Hargrove, Levi J; Sensinger, Jonathon W
2014-12-01
Recent powered (or robotic) prosthetic legs independently control different joints and time periods of the gait cycle, resulting in control parameters and switching rules that can be difficult to tune by clinicians. This challenge might be addressed by a unifying control model used by recent bipedal robots, in which virtual constraints define joint patterns as functions of a monotonic variable that continuously represents the gait cycle phase. In the first application of virtual constraints to amputee locomotion, this paper derives exact and approximate control laws for a partial feedback linearization to enforce virtual constraints on a prosthetic leg. We then encode a human-inspired invariance property called effective shape into virtual constraints for the stance period. After simulating the robustness of the partial feedback linearization to clinically meaningful conditions, we experimentally implement this control strategy on a powered transfemoral leg. We report the results of three amputee subjects walking overground and at variable cadences on a treadmill, demonstrating the clinical viability of this novel control approach.
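Stripped to a sketch, the approach reads: a monotonic phase variable s parameterizes the gait, a virtual constraint prescribes a joint pattern q_d(s), and feedback drives the output y = q - q_d(s) to zero. The pattern, phase, and PD-style gains below are invented illustrations, not the paper's exact or approximate feedback-linearizing laws.

    import numpy as np

    def desired_knee(s):
        """Virtual constraint: desired knee angle (rad) as a function of the
        gait phase s in [0, 1]; the pattern is an invented illustration."""
        return 0.2 + 0.9 * np.sin(np.pi * s) ** 2

    def control(q, dq, s, ds, kp=400.0, kd=40.0):
        """PD-style output zeroing for y = q - q_d(s), a simplification of
        exact input-output feedback linearization."""
        h = 1e-4
        dqd = (desired_knee(s + h) - desired_knee(s - h)) / (2.0 * h) * ds
        y, dy = q - desired_knee(s), dq - dqd
        return -kp * y - kd * dy

    print(control(q=0.5, dq=0.0, s=0.3, ds=1.2))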
Next Generation Extended Lagrangian Quantum-based Molecular Dynamics
NASA Astrophysics Data System (ADS)
Negre, Christian
2017-06-01
A new framework for extended Lagrangian first-principles molecular dynamics simulations is presented, which overcomes shortcomings of regular, direct Born-Oppenheimer molecular dynamics, while maintaining important advantages of the unified extended Lagrangian formulation of density functional theory pioneered by Car and Parrinello three decades ago. The new framework allows, for the first time, energy conserving, linear-scaling Born-Oppenheimer molecular dynamics simulations, which is necessary to study larger and more realistic systems over longer simulation times than previously possible. Expensive self-consistent-field optimizations are avoided and normal integration time steps of regular, direct Born-Oppenheimer molecular dynamics can be used. Linear scaling electronic structure theory is presented using a graph-based approach that is ideal for parallel calculations on hybrid computer platforms. For the first time, quantum-based Born-Oppenheimer molecular dynamics simulation is becoming a practically feasible approach for simulations of 100,000+ atoms, representing a competitive alternative to classical polarizable force field methods. In collaboration with: Anders Niklasson, Los Alamos National Laboratory.
Practical robustness measures in multivariable control system analysis. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Lehtomaki, N. A.
1981-01-01
The robustness of the stability of multivariable linear time invariant feedback control systems with respect to model uncertainty is considered using frequency domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem, in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single input, single output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. The robustness tests that explicitly utilize model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those that do not. The robustness of linear quadratic Gaussian control systems is analyzed.
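The central quantity is the minimum singular value of the return difference matrix I + L(jw) over frequency; a sketch for an assumed 2x2 loop transfer matrix:

    import numpy as np

    def min_return_difference(L_of_jw, omegas):
        """Smallest singular value of I + L(jw) across a frequency grid;
        larger values indicate larger stability margins."""
        worst = np.inf
        for w in omegas:
            L = L_of_jw(1j * w)
            sv = np.linalg.svd(np.eye(L.shape[0]) + L, compute_uv=False)
            worst = min(worst, sv[-1])
        return worst

    # Illustrative loop transfer matrix (an assumption, not from the thesis).
    L_of_jw = lambda s: np.array([[1.0 / (s + 1.0), 0.1 / (s + 2.0)],
                                  [0.0,             2.0 / (s + 1.0)]])
    print(min_return_difference(L_of_jw, np.logspace(-2, 2, 400)))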
A unified formulation of dichroic signals using the Borrmann effect and twisted photon beams.
Collins, Stephen P; Lovesey, Stephen W
2018-05-21
Dichroic X-ray signals derived from the Borrmann effect and a twisted photon beam with topological charge l = 1 are formulated with an effective wavevector. The unification applies for non-magnetic and magnetic materials. Electronic degrees of freedom associated with an ion are encapsulated in multipoles previously used to interpret conventional dichroism and Bragg diffraction enhanced by an atomic resonance. A dichroic signal exploiting the Borrmann effect with a linearly polarized beam presents charge-like multipoles that include a hexadecapole. A difference between dichroic signals obtained with a twisted beam carrying spin polarization (circular polarization) and opposite winding numbers presents charge-like atomic multipoles, whereas a twisted beam carrying linear polarization alone presents magnetic (time-odd) multipoles. Charge-like multipoles include a quadrupole, and magnetic multipoles include a dipole and an octupole. We discuss the practicalities and relative merits of spectroscopy exploiting the two remarkably closely-related processes. Signals using beams with topological charges l ≥ 2 present additional atomic multipoles.
Unified control/structure design and modeling research
NASA Technical Reports Server (NTRS)
Mingori, D. L.; Gibson, J. S.; Blelloch, P. A.; Adamian, A.
1986-01-01
To demonstrate the applicability of the control theory for distributed systems to large flexible space structures, research was focused on a model of a space antenna which consists of a rigid hub, flexible ribs, and a mesh reflecting surface. The space antenna model used is discussed along with the finite element approximation of the distributed model. The basic control problem is to design an optimal or near-optimal compensator to suppress the linear vibrations and rigid-body displacements of the structure. The application of infinite dimensional Linear Quadratic Gaussian (LQG) control theory to flexible structures is discussed. Two basic approaches for robustness enhancement were investigated: loop transfer recovery and sensitivity optimization. A third approach synthesized from elements of these two basic approaches is currently under development. The control driven finite element approximation of flexible structures is discussed. Three sets of finite element basis vectors for computing functional control gains are compared. The possibility of constructing a finite element scheme to approximate the infinite dimensional Hamiltonian system directly, rather than indirectly, is discussed.
Adaptive Control Of Remote Manipulator
NASA Technical Reports Server (NTRS)
Seraji, Homayoun
1989-01-01
Robotic control system causes remote manipulator to follow closely reference trajectory in Cartesian reference frame in work space, without resort to computationally intensive mathematical model of robot dynamics and without knowledge of robot and load parameters. System, derived from linear multivariable theory, uses relatively simple feedforward and feedback controllers with model-reference adaptive control.
Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing
NASA Astrophysics Data System (ADS)
Ou, Meiying; Li, Shihua; Wang, Chaoli
2013-12-01
This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interactions. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using a finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. A rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Li, Jun
2002-09-01
In this paper, a class of stochastic multiple-objective programming problems with one quadratic objective function, several linear objective functions and linear constraints is introduced. The model is transformed into a deterministic multiple-objective nonlinear programming model by taking the expectations of the random variables. The reference direction approach is used to deal with the linear objectives, resulting in a linear parametric optimization formula with a single linear objective function. This objective function is combined with the quadratic function using weighted sums. The quadratic problem is then transformed into a linear (parametric) complementarity problem, which is the basic formulation of the proposed approach. Sufficient and necessary conditions for (properly, weakly) efficient solutions and some construction characteristics of (weakly) efficient solution sets are obtained. An interactive algorithm is proposed based on the reference direction and weighted sums. By varying the parameter vector on the right-hand side of the model, the DM can freely search the efficient frontier. An extended portfolio selection model is formed when liquidity is considered as another objective to be optimized, besides expectation and risk. The interactive approach is illustrated with a practical example.
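The scalarization step can be sketched generically: expectations replace the random coefficients and a weighted sum combines the quadratic objective with a linear one, leaving a single quadratic program (the paper's actual route is the linear complementarity formulation; all data below are invented):

    import numpy as np
    from scipy.optimize import minimize

    Q = np.array([[2.0, 0.2], [0.2, 1.0]])   # risk matrix (assumed)
    c = np.array([0.10, 0.14])               # expected returns (assumed)
    w_risk, w_ret = 0.5, 0.5                  # weighted-sum parameters

    obj = lambda x: w_risk * (x @ Q @ x) - w_ret * (c @ x)
    cons = ({"type": "eq", "fun": lambda x: x.sum() - 1.0},)
    res = minimize(obj, x0=[0.5, 0.5], bounds=[(0, 1)] * 2, constraints=cons)
    print(res.x)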
NASA Astrophysics Data System (ADS)
Brauer, U.
2007-08-01
The Open Navigator Framework (ONF) was developed to provide a unified and scalable platform for user interface integration. The main objectives of the framework were to raise the usability of monitoring and control consoles and to enable reuse of software components in different application areas. ONF is currently applied for the Columbus onboard crew interface, the commanding application for the Columbus Control Centre, the specialized user interfaces of the Columbus user facilities, the Mission Execution Crew Assistant (MECA) study and EADS Astrium internal R&D projects. ONF provides a well-documented and proven middleware for GUI components (Java plugin interface, simplified concept similar to Eclipse). The overall application configuration is performed within a graphical user interface for layout and component selection; the end user does not have to work in the underlying XML configuration files. ONF was optimized to provide harmonized user interfaces for monitoring and command consoles. It provides many convenience functions designed together with flight controllers and onboard crew: user-defined workspaces, including support for multiple screens; an efficient communication mechanism between the components; integrated web browsing and documentation search and viewing; consistent and integrated menus and shortcuts; common logging and application configuration (properties); and a supervision interface for remote plugin GUI access (web based). A large number of operationally proven ONF components have been developed: Command Stack & History (release of commands and follow-up of command acknowledgements); System Message Panel (browse, filter and search system messages/events); Unified Synoptic System (generic synoptic display system); Situational Awareness (overall subsystem status based on monitoring of key parameters); System Model Browser (browse mission database definitions: measurements, commands, events); Flight Procedure Executor (execute checklists and logical-flow interactive procedures); Web Browser (integrated browsing of reference documentation and operations data); Timeline Viewer (view the master timeline as a Gantt chart); and Search (local search of operations products, e.g. documentation, procedures, displays). All GUI components access the underlying spacecraft data (commanding, reporting data, events, command history) via a common library providing adaptors for the current deployments (Columbus MCS, Columbus onboard Data Management System, Columbus Trainer raw packet protocol). New adaptors are easy to develop; currently an adaptor to SCOS 2000 is being developed as part of a study for the ESTEC standardization section ("USS for ESTEC Reference Facility").
TGIS, TIG, Program Development, Transportation & Public Facilities, State
... accessible, accurate, and controlled inventory of public roadway features and linear coordinates for the Roadway Data System (RDS) network (Alaska DOT&PF's Linear Reference System, or LRS) to meet Federal and ...
A general number-to-space mapping deficit in developmental dyscalculia.
Huber, S; Sury, D; Moeller, K; Rubinsten, O; Nuerk, H-C
2015-01-01
Previous research on developmental dyscalculia (DD) suggested that deficits in the number line estimation task are related to a failure to represent number magnitude linearly. This conclusion was derived from the observation of logarithmically shaped estimation patterns. However, recent research has questioned this idea of an isomorphic relationship between estimation patterns and number magnitude representation. In the present study, we evaluated an alternative hypothesis: impairments in the number line estimation task are due to a general deficit in mapping numbers onto space. Adults with DD and a matched control group had to learn linear and non-linear layouts of the number line via feedback. Afterwards, we assessed how well they had learnt the new number-space mappings. We found worse performance of adults with DD irrespective of the layout. Additionally, in the case of the linear layout, we observed that their performance did not differ from controls near reference points, but that differences between the groups increased as the distance to the reference points increased. We conclude that the worse performance of adults with DD in the number line task might be due to a deficit in mapping numbers onto space, which can be partly overcome by relying on reference points. Copyright © 2015 Elsevier Ltd. All rights reserved.
Gravitational Reference Sensor Front-End Electronics Simulator for LISA
NASA Astrophysics Data System (ADS)
Meshksar, Neda; Ferraioli, Luigi; Mance, Davor; ten Pierick, Jan; Zweifel, Peter; Giardini, Domenico; LISA Pathfinder Collaboration
Kanherkar, Riya R.; Stair, Susan E.; Bhatia-Dey, Naina; Mills, Paul J.; Chopra, Deepak
2017-01-01
Since time immemorial humans have utilized natural products and therapies for their healing properties. Even now, in the age of genomics and on the cusp of regenerative medicine, the use of complementary and alternative medicine (CAM) approaches represents a popular branch of health care. Furthermore, there is a trend towards a unified medical philosophy referred to as Integrative Medicine (IM) that represents the convergence of CAM and conventional medicine. The IM model not only considers the holistic perspective of the physiological components of the individual, but also includes psychological and mind-body aspects. Justification for and validation of such a whole-systems approach is in part dependent upon identification of the functional pathways governing healing, and new data is revealing relationships between therapies and biochemical effects that have long defied explanation. We review this data and propose a unifying theme: IM's ability to affect healing is due at least in part to epigenetic mechanisms. This hypothesis is based on a mounting body of evidence that demonstrates a correlation between the physical and mental effects of IM and modulation of gene expression and epigenetic state. Emphasis on mapping, deciphering, and optimizing these effects will facilitate therapeutic delivery and create further benefits. PMID:28316635
Initial Alignment for SINS Based on Pseudo-Earth Frame in Polar Regions.
Gao, Yanbin; Liu, Meng; Li, Guangchun; Guang, Xingxing
2017-06-16
An accurate initial alignment is required for an inertial navigation system (INS). The performance of the initial alignment directly affects the subsequent navigation accuracy. However, the rapid convergence of meridians and the small horizontal component of the Earth's rotation make the traditional alignment methods ineffective in polar regions. In this paper, from the perspective of global inertial navigation, a novel alignment algorithm based on a pseudo-Earth frame and a backward process is proposed to implement the initial alignment in polar regions. Considering that an accurate coarse alignment of azimuth is difficult to obtain in polar regions, dynamic error modeling with a large azimuth misalignment angle is designed. At the end of the alignment phase, the strapdown attitude matrix relative to the local geographic frame is obtained without the influence of position errors and cumbersome computation. As a result, it is more convenient to transition to the subsequent polar navigation system. The approach is also expected to unify the polar alignment algorithm as much as possible, thereby further unifying the form of external reference information. Finally, semi-physical static simulation and in-motion tests with a large azimuth misalignment angle, assisted by an unscented Kalman filter (UKF), validate the effectiveness of the proposed method.
Deformation analysis of the unified lunar control networks
NASA Astrophysics Data System (ADS)
Iz, H. Bâki; Chen, Yong Qi; King, Bruce Anthony; Ding, Xiaoli; Wu, Chen
2009-12-01
This study compares the latest Unified Lunar Control Network, ULCN 2005, solution with the earlier ULCN 1994 solution at global and local scales. At the global scale, the relative rotation, translation, and deformation (normal strains and shears) parameters between the two networks are estimated as a whole using their colocated station Cartesian coordinate differences. At the local scale, the network station coordinate differences are examined in local topocentric coordinate systems whose origins are located at the geometric center of quadrangles and tetrahedrons. This study identified that the omission of the topography in the old ULCN solutions shifted the geometric center of the lunar figure by up to 5 km in the lunar equatorial plane and induced global rotations of the ULCN 1994 reference frame with respect to ULCN 2005 at the few-hundred-meter level. The displacements between the old and new control networks are less than ±2 km on average at the local scale and behave like translations, caused by the omission of lunar topography in the earlier solution. The contributions of local rigid body rotations and of dilatational and compressional components to the local displacements are approximately ±100 m for a quadrangle/tetrahedron with an average side length of 10 km.
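At the global scale, such a comparison is a least-squares fit of small rotation and translation (and, in the full treatment, strain) parameters to colocated coordinate differences, dX = t + w x X; a minimal sketch on synthetic points, not the ULCN data:

    import numpy as np

    def fit_translation_rotation(X, dX):
        """Least-squares translation t and small rotation w from colocated
        coordinate differences: dX_i ~ t + w x X_i (linearized)."""
        n = X.shape[0]
        A = np.zeros((3 * n, 6))
        for i, (x, y, z) in enumerate(X):
            A[3*i:3*i+3, 0:3] = np.eye(3)
            A[3*i:3*i+3, 3:6] = np.array([[0.0,  z, -y],
                                          [-z, 0.0,  x],
                                          [ y,  -x, 0.0]])  # (w x X) matrix
        params, *_ = np.linalg.lstsq(A, dX.ravel(), rcond=None)
        return params[:3], params[3:]                        # t (km), w (rad)

    rng = np.random.default_rng(2)
    X = rng.normal(scale=1700.0, size=(20, 3))    # ~lunar-radius points, km
    w_true, t_true = np.array([0.0, 0.0, 1e-4]), np.array([5.0, 0.0, 0.0])
    dX = t_true + np.cross(w_true, X) + rng.normal(0.0, 0.1, X.shape)
    print(fit_translation_rotation(X, dX))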
MAPU: Max-Planck Unified database of organellar, cellular, tissue and body fluid proteomes
Zhang, Yanling; Zhang, Yong; Adachi, Jun; Olsen, Jesper V.; Shi, Rong; de Souza, Gustavo; Pasini, Erica; Foster, Leonard J.; Macek, Boris; Zougman, Alexandre; Kumar, Chanchal; Wiśniewski, Jacek R.; Jun, Wang; Mann, Matthias
2007-01-01
Mass spectrometry (MS)-based proteomics has become a powerful technology to map the protein composition of organelles, cell types and tissues. In our department, a large-scale effort to map these proteomes is complemented by the Max-Planck Unified (MAPU) proteome database. MAPU contains several body fluid proteomes; including plasma, urine, and cerebrospinal fluid. Cell lines have been mapped to a depth of several thousand proteins and the red blood cell proteome has also been analyzed in depth. The liver proteome is represented with 3200 proteins. By employing high resolution MS and stringent validation criteria, false positive identification rates in MAPU are lower than 1:1000. Thus MAPU datasets can serve as reference proteomes in biomarker discovery. MAPU contains the peptides identifying each protein, measured masses, scores and intensities and is freely available at using a clickable interface of cell or body parts. Proteome data can be queried across proteomes by protein name, accession number, sequence similarity, peptide sequence and annotation information. More than 4500 mouse and 2500 human proteins have already been identified in at least one proteome. Basic annotation information and links to other public databases are provided in MAPU and we plan to add further analysis tools. PMID:17090601
The multisensory basis of the self: From body to identity to others
Tsakiris, Manos
2017-01-01
By grounding the self in the body, experimental psychology has taken the body as the starting point for a science of the self. One fundamental dimension of the bodily self is the sense of body ownership that refers to the special perceptual status of one’s own body, the feeling that “my body” belongs to me. The primary aim of this review article is to highlight recent advances in the study of body ownership and our understanding of the underlying neurocognitive processes in three ways. I first consider how the sense of body ownership has been investigated and elucidated in the context of multisensory integration. Beyond exteroception, recent studies have considered how this exteroceptively driven sense of body ownership can be linked to the other side of embodiment, that of the unobservable, yet felt, interoceptive body, suggesting that these two sides of embodiment interact to provide a unifying bodily self. Lastly, the multisensorial understanding of the self has been shown to have implications for our understanding of social relationships, especially in the context of self–other boundaries. Taken together, these three research strands motivate a unified model of the self inspired by current predictive coding models. PMID:27100132
On matters of mind and body: regarding Descartes.
Urban, Elizabeth
2018-04-01
In this paper the author considers Descartes' place in current thinking about the mind-body dilemma. The premise here is that in the history of ideas, the questions posed can be as significant as the answers acquired. Descartes' paramount question was 'How do we determine certainty?' and his pursuit of an answer led to cogito ergo sum. His discovery simultaneously raised the question whether mind is separate from or unified with the body. Some who currently hold that brain and subjectivity are unified contend that the philosopher 'split' mind from body and refer to 'Descartes' error'. This paper puts forward that Descartes' detractors fail to recognise Descartes' contribution to Western thought, which was to introduce the Enlightenment and to give a place to human subjectivity. Added to this, evidence from Descartes' correspondence with Princess Elisabeth of Bohemia supports the conclusion that Descartes did in fact believe in the unity of mind and body although he could not reconcile this rationally with the certainty from personal experience that they were separate substances. In this Descartes was engaged in just the same dilemma as that of current thinkers and researchers, a conflict which still is yet to be resolved. © 2018, The Society of Analytical Psychology.
A unified anatomy ontology of the vertebrate skeletal system.
Dahdul, Wasila M; Balhoff, James P; Blackburn, David C; Diehl, Alexander D; Haendel, Melissa A; Hall, Brian K; Lapp, Hilmar; Lundberg, John G; Mungall, Christopher J; Ringwald, Martin; Segerdell, Erik; Van Slyke, Ceri E; Vickaryous, Matthew K; Westerfield, Monte; Mabee, Paula M
2012-01-01
The skeleton is of fundamental importance in research in comparative vertebrate morphology, paleontology, biomechanics, developmental biology, and systematics. Motivated by research questions that require computational access to and comparative reasoning across the diverse skeletal phenotypes of vertebrates, we developed a module of anatomical concepts for the skeletal system, the Vertebrate Skeletal Anatomy Ontology (VSAO), to accommodate and unify the existing skeletal terminologies for the species-specific (mouse, the frog Xenopus, zebrafish) and multispecies (teleost, amphibian) vertebrate anatomy ontologies. Previous differences between these terminologies prevented even simple queries across databases pertaining to vertebrate morphology. This module of upper-level and specific skeletal terms currently includes 223 defined terms and 179 synonyms that integrate skeletal cells, tissues, biological processes, organs (skeletal elements such as bones and cartilages), and subdivisions of the skeletal system. The VSAO is designed to integrate with other ontologies, including the Common Anatomy Reference Ontology (CARO), Gene Ontology (GO), Uberon, and Cell Ontology (CL), and it is freely available to the community to be updated with additional terms required for research. Its structure accommodates anatomical variation among vertebrate species in development, structure, and composition. Annotation of diverse vertebrate phenotypes with this ontology will enable novel inquiries across the full spectrum of phenotypic diversity.
Impact of kerogen heterogeneity on sorption of organic pollutants. 2. Sorption equilibria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, C.; Yu, Z.Q.; Xiao, B.H.
2009-08-15
Phenanthrene and naphthalene sorption isotherms were measured for three different series of kerogen materials using completely mixed batch reactors. Sorption isotherms were nonlinear for each sorbate-sorbent system, and the Freundlich isotherm equation fit the sorption data well. The Freundlich isotherm linearity parameter n ranged from 0.192 to 0.729 for phenanthrene and from 0.389 to 0.731 for naphthalene. The n values correlated linearly with rigidity and aromaticity of the kerogen matrix, but the single-point, organic carbon-normalized distribution coefficients varied dramatically among the tested sorbents. A dual-mode sorption equation consisting of a linear partitioning domain and a Langmuir adsorption domain adequately quantified the overall sorption equilibrium for each sorbent-sorbate system. Both models fit the data well, with r² values of 0.965 to 0.996 for the Freundlich model and 0.963 to 0.997 for the dual-mode model for the phenanthrene sorption isotherms. The dual-mode model fitting results showed that as the rigidity and aromaticity of the kerogen matrix increased, the contribution of the linear partitioning domain to the overall sorption equilibrium decreased, whereas the contribution of the Langmuir adsorption domain increased. The present study suggested that kerogen materials found in soils and sediments should not be treated as a single, unified, carbonaceous sorbent phase.
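For orientation, the standard forms of the two models compared above are sketched here in common textbook notation (q_e, C_e, K_F, n, K_p, S_max, b), which need not match the paper's symbols:

\[ q_e = K_F\, C_e^{\,n} \quad \text{(Freundlich)} \]
\[ q_e = K_p\, C_e + \frac{S_{max}\, b\, C_e}{1 + b\, C_e} \quad \text{(dual-mode: linear partitioning plus Langmuir adsorption)} \]

Here q_e is the sorbed-phase concentration and C_e the aqueous equilibrium concentration; n = 1 recovers a linear isotherm, and smaller n means a more nonlinear isotherm, consistent with the rigid, aromatic kerogens described above.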
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crenshaw, Michael E., E-mail: michael.e.crenshaw4.civ@mail.mil
2014-04-15
In a continuum setting, the energy–momentum tensor embodies the relations between conservation of energy, conservation of linear momentum, and conservation of angular momentum. The well-defined total energy and the well-defined total momentum in a thermodynamically closed system with complete equations of motion are used to construct the total energy–momentum tensor for a stationary simple linear material with both magnetic and dielectric properties illuminated by a quasimonochromatic pulse of light through a gradient-index antireflection coating. The perplexing issues surrounding the Abraham and Minkowski momentums are bypassed by working entirely with conservation principles, the total energy, and the total momentum. We derive electromagnetic continuity equations and equations of motion for the macroscopic fields based on the material four-divergence of the traceless, symmetric total energy–momentum tensor. We identify contradictions between the macroscopic Maxwell equations and the continuum form of the conservation principles. We resolve the contradictions, which are the actual fundamental issues underlying the Abraham–Minkowski controversy, by constructing a unified version of continuum electrodynamics that is based on establishing consistency between the three-dimensional Maxwell equations for macroscopic fields, the electromagnetic continuity equations, the four-divergence of the total energy–momentum tensor, and a four-dimensional tensor formulation of electrodynamics for macroscopic fields in a simple linear medium.
Project UNIFY. National Dropout Prevention Center/Network Newsletter. Volume 22, Number 1
ERIC Educational Resources Information Center
Duckenfield, Marty, Ed.
2011-01-01
The "National Dropout Prevention Newsletter" is published quarterly by the National Dropout Prevention Center/Network. This issue contains the following articles: (1) Special Olympics Project UNIFY (Andrea Cahn); (2) The Impact of Project UNIFY; (3) Project UNIFY Brings Youth Together to Learn and Graduate (William H. Hughes); (4)…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-21
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... approve revisions to the San Joaquin Valley Unified Air Pollution Control District (SJVUAPCD) portion of... Joaquin Valley Unified Air Pollution Control District, No. 08-17309 (9th Circuit)). In that case, NAHB...
Device for measuring hole elongation in a bolted joint
NASA Technical Reports Server (NTRS)
Wichorek, Gregory R. (Inventor)
1987-01-01
A device to determine the operable failure mode of mechanically fastened lightweight composite joints by measuring the hole elongation of a bolted joint is disclosed. The double-lap joint test apparatus comprises a stud, a test specimen having a hole, two load transfer plates, and linear displacement measuring instruments. The test specimen is sandwiched between the two load transfer plates and clamped together with the stud. Spacer washers are placed between the test specimen and each load transfer plate to provide a known, controllable area for the determination of clamping forces around the hole of the specimen attributable to bolt torque. The spacer washers also provide a gap for the mounting of reference angles on each side of the test specimen. Under tensile loading, elongation of the hole of the test specimen causes the stud to move away from the reference angles. This displacement is measured by the voltage output of two linear displacement measuring instruments that are attached to the stud and remain in contact with the reference angles throughout the tensile loading. The present invention obviates previous problems in obtaining specimen deformation measurements by mounting the reference angles to the test specimen and the linear displacement measuring instruments to the stud.
NASA Astrophysics Data System (ADS)
Marras, Simone; Giraldo, Frank
2015-04-01
The prediction of extreme weather sufficiently ahead of its occurrence impacts society as a whole and coastal communities specifically (e.g., Hurricane Sandy, which impacted the eastern seaboard of the U.S. in the fall of 2012). With the final goal of resolving hurricanes at very high resolution and numerical accuracy, we have been developing the Non-hydrostatic Unified Model of the Atmosphere (NUMA) to solve the Euler and Navier-Stokes equations by arbitrary high-order element-based Galerkin methods on massively parallel computers. NUMA is a unified model with respect to the following criteria: (a) it is based on unified numerics in that element-based Galerkin methods allow the user to choose between continuous (spectral elements, CG) or discontinuous Galerkin (DG) methods and from a large spectrum of time integrators, (b) it is unified across scales in that it can solve flow in limited-area mode (flow in a box) or in global mode (flow on the sphere). NUMA is the dynamical core that powers the U.S. Naval Research Laboratory's next-generation global weather prediction system NEPTUNE (Navy's Environmental Prediction sysTem Utilizing the NUMA corE). Because the solution of the Euler equations by high-order methods is prone to instabilities that must be damped in some way, we approach the problem of stabilization via an adaptive Large Eddy Simulation (LES) scheme meant to treat such instabilities by modeling the sub-grid scale features of the flow. The novelty of our effort lies in extending to high-order spectral elements, for low-Mach-number stratified flows, a method originally designed for low-order adaptive finite elements in the high-Mach-number regime [1]. The Euler equations are regularized by means of a dynamically adaptive stress tensor that is proportional to the residual of the unperturbed equations. Its effect is negligible where the solution is sufficiently smooth, whereas it increases elsewhere, with a direct contribution to the stabilization of the otherwise oscillatory solution. As a first step toward the Large Eddy Simulation of a hurricane, we verify the model via a high-order, high-resolution idealized simulation of deep convection on the sphere. References: [1] M. Nazarov and J. Hoffman (2013), Residual-based artificial viscosity for simulation of turbulent compressible flow using adaptive finite element methods, Int. J. Numer. Methods Fluids, 71:339-357.
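The residual-based regularization just described admits a compact sketch; the exact scaling used in NUMA and in [1] may differ, so the form below is a typical residual-based artificial viscosity rather than the model's precise definition. Per element K, with h_K the element size and R(u_h) the residual of the unperturbed equations,

\[ \mu_h\big|_K = \min\!\left( \mu_{max}\big|_K,\; C\, h_K^2\, \frac{\lVert R(u_h) \rVert_K}{\lVert u_h - \bar{u}_h \rVert_\infty} \right), \]

so the added stress nearly vanishes where the discrete solution almost satisfies the equations (smooth regions) and switches to a first-order bound μ_max near under-resolved, oscillatory features.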
Longo, Benedetto; Farcomeni, Alessio; Ferri, Germano; Campanale, Antonella; Sorotos, Micheal; Santanelli, Fabio
2013-07-01
Breast volume assessment enhances preoperative planning of both aesthetic and reconstructive procedures, helping the surgeon in the decision-making process of shaping the breast. Numerous methods of breast size determination are currently reported but are limited by methodologic flaws and variable estimations. The authors aimed to develop a unifying predictive formula for volume assessment in small to large breasts based on anthropomorphic values. Ten anthropomorphic breast measurements and direct volumes of 108 mastectomy specimens from 88 women were collected prospectively. The authors performed a multivariate regression to build the optimal model for development of the predictive formula. The final model was then internally validated. A previously published formula was used as a reference. Mean (±SD) breast weight was 527.9 ± 227.6 g (range, 150 to 1250 g). After model selection, sternal notch-to-nipple, inframammary fold-to-nipple, and inframammary fold-to-fold projection distances emerged as the most important predictors. The resulting formula (the BREAST-V) showed an adjusted R² of 0.73. The estimated expected absolute error on new breasts is 89.7 g (95 percent CI, 62.4 to 119.1 g) and the expected relative error is 18.4 percent (95 percent CI, 12.9 to 24.3 percent). Application of the reference formula to the sample yielded worse predictions than those derived from the BREAST-V, with an R² of 0.55. The BREAST-V is a reliable tool for predicting small to large breast volumes accurately for use as a complementary device in surgeon evaluation. An app entitled BREAST-V for both iOS and Android devices is currently available for free download in the Apple App Store and Google Play Store. Level of evidence: Diagnostic, II.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eccleston, C.H.
1997-09-05
The National Environmental Policy Act (NEPA) of 1969 was established by Congress more than a quarter of a century ago, yet there is a surprising lack of specific tools, techniques, and methodologies for effectively implementing these regulatory requirements. Lack of professionally accepted techniques is a principal factor responsible for many inefficiencies. Often, decision makers do not fully appreciate or capitalize on the true potential which NEPA provides as a platform for planning future actions. New approaches and modern management tools must be adopted to fully achieve NEPA's mandate. A new strategy, referred to as Total Federal Planning, is proposed for unifying large-scale federal planning efforts under a single, systematic, structured, and holistic process. Under this approach, the NEPA planning process provides a unifying framework for integrating all early environmental and nonenvironmental decision-making factors into a single comprehensive planning process. To promote effectiveness and efficiency, modern tools and principles from the disciplines of Value Engineering, Systems Engineering, and Total Quality Management are incorporated. Properly integrated and implemented, these planning tools provide the rigorous, structured, and disciplined framework essential in achieving effective planning. Ultimately, the goal of a Total Federal Planning strategy is to construct a unified and interdisciplinary framework that substantially improves decision-making, while reducing the time, cost, redundancy, and effort necessary to comply with environmental and other planning requirements. At a time when Congress is striving to re-engineer the governmental framework, apparatus, and process, a Total Federal Planning philosophy offers a systematic approach for uniting the disjointed and often convoluted planning process currently used by most federal agencies. Potentially this approach has widespread implications in the way federal planning is approached.
29 CFR 779.219 - Unified operation may be achieved without common control or common ownership.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 29 Labor 3 2010-07-01 2010-07-01 false Unified operation may be achieved without common control or... Act May Apply; Enterprise Coverage Unified Operation Or Common Control § 779.219 Unified operation may be achieved without common control or common ownership. The performance of related activities through...
ERIC Educational Resources Information Center
Kaplan, David M.; Gladding, Samuel T.
2011-01-01
This article describes the development of the historic "Principles for Unifying and Strengthening the Profession." An outcome of the "20/20: A Vision for the Future of Counseling" initiative, this document delineates a core set of principles that unifies and advances the counseling profession. "Principles for Unifying and Strengthening the…
Chen, Yu; Mu, Xiaojing; Wang, Tao; Ren, Weiwei; Yang, Ya; Wang, Zhong Lin; Sun, Chengliang; Gu, Alex Yuandong
2016-01-01
Here, we report a stable and predictable aero-elastic motion in the flow-driven energy harvester, which is different from flapping and vortex-induced-vibration (VIV). A unified theoretical framework that describes the flutter phenomenon observed in both “stiff” and “flexible” materials for flow-driven energy harvesters is presented in this work. We prove that flutter in both types of materials is the result of the coupled effects of torsional and bending modes. Compared to “stiff” materials, which have a flow-velocity-independent flutter frequency, “flexible” materials present a flutter frequency that almost linearly scales with the flow velocity. Specific to “flexible” materials, pre-stress modulates the frequency range in which flutter occurs. It is experimentally observed that a double-clamped “flexible” piezoelectric P(VDF-TrFE) thin belt, when driven into the flutter state, yields a 1,000 times increase in the output voltage compared to that of the non-fluttered state. At a fixed flow velocity, increasing the pre-stress level of the P(VDF-TrFE) thin belt up-shifts the flutter frequency. In addition, this work allows the rational design of flexible piezoelectric devices, including flow-driven energy harvesters, triboelectric energy harvesters, and self-powered wireless flow speed sensors. PMID:27739484
General slip regime permeability model for gas flow through porous media
NASA Astrophysics Data System (ADS)
Zhou, Bo; Jiang, Peixue; Xu, Ruina; Ouyang, Xiaolong
2016-07-01
A theoretical effective gas permeability model was developed for rarefied gas flow in porous media, which holds over the entire slip regime with the permeability derived as a function of the Knudsen number. This general slip regime model (GSR model) is derived from the pore-scale Navier-Stokes equations subject to the first-order wall slip boundary condition using the volume-averaging method. The local closure problem for the volume-averaged equations is studied analytically and numerically using a periodic sphere array geometry. The GSR model includes a rational fraction function of the Knudsen number, which leads to a limiting effective permeability as the Knudsen number increases. The mechanism for this behavior is the internal viscous friction caused by the converging-diverging flow channels in porous media. A linearization of the GSR model leads to the Klinkenberg equation for slightly rarefied gas flows. Finite element simulations show that the Klinkenberg model overestimates the effective permeability by as much as 33% when a flow approaches the transition regime. The GSR model reduces to the unified permeability model [F. Civan, "Effective correlation of apparent gas permeability in tight porous media," Transp. Porous Media 82, 375 (2010)] for flow in the slip regime and clarifies the physical significance of the empirical parameter b in the unified model.
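For context, the Klinkenberg correction that the GSR model reduces to in the slip regime is, in its usual textbook form,

\[ k_a = k_\infty \left( 1 + \frac{b}{\bar{p}} \right), \]

with k_∞ the intrinsic (liquid) permeability, p̄ the mean gas pressure, and b the gas-slippage factor; the GSR model's contribution, per the abstract, is to tie b to pore-scale quantities through the Knudsen number rather than leaving it empirical.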
NASA Astrophysics Data System (ADS)
Ma, Xiaoli; Guo, Xiaoyu; Song, Yuelin; Qiao, Lirui; Wang, Wenguang; Zhao, Mingbo; Tu, Pengfei; Jiang, Yong
2016-12-01
Clarification of the chemical composition of traditional Chinese medicine formulas (TCMFs) is a challenge due to the variety of structures and the complexity of plant matrices. Herein, an integrated strategy was developed by hyphenating ultra-performance liquid chromatography (UPLC), quadrupole time-of-flight (Q-TOF), hybrid triple quadrupole-linear ion trap mass spectrometry (Qtrap-MS), and the novel post-acquisition data processing software UNIFI to achieve automatic, rapid, accurate, and comprehensive qualitative and quantitative analysis of the chemical components in TCMFs. As a proof of concept, chemical profiling was performed on Baoyuan decoction (BYD), an ancient TCMF clinically used for the treatment of coronary heart disease, which consists of Ginseng Radix et Rhizoma, Astragali Radix, Glycyrrhizae Radix et Rhizoma Praeparata Cum Melle, and Cinnamomi Cortex. As many as 236 compounds were plausibly or unambiguously identified, and 175 compounds were quantified or relatively quantified by the scheduled multiple reaction monitoring (sMRM) method. The findings demonstrate that the strategy integrating the rapidity of UNIFI software, the efficiency of UPLC, the accuracy of Q-TOF-MS, and the sensitivity and quantitation ability of Qtrap-MS provides a method for the efficient and comprehensive chemome characterization and quality control of complex TCMFs.
SIRGAS: the core geodetic infrastructure in Latin America and the Caribbean
NASA Astrophysics Data System (ADS)
Sanchez, L.; Brunini, C.; Drewes, H.; Mackern, V.; da Silva, A.
2013-05-01
Studying, understanding, and modelling geophysical phenomena, such as global change and geodynamics, require geodetic reference frames with (1) an order of accuracy higher than the magnitude of the effects we want to study, (2) consistency and reliability worldwide (the same accuracy everywhere), and (3) long-term stability (the same order of accuracy at any time). The definition, realisation, maintenance, and wide utilisation of the International Terrestrial Reference System (ITRS) are oriented to guarantee a globally unified geometric reference frame with reliability at the mm level, i.e. the International Terrestrial Reference Frame (ITRF). The densification of the global ITRF in Latin America and the Caribbean is given by SIRGAS (Sistema de Referencia Geocéntrico para Las Américas), whose primary objective is to provide the most precise coordinates in the region. SIRGAS is therefore the backbone for all regional projects based on the generation, use, and analysis of geo-referenced data at the national as well as the international level. Besides providing the reference for a wide range of scientific applications such as the monitoring of Earth's crust deformations, vertical movements, sea level variations, atmospheric studies, etc., SIRGAS is also the platform for practical applications such as engineering projects, digital administration of geographical data, geospatial data infrastructures, etc. Accordingly, the present contribution describes the main features of SIRGAS, with particular attention to the challenges of continuing to provide the best possible long-term stable and highly precise reference frame for Latin America and the Caribbean.
A proposal for unification of fatigue crack growth law
NASA Astrophysics Data System (ADS)
Kobelev, V.
2017-05-01
In the present paper, new fractional-differential dependences of the number of cycles to failure on the stress amplitude, for a given initial crack length, are proposed within the linear fracture approach. The anticipated unified propagation function describes the infinitesimal crack length growth per load cycle, assuming that the load ratio remains constant over the load history. Two unifying fractional-differential functions with different numbers of fitting parameters are proposed, together with alternative threshold formulations of the fractional-differential propagation functions. The mean stress dependence follows immediately from the considered laws. The corresponding formulas for crack length as a function of the number of cycles are derived in closed form.
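As a baseline for the fractional-differential laws proposed above, the classical Paris-Erdogan relation (textbook notation, not necessarily the paper's) reads

\[ \frac{da}{dN} = C\,(\Delta K)^m, \qquad \Delta K = Y\,\Delta\sigma\,\sqrt{\pi a}, \]

and integrating it from an initial crack length a_0 to a critical length a_c gives the familiar closed-form cycle count that the unified propagation functions generalize:

\[ N_f = \int_{a_0}^{a_c} \frac{da}{C\,\big(Y\,\Delta\sigma\,\sqrt{\pi a}\,\big)^m}. \]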
Analysis of randomly time varying systems by Gaussian closure technique
NASA Astrophysics Data System (ADS)
Dash, P. K.; Iyengar, R. N.
1982-07-01
The Gaussian probability closure technique is applied to study the random response of multidegree of freedom stochastically time varying systems under non-Gaussian excitations. Under the assumption that the response, the coefficient and the excitation processes are jointly Gaussian, deterministic equations are derived for the first two response moments. It is further shown that this technique leads to the best Gaussian estimate in a minimum mean square error sense. An example problem is solved which demonstrates the capability of this technique for handling non-linearity, stochastic system parameters and amplitude limited responses in a unified manner. Numerical results obtained through the Gaussian closure technique compare well with the exact solutions.
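A one-line illustration of the closure step: under the joint-Gaussian assumption, Isserlis' theorem expresses all higher moments through the first two, e.g. for zero-mean components

\[ E[x_1 x_2 x_3] = 0, \qquad E[x_1 x_2 x_3 x_4] = E[x_1 x_2]\,E[x_3 x_4] + E[x_1 x_3]\,E[x_2 x_4] + E[x_1 x_4]\,E[x_2 x_3], \]

which is what lets the moment equations close at second order.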
NASA Astrophysics Data System (ADS)
Wei, Yimin; Wu, Hebing
2001-12-01
In this paper, the perturbation and subproper splittings for the generalized inverse A_{T,S}^{(2)}, the unique matrix X such that XAX = X, R(X) = T and N(X) = S, are considered. We present lower and upper bounds for the perturbation of A_{T,S}^{(2)}. Convergence of subproper splittings for computing the special solution A_{T,S}^{(2)} b of the restricted rectangular linear system Ax = b, x ∈ T, is studied, and a characterization of the solution A_{T,S}^{(2)} b is developed. This gives a unified treatment of the related problems considered in the literature by Ben-Israel, Berman, Hanke, Neumann, Plemmons, etc.
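A small numerical illustration of the object studied above (not of the paper's splitting iterations): when U has columns spanning T and V has null space S, one standard representation is X = U (V A U)^{-1} V, provided the inverse exists. The names and dimensions below are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    m, n, r = 5, 4, 2
    A = rng.standard_normal((m, n))
    U = rng.standard_normal((n, r))    # columns span T, a subspace of R^n
    V = rng.standard_normal((r, m))    # null space of V is S, a subspace of R^m

    X = U @ np.linalg.inv(V @ A @ U) @ V    # candidate for A_{T,S}^{(2)}

    # check the three defining properties
    assert np.allclose(X @ A @ X, X)                        # XAX = X
    assert np.linalg.matrix_rank(X) == r                    # rank r ...
    assert np.linalg.matrix_rank(np.hstack([X, U])) == r    # ... with R(X) = span(U) = T
    ns = np.linalg.svd(V)[2][r:].T                          # basis of S = N(V)
    assert np.allclose(X @ ns, 0)                           # N(X) contains (hence equals) S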
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lue Xing; Sun Kun; Wang Pan
In the framework of Bell-polynomial manipulations, under investigation hereby are three single-field bilinearizable equations: the (1+1)-dimensional shallow water wave model, Boiti-Leon-Manna-Pempinelli model, and (2+1)-dimensional Sawada-Kotera model. Based on the concept of scale invariance, a direct and unifying Bell-polynomial scheme is employed to achieve the Baecklund transformations and Lax pairs associated with those three soliton equations. Note that the Bell-polynomial expressions and Bell-polynomial-typed Baecklund transformations for those three soliton equations can be, respectively, cast into the bilinear equations and bilinear Baecklund transformations with symbolic computation. Consequently, it is also shown that the Bell-polynomial-typed Baecklund transformations can be linearized into the corresponding Lax pairs.
The cosmological dark sector as a scalar σ-meson field
NASA Astrophysics Data System (ADS)
Carneiro, Saulo
2018-03-01
Previous quantum field estimations of the QCD vacuum in the expanding space-time lead to a dark energy component scaling linearly with the Hubble parameter, which gives the correct figure for the observed cosmological term. Here we show that this behaviour also appears at the classical level, as a result of the chiral symmetry breaking in a low energy, effective σ-model. The dark sector is described in a unified way by the σ condensate and its fluctuations, giving rise to a decaying dark energy and a homogeneous creation of non-relativistic dark particles. The creation rate and the future asymptotic de Sitter horizon are both determined by the σ mass scale.
Kinetic signature of fractal-like filament networks formed by orientational linear epitaxy.
Hwang, Wonmuk; Eryilmaz, Esma
2014-07-11
We study a broad class of epitaxial assembly of filament networks on lattice surfaces. Over time, scale-free behavior emerges, with a power-law exponent of 2.5-3 in the filament length distribution. Partitioning between the power-law and exponential behaviors in a network can be used to find the stage and kinetic parameters of the assembly process. To analyze real-world networks, we develop a computer program that measures the network architecture in experimental images. Application to triaxial networks of collagen fibrils shows quantitative agreement with our model. Our unifying approach can be used for characterizing and controlling the network formation that is observed across biological and nonbiological systems.
Sensory integration of a light touch reference in human standing balance.
Assländer, Lorenz; Smith, Craig P; Reynolds, Raymond F
2018-01-01
In upright stance, light touch of a space-stationary touch reference reduces spontaneous sway. Moving the reference evokes sway responses which exhibit non-linear behavior that has been attributed to sensory reweighting. Reweighting refers to a change in the relative contribution of sensory cues signaling body sway in space and light touch cues signaling finger position with respect to the body. Here we test the hypothesis that the sensory fusion process involves a transformation of light touch signals into the same reference frame as other sensory inputs encoding body sway in space, or vice versa. Eight subjects lightly gripped a robotic manipulandum which moved in a circular arc around the ankle joint. A pseudo-randomized motion sequence with broad spectral characteristics was applied at three amplitudes. The stimulus was presented at two different heights and therefore different radial distances, which were matched in terms of angular motion. However, the higher stimulus evoked a significantly larger sway response, indicating that the response was not matched to stimulus angular motion. Instead, the body sway response was strongly related to the horizontal translation of the manipulandum. The results suggest that light touch is integrated as the horizontal distance between body COM and the finger. The data were well explained by a model with one feedback loop minimizing changes in horizontal COM-finger distance. The model further includes a second feedback loop estimating the horizontal finger motion and correcting the first loop when the touch reference is moving. The second loop includes the predicted transformation of sensory signals into the same reference frame and a non-linear threshold element that reproduces the non-linear sway responses, thus providing a mechanism that can explain reweighting.
International Geomagnetic Reference Field: the third generation.
Peddie, N.W.
1982-01-01
In August 1981 the International Association of Geomagnetism and Aeronomy revised the International Geomagnetic Reference Field (IGRF). It is the second revision since the inception of the IGRF in 1968. The revision extends the earlier series of IGRF models from 1980 to 1985, introduces a new series of definitive models for 1965-1976, and defines a provisional reference field for 1975-1980. The revision consists of: 1) a model of the main geomagnetic field at 1980.0, not continuous with the earlier series of IGRF models, together with a forecast model of the secular variation of the main field during 1980-1985; 2) definitive models of the main field at 1965.0, 1970.0, and 1975.0, with linear interpolation of the model coefficients specified for intervening dates; and 3) a provisional reference field for 1975-1980, defined as the linear interpolation of the 1975 and 1980 main-field models. -from Author
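The linear-interpolation prescription in item 2) is simple to state in code; the coefficient values below are placeholders for illustration, not actual IGRF Gauss coefficients:

    def interp_gauss_coeff(g0, g1, t0, t1, t):
        """Linearly interpolate a Gauss coefficient between two model epochs t0 <= t <= t1."""
        return g0 + (t - t0) / (t1 - t0) * (g1 - g0)

    # hypothetical values (nT) for one coefficient at the 1970.0 and 1975.0 definitive models
    print(interp_gauss_coeff(-30220.0, -30100.0, 1970.0, 1975.0, 1972.5))

The same one-liner, applied coefficient by coefficient, defines the field model at any intervening date.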
Motion-based nearest vector metric for reference frame selection in the perception of motion.
Agaoglu, Mehmet N; Clarke, Aaron M; Herzog, Michael H; Ögmen, Haluk
2016-05-01
We investigated how the visual system selects a reference frame for the perception of motion. Two concentric arcs underwent circular motion around the center of the display, where observers fixated. The outer (target) arc's angular velocity profile was modulated by a sine wave midflight whereas the inner (reference) arc moved at a constant angular speed. The task was to report whether the target reversed its direction of motion at any point during its motion. We investigated the effects of spatial and figural factors by systematically varying the radial and angular distances between the arcs, and their relative sizes. We found that the effectiveness of the reference frame decreases with increasing radial- and angular-distance measures. Drastic changes in the relative sizes of the arcs did not influence motion reversal thresholds, suggesting no influence of stimulus form on perceived motion. We also investigated the effect of common velocity by introducing velocity fluctuations to the reference arc as well. We found no effect of whether or not a reference frame has a constant motion. We examined several form- and motion-based metrics, which could potentially unify our findings. We found that a motion-based nearest vector metric can fully account for all the data reported here. These findings suggest that the selection of reference frames for motion processing does not result from a winner-take-all process, but instead, can be explained by a field whose strength decreases with the distance between the nearest motion vectors regardless of the form of the moving objects.
Siddoway, C.S.; Siddoway, M.F.
2007-01-01
The convergence of meridians toward the South Pole causes unique problems for geometrical comparison of structural geological and geophysical datasets from Antarctica. The true North reference direction ordinarily is used for measuring and reporting vector data (strike, trend) in Antarctica, as elsewhere. However, over a latitude distance of just 100 km at 85° South, the angular difference in the true North direction exceeds 10°. Consequently, when performing a regional tectonic analysis of vector data (strike, trend) for structures such as faults, dike arrays, or geophysical lineaments oriented with respect to North at different sites, it is necessary to rotate the data to a common reference direction. A modular arithmetic function, performed as a spreadsheet calculation, offers the means to unify data sets from sites having different longitude positions, by rotation to a common reference direction. The function is SC ≡ SM + ∆L (mod 360), where SC = converted strike; SM = measured strike; ∆L = angle in degrees longitude between reference longitude and study site; and 360, the divisor, is the number of degrees in Earth's circumference. The method is used to evaluate 1) paleomagnetic rotation of the Ellsworth-Whitmore Mountains with respect to the Transantarctic Mountains, and 2) orogenic curvature of the Ross Orogen.
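The conversion is directly reproducible; in spreadsheet syntax it is essentially =MOD(SM + dL, 360), and a Python equivalent (with a made-up site for illustration) is:

    def convert_strike(sm_deg, dl_deg):
        """SC = (SM + dL) mod 360: rotate a measured strike to the common reference direction."""
        return (sm_deg + dl_deg) % 360.0

    # e.g. a strike of 350 deg measured at a site 25 deg of longitude from the reference meridian
    print(convert_strike(350.0, 25.0))   # -> 15.0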
NASA Astrophysics Data System (ADS)
Magnon, Anne
2005-04-01
A non-geometric cosmology is presented, based on logic of observability, where logical categories of our perception set frontiers to comprehensibility. The Big-Bang singularity finds here a substitute (comparable to a "quantum jump"): a logical process (tied to self-referent and divisible totality) by which information emerges, focalizes on events and recycles, providing a transition from incoherence to causal coherence. This jump manufactures causal order and space-time localization, as exact solutions to Einstein's equation, where the last step of the process disentangles complex Riemann spheres into real null-cones (a geometric overturning imposed by self-reference, reminding us of our ability to project the cosmos within our mental sphere). Concepts such as antimatter and dark energy (dual entities tied to bifurcations or broken symmetries, and their compensation) are presented as hidden in the virtual potentialities, while irreversible time appears with the recycling of information and related flow. Logical bifurcations (such as the "part-totality" category, a quantum of information which owes its recycling to non-localizable logical separations, as anticipated by instability or horizon dependence of the quantum vacuum) induce broken symmetries, at the (complex or real) geometric level [e.g., the anti-self-dual complex nonlinear graviton solutions, which break duality symmetry, provide a model for (hidden) anti-matter, itself compensated with dark energy, and providing, with space-time localization, the radiative gravitational energy (Bondi flux and related bifurcations of the peeling-off type), as well as mass of isolated bodies]. These bifurcations are compensated by inertial effects (non-geometric precursors of the Coriolis forces) able to explain (on logical grounds) the cosmic expansion (a repulsion?) and critical equilibrium of the cosmic tissue. The space-time environment, itself, emerges through the jump, as a censor to totality, a screen to incoherence (as anticipated by black-hole event horizons, cosmic censors able to shelter causal geometry). In analogy with black-hole singularities, the Big-Bang can be viewed as a geometric hint that a transition from incoherence to (causal space-time) localization and related coherence (comprehensibility) is taking place (space-time demolition, a reverse process towards incoherence or information recycling, is expected in the vicinity of singularities, as hinted by black-holes and related "time-machines"). A theory of the emergence of perception (and life?), in connection with observability and the function of partition (able to screen totality), is on its way [interface incoherence-coherence, sleeping and awaking states of localization, horizons of perception etc., are anticipated by black-hole event horizons, beyond which a non-causal, dimensionless incoherent regime or memorization process presents itself with the loss of localization, suggesting a unifying regime (ultimate energies?) hidden in cosmic potentialities]. The decoherence process presented here suggests an ultimate interaction, expression of the logical relation of subsystems to totality, to be identified with the flow of information or its recycling through the cosmic jump (this is anticipated by the dissipation of distance or hierarchies on null-cones, themselves recycled with information and events). The geometric projection of this unified irreversible dynamics is expressed by unified Yang-Mills field equations (coupled to Einsteinian gravity).
An ultimate form of action ("set"-volumes of information) presents itself, whose extrema can be achieved through extremal transfer of information and related partition of cells of information (thus anticipating the mitosis of living cells, possibly triggered at the non-localizable level, as imposed by the logical regime of cosmic decoherence: participating subsystems?). The matching of the objective and subjective facets of (information and) decoherence is perceived as contact with a reality.
Classical and quantum communication without a shared reference frame.
Bartlett, Stephen D; Rudolph, Terry; Spekkens, Robert W
2003-07-11
We show that communication without a shared reference frame is possible using entangled states. Both classical and quantum information can be communicated with perfect fidelity without a shared reference frame at a rate that asymptotically approaches one classical bit or one encoded qubit per transmitted qubit. We present an optical scheme to communicate classical bits without a shared reference frame using entangled photon pairs and linear optical Bell state measurements.
Entanglement distribution in multi-particle systems in terms of unified entropy.
Luo, Yu; Zhang, Fu-Gang; Li, Yongming
2017-04-25
We investigate the entanglement distribution in multi-particle systems in terms of unified (q, s)-entropy. We find that for any tripartite mixed state, the unified (q, s)-entropy entanglement of assistance follows a polygamy relation. This polygamy relation also holds in multi-particle systems. Furthermore, a generalized monogamy relation is provided for unified (q, s)-entropy entanglement in the multi-qubit system.
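For reference, the unified (q, s)-entropy in question is commonly defined, for a density operator ρ with q > 0, q ≠ 1 and s ≠ 0, as

\[ S_{q,s}(\rho) = \frac{1}{(1-q)\,s} \Big[ \big( \mathrm{Tr}\, \rho^{q} \big)^{s} - 1 \Big], \]

recovering the Rényi entropy as s → 0, the Tsallis entropy at s = 1, and the von Neumann entropy as q → 1.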
Federal Aviation Regulations - National Aviation Regulations of Russia
NASA Astrophysics Data System (ADS)
Chernykh, O.; Bakiiev, M.
2018-03-01
Chinese Aerospace Engineering is currently developing cooperation with Russia on a wide-body airplane project, which has directed the work towards a better understanding of the Russian airworthiness management system. The paper introduces the national Aviation regulations of Russia, presents a comparison of them with worldwide recognized regulations, and highlights typical differences. These have been found to be: two general types of regulations used in Russia (Aviation Regulations and Federal Aviation Regulations); a non-unified structure of regulations on Aircraft Operation management; various separate agencies responsible for issuing regulations instead of one national aviation authority; and typical confusions in references. The paper also gives a list of effective Russian Regulations of both types.
Mars Sample Handling Protocol Workshop Series
NASA Technical Reports Server (NTRS)
Race, Margaret S. (Editor); Nealson, Kenneth H.; Rummel, John D. (Editor); Acevedo, Sara E. (Editor); Devincenzi, Donald L. (Technical Monitor)
2001-01-01
This report provides a record of the proceedings and recommendations of Workshop 3 of the Series, which was held in San Diego, California, March 19-21, 2001. Materials such as the Workshop agenda and participant lists, as well as complete citations of all references and a glossary of terms and acronyms, appear in the Appendices. Workshop 3 builds on the deliberations and findings of the earlier workshops in the Series, which have been reported separately. During Workshop 3, five individual sub-groups were formed to discuss the following topics: (1) Unifying Properties of Life, (2) Morphological Organization and Chemical Properties, (3) Geochemical and Geophysical Properties, (4) Chemical Methods, and (5) Cell Biology Methods.
Experimental test of single-system steering and application to quantum communication
NASA Astrophysics Data System (ADS)
Liu, Zhao-Di; Sun, Yong-Nan; Cheng, Ze-Di; Xu, Xiao-Ye; Zhou, Zong-Quan; Chen, Geng; Li, Chuan-Feng; Guo, Guang-Can
2017-02-01
Einstein-Podolsky-Rosen (EPR) steering describes the ability to remotely steer the quantum states of an entangled pair by locally measuring one of its particles. Here we report on an experimental demonstration of single-system steering. The application to quantum communication is also investigated. Single-system steering refers to steering of a single d-dimensional quantum system and can be used in a unifying picture to certify the reliability of tasks employed in both quantum communication and quantum computation. In our experiment, high-dimensional quantum states are implemented by encoding the polarization and orbital angular momentum of photons, with dimensionality of up to 12.
GPM Avionics Module Heat Pipes Design and Performance Test Results
NASA Technical Reports Server (NTRS)
Ottenstein, Laura; DeChristopher, Mike
2012-01-01
GPM is a satellite constellation for studying precipitation, formed through a partnership between NASA and the Japanese Aerospace Exploration Agency (JAXA). The GPM Core Observatory, being developed and tested at GSFC, serves as a reference standard to unify precipitation measurements from the GPM satellite constellation. The Core Observatory carries an advanced radar/radiometer system to measure precipitation from space. The scientific data gained from GPM will benefit both NASA and JAXA by advancing our understanding of Earth's water and energy cycle, improving forecasts of extreme weather events, and extending our current capabilities in using accurate and timely precipitation information to benefit society.
An implicit adaptation algorithm for a linear model reference control system
NASA Technical Reports Server (NTRS)
Mabius, L.; Kaufman, H.
1975-01-01
This paper presents a stable implicit adaptation algorithm for model reference control. The constraints for stability are found using Lyapunov's second method and do not depend on perfect model following between the system and the reference model. Methods are proposed for satisfying these constraints without estimating the parameters on which the constraints depend.
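The paper's implicit algorithm is not reproduced in this abstract; as a hedged illustration of the general structure (a Lyapunov-rule adaptation law, derived so that stability does not hinge on perfect model following), a minimal explicit scalar MRAC can be sketched as follows. All gains and parameters here are invented for the example:

    import numpy as np

    # unknown plant: x' = a*x + b*u ; stable reference model: xm' = am*xm + bm*r
    a, b = 1.0, 3.0            # plant parameters (unknown to the controller)
    am, bm = -4.0, 4.0         # reference model parameters
    gamma = 2.0                # adaptation gain
    dt, T = 1e-3, 20.0

    x = xm = 0.0
    th_x = th_r = 0.0          # adaptive feedback / feedforward gains
    for k in range(int(T / dt)):
        t = k * dt
        r = 1.0 if (t % 4.0) < 2.0 else -1.0      # square-wave reference command
        u = th_r * r + th_x * x
        e = x - xm
        # Lyapunov-rule adaptation: needs only sign(b), not exact plant knowledge
        th_x -= gamma * e * x * np.sign(b) * dt
        th_r -= gamma * e * r * np.sign(b) * dt
        x += (a * x + b * u) * dt                 # Euler step of the plant
        xm += (am * xm + bm * r) * dt             # Euler step of the reference model

    print(f"tracking error after {T} s: {abs(x - xm):.4f}")
    print(f"ideal gains th_x={(am - a) / b:.2f}, th_r={bm / b:.2f}; "
          f"learned th_x={th_x:.2f}, th_r={th_r:.2f}")

The Lyapunov function behind the update is typically V = e²/2 + |b|((θ_x - θ_x*)² + (θ_r - θ_r*)²)/(2γ), whose derivative along trajectories is made negative semidefinite by the update law above.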
Deformation Theory and Physics Model Building
NASA Astrophysics Data System (ADS)
Sternheimer, Daniel
2006-08-01
The mathematical theory of deformations has proved to be a powerful tool in modeling physical reality. We start with a short historical and philosophical review of the context and concentrate this rapid presentation on a few interrelated directions where deformation theory is essential in bringing a new framework, which then has to be developed using adapted tools, some of which come from the deformation aspect itself. Minkowskian space-time can be deformed into Anti de Sitter, where massless particles become composite (also dynamically): this opens new perspectives in particle physics, at least at the electroweak level, including the prediction of new mesons. Nonlinear group representations and covariant field equations, coming from interactions, can be viewed as a deformation of their linear (free) part: recognizing this fact can provide a good framework for treating problems in this area, in particular global solutions. Last but not least, (algebras associated with) classical mechanics (and field theory) on a Poisson phase space can be deformed to (algebras associated with) quantum mechanics (and quantum field theory). That is now a frontier domain in mathematics and theoretical physics called deformation quantization, with multiple ramifications, avatars, and connections in both mathematics and physics. These include representation theory, quantum groups (when considering Hopf algebras instead of associative or Lie algebras), noncommutative geometry and manifolds, algebraic geometry, number theory, and of course what is regrouped under the name of M-theory. We shall here look at these from the unifying point of view of deformation theory and refer to a limited number of papers as a starting point for further study.
Precession of a two-layer Earth: contributions of the core and elasticity
NASA Astrophysics Data System (ADS)
Baenas, Tomás; Ferrándiz, José M.; Escapa, Alberto; Getino, Juan; Navarro, Juan F.
2016-04-01
The Earth's internal structure contributes to the precession rate in a small but non-negligible amount, given the current accuracy goals demanded by IAG/GGOS for the reference frames, namely 30 μas and 3 μas/yr. These contributions come from a variety of sources. One of those not yet accounted for in current IAU models is associated with the crossed effects of certain nutation-rising terms of a two-layer Earth model; intuitively, it gathers an 'indirect' effect of the core via the NDFW, or FCN, resonance as well as a 'direct' effect arising from terms that account for energy variations depending on the elasticity of the core. The direct effect of the departure of the Earth's rheology from linear elasticity reaches a similar order of magnitude. To compute these effects we work out the problem in a unified way within the Hamiltonian framework developed by Getino and Ferrándiz (2001), which allows a consistent treatment of the problem, since all the perturbations are derived from the same tide-generating expansion and the crossed effects are rigorously obtained through Hori's canonical perturbation method. The problem admits an asymptotic analytical solution. The Hamiltonian is constructed by considering a two-layer Earth model made up of an anelastic mantle and a fluid core, perturbed by the gravitational action of the Moon and the Sun. The former effects reach some tens of μas/yr in the longitude rate, hence above the target accuracy level. We outline their influence on the estimation of the Earth's dynamical ellipticity, a main parameter factorizing both precession and nutation.
Accuracy of MHD simulations: Effects of simulation initialization in GUMICS-4
NASA Astrophysics Data System (ADS)
Lakka, Antti; Pulkkinen, Tuija; Dimmock, Andrew; Osmane, Adnane; Palmroth, Minna; Honkonen, Ilja
2016-04-01
We conducted a study aimed at revealing how different global magnetohydrodynamic (MHD) simulation initialization methods affect the dynamics in different parts of the Earth's magnetosphere-ionosphere system. While such magnetosphere-ionosphere coupling codes have been used for more than two decades, their testing still requires significant work to identify the optimal numerical representation of the physical processes. We used the Grand Unified Magnetosphere-Ionosphere Coupling Simulation (GUMICS-4), the only European global MHD simulation, developed by the Finnish Meteorological Institute. GUMICS-4 was put to a test that included two stages: 1) a 10-day OMNI data interval was simulated and the results were validated by comparing the bow shock and magnetopause positions predicted by the simulation to actual measurements, and 2) the validated 10-day simulation run was used as a reference in a comparison of five 3 + 12 hour (3-hour synthetic initialisation + 12-hour actual simulation) runs. The 12-hour input was not only identical in each simulation case but also represented a subset of the 10-day input, thus enabling us to quantify the effects of different synthetic initialisations on the magnetosphere-ionosphere system. The synthetic initialisation data sets were created using stepwise, linear, and sinusoidal functions. Switching the input from synthetic to real OMNI data was immediate. The results show that the magnetosphere forms in each case within an hour after the switch to real data. However, local dissimilarities are found in the magnetospheric dynamics after formation, depending on the initialisation method used. This is especially evident in the inner parts of the lobe.
Linear Static Behavior of Damaged Laminated Composite Plates and Shells
2017-01-01
A mathematical scheme is proposed here to model a damaged mechanical configuration for laminated and sandwich structures. In particular, two kinds of functions defined in the reference domain of plates and shells are introduced to weaken their mechanical properties in terms of engineering constants: a two-dimensional Gaussian function and an ellipse shaped function. By varying the geometric parameters of these distributions, several damaged configurations are analyzed and investigated through a set of parametric studies. The effect of a progressive damage is studied in terms of displacement profiles and through-the-thickness variations of stress, strain, and displacement components. To this end, a posteriori recovery procedure based on the three-dimensional equilibrium equations for shell structures in orthogonal curvilinear coordinates is introduced. The theoretical framework for the two-dimensional shell model is based on a unified formulation able to study and compare several Higher-order Shear Deformation Theories (HSDTs), including Murakami’s function for the so-called zig-zag effect. Thus, various higher-order models are used and compared also to investigate the differences which can arise from the choice of the order of the kinematic expansion. Their ability to deal with several damaged configurations is analyzed as well. The paper can be placed also in the field of numerical analysis, since the solution to the static problem at issue is achieved by means of the Generalized Differential Quadrature (GDQ) method, whose accuracy and stability are proven by a set of convergence analyses and by the comparison with the results obtained through a commercial finite element software. PMID:28773170
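Since the numerical solution hinges on the GDQ method, the standard differential-quadrature weighting coefficients are worth recalling (Shu's explicit formulas, in common notation; the paper may use an equivalent variant):

\[ a_{ij}^{(1)} = \frac{M^{(1)}(x_i)}{(x_i - x_j)\, M^{(1)}(x_j)} \ \ (i \neq j), \qquad M^{(1)}(x_i) = \prod_{k \neq i} (x_i - x_k), \qquad a_{ii}^{(1)} = -\sum_{j \neq i} a_{ij}^{(1)}, \]

so that the first derivative at a grid point is approximated by f'(x_i) ≈ Σ_j a_{ij}^{(1)} f(x_j); weights for higher derivatives follow by recurrence.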
NASA Astrophysics Data System (ADS)
Raftopoulos, Dionysios G.
Werner Heisenberg's well known requirement that Physical Science ought to occupy itself solely with entities that are both observable and measurable, is almost universally accepted. Starting from the above thesis and accepting Albert Einstein's second fundamental hypothesis, as stated in his historical article "On the Electrodynamics of moving Bodies", we are led to the conclusion that the kinematics of a material point, as measured and described by a localized real-life Observer, always refers not to its present position but rather to the one it occupied at a previous moment in time, which we call Conjugate Position, or Retarded Position according to Richard Feynman. From the experimenter's point of view, only the Conjugate position is important. Thus, the moving entity is observed and measured at a position that is different to the one it occupies now, a conclusion eerily evocative of the "shadows" paradigm in Plato's Cave Allegory. This, i.e. the kinematics of the Conjugate Position, is analytically described by the "Theory of Harmonicity of the Field of Light". Having selected the Projective Space as its Geometrical space of choice, an important conclusion of this theory is that, for a localized Observer, a linearly moving object is possible to appear simultaneously at two different positions and, consequently, at two different states in the Observer's Perceptible Space. This conclusion leads to the formulation of at least two fundamental theorems as well as to a plethora of corollaries all in accordance with the notions of contemporary Quantum Mechanics. A new form of the Unified Field of Light is presented.
Fogolari, Federico; Corazza, Alessandra; Esposito, Gennaro
2015-04-05
The generalized Born model in the Onufriev, Bashford, and Case (Onufriev et al., Proteins: Struct Funct Genet 2004, 55, 383) implementation has emerged as one of the best compromises between accuracy and speed of computation. For simulations of nucleic acids, however, a number of issues should be addressed: (1) the generalized Born model is based on a linear model, and the linearization of the reference Poisson-Boltzmann equation may be questioned for highly charged systems such as nucleic acids; (2) although much attention has been given to potentials, solvation forces could be much less sensitive to linearization than the potentials; and (3) the accuracy of the Onufriev-Bashford-Case (OBC) model for nucleic acids depends on fine tuning of parameters. Here, we show that the linearization of the Poisson-Boltzmann equation has mild effects on computed forces, and that with an optimal choice of the OBC model parameters, solvation forces, essential for molecular dynamics simulations, agree well with those computed using the reference Poisson-Boltzmann model.
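For context, the generalized Born polarization energy discussed above has the standard pairwise form (salt screening omitted; R_i are the effective Born radii, computed in the OBC variant through a tanh-corrected integral):

\[ \Delta G_{pol} = -\frac{1}{2} \left( \frac{1}{\epsilon_{in}} - \frac{1}{\epsilon_{out}} \right) \sum_{i,j} \frac{q_i\, q_j}{f_{GB}(r_{ij})}, \qquad f_{GB} = \sqrt{ r_{ij}^2 + R_i R_j \exp\!\left( - \frac{r_{ij}^2}{4 R_i R_j} \right) }, \]

and the solvation forces examined in the paper are (minus) the gradients of this energy with respect to the atomic positions.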
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.
2002-05-01
Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodically defective when novel approaches or algorithms are introduced. Adopted from medical diagnosis, we define the following criteria to classify reference standards: 1. Reliance, if the generation or capturing of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, or other parameters of importance. 3. Independence, if any reference standard relies on a different procedure than that to be evaluated, or on other images or image modalities than those used routinely. This criterion bans the simultaneous use of one image for both the training and the test phase. 4. Relevance, if the algorithm to be evaluated is self-reproducible. If random parameters or optimization strategies are applied, reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference standard images used for evaluation is sufficiently large to enable statistically founded analysis. We demand that a true gold standard satisfy Criteria 1 to 3. Any standard satisfying only two criteria, i.e., Criterion 1 and Criterion 2 or Criterion 1 and Criterion 3, is referred to as a silver standard. Other standards are termed plastic. Before exhaustive evaluation based on gold or silver standards is performed, relevance must be shown (Criterion 4) and sufficiently many tests must be carried out to support statistically founded analysis (Criterion 5). In this paper, examples are given for each class of reference standards.
2017-01-01
The mechanical response of a homogeneous isotropic linearly elastic material can be fully characterized by two physical constants, the Young’s modulus and the Poisson’s ratio, which can be derived by simple tensile experiments. Any other linear elastic parameter can be obtained from these two constants. By contrast, the physical responses of nonlinear elastic materials are generally described by parameters which are scalar functions of the deformation, and their particular choice is not always clear. Here, we review in a unified theoretical framework several nonlinear constitutive parameters, including the stretch modulus, the shear modulus and the Poisson function, that are defined for homogeneous isotropic hyperelastic materials and are measurable under axial or shear experimental tests. These parameters represent changes in the material properties as the deformation progresses, and can be identified with their linear equivalent when the deformations are small. Universal relations between certain of these parameters are further established, and then used to quantify nonlinear elastic responses in several hyperelastic models for rubber, soft tissue and foams. The general parameters identified here can also be viewed as a flexible basis for coupling elastic responses in multi-scale processes, where an open challenge is the transfer of meaningful information between scales. PMID:29225507
Modeling of second order space charge driven coherent sum and difference instabilities
NASA Astrophysics Data System (ADS)
Yuan, Yao-Shuo; Boine-Frankenheim, Oliver; Hofmann, Ingo
2017-10-01
Second order coherent oscillation modes in intense particle beams play an important role for beam stability in linear or circular accelerators. In addition to the well-known second order even envelope modes and their instability, coupled even envelope modes and odd (skew) modes have recently been shown in [Phys. Plasmas 23, 090705 (2016), 10.1063/1.4963851] to lead to parametric instabilities in periodic focusing lattices with sufficiently different tunes. While that work used partly the usual envelope equations and partly particle-in-cell (PIC) simulation, we revisit these modes here and show that the complete set of second order even and odd mode phenomena can be obtained in a unifying approach by using a single set of linearized rms moment equations based on "Chernin's equations." This has the advantage that accurate information on growth rates can be obtained and gathered in a "tune diagram." In periodic focusing we retrieve the parametric sum instabilities of coupled even and of odd modes. The stop bands obtained from these equations are compared with results from PIC simulations for waterbag beams and found to show very good agreement. The "tilting instability" obtained in constant focusing confirms the equivalence of this method with the linearized Vlasov-Poisson system evaluated in second order.
Nonlinear multivariate and time series analysis by neural network methods
NASA Astrophysics Data System (ADS)
Hsieh, William W.
2004-03-01
Methods in multivariate statistical analysis are essential for working with large amounts of geophysical data, data from observational arrays, from satellites, or from numerical model output. In classical multivariate statistical analysis, there is a hierarchy of methods, starting with linear regression at the base, followed by principal component analysis (PCA) and finally canonical correlation analysis (CCA). A multivariate time series method, the singular spectrum analysis (SSA), has been a fruitful extension of the PCA technique. The common drawback of these classical methods is that only linear structures can be correctly extracted from the data. Since the late 1980s, neural network methods have become popular for performing nonlinear regression and classification. More recently, neural network methods have been extended to perform nonlinear PCA (NLPCA), nonlinear CCA (NLCCA), and nonlinear SSA (NLSSA). This paper presents a unified view of the NLPCA, NLCCA, and NLSSA techniques and their applications to various data sets of the atmosphere and the ocean (especially for the El Niño-Southern Oscillation and the stratospheric quasi-biennial oscillation). These data sets reveal that the linear methods are often too simplistic to describe real-world systems, with a tendency to scatter a single oscillatory phenomenon into numerous unphysical modes or higher harmonics, which can be largely alleviated in the new nonlinear paradigm.
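To make the NLPCA idea concrete, here is a minimal sketch (not the paper's code) using a neural-network autoencoder with a one-neuron bottleneck: its reconstruction is a mode-1 nonlinear approximation of data lying on a curved manifold, the kind of structure linear PCA necessarily smears across two modes. The data, network sizes, and training settings are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic data near a one-dimensional curved manifold (a parabola).
t = rng.uniform(-1, 1, 500)
X = np.column_stack([t, t**2]) + 0.05 * rng.normal(size=(500, 2))

# Autoencoder with a one-neuron bottleneck: its reconstruction is a
# mode-1 NLPCA approximation of the data.
ae = MLPRegressor(hidden_layer_sizes=(8, 1, 8), activation="tanh",
                  max_iter=5000, random_state=0)
ae.fit(X, X)
X_nl = ae.predict(X)

# First linear PCA mode for comparison.
pca = PCA(n_components=1).fit(X)
X_lin = pca.inverse_transform(pca.transform(X))

def captured(X_approx):
    """Fraction of total variance captured by an approximation."""
    return 1.0 - np.sum((X - X_approx) ** 2) / np.sum((X - X.mean(0)) ** 2)

print(f"variance captured: NLPCA {captured(X_nl):.3f}, PCA {captured(X_lin):.3f}")
```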
A Spreadsheet in the Mathematics Classroom.
ERIC Educational Resources Information Center
Watkins, Will; Taylor, Monty
1989-01-01
Demonstrates how spreadsheets can be used to implement linear system solving algorithms in college mathematics classes. Lotus 1-2-3 is described, a linear system of equations is illustrated using spreadsheets, and the interplay between applications, computations, and theory is discussed. (four references) (LRW)
NASA Astrophysics Data System (ADS)
Gratadour, D.; Rouan, D.; Grosset, L.; Boccaletti, A.; Clénet, Y.
2015-09-01
Aims: One of the main observational challenges for investigating the central regions of active galactic nuclei (AGN) at short wavelengths, using high angular resolution and high contrast observations, is to directly detect the circumnuclear optically thick material hiding the central core emission when viewed edge-on. The lack of direct evidence is limiting our understanding of AGN, and several scenarios have been proposed to account for the diverse observed aspects of activity in a unified approach. Methods: Observations in the near-infrared spectral range have proved powerful for providing essential hints for characterising the ingredients of the unified model because of the reduced optical depth of the obscuring material. Moreover, it is possible to trace this material through light scattered from the central engine's closest environment, so that polarimetric observations are the ideal tool for distinguishing it from purely thermal and stellar emissions. Results: Here we show strong evidence for an extended nuclear torus at the center of NGC 1068, thanks to new adaptive-optics-assisted polarimetric observations in the near-infrared. The orientation of the polarization vectors proves that there is a structured hourglass-shaped bicone and a compact elongated (20 × 60 pc) nuclear structure perpendicular to the bicone axis. The linearly polarized emission in the bicone is dominated by a centro-symmetric pattern, but the central compact region shows a clear deviation from the latter, with linear polarization aligned perpendicular to the bicone axis. Figure 2 is available in electronic form at http://www.aanda.org. Based on data obtained with SPHERE, an instrument designed and built by a consortium consisting of IPAG (France), MPIA (Germany), LAM (France), LESIA (France), Laboratoire Lagrange (France), INAF - Osservatorio di Padova (Italy), Observatoire de Genève (Switzerland), ETH Zurich (Switzerland), NOVA (Netherlands), ONERA (France), and ASTRON (Netherlands) in collaboration with ESO.
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to the mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank (up to 26 mm); this was found to limit the accuracy of all studied scaling methods. Errors in the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods. Copyright © 2016 The Author. Published by Elsevier Ltd. All rights reserved.
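A minimal sketch of the non-linear morphing step follows. The landmark correspondences, the synthetic subject distortion, and the use of SciPy's thin-plate-spline interpolator are all illustrative stand-ins for the paper's SSM-based reconstruction, not its actual pipeline.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Hypothetical landmark sets: bone-surface points of a reference model
# and corresponding points of a subject (here a synthetic non-linear
# distortion of the reference stands in for a real subject).
ref_landmarks = rng.uniform(0.0, 1.0, (200, 3))
subj_landmarks = ref_landmarks + 0.05 * np.sin(2 * np.pi * ref_landmarks[:, [0]])

# Fit a smooth non-linear (thin-plate-spline) mapping from the
# reference geometry to the subject geometry using the landmarks...
warp = RBFInterpolator(ref_landmarks, subj_landmarks,
                       kernel="thin_plate_spline", smoothing=1e-6)

# ...then apply it to muscle path points digitised on the reference
# model to obtain subject-specific muscle geometry.
ref_muscle_points = rng.uniform(0.0, 1.0, (30, 3))
subj_muscle_points = warp(ref_muscle_points)
print(subj_muscle_points.shape)  # (30, 3)
```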
Highly Productive Application Development with ViennaCL for Accelerators
NASA Astrophysics Data System (ADS)
Rupp, K.; Weinbub, J.; Rudolf, F.
2012-12-01
The use of graphics processing units (GPUs) for the acceleration of general purpose computations has become very attractive in recent years, and accelerators based on many integrated CPU cores are about to hit the market. However, there are discussions about the benefit of GPU computing when comparing the reduction of execution times with the increased development effort [1]. To counter these concerns, our open-source linear algebra library ViennaCL [2,3] uses modern programming techniques such as generic programming in order to provide a convenient access layer for accelerator and GPU computing. Other GPU-accelerated libraries are primarily tuned for performance, but less tailored to productivity and portability: MAGMA [4] provides dense linear algebra operations via a LAPACK-comparable interface, but no dedicated matrix and vector types. Cusp [5] is closest in functionality to ViennaCL for sparse matrices, but is based on CUDA and thus restricted to devices from NVIDIA; moreover, Cusp provides no convenience layer for dense linear algebra. ViennaCL is written in C++ and uses OpenCL to access the resources of accelerators, GPUs and multi-core CPUs in a unified way. On the one hand, the library provides iterative solvers from the family of Krylov methods, including various preconditioners, for the solution of linear systems typically obtained from the discretization of partial differential equations. On the other hand, dense linear algebra operations are supported, including algorithms such as QR factorization and singular value decomposition. The user application interface of ViennaCL is compatible with uBLAS [6], which is part of the peer-reviewed Boost C++ libraries [7]. This allows existing applications based on uBLAS to be ported to ViennaCL with a minimum of effort. Conversely, the interface compatibility allows the iterative solvers from ViennaCL to be used with uBLAS types directly, thus enabling code reuse beyond CPU-GPU boundaries. Out-of-the-box support for types from the Eigen library [8] and MTL 4 [9] is provided as well, enabling a seamless transition from single-core CPU to GPU and multi-core CPU computations. Case studies from the numerical solution of PDEs are given and isolated performance benchmarks are discussed. Also, pitfalls in scientific computing with GPUs and accelerators are addressed, allowing for a first evaluation of whether these novel devices can be mapped well to certain applications. References: [1] R. Bordawekar et al., Technical Report, IBM, 2010 [2] ViennaCL library. Online: http://viennacl.sourceforge.net/ [3] K. Rupp et al., GPUScA, 2010 [4] MAGMA library. Online: http://icl.cs.utk.edu/magma/ [5] Cusp library. Online: http://code.google.com/p/cusp-library/ [6] uBLAS library. Online: http://www.boost.org/libs/numeric/ublas/ [7] Boost C++ Libraries. Online: http://www.boost.org/ [8] Eigen library. Online: http://eigen.tuxfamily.org/ [9] MTL 4 Library. Online: http://www.mtl4.org/
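For readers without an OpenCL setup, the following Python/SciPy sketch mirrors the workflow described above (a sparse matrix from a PDE discretisation, a preconditioner, a Krylov solve). It is an analogue for illustration only, not ViennaCL's C++ API.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, cg, spilu

# Sparse system from a 1D Poisson finite-difference discretisation,
# the typical target of Krylov solvers like those in ViennaCL.
n = 1000
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete-LU preconditioner, playing the role of the
# preconditioners shipped with such libraries.
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

# Preconditioned conjugate gradient iteration.
x, info = cg(A, b, M=M)
print("converged:", info == 0, "| residual:", np.linalg.norm(A @ x - b))
```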
1981-06-15
[Fragmentary record: SACLANTCEN SR-50, "A résumé of stochastic, time-varying, linear system theory", with application to active sonar signal processing problems.] Surviving fragments include a figure caption on the normalized energy in the ambiguity function, the observation that the order in which systems are concatenated is unimportant, exactly analogous to the results of time-invariant linear system theory, and a reference: MEIER, L. A résumé of deterministic time-varying linear system theory with application to active sonar signal processing problems, SACLANTCEN.
A Unified Approach to Modeling Multidisciplinary Interactions
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.; Bhatia, Kumar G.
2000-01-01
There are a number of existing methods to transfer information among various disciplines. For a multidisciplinary application with n disciplines, the traditional methods may be required to model (n^2 - n) interactions. This paper presents a unified three-dimensional approach that reduces the number of interactions from (n^2 - n) to 2n by using a computer-aided design model. The proposed modeling approach unifies the interactions among various disciplines. The approach is independent of specific discipline implementation, and a number of existing methods can be reformulated in the context of the proposed unified approach. This paper provides an overview of the proposed unified approach and reformulations for two existing methods. The unified approach is specially tailored for application environments where the geometry is created and managed through a computer-aided design system. Results are presented for a blended-wing body and a high-speed civil transport.
NASA Astrophysics Data System (ADS)
Chowdhury, Aritra; Sevinsky, Christopher J.; Santamaria-Pang, Alberto; Yener, Bülent
2017-03-01
The cancer diagnostic workflow is typically performed by highly specialized and trained pathologists, for whom the analysis is expensive both in terms of time and money. This work focuses on grade classification in colon cancer. The analysis is performed over 3 protein markers, namely E-cadherin, beta-actin and collagen IV. In addition, we also use a virtual Hematoxylin and Eosin (HE) stain. This study compares various ways of manipulating the information from the 4 different images of the tissue samples to arrive at a coherent and unified response based on the data at our disposal. Pre-trained convolutional neural networks (CNNs) are the method of choice for feature extraction. The AlexNet architecture trained on the ImageNet database is used for this purpose. We extract a 4096-dimensional feature vector corresponding to the 6th layer in the network. A linear SVM is used to classify the data. The information from the 4 different images pertaining to a particular tissue sample is combined using the following techniques: soft voting, hard voting, multiplication, addition, linear combination, concatenation and multi-channel feature extraction. In general, we observe better results when the information from the four images is combined than when individual feature representations are used. We use 5-fold cross validation to perform the experiments. The best results are obtained when the various features are linearly combined together, resulting in a mean accuracy of 91.27%.
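The fusion rules are simple to express in code. The sketch below uses random stand-in features and labels (so accuracies hover near chance); the point is the plumbing of concatenation versus linear combination ahead of a linear SVM. The combination weights and data shapes are illustrative assumptions, and real features would come from a pre-trained CNN layer.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Stand-in for pre-trained CNN features: one 4096-dimensional vector
# per stain/channel image (4 images) of each tissue sample.
n_samples, n_stains, dim = 120, 4, 4096
feats = rng.normal(size=(n_samples, n_stains, dim))
labels = rng.integers(0, 2, n_samples)

# Two of the fusion rules compared in the study:
concat = feats.reshape(n_samples, -1)              # concatenation
weights = np.array([0.4, 0.3, 0.2, 0.1])           # illustrative weights
lin_comb = np.tensordot(weights, feats.transpose(1, 0, 2), axes=1)

for name, X in [("concatenation", concat), ("linear combination", lin_comb)]:
    acc = cross_val_score(LinearSVC(), X, labels, cv=5).mean()
    print(f"{name}: 5-fold CV accuracy = {acc:.3f}")
```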
40 CFR 1065.307 - Linearity verification.
Code of Federal Regulations, 2012 CFR
2012-07-01
... meter at different flow rates. Use a gravimetric reference measurement (such as a scale, balance, or... nitrogen. Select gas divisions that you typically use. Use a selected gas division as the measured value.... For linearity verification for gravimetric PM balances, use external calibration weights that...
40 CFR 1065.307 - Linearity verification.
Code of Federal Regulations, 2013 CFR
2013-07-01
... meter at different flow rates. Use a gravimetric reference measurement (such as a scale, balance, or... nitrogen. Select gas divisions that you typically use. Use a selected gas division as the measured value.... For linearity verification for gravimetric PM balances, use external calibration weights that...
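The verification logic common to both editions of this rule amounts to a least-squares regression of measured values against reference quantities, with the intercept, slope, standard error of the estimate, and r^2 compared to tabulated criteria. A sketch follows; the data and pass/fail thresholds are illustrative placeholders, not the regulatory values.

```python
import numpy as np

# Reference quantities (e.g., calibration weights or gas divisions)
# and the instrument's mean measured responses at each point.
ref = np.array([0.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
meas = np.array([0.1, 10.1, 19.8, 40.3, 59.6, 80.4, 99.8])

# Least-squares line meas = a1*ref + a0, plus SEE and r^2: the four
# statistics this kind of linearity check compares against criteria.
a1, a0 = np.polyfit(ref, meas, 1)
fit = a1 * ref + a0
see = np.sqrt(np.sum((meas - fit) ** 2) / (len(ref) - 2))
r2 = 1.0 - np.sum((meas - fit) ** 2) / np.sum((meas - meas.mean()) ** 2)

print(f"a0 = {a0:.3f}, a1 = {a1:.4f}, SEE = {see:.3f}, r2 = {r2:.5f}")
# Illustrative thresholds only, not the values in the regulation:
ok = abs(a0) <= 1.0 and 0.98 <= a1 <= 1.02 and r2 >= 0.998
print("linearity verification", "passed" if ok else "failed")
```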
Journal Writing: Enlivening Elementary Linear Algebra.
ERIC Educational Resources Information Center
Meel, David E.
1999-01-01
Examines the various issues surrounding the implementation of journal writing in an undergraduate linear algebra course. Identifies the benefits of incorporating journal writing into an undergraduate mathematics course, which are supported with students' comments from their journals and their reflections on the process. Contains 14 references.
Recovering a hidden polarization by ghost polarimetry.
Janassek, Patrick; Blumenstein, Sébastien; Elsäßer, Wolfgang
2018-02-15
By exploiting polarization correlations of light from a broadband fiber-based amplified spontaneous emission source we succeed in reconstructing a hidden polarization in a ghost polarimetry experiment in close analogy to ghost imaging and ghost spectroscopy. Thereby, an original linear polarization state in the object arm of a Mach-Zehnder interferometer configuration which has been camouflaged by a subsequent depolarizer is recovered by correlating it with light from a reference beam. The variation of a linear polarizer placed inside the reference beam results in a Malus law type second-order intensity correlation with high contrast, thus measuring a ghost polarigram.
Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig
2016-10-01
To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements were analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95 % confidence interval limits being within the range of ±1.15 mm. A nearly 97.5 % reduction in dose did not significantly affect the height and width measurements of edentulous jaws regardless of the reconstruction algorithm used.
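The agreement analysis used in this study is straightforward to reproduce. The sketch below computes Bland-Altman bias and 95% limits of agreement for hypothetical paired measurements; the data are synthetic stand-ins, and the ±1.15 mm figure is simply the bound reported above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired implant-site measurements (mm): reference-dose
# protocol vs one ultralow-dose test protocol on the same sites.
ref = rng.uniform(5.0, 15.0, 40)
test = ref + rng.normal(0.02, 0.4, 40)

# Bland-Altman statistics: mean difference (bias) and 95% limits of
# agreement, the quantities compared against the +/-1.15 mm bound.
diff = test - ref
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:+.3f} mm")
print(f"95% limits of agreement: {bias - loa:+.3f} to {bias + loa:+.3f} mm")
```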
Weyl relativity: a novel approach to Weyl's ideas
NASA Astrophysics Data System (ADS)
Barceló, Carlos; Carballo-Rubio, Raúl; Garay, Luis J.
2017-06-01
In this paper we revisit the motivation and construction of a unified theory of gravity and electromagnetism, following Weyl's insights regarding the appealing potential connection between the gauge invariance of electromagnetism and the conformal invariance of the gravitational field. We highlight that changing the local symmetry group of spacetime permits to construct a theory in which these two symmetries are combined into a putative gauge symmetry but with second-order field equations and non-trivial mass scales, unlike the original higher-order construction by Weyl. We prove that the gravitational field equations are equivalent to the (trace-free) Einstein field equations, ensuring their compatibility with known tests of general relativity. As a corollary, the effective cosmological constant is rendered radiatively stable due to Weyl invariance. A novel phenomenological consequence characteristic of this construction, potentially relevant for cosmological observations, is the existence of an energy scale below which effects associated with the non-integrability of spacetime distances, and an effective mass for the electromagnetic field, appear simultaneously (as dual manifestations of the use of Weyl connections). We explain how former criticisms against Weyl's ideas lose most of their power in its present reincarnation, which we refer to as Weyl relativity, as it represents a Weyl-invariant, unified description of both the Einstein and Maxwell field equations.
Weyl relativity: a novel approach to Weyl's ideas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barceló, Carlos; Carballo-Rubio, Raúl; Garay, Luis J., E-mail: carlos@iaa.es, E-mail: raul.carballo-rubio@uct.ac.za, E-mail: luisj.garay@ucm.es
In this paper we revisit the motivation and construction of a unified theory of gravity and electromagnetism, following Weyl's insights regarding the appealing potential connection between the gauge invariance of electromagnetism and the conformal invariance of the gravitational field. We highlight that changing the local symmetry group of spacetime permits to construct a theory in which these two symmetries are combined into a putative gauge symmetry but with second-order field equations and non-trivial mass scales, unlike the original higher-order construction by Weyl. We prove that the gravitational field equations are equivalent to the (trace-free) Einstein field equations, ensuring their compatibility with known tests of general relativity. As a corollary, the effective cosmological constant is rendered radiatively stable due to Weyl invariance. A novel phenomenological consequence characteristic of this construction, potentially relevant for cosmological observations, is the existence of an energy scale below which effects associated with the non-integrability of spacetime distances, and an effective mass for the electromagnetic field, appear simultaneously (as dual manifestations of the use of Weyl connections). We explain how former criticisms against Weyl's ideas lose most of their power in its present reincarnation, which we refer to as Weyl relativity, as it represents a Weyl-invariant, unified description of both the Einstein and Maxwell field equations.
MAPU: Max-Planck Unified database of organellar, cellular, tissue and body fluid proteomes.
Zhang, Yanling; Zhang, Yong; Adachi, Jun; Olsen, Jesper V; Shi, Rong; de Souza, Gustavo; Pasini, Erica; Foster, Leonard J; Macek, Boris; Zougman, Alexandre; Kumar, Chanchal; Wisniewski, Jacek R; Jun, Wang; Mann, Matthias
2007-01-01
Mass spectrometry (MS)-based proteomics has become a powerful technology to map the protein composition of organelles, cell types and tissues. In our department, a large-scale effort to map these proteomes is complemented by the Max-Planck Unified (MAPU) proteome database. MAPU contains several body fluid proteomes, including plasma, urine, and cerebrospinal fluid. Cell lines have been mapped to a depth of several thousand proteins, and the red blood cell proteome has also been analyzed in depth. The liver proteome is represented with 3200 proteins. By employing high resolution MS and stringent validation criteria, false positive identification rates in MAPU are lower than 1:1000. Thus MAPU datasets can serve as reference proteomes in biomarker discovery. MAPU contains the peptides identifying each protein, measured masses, scores and intensities, and is freely available at http://www.mapuproteome.com using a clickable interface of cell or body parts. Proteome data can be queried across proteomes by protein name, accession number, sequence similarity, peptide sequence and annotation information. More than 4500 mouse and 2500 human proteins have already been identified in at least one proteome. Basic annotation information and links to other public databases are provided in MAPU, and we plan to add further analysis tools.
The argument for a unified approach to non-ionizing radiation protection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perala, R.A.; Rigden, G.J.; Pfeffer, R.A.
1993-12-01
In the next decade military equipment will be required to operate in severe electromagnetic environments. These environments are expected to contain most non-ionizing frequencies (D.C. to GHz), from hostile and/or non-hostile sources, and be severe enough to cause temporary upset or even catastrophic failure of electronic equipment. Over the past thirty years considerable emphasis has been placed on hardening critical systems to one or more of these non-ionizing radiation environments, the most prevalent being the nuclear-induced electromagnetic pulse (EMP). From this technology development there has evolved a hardening philosophy that applies to most of these non-ionizing radiation environments. The philosophy, which stresses the application of zonal shields plus penetration protection, can provide low-cost hardening against such diverse non-ionizing radiation as p-static, lightning, electromagnetic interference (EMI), EMP, high intensity radiated fields (HIRF), electromagnetic radiation (EMR), and high power microwaves (HPM). The objective in this paper is to describe the application of this philosophy to Army helicopters. The authors develop a unified specification, complete with threat definitions and test methods, which illustrates integration of EMP, lightning, and HIRF at the box qualification level. This paper is a summary of the effort documented in a cited reference.
Palmer, Donald; Feldman, Valerie
2017-12-01
This article draws on a report prepared for the Australian Royal Commission into Institutional Responses to Child Sexual Abuse (Palmer et al., 2016) to develop a more comprehensive analysis of the role that organizational culture plays in child sexual abuse in institutional contexts, where institutional contexts are taken to be formal organizations that include children among their members (referred to here as "youth-serving organizations"). We begin by integrating five strains of theory and research on organizational culture from organizational sociology and management theory into a unified framework for analysis. We then elaborate the main paths through which organizational culture can influence child sexual abuse in youth-serving organizations. We then use our unified analytic framework and our understanding of the main paths through which organizational culture can influence child sexual abuse in youth-serving organizations to analyze the role that organizational culture plays in the perpetration, detection, and response to child sexual abuse in youth-serving organizations. We selectively illustrate our analysis with case materials compiled by the Royal Commission into Institutional Responses to Child Sexual Abuse and reports of child sexual abuse published in a variety of other sources. We conclude with a brief discussion of the policy implications of our analysis. Copyright © 2017. Published by Elsevier Ltd.
PDBe: improved accessibility of macromolecular structure data from PDB and EMDB
Velankar, Sameer; van Ginkel, Glen; Alhroub, Younes; Battle, Gary M.; Berrisford, John M.; Conroy, Matthew J.; Dana, Jose M.; Gore, Swanand P.; Gutmanas, Aleksandras; Haslam, Pauline; Hendrickx, Pieter M. S.; Lagerstedt, Ingvar; Mir, Saqib; Fernandez Montecelo, Manuel A.; Mukhopadhyay, Abhik; Oldfield, Thomas J.; Patwardhan, Ardan; Sanz-García, Eduardo; Sen, Sanchayita; Slowley, Robert A.; Wainwright, Michael E.; Deshpande, Mandar S.; Iudin, Andrii; Sahni, Gaurav; Salavert Torres, Jose; Hirshberg, Miriam; Mak, Lora; Nadzirin, Nurul; Armstrong, David R.; Clark, Alice R.; Smart, Oliver S.; Korir, Paul K.; Kleywegt, Gerard J.
2016-01-01
The Protein Data Bank in Europe (http://pdbe.org) accepts and annotates depositions of macromolecular structure data in the PDB and EMDB archives and enriches, integrates and disseminates structural information in a variety of ways. The PDBe website has been redesigned based on an analysis of user requirements, and now offers intuitive access to improved and value-added macromolecular structure information. Unique value-added information includes lists of reviews and research articles that cite or mention PDB entries as well as access to figures and legends from full-text open-access publications that describe PDB entries. A powerful new query system not only shows all the PDB entries that match a given query, but also shows the ‘best structures’ for a given macromolecule, ligand complex or sequence family using data-quality information from the wwPDB validation reports. A PDBe RESTful API has been developed to provide unified access to macromolecular structure data available in the PDB and EMDB archives as well as value-added annotations, e.g. regarding structure quality and up-to-date cross-reference information from the SIFTS resource. Taken together, these new developments facilitate unified access to macromolecular structure data in an intuitive way for non-expert users and support expert users in analysing macromolecular structure data. PMID:26476444
A preliminary investigation of the effects of the unified protocol on temperament.
Carl, Jenna R; Gallagher, Matthew W; Sauer-Zavala, Shannon E; Bentley, Kate H; Barlow, David H
2014-08-01
Previous research has shown that two dimensions of temperament referred to as neuroticism/behavioral inhibition (N/BI) and extraversion/behavioral activation (E/BA) are key risk factors in the development and maintenance of anxiety and mood disorders (Brown & Barlow, 2009). Given such findings, these temperamental dimensions may represent promising treatment targets for individuals with emotional disorders; however, to date, few studies have investigated the effects of psychological treatments on temperamental constructs generally assumed to be "stable, inflexible, and pervasive" (American Psychiatric Association, 2000). The present study addresses this gap in the literature by examining the effects of the Unified Protocol for Transdiagnostic Treatment of Emotional Disorders (UP; Barlow et al., 2011), a cognitive-behavioral therapy designed to target core processes of N/BI and E/BA temperaments, in a sample of adults with principal anxiety disorders and a range of comorbid conditions. Results revealed small effects of the UP on N/BI and E/BA compared with a waitlist control group at post-treatment. Additionally, decreases in N/BI and increases in E/BA during treatment were associated with improvements in symptoms, functioning, and quality of life. Findings provide preliminary support for the notion that the UP treatment facilitates beneficial changes in dimensions of temperament. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Draper, David W.; Newell, David A.; Wentz, Frank J.; Krimchansky, Sergey; Jackson, Gail
2015-01-01
The Global Precipitation Measurement (GPM) mission is an international satellite mission that uses measurements from an advanced radar/radiometer system on a core observatory as reference standards to unify and advance precipitation estimates made by a constellation of research and operational microwave sensors. The GPM core observatory was launched on February 27, 2014 at 18:37 UT into a 65° inclination non-sun-synchronous orbit. GPM focuses on precipitation as a key component of the Earth's water and energy cycle, and has the capability to provide near-real-time observations for tracking severe weather events, monitoring freshwater resources, and other societal applications. The GPM microwave imager (GMI) on the core observatory provides the direct link to the constellation radiometer sensors, which fly mainly in polar orbits. The GMI sensitivity, accuracy, and stability play a crucial role in unifying the measurements from the GPM constellation of satellites. The instrument has exhibited highly stable operations through the duration of the calibration/validation period. This paper provides an overview of the GMI instrument and a report of early on-orbit commissioning activities. It discusses the on-orbit radiometric sensitivity, absolute calibration accuracy, and stability for each radiometric channel. Index Terms: calibration accuracy, passive microwave remote sensing, radiometric sensitivity.
NASA Technical Reports Server (NTRS)
Shih, Tsan-Hsing; Povinelli, Louis A.; Liu, Nan-Suey; Potapczuk, Mark G.; Lumley, J. L.
1999-01-01
The asymptotic solutions, described by Tennekes and Lumley (1972), for surface flows in a channel, pipe or boundary layer at large Reynolds numbers are revisited. These solutions can be extended to more complex flows, such as flows with various pressure gradients, zero wall stress, rough surfaces, etc. In computational fluid dynamics (CFD), these solutions can be used as boundary conditions to bridge the near-wall region of turbulent flows, so that fine grids near the wall are not needed unless the near-wall flow structures themselves must be resolved. These solutions are referred to as the wall functions. Furthermore, a generalized and unified law of the wall, valid for the whole surface layer (including the viscous sublayer, buffer layer and inertial sublayer), is analytically constructed. The generalized law of the wall shows that the effect of both adverse and favorable pressure gradients on the surface flow is very significant. Such a unified wall function will be useful not only for deriving analytic expressions for surface flow properties but also for bringing great convenience to CFD methods by allowing accurate boundary conditions to be placed at any location away from the wall. The extended wall functions introduced in this paper can be used for complex flows with acceleration, deceleration, separation, recirculation and rough surfaces.
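The paper derives its own generalized law; as a stand-in illustration of a unified profile valid across the viscous sublayer, buffer layer and inertial sublayer, the sketch below evaluates and inverts Spalding's classical composite law of the wall. The inversion for u+ given y+ is what a wall-function boundary condition performs; the constants and sample points are the usual textbook choices, not the paper's.

```python
import numpy as np
from scipy.optimize import brentq

KAPPA, B = 0.41, 5.0  # von Karman constant and log-law intercept

def spalding_yplus(uplus):
    """Spalding's unified law of the wall, y+ as a function of u+,
    valid through the viscous sublayer, buffer layer and log layer."""
    ku = KAPPA * uplus
    return uplus + np.exp(-KAPPA * B) * (
        np.exp(ku) - 1.0 - ku - ku**2 / 2.0 - ku**3 / 6.0)

def uplus_of_yplus(yplus):
    """Invert the profile for u+(y+), as a wall-function boundary
    condition would, using a bracketing root solve."""
    return brentq(lambda up: spalding_yplus(up) - yplus, 0.0, 50.0)

for yp in (1.0, 5.0, 30.0, 100.0, 1000.0):
    up = uplus_of_yplus(yp)
    print(f"y+ = {yp:7.1f}: u+ = {up:6.2f} "
          f"(log law gives {np.log(yp) / KAPPA + B:6.2f})")
```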
NASA Astrophysics Data System (ADS)
Ohtsu, Masayasu
1991-04-01
An application of moment tensor analysis to acoustic emission (AE) is studied to elucidate crack types and orientations of AE sources. In the analysis, simplified treatment is desirable, because hundreds of AE records are obtained from just one experiment and sophisticated treatment is realistically cumbersome. Consequently, a moment tensor inversion based on P wave amplitudes is employed to determine the six independent tensor components. Selecting only the P wave portion of the full-space Green's function of a homogeneous and isotropic material, a computer code named SiGMA (simplified Green's functions for the moment tensor analysis) is developed for the AE inversion analysis. To classify crack type and to determine crack orientation from the moment tensor components, a unified decomposition of eigenvalues into a double-couple (DC) part, a compensated linear vector dipole (CLVD) part, and an isotropic part is proposed. The aim of the decomposition is to determine the proportions of the shear contribution (DC) and the tensile contribution (CLVD + isotropic) in AE sources and to classify cracks according to the dominant motion. Crack orientations determined from eigenvectors are presented as crack-opening vectors for tensile cracks and fault motion vectors for shear cracks, instead of stereonets. The SiGMA inversion and the unified decomposition are applied to synthetic data and to AE waveforms detected during an in situ hydrofracturing test. To check the accuracy of the procedure, numerical experiments are performed on the synthetic waveforms, including cases with 10% random noise added. Results show reasonable agreement with the assumed crack configurations. Although the maximum error is approximately 10% with respect to the ratios, the differences in crack orientations are less than 7°. AE waveforms detected by eight accelerometers deployed during the hydrofracturing test are analyzed. Crack types and orientations determined are in reasonable agreement with a failure plane predicted from borehole TV observation. The results suggest that tensile cracks are generated first at weak seams and then shear cracks follow on the opened joints.
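The eigenvalue decomposition at the heart of this method can be sketched compactly. The ratios and the shear/tensile/mixed thresholds below follow the SiGMA-style algebra as commonly reported in the AE literature; this is an illustrative reimplementation, not the published code, and it assumes the largest eigenvalue is positive.

```python
import numpy as np

def sigma_decomposition(M):
    """Unified eigenvalue decomposition of a moment tensor into shear
    (DC), CLVD and isotropic ratios, SiGMA-style. Eigenvalues are
    sorted descending and normalised by the largest (assumed > 0)."""
    evals, evecs = np.linalg.eigh(M)
    e = evals[::-1] / evals[-1]                   # e[0] = 1 by construction
    X = e[1] - e[2]                               # double-couple (shear) ratio
    Y = (2.0 / 3.0) * (1.0 - 2.0 * e[1] + e[2])   # CLVD ratio
    Z = (1.0 + e[1] + e[2]) / 3.0                 # isotropic ratio
    crack = "shear" if X > 0.6 else ("tensile" if X < 0.4 else "mixed")
    return X, Y, Z, crack, evecs[:, ::-1]

# Pure double couple: expect X = 1, Y = Z = 0, classified as shear.
M_dc = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
X, Y, Z, crack, _ = sigma_decomposition(M_dc)
print(f"X = {X:.2f}, Y = {Y:.2f}, Z = {Z:.2f} -> {crack} crack")
```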
2013-01-01
Background This study aims to improve the accuracy of Bioelectrical Impedance Analysis (BIA) prediction equations for estimating fat free mass (FFM) of the elderly by using a non-linear Back Propagation Artificial Neural Network (BP-ANN) model, and to compare the predictive accuracy with that of the linear regression model, using dual-energy X-ray absorptiometry (DXA) as the reference method. Methods A total of 88 Taiwanese elderly adults were recruited as subjects. Linear regression equations and a BP-ANN prediction equation were developed using impedances and other anthropometrics for predicting the reference FFM measured by DXA (FFMDXA) in 36 male and 26 female Taiwanese elderly adults. The FFM estimated by BIA prediction equations using the traditional linear regression model (FFMLR) and the BP-ANN model (FFMANN) were compared to FFMDXA. The measurements of an additional 26 elderly adults were used to validate the accuracy of the predictive models. Results The significant predictors were impedance, gender, age, height and weight in the developed linear regression model (FFMLR) for predicting FFM (coefficient of determination, r2 = 0.940; standard error of estimate (SEE) = 2.729 kg; root mean square error (RMSE) = 2.571 kg, P < 0.001). The same predictors were set as the variables of the input layer, with five neurons, in the BP-ANN model (r2 = 0.987 with SD = 1.192 kg and a relatively lower RMSE = 1.183 kg), which had greater (improved) accuracy for estimating FFM compared with the linear model. The results showed that better agreement existed between FFMANN and FFMDXA than between FFMLR and FFMDXA. Conclusion When the performance of the developed prediction equations for estimating reference FFMDXA is compared, the linear model has a lower r2 with a larger SD in its predictive results than the BP-ANN model, indicating that the ANN model is more suitable for estimating FFM. PMID:23388042
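A sketch of this kind of comparison follows, with synthetic stand-ins for the study's predictors and DXA reference values. The generating model, sample split, and network settings are illustrative assumptions, not the study's data or code.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-ins for the predictors (impedance, gender, age,
# height, weight) and a mildly non-linear "true" FFM relation.
n = 88
Z = rng.uniform(400, 700, n)            # impedance (ohm)
sex = rng.integers(0, 2, n).astype(float)
age = rng.uniform(65, 90, n)
ht = rng.uniform(145, 180, n)           # height (cm)
wt = rng.uniform(45, 85, n)             # weight (kg)
ffm = (0.65 * ht**2 / Z + 0.25 * wt + 6.0 * sex - 0.05 * age
       + rng.normal(0.0, 1.2, n))

X = np.column_stack([Z, sex, age, ht, wt])
train, test = slice(0, 62), slice(62, None)   # 62 to develop, 26 to validate

models = {
    "linear regression": LinearRegression(),
    "BP-ANN (5 hidden neurons)": make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(5,), solver="lbfgs",
                     max_iter=20000, random_state=0)),
}
for name, model in models.items():
    model.fit(X[train], ffm[train])
    rmse = mean_squared_error(ffm[test], model.predict(X[test])) ** 0.5
    print(f"{name}: validation RMSE = {rmse:.3f} kg")
```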
A Large-Scale, Multiagency Approach to Defining a Reference Network for Pacific Northwest Streams
NASA Astrophysics Data System (ADS)
Miller, Stephanie; Eldred, Peter; Muldoon, Ariel; Anlauf-Dunn, Kara; Stein, Charlie; Hubler, Shannon; Merrick, Lesley; Haxton, Nick; Larson, Chad; Rehn, Andrew; Ode, Peter; Vander Laan, Jake
2016-12-01
Aquatic monitoring programs vary widely in objectives and design. However, each program faces the unifying challenge of assessing conditions and quantifying reasonable expectations for measured indicators. A common approach for setting resource expectations is to define reference conditions that represent areas of least human disturbance or the most natural state of a resource, characterized by the range of natural variability across a region of interest. Identification of reference sites often relies heavily on professional judgment, resulting in varying and unrepeatable methods. Standardized methods for data collection, site characterization, and reference site selection facilitate greater cooperation among assessment programs and the development of assessment tools that are readily shareable and comparable. We present an example, useful to the broader global monitoring community, of how to create a consistent and transparent reference network for multiple stream resource agencies. Our case study offers a simple illustration of how reference sites can be used, at the landscape level, to link upslope management practices to a specific in-channel response. We found that managed areas, particularly those with high road densities, have more fine sediment than areas with fewer roads. While this example uses data from only one of the partner agencies, data collected in a similar manner can be combined to create a larger, more robust dataset. We hope that this starts a dialog on more standardized, inter-agency approaches to evaluating data. Creating more consistency in physical and biological field protocols will increase the ability to share data.
78 FR 46274 - Pyroxasulfone; Pesticide Tolerances
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-31
... following a 4-week dermal exposure producing local inflammation and systemic effects of minimal to mild...- linear approach (i.e., Reference dose (RfD)) will adequately account for all chronic toxicity, including... and other relevant data. Cancer risk is quantified using a linear or nonlinear approach. If sufficient...
Structured chaos in a devil's staircase of the Josephson junction.
Shukrinov, Yu M; Botha, A E; Medvedeva, S Yu; Kolahchi, M R; Irie, A
2014-09-01
The phase dynamics of Josephson junctions (JJs) under external electromagnetic radiation is studied through numerical simulations. Current-voltage characteristics, Lyapunov exponents, and Poincaré sections are analyzed in detail. It is found that the subharmonic Shapiro steps at certain parameters are separated by structured chaotic windows. By performing a linear regression on the linear part of the data, a fractal dimension of D = 0.868 is obtained, with an uncertainty of ±0.012. The chaotic regions exhibit scaling similarity, and it is shown that the devil's staircase of the system can form a backbone that unifies and explains the highly correlated and structured chaotic behavior. These features suggest a system possessing multiple complete devil's staircases. The onset of chaos for subharmonic steps occurs through the Feigenbaum period doubling scenario. Universality in the sequence of periodic windows is also demonstrated. Finally, the influence of the radiation and JJ parameters on the structured chaos is investigated, and it is concluded that the structured chaos is a stable formation over a wide range of parameter values.
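The dimension estimate above rests on a linear regression in log-log coordinates. The sketch below validates that procedure on the middle-thirds Cantor set, whose box-counting dimension is known exactly (ln 2 / ln 3 ≈ 0.631); the paper's analogous regression over the staircase structure yields D = 0.868. The construction depth and box sizes are illustrative choices.

```python
import numpy as np

def cantor_points(level):
    """Endpoints of the middle-thirds Cantor set construction up to
    a finite level, a stand-in set with a known fractal dimension."""
    pts = np.array([0.0, 1.0])
    for _ in range(level):
        pts = np.concatenate([pts / 3.0, pts / 3.0 + 2.0 / 3.0])
    return pts

pts = cantor_points(12)

# Box counting: N(eps) boxes of size eps cover the set; the dimension
# is the regression slope of log N against log(1/eps).
eps = 3.0 ** -np.arange(1, 11)
counts = [len(np.unique(np.floor(pts / e))) for e in eps]
slope, _ = np.polyfit(np.log(1.0 / eps), np.log(counts), 1)
print(f"estimated D = {slope:.3f} (exact ln2/ln3 = {np.log(2) / np.log(3):.3f})")
```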
Structured chaos in a devil's staircase of the Josephson junction
NASA Astrophysics Data System (ADS)
Shukrinov, Yu. M.; Botha, A. E.; Medvedeva, S. Yu.; Kolahchi, M. R.; Irie, A.
2014-09-01
The phase dynamics of Josephson junctions (JJs) under external electromagnetic radiation is studied through numerical simulations. Current-voltage characteristics, Lyapunov exponents, and Poincaré sections are analyzed in detail. It is found that the subharmonic Shapiro steps at certain parameters are separated by structured chaotic windows. By performing a linear regression on the linear part of the data, a fractal dimension of D = 0.868 is obtained, with an uncertainty of ±0.012. The chaotic regions exhibit scaling similarity, and it is shown that the devil's staircase of the system can form a backbone that unifies and explains the highly correlated and structured chaotic behavior. These features suggest a system possessing multiple complete devil's staircases. The onset of chaos for subharmonic steps occurs through the Feigenbaum period doubling scenario. Universality in the sequence of periodic windows is also demonstrated. Finally, the influence of the radiation and JJ parameters on the structured chaos is investigated, and it is concluded that the structured chaos is a stable formation over a wide range of parameter values.
Three-dimensional finite amplitude electroconvection in dielectric liquids
NASA Astrophysics Data System (ADS)
Luo, Kang; Wu, Jian; Yi, Hong-Liang; Tan, He-Ping
2018-02-01
Charge injection induced electroconvection in a dielectric liquid lying between two parallel plates is numerically simulated in three dimensions (3D) using a unified lattice Boltzmann method (LBM). Cellular flow patterns and their subcritical bifurcation phenomena of 3D electroconvection are numerically investigated for the first time. A unit conversion is also derived to connect the LBM system to the real physical system. The 3D LBM codes are validated by three carefully chosen cases and all results are found to be highly consistent with the analytical solutions or other numerical studies. For strong injection, the steady state roll, polygon, and square flow patterns are observed under different initial disturbances. Numerical results show that the hexagonal cell with the central region being empty of charge and centrally downward flow is preferred in symmetric systems under random initial disturbance. For weak injection, the numerical results show that the flow directly passes from the motionless state to turbulence once the system loses its linear stability. In addition, the numerically predicted linear and finite amplitude stability criteria of different flow patterns are discussed.
Asymptotic theory of neutral stability of the Couette flow of a vibrationally excited gas
NASA Astrophysics Data System (ADS)
Grigor'ev, Yu. N.; Ershov, I. V.
2017-01-01
An asymptotic theory of the neutral stability curve for a supersonic plane Couette flow of a vibrationally excited gas is developed. The initial mathematical model consists of equations of two-temperature viscous gas dynamics, which are used to derive a spectral problem for a linear system of eighth-order ordinary differential equations within the framework of the classical linear stability theory. Unified transformations of the system for all shear flows are performed in accordance with the classical Lin scheme. The problem is reduced to an algebraic secular equation with separation into the "inviscid" and "viscous" parts, which is solved numerically. It is shown that the thus-calculated neutral stability curves agree well with the previously obtained results of the direct numerical solution of the original spectral problem. In particular, the critical Reynolds number increases with excitation enhancement, and the neutral stability curve is shifted toward the domain of higher wave numbers. This is also confirmed by means of solving an asymptotic equation for the critical Reynolds number at the Mach number M ≤ 4.
NASA Astrophysics Data System (ADS)
Song, Yan; Fang, Xiaosheng; Diao, Qingda
2016-03-01
In this paper, we discuss the mixed H2/H∞ distributed robust model predictive control problem for polytopic uncertain systems subject to randomly occurring actuator saturation and packet loss. The global system is decomposed into several subsystems, and all the subsystems are connected by a fixed topology network, which defines the packet-loss pattern among the subsystems. To make better use of the information successfully transmitted over the network, both the phenomena of actuator saturation and packet loss resulting from the limitation of the communication bandwidth are taken into consideration. A novel distributed controller model is established to account for the actuator saturation and packet loss in a unified representation by using two sets of Bernoulli distributed white sequences with known conditional probabilities. With the nonlinear feedback control law represented by the convex hull of a group of linear feedback laws, the distributed controllers for the subsystems are obtained by solving a linear matrix inequality (LMI) optimisation problem. Finally, numerical studies demonstrate the effectiveness of the proposed techniques.
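A toy simulation helps fix the setup: a discrete-time subsystem driven through actuator saturation, with feedback packets dropped by a Bernoulli process. The plant, gain, and hold-last-input compensation below are illustrative assumptions; the paper's contribution is designing the gains via LMIs, which this sketch does not do.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete-time plant x+ = A x + B sat(u), with state feedback
# delivered over a lossy link: with probability p_loss the packet is
# dropped and the previous control input is held.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
K = np.array([[-8.0, -3.0]])      # a stabilising gain, assumed given
u_max, p_loss = 1.0, 0.3

x = np.array([1.0, 0.0])
u_held = 0.0
for k in range(200):
    delivered = rng.random() >= p_loss       # Bernoulli packet arrival
    if delivered:
        u_held = float(K @ x)
    u = np.clip(u_held, -u_max, u_max)       # actuator saturation
    x = A @ x + B.flatten() * u
print("final state norm:", np.linalg.norm(x))
```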
Structured chaos in a devil's staircase of the Josephson junction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shukrinov, Yu. M.; Botha, A. E., E-mail: bothaae@unisa.ac.za; Medvedeva, S. Yu.
2014-09-01
The phase dynamics of Josephson junctions (JJs) under external electromagnetic radiation is studied through numerical simulations. Current-voltage characteristics, Lyapunov exponents, and Poincaré sections are analyzed in detail. It is found that the subharmonic Shapiro steps at certain parameters are separated by structured chaotic windows. By performing a linear regression on the linear part of the data, a fractal dimension of D = 0.868 is obtained, with an uncertainty of ±0.012. The chaotic regions exhibit scaling similarity, and it is shown that the devil's staircase of the system can form a backbone that unifies and explains the highly correlated and structured chaotic behavior. These features suggest a system possessing multiple complete devil's staircases. The onset of chaos for subharmonic steps occurs through the Feigenbaum period doubling scenario. Universality in the sequence of periodic windows is also demonstrated. Finally, the influence of the radiation and JJ parameters on the structured chaos is investigated, and it is concluded that the structured chaos is a stable formation over a wide range of parameter values.
Parallel Algorithms for Least Squares and Related Computations.
1991-03-22
[Fragmentary record.] This AFOSR-supported report concerns parallel algorithms for least squares and related dense computations in linear algebra. Surviving fragments note that the work was published in a general reference book on parallel algorithms by SIAM, that a Ph.D. dissertation was written with the principal investigator (see publication 6), and that a selection of the more important parallel algorithms for numerical linear algebra is described and put into perspective.
SPAR reference manual. [for stress analysis
NASA Technical Reports Server (NTRS)
Whetstone, W. D.
1974-01-01
SPAR is a system of related programs which may be operated either in batch or demand (teletype) mode. Information exchange between programs is automatically accomplished through one or more direct access libraries, known collectively as the data complex. Card input is command-oriented, in free-field form. Capabilities available in the first production release of the system are fully documented, and include linear stress analysis, linear bifurcation buckling analysis, and linear vibrational analysis.
General methods for determining the linear stability of coronal magnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craig, I.J.D.; Sneyd, A.D.; McClymont, A.N.
1988-12-01
A time integration of a linearized plasma equation of motion has been performed to calculate the ideal linear stability of arbitrary three-dimensional magnetic fields. The convergence rates of the explicit and implicit power methods employed are accelerated by using sequences of cyclic shifts. Growth rates are obtained for Gold-Hoyle force-free equilibria, and the corkscrew-kink instability is found to be very weak. 19 references.
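The shifted power iteration is easy to sketch. Below, cyclic shifts are applied to a symmetric test operator with a known least-stable eigenvalue; the shift schedule is chosen so that, over one cycle, the product of shifted magnitudes is largest for the desired mode. This is a toy illustration of why shift sequences speed convergence, not the paper's plasma operator.

```python
import numpy as np

rng = np.random.default_rng(0)

def power_method(A, shifts, iters=2000):
    """Power iteration on cyclically shifted operators (A - s*I):
    the shift sequence damps the unwanted modes and steers the
    iteration to the eigenvalue of interest."""
    v = rng.normal(size=A.shape[0])
    for k in range(iters):
        s = shifts[k % len(shifts)]
        v = A @ v - s * v
        v /= np.linalg.norm(v)
    return v @ (A @ v)      # Rayleigh quotient estimate

# Symmetric test operator with spectrum in [-1, 0.3]; the dominant
# growth rate (least-stable eigenvalue) is 0.3, not the eigenvalue
# of largest magnitude, so unshifted power iteration would fail.
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))
A = Q @ np.diag(np.linspace(-1.0, 0.3, 50)) @ Q.T

growth = power_method(A, shifts=[-0.2, -0.5, -0.8])
print(f"dominant growth rate ~ {growth:.4f} (exact 0.3000)")
```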
NASA Technical Reports Server (NTRS)
Randall, David A.
1990-01-01
A bulk planetary boundary layer (PBL) model was developed with a simple internal vertical structure and a simple second-order closure, designed for use as a PBL parameterization in a large-scale model. The model allows the mean fields to vary with height within the PBL, and so must address the vertical profiles of the turbulent fluxes, going beyond the usual mixed-layer assumption that the fluxes of conservative variables are linear with height. This is accomplished using the same convective mass flux approach that has also been used in cumulus parameterizations. The purpose is to show that such a mass flux model can include, in a single framework, the compensating subsidence concept, downgradient mixing, and well-mixed layers.
Sadeque, Farig; Xu, Dongfang; Bethard, Steven
2017-01-01
The 2017 CLEF eRisk pilot task focuses on automatically detecting depression as early as possible from a user's posts to Reddit. In this paper we present the techniques employed for the University of Arizona team's participation in this early risk detection shared task. We leveraged external information beyond the small training set, including a preexisting depression lexicon and concepts from the Unified Medical Language System as features. For prediction, we used both sequential (recurrent neural network) and non-sequential (support vector machine) models. Our models perform decently on the test data, and the recurrent neural models perform better than the non-sequential support vector machines while using the same feature sets. PMID:29075167
A Review of Recent Aeroelastic Analysis Methods for Propulsion at NASA Lewis Research Center
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Srivastava, R.; Mehmed, Oral; Stefko, George L.
1993-01-01
This report reviews aeroelastic analyses for propulsion components (propfans, compressors and turbines) being developed and used at NASA LeRC. These aeroelastic analyses include both structural and aerodynamic models. The structural models include a typical section, a beam (with and without disk flexibility), and a finite-element blade model (with plate bending elements). The aerodynamic models are based on the solution of equations ranging from the two-dimensional linear potential equation to the three-dimensional Euler equations for multibladed configurations. Typical calculated results are presented for each aeroelastic model. Suggestions for further research are made. Many of the currently available aeroelastic models and analysis methods are being incorporated in a unified computer program, APPLE (Aeroelasticity Program for Propulsion at LEwis).
Predicting a future lifetime through Box-Cox transformation.
Yang, Z
1999-09-01
In predicting a future lifetime based on a sample of past lifetimes, the Box-Cox transformation method provides a simple and unified procedure that is shown in this article to meet or often outperform the corresponding frequentist solution in terms of coverage probability and average length of prediction intervals. Kullback-Leibler information and second-order asymptotic expansion are used to justify the Box-Cox procedure. Extensive Monte Carlo simulations are also performed to evaluate the small sample behavior of the procedure. Certain popular lifetime distributions, such as the Weibull, inverse Gaussian and Birnbaum-Saunders, serve as illustrative examples. One important advantage of the Box-Cox procedure lies in its easy extension to linear model predictions, where exact frequentist solutions are often not available.
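A minimal sketch of the procedure on Weibull data: transform to near-normality, form a normal-theory prediction interval on the transformed scale, then invert the transformation. Sample size, parameters, and the particular interval construction are illustrative choices, not the paper's exact development.

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(0)

# Past lifetimes from a Weibull distribution, one of the paper's
# illustrative families.
lifetimes = rng.weibull(1.5, 40) * 1000.0

# Box-Cox transform to near-normality (lambda estimated by MLE)...
y, lam = stats.boxcox(lifetimes)

# ...normal-theory prediction interval on the transformed scale...
n, m, s = len(y), y.mean(), y.std(ddof=1)
half = stats.t.ppf(0.975, n - 1) * s * np.sqrt(1.0 + 1.0 / n)

# ...then invert the transformation back to the lifetime scale.
lo, hi = inv_boxcox(m - half, lam), inv_boxcox(m + half, lam)
print(f"lambda = {lam:.3f}")
print(f"95% prediction interval for a future lifetime: ({lo:.0f}, {hi:.0f})")
```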
Optical Implementation of the Optimal Universal and Phase-Covariant Quantum Cloning Machines
NASA Astrophysics Data System (ADS)
Ye, Liu; Song, Xue-Ke; Yang, Jie; Yang, Qun; Ma, Yang-Cheng
Quantum cloning relates to the security of quantum computation and quantum communication. In this paper, we first propose a feasible unified scheme to implement optimal 1 → 2 universal, 1 → 2 asymmetric and symmetric phase-covariant cloning, and 1 → 2 economical phase-covariant quantum cloning machines via only a beam splitter. Then 1 → 3 economical phase-covariant quantum cloning machines can also be realized by adding another beam splitter in the context of linear optics. The scheme is based on the interference of two photons on a beam splitter with different splitting ratios for the vertical and horizontal polarization components. It is shown that, under certain conditions, the scheme is feasible with current experimental technology.
Statistical mechanics of competitive resource allocation using agent-based models
NASA Astrophysics Data System (ADS)
Chakraborti, Anirban; Challet, Damien; Chatterjee, Arnab; Marsili, Matteo; Zhang, Yi-Cheng; Chakrabarti, Bikas K.
2015-01-01
Demand outstrips available resources in most situations, which gives rise to competition, interaction and learning. In this article, we review a broad spectrum of multi-agent models of competition (El Farol Bar problem, Minority Game, Kolkata Paise Restaurant problem, Stable marriage problem, Parking space problem and others) and the methods used to understand them analytically. We emphasize the power of concepts and tools from statistical mechanics to understand and explain fully collective phenomena such as phase transitions and long memory, and the mapping between agent heterogeneity and physical disorder. As these methods can be applied to any large-scale model of competitive resource allocation made up of heterogeneous adaptive agents with non-linear interactions, they provide a prospective unifying paradigm for many scientific disciplines.
A Note on Multigrid Theory for Non-nested Grids and/or Quadrature
NASA Technical Reports Server (NTRS)
Douglas, C. C.; Douglas, J., Jr.; Fyfe, D. E.
1996-01-01
We provide a unified theory for multilevel and multigrid methods when the usual assumptions are not present. For example, we do not assume that the solution spaces or the grids are nested. Further, we do not assume that there is an algebraic relationship between the linear algebra problems on different levels. What we provide is a computationally useful theory for adaptively changing levels. Theory is provided for multilevel correction schemes, nested iteration schemes, and one way (i.e., coarse to fine grid with no correction iterations) schemes. We include examples showing the applicability of this theory: finite element examples using quadrature in the matrix assembly and finite volume examples with non-nested grids. Our theory applies directly to other discretizations as well.
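For orientation, here is the classical nested two-grid correction cycle that this theory generalizes; the paper's point is precisely that nestedness and an algebraic inter-level relationship are not required, whereas this sketch, for simplicity, uses nested 1D grids and a Galerkin coarse operator. Grid sizes and smoother settings are illustrative.

```python
import numpy as np

def poisson_matrix(n):
    """1D Poisson operator on n interior points of a unit interval."""
    h = 1.0 / (n + 1)
    return (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
            - np.diag(np.ones(n - 1), -1)) / h**2

def jacobi(A, x, b, sweeps, omega=2.0 / 3.0):
    """Damped Jacobi smoothing sweeps."""
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d
    return x

def two_grid(A_h, A_H, b, x, P):
    """One multilevel correction cycle: pre-smooth, restrict the
    residual (via P^T), solve the coarse problem, prolongate the
    correction, post-smooth."""
    x = jacobi(A_h, x, b, 3)
    e_H = np.linalg.solve(A_H, P.T @ (b - A_h @ x))
    x = x + P @ e_H
    return jacobi(A_h, x, b, 3)

n = 63                                   # fine grid; coarse grid has 31
A_h = poisson_matrix(n)
P = np.zeros((n, (n - 1) // 2))          # linear-interpolation prolongation
for j in range((n - 1) // 2):
    i = 2 * j + 1
    P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5
A_H = P.T @ A_h @ P                      # Galerkin coarse-grid operator

b, x = np.ones(n), np.zeros(n)
for cycle in range(10):
    x = two_grid(A_h, A_H, b, x, P)
print("residual norm after 10 cycles:", np.linalg.norm(b - A_h @ x))
```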
NASA Technical Reports Server (NTRS)
Johnson, F. T.
1980-01-01
A method for solving the linear integral equations of incompressible potential flow in three dimensions is presented. Both analysis (Neumann) and design (Dirichlet) boundary conditions are treated in a unified approach to the general flow problem. The method is an influence coefficient scheme which employs source and doublet panels as boundary surfaces. Curved panels possessing singularity strengths that vary as polynomials are used, and all influence coefficients are derived in closed form. These and other features combine to produce an efficient scheme which is not only versatile but eminently suited to the practical realities of a user-oriented environment. A wide variety of numerical results demonstrating the method is presented.
Growth of electron plasma waves above and below f(p) in the electron foreshock
NASA Technical Reports Server (NTRS)
Cairns, Iver H.; Fung, Shing F.
1988-01-01
This paper investigates the conditions required for electron beams to drive wave growth significantly above and below the electron plasma frequency, f(p), by numerically solving the linear dispersion equation. It is shown that kinetic growth well below f(p) may occur over a broad range of frequencies due to the beam instability, when the electron beam is slow, dilute, and relatively cold. Alternatively, a cold or sharp feature at low parallel velocities in the distribution function may drive kinetic growth significantly below f(p). Kinetic broadband growth significantly above f(p) is explained in terms of faster warmer beams. A unified qualitative theory for the narrow-band and broad-band waves is proposed.
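Where the full numerical dispersion solution is not at hand, the textbook weak-growth (bump-on-tail) estimate gives a feel for how a dilute beam drives growth near the resonant phase velocity. The sketch below evaluates gamma = (pi/2)(wp^3/k^2) f'(wp/k) for an assumed Maxwellian-core-plus-beam distribution; all parameter values are illustrative, and this approximation is not the full kinetic calculation performed in the paper.

```python
# Textbook weak-growth (Landau) estimate for beam-driven Langmuir waves,
# gamma = (pi/2) (wp^3 / k^2) f'(w/k), with f(v) normalized to 1.
# Illustrative only; the paper solves the full linear dispersion equation.
import numpy as np

wp = 1.0                                   # plasma frequency (normalized)
vt, vb, vtb, nb = 1.0, 6.0, 0.5, 0.01      # core/beam parameters (assumed)

def f(v):                                  # core Maxwellian + dilute beam
    core = (1 - nb) * np.exp(-v**2 / (2 * vt**2)) / np.sqrt(2 * np.pi * vt**2)
    beam = nb * np.exp(-(v - vb)**2 / (2 * vtb**2)) / np.sqrt(2 * np.pi * vtb**2)
    return core + beam

def dfdv(v, h=1e-4):                       # central-difference derivative
    return (f(v + h) - f(v - h)) / (2 * h)

for k in np.linspace(0.1, 0.4, 7):
    vphi = wp / k                          # resonant phase velocity
    gamma = 0.5 * np.pi * wp**3 / k**2 * dfdv(vphi)
    print(f"k={k:.2f}  v_phi={vphi:5.2f}  gamma/wp={gamma:+.2e}")
```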
Emergence of linear elasticity from the atomistic description of matter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cakir, Abdullah; Pica Ciamarra, Massimo
2016-08-07
We investigate the emergence of the continuum elastic limit from the atomistic description of matter at zero temperature, considering how locally defined elastic quantities depend on the coarse-graining length scale. Results obtained by numerically investigating different model systems are rationalized in a unifying picture according to which the continuum elastic limit emerges through a process determined by two system properties: the degree of disorder, and a length scale associated with the transverse low-frequency vibrational modes. The degree of disorder controls the emergence of long-range local shear stress and shear strain correlations, while the length scale influences the amplitude of the fluctuations of the local elastic constants close to the jamming transition.
Particle Swarm Social Adaptive Model for Multi-Agent Based Insurgency Warfare Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cui, Xiaohui; Potok, Thomas E
2009-12-01
To better understand insurgent activities and asymmetric warfare, a social adaptive model for modeling multiple insurgent groups attacking multiple military and civilian targets is proposed and investigated. This report presents a pilot study using particle swarm modeling, a widely used non-linear optimization tool, to model the emergence of an insurgency campaign. The objective of this research is to apply the particle swarm metaphor as a model of insurgent social adaptation to a dynamically changing environment and to provide insight into and understanding of insurgency warfare. Our results show that unified leadership, strategic planning, and effective communication between insurgent groups are not necessary requirements for insurgents to efficiently attain their objective.
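For readers unfamiliar with the metaphor, a minimal particle swarm optimization sketch in its standard Kennedy-Eberhart form is given below; the report's social adaptive insurgency model builds on this update rule but is substantially richer, and all constants here are conventional defaults assumed for illustration.

```python
# Minimal particle swarm optimization sketch: each particle is pulled
# toward its own best position (cognitive term) and the swarm's best
# position (social term). Toy objective and constants are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                     # toy objective to minimize
    return np.sum(x**2, axis=1)

n, dim, iters = 30, 2, 200
w, c1, c2 = 0.72, 1.49, 1.49       # inertia and acceleration constants
x = rng.uniform(-5, 5, (n, dim))   # particle positions
v = np.zeros((n, dim))             # particle velocities
pbest, pval = x.copy(), sphere(x)  # personal bests
g = pbest[pval.argmin()].copy()    # global (social) best

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)   # cognitive + social pull
    x = x + v
    val = sphere(x)
    better = val < pval
    pbest[better], pval[better] = x[better], val[better]
    g = pbest[pval.argmin()].copy()

print(g, pval.min())               # best position found and its value
```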
Code of Federal Regulations, 2011 CFR
2011-07-01
... followed by a gravimetric mass determination, but which is not a Class I equivalent method because of... MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.1 Definitions. Terms used but not defined... slope of a linear plot fitted to corresponding candidate and reference method mean measurement data...
An Exposition on the Nonlinear Kinematics of Shells, Including Transverse Shearing Deformations
NASA Technical Reports Server (NTRS)
Nemeth, Michael P.
2013-01-01
An in-depth exposition on the nonlinear deformations of shells with "small" initial geometric imperfections is presented without the use of tensors. First, the mathematical descriptions of an undeformed-shell reference surface, and its deformed image, are given in general nonorthogonal coordinates. The two-dimensional Green-Lagrange strains of the reference surface are derived and simplified for the case of "small" strains. Linearized reference-surface strains, rotations, curvatures, and torsions are then derived and used to obtain the "small" Green-Lagrange strains in terms of linear deformation measures. Next, the geometry of the deformed shell is described mathematically and the "small" three-dimensional Green-Lagrange strains are given. The deformations of the shell and its reference surface are related by introducing a kinematic hypothesis that includes transverse shearing deformations and contains the classical Love-Kirchhoff kinematic hypothesis as a proper, explicit subset. Lastly, summaries of the essential equations are given for general nonorthogonal and orthogonal coordinates, and the basis for further simplification of the equations is discussed.
NASA Technical Reports Server (NTRS)
Zhu, S. Y.; Mueller, I. I.
1982-01-01
The effects of adopting new definitive precession and equinox corrections on the terrestrial reference frame were investigated. It is noted that: (1) the effect on polar motion is a diurnal periodic term with an amplitude increasing linearly in time, while on UT1 it is a linear term; (2) general principles are given to determine the effects of small rotations of the frame of a conventional inertial reference system (CIS) on the frame of the conventional terrestrial reference system (CTS); (3) seven CTS options are presented, one of which is necessary to accommodate such rotation. Accommodating possible future changes in the astronomical nutation is discussed. The effects of differences which may exist between the various CTS's and CIS's on Earth rotation parameters (ERP) and how these differences can be determined are examined. It is shown that the CTS differences can be determined from observations made at the same site, while the CIS differences by comparing the ERP's determined by the different techniques during the same time period.
A State Space Model for Spatial Updating of Remembered Visual Targets during Eye Movements
Mohsenzadeh, Yalda; Dash, Suryadeep; Crawford, J. Douglas
2016-01-01
In the oculomotor system, spatial updating is the ability to aim a saccade toward a remembered visual target position despite intervening eye movements. Although this has been the subject of extensive experimental investigation, there is still no unifying theoretical framework to explain the neural mechanism for this phenomenon, and how it influences visual signals in the brain. Here, we propose a unified state-space model (SSM) to account for the dynamics of spatial updating during two types of eye movement: saccades and smooth pursuit. Our proposed model is a non-linear SSM implemented through a recurrent radial-basis-function neural network in a dual Extended Kalman filter (EKF) structure. The model parameters and internal states (remembered target position) are estimated sequentially using the EKF method. The proposed model replicates two fundamental experimental observations: continuous gaze-centered updating of visual memory-related activity during smooth pursuit, and predictive remapping of visual memory activity before and during saccades. Moreover, our model makes the new prediction that, when uncertainty of input signals is incorporated in the model, neural population activity and receptive fields expand just before and during saccades. These results suggest that visual remapping and motor updating are part of a common visuomotor mechanism, and that subjective perceptual constancy arises in part from training the visual system on motor tasks. PMID:27242452
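The predict/update logic behind such spatial updating can be illustrated with a deliberately simplified linear Kalman filter: an efference copy of each eye displacement shifts the gaze-centered memory (prediction), and occasional noisy visual feedback corrects it (update). This is only a linear caricature of the paper's dual-EKF, RBF-network model; all numbers below are assumptions.

```python
# Linear Kalman-filter caricature of gaze-centered spatial updating:
# the remembered target position (in eye coordinates) is shifted by an
# efference copy of each eye displacement and corrected by occasional
# noisy visual feedback. Illustrative simplification only.
import numpy as np

rng = np.random.default_rng(2)
Q, R = 0.05, 0.5                    # process and measurement noise variances
target_world = 10.0                 # fixed target position in the world
eye = 0.0                           # current eye position
x, P = target_world - eye, 1.0      # remembered target in eye coordinates

for t in range(50):
    d_eye = rng.normal(0.0, 0.3)    # smooth-pursuit-like eye displacement
    eye += d_eye
    # predict: efference copy shifts the gaze-centered memory
    x, P = x - d_eye, P + Q
    if t % 10 == 0:                 # occasional visual re-measurement
        z = (target_world - eye) + rng.normal(0.0, np.sqrt(R))
        K = P / (P + R)             # Kalman gain
        x, P = x + K * (z - x), (1 - K) * P

print(x, target_world - eye)        # estimate vs. true retinal position
```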
Gavrilin, Gene V.; Cherkasova, Elena A.; Lipskaya, Galina Y.; Kew, Olen M.; Agol, Vadim I.
2000-01-01
We determined nucleotide sequences of the VP1 and 2AB genes and portions of the 2C and 3D genes of two evolving poliovirus lineages: circulating wild viruses of T geotype and Sabin vaccine-derived isolates from an immunodeficient patient. Different regions of the viral RNA were found to evolve nonsynchronously, and the rate of evolution of the 2AB region in the vaccine-derived population was not constant throughout its history. Synonymous replacements occurred not completely randomly, suggesting the need for conservation of certain rare codons (possibly to control translation elongation) and the existence of unidentified constraints in the viral RNA structure. Nevertheless the major contribution to the evolution of the two lineages came from linear accumulation of synonymous substitutions. Therefore, in agreement with current theories of viral evolution, we suggest that the majority of the mutations in both lineages were fixed as a result of successive sampling, from the heterogeneous populations, of random portions containing predominantly neutral and possibly adverse mutations. As a result of such a mode of evolution, the virus fitness may be maintained at a more or less constant level or may decrease unless more-fit variants are stochastically generated. The proposed unifying model of natural poliovirus evolution has important implications for the epidemiology of poliomyelitis. PMID:10906191
NASA Technical Reports Server (NTRS)
Summers, Geoffrey P.; Burke, Edward A.; Shapiro, Philip; Statler, Richard; Messenger, Scott R.; Walters, Robert J.
1994-01-01
It has been found useful in the past to use the concept of 'equivalent fluence' to compare the radiation response of different solar cell technologies. Results are usually given in terms of an equivalent 1 MeV electron or an equivalent 10 MeV proton fluence. To specify cell response in a complex space-radiation environment in terms of an equivalent fluence, it is necessary to measure damage coefficients for a number of representative electron and proton energies. However, at the last Photovoltaic Specialist Conference we showed that nonionizing energy loss (NIEL) could be used to correlate damage coefficients for protons, using measurements for GaAs as an example. This correlation means that damage coefficients for all proton energies except near threshold can be predicted from a measurement made at one particular energy. NIEL is the exact equivalent for displacement damage of linear energy transfer (LET) for ionization energy loss. The use of NIEL in this way leads naturally to the concept of 10 MeV equivalent proton fluence. The situation for electron damage is more complex, however. It is shown that the concept of 'displacement damage dose' gives a more general way of unifying damage coefficients. It follows that 1 MeV electron equivalent fluence is a special case of a more general quantity for unifying electron damage coefficients which we call the 'effective 1 MeV electron equivalent dose'.
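The arithmetic behind this correlation is straightforward once NIEL values are known: displacement damage dose is NIEL times fluence, and dividing a total dose by NIEL at 10 MeV gives the 10 MeV equivalent proton fluence. A minimal sketch follows; the NIEL table is a placeholder, not measured GaAs data.

```python
# Minimal displacement-damage-dose sketch: dose = NIEL(E) x fluence, and
# the 10 MeV equivalent proton fluence is total dose / NIEL(10 MeV).
# NIEL values below are placeholders, not measured GaAs data.
niel = {1.0: 1.2e-2, 10.0: 3.0e-3, 50.0: 1.0e-3}   # MeV -> MeV cm^2/g (assumed)

def equivalent_fluence_10MeV(spectrum):
    """spectrum: {energy_MeV: fluence_cm^-2}; returns 10 MeV equivalent fluence."""
    dose = sum(niel[E] * phi for E, phi in spectrum.items())  # MeV/g
    return dose / niel[10.0]

print(equivalent_fluence_10MeV({1.0: 1e10, 50.0: 5e9}))
```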
Differential morphology and image processing.
Maragos, P
1996-01-01
Image processing via mathematical morphology has traditionally used geometry to intuitively understand morphological signal operators and set or lattice algebra to analyze them in the space domain. We provide a unified view and analytic tools for morphological image processing that is based on ideas from differential calculus and dynamical systems. This includes ideas on using partial differential or difference equations (PDEs) to model distance propagation or nonlinear multiscale processes in images. We briefly review some nonlinear difference equations that implement discrete distance transforms and relate them to numerical solutions of the eikonal equation of optics. We also review some nonlinear PDEs that model the evolution of multiscale morphological operators and use morphological derivatives. Among the new ideas presented, we develop some general 2-D max/min-sum difference equations that model the space dynamics of 2-D morphological systems (including the distance computations) and some nonlinear signal transforms, called slope transforms, that can analyze these systems in a transform domain in ways conceptually similar to the application of Fourier transforms to linear systems. Thus, distance transforms are shown to be bandpass slope filters. We view the analysis of the multiscale morphological PDEs and of the eikonal PDE solved via weighted distance transforms as a unified area in nonlinear image processing, which we call differential morphology, and briefly discuss its potential applications to image processing and computer vision.
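As one concrete instance of the min-sum difference equations mentioned above, the classic two-pass chamfer recursion computes a discrete distance transform; the sketch below uses city-block weights for simplicity, and is only an illustration of the family of recursions the paper analyzes.

```python
# Two-pass min-sum (chamfer) distance transform with city-block weights:
# a discrete distance recursion of the kind the paper relates to
# numerical solutions of the eikonal equation.
import numpy as np

def chamfer_cityblock(mask):
    """mask: boolean array, True at feature points; returns distance map."""
    big = mask.size
    d = np.where(mask, 0, big).astype(float)
    rows, cols = d.shape
    for i in range(rows):              # forward pass: min-sum from above/left
        for j in range(cols):
            if i > 0: d[i, j] = min(d[i, j], d[i-1, j] + 1)
            if j > 0: d[i, j] = min(d[i, j], d[i, j-1] + 1)
    for i in range(rows - 1, -1, -1):  # backward pass: from below/right
        for j in range(cols - 1, -1, -1):
            if i < rows - 1: d[i, j] = min(d[i, j], d[i+1, j] + 1)
            if j < cols - 1: d[i, j] = min(d[i, j], d[i, j+1] + 1)
    return d

m = np.zeros((5, 7), dtype=bool); m[2, 3] = True
print(chamfer_cityblock(m))
```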
NASA Astrophysics Data System (ADS)
Kossobokov, Vladimir G.; Nekrasova, Anastasia K.
2018-05-01
We continue applying the general concept of seismic risk analysis in a number of seismic regions worldwide by constructing regional seismic hazard maps based on morphostructural analysis, pattern recognition, and the Unified Scaling Law for Earthquakes (USLE), which generalizes the Gutenberg-Richter relationship by making use of the naturally fractal distribution of earthquake sources of different size in a seismic region. The USLE stands for an empirical relationship log10 N(M, L) = A + B·(5 - M) + C·log10 L, where N(M, L) is the expected annual number of earthquakes of a certain magnitude M within a seismically prone area of linear dimension L. We use parameters A, B, and C of USLE to estimate, first, the expected maximum magnitude in a time interval at seismically prone nodes of the morphostructural scheme of the region under study, then map the corresponding expected ground shaking parameters (e.g., peak ground acceleration, PGA, or macro-seismic intensity). After a rigorous verification against the available seismic evidences in the past (usually, the observed instrumental PGA or the historically reported macro-seismic intensity), such a seismic hazard map is used to generate maps of specific earthquake risks for population, cities, and infrastructures (e.g., those based on census of population, buildings inventory). The methodology of seismic hazard and risk assessment is illustrated by application to the territory of Greater Caucasus and Crimea.
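A minimal sketch of how the USLE coefficients translate into expected annual earthquake rates is given below; the values of A, B, and C are placeholders, not fitted estimates for any region.

```python
# Evaluating the USLE relationship log10 N(M, L) = A + B*(5 - M) + C*log10 L.
# The coefficients are placeholders, not fitted values for any region.
import math

def usle_annual_rate(M, L, A=-1.0, B=0.9, C=1.2):
    """Expected annual number N(M, L) of magnitude-M events in a zone of linear size L (km)."""
    return 10 ** (A + B * (5 - M) + C * math.log10(L))

for M in (5.0, 6.0, 7.0):
    print(M, usle_annual_rate(M, L=100.0))
```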
NASA Astrophysics Data System (ADS)
Papalexiou, Simon Michael
2018-05-01
Hydroclimatic processes come in all "shapes and sizes". They are characterized by different spatiotemporal correlation structures and probability distributions that can be continuous, mixed-type, discrete or even binary. Simulating such processes by reproducing precisely their marginal distribution and linear correlation structure, including features like intermittency, can greatly improve hydrological analysis and design. Traditionally, modelling schemes are case specific and typically attempt to preserve few statistical moments providing inadequate and potentially risky distribution approximations. Here, a single framework is proposed that unifies, extends, and improves a general-purpose modelling strategy, based on the assumption that any process can emerge by transforming a specific "parent" Gaussian process. A novel mathematical representation of this scheme, introducing parametric correlation transformation functions, enables straightforward estimation of the parent-Gaussian process yielding the target process after the marginal back transformation, while it provides a general description that supersedes previous specific parameterizations, offering a simple, fast and efficient simulation procedure for every stationary process at any spatiotemporal scale. This framework, also applicable for cyclostationary and multivariate modelling, is augmented with flexible parametric correlation structures that parsimoniously describe observed correlations. Real-world simulations of various hydroclimatic processes with different correlation structures and marginals, such as precipitation, river discharge, wind speed, humidity, extreme events per year, etc., as well as a multivariate example, highlight the flexibility, advantages, and complete generality of the method.
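The parent-Gaussian strategy can be illustrated compactly: simulate a correlated standard-Gaussian process, then map it through the target inverse CDF. The sketch below does this for an AR(1) parent and a gamma marginal; note how the attained lag-1 correlation of the transformed series is attenuated relative to the parent's, which is exactly the effect the paper's correlation transformation functions account for. All parameters are illustrative.

```python
# Parent-Gaussian simulation sketch: correlated Gaussian AR(1) process,
# mapped through the target inverse CDF to obtain a gamma-distributed
# series. Parameter choices are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rho, T = 0.8, 10000                       # lag-1 parent correlation, length
z = np.empty(T)
z[0] = rng.normal()
for t in range(1, T):                     # parent Gaussian AR(1)
    z[t] = rho * z[t-1] + np.sqrt(1 - rho**2) * rng.normal()

u = np.clip(stats.norm.cdf(z), 1e-12, 1 - 1e-12)   # uniformize
x = stats.gamma(a=0.5, scale=2.0).ppf(u)  # back-transform to target marginal
print(np.corrcoef(x[:-1], x[1:])[0, 1])   # attained (attenuated) correlation
```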
The "Chaos" Pattern in Piaget's Theory of Cognitive Development.
ERIC Educational Resources Information Center
Lindsay, Jean S.
Piaget's theory of the cognitive development of the child is related to the recently developed non-linear "chaos" model. The term "chaos" refers to the tendency of dynamical, non-linear systems toward irregular, sometimes unpredictable, deterministic behavior. Piaget identified this same pattern in his model of cognitive…
NASA Astrophysics Data System (ADS)
Wu, Jing; Huang, Junbing; Wu, Hanping; Gu, Hongcan; Tang, Bo
2014-12-01
To verify the validity of the regional reference grating method for solving the strain/temperature cross-sensitivity problem in an actual ship structural health monitoring system, and to meet engineering requirements for the sensitivity coefficients of the method, national standard measurement equipment is used to calibrate the temperature sensitivity coefficient of the selected FBG temperature sensor and the strain sensitivity coefficient of the FBG strain sensor. The thermal expansion sensitivity coefficient of the steel used in ships is calibrated with a water bath method. The calibration results show that the temperature sensitivity coefficient of the FBG temperature sensor is 28.16 pm/°C within -10~30°C with a linearity greater than 0.999; the strain sensitivity coefficient of the FBG strain sensor is 1.32 pm/μɛ within -2900~2900 μɛ with a linearity of nearly 1; and the thermal expansion sensitivity coefficient of the ship steel is 23.438 pm/°C within 30~90°C with a linearity greater than 0.998. Finally, the calibration parameters are applied for temperature compensation in the actual ship structural health monitoring system. The results show that the temperature compensation is effective and the calibration parameters meet the engineering requirements, providing an important reference for the wide use of fiber Bragg grating sensors in engineering.
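Using the calibrated coefficients quoted above, temperature compensation reduces to simple arithmetic, as in the sketch below; how the reference-grating reading and the steel thermal coefficient combine for the bonded strain grating is an assumption made here for illustration.

```python
# FBG temperature-compensation sketch using the coefficients quoted in
# the abstract: the temperature grating gives dT, and the assumed thermal
# contribution is subtracted from the strain grating's wavelength shift.
K_T_temp   = 28.16    # pm/degC, temperature-FBG sensitivity
K_eps      = 1.32     # pm/microstrain, strain-FBG sensitivity
K_T_strain = 23.438   # pm/degC, assumed thermal response of strain FBG on ship steel

def compensated_strain(d_lambda_strain_pm, d_lambda_temp_pm):
    dT = d_lambda_temp_pm / K_T_temp           # degC from reference grating
    thermal_pm = K_T_strain * dT               # apparent shift due to temperature
    return (d_lambda_strain_pm - thermal_pm) / K_eps   # microstrain

# e.g., 140.8 pm on the temperature grating corresponds to dT = 5 degC
print(compensated_strain(d_lambda_strain_pm=500.0, d_lambda_temp_pm=140.8))
```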
Improving Photometric Calibration of Meteor Video Camera Systems
NASA Technical Reports Server (NTRS)
Ehlert, Steven; Kingery, Aaron; Suggs, Robert
2016-01-01
We present the results of new calibration tests performed by the NASA Meteoroid Environment Office (MEO) designed to help quantify and minimize systematic uncertainties in meteor photometry from video camera observations. These systematic uncertainties can be categorized by two main sources: an imperfect understanding of the linearity correction for the MEO's Watec 902H2 Ultimate video cameras and uncertainties in meteor magnitudes arising from transformations between the Watec camera's Sony EX-View HAD bandpass and the bandpasses used to determine reference star magnitudes. To address the first point, we have measured the linearity response of the MEO's standard meteor video cameras using two independent laboratory tests on eight cameras. Our empirically determined linearity correction is critical for performing accurate photometry at low camera intensity levels. With regard to the second point, we have calculated synthetic magnitudes in the EX bandpass for reference stars. These synthetic magnitudes enable direct calculations of the meteor's photometric flux within the camera bandpass without requiring any assumptions about its spectral energy distribution. Systematic uncertainties in the synthetic magnitudes of individual reference stars are estimated at 0.20 mag, and are limited by the available spectral information in the reference catalogs. These two improvements allow for zero-points accurate to 0.05-0.10 mag in both filtered and unfiltered camera observations with no evidence for lingering systematics.
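Synthetic magnitudes of this kind come from folding a star's spectrum through the camera bandpass. A minimal sketch follows; the flat spectrum and Gaussian bandpass below are placeholders, not the Sony EX-View HAD response or any catalog spectrum.

```python
# Minimal synthetic-photometry sketch: fold a reference star's spectrum
# through a camera bandpass, m = -2.5 log10( integral of F*S dlambda ) + ZP.
# Flat spectrum and Gaussian bandpass are illustrative placeholders.
import numpy as np

lam = np.linspace(400.0, 900.0, 501)             # wavelength grid, nm
S = np.exp(-0.5 * ((lam - 600.0) / 120.0) ** 2)  # assumed bandpass shape
F = 3.0e-12 * np.ones_like(lam)                  # assumed stellar flux density

def band_integral(y, x):                         # trapezoidal integration
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def synthetic_mag(F, S, lam, zp=0.0):
    return -2.5 * np.log10(band_integral(F * S, lam)) + zp

print(synthetic_mag(F, S, lam))
```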
Review of evaluation on ecological carrying capacity: The progress and trend of methodology
NASA Astrophysics Data System (ADS)
Wang, S. F.; Xu, Y.; Liu, T. J.; Ye, J. M.; Pan, B. L.; Chu, C.; Peng, Z. L.
2018-02-01
The ecological carrying capacity (ECC) has been regarded as an important reference indicating the level of regional sustainable development since the beginning of the twenty-first century. Through a brief review of the main progress in ECC evaluation methodologies over the past five years, this paper systematically discusses the features and differences of these methods and expounds the current state and future development trend of ECC methodology. The result shows that further exploration of dynamic, comprehensive and intelligent assessment technologies is needed in order to form a unified and scientific ECC methodology system and to produce a reliable basis for environmental-economic decision-making.
Calibration and assessment of full-field optical strain measurement procedures and instrumentation
NASA Astrophysics Data System (ADS)
Kujawinska, Malgorzata; Patterson, E. A.; Burguete, R.; Hack, E.; Mendels, D.; Siebert, T.; Whelan, Maurice
2006-09-01
There are no international standards or norms for the use of optical techniques for full-field strain measurement. In this paper the rationale and design of a reference material and a set of standardized materials for the calibration and evaluation of optical systems for full-field measurements of strain are outlined. A classification system for the steps in the measurement process is also proposed and allows the development of a unified approach to diagnostic testing of components in an optical system for strain measurement based on any optical technique. The results described arise from a European study known as SPOTS, whose objectives were to begin to fill the gap caused by a lack of standards.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-22
... California State Implementation Plan Revisions, Monterey Bay Unified Air Pollution Control District AGENCY... to the Monterey Bay Unified Air Pollution Control District (MBAPCD) portion of the California State... CFR Part 52 Environmental protection, Air pollution control, Intergovernmental relations, Nitrogen...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-08
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... approve a revision to the San Joaquin Valley Unified Air Pollution Control District (SJVUAPCD) portion of..., Air pollution control, Intergovernmental relations, Ozone, Reporting and recordkeeping requirements...
Incubation, Insight, and Creative Problem Solving: A Unified Theory and a Connectionist Model
ERIC Educational Resources Information Center
Helie, Sebastien; Sun, Ron
2010-01-01
This article proposes a unified framework for understanding creative problem solving, namely, the explicit-implicit interaction theory. This new theory of creative problem solving constitutes an attempt at providing a more unified explanation of relevant phenomena (in part by reinterpreting/integrating various fragmentary existing theories of…
Toward a Unified Componential Theory of Human Reasoning. Technical Report No. 4.
ERIC Educational Resources Information Center
Sternberg, Robert J.
The unified theory described in this paper characterizes human reasoning as an information processing system with a hierarchical sequence of components and subtheories that account for performance on successively narrower tasks. Both deductive and inductive theories are subsumed in the unified componential theory, including transitive chain theory…
Alternative Fuels Data Center: Mesa Unified School District Reaps Economic and Environmental Benefits with Propane Buses
Cincinnati's Bold New Venture: A Unified K-12 Reading/Communication Arts Program.
ERIC Educational Resources Information Center
Green, Reginald Leon
1989-01-01
Describes a unified reading/communication arts program in the Cincinnati Public School System which uses new basal texts, support materials, and a customized instructional system for each grade level, integrating listening, speaking, reading, writing, and thinking skills into a unified language approach. Discusses intervention strategies,…
76 FR 39991 - Introduction to the Unified Agenda of Federal Regulatory and Deregulatory Actions
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-07
... Deregulatory Actions. SUMMARY: The Regulatory Flexibility Act requires that agencies publish semiannual... agencies have chosen to publish their regulatory agendas as part of the Unified Agenda. Editions of the... Published? III. How Is the Unified Agenda Organized? IV. What Information Appears for Each Entry? V...
Unified Approximations: A New Approach for Monoprotic Weak Acid-Base Equilibria
ERIC Educational Resources Information Center
Pardue, Harry; Odeh, Ihab N.; Tesfai, Teweldemedhin M.
2004-01-01
The unified approximations reduce the conceptual complexity by combining solutions for a relatively large number of different situations into just two similar sets of processes. Processes used to solve problems by either the unified or classical approximations require similar degrees of understanding of the underlying chemical processes.
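For reference, the exact monoprotic weak-acid treatment that both the classical and unified approximations simplify is a cubic in [H+], obtained from the charge and mass balances; the sketch below solves it numerically for the familiar acetic acid case.

```python
# Exact monoprotic weak-acid equilibrium: combining the charge balance
# [H+] = [A-] + [OH-] with the mass balance Ca = [HA] + [A-] gives
# [H+]^3 + Ka*[H+]^2 - (Kw + Ka*Ca)*[H+] - Ka*Kw = 0, solved numerically.
import numpy as np

def pH_weak_acid(Ca, Ka, Kw=1e-14):
    coeffs = [1.0, Ka, -(Kw + Ka * Ca), -Ka * Kw]
    roots = np.roots(coeffs)
    # keep the physically meaningful root: real and positive
    h = max(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    return -np.log10(h)

print(pH_weak_acid(Ca=0.10, Ka=1.8e-5))   # 0.10 M acetic acid: pH ~ 2.87
```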
29 CFR 779.218 - Methods to accomplish “unified operation.”
Code of Federal Regulations, 2010 CFR
2010-07-01
..., join together to perform some or all of their activities as a unified business or business system. They may accomplish such unification through agreements, franchises, grants, leases, or other arrangements... others so that they constitute a single business or unified business system. Whether in any particular...
Introduction to The Regulatory Plan and the Unified Agenda of Federal...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-20
... Unified Agenda 79451 Published? III. How Are The Regulatory Plan and the Unified Agenda 79451 Organized? IV. What Information Appears for Each Entry? 79452 V. Abbreviations 79454 VI. How Can Users Get... Department of Defense 79504 Department of Education 79509 Department of Energy 79512 Department of Health and...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-08
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... revisions to the San Joaquin Valley Unified Air Pollution Control District (SJVUAPCD) portion of the... of Subjects in 40 CFR Part 52 Environmental protection, Air pollution control, Incorporation by...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-01
... the California State Implementation Plan, Joaquin Valley Unified Air Pollution Control District and Imperial County Air Pollution Control District AGENCY: Environmental Protection Agency (EPA). ACTION: Final rule. SUMMARY: EPA is finalizing approval of revisions to the San Joaquin Valley Unified Air Pollution...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-24
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... approve revisions to the San Joaquin Valley Unified Air Pollution Control District (SJVUAPCD) portion of... Glass Manufacturing'', US EPA, June 1994. 7. ``Integrated Pollution Prevention and Control (IPPC...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-22
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... revisions to the San Joaquin Valley Unified Air Pollution Control District (SJVUAPCD) portion of the..., Gas, and Geothermal Resources confirmed that in the Ventura County Air Pollution Control District...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-25
... the California State Implementation Plan, San Joaquin Valley Unified Air Pollution Control District... revisions to the San Joaquin Valley Unified Air Pollution Control District (SJVAPCD) portion of the...)(2)). List of Subjects in 40 CFR Part 52 Environmental protection, Air pollution control...
Unified Early Childhood Personnel Preparation Programs: Perceptions from the Field.
ERIC Educational Resources Information Center
LaMontagne, M. J.; Johnson, Lawrence J.; Kilgo, Jennifer L.; Stayton, Vicki; Carr, Victoria; Bauer, Anne M.; Carpenter, Jenny
2002-01-01
This study examined perceptions of unified early childhood personnel preparation programs by 28 faculty members in such programs and by graduates (n=42) of unified, dual, or separate exceptional child education or exceptional child special education programs. Faculty stressed the importance of commitment and collaborative problem solving. The…