Modern Methods of Rail Welding
NASA Astrophysics Data System (ADS)
Kozyrev, Nikolay A.; Kozyreva, Olga A.; Usoltsev, Aleksander A.; Kryukov, Roman E.; Shevchenko, Roman A.
2017-10-01
This article surveys the existing methods of rail welding that make it possible to produce continuous welded rail track. An analysis of these methods allows the problem of continuous rail track to be considered in detail. Metallurgical and welding technologies for rail welding, as well as process technologies that reduce the aftereffects of temperature exposure, are important factors determining the quality and reliability of continuous welded track. The analysis of the existing welding methods also identifies promising lines of research for solving this problem.
Identification of influential users by neighbors in online social networks
NASA Astrophysics Data System (ADS)
Sheikhahmadi, Amir; Nematbakhsh, Mohammad Ali; Zareie, Ahmad
2017-11-01
Identification and ranking of influential users in social networks for the sake of news spreading and advertising has recently become an attractive field of research. Given the large number of users in social networks and the various relations that exist among them, providing an effective method to identify influential users has gradually come to be considered an essential task. In most existing methods, users located in an appropriate structural position of the network are regarded as influential. These methods usually pay no attention to the interactions among users and treat relations as binary in nature. This paper therefore proposes a new method to identify influential users in a social network by considering the interactions that exist among users. Since users tend to act within the frame of communities, the network is initially divided into different communities. Then the amount of interaction among users is used as a parameter to set the weight of relations within the network. Afterward, by determining the neighbors' role for each user, a two-level method is proposed for both detecting users' influence and ranking them. Simulation and experimental results on Twitter data show that the users selected by the proposed method are spread across the network at more appropriate distances than those selected by existing methods. Moreover, the proposed method outperforms the others in terms of both the spreading speed and the influence capacity of the users it selects.
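The two-level idea, scoring a user by its own interaction weights plus a discounted contribution from its neighbours, can be sketched as follows. This is a toy illustration, not the authors' algorithm: the community-detection stage is omitted, and the discount factor `alpha` is an assumed parameter.

```python
from collections import defaultdict

def influence_scores(interactions, alpha=0.5):
    """Two-level neighbour-weighted ranking: a user's first-level score is
    the total weight of its own interactions; the second level adds a
    discounted share (alpha) of each neighbour's first-level score, so users
    surrounded by active neighbours rank higher."""
    weight = defaultdict(float)          # first-level: own tie strength
    neighbours = defaultdict(set)
    for u, v, c in interactions:         # (user, user, interaction count)
        weight[u] += c
        weight[v] += c
        neighbours[u].add(v)
        neighbours[v].add(u)
    score = {u: weight[u] + alpha * sum(weight[n] for n in neighbours[u])
             for u in weight}
    return sorted(score, key=score.get, reverse=True)

# Hypothetical interaction log: user "a" interacts heavily and ranks first.
ranking = influence_scores([("a", "b", 5), ("a", "c", 3), ("c", "d", 1)])
```

Replacing the binary "follows" relation by interaction counts is the essential difference from purely structural centrality measures.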
Local existence of N=1 supersymmetric gauge theory in four Dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akbar, Fiki T.; Gunara, Bobby E.; Zen, Freddy P.
2015-04-16
In this paper, we prove the local existence of N=1 supersymmetric gauge theory in four dimensions. We start from the Lagrangian coupling chiral and vector multiplets with a constant gauge kinetic function, and consider only the bosonic part by setting all fermionic fields to zero at the level of the equations of motion. We take a U(n) model as the isometry of the scalar-field internal geometry, and use a nonlinear semigroup method to prove local existence.
Design sensitivity analysis with Applicon IFAD using the adjoint variable method
NASA Technical Reports Server (NTRS)
Frederick, Marjorie C.; Choi, Kyung K.
1984-01-01
A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of existing finite element structural analysis programs and the theoretical foundations of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that the calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through the analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the numerical-accuracy uncertainty associated with the selection of a finite difference perturbation.
Non-destructive inspection of polymer composite products
NASA Astrophysics Data System (ADS)
Anoshkin, A. N.; Sal'nikov, A. F.; Osokin, V. M.; Tretyakov, A. A.; Luzin, G. S.; Potrakhov, N. N.; Bessonov, V. B.
2018-02-01
The paper considers the main types of defects encountered in products made of polymer composite materials for aviation use. Existing methods of nondestructive testing are analyzed, and the features of their application are considered, taking into account the design features, geometrical parameters and internal structure of the objects of inspection. The advantages and disadvantages of the considered nondestructive testing methods used in industrial production are shown.
New Internet search volume-based weighting method for integrating various environmental impacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr
Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts into a single index. Weighting factors should be based on society's preferences. However, most previous studies consider only the opinions of a limited group of people. This research therefore proposes a new weighting method that determines the weighting factors of environmental impact categories by gauging public opinion on environmental impacts through the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors ranged from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost than existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining weighting factors. - Highlights: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
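The core computation, turning relative search volumes into weighting factors and checking them against existing factors with Pearson's correlation, can be sketched in a few lines. The impact-category names and volumes below are invented for illustration.

```python
def search_volume_weights(volumes):
    """Normalize Internet search volumes into weighting factors summing to 1."""
    total = sum(volumes.values())
    return {k: v / total for k, v in volumes.items()}

def pearson(x, y):
    """Pearson's correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical search volumes for three impact categories:
volumes = {"global warming": 8000, "acidification": 1500, "eutrophication": 500}
w_new = search_volume_weights(volumes)
```

A correlation near 1 between `w_new` and an established weighting set would support the paper's validation argument.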
A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.
Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang
2017-01-01
Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time series. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method on three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.
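The contrast between a purely point-wise test and one that also constrains the spacing of change points can be sketched crudely as follows. This is far simpler than the paper's doubly stochastic model: the z-threshold and minimum-gap values are assumed tuning parameters, and the spacing rule is only a stand-in for the joint distribution over change-point count and inter-anomaly time.

```python
def detect_change_points(x, z_thresh=3.0, min_gap=5):
    """Two-stage toy detector: first score every time point individually
    (a point-wise test on the mean shift), then enforce a minimum elapsed
    time between accepted change points -- a crude stand-in for modelling
    the joint distribution of change-point count and spacing."""
    candidates = []
    for t in range(2, len(x) - 1):
        left, right = x[:t], x[t:]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        pooled = (sum((v - ml) ** 2 for v in left)
                  + sum((v - mr) ** 2 for v in right)) / (len(x) - 2)
        sd = max(pooled ** 0.5, 1e-12)   # guard against zero variance
        if abs(mr - ml) / sd > z_thresh:
            candidates.append(t)
    accepted = []
    for t in candidates:                 # second stage: respect spacing
        if not accepted or t - accepted[-1] >= min_gap:
            accepted.append(t)
    return accepted

series = [0.0] * 10 + [5.0] * 10        # one mean shift at t = 10
cps = detect_change_points(series)
```

The spacing constraint collapses the cluster of neighbouring detections that a point-wise test produces around a single true change.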
Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions
NASA Astrophysics Data System (ADS)
Jung, J. Y.; Niemann, J. D.; Greimann, B. P.
2016-12-01
Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
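The idea of treating input-data errors as additional uncertain parameters can be illustrated with a minimal Metropolis sampler for a linear stand-in model y = k·(q + dq), where k is a model parameter and dq an unknown bias in the measured inflow q. All names, priors, step sizes, and data below are assumptions for illustration; the paper couples this idea to the SRH-1D sediment transport model, not to a toy model.

```python
import math
import random

def log_post(theta, q_meas, y_obs, sigma_y=0.5, sigma_q=0.2):
    """theta = (k, dq): a transport coefficient and an unknown bias in the
    measured inflow.  Prior: dq ~ N(0, sigma_q); likelihood: y ~ N(k*(q+dq), sigma_y)."""
    k, dq = theta
    lp = -0.5 * (dq / sigma_q) ** 2
    for q, y in zip(q_meas, y_obs):
        lp -= 0.5 * ((y - k * (q + dq)) / sigma_y) ** 2
    return lp

def metropolis(q_meas, y_obs, n=4000, step=0.1, seed=1):
    """Random-walk Metropolis over (k, dq): both the model parameter and
    the input-error bias are sampled jointly."""
    rng = random.Random(seed)
    theta = (1.0, 0.0)
    lp = log_post(theta, q_meas, y_obs)
    out = []
    for _ in range(n):
        prop = (theta[0] + rng.gauss(0, step), theta[1] + rng.gauss(0, step))
        lp_prop = log_post(prop, q_meas, y_obs)
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            theta, lp = prop, lp_prop
        out.append(theta)
    return out

q_meas = [1.0, 2.0, 3.0]                 # hypothetical measured inflows
y_obs = [2.2, 4.2, 6.2]                  # generated with k = 2, dq = 0.1
samples = metropolis(q_meas, y_obs)
k_mean = sum(s[0] for s in samples[2000:]) / 2000
```

Because dq is sampled alongside k, the posterior spread of k now reflects input uncertainty as well as parameter uncertainty, which is the point of the modification.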
Comparing biomarkers as principal surrogate endpoints.
Huang, Ying; Gilbert, Peter B
2011-12-01
Recently a new definition of surrogate endpoint, the "principal surrogate," was proposed based on causal associations between treatment effects on the biomarker and on the clinical endpoint. Despite its appealing interpretation, limited research has been conducted to evaluate principal surrogates, and existing methods focus on risk models that consider a single biomarker. How to compare principal surrogate value of biomarkers or general risk models that consider multiple biomarkers remains an open research question. We propose to characterize a marker or risk model's principal surrogate value based on the distribution of risk difference between interventions. In addition, we propose a novel summary measure (the standardized total gain) that can be used to compare markers and to assess the incremental value of a new marker. We develop a semiparametric estimated-likelihood method to estimate the joint surrogate value of multiple biomarkers. This method accommodates two-phase sampling of biomarkers and is more widely applicable than existing nonparametric methods by incorporating continuous baseline covariates to predict the biomarker(s), and is more robust than existing parametric methods by leaving the error distribution of markers unspecified. The methodology is illustrated using a simulated example set and a real data set in the context of HIV vaccine trials. © 2011, The International Biometric Society.
Global existence and finite time blow-up for a class of thin-film equation
NASA Astrophysics Data System (ADS)
Dong, Zhihua; Zhou, Jun
2017-08-01
This paper deals with a class of thin-film equations considered in Li et al. (Nonlinear Anal Theory Methods Appl 147:96-109, 2016), where the case of lower initial energy (J(u_0) ≤ d, with d a positive constant) was discussed and conditions for global existence or blow-up were given. We extend the results of that paper in two directions: first, we consider the upper and lower bounds of the blow-up time and the asymptotic behavior when J(u_0)
Exploring the notion of space coupling propulsion
NASA Technical Reports Server (NTRS)
Millis, Marc G.
1990-01-01
All existing methods of space propulsion are based on expelling a reaction mass (propellant) to induce motion. Alternatively, 'space coupling propulsion' refers to speculations about reacting with space-time itself to generate propulsive forces. Conceivably, the resulting increases in payload, range, and velocity would constitute a breakthrough in space propulsion. Such speculations are still considered science fiction for a number of reasons: (1) the concept appears to violate conservation of momentum; (2) no reactive media appear to exist in space; (3) no 'Grand Unified Theories' exist to link gravity, an acceleration field, to other phenomena of nature such as electrodynamics. The rationale behind these objections is the focus of interest. Various methods to either satisfy or explore these issues are presented along with secondary considerations. It is found that it may be useful to consider alternative conventions of science to further explore speculations of space coupling propulsion.
New algorithms to compute the nearness symmetric solution of the matrix equation.
Peng, Zhen-Yun; Fang, Yang-Zhi; Xiao, Xian-Wei; Du, Dan-Dan
2016-01-01
In this paper we consider the nearness symmetric solution of the matrix equation AXB = C to a given matrix [Formula: see text] in the sense of the Frobenius norm. By discussing an equivalent form of the considered problem, we derive some necessary and sufficient conditions for the matrix [Formula: see text] to be a solution of the considered problem. Based on the idea of the alternating variable minimization with multiplier method, we propose two iterative methods to compute the solution of the considered problem and analyze their global convergence. Numerical results illustrate that the proposed methods are more effective than the two existing methods proposed in Peng et al. (Appl Math Comput 160:763-777, 2005) and Peng (Int J Comput Math 87:1820-1830, 2010).
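The flavour of such iterations can be conveyed with a projected-gradient sketch: minimize ‖AXB − C‖_F while repeatedly projecting the iterate onto the symmetric matrices. This is a toy stand-in for the paper's ADMM-style algorithms, not a reproduction of them; the step size and iteration count are assumptions.

```python
import numpy as np

def symmetric_lsq_solution(A, B, C, X0, iters=5000, lr=0.01):
    """Projected-gradient sketch for min ||A X B - C||_F over symmetric X,
    started from the given matrix X0.  The gradient of the squared objective
    is 2 A^T (A X B - C) B^T; each step is followed by projection onto the
    symmetric matrices via (X + X^T) / 2."""
    X = (X0 + X0.T) / 2
    for _ in range(iters):
        G = 2 * A.T @ (A @ X @ B - C) @ B.T
        X = X - lr * G
        X = (X + X.T) / 2                # keep the iterate symmetric
    return X

# Trivial check: with A = B = I the symmetric solution of AXB = C is C itself.
A, B = np.eye(2), np.eye(2)
C = np.array([[1.0, 2.0], [2.0, 3.0]])   # symmetric target
X = symmetric_lsq_solution(A, B, C, np.zeros((2, 2)))
```

Starting from the given matrix X0 keeps the iterate close to it when the solution set of AXB = C contains many symmetric solutions, which is the "nearness" aspect of the problem.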
A YinYang bipolar fuzzy cognitive TOPSIS method to bipolar disorder diagnosis.
Han, Ying; Lu, Zhenyu; Du, Zhenguang; Luo, Qi; Chen, Sheng
2018-05-01
Bipolar disorder is often misdiagnosed as unipolar depression in clinical practice. The main reason is that, unlike in other diseases, bipolarity is the norm rather than the exception in bipolar disorder diagnosis. The YinYang bipolar fuzzy set captures bipolarity and has been successfully used to construct a unified inference mathematical modeling method for bipolar disorder clinical diagnosis. Nevertheless, symptoms and their interrelationships are not considered in the existing method, limiting its ability to describe the complexity of bipolar disorder. Thus, in this paper, a YinYang bipolar fuzzy multi-criteria group decision making method for bipolar disorder clinical diagnosis is developed. Compared with the existing method, the new one is more comprehensive. The merits of the new method are as follows: First, multi-criteria group decision making is introduced into bipolar disorder diagnosis to consider different symptoms and multiple doctors' opinions. Second, the discreet diagnosis principle is adopted by the revised TOPSIS method. Last but not least, a YinYang bipolar fuzzy cognitive map is provided for understanding the interrelations among symptoms. The illustrated case demonstrates the feasibility, validity, and necessity of the theoretical results obtained. Moreover, the comparison analysis demonstrates that the diagnosis result is more accurate when interrelations among symptoms are considered in the proposed method. In conclusion, the main contribution of this paper is to provide a comprehensive mathematical approach to improve the accuracy of bipolar disorder clinical diagnosis, in which both bipolarity and complexity are considered. Copyright © 2018 Elsevier B.V. All rights reserved.
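The TOPSIS step at the heart of the method can be illustrated with plain (non-fuzzy) TOPSIS: normalize the decision matrix, locate the ideal and anti-ideal alternatives, and score each alternative by its relative closeness to the ideal. Extending this to YinYang bipolar fuzzy ratings, as the paper does, is beyond this sketch, and the decision matrix below is invented.

```python
def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution.
    matrix[i][j] is the rating of alternative i on criterion j;
    benefit[j] is True when larger values are better."""
    m, n = len(matrix), len(matrix[0])
    norm = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norm[j] for j in range(n)] for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    anti = [min(v[i][j] for i in range(m)) if benefit[j]
            else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        dp = sum((v[i][j] - ideal[j]) ** 2 for j in range(n)) ** 0.5
        dm = sum((v[i][j] - anti[j]) ** 2 for j in range(n)) ** 0.5
        scores.append(dm / (dp + dm))    # assumes alternatives are not all identical
    return scores

# Invented two-alternative, two-criterion matrix; alternative 0 dominates.
scores = topsis([[1.0, 1.0], [0.5, 0.5]], [0.5, 0.5], [True, True])
```

In the group-decision setting of the paper, each criterion would correspond to a symptom and the ratings would aggregate multiple doctors' opinions.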
Generalized Ordinary Differential Equation Models
Miao, Hongyu; Wu, Hulin; Xue, Hongqi
2014-01-01
Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787
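The generic shape of such estimators, solving the ODE numerically inside a goodness-of-fit criterion so that both measurement and numerical error enter the fit, can be sketched for a one-parameter decay model. The RK4 solver and the grid-search least squares below are generic stand-ins, not the authors' likelihood-based GODE estimator.

```python
import math

def rk4(f, y0, ts):
    """Classical fourth-order Runge-Kutta solution of dy/dt = f(t, y) on grid ts."""
    ys = [y0]
    for t0, t1 in zip(ts, ts[1:]):
        h, y = t1 - t0, ys[-1]
        k1 = f(t0, y)
        k2 = f(t0 + h / 2, y + h * k1 / 2)
        k3 = f(t0 + h / 2, y + h * k2 / 2)
        k4 = f(t1, y + h * k3)
        ys.append(y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6)
    return ys

def fit_decay_rate(ts, obs, grid):
    """Grid-search least squares for theta in dy/dt = -theta * y:
    the ODE is solved numerically inside the loss at every candidate theta."""
    def sse(theta):
        pred = rk4(lambda t, y: -theta * y, obs[0], ts)
        return sum((p - o) ** 2 for p, o in zip(pred, obs))
    return min(grid, key=sse)

ts = [0.1 * i for i in range(11)]
obs = [math.exp(-0.7 * t) for t in ts]            # noise-free synthetic data
theta_hat = fit_decay_rate(ts, obs, [round(0.05 * i, 2) for i in range(1, 41)])
```

A likelihood-based estimator with a proper observation model for discrete data, as in the paper, would replace the squared-error loss while keeping this solve-inside-the-loss structure.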
Multimodal Image Registration through Simultaneous Segmentation.
Aganj, Iman; Fischl, Bruce
2017-11-01
Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the error of segmentation - using both images - is considered as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique, compared to those by several existing methods.
Cengizci, Süleyman; Atay, Mehmet Tarık; Eryılmaz, Aytekin
2016-01-01
This paper is concerned with two-point boundary value problems for singularly perturbed nonlinear ordinary differential equations. The case when the solution has only one boundary layer is examined. An efficient method, the so-called Successive Complementary Expansion Method (SCEM), is used to obtain uniformly valid approximations to such solutions. Four test problems are considered to check the efficiency and accuracy of the proposed method. The numerical results are in good agreement with exact and existing solutions in the literature, and confirm that SCEM is superior to other existing methods in terms of ease of application and effectiveness.
Dissipative structure and global existence in critical space for Timoshenko system of memory type
NASA Astrophysics Data System (ADS)
Mori, Naofumi
2018-08-01
In this paper, we consider the initial value problem for the Timoshenko system with a memory term in the one-dimensional whole space. First, we consider the linearized system: applying the energy method in the Fourier space, we derive a pointwise estimate of the solution in the Fourier space, which gives the optimal decay estimate of the solution. Next, we give a characterization of the dissipative structure of the system by spectral analysis, which confirms that our pointwise estimate is optimal. Second, we consider the nonlinear system: we show that the global-in-time existence and uniqueness result can be proved under a minimal regularity assumption in the critical Sobolev space H2. In the proof we do not need any time-weighted norms, as recent works do; we use just an energy method, improved to overcome the difficulties caused by the regularity-loss property of the Timoshenko system.
Fredholm-Volterra Integral Equation with a Generalized Singular Kernel and its Numerical Solutions
NASA Astrophysics Data System (ADS)
El-Kalla, I. L.; Al-Bugami, A. M.
2010-11-01
In this paper, the existence and uniqueness of the solution of the Fredholm-Volterra integral equation (F-VIE) with a generalized singular kernel are discussed and proved in the space L2(Ω)×C(0,T). The Fredholm integral term (FIT) is considered in position while the Volterra integral term (VIT) is considered in time. Using a numerical technique, we obtain a system of Fredholm integral equations (SFIEs). This system of integral equations can be reduced to a linear algebraic system (LAS) of equations by two different methods: the Toeplitz matrix method and the product Nyström method. Numerical examples are considered when the generalized kernel takes the following forms: the Carleman function, the logarithmic form, the Cauchy kernel, and the Hilbert kernel.
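The Nyström idea behind the second reduction method can be sketched for a smooth kernel: replace the integral by a quadrature rule at the nodes and solve the resulting linear system. The product Nyström method of the paper uses special product-integration weights to handle singular kernels; the plain trapezoidal rule below is an assumed simplification for illustration.

```python
import numpy as np

def nystrom_fredholm(kernel, f, lam, a, b, n):
    """Nystrom discretisation of u(x) = f(x) + lam * int_a^b K(x,t) u(t) dt:
    replace the integral by trapezoidal quadrature at n nodes and solve the
    linear algebraic system (I - lam * K * W) u = f."""
    x = np.linspace(a, b, n)
    w = np.full(n, (b - a) / (n - 1))    # trapezoidal weights
    w[0] /= 2
    w[-1] /= 2
    K = np.array([[kernel(xi, tj) for tj in x] for xi in x])
    u = np.linalg.solve(np.eye(n) - lam * K * w, f(x))
    return x, u

# Check against a case with known exact solution: for K(x,t) = x*t on [0,1]
# and f(x) = 2x/3, the equation u = f + int_0^1 K u dt has solution u(x) = x.
x, u = nystrom_fredholm(lambda s, t: s * t, lambda s: 2 * s / 3, 1.0, 0.0, 1.0, 201)
```

This is exactly the reduction to a linear algebraic system (LAS) that the abstract describes; the Toeplitz matrix method differs in how the kernel's structure is exploited when assembling that system.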
Generating soft shadows with a depth buffer algorithm
NASA Technical Reports Server (NTRS)
Brotman, L. S.; Badler, N. I.
1984-01-01
Computer-synthesized shadows have traditionally appeared with a sharp edge when cast onto a surface. Here the production of more realistic, soft shadows is considered, although significant costs arise in connection with such a representation. The current investigation is concerned with a pragmatic approach, which combines an existing shadowing method with a popular visible-surface rendering technique, the 'depth buffer', to generate soft shadows resulting from light sources of finite extent. The considered method represents an extension of Crow's (1977) shadow volume algorithm.
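The essence of soft shadows from a finite light source can be shown in one dimension: average binary (hard) visibility tests over sample points spread across the light, and a penumbra appears where only part of the light is blocked. The geometry and names below are invented for illustration; the paper itself works with shadow volumes and a depth buffer in image space.

```python
def soft_shadow_1d(x, light_xs, light_y, occluder=(-1.0, 1.0), occ_y=1.0):
    """Fraction of an extended light source visible from ground point (x, 0).
    light_xs are sample positions on the light at height light_y; the
    occluder is the segment [occluder[0], occluder[1]] at height occ_y.
    Averaging binary visibility tests over the light samples turns a hard
    shadow edge into a soft one with a penumbra."""
    visible = 0
    for lx in light_xs:
        t = occ_y / light_y              # ray parameter where it reaches occ_y
        cross = x + t * (lx - x)         # x-coordinate of the crossing point
        if not (occluder[0] <= cross <= occluder[1]):
            visible += 1
    return visible / len(light_xs)

samples = [-0.5, -0.25, 0.0, 0.25, 0.5]  # five samples across the light
```

A point under the occluder sees none of the light (umbra), a distant point sees all of it, and points in between see a fraction, which is the smooth falloff a soft-shadow algorithm must reproduce.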
Determination of laser cutting process conditions using the preference selection index method
NASA Astrophysics Data System (ADS)
Madić, Miloš; Antucheviciene, Jurgita; Radovanović, Miroslav; Petković, Dušan
2017-03-01
Determination of adequate parameter settings for the simultaneous improvement of multiple quality and productivity characteristics is of great practical importance in laser cutting. This paper discusses the application of the preference selection index (PSI) method for discrete optimization of the CO2 laser cutting of stainless steel. The main motivation for applying the PSI method is that it is an almost unexplored multi-criteria decision making (MCDM) method; moreover, it does not require assessment of the relative significance of the considered criteria. After reviewing and comparing the existing approaches for determining laser cutting parameter settings, the application of the PSI method is explained in detail. The experiment was conducted using Taguchi's L27 orthogonal array. Roughness of the cut surface, heat affected zone (HAZ), kerf width and material removal rate (MRR) were considered as optimization criteria. The proposed methodology is found to be very useful in a real manufacturing environment since it involves simple calculations which are easy to understand and implement. However, while applying the PSI method it was observed that it may not be useful in situations where a large number of alternatives have attribute values (performances) very close to those that are preferred.
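The PSI calculation itself is short: normalize the decision matrix, measure each criterion's preference variation, derive weights from the data with no user-supplied importances, and score the alternatives. The two-alternative matrix below is invented; in the paper the alternatives come from the Taguchi L27 experiment and the criteria are roughness, HAZ, kerf width and MRR.

```python
def psi_rank(matrix, benefit):
    """Preference selection index: criterion weights emerge from the data
    (low preference variation => high weight), so no relative significances
    need to be assessed.  matrix[i][j] is alternative i on criterion j;
    benefit[j] is True when larger is better."""
    m, n = len(matrix), len(matrix[0])
    R = [[0.0] * n for _ in range(m)]
    for j in range(n):                   # linear normalization per criterion
        col = [matrix[i][j] for i in range(m)]
        for i in range(m):
            R[i][j] = matrix[i][j] / max(col) if benefit[j] else min(col) / matrix[i][j]
    phi = []
    for j in range(n):                   # overall preference per criterion
        mean = sum(R[i][j] for i in range(m)) / m
        phi.append(1 - sum((R[i][j] - mean) ** 2 for i in range(m)))
    w = [p / sum(phi) for p in phi]      # data-driven weights
    return [sum(R[i][j] * w[j] for j in range(n)) for i in range(m)]

# Alternative 1 dominates (higher benefit value, lower cost value).
scores = psi_rank([[3.0, 2.0], [6.0, 1.0]], [True, False])
```

The alternative with the highest index is selected, which is why near-identical attribute values across many alternatives (the limitation noted above) make the ranking fragile.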
Newton's method for nonlinear stochastic wave equations driven by one-dimensional Brownian motion.
Leszczynski, Henryk; Wrzosek, Monika
2017-02-01
We consider nonlinear stochastic wave equations driven by one-dimensional white noise with respect to time. The existence of solutions is proved by means of Picard iterations. Next we apply Newton's method. Moreover, a second-order convergence in a probabilistic sense is demonstrated.
Aligning Person-Centred Methods and Young People's Conceptualizations of Diversity
ERIC Educational Resources Information Center
Waite, Sue; Boyask, Ruth; Lawson, Hazel
2010-01-01
Many existing studies of diversity are concerned with social groups identified by externally determined factors, for example, ethnicity, gender, or educational attainment, and examine, either quantitatively or qualitatively, issues delineated by these. In evaluating methods used in previous research, we consider ways in which the adoption of…
Quantile Regression for Recurrent Gap Time Data
Luo, Xianghua; Huang, Chiung-Yu; Wang, Lan
2014-01-01
Evaluating covariate effects on gap times between successive recurrent events is of interest in many medical and public health studies. While most existing methods for recurrent gap time analysis focus on modeling the hazard function of gap times, a direct interpretation of the covariate effects on the gap times is not available through these methods. In this article, we consider quantile regression that can provide direct assessment of covariate effects on the quantiles of the gap time distribution. Following the spirit of the weighted risk-set method by Luo and Huang (2011, Statistics in Medicine 30, 301–311), we extend the martingale-based estimating equation method considered by Peng and Huang (2008, Journal of the American Statistical Association 103, 637–649) for univariate survival data to analyze recurrent gap time data. The proposed estimation procedure can be easily implemented in existing software for univariate censored quantile regression. Uniform consistency and weak convergence of the proposed estimators are established. Monte Carlo studies demonstrate the effectiveness of the proposed method. An application to data from the Danish Psychiatric Central Register is presented to illustrate the methods developed in this article. PMID:23489055
A Proposed Methodology for Contextualised Evaluation in Higher Education
ERIC Educational Resources Information Center
Nygaard, Claus; Belluigi, Dina Zoe
2011-01-01
This paper aims to inspire stakeholders working with quality of higher education (such as members of study boards, study programme directors, curriculum developers and teachers) to critically consider their evaluation methods in relation to a focus on student learning. We argue that many of the existing methods of evaluation in higher education…
Autism Diagnosis and Screening: Factors to Consider in Differential Diagnosis
ERIC Educational Resources Information Center
Matson, Johnny L.; Beighley, Jennifer; Turygin, Nicole
2012-01-01
There has been an exponential growth in assessment methods to diagnose disorders on the autism spectrum. Many reasons for this trend exist and include advancing knowledge on how to make a diagnosis, the heterogeneity of the spectrum, the realization that different methods may be needed based on age and intellectual disability. Other factors…
Methods for Force Analysis of Overconstrained Parallel Mechanisms: A Review
NASA Astrophysics Data System (ADS)
Liu, Wen-Lan; Xu, Yun-Dou; Yao, Jian-Tao; Zhao, Yong-Sheng
2017-11-01
The force analysis of overconstrained parallel mechanisms (PMs) is relatively complex and difficult, and its methods have always been a research hotspot. However, few works analyze the characteristics and application scopes of the various methods, which makes it inconvenient for researchers and engineers to master and adopt them properly. A review of the methods for force analysis of both passive and active overconstrained PMs is presented. The existing force analysis methods for these two kinds of overconstrained PMs are classified according to their main ideas. Each category is briefly demonstrated and evaluated with respect to the amount of calculation, the comprehensiveness with which the limbs' deformation is considered, and the existence of explicit expressions for the solutions, which provides an important reference for researchers and engineers to quickly find a suitable method. The similarities and differences between the statically indeterminate problems of passive and active overconstrained PMs are discussed, and a universal method for both kinds is pointed out. The existing deficiencies and development directions of force analysis methods for overconstrained systems are indicated based on this overview.
ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES
LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.
2008-01-01
Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508
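One simple gold-standard-free way to combine per-source reliabilities, shown here as an assumed noisy-OR rule with independent sources, is: an interaction is spurious only if every source reporting it is wrong. This is a generic combination rule for illustration, not necessarily the specific method the authors propose.

```python
def combined_reliability(rels):
    """Noisy-OR combination of per-source reliabilities: assuming the
    sources err independently, the interaction is spurious only if every
    reporting source is wrong."""
    p_all_wrong = 1.0
    for r in rels:
        p_all_wrong *= (1.0 - r)
    return 1.0 - p_all_wrong
```

Note that the combined value always exceeds the best single source, so agreement between two mediocre sources (e.g. 0.6 and 0.7 giving 0.88) can outrank one good source; sensitivity of such rules to noisy reliability assignments is exactly what the paper's extrinsic evaluation probes.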
Existence of topological multi-string solutions in Abelian gauge field theories
NASA Astrophysics Data System (ADS)
Han, Jongmin; Sohn, Juhee
2017-11-01
In this paper, we consider a general form of self-dual equations arising from Abelian gauge field theories coupled with the Einstein equations. By applying the super/subsolution method, we prove that topological multi-string solutions exist for any coupling constant, which improves previously known results. We provide two examples for application: the self-dual Einstein-Maxwell-Higgs model and the gravitational Maxwell gauged O(3) sigma model.
Wu, Hao
2018-05-01
In structural equation modelling (SEM), a robust adjustment to the test statistic or to its reference distribution is needed when its null distribution deviates from a χ2 distribution, which usually arises when data do not follow a multivariate normal distribution. Unfortunately, existing studies on this issue typically focus on only a few methods and neglect the majority of alternative methods in statistics. Existing simulation studies typically consider only non-normal distributions of data that either satisfy asymptotic robustness or lead to an asymptotic scaled χ2 distribution. In this work we conduct a comprehensive study that involves both typical methods in SEM and less well-known methods from the statistics literature. We also propose the use of several novel non-normal data distributions that are qualitatively different from the non-normal distributions widely used in existing studies. We found that several under-studied methods give the best performance under specific conditions, but the Satorra-Bentler method remains the most viable method for most situations. © 2017 The British Psychological Society.
Existence and Non-uniqueness of Global Weak Solutions to Inviscid Primitive and Boussinesq Equations
NASA Astrophysics Data System (ADS)
Chiodaroli, Elisabetta; Michálek, Martin
2017-08-01
We consider the initial value problem for the inviscid Primitive and Boussinesq equations in three spatial dimensions. We recast both systems as an abstract Euler-type system and apply the methods of convex integration of De Lellis and Székelyhidi to show the existence of infinitely many global weak solutions of the studied equations for general initial data. We also introduce an appropriate notion of dissipative solutions and show the existence of suitable initial data which generate infinitely many dissipative solutions.
Global Solutions to Repulsive Hookean Elastodynamics
NASA Astrophysics Data System (ADS)
Hu, Xianpeng; Masmoudi, Nader
2017-01-01
The global existence of classical solutions to the three dimensional repulsive Hookean elastodynamics around an equilibrium is considered. By linearization and Hodge's decomposition, the compressible part of the velocity, the density, and the compressible part of the transpose of the deformation gradient satisfy Klein-Gordon equations with speed √2, while the incompressible parts of the velocity and of the transpose of the deformation gradient satisfy wave equations with speed one. The space-time resonance method combined with the vector field method is used in a novel way to obtain the decay of the solution and hence global existence.
ERIC Educational Resources Information Center
Howard, Steven J.; Melhuish, Edward
2017-01-01
Several methods of assessing executive function (EF), self-regulation, language development, and social development in young children have been developed over previous decades. Yet new technologies make available methods of assessment not previously considered. In resolving conceptual and pragmatic limitations of existing tools, the Early Years…
ERIC Educational Resources Information Center
Thorpe, Anthony; Garside, Diane
2017-01-01
The professional development of middle leaders in higher education is little considered in existing research, though there are general concerns being raised about the suitability of the professional development opportunities currently available. This article develops and explores the use of meta-reflection as a method for professional development,…
Solitary waves and double layers in a dusty electronegative plasma.
Mamun, A A; Shukla, P K; Eliasson, B
2009-10-01
A dusty electronegative plasma containing Boltzmann electrons, Boltzmann negative ions, cold mobile positive ions, and negatively charged stationary dust has been considered. The basic features of arbitrary amplitude solitary waves (SWs) and double layers (DLs), which have been found to exist in such a dusty electronegative plasma, have been investigated by the pseudopotential method. The small amplitude limit has also been considered in order to study the small amplitude SWs and DLs analytically. It has been shown that under certain conditions, DLs do not exist, which is in good agreement with the experimental observations of Ghim and Hershkowitz [Y. Ghim (Kim) and N. Hershkowitz, Appl. Phys. Lett. 94, 151503 (2009)].
Existence of a coupled system of fractional differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Rabha W.; Siri, Zailan
2015-10-22
We establish the existence and uniqueness of solutions for a fractional coupled system containing Schrödinger equations. Such a system appears in quantum mechanics. We confirm that the fractional system under consideration admits a global solution in appropriate functional spaces, and the solution is shown to be unique. The method is based on analytic techniques of fixed point theory, with the fractional differential operator taken in the sense of the Riemann-Liouville differential operator.
Concerning an application of the method of least squares with a variable weight matrix
NASA Technical Reports Server (NTRS)
Sukhanov, A. A.
1979-01-01
An estimate of a state vector for a physical system when the weight matrix in the method of least squares is a function of this vector is considered. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate which reduces to the solution of a system of algebraic equations is proposed. The question of applying Newton's method of tangents to solving the given system of algebraic equations is considered and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.
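The iterative procedure described above can be sketched as a fixed-point iteration: each step solves the weighted normal equations with the weight matrix evaluated at the previous estimate. A minimal Python sketch, assuming the user supplies the design matrix `A`, observations `b`, and a function `weight_fn(x)` returning the state-dependent weight matrix (all names are illustrative, not from the report):

```python
import numpy as np

def iterative_wls(A, b, weight_fn, x0, tol=1e-10, max_iter=100):
    """Fixed-point iteration for least squares with a state-dependent
    weight matrix W(x): each step solves the weighted normal equations
    using the weights evaluated at the previous estimate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        W = weight_fn(x)  # user-supplied state-dependent weight matrix
        x_new = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

When `weight_fn` returns a constant matrix, the iteration reduces to ordinary weighted least squares and converges after a single step; the report's contribution is the analysis of when the limit of this iteration exists and is unique in the genuinely state-dependent case.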
Improvement of a method for positioning of pithead by considering motion of the surface water
NASA Astrophysics Data System (ADS)
Yi, H.; Lee, D. K.
2016-12-01
Underground mining is at a disadvantage compared with open-pit mining in terms of efficiency, economy, and working environment. However, the method is applied for the development of deep orebodies. A development plan is established once the economic valuation and technical analysis of the deposit, based on mineral exploration, are complete. Development is the process of opening a passage from the ground surface to the orebody and is one of the steps of the mining process. The plan covers details such as pithead positioning, mining method selection, and shaft design. Among these, pithead positioning takes into account infrastructure, watersheds, geology, and economy. In this study, we propose a method that considers the motion of surface water in order to improve existing pithead positioning techniques. The method models the terrain around the mine and derives surface-water flow information; the drainage treatment cost is then estimated for each candidate pithead location. This study covers the concept and design of the scheme.
Color normalization of histology slides using graph regularized sparse NMF
NASA Astrophysics Data System (ADS)
Sha, Lingdao; Schonfeld, Dan; Sethi, Amit
2017-03-01
Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, the preparation of histology slides can vary widely due to differences in staining equipment, procedures, and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised methods, unsupervised color normalization methods have the advantages of time and cost efficiency and universal applicability. Most unsupervised color normalization methods for histology are based on stain separation. Given that stain concentrations cannot be negative and that different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most existing unsupervised color normalization methods, such as PCA, ICA, NMF, and SNMF, fail to consider the sparse manifolds that the pixels occupy, which can result in loss of texture information during color normalization. Manifold learning methods such as the graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph-regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior on stain concentrations together with manifold information from the high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in preserving connected texture information. To utilize the texture information, we construct a nearest-neighbor graph between pixels within a spatial area of an image based on their distances, using a heat kernel in lαβ space. The representation of a pixel in the stain density space is constrained to follow the feature distances of the pixel to the pixels in the neighborhood graph. Using a color matrix transfer method with the stain concentrations found by our GSNMF method, the color normalization performance was also better than that of existing methods.
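A toy version of graph-regularized sparse NMF can be written with multiplicative update rules in the style of graph-regularized NMF, adding an L1 sparsity penalty on the concentration matrix. This is a hedged sketch of the general technique under an assumed objective, not the paper's exact formulation or update rules:

```python
import numpy as np

def gsnmf(V, k, S, lam=0.1, beta=0.05, n_iter=200, seed=0):
    """Toy graph-regularized sparse NMF (multiplicative updates).

    V : (m, n) nonnegative data, e.g. optical-density pixels as columns.
    S : (n, n) nonnegative symmetric affinity matrix of the pixel graph.
    Approximately minimizes
        ||V - W H||_F^2 + lam * Tr(H L H^T) + beta * sum(H),
    with L = D - S the graph Laplacian. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3   # stain color basis
    H = rng.random((k, n)) + 1e-3   # stain concentrations
    D = np.diag(S.sum(axis=1))
    eps = 1e-12
    for _ in range(n_iter):
        # Standard NMF update for the basis.
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        # Graph-regularized, sparsity-penalized update for concentrations.
        H *= (W.T @ V + lam * H @ S) / (W.T @ W @ H + lam * H @ D + beta + eps)
    return W, H
```

The graph term pulls the stain-density representations of neighboring pixels together, which is the mechanism the paper credits for preserving connected texture during normalization.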
On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
NASA Astrophysics Data System (ADS)
Bonettini, S.; Loris, I.; Porta, F.; Prato, M.; Rebegoldi, S.
2017-05-01
We consider a variable metric linesearch based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that the algorithm is flexible, robust, and competitive when compared to recently proposed approaches for the optimization problems arising in the considered applications.
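For the special case of an L1 nonsmooth term and the identity metric, a linesearch-based proximal-gradient step looks as follows. This is a generic sketch of the technique for a smooth `f` (backtracking until a sufficient-decrease test holds), not the variable-metric scheme analyzed in the paper:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(grad_f, f, lam, x0, step=1.0, shrink=0.5, n_iter=200):
    """Proximal-gradient iteration with backtracking linesearch for
    min_x f(x) + lam * ||x||_1, with f smooth."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        t = step
        while True:
            # Forward (gradient) step followed by backward (prox) step.
            z = soft_threshold(x - t * grad_f(x), t * lam)
            d = z - x
            # Sufficient-decrease test on the smooth part; shrink t until it holds.
            if f(z) <= f(x) + grad_f(x) @ d + (d @ d) / (2 * t):
                break
            t *= shrink
        x = z
    return x
```

For example, minimizing 0.5·(x − 3)² + |x| has the closed-form solution x = 2, which the iteration reaches from any starting point.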
Unconstrained and contactless hand geometry biometrics.
de-Santos-Sierra, Alberto; Sánchez-Ávila, Carmen; Del Pozo, Gonzalo Bailador; Guerra-Casanova, Javier
2011-01-01
This paper presents a hand biometric system for contact-less, platform-free scenarios, proposing innovative methods in feature extraction, template creation and template matching. The evaluation of the proposed method considers both the use of three contact-less publicly available hand databases, and the comparison of the performance to two competitive pattern recognition techniques existing in the literature: namely support vector machines (SVM) and k-nearest neighbour (k-NN). Results highlight the fact that the proposed method outperforms existing approaches in the literature in terms of computational cost, accuracy in human identification, number of extracted features and number of samples for template creation. The proposed method is a suitable solution for human identification in contact-less scenarios based on hand biometrics, providing a feasible solution for devices with limited hardware requirements such as mobile devices.
Fault management for data systems
NASA Technical Reports Server (NTRS)
Boyd, Mark A.; Iverson, David L.; Patterson-Hine, F. Ann
1993-01-01
Issues related to automating the process of fault management (fault diagnosis and response) for data management systems are considered. Substantial benefits are to be gained by successful automation of this process, particularly for large, complex systems. The use of graph-based models to develop a computer assisted fault management system is advocated. The general problem is described and the motivation behind choosing graph-based models over other approaches for developing fault diagnosis computer programs is outlined. Some existing work in the area of graph-based fault diagnosis is reviewed, and a new fault management method which was developed from existing methods is offered. Our method is applied to an automatic telescope system intended as a prototype for future lunar telescope programs. Finally, an application of our method to general data management systems is described.
Unconstrained and Contactless Hand Geometry Biometrics
de-Santos-Sierra, Alberto; Sánchez-Ávila, Carmen; del Pozo, Gonzalo Bailador; Guerra-Casanova, Javier
2011-01-01
This paper presents a hand biometric system for contact-less, platform-free scenarios, proposing innovative methods in feature extraction, template creation and template matching. The evaluation of the proposed method considers both the use of three contact-less publicly available hand databases, and the comparison of the performance to two competitive pattern recognition techniques existing in the literature: namely Support Vector Machines (SVM) and k-Nearest Neighbour (k-NN). Results highlight the fact that the proposed method outperforms existing approaches in the literature in terms of computational cost, accuracy in human identification, number of extracted features and number of samples for template creation. The proposed method is a suitable solution for human identification in contact-less scenarios based on hand biometrics, providing a feasible solution for devices with limited hardware requirements such as mobile devices. PMID:22346634
NASA Astrophysics Data System (ADS)
Bagherinejad, Jafar; Niknam, Azar
2018-03-01
In this paper, a leader-follower competitive facility location problem considering the reactions of the competitors is studied. A model for locating new facilities and determining the quality levels of the leader firm's facilities is proposed. Moreover, changes in the location and quality of existing facilities in a competitive market where a competitor offers the same goods or services are taken into account. The competitor can react by opening new facilities, closing existing ones, and adjusting the quality levels of its existing facilities. The market share captured by each facility depends on its distance to the customer and on its quality, calculated using the probabilistic Huff model. Each firm aims to maximize its profit subject to constraints on quality levels and on the budget for opening new facilities. The problem is formulated as a bi-level mixed integer non-linear model and solved using a combination of Tabu Search with an exact method. The performance of the proposed algorithm is compared with an upper bound obtained by applying the Karush-Kuhn-Tucker conditions. Computational results show that our algorithm finds solutions near the upper bound in a reasonable time.
Generalized Appended Product Indicator Procedure for Nonlinear Structural Equation Analysis.
ERIC Educational Resources Information Center
Wall, Melanie M.; Amemiya, Yasuo
2001-01-01
Considers the estimation of polynomial structural models and shows a limitation of an existing method. Introduces a new procedure, the generalized appended product indicator procedure, for nonlinear structural equation analysis. Addresses statistical issues associated with the procedure through simulation. (SLD)
Infrared Ship Target Segmentation Based on Spatial Information Improved FCM.
Bai, Xiangzhi; Chen, Zhiguo; Zhang, Yu; Liu, Zhaoying; Lu, Yi
2016-12-01
Segmentation of infrared (IR) ship images is always a challenging task because of intensity inhomogeneity and noise. Fuzzy C-means (FCM) clustering is a classical method widely used in image segmentation. However, it has some shortcomings, such as not considering spatial information and being sensitive to noise. In this paper, an improved FCM method based on spatial information is proposed for IR ship target segmentation. The improvements comprise two parts: 1) adding nonlocal spatial information based on the ship target and 2) using the spatial shape information of the ship target's contour to refine the local spatial constraint by a Markov random field. In addition, the results of K-means are used to initialize the improved FCM method. Experimental results show that the improved method is effective and performs better than existing methods, including existing FCM methods, for segmentation of IR ship images.
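For reference, the baseline FCM algorithm that the paper improves alternates between updating cluster centers and fuzzy memberships. A minimal sketch of that plain baseline (no spatial term, which is the paper's contribution):

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means clustering.

    X : (n, d) data points (e.g. pixel intensities); c : number of clusters;
    m : fuzzifier (> 1). Returns centers (c, d) and memberships U (n, c).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)          # rows sum to 1
    for _ in range(n_iter):
        Um = U ** m
        # Centers are membership-weighted means of the data.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Squared distances of every point to every center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U
```

Because each pixel's membership here depends only on its own intensity, noise flips memberships freely; the paper's nonlocal and contour-based spatial constraints are what suppress that behavior.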
NASA Astrophysics Data System (ADS)
Yamaguchi, Makoto; Midorikawa, Saburoh
The empirical equation for estimating the site amplification factor of ground motion from the average shear-wave velocity of the ground (AVS) is examined. In the existing equations, the coefficient describing the dependence of the amplification factor on the AVS was treated as constant. The analysis showed that, at short periods, this coefficient varies with the AVS. A new estimation equation was therefore proposed that takes this dependence on the AVS into account. The new equation captures the tendency of softer soils to have longer predominant periods, and makes better short-period estimates than the existing method.
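One simple way to let the coefficient vary with AVS, as the new equation does, is to fit log amplification as a quadratic in log AVS, so the effective slope changes with velocity. This functional form is an illustrative assumption for the sketch below, not the paper's fitted model:

```python
import numpy as np

def fit_amplification(avs, amp):
    """Fit log10(amplification) as a quadratic in log10(AVS).

    The effective slope d log(amp) / d log(AVS) = c1 + 2*c2*log10(AVS)
    then varies with AVS, unlike a constant-coefficient regression."""
    x = np.log10(avs)
    y = np.log10(amp)
    c2, c1, c0 = np.polyfit(x, y, 2)   # polyfit returns highest degree first
    return c0, c1, c2

def predict(avs, c0, c1, c2):
    """Amplification factor predicted by the fitted quadratic form."""
    x = np.log10(avs)
    return 10 ** (c0 + c1 * x + c2 * x ** 2)
```

With noiseless synthetic data generated from a known quadratic, the fit recovers the coefficients exactly, which is a convenient sanity check before applying the form to observed amplification data.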
New derivation of soliton solutions to the AKNS2 system via dressing transformation methods
NASA Astrophysics Data System (ADS)
Assunção, A. de O.; Blas, H.; da Silva, M. J. B. F.
2012-03-01
We consider certain boundary conditions supporting soliton solutions in the generalized nonlinear Schrödinger equation (AKNSr) (r = 1, 2). Using the dressing transformation (DT) method and the related tau functions, we study the AKNSr system for the vanishing, (constant) non-vanishing and the mixed boundary conditions, and their associated bright, dark and bright-dark N-soliton solutions, respectively. Moreover, we introduce a modified DT related to the dressing group in order to consider the free-field boundary condition and derive generalized N dark-dark solitons. As a reduced submodel of the AKNSr system, we study the properties of the focusing, defocusing and mixed focusing-defocusing versions of the so-called coupled nonlinear Schrödinger equation (r-CNLS), which has recently been considered in many physical applications. We have shown that two-dark-dark-soliton bound states exist in the AKNS2 system, and three- and higher-dark-dark-soliton bound states cannot exist. The AKNSr (r ⩾ 3) extension is briefly discussed in this approach. The properties and calculations of some matrix elements using level-one vertex operators are outlined. Dedicated to the memory of S S Costa
SCOUT: simultaneous time segmentation and community detection in dynamic networks
Hulovatyy, Yuriy; Milenković, Tijana
2016-01-01
Many evolving complex real-world systems can be modeled via dynamic networks. An important problem in dynamic network research is community detection, which finds groups of topologically related nodes. Typically, this problem is approached by assuming either that each time point has a distinct community organization or that all time points share a single community organization. The reality likely lies between these two extremes. To find the compromise, we consider community detection in the context of the problem of segment detection, which identifies contiguous time periods with consistent network structure. Consequently, we formulate a combined problem of segment community detection (SCD), which simultaneously partitions the network into contiguous time segments with consistent community organization and finds this community organization for each segment. To solve SCD, we introduce SCOUT, an optimization framework that explicitly considers both segmentation quality and partition quality. SCOUT addresses limitations of existing methods that can be adapted to solve SCD, which consider only one of segmentation quality or partition quality. In a thorough evaluation, SCOUT outperforms the existing methods in terms of both accuracy and computational complexity. We apply SCOUT to biological network data to study human aging. PMID:27881879
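A naive baseline for the segment-detection half of SCD cuts the time axis wherever consecutive snapshots' community partitions disagree too much; SCOUT itself jointly optimizes segmentation quality and partition quality, which this greedy sketch does not do. The Rand-index similarity and the threshold value are illustrative choices:

```python
def rand_index(p1, p2):
    """Rand index between two partitions given as node-label lists:
    the fraction of node pairs on which the partitions agree."""
    n = len(p1)
    agree, pairs = 0, 0
    for i in range(n):
        for j in range(i + 1, n):
            pairs += 1
            if (p1[i] == p1[j]) == (p2[i] == p2[j]):
                agree += 1
    return agree / pairs

def segment_boundaries(partitions, threshold=0.8):
    """Greedy time segmentation: start a new segment whenever consecutive
    snapshots' community partitions disagree beyond the threshold.
    Returns the start indices of the segments."""
    cuts = [0]
    for t in range(1, len(partitions)):
        if rand_index(partitions[t - 1], partitions[t]) < threshold:
            cuts.append(t)
    return cuts
```

This baseline fixes each segment's community organization to whatever the snapshots happen to show, so it only addresses segmentation quality; comparing against such one-sided methods is how the paper motivates the joint formulation.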
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Kutler, Paul (Technical Monitor)
1998-01-01
Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS, and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement which greatly adds to the robustness of the methods. Discrete maximum principle theory will be presented for general finite volume approximations on unstructured meshes. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed, followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance, such as the meshing of geometries, code implementation, turbulence modeling, and global convergence, will be addressed as needed.
NASA Technical Reports Server (NTRS)
Barth, Timothy; Chancellor, Marisa K. (Technical Monitor)
1997-01-01
Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. These variants have been implemented in the "ELF" library, for which example calculations will be shown. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement which greatly adds to the robustness of the methods. Some prevalent limiting strategies will be reviewed. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed, followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance, such as the meshing of geometries, code implementation, turbulence modeling, and global convergence, will be addressed as needed.
Predicting chaos in memristive oscillator via harmonic balance method.
Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai
2012-12-01
This paper studies the possible chaotic behaviors in a memristive oscillator with cubic nonlinearities via the harmonic balance method, also known as the describing-function method. This method was originally proposed to detect chaos in the classical Chua's circuit. We first transform the memristive oscillator system under consideration into a Lur'e model and present a prediction of the existence of chaotic behaviors. To ensure the prediction is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.
34 CFR 388.22 - What priorities does the Secretary consider in making an award?
Code of Federal Regulations, 2011 CFR
2011-07-01
... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...
34 CFR 388.22 - What priorities does the Secretary consider in making an award?
Code of Federal Regulations, 2010 CFR
2010-07-01
... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...
34 CFR 388.22 - What priorities does the Secretary consider in making an award?
Code of Federal Regulations, 2014 CFR
2014-07-01
... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...
34 CFR 388.22 - What priorities does the Secretary consider in making an award?
Code of Federal Regulations, 2013 CFR
2013-07-01
... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...
34 CFR 388.22 - What priorities does the Secretary consider in making an award?
Code of Federal Regulations, 2012 CFR
2012-07-01
... education methods, such as interactive audio, video, computer technologies, or existing telecommunications... training materials and practices. The proposed project demonstrates an effective plan to develop and... programs by other State vocational rehabilitation units. (2) Distance education. The proposed project...
Training of U.S. Air Traffic Controllers. (IDA Report No. R-206).
ERIC Educational Resources Information Center
Henry, James H.; And Others
The report reviews the evolution of existing national programs for air traffic controller training, estimates the number of persons requiring developmental and supplementary training, examines present controller selection and training programs, investigates performance measurement methods, considers standardization and quality control, discusses…
NASA Astrophysics Data System (ADS)
Takahashi, Masakazu; Fukue, Yoshinori
This paper proposes a Retrospective Computerized System Validation (RCSV) method for Drug Manufacturing Software (DMSW) that takes software modification into account. Because DMSW used for quality management and facility control has a large impact on drug quality, the regulatory agency requires proof of the adequacy of DMSW functions and performance, based on development documents and test results. In particular, the work of demonstrating the adequacy of previously developed DMSW from existing documents and operational records is called RCSV. When modifying DMSW that has already undergone RCSV, it is difficult to maintain consistency between the development documents and test results for the modified parts and the existing documents and operational records for the unmodified parts, which makes conducting RCSV difficult. In this paper, we propose (a) a defined document architecture, (b) defined descriptive items and levels in the documents, (c) management of design information using a database, (d) exhaustive testing, and (e) an integrated RCSV procedure. As a result, we could conduct adequate RCSV while maintaining consistency.
Evaluation of methods for determining hardware projected life
NASA Technical Reports Server (NTRS)
1971-01-01
An investigation of existing methods of predicting hardware life is summarized by reviewing programs having long life requirements, current research efforts on long life problems, and technical papers reporting work on life prediction techniques. The results indicate that there are no accurate quantitative means to predict hardware life for system level hardware. The effectiveness of test programs and the causes of hardware failures are also considered.
Site dependent factors affecting the economic feasibility of solar powered absorption cooling
NASA Technical Reports Server (NTRS)
Bartlett, J. C.
1978-01-01
A procedure was developed to evaluate the cost effectiveness of combining an absorption cycle chiller with a solar energy system. A basic assumption of the procedure is that a solar energy system exists for meeting the heating load of the building, and that the building must be cooled. The decision to be made is either to cool the building with a conventional vapor compression cycle chiller or to use the existing solar energy system to provide heat input to the absorption chiller. Two methods of meeting the cooling load not supplied by solar energy were considered. In the first method, heat is supplied to the absorption chiller by a boiler using fossil fuel. In the second method, the load not met by solar energy is met by a conventional vapor compression chiller. In addition, the procedure can consider waste heat as another form of auxiliary energy. Commercial applications of solar cooling with an absorption chiller were found to be more cost effective than residential applications. In general, it was found that the larger the chiller, the more economically feasible it would be. It was also found that a conventional vapor compression chiller is a viable alternative for the auxiliary cooling source, especially for the larger chillers. The results of the analysis give a relative rating of the sites considered as to the economic feasibility of solar cooling.
Max-margin multiattribute learning with low-rank constraint.
Zhang, Qiang; Chen, Lin; Li, Baoxin
2014-07-01
Attribute learning has attracted a great deal of interest in recent years for its ability to model high-level concepts with a compact set of midlevel attributes. Real-world objects often demand multiple attributes for effective modeling. Most existing methods learn attributes independently without explicitly considering their intrinsic relatedness. In this paper, we propose max-margin multiattribute learning with a low-rank constraint, which learns a set of attributes simultaneously using only relative rankings of the attributes for the data. By learning all the attributes simultaneously through the low-rank constraint, the proposed method is able to capture their intrinsic correlation for improved learning; by requiring only relative rankings, the method avoids the restrictive binary attribute labels that are often assumed by many existing techniques. The proposed method is evaluated on both synthetic data and real visual data, including a challenging video data set. Experimental results demonstrate the effectiveness of the proposed method.
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1998-01-01
This project is about the development of high order, non-oscillatory type schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of ENO finite difference method with spectral method in two space dimension is considered, jointly with Cai [3]. The resulting scheme behaves nicely for the two dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters to recover spectral accuracy up to the discontinuity. We also constructed such filters for practical calculations.
On the Number of Periodic Solutions of Delay Differential Equations
NASA Astrophysics Data System (ADS)
Han, Maoan; Xu, Bing; Tian, Huanhuan; Bai, Yuzhen
In this paper, we consider the existence and number of periodic solutions for a class of delay differential equations of the form ẋ(t) = bx(t − 1) + εf(x(t), x(t − 1), ε), based on the Kaplan-Yorke method. In particular, we consider a class of delay differential equations with f a polynomial with parameters and find the number of periodic solutions with period 4/(4k+1) or 4/(4k+3).
Rational-operator-based depth-from-defocus approach to scene reconstruction.
Li, Ang; Staunton, Richard; Tjahjadi, Tardi
2013-09-01
This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, which enables fast DfD computation that is independent of scene textures. Two variants of the approach, one using the Gaussian rational operators (ROs) that are based on the Gaussian point spread function (PSF) and the second based on the generalized Gaussian PSF, are considered. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results are considered for real scenes and show that both approaches outperform existing RO-based methods.
A novel knowledge-based potential for RNA 3D structure evaluation
NASA Astrophysics Data System (ADS)
Yang, Yi; Gu, Qi; Zhang, Ben-Gong; Shi, Ya-Zhou; Shao, Zhi-Gang
2018-03-01
Ribonucleic acids (RNAs) play a vital role in biology, and knowledge of their three-dimensional (3D) structure is required to understand their biological functions. Structure prediction methods have recently been developed to address this issue, but most existing methods generally predict a series of candidate RNA 3D structures, so evaluation of the predicted structures is indispensable. Although several methods have been proposed to assess RNA 3D structures, the existing methods are not precise enough. In this work, a new all-atom knowledge-based potential is developed for more accurately evaluating RNA 3D structures. The potential not only includes local and nonlocal interactions but also fully considers the specificity of each RNA by introducing a retraining mechanism. On extensive test sets generated by independent methods, the proposed potential correctly distinguished the native state and ranked near-native conformations to effectively select the best. Furthermore, the proposed potential precisely captured RNA structural features such as base stacking and base pairing. Comparisons with existing potentials show that the proposed potential is very reliable and accurate in RNA 3D structure evaluation. Project supported by the National Science Foundation of China (Grant Nos. 11605125, 11105054, 11274124, and 11401448).
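As a generic illustration of the knowledge-based-potential idea (the inverse-Boltzmann construction; the distributions, bin choices, and scoring below are invented stand-ins, not the paper's trained potential):

```python
import numpy as np

# Generic distance-dependent knowledge-based potential:
#   E(r) = -kT * ln(P_obs(r) / P_ref(r)),
# estimated from binned atom-pair distance counts (illustrative data only).
rng = np.random.default_rng(1)
native_like = rng.normal(5.0, 0.8, size=5000)    # stand-in "observed" distances
reference = rng.uniform(2.0, 10.0, size=5000)    # stand-in reference state

bins = np.linspace(2.0, 10.0, 33)
obs, _ = np.histogram(native_like, bins=bins, density=True)
ref, _ = np.histogram(reference, bins=bins, density=True)
eps = 1e-8
energy = -np.log((obs + eps) / (ref + eps))      # per-bin energy, in units of kT

def score(distances):
    """Score a conformation by summing bin energies of its pair distances."""
    idx = np.clip(np.digitize(distances, bins) - 1, 0, len(energy) - 1)
    return float(energy[idx].sum())

good = score(rng.normal(5.0, 0.8, size=200))     # native-like geometry
bad = score(rng.uniform(2.0, 10.0, size=200))    # decoy-like geometry
```

A lower score for the native-like sample reflects the ranking behavior the abstract describes; the paper's potential additionally mixes local and nonlocal terms and retrains per RNA.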
On the existence of maximum likelihood estimates for presence-only data
Hefley, Trevor J.; Hooten, Mevin B.
2015-01-01
It is important to identify conditions for which maximum likelihood estimates are unlikely to be identifiable from presence-only data. In data sets where the maximum likelihood estimates do not exist, penalized likelihood and Bayesian methods will produce coefficient estimates, but these are sensitive to the choice of estimation procedure and prior or penalty term. When sample size is small or it is thought that habitat preferences are strong, we propose a suite of estimation procedures researchers can consider using.
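A toy illustration of MLE non-existence under complete separation, a simpler analogue of the presence-only identifiability problem discussed above (the data and model are invented for illustration): when a covariate perfectly separates the 1s from the 0s, the log-likelihood increases monotonically as the slope grows, so no finite maximizer exists.

```python
import numpy as np

x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # perfectly separated at x = 0

def loglik(beta):
    """Logistic log-likelihood for slope beta (no intercept)."""
    z = np.clip(beta * x, -30.0, 30.0)         # clip to avoid overflow
    p = 1.0 / (1.0 + np.exp(-z))
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

# The likelihood keeps climbing toward 0 as beta -> infinity.
lls = [loglik(b) for b in (1.0, 5.0, 25.0, 125.0)]
monotone = all(a < b for a, b in zip(lls, lls[1:]))
```

This is why penalized or Bayesian fits still return coefficients in such cases, yet the estimates depend heavily on the penalty or prior, as the abstract warns.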
A Goal Oriented Approach for Modeling and Analyzing Security Trade-Offs
NASA Astrophysics Data System (ADS)
Elahi, Golnaz; Yu, Eric
In designing software systems, security is typically only one design objective among many. It may compete with other objectives such as functionality, usability, and performance. Too often, security mechanisms such as firewalls, access control, or encryption are adopted without explicit recognition of competing design objectives and their origins in stakeholder interests. Recently, there has been increasing acknowledgement that security is ultimately about trade-offs. One can only aim for "good enough" security, given the competing demands from many parties. In this paper, we examine how conceptual modeling can provide explicit and systematic support for analyzing security trade-offs. After considering the desirable criteria for conceptual modeling methods, we examine several existing approaches for dealing with security trade-offs. From analyzing the limitations of existing methods, we propose an extension to the i* framework for security trade-off analysis, taking advantage of its multi-agent and goal orientation. The method was applied to several case studies used to exemplify existing approaches.
Methodological Issues in Questionnaire Design.
Song, Youngshin; Son, Youn Jung; Oh, Doonam
2015-06-01
The process of designing a questionnaire is complicated. Many questionnaires on nursing phenomena have been developed and used by nursing researchers. The purpose of this paper was to discuss questionnaire design and factors that should be considered when using existing scales. Methodological issues were discussed, such as factors in the design of questions, steps in developing questionnaires, wording and formatting methods for items, and administration methods. How to use existing scales, how to facilitate cultural adaptation, and how to prevent socially desirable responding were discussed. Moreover, the triangulation method in questionnaire development was introduced. Steps were recommended for designing questions, such as appropriately operationalizing key concepts for the target population, clearly formatting response options, generating items and confirming final items through face or content validity, sufficiently piloting the questionnaire using item analysis, demonstrating reliability and validity, finalizing the scale, and training the administrator. Psychometric properties and cultural equivalence should be evaluated prior to administration when using an existing questionnaire and performing cultural adaptation. In the context of well-defined nursing phenomena, logical and systematic methods will contribute to the development of simple and precise questionnaires.
Existence and exponential stability of traveling waves for delayed reaction-diffusion systems
NASA Astrophysics Data System (ADS)
Hsu, Cheng-Hsiung; Yang, Tzi-Sheng; Yu, Zhixian
2018-03-01
The purpose of this work is to investigate the existence and exponential stability of traveling wave solutions for general delayed multi-component reaction-diffusion systems. Following the monotone iteration scheme via an explicit construction of a pair of upper and lower solutions, we first obtain the existence of monostable traveling wave solutions connecting two different equilibria. Then, applying the techniques of weighted energy method and comparison principle, we show that all solutions of the Cauchy problem for the considered systems converge exponentially to traveling wave solutions provided that the initial perturbations around the traveling wave fronts belong to a suitable weighted Sobolev space.
Theoretical Background and Prognostic Modeling for Benchmarking SHM Sensors for Composite Structures
2010-10-01
…minimum flaw size can be detected by the existing SHM-based monitoring methods. Sandwich panels with foam, WebCore, and honeycomb structures were considered for use in this study. …Whether it be hat-stiffened, corrugated sandwich, honeycomb sandwich, or foam-filled sandwich, all composite structures have one basic handicap in… Eigenmode frequency
Offline signature verification using convolution Siamese network
NASA Astrophysics Data System (ADS)
Xing, Zi-Jian; Yin, Fei; Wu, Yi-Chao; Liu, Cheng-Lin
2018-04-01
This paper presents an offline signature verification approach using a convolutional Siamese neural network. Unlike existing methods, which treat feature extraction and metric learning as two independent stages, we adopt a deep-learning based framework that combines the two stages and can be trained end-to-end. The experimental results on two public offline databases (GPDSsynthetic and CEDAR) demonstrate the superiority of our method on the offline signature verification problem.
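A minimal numpy sketch of the Siamese/contrastive idea (the one-layer "network", sizes, margin, and data here are invented for illustration; the paper's model is a convolutional network trained end-to-end):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 8))    # shared "network": one linear layer

def embed(x):
    """Both branches apply the same weights (the Siamese property)."""
    return np.tanh(x @ W)

def contrastive_loss(x1, x2, same, m=1.0):
    """Pull genuine pairs together; push forgeries apart by margin m."""
    d = np.linalg.norm(embed(x1) - embed(x2))
    return 0.5 * d**2 if same else 0.5 * max(m - d, 0.0)**2

a = rng.normal(size=16)
genuine = a + 0.05 * rng.normal(size=16)   # near-duplicate "signature"
forgery = rng.normal(size=16)              # unrelated "signature"

loss_same = contrastive_loss(a, genuine, same=True)
loss_diff = contrastive_loss(a, forgery, same=False)
```

Training minimizes the sum of such pair losses, so that embedding distance itself becomes the verification score; at test time a pair is accepted if its distance falls below a threshold.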
Estimating the Cost of Standardized Student Testing in the United States.
ERIC Educational Resources Information Center
Phelps, Richard P.
2000-01-01
Describes and contrasts different methods of estimating costs of standardized testing. Using a cost-accounting approach, compares gross and marginal costs and considers testing objects (test materials and services, personnel and student time, and administrative/building overhead). Social marginal costs of replacing existing tests with a national…
Does Science Presuppose Naturalism (or Anything at All)?
ERIC Educational Resources Information Center
Fishman, Yonatan I.; Boudry, Maarten
2013-01-01
Several scientists, scientific institutions, and philosophers have argued that science is committed to Methodological Naturalism (MN), the view that science, by virtue of its methods, is limited to studying "natural" phenomena and cannot consider or evaluate hypotheses that refer to supernatural entities. While they may in fact exist, gods,…
Adapting Training to Meet the Preferred Learning Styles of Different Generations
ERIC Educational Resources Information Center
Urick, Michael
2017-01-01
This article considers how training professionals can respond to differences in training preferences between generational groups. It adopts two methods. First, it surveys the existing research and finds generally that preferences for training approaches can differ between groups and specifically that younger employees are perceived to leverage…
ERIC Educational Resources Information Center
Semali, Ladislaus M.; Hristova, Adelina; Owiny, Sylvia A.
2015-01-01
This study examines the relationship between informal science and indigenous innovations in local communities in which students matured. The discussion considers methods for bridging the gap that exists between parents' understanding of informal science ("Ubunifu") and what students learn in secondary schools in Kenya, Tanzania, and…
NASA Astrophysics Data System (ADS)
Gérard, Christian; Wrochna, Michał
2017-08-01
We consider the massive Klein-Gordon equation on a class of asymptotically static spacetimes (in the long range sense) with Cauchy surface of bounded geometry. We prove the existence and Hadamard property of the in and out states constructed by scattering theory methods.
Employee Perspectives on MOOCs for Workplace Learning
ERIC Educational Resources Information Center
Egloffstein, Marc; Ifenthaler, Dirk
2017-01-01
Massive Open Online Courses (MOOCs) can be considered a rather novel method in digital workplace learning, and there is as yet little empirical evidence on the acceptance and effectiveness of MOOCs in professional learning. In addition to existing findings on employers' attitudes, this study seeks to investigate the employee perspective towards…
Multidimensional Scaling of High School Students' Perceptions of Academic Dishonesty
ERIC Educational Resources Information Center
Schmelkin, Liora Pedhazur; Gilbert, Kimberly A.; Silva, Rebecca
2010-01-01
Although cheating on tests and other forms of academic dishonesty are considered rampant, no standard definition of academic dishonesty exists. The current study was conducted to investigate the perceptions of academic dishonesty in high school students, utilizing an innovative methodology, multidimensional scaling (MDS). Two methods were used to…
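For reference, the mechanics of MDS can be sketched in a few lines. This is the classical (Torgerson) metric variant on toy one-dimensional data, a simpler relative of the nonmetric MDS usually applied to perception judgments like those in the study above:

```python
import numpy as np

# Points on a line; their distance matrix should embed back onto a line.
pts = np.array([0.0, 1.0, 3.0, 6.0])
D = np.abs(pts[:, None] - pts[None, :])      # pairwise dissimilarities

n = len(pts)
J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
B = -0.5 * J @ (D**2) @ J                    # double-centered Gram matrix
vals, vecs = np.linalg.eigh(B)               # eigh returns ascending eigenvalues
order = np.argsort(vals)[::-1]
coords = vecs[:, order[:1]] * np.sqrt(vals[order[:1]])  # 1-D embedding

D_hat = np.abs(coords - coords.T)            # distances among embedded points
```

Because the input dissimilarities here are exact line distances, the one-dimensional embedding reproduces them; with perceptual dissimilarity judgments, the recovered dimensions are instead interpreted as the facets along which respondents organize the items.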
Should compulsive sexual behavior be considered an addiction?
Kraus, Shane W.; Voon, Valerie; Potenza, Marc N.
2016-01-01
Aims To review the evidence base for classifying compulsive sexual behavior (CSB) as a non-substance or “behavioral” addiction. Methods Data from multiple domains (e.g., epidemiological, phenomenological, clinical, biological) are reviewed and considered with respect to data from substance and gambling addictions. Results Overlapping features exist between CSB and substance-use disorders. Common neurotransmitter systems may contribute to CSB and substance-use disorders, and recent neuroimaging studies highlight similarities relating to craving and attentional biases. Similar pharmacological and psychotherapeutic treatments may be applicable to CSB and substance addictions, although considerable gaps in knowledge currently exist. Conclusions Despite the growing body of research linking compulsive sexual behavior to substance addictions, significant gaps in understanding continue to complicate classification of compulsive sexual behaviour as an addiction. PMID:26893127
Leung, Elvis M K; Tang, Phyllis N Y; Ye, Yuran; Chan, Wan
2013-10-16
2-Alkylcyclobutanones (2-ACBs) have long been considered unique radiolytic products that can be used as indicators for irradiated food identification. A recent report on the natural existence of 2-ACBs in non-irradiated nutmeg and cashew nut samples aroused worldwide concern because it contradicts the general belief that 2-ACBs are specific to irradiated food. The goal of this study is to test the natural existence of 2-ACBs in nut samples using our newly developed liquid chromatography-tandem mass spectrometry (LC-MS/MS) method with enhanced analytical sensitivity and selectivity (Ye, Y.; Liu, H.; Horvatovich, P.; Chan, W. Liquid chromatography-electrospray ionization tandem mass spectrometric analysis of 2-alkylcyclobutanones in irradiated chicken by precolumn derivatization with hydroxylamine. J. Agric. Food Chem. 2013, 61, 5758-5763). The validated method was applied to identify 2-dodecylcyclobutanone (2-DCB) and 2-tetradecylcyclobutanone (2-TCB) in nutmeg, cashew nut, pine nut, and apricot kernel samples (n = 22) of different origins. Our study reveals that 2-DCB and 2-TCB either do not exist naturally or exist at concentrations below the detection limit of the existing method. Thus, 2-DCB and 2-TCB remain valid biomarkers for identifying irradiated food.
Solution of the Bagley Torvik equation by fractional DTM
NASA Astrophysics Data System (ADS)
Arora, Geeta; Pratiksha
2017-07-01
In this paper, the fractional differential transform method (DTM) is applied to the Bagley-Torvik equation. This equation models the viscoelastic behavior of geological strata, metals, glasses, etc., and describes the motion of a rigid plate immersed in a Newtonian fluid. DTM is a simple, reliable, and efficient method that gives a series solution. The Caputo fractional derivative is considered throughout this work. Two examples are given to demonstrate the validity and applicability of the method, and comparisons are made with existing results.
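For reference, the Bagley-Torvik equation in its common general form (generic coefficients A, B, C; the paper's specific examples may differ), with the half-order term taken in the Caputo sense:

```latex
A\,y''(t) + B\,D^{3/2}y(t) + C\,y(t) = f(t),
\qquad
D^{3/2}y(t) \;=\; \frac{1}{\Gamma(1/2)}\int_0^t \frac{y''(s)}{\sqrt{t-s}}\,ds .
```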
Approximate Model Checking of PCTL Involving Unbounded Path Properties
NASA Astrophysics Data System (ADS)
Basu, Samik; Ghosh, Arka P.; He, Ru
We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as
NASA Astrophysics Data System (ADS)
Ko, Dae-Eun; Shin, Sang-Hoon
2017-11-01
Spherical LNG tanks having many advantages such as structural safety are used as a cargo containment system of LNG carriers. However, it is practically difficult to fabricate perfectly spherical tanks of different sizes in the yard. The most effective method of manufacturing LNG tanks of various capacities is to insert a cylindrical part at the center of existing spherical tanks. While a simplified high-precision analysis method for the initial design of the spherical tanks has been developed for both static and dynamic loads, in the case of spherical tanks with a cylindrical central part, the analysis method available only considers static loads. The purpose of the present study is to derive the dynamic pressure distribution due to horizontal acceleration, which is essential for developing an analysis method that considers dynamic loads as well.
Design sensitivity analysis using EAL. Part 1: Conventional design parameters
NASA Technical Reports Server (NTRS)
Dopker, B.; Choi, Kyung K.; Lee, J.
1986-01-01
A numerical implementation of design sensitivity analysis of built-up structures is presented, using the versatility and convenience of an existing finite element structural analysis code and its database management system. The finite element code used in the implementation presented is the Engineering Analysis Language (EAL), which is based on a hybrid method of analysis. It was shown that design sensitivity computations can be carried out using the database management system of EAL, without writing a separate program or a separate database. Conventional (sizing) design parameters such as the cross-sectional area of beams or the thickness of plates and plane elastic solid components are considered. Compliance, displacement, and stress functionals are considered as performance criteria. The method presented is being extended to implement shape design sensitivity analysis using a domain method and a design component method.
A Method for Search Engine Selection using Thesaurus for Selective Meta-Search Engine
NASA Astrophysics Data System (ADS)
Goto, Shoji; Ozono, Tadachika; Shintani, Toramatsu
In this paper, we propose a new method for selecting search engines on the WWW for a selective meta-search engine. A selective meta-search engine needs a method for selecting search engines appropriate to users' queries. Most existing methods use statistical data such as document frequency, and may select inappropriate search engines if a query contains polysemous words. In this paper, we describe a search engine selection method based on a thesaurus. In our method, a thesaurus is constructed from the documents in a search engine and is used as the source description of that search engine. The form of a particular thesaurus depends on the documents used for its construction. Our method enables search engine selection that considers relationships between terms, and thereby overcomes the problems caused by polysemous words. Further, our method does not need a centralized broker maintaining data, such as document frequency, for all search engines. As a result, it is easy to add a new search engine, and meta-search engines become more scalable with our method than with other existing methods.
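A toy sketch of the selection idea (the engines, term relations, and scoring rule are all invented for illustration): each engine's thesaurus acts as its source description, and an engine is preferred when the query terms are related within that engine's thesaurus, which disambiguates polysemous words such as "java":

```python
# Each engine is described by a tiny term-relation set built from its documents.
thesauri = {
    "engine_food": {("java", "coffee"), ("coffee", "roast")},
    "engine_dev": {("java", "programming"), ("programming", "compiler")},
}

def related(thesaurus, a, b):
    """True if the two terms are linked in this engine's thesaurus."""
    return (a, b) in thesaurus or (b, a) in thesaurus

def score(thesaurus, query_terms):
    """Count query-term pairs that are related within the engine's thesaurus."""
    pairs = [(a, b) for i, a in enumerate(query_terms)
             for b in query_terms[i + 1:]]
    return sum(related(thesaurus, a, b) for a, b in pairs)

query = ["java", "coffee"]
best = max(thesauri, key=lambda e: score(thesauri[e], query))
```

Because each engine carries its own term-relation structure, no central broker has to aggregate statistics across engines, which is the scalability point the abstract makes.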
Model Predictive Control considering Reachable Range of Wheels for Leg / Wheel Mobile Robots
NASA Astrophysics Data System (ADS)
Suzuki, Naito; Nonaka, Kenichiro; Sekiguchi, Kazuma
2016-09-01
Obstacle avoidance is one of the important tasks for mobile robots. In this paper, we study obstacle avoidance control for mobile robots equipped with four legs comprised of a three-DoF SCARA leg/wheel mechanism, which enables the robot to change its shape to adapt to the environment. Our previous method achieves obstacle avoidance by model predictive control (MPC) considering obstacle size and lateral wheel positions. However, it does not ensure the existence of joint angles that achieve the reference wheel positions calculated by the MPC. In this study, we propose a model predictive control that considers the reachable ranges of the wheel positions by combining multiple linear constraints, where each reachable range is approximated as a convex trapezoid. We thus formulate the MPC as a quadratic program with linear constraints for the nonlinear problem of longitudinal and lateral wheel position control. The optimization computes the reference wheel positions, while each joint angle is determined by inverse kinematics. By considering the reachable ranges explicitly, the optimal joint angles are calculated, which enables the wheels to reach the reference wheel positions. We verify the advantages of the proposed method by comparing it with the previous method through numerical simulations.
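The "reachable range as a convex trapezoid" idea amounts to encoding the region as linear inequalities A p <= b, the form an MPC can impose as constraints of a quadratic program. The trapezoid below is an invented example, not the robot's actual leg geometry:

```python
import numpy as np

# Trapezoid with vertices (0,0), (4,0), (3,2), (1,2), as half-planes A p <= b.
A = np.array([[0.0, -1.0],    # bottom edge: y >= 0
              [0.0, 1.0],     # top edge:    y <= 2
              [-2.0, 1.0],    # left edge:   2x - y >= 0
              [2.0, 1.0]])    # right edge:  2x + y <= 8
b = np.array([0.0, 2.0, 0.0, 8.0])

def reachable(p):
    """Feasibility check an MPC would impose as linear constraints."""
    return bool(np.all(A @ p <= b + 1e-9))

inside = reachable(np.array([2.0, 1.0]))
outside = reachable(np.array([4.5, 1.0]))
```

Stacking one such (A, b) pair per wheel keeps the optimization a quadratic program with linear constraints, while guaranteeing that inverse kinematics can realize every reference position it outputs.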
A multi-scale network method for two-phase flow in porous media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khayrat, Karim, E-mail: khayratk@ifd.mavt.ethz.ch; Jenny, Patrick
Pore-network models of porous media are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore networks, it is crucial that the networks be large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. For this purpose, a multi-scale pore-network (MSPN) method, which takes into account flow-rate effects and can simulate larger domains compared to existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each sub-network consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the sub-networks are computed. Lastly, using the fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.
Sampling Frequency Optimisation and Nonlinear Distortion Mitigation in Subsampling Receiver
NASA Astrophysics Data System (ADS)
Castanheira, Pedro Xavier Melo Fernandes
Subsampling receivers utilise the subsampling method to down convert signals from radio frequency (RF) to a lower frequency location. Multiple signals can also be down converted using the subsampling receiver, but using the incorrect subsampling frequency could result in signals aliasing one another after down conversion. The existing method for subsampling multiband signals focused on down converting all the signals without any aliasing between the signals. The case considered initially was a dual band signal, and then it was further extended to a more general multiband case. In this thesis, a new method is proposed with the assumption that only one signal is needed to not overlap the other multiband signals that are down converted at the same time. The proposed method will introduce unique formulas using the said assumption to calculate the valid subsampling frequencies, ensuring that the target signal is not aliased by the other signals. Simulation results show that the proposed method will provide lower valid subsampling frequencies for down conversion compared to the existing methods.
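For the single-band case mentioned above, the valid subsampling rates follow a standard textbook constraint; the sketch below illustrates the kind of feasibility search involved (the thesis's multiband formulas are more general, and the band edges here are invented):

```python
# Single-band subsampling constraint: a band [fL, fH] can be subsampled
# without self-aliasing at rates fs with
#   2*fH/n <= fs <= 2*fL/(n-1)
# for some integer n with 2 <= n <= floor(fH / (fH - fL)).
def valid_ranges(f_low, f_high):
    bandwidth = f_high - f_low
    n_max = int(f_high // bandwidth)
    ranges = []
    for n in range(2, n_max + 1):       # n = 1 is ordinary Nyquist sampling
        lo, hi = 2 * f_high / n, 2 * f_low / (n - 1)
        if lo <= hi:
            ranges.append((lo, hi))
    return ranges

# A 20-25 MHz band: rates far below 2*f_high = 50 MHz become admissible.
ranges = valid_ranges(20e6, 25e6)
```

The multiband problem intersects such windows across several bands; the thesis's relaxation, protecting only one target signal from aliasing, enlarges the feasible set and hence lowers the admissible rates.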
NASA Astrophysics Data System (ADS)
Gu, Yameng; Zhang, Xuming
2017-05-01
Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).
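The weighted-averaging core of non-local means can be sketched in 1-D. Here plain patch distances stand in for the paper's SCM/NMI feature similarity and multiframe search, and the signal and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 20)           # piecewise-constant signal
noisy = clean + 0.15 * rng.normal(size=clean.size)

def nlm_1d(x, half_patch=2, h=0.15):
    """Each sample becomes a similarity-weighted average of all samples."""
    pad = np.pad(x, half_patch, mode="edge")
    patches = np.stack([pad[i:i + 2 * half_patch + 1] for i in range(x.size)])
    out = np.empty_like(x)
    for i in range(x.size):
        d2 = np.mean((patches - patches[i])**2, axis=1)
        w = np.exp(-d2 / h**2)                   # patch-similarity weights
        out[i] = np.sum(w * x) / np.sum(w)
    return out

den = nlm_1d(noisy)
err_before = np.mean((noisy - clean)**2)
err_after = np.mean((den - clean)**2)
```

Because dissimilar patches receive near-zero weight, flat regions are averaged strongly while the edge is preserved; the paper replaces the raw patch distance with rotation- and scale-invariant NMI features computed from SCM pulse outputs across three frames.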
The Taguchi Method Application to Improve the Quality of a Sustainable Process
NASA Astrophysics Data System (ADS)
Titu, A. M.; Sandu, A. V.; Pop, A. B.; Titu, S.; Ciungu, T. C.
2018-06-01
Taguchi's method has long been used to improve the quality of the processes and products being analyzed. This research addresses an unusual situation, namely the modeling of selected technical parameters in a process that is intended to be sustainable, improving process quality and ensuring quality by means of an experimental research method. Modern experimental techniques can be applied in any field, and this study reflects the benefits of the interaction between the principles of agricultural sustainability and the application of Taguchi's method. The experimental method used in this practical study combines engineering techniques with experimental statistical modeling to achieve rapid improvement of quality costs, in effect seeking optimization of the existing processes and the main technical parameters. The paper is a technical study that promotes a technical experiment using the Taguchi method, considered to be effective since it allows rapid achievement of 70 to 90% of the desired optimization of the technical parameters. The missing 10 to 30 percent can be obtained with one or two complementary experiments, limited to the 2 to 4 technical parameters considered the most influential. Applying Taguchi's method, in engineering and beyond, allows the simultaneous study, in the same experiment, of the influence factors considered most important in different combinations and, at the same time, the determination of each factor's contribution.
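A minimal illustration of the Taguchi machinery referred to above (the array is the standard L4(2^3) orthogonal array; the response data and factor meanings are invented): compute a larger-is-better signal-to-noise ratio per run, then pick each factor's level by the higher mean S/N:

```python
import numpy as np

# L4(2^3) orthogonal array: rows are runs, columns are factors A, B, C.
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])
# Two replicate measurements per run (hypothetical yields):
y = np.array([[59.0, 61.0],
              [79.0, 81.0],
              [64.0, 66.0],
              [61.0, 63.0]])

def sn_larger_is_better(yi):
    """Taguchi S/N ratio for a larger-is-better response, in dB."""
    return -10.0 * np.log10(np.mean(1.0 / yi**2))

sn = np.array([sn_larger_is_better(run) for run in y])

# Mean S/N at each level of each factor; the higher mean is the preferred level.
best_levels = [int(np.argmax([sn[L4[:, f] == lvl].mean() for lvl in (0, 1)]))
               for f in range(3)]
```

The orthogonal array is what lets four runs estimate all three factor effects at once; a confirmation run at the selected levels would normally follow.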
Computation of Relative Magnetic Helicity in Spherical Coordinates
NASA Astrophysics Data System (ADS)
Moraitis, Kostas; Pariat, Étienne; Savcheva, Antonia; Valori, Gherardo
2018-06-01
Magnetic helicity is a quantity of great importance in solar studies because it is conserved in ideal magnetohydrodynamics. While many methods for computing magnetic helicity in Cartesian finite volumes exist, in spherical coordinates, the natural coordinate system for solar applications, helicity is only treated approximately. We present here a method for properly computing the relative magnetic helicity in spherical geometry. The volumes considered are finite, of shell or wedge shape, and the three-dimensional magnetic field is considered to be fully known throughout the studied domain. Testing of the method with well-known, semi-analytic, force-free magnetic-field models reveals that it has excellent accuracy. Further application to a set of nonlinear force-free reconstructions of the magnetic field of solar active regions and comparison with an approximate method used in the past indicates that the proposed method can be significantly more accurate, thus making our method a promising tool in helicity studies that employ spherical geometry. Additionally, we determine and discuss the applicability range of the approximate method.
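The quantity in question is presumably the standard gauge-invariant relative helicity in the Berger-Field/Finn-Antonsen form, stated here for reference (the paper's precise conventions may differ):

```latex
H_V \;=\; \int_V \left(\mathbf{A} + \mathbf{A}_p\right)\cdot
\left(\mathbf{B} - \mathbf{B}_p\right)\, dV,
\qquad
\left(\mathbf{B} - \mathbf{B}_p\right)\cdot\hat{\mathbf{n}} = 0
\ \ \text{on } \partial V,
```

where B = ∇ × A is the studied field and B_p = ∇ × A_p is the reference (usually potential) field matching its normal component on the volume boundary; the boundary condition is what makes H_V gauge invariant.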
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ayissi, Raoul Domingo, E-mail: raoulayissi@yahoo.fr; Noutchegueme, Norbert, E-mail: nnoutch@yahoo.fr
Global regular solutions of the Einstein-Boltzmann equation on a magnetized Bianchi type-I cosmological model with the cosmological constant are investigated. We suppose that the metric is locally rotationally symmetric. The Einstein-Boltzmann equation has already been considered by several authors. However, Bancel and Choquet-Bruhat [Ann. Henri Poincaré XVIII(3), 263 (1973); Commun. Math. Phys. 33, 83 (1973)] proved only local existence, and only for the nonrelativistic Boltzmann equation. Mucha [Global existence of solutions of the Einstein-Boltzmann equation in the spatially homogeneous case. Evolution equation, existence, regularity and singularities (Banach Center Publications, Institute of Mathematics, Polish Academy of Science, 2000), Vol. 52] obtained a global existence result for the relativistic Boltzmann equation coupled with the Einstein equations using the Yosida operator, but unfortunately conflated it with the nonrelativistic case. Noutchegueme and Dongho [Classical Quantum Gravity 23, 2979 (2006)] and Noutchegueme, Dongho, and Takou [Gen. Relativ. Gravitation 37, 2047 (2005)] obtained a global-in-time solution, but still using the Yosida operator and considering only the uncharged case. Noutchegueme and Ayissi [Adv. Stud. Theor. Phys. 4, 855 (2010)] also proved global existence of solutions to the Maxwell-Boltzmann system using the characteristic method. In this paper, using a method totally different from those of the works cited above, we obtain the global-in-time existence and uniqueness of a regular solution to the Einstein-Maxwell-Boltzmann system with the cosmological constant. We define and use weighted separable Sobolev spaces for the Boltzmann equation and special spaces for the Einstein equations, and we display all the proofs leading to the global existence theorems.
NASA Astrophysics Data System (ADS)
Ayissi, Raoul Domingo; Noutchegueme, Norbert
2015-01-01
Optical Sensors and Methods for Underwater 3D Reconstruction
Massot-Campos, Miquel; Oliver-Codina, Gabriel
2015-01-01
This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389
NASA Astrophysics Data System (ADS)
Wang, Bin; Wu, Xinyuan
2014-11-01
In this paper we consider multi-frequency highly oscillatory second-order differential equations x″(t) + Mx(t) = f(t, x(t), x′(t)), where the high-frequency oscillations are generated by the linear part Mx(t) and M is positive semi-definite (not necessarily nonsingular). Filon-type methods are known to be an effective approach to numerically solving highly oscillatory problems. Unfortunately, existing Filon-type asymptotic methods fail to apply to these highly oscillatory second-order differential equations when M is singular. We propose an efficient improvement of the existing Filon-type asymptotic methods so that the improved methods can numerically solve this class of multi-frequency highly oscillatory systems with a singular matrix M. The improved Filon-type asymptotic methods are designed by combining Filon-type methods with asymptotic methods based on the variation-of-constants formula. We also present one efficient and practical improved Filon-type asymptotic method that can be performed at lower cost. Accompanying numerical results show the remarkable efficiency of the improved methods.
2010-05-01
irreducible, by the Perron–Frobenius theorem (see, for example, Theorem 8.4.4 in [28]), the eigenvalue 1 is simple. Next, the rank-one matrix Q has the... We refer to (2.1) as the scaling equation. Although algorithms must use A, existence and uniqueness theory need consider only the nonnegative matrix... B. If p = 1 and A is nonnegative, then A = B. We reserve the term binormalization for the case p = 2. We say A is scalable if there exists x > 0
On the existence of mosaic-skeleton approximations for discrete analogues of integral operators
NASA Astrophysics Data System (ADS)
Kashirin, A. A.; Taltykina, M. Yu.
2017-09-01
Exterior three-dimensional Dirichlet problems for the Laplace and Helmholtz equations are considered. By applying methods of potential theory, they are reduced to equivalent Fredholm boundary integral equations of the first kind, for which discrete analogues, i.e., systems of linear algebraic equations (SLAEs), are constructed. The existence of mosaic-skeleton approximations for the matrices of the indicated systems is proved. These approximations make it possible to reduce the computational complexity of an iterative solution of the SLAEs. Numerical experiments estimating the capabilities of the proposed approach are described.
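The mosaic-skeleton idea can be illustrated numerically: a far-field block of a discretized integral operator has rapidly decaying singular values, so it admits an accurate low-rank ("skeleton") factorization. The sketch below is our own illustration with the 1/|x − y| Laplace kernel and a truncated SVD; production codes use cheaper cross/skeleton algorithms, and none of this is the authors' implementation.

```python
import numpy as np

# Far-field block of the 1/|x - y| kernel on two well-separated point clusters.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 3))          # sources in the unit cube
Y = rng.uniform(0.0, 1.0, size=(200, 3)) + 10.0   # targets, well separated

# Dense kernel block A_ij = 1 / |x_i - y_j|
A = 1.0 / np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)

# Truncated SVD: keep the smallest rank reaching relative accuracy 1e-8
U, s, Vt = np.linalg.svd(A, full_matrices=False)
rank = int(np.searchsorted(-s, -s[0] * 1e-8))     # first i with s[i] <= s[0]*1e-8
A_lowrank = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

rel_err = np.linalg.norm(A - A_lowrank) / np.linalg.norm(A)
print(rank, rel_err)  # the rank is far below min(m, n) = 200
```

Replacing such admissible blocks by rank-r factors reduces storage and matrix-vector cost from O(mn) to O(r(m + n)), which is what makes the iterative SLAE solution cheaper.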
On the Possibility of Acceleration of Polarized Protons in the Synchrotron Nuclotron
NASA Astrophysics Data System (ADS)
Shatunov, Yu. M.; Koop, I. A.; Otboev, A. V.; Mane, S. P.; Shatunov, P. Yu.
2018-05-01
One of the main tasks of the NICA project is to produce colliding beams of polarized protons. It is planned to accelerate polarized protons from the source to the maximum energy in the existing proton synchrotron. We consider all depolarizing spin resonances in the Nuclotron and propose methods to overcome them.
Practitioner Review: The Assessment of Bipolar Disorder in Children and Adolescents
ERIC Educational Resources Information Center
Baroni, Argelinda; Lunsford, Jessica R.; Luckenbaugh, David A.; Towbin, Kenneth E.; Leibenluft, Ellen
2009-01-01
Background: An increasing number of youth are being diagnosed with, and treated for, bipolar disorder (BD). Controversy exists about whether youth with non-episodic irritability and symptoms of attention deficit hyperactivity disorder (ADHD) should be considered to have a developmental presentation of mania. Method: A selective review of the…
40 CFR 63.1365 - Test methods and initial compliance procedures.
Code of Federal Regulations, 2012 CFR
2012-07-01
... temperature of 760 °C, the design evaluation must document that these conditions exist. (ii) For a combustion... autoignition temperature of the organic HAP, must consider the vent stream flow rate, and must establish the design minimum and average temperature in the combustion zone and the combustion zone residence time. (B...
ERIC Educational Resources Information Center
Wu, Ting-Ting
2018-01-01
Memorizing English vocabulary is often considered uninteresting, and learners frequently lack motivation during learning activities. Moreover, most vocabulary practice systems automatically select words from articles and do not provide integrated model methods for students. Therefore, this study constructed a mobile game-based English vocabulary practice…
Identifying influential spreaders in complex networks based on kshell hybrid method
NASA Astrophysics Data System (ADS)
Namtirtha, Amrita; Dutta, Animesh; Dutta, Biswanath
2018-06-01
Influential spreaders are the key players in maximizing or controlling spreading in a complex network. Identifying influential spreaders using the kshell decomposition method has become very popular recently. In the literature, the core nodes of a network, i.e., those with the largest kshell index, are considered the most influential spreaders. We have studied the kshell method and the spreading dynamics of nodes using the Susceptible-Infected-Recovered (SIR) epidemic model to understand the behavior of influential spreaders in terms of their topological location in the network. From this study, we have found that not every node in the core area is a most influential spreader; even a strategically placed lower-shell node can be a most influential spreader. Moreover, the core area can also be situated at the periphery of the network. The existing indexing methods are designed to identify the most influential spreaders only from core nodes, not from lower shells. In this work, we propose a kshell hybrid method to identify highly influential spreaders not only from the core but also from lower shells. The proposed method combines parameters such as kshell power, node degree, contact distance, and many levels of neighbors' influence potential. The proposed method is evaluated on nine real-world network datasets. In terms of spreading dynamics, the experimental results show the superiority of the proposed method over other existing indexing methods such as the kshell method, neighborhood coreness centrality, and mixed degree decomposition. Furthermore, the proposed method can also be applied to large-scale networks by considering three levels of neighbors' influence potential.
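As a minimal sketch of the two ingredients discussed above (not the authors' hybrid index), the following computes k-shell indices by iterative peeling and estimates a node's spreading power by average SIR outbreak size; the toy graph, infection rate, and run count are our own choices.

```python
import random
from collections import defaultdict

def kshell(adj):
    """Return {node: shell index} by repeatedly peeling minimum-degree nodes."""
    deg = {v: len(ns) for v, ns in adj.items()}
    alive = set(adj)
    shell, k = {}, 0
    while alive:
        k = max(k, min(deg[v] for v in alive))
        peel = [v for v in alive if deg[v] <= k]
        while peel:
            v = peel.pop()
            if v not in alive:
                continue
            alive.discard(v)
            shell[v] = k
            for u in adj[v]:
                if u in alive:
                    deg[u] -= 1
                    if deg[u] <= k:
                        peel.append(u)
    return shell

def sir_spread(adj, seed, beta=0.3, runs=200, rng=random.Random(1)):
    """Average number of ever-infected nodes when `seed` starts the epidemic."""
    total = 0
    for _ in range(runs):
        infected, recovered = {seed}, set()
        while infected:
            nxt = set()
            for v in infected:
                for u in adj[v]:
                    if u not in infected and u not in recovered and rng.random() < beta:
                        nxt.add(u)
            recovered |= infected
            infected = nxt - recovered
        total += len(recovered)
    return total / runs

# Toy graph: a dense core (nodes 0-3) plus a pendant chain (4, 5).
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (4, 5)]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

shells = kshell(adj)
core_spread = sir_spread(adj, 0)   # seed in the 2-shell core
leaf_spread = sir_spread(adj, 5)   # seed on the periphery
print(shells, core_spread, leaf_spread)
```

On this toy graph the core seed spreads further on average, matching the intuition that kshell index correlates with, but does not fully determine, spreading power.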
NASA Technical Reports Server (NTRS)
Toomarian, N.; Kirkham, Harold
1994-01-01
This report investigates the application of artificial neural networks to the problem of power system stability. The field of artificial intelligence, expert systems, and neural networks is reviewed. Power system operation is discussed with emphasis on stability considerations. Real-time system control has only recently been considered as applicable to stability, using conventional control methods. The report considers the use of artificial neural networks to improve the stability of the power system. The networks are considered as adjuncts and as replacements for existing controllers. The optimal kind of network to use as an adjunct to a generator exciter is discussed.
A new learning paradigm: learning using privileged information.
Vapnik, Vladimir; Vashist, Akshay
2009-01-01
In the Afterword to the second edition of the book "Estimation of Dependences Based on Empirical Data" by V. Vapnik, an advanced learning paradigm called Learning Using Hidden Information (LUHI) was introduced. This Afterword also suggested an extension of the SVM method (the so called SVM(gamma)+ method) to implement algorithms which address the LUHI paradigm (Vapnik, 1982-2006, Sections 2.4.2 and 2.5.3 of the Afterword). See also (Vapnik, Vashist, & Pavlovitch, 2008, 2009) for further development of the algorithms. In contrast to the existing machine learning paradigm where a teacher does not play an important role, the advanced learning paradigm considers some elements of human teaching. In the new paradigm along with examples, a teacher can provide students with hidden information that exists in explanations, comments, comparisons, and so on. This paper discusses details of the new paradigm and corresponding algorithms, introduces some new algorithms, considers several specific forms of privileged information, demonstrates superiority of the new learning paradigm over the classical learning paradigm when solving practical problems, and discusses general questions related to the new ideas.
Community Detection in Complex Networks via Clique Conductance.
Lu, Zhenqi; Wahlström, Johan; Nehorai, Arye
2018-04-13
Network science plays a central role in understanding and modeling complex systems in many areas including physics, sociology, biology, computer science, economics, politics, and neuroscience. One of the most important features of networks is community structure, i.e., clustering of nodes that are locally densely interconnected. Communities reveal the hierarchical organization of nodes, and detecting communities is of great importance in the study of complex systems. Most existing community-detection methods consider low-order connection patterns at the level of individual links. But high-order connection patterns, at the level of small subnetworks, are generally not considered. In this paper, we develop a novel community-detection method based on cliques, i.e., local complete subnetworks. The proposed method overcomes the deficiencies of previous similar community-detection methods by considering the mathematical properties of cliques. We apply the proposed method to computer-generated graphs and real-world network datasets. When applied to networks with known community structure, the proposed method detects the structure with high fidelity and sensitivity. When applied to networks with no a priori information regarding community structure, the proposed method yields insightful results revealing the organization of these complex networks. We also show that the proposed method is guaranteed to detect near-optimal clusters in the bipartition case.
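A toy illustration of the clique-based view of cut quality (our own simplification using triangles, the smallest non-trivial cliques, and our own normalization; this is not the paper's exact objective):

```python
from itertools import combinations

def triangles(adj):
    """Enumerate all 3-cliques of an undirected graph given as {node: set}."""
    tris = set()
    for v in adj:
        for a, b in combinations(sorted(adj[v]), 2):
            if b in adj[a]:
                tris.add(tuple(sorted((v, a, b))))
    return tris

def triangle_conductance(adj, community):
    """Fraction of triangles touching `community` that the cut splits."""
    S = set(community)
    tris = triangles(adj)
    cut = sum(1 for t in tris if 0 < len(S & set(t)) < 3)
    vol_S = sum(1 for t in tris if S & set(t))
    vol_rest = sum(1 for t in tris if set(t) - S)
    denom = min(vol_S, vol_rest)
    return cut / denom if denom else 0.0

# Two triangle-dense groups joined by one bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
adj = {v: set() for v in range(6)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

c_good = triangle_conductance(adj, {0, 1, 2})      # no triangle is cut
c_bad = triangle_conductance(adj, {0, 1, 2, 3})    # splits the second triangle
print(c_good, c_bad)
```

The natural community {0, 1, 2} cuts no triangle and so has zero triangle conductance, while shifting the boundary by one node splits a triangle and raises the score, which is the high-order signal that link-level conductance misses.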
FIND: difFerential chromatin INteractions Detection using a spatial Poisson process
Chen, Yang; Zhang, Michael Q.
2018-01-01
Polymer-based simulations and experimental studies indicate the existence of a spatial dependency between the adjacent DNA fibers involved in the formation of chromatin loops. However, the existing strategies for detecting differential chromatin interactions assume that the interacting segments are spatially independent from the other segments nearby. To resolve this issue, we developed a new computational method, FIND, which considers the local spatial dependency between interacting loci. FIND uses a spatial Poisson process to detect differential chromatin interactions that show a significant difference in their interaction frequency and the interaction frequency of their neighbors. Simulation and biological data analysis show that FIND outperforms the widely used count-based methods and has a better signal-to-noise ratio. PMID:29440282
Helsel, D.R.
2006-01-01
The most commonly used method in environmental chemistry to deal with values below detection limits is to substitute a fraction of the detection limit for each nondetect. Two decades of research has shown that this fabrication of values produces poor estimates of statistics, and commonly obscures patterns and trends in the data. Papers using substitution may conclude that significant differences, correlations, and regression relationships do not exist, when in fact they do. The reverse may also be true. Fortunately, good alternative methods for dealing with nondetects already exist, and are summarized here with references to original sources. Substituting values for nondetects should be used rarely, and should generally be considered unacceptable in scientific research. There are better ways.
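A minimal sketch of one of the alternatives Helsel summarizes, regression on order statistics (ROS), next to DL/2 substitution; the data, detection limit, and simplified single-limit implementation are our own illustrative choices, not code from the paper.

```python
import math
import random
from statistics import NormalDist, mean

rng = random.Random(42)
true_vals = [rng.lognormvariate(0.0, 1.0) for _ in range(500)]
DL = 1.0                                    # single detection limit
detected = sorted(x for x in true_vals if x >= DL)
n = len(true_vals)
n_nd = n - len(detected)                    # number of nondetects

# Substitution: replace every nondetect with DL/2 (the practice criticized above)
sub_mean = mean(detected + [DL / 2] * n_nd)

# ROS: regress log(detected) on normal scores at Hazen plotting positions,
# then impute the censored tail from the fitted lognormal line.
nd = NormalDist()
q = [nd.inv_cdf((n_nd + i + 0.5) / n) for i in range(len(detected))]
y = [math.log(x) for x in detected]
qm, ym = mean(q), mean(y)
slope = sum((a - qm) * (b - ym) for a, b in zip(q, y)) / sum((a - qm) ** 2 for a in q)
inter = ym - slope * qm
imputed = [math.exp(inter + slope * nd.inv_cdf((i + 0.5) / n)) for i in range(n_nd)]
ros_mean = mean(detected + imputed)

print(round(mean(true_vals), 3), round(sub_mean, 3), round(ros_mean, 3))
```

ROS imputes nondetects from the distribution implied by the detected observations rather than fabricating a constant, so summary statistics reflect the data instead of the analyst's choice of fraction.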
Iris movement based wheel chair control using raspberry pi
NASA Astrophysics Data System (ADS)
Sharma, Jatin; Anbarasu, M.; Chakraborty, Chandan; Shanmugasundaram, M.
2017-11-01
Paralysis is considered a major affliction worldwide. The number of persons who are paralyzed, and therefore dependent on others due to loss of self-mobility, is growing with the population. Quadriplegia is a form of paralysis in which only the eyes can be moved. Much work has been done to help disabled persons live independently. Various methods are used for this purpose, and this paper surveys some of the existing methods along with add-ons to improve the existing system. The add-ons include a system designed using a Raspberry Pi and an IR camera module; OpenCV is used for image processing, and Python is used for programming the Raspberry Pi.
Fuzzy architecture assessment for critical infrastructure resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, George
2012-12-01
This paper presents an approach for the selection of alternative architectures in a connected infrastructure system to increase resilience of the overall infrastructure system. The paper begins with a description of resilience and critical infrastructure, then summarizes existing approaches to resilience, and presents a fuzzy-rule based method of selecting among alternative infrastructure architectures. This methodology includes considerations which are most important when deciding on an approach to resilience. The paper concludes with a proposed approach which builds on existing resilience architecting methods by integrating key system aspects using fuzzy memberships and fuzzy rule sets. This novel approach aids the systems architect in considering resilience for the evaluation of architectures for adoption into the final system architecture.
Fernández-Carrobles, M. Milagro; Tadeo, Irene; Bueno, Gloria; Noguera, Rosa; Déniz, Oscar; Salido, Jesús; García-Rojo, Marcial
2013-01-01
Given that angiogenesis and lymphangiogenesis are strongly related to prognosis in neoplastic and other pathologies, and that the many existing methods provide different results, we aim to construct a morphometric tool that measures different aspects of the shape and size of vascular vessels in a complete and accurate way. The tool presented is based on vessel closing, an essential property for properly characterizing the size and shape of vascular and lymphatic vessels. The method is fast and accurate, improving on existing tools for angiogenesis analysis. The tool also improves the accuracy of vascular density measurements, since the set of endothelial cells forming a vessel is considered as a single object. PMID:24489494
The transport of nuclear power plant components. [via airships
NASA Technical Reports Server (NTRS)
Keating, S. J., Jr.
1975-01-01
The problems of transporting nuclear power plant components to landlocked sites where the usual mode of transport by barge cannot be used are considered. Existing methods of ground-based overland transport are discussed and their costs presented. Components are described and traffic density projections made to the year 2000. Plots of units transported versus distance transported are provided for units booked in 1973 and booked and proposed in 1974. It is shown that, for these cases, overland transport requirements for the industry will be over 5,000,000 ton-miles/year while a projection based on increasing energy demands shows that this figure will increase significantly by the year 2000. The payload size, distances, and costs of existing overland modes are significant enough to consider development of a lighter than air (LTA) mode for transporting NSSS components.
Gravity localization in sine-Gordon braneworlds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cruz, W.T., E-mail: wilamicruz@gmail.com; Maluf, R.V., E-mail: r.v.maluf@fisica.ufc.br; Sousa, L.J.S., E-mail: luisjose@fisica.ufc.br
2016-01-15
In this work we study two types of five-dimensional braneworld models given by sine-Gordon potentials. In both scenarios, the thick brane is generated by a real scalar field coupled to gravity. We focus our investigation on the localization of the graviton field and the behaviour of the massive spectrum. In particular, we analyse the localization of massive modes by means of a relative probability method in a quantum-mechanics context. Initially, considering a scalar field sine-Gordon potential, we find a localized state of the graviton at zero mode. However, when we consider a double sine-Gordon potential, the brane structure is changed, allowing the existence of massive resonant states. The new results show how the existence of an internal structure can aid the emergence of massive resonant modes on the brane.
NASA Astrophysics Data System (ADS)
Jolivet, R.; Simons, M.
2018-02-01
Interferometric synthetic aperture radar time series methods aim to reconstruct time-dependent ground displacements over large areas from sets of interferograms in order to detect transient, periodic, or small-amplitude deformation. Because of computational limitations, most existing methods consider each pixel independently, ignoring important spatial covariances between observations. We describe a framework to reconstruct time series of ground deformation while considering all pixels simultaneously, allowing us to account for spatial covariances, imprecise orbits, and residual atmospheric perturbations. We describe spatial covariances by an exponential decay function dependent of pixel-to-pixel distance. We approximate the impact of imprecise orbit information and residual long-wavelength atmosphere as a low-order polynomial function. Tests on synthetic data illustrate the importance of incorporating full covariances between pixels in order to avoid biased parameter reconstruction. An example of application to the northern Chilean subduction zone highlights the potential of this method.
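A hedged numerical sketch of the key ingredient, generalized least squares under an exponential spatial covariance C_ij = s2·exp(−d_ij/L) (the variable names, synthetic geometry, and parameter values are ours, not the authors'):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
xy = rng.uniform(0, 50, size=(n, 2))              # pixel positions (km)
d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
s2, L = 4.0, 10.0
C = s2 * np.exp(-d / L)                           # exponential spatial covariance

# Synthetic "deformation": offset + linear ramp (orbit-error proxy) + correlated noise
G = np.column_stack([np.ones(n), xy])             # design matrix
m_true = np.array([2.0, 0.05, -0.03])
noise = rng.multivariate_normal(np.zeros(n), C)
dobs = G @ m_true + noise

# GLS solution m = (G^T C^-1 G)^-1 G^T C^-1 d, versus OLS ignoring covariance
Ci = np.linalg.inv(C)
m_gls = np.linalg.solve(G.T @ Ci @ G, G.T @ Ci @ dobs)
m_ols = np.linalg.lstsq(G, dobs, rcond=None)[0]
print(m_true, m_gls, m_ols)
```

Treating pixels independently amounts to setting C to a diagonal matrix; carrying the full C through the inversion is what lets parameter uncertainties and ramp estimates account for correlated atmospheric noise.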
Research on uncertainty evaluation measure and method of voltage sag severity
NASA Astrophysics Data System (ADS)
Liu, X. N.; Wei, J.; Ye, S. Y.; Chen, B.; Long, C.
2018-01-01
Voltage sag is an unavoidable and serious power-quality problem in power systems. This paper provides a general summary and review of the concepts, indices, and evaluation methods concerning voltage sag severity. Considering the complexity and uncertainty of the influencing factors and damage degree, and the characteristics and requirements of voltage sag severity on the source, network, and load sides, the measure concepts and the conditions under which they hold, as well as the evaluation indices and methods of voltage sag severity, are analyzed. Current evaluation techniques, such as stochastic theory and fuzzy logic, as well as their fusion, are reviewed in detail. An index system for voltage sag severity is provided for comprehensive study. The main aim of this paper is to propose a line of research on severity based on advanced uncertainty theory and uncertainty measures. This study may serve as a valuable guide for researchers interested in the domain of voltage sag severity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mcwilliams, A. J.
2015-09-08
This report reviews literature on reprocessing high temperature gas-cooled reactor graphite fuel components. A basic review of the various fuel components used in pebble bed type reactors is provided, along with a survey of synthesis methods for the fabrication of the fuel components. Several disposal options are considered for the graphite pebble fuel elements, including the storage of intact pebbles, volume reduction by separating the graphite from fuel kernels, and complete processing of the pebbles for waste storage. Existing methods for graphite removal are presented and generally consist of mechanical separation techniques, such as crushing and grinding, and chemical techniques, such as acid digestion and oxidation. Potential methods for reprocessing the graphite pebbles include improvements to existing methods and novel technologies that have not previously been investigated for nuclear graphite waste applications. The best overall method will depend on the desired final waste form and needs to factor in technical efficiency, political concerns, cost, and implementation.
Determining Semantically Related Significant Genes.
Taha, Kamal
2014-01-01
A GO relation embodies some aspects of existence dependency. If GO term x is existence-dependent on GO term y, the presence of y implies the presence of x. Therefore, the genes annotated with the function of GO term y are usually functionally and semantically related to the genes annotated with the function of GO term x. A large number of gene set enrichment analysis methods have been developed in recent years for analyzing gene set enrichment. However, most of these methods overlook the structural dependencies between GO terms in the GO graph by not considering the concept of existence dependency. We propose in this paper a biological search engine called RSGSearch that identifies enriched sets of genes annotated with different functions using the concept of existence dependency. We observe that GO term x cannot be existence-dependent on GO term y if x and y have the same specificity (biological characteristics). After encoding into a numeric format the contributions of GO terms annotating target genes to the semantics of their lowest common ancestors (LCAs), RSGSearch uses a microarray experiment to identify the most significant LCA that annotates the result genes. We evaluated RSGSearch experimentally and compared it with five gene set enrichment systems. Results showed marked improvement.
Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.
Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng
2018-05-01
Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.
NASA Astrophysics Data System (ADS)
Bezmaternykh, P. V.; Nikolaev, D. P.; Arlazarov, V. L.
2018-04-01
Rectification of textual blocks, or slant correction, is an important stage of document image processing in OCR systems. This paper considers existing methods and introduces an approach to constructing such algorithms based on Fast Hough Transform analysis. A quality measurement technique is proposed, and results are shown for both printed and handwritten textual block processing as part of an industrial system for identity document recognition on mobile devices.
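The principle behind Hough-style skew estimation can be sketched with a simpler projection-profile stand-in (our own toy, not the paper's Fast Hough Transform pipeline): the row-projection profile of "ink" pixels is sharpest at the angle that deskews the text lines.

```python
import math
import random

rng = random.Random(7)
true_deg = 3.0
slope = math.tan(math.radians(true_deg))
# Synthetic "ink" pixels on 5 skewed baselines with a little jitter
pts = [(x, 40 * k + slope * x + rng.uniform(-0.3, 0.3))
       for k in range(5) for x in range(0, 400, 2)]

def profile_sharpness(points, deg, nbins=240):
    """Variance of the projection histogram at candidate skew `deg` (degrees)."""
    s = math.tan(math.radians(deg))
    proj = [y - s * x for x, y in points]
    lo, hi = min(proj), max(proj)
    hist = [0] * nbins
    for p in proj:
        hist[min(nbins - 1, int((p - lo) / (hi - lo + 1e-9) * nbins))] += 1
    m = sum(hist) / nbins
    return sum((h - m) ** 2 for h in hist)  # peaks when lines collapse to bins

# Scan candidate angles in 0.1-degree steps and keep the sharpest profile
best = max((d / 10 for d in range(-100, 101)),
           key=lambda a: profile_sharpness(pts, a))
print(best)  # close to the true skew of 3.0 degrees
```

A Fast Hough Transform evaluates essentially all such projections at once in O(n log n) per direction family, which is why it suits mobile-device recognition budgets.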
Developing of method for primary frequency control droop and deadband actual values estimation
NASA Astrophysics Data System (ADS)
Nikiforov, A. A.; Chaplin, A. G.
2017-11-01
Operation of thermal power plant generation equipment that participates in standardized primary frequency control (SPFC) must meet specific requirements. These requirements are formalized as nine algorithmic criteria, which are used for automatic monitoring of power plant participation in SPFC. One of these criteria, the estimation of the actual values of the primary frequency control droop and deadband, is considered in detail in this report. Experience shows that the existing estimation method sometimes does not work properly. The author offers an alternative method that allows estimating the actual droop and deadband values more accurately. This method was implemented as a software application.
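For orientation, the static droop being estimated relates the relative frequency deviation to the relative primary power response outside the deadband. A toy calculation in our own notation (the rated values and measurements below are invented, and this is not the SPFC monitoring software):

```python
# Static droop = -(Δf / f_nom) / (ΔP / P_nom), valid outside the deadband
f_nom, p_nom = 50.0, 200.0   # rated frequency (Hz) and unit power (MW)
delta_f = -0.10              # observed frequency deviation (Hz), beyond deadband
delta_p = 8.0                # measured primary power response (MW)
droop = -(delta_f / f_nom) / (delta_p / p_nom)
print(round(droop * 100, 2))  # droop in percent
```

In practice the estimation is harder than this one-point formula suggests, because measured responses mix primary control with other control loops and noise, which is what motivates the alternative method above.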
1991-04-01
INSTITUTE FOR AEROSPACE RESEARCH SCIENTIFIC AND TECHNICAL PUBLICATIONS AERONAUTICAL REPORTS... Aeronautical Reports (LR): Scientific and technical information pertaining to aeronautics considered important, complete, and a lasting contribution to existing... knowledge. Mechanical Engineering Reports (MS): Scientific and technical information pertaining to investigations outside aeronautics considered
2016 Summer Series - Thomas Barclay - Microlensing and the K2 Experiment
2016-07-05
Innovation is the ability to create a new idea, device or method from what already exists. It is even more innovative when it arises from what is considered to be waste. The NASA Ames Kepler mission revolutionized the way we see our place in the universe by demonstrating that planets are a common occurrence. When the Kepler mission ended, the team took the satellite that was considered to be useless and created a new innovative approach and platform to investigate a wide array of astronomy subfields called the K2 mission.
Quality: performance improvement, teamwork, information technology and protocols.
Coleman, Nana E; Pon, Steven
2013-04-01
Using the Institute of Medicine framework that outlines the domains of quality, this article considers four key aspects of health care delivery which have the potential to significantly affect the quality of health care within the pediatric intensive care unit. The discussion covers: performance improvement and how existing methods for reporting, review, and analysis of medical error relate to patient care; team composition and workflow; and the impact of information technologies on clinical practice. Also considered is how protocol-driven and standardized practice affects both patients and the fiscal interests of the health care system.
Concepts of ‘personalization’ in personalized medicine: implications for economic evaluation
Rogowski, Wolf; Payne, Katherine; Schnell-Inderst, Petra; Manca, Andrea; Rochau, Ursula; Jahn, Beate; Alagoz, Oguzhan; Leidl, Reiner; Siebert, Uwe
2015-01-01
Context: This paper assesses if, and how, existing methods for economic evaluation are applicable to the evaluation of PM and, if not, where extension to methods may be required. Method: Structured workshop with a pre-defined group of experts (n=47), run using a modified nominal group technique. Workshop findings were recorded using extensive note taking and summarised using thematic data analysis. The workshop was complemented by structured literature searches. Results: The key finding emerging from the workshop, using an economic perspective, was that two distinct, but linked, interpretations of the concept of PM exist (personalization by ‘physiology’ or ‘preferences’). These interpretations involve specific challenges for the design and conduct of economic evaluations. Existing evaluative (extra-welfarist) frameworks were generally considered appropriate for evaluating PM. When ‘personalization’ is viewed as using physiological biomarkers, challenges include: representing complex care pathways; representing spill-over effects; meeting data requirements such as evidence on heterogeneity; choosing appropriate time horizons for the value of further research in uncertainty analysis. When viewed as tailoring medicine to patient preferences, further work is needed regarding: revealed preferences, e.g. treatment (non)adherence; stated preferences, e.g. risk interpretation and attitude; consideration of heterogeneity in preferences; and the appropriate framework (welfarism vs. extra-welfarism) to incorporate non-health benefits. Conclusion: Ideally, economic evaluations should take account of both interpretations of PM and consider physiology and preferences. It is important for decision makers to be cognizant of the issues involved with the economic evaluation of PM to appropriately interpret the evidence and target future research funding. PMID:25249200
Evaluating care from a care ethical perspective:: A pilot study.
Kuis, Esther E; Goossensen, Anne
2017-08-01
Care ethical theories provide an excellent opening for the evaluation of healthcare practices, since searching for (moments of) good care from a moral perspective is central to care ethics. However, a fruitful way to translate care ethical insights into measurable criteria, and how to measure these criteria, has not yet been explored: this study describes one of the first attempts. The aim was to investigate whether the emotional touchpoint method is suitable for evaluating care from a care ethical perspective. An adapted version of the emotional touchpoint interview method was used. Touchpoints represent the key moments in the experience of receiving care, where the patient recalls being touched emotionally or cognitively. Participants and research context: Interviews were conducted at three different care settings: a hospital, a mental healthcare institution, and a care facility for older people. A total of 31 participants (29 patients and 2 relatives) took part in the study. Ethical considerations: The research was found not to be subject to the (Dutch) Medical Research Involving Human Subjects Act. A three-step care ethical evaluation model was developed and described using two touchpoints as examples. A focus group meeting showed that the method was considered of great value by the participating institutions in comparison with existing methods. Reflection and discussion: Considering existing methods to evaluate quality of care, the touchpoint method belongs to the category of instruments that evaluate the patient experience. The touchpoint method distinguishes itself because no pre-defined categories are used; instead the values of patients are followed, which is an essential issue from a care ethical perspective. The method portrays the insider perspective of patients and thereby contributes to humanizing care. The touchpoint method is a valuable instrument for evaluating care; it generates evaluation data about the core care ethical principle of responsiveness.
Sellami-Kaaniche, Emna; de Gouvello, Bernard; Gromaire, Marie-Christine; Chebbo, Ghassan
2014-04-01
Today, urban runoff is considered an important source of environmental pollution. Roofing materials, in particular metallic ones, are considered a major source of metal contamination in urban runoff. In the context of the European Water Framework Directive (2000/60/EC), an accurate evaluation of contaminant flows from roofs is thus required at the city scale, and therefore the development of assessment tools is needed. However, at this scale, there is an important diversity of roofing materials. In addition, given the size of a city, a complete census of the materials of the different roofing elements represents a difficult task. Information relating roofing materials to their surface areas in an urban district does not currently exist in urban databases. The objective of this paper is to develop a new method for evaluating annual contaminant flow emissions from the different roofing material elements (e.g., gutter, rooftop) at the city scale. This method is based on using and adapting existing urban databases combined with a statistical approach. Different rules for identifying the materials of the different roofing elements at the city scale have been defined. The methodology is explained through its application to the evaluation of zinc emissions at the scale of the city of Créteil.
Parts-based stereoscopic image assessment by learning binocular manifold color visual properties
NASA Astrophysics Data System (ADS)
Xu, Haiyong; Yu, Mei; Luo, Ting; Zhang, Yun; Jiang, Gangyi
2016-11-01
Existing stereoscopic image quality assessment (SIQA) methods are mostly based on luminance information, in which color information is not sufficiently considered. Yet color is one of the important factors that affect human visual perception, and nonnegative matrix factorization (NMF) and manifold learning are in line with human visual perception. We propose an SIQA method based on learning binocular manifold color visual properties. To be more specific, in the training phase, a feature detector is created based on NMF with manifold regularization by considering color information, which not only allows a parts-based manifold representation of an image, but also manifests localized color visual properties. In the quality estimation phase, visually important regions are selected by considering different human visual attention, and feature vectors are extracted by using the feature detector. Then the feature similarity index is calculated and the parts-based manifold color feature energy (PMCFE) for each view is defined based on the color feature vectors. The final quality score is obtained by considering a binocular combination based on PMCFE. The experimental results on the LIVE I and LIVE II 3-D IQA databases demonstrate that the proposed method can achieve much higher consistency with subjective evaluations than state-of-the-art SIQA methods.
Nondestructive Examination Guidance for Dry Storage Casks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Ryan M.; Suffield, Sarah R.; Hirt, Evelyn H.
In this report, an assessment of NDE methods is performed for components of the NUHOMS 80 and 102 dry storage systems in an effort to assist NRC staff with the review of license renewal applications. The report considers concrete components associated with the horizontal storage modules (HSMs) as well as metal components in the HSMs. In addition, the report considers the dry shielded canister (DSC). The scope is limited to NDE methods that are considered most likely to be proposed by licensees. The document ACI 349.3R, Evaluation of Existing Nuclear Safety-Related Concrete Structures, is used as the basis for the majority of the NDE methods summarized for inspecting HSM concrete components. Two other documents, ACI 228.2R, Nondestructive Test Methods for Evaluation of Concrete in Structures, and ORNL/TM-2007/191, Inspection of Nuclear Power Plant Structures--Overview of Methods and Related Applications, supplement the list with additional technologies that are considered applicable. For the canister, the ASME B&PV Code is used as the basis for the NDE methods considered, along with currently funded efforts through industry (Electric Power Research Institute [EPRI]) and the U.S. Department of Energy (DOE) to develop inspection technologies for canisters. The report provides a description of HSM and DSC components with a focus on those aspects of design considered relevant to inspection. This is followed by a brief description of other concrete structures such as bridge decks, dams, and reactor containment structures, in an effort to facilitate comparison between these structures and HSM concrete components and to infer which NDE methods may work best for certain HSM concrete components based on experience with these other structures. Brief overviews of the NDE methods are provided with a focus on issues and influencing factors that may impact implementation or performance. An analysis is performed to determine which NDE methods are most applicable to specific components.
Finding Chemical Structures Corresponding to a Set of Coordinates in Chemical Descriptor Space.
Miyao, Tomoyuki; Funatsu, Kimito
2017-08-01
When chemical structures are searched based on descriptor values, or descriptors are interpreted based on such values, it is important that corresponding chemical structures actually exist. In order to consider the existence of chemical structures located in a specific region of chemical space, we propose to search for them inside training data domains (TDDs), which are dense areas of a training dataset in the chemical space. We investigated TDDs' features using diverse and local datasets, assuming that GDB11 is the chemical universe. These two analyses showed that considering TDDs gives a higher chance of finding chemical structures than a random search-based method, and that novel chemical structures actually exist inside TDDs. In addition to those findings, we tested the hypothesis that chemical structures are distributed over limited areas of chemical space. This hypothesis was confirmed by the fact that distances among chemical structures in several descriptor spaces were much shorter than those among randomly generated coordinates in the training data range. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heeter, Jenny; Barbose, Galen; Bird, Lori
2014-03-12
More than half of U.S. states have renewable portfolio standards (RPS) in place and have collectively deployed approximately 46,000 MW of new renewable energy capacity through year-end 2012. Most of these policies have five or more years of implementation experience, enabling an assessment of their costs and benefits. Understanding RPS benefits and costs is essential for policymakers evaluating existing RPS policies, assessing the need for modifications, and considering new policies. A key aspect of this study is the comprehensive review of existing RPS cost and benefit estimates, in addition to an examination of the variety of methods used to calculate such estimates. Based on available data and estimates reported by utilities and regulators, this study summarizes RPS costs to date. The study considers how those costs may evolve going forward, given scheduled increases in RPS targets and cost containment mechanisms incorporated into existing policies. The report also summarizes RPS benefits estimates, based on published studies for individual states, and discusses key methodological considerations.
ERIC Educational Resources Information Center
Jones, Susan M.
2011-01-01
The purpose of the mixed-method Delphi study is to identify the financial leadership competencies considered most important in operating public higher education institutions. The current study also determined whether differences existed in the perceptions of participants' age, level of education, years of service as a president, the number of…
A Developed Meta-model for Selection of Cotton Fabrics Using Design of Experiments and TOPSIS Method
NASA Astrophysics Data System (ADS)
Chakraborty, Shankar; Chatterjee, Prasenjit
2017-12-01
Selection of cotton fabrics for providing optimal clothing comfort is often considered a multi-criteria decision making problem, consisting of an array of candidate alternatives to be evaluated based on several conflicting properties. In this paper, design of experiments and the technique for order preference by similarity to ideal solution (TOPSIS) are integrated so as to develop regression meta-models for identifying the most suitable cotton fabrics with respect to the computed TOPSIS scores. The applicability of the adopted method is demonstrated using two real-life examples. The developed models can also identify the statistically significant fabric properties and their interactions affecting the measured TOPSIS scores and final selection decisions. There exists a good degree of congruence between the ranking patterns derived using these meta-models and the existing methods for cotton fabric ranking and subsequent selection.
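For readers unfamiliar with the scoring step, a minimal sketch of plain TOPSIS follows; the fabrics, criteria and weights are hypothetical, and the paper's design-of-experiments meta-models are not reproduced here:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS.

    matrix : (m alternatives x n criteria) decision matrix
    weights: criterion weights summing to 1
    benefit: True for benefit criteria, False for cost criteria
    """
    M = np.asarray(matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = M / np.linalg.norm(M, axis=0) * np.asarray(weights)
    # The ideal best/worst values per criterion depend on its direction.
    best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - best, axis=1)
    d_worst = np.linalg.norm(V - worst, axis=1)
    return d_worst / (d_best + d_worst)   # closeness score in [0, 1]

# Three hypothetical fabrics scored on comfort (benefit) and price (cost).
scores = topsis([[7, 30], [9, 50], [5, 20]],
                weights=[0.6, 0.4], benefit=[True, False])
ranking = np.argsort(-scores)   # indices of fabrics, best first
```

A higher closeness score means the alternative lies nearer the ideal solution and farther from the anti-ideal one; the paper then regresses such scores on fabric properties.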
Research on Operation Assessment Method for Energy Meter
NASA Astrophysics Data System (ADS)
Chen, Xiangqun; Huang, Rui; Shen, Liman; chen, Hao; Xiong, Dezhi; Xiao, Xiangqi; Liu, Mouhai; Xu, Renheng
2018-03-01
The existing rotation maintenance strategy for electric energy meters checks meters at regular intervals and evaluates their state. It considers only the influence of time, neglecting other factors, which makes the evaluation inaccurate and wastes resources. In order to evaluate the running state of an electric energy meter in a timely manner, a method for evaluating meter operation is proposed. The method extracts the factors that affect meter state from the existing data acquisition system, marketing business system and metrology production scheduling platform, and classifies them into error stability, operational reliability, potential risks and other categories, from which basic test, inspection, monitoring and family-defect-detection scores are derived. An evaluation model then combines these scores to assess the operating state of the meter, and a corresponding rotation maintenance strategy is put forward.
A Method of Evaluating Operation of Electric Energy Meter
NASA Astrophysics Data System (ADS)
Chen, Xiangqun; Li, Tianyang; Cao, Fei; Chu, Pengfei; Zhao, Xinwang; Huang, Rui; Liu, Liping; Zhang, Chenglin
2018-05-01
The existing rotation maintenance strategy for electric energy meters checks meters at regular intervals and evaluates their state. It considers only the influence of time, neglecting other factors, which makes the evaluation inaccurate and wastes resources. In order to evaluate the running state of an electric energy meter in a timely manner, a method for evaluating meter operation is proposed. The method extracts the factors that affect meter state from the existing data acquisition system, marketing business system and metrology production scheduling platform, and classifies them into error stability, operational reliability, potential risks and other categories, from which basic test, inspection, monitoring and family-defect-detection scores are derived. An evaluation model then combines these scores to assess the operating state of the meter, and a corresponding rotation maintenance strategy is put forward.
Physiologic measures of sexual function in women: a review.
Woodard, Terri L; Diamond, Michael P
2009-07-01
To review and describe physiologic measures of assessing sexual function in women. Literature review. Studies that use instruments designed to measure female sexual function. Women participating in studies of female sexual function. Various instruments that measure physiologic features of female sexual function. Appraisal of the various instruments, including their advantages and disadvantages. Many unique physiologic methods of evaluating female sexual function have been developed during the past four decades. Each method has its benefits and limitations. Many physiologic methods exist, but most are not well validated. In addition, there has been an inability to correlate most physiologic measures with subjective measures of sexual arousal. Furthermore, given the complex nature of the sexual response in women, physiologic measures should be considered in the context of other data, including the history, physical examination, and validated questionnaires. Nonetheless, the existence of appropriate physiologic measures is vital to our understanding of female sexual function and dysfunction.
Analysis of Classes of Singular Steady State Reaction Diffusion Equations
NASA Astrophysics Data System (ADS)
Son, Byungjae
We study positive radial solutions to classes of steady state reaction diffusion problems on the exterior of a ball with both Dirichlet and nonlinear boundary conditions. We study both Laplacian as well as p-Laplacian problems with reaction terms that are p-sublinear at infinity. We consider both positone and semipositone reaction terms and establish existence, multiplicity and uniqueness results. Our existence and multiplicity results are achieved by a method of sub-supersolutions and uniqueness results via a combination of maximum principles, comparison principles, energy arguments and a-priori estimates. Our results significantly enhance the literature on p-sublinear positone and semipositone problems. Finally, we provide exact bifurcation curves for several one-dimensional problems. In the autonomous case, we extend and analyze a quadrature method, and in the nonautonomous case, we employ shooting methods. We use numerical solvers in Mathematica to generate the bifurcation curves.
An Exact Formula for Calculating Inverse Radial Lens Distortions
Drap, Pierre; Lefèvre, Julien
2016-01-01
This article presents a new approach to calculating the inverse of radial distortions. Reverse radial distortion is currently modeled by a polynomial expression; the method presented here proposes another polynomial expression whose new coefficients are functions of the original ones. After describing the state of the art, the proposed method is developed. It is based on a formal calculus involving a power series, used to deduce a recursive formula for the new coefficients. We present several implementations of this method and describe the experiments conducted to assess the validity of the new approach. Such a non-iterative approach, using another polynomial expression that can be deduced from the first one, can be attractive in terms of performance, reuse of existing software, or bridging between existing software tools that do not consider distortion from the same point of view. PMID:27258288
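The series-reversion idea can be sketched for a two-coefficient distortion model. The coefficient formulas below follow from standard formal reversion of the distortion polynomial and are an assumption for illustration, not the paper's exact recursive formula:

```python
def inverse_coeffs(k1, k2):
    """First two coefficients of the inverse distortion series, obtained by
    formal series reversion of  rd = r*(1 + k1*r**2 + k2*r**4):
        r = rd*(1 + b1*rd**2 + b2*rd**4) + O(rd**7)
    """
    b1 = -k1
    b2 = 3.0 * k1 ** 2 - k2
    return b1, b2

def distort(r, k1, k2):
    """Forward radial distortion of an undistorted radius r."""
    return r * (1.0 + k1 * r ** 2 + k2 * r ** 4)

def undistort(rd, k1, k2):
    """Inverse distortion using the reversed polynomial coefficients."""
    b1, b2 = inverse_coeffs(k1, k2)
    return rd * (1.0 + b1 * rd ** 2 + b2 * rd ** 4)

# Round-trip check for a hypothetical lens with k1 = 0.1, k2 = 0.01:
r = 0.3
rd = distort(r, 0.1, 0.01)
r_back = undistort(rd, 0.1, 0.01)   # agrees with r up to O(r**7) terms
```

Because the inverse is again a polynomial, it drops into existing undistortion code without iteration, which is the performance and reuse argument made in the abstract.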
NASA Astrophysics Data System (ADS)
Lin, Wei; Li, Xizhe; Yang, Zhengming; Lin, Lijun; Xiong, Shengchun; Wang, Zhiyuan; Wang, Xiangyang; Xiao, Qianhua
Based on the basic principle of the porosity method in image segmentation, and considering the relationship between the porosity of rocks and the fractal characteristics of pore structures, a new improved image segmentation method is proposed, which uses the calculated porosity of each core image as a constraint to obtain the best threshold. Comparative analysis shows that the porosity method can, in theory, segment images best, but its actual segmentation results deviate from the real situation. Because of the heterogeneity and isolated pores of cores, the porosity method, which takes the experimental porosity of the whole core as the criterion, cannot achieve the desired segmentation effect. In contrast, the new improved method overcomes these shortcomings and produces a more reasonable binary segmentation of the core grayscale images, as it segments each image based on that image's calculated porosity. Moreover, basing the segmentation on the calculated porosity rather than the measured porosity also greatly saves manpower and material resources, especially for tight rocks.
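The porosity constraint can be sketched as follows: choose the grayscale threshold so that the pore fraction of the binary image matches the image's target porosity, i.e. threshold at the porosity quantile of the intensities. This is a minimal sketch on synthetic data, not the authors' implementation:

```python
import numpy as np

def porosity_threshold(image, porosity):
    """Binary-segment a grayscale core image so that the pore (dark)
    fraction approximately equals the target porosity, by thresholding
    at the porosity quantile of the intensity distribution."""
    t = np.quantile(image, porosity)   # pores assumed darker than grains
    pores = image <= t
    return pores, t

# Synthetic 64x64 "core image" with a target porosity of 15%.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
pores, t = porosity_threshold(img, 0.15)
frac = pores.mean()   # pore fraction, close to 0.15 by construction
```

In the improved method described above, `porosity` would be the porosity calculated for that individual image rather than the experimental porosity of the whole core.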
NASA Technical Reports Server (NTRS)
Gray, D. J.
1978-01-01
Cryogenic transportation methods for providing liquid hydrogen requirements are examined in support of shuttle transportation system launch operations at Kennedy Space Center, Florida, during the time frames 1982-1991 in terms of cost and operational effectiveness. Transportation methods considered included sixteen different options employing mobile semi-trailer tankers, railcars, barges and combinations of each method. The study concludes that the most effective method of delivering liquid hydrogen from the vendor production facility in New Orleans to Kennedy Space Center includes maximum utilization of existing mobile tankers and railcars supplemented by maximum capacity mobile tankers procured incrementally in accordance with shuttle launch rates actually achieved.
Automated web service composition supporting conditional branch structures
NASA Astrophysics Data System (ADS)
Wang, Pengwei; Ding, Zhijun; Jiang, Changjun; Zhou, Mengchu
2014-01-01
The creation of value-added services by automatic composition of existing ones is gaining significant momentum as the potential silver bullet in service-oriented architecture. However, service composition faces two difficulties. First, users' needs present such characteristics as diversity, uncertainty and personalisation; second, the existing services run in a real-world environment that is highly complex and dynamically changing. These difficulties may cause the emergence of nondeterministic choices in the process of service composition, which goes beyond what existing automated service composition techniques can handle. In most of the existing methods, the process model of a composite service includes sequence constructs only. This article presents a method to introduce conditional branch structures into the process model of a composite service when needed, in order to satisfy users' diverse and personalised needs and adapt to the dynamic changes of the real-world environment. UML activity diagrams are used to represent dependencies in a composite service. Two types of user preferences, which have been ignored by previous work, are considered in this article, and a simple programming-language-style expression is adopted to describe them. Two different algorithms are presented to deal with different situations. A real-life case is provided to illustrate the proposed concepts and methods.
Fan, M; Wang, K; Jiang, D
1999-08-01
In this paper, we study the existence and global attractivity of positive periodic solutions of periodic n-species Lotka-Volterra competition systems. By using the method of coincidence degree and Lyapunov functional, a set of easily verifiable sufficient conditions are derived for the existence of at least one strictly positive (componentwise) periodic solution of periodic n-species Lotka-Volterra competition systems with several deviating arguments and the existence of a unique globally asymptotically stable periodic solution with strictly positive components of periodic n-species Lotka-Volterra competition system with several delays. Some new results are obtained. As an application, we also examine some special cases of the system we considered, which have been studied extensively in the literature. Some known results are improved and generalized.
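In a standard formulation (the notation here is assumed for illustration, not taken from the paper), a periodic n-species Lotka-Volterra competition system with deviating arguments reads:

```latex
\dot{x}_i(t) = x_i(t)\left[ r_i(t) - \sum_{j=1}^{n} a_{ij}(t)\, x_j\bigl(t - \tau_{ij}(t)\bigr) \right],
\qquad i = 1, \dots, n,
```

where the growth rates $r_i$, competition coefficients $a_{ij}$ and delays $\tau_{ij}$ are continuous $\omega$-periodic functions. Coincidence degree theory is then used to establish the existence of at least one componentwise positive $\omega$-periodic solution, and a Lyapunov functional gives its global asymptotic stability under additional conditions.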
Tabu Search enhances network robustness under targeted attacks
NASA Astrophysics Data System (ADS)
Sun, Shi-wen; Ma, Yi-lin; Li, Rui-qi; Wang, Li; Xia, Cheng-yi
2016-03-01
We focus on the optimization of network robustness with respect to intentional attacks on high-degree nodes. Given an existing network, this problem can be considered a typical single-objective combinatorial optimization problem. Based on the heuristic Tabu Search optimization algorithm, a link-rewiring method is applied to reconstruct the network while keeping the degree of every node unchanged. Through numerical simulations, a BA scale-free network and two real-world networks are investigated to verify the effectiveness of the proposed optimization method. Meanwhile, we analyze how the optimization affects other topological properties of the networks, including natural connectivity, clustering coefficient and degree-degree correlation. The current results can help to improve the robustness of existing complex real-world systems, as well as provide some insights into the design of robust networks.
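The degree-preserving link-rewiring move can be sketched as a double-edge swap; the Tabu Search bookkeeping (tabu list, robustness objective, acceptance rule) is omitted, and the example graph is hypothetical:

```python
import random
from collections import Counter

def degree_sequence(edges):
    """Node -> degree map for an undirected edge list."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def rewire_once(edges):
    """One candidate move of degree-preserving link rewiring: replace
    edges (a, b) and (c, d) with (a, d) and (c, b), unless that would
    create a self-loop or a duplicate edge, in which case the move is
    rejected and the edge list is returned unchanged."""
    (a, b), (c, d) = random.sample(edges, 2)
    if len({a, b, c, d}) < 4:
        return edges                       # self-loop: reject the move
    existing = {frozenset(e) for e in edges}
    if frozenset((a, d)) in existing or frozenset((c, b)) in existing:
        return edges                       # duplicate edge: reject the move
    rest = [e for e in edges if e not in ((a, b), (c, d))]
    return rest + [(a, d), (c, b)]

edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]
rewired = rewire_once(edges)
# Every node keeps its degree, whether the move was accepted or rejected.
```

In the full algorithm, each accepted move would be the one among candidate swaps that most improves the robustness objective while not being on the tabu list.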
FIND: difFerential chromatin INteractions Detection using a spatial Poisson process.
Djekidel, Mohamed Nadhir; Chen, Yang; Zhang, Michael Q
2018-02-12
Polymer-based simulations and experimental studies indicate the existence of a spatial dependency between the adjacent DNA fibers involved in the formation of chromatin loops. However, the existing strategies for detecting differential chromatin interactions assume that the interacting segments are spatially independent from the other segments nearby. To resolve this issue, we developed a new computational method, FIND, which considers the local spatial dependency between interacting loci. FIND uses a spatial Poisson process to detect differential chromatin interactions that show a significant difference in their interaction frequency and the interaction frequency of their neighbors. Simulation and biological data analysis show that FIND outperforms the widely used count-based methods and has a better signal-to-noise ratio. © 2018 Djekidel et al.; Published by Cold Spring Harbor Laboratory Press.
Simple diffusion can support the pitchfork, the flip bifurcations, and the chaos
NASA Astrophysics Data System (ADS)
Meng, Lili; Li, Xinfu; Zhang, Guang
2017-12-01
In this paper, a discrete rational fraction population model with Dirichlet boundary conditions is considered. Using the discrete maximum principle and the sub- and super-solution method, necessary and sufficient conditions for the uniqueness and existence of positive steady state solutions are obtained. In addition, the dynamical behavior of a special two-patch metapopulation model is investigated by using the bifurcation method, center manifold theory, bifurcation diagrams and the largest Lyapunov exponent. The results show that the pitchfork bifurcation, the flip bifurcation, and chaos all occur. Clearly, these phenomena are caused by the simple diffusion. The theoretical analysis of chaos is very important; unfortunately, there are as yet no results in this direction. However, some open problems are given.
NASA Astrophysics Data System (ADS)
Wang, Qingzhi; Tan, Guanzheng; He, Yong; Wu, Min
2017-10-01
This paper considers a stability analysis issue of piecewise non-linear systems and applies it to intermittent synchronisation of chaotic systems. First, based on piecewise Lyapunov function methods, more general and less conservative stability criteria of piecewise non-linear systems in periodic and aperiodic cases are presented, respectively. Next, intermittent synchronisation conditions of chaotic systems are derived which extend existing results. Finally, Chua's circuit is taken as an example to verify the validity of our methods.
Batchelor, Hannah K.
2015-01-01
The objective of this paper was to review existing information regarding food effects on drug absorption within paediatric populations. Mechanisms that underpin food–drug interactions were examined to consider potential differences between adult and paediatric populations, to provide insights into how this may alter the pharmacokinetic profile in a child. Relevant literature was searched to retrieve information on food–drug interaction studies undertaken on: (i) paediatric oral drug formulations; and (ii) within paediatric populations. The applicability of existing methodology to predict food effects in adult populations was evaluated with respect to paediatric populations where clinical data was available. Several differences in physiology, anatomy and the composition of food consumed within a paediatric population are likely to lead to food–drug interactions that cannot be predicted based on adult studies. Existing methods to predict food effects cannot be directly extrapolated to allow predictions within paediatric populations. Development of systematic methods and guidelines is needed to address the general lack of information on examining food–drug interactions within paediatric populations. PMID:27417362
Howard, Steven J.; Melhuish, Edward
2016-01-01
Several methods of assessing executive function (EF), self-regulation, language development, and social development in young children have been developed over previous decades. Yet new technologies make available methods of assessment not previously considered. In resolving conceptual and pragmatic limitations of existing tools, the Early Years Toolbox (EYT) offers substantial advantages for early assessment of language, EF, self-regulation, and social development. In the current study, results of our large-scale administration of this toolbox to 1,764 preschool and early primary school students indicated very good reliability, convergent validity with existing measures, and developmental sensitivity. Results were also suggestive of better capture of children’s emerging abilities relative to comparison measures. Preliminary norms are presented, showing a clear developmental trajectory across half-year age groups. The accessibility of the EYT, as well as its advantages over existing measures, offers considerably enhanced opportunities for objective measurement of young children’s abilities to enable research and educational applications. PMID:28503022
Predicting future discoveries from current scientific literature.
Petrič, Ingrid; Cestnik, Bojan
2014-01-01
Knowledge discovery in biomedicine is a time-consuming process starting from the basic research, through preclinical testing, towards possible clinical applications. Crossing of conceptual boundaries is often needed for groundbreaking biomedical research that generates highly inventive discoveries. We demonstrate the ability of a creative literature mining method to advance valuable new discoveries based on rare ideas from existing literature. When emerging ideas from scientific literature are put together as fragments of knowledge in a systematic way, they may lead to original, sometimes surprising, research findings. If enough scientific evidence is already published for the association of such findings, they can be considered as scientific hypotheses. In this chapter, we describe a method for the computer-aided generation of such hypotheses based on the existing scientific literature. Our literature-based discovery of NF-kappaB with its possible connections to autism was recently approved by scientific community, which confirms the ability of our literature mining methodology to accelerate future discoveries based on rare ideas from existing literature.
Utility-preserving anonymization for health data publishing.
Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn
2017-07-11
Publishing raw electronic health records (EHRs) may be considered a breach of the privacy of individuals because they usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing, and thus satisfy privacy models such as k-anonymity. Among various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and thus various methods have been proposed to reduce it. However, existing generalization-based data anonymization methods cannot avoid excessive information loss and so fail to preserve data utility. We propose a utility-preserving anonymization for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method, which applies a full-domain generalization algorithm. We evaluate our method against an existing method on two aspects: information loss measured through various quality metrics, and the error rate of analysis results. For all types of quality metrics, our proposed method shows lower information loss than the existing method. In real-world EHR analysis, the results show only a small error between the data anonymized by the proposed method and the original data. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than that of EHRs anonymized by previous approaches.
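As a minimal illustration of generalization and the k-anonymity check (not the authors' counterfeit-record method), consider generalizing an age quasi-identifier; the records and hierarchy levels are hypothetical:

```python
from collections import Counter

def generalize_age(age, level):
    """Full-domain generalization of an age attribute:
    level 0 -> exact age, level 1 -> 10-year band, level 2 -> suppressed."""
    if level == 0:
        return str(age)
    if level == 1:
        lo = (age // 10) * 10
        return f"{lo}-{lo + 9}"
    return "*"

def is_k_anonymous(quasi_ids, k):
    """k-anonymity holds when every quasi-identifier value occurs >= k times."""
    return all(c >= k for c in Counter(quasi_ids).values())

ages = [23, 27, 45, 41, 48]
raw = [generalize_age(a, 0) for a in ages]   # exact ages: not 2-anonymous
qi = [generalize_age(a, 1) for a in ages]    # 10-year bands: 2-anonymous
```

The coarser each generalization level, the more records share a value (better privacy) but the more information is lost, which is the utility trade-off the paper's counterfeit-record insertion aims to soften.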
Statistical Methods Applied to Gamma-ray Spectroscopy Algorithms in Nuclear Security Missions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagan, Deborah K.; Robinson, Sean M.; Runkle, Robert C.
2012-10-01
In a wide range of nuclear security missions, gamma-ray spectroscopy is a critical research and development priority. One particularly relevant challenge is the interdiction of special nuclear material, for which gamma-ray spectroscopy supports the goals of detecting and identifying gamma-ray sources. This manuscript examines the existing set of spectroscopy methods, attempts to categorize them by the statistical methods on which they rely, and identifies methods that have yet to be considered. Our examination shows that current methods effectively estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty—ones that are significantly more complex. We thus explore the premise that significantly improving algorithm performance requires greater coupling between the problem physics that drives data acquisition and the statistical methods that analyze such data. Untapped statistical methods, such as Bayesian Model Averaging and hierarchical and empirical Bayes methods, have the potential to reduce decision uncertainty by more rigorously and comprehensively incorporating all sources of uncertainty. We expect that application of such methods will demonstrate progress in meeting the needs of nuclear security missions by improving on the existing numerical infrastructure for which these analyses have not been conducted.
Accounting for ecosystem services in life cycle assessment, Part I: a critical review.
Zhang, Yi; Singh, Shweta; Bakshi, Bhavik R
2010-04-01
If life cycle oriented methods are to encourage sustainable development, they must account for the role of ecosystem goods and services, since these form the basis of planetary activities and human well-being. This article reviews methods that are relevant to accounting for the role of nature and that could be integrated into life cycle oriented approaches. These include methods developed by ecologists for quantifying ecosystem services, by ecological economists for monetary valuation, and life cycle methods such as conventional life cycle assessment, thermodynamic methods for resource accounting such as exergy and emergy analysis, variations of the ecological footprint approach, and human appropriation of net primary productivity. Each approach has its strengths: economic methods are able to quantify the value of cultural services; LCA considers emissions and assesses their impact; emergy accounts for supporting services in terms of cumulative exergy; and ecological footprint is intuitively appealing and considers biocapacity. However, no method is able to consider all the ecosystem services, often due to the desire to aggregate all resources in terms of a single unit. This review shows that comprehensive accounting for ecosystem services in LCA requires greater integration among existing methods, hierarchical schemes for interpreting results via multiple levels of aggregation, and greater understanding of the role of ecosystems in supporting human activities. These present many research opportunities that must be addressed to meet the challenges of sustainability.
Li, Jian-Long; Wang, Peng; Fung, Wing Kam; Zhou, Ji-Yuan
2017-10-16
For dichotomous traits, the generalized disequilibrium test with the moment estimate of the variance (GDT-ME) is a powerful family-based association method. Genomic imprinting is an important epigenetic phenomenon, and there has recently been increasing interest in incorporating imprinting to improve the power of association analysis. However, GDT-ME does not take imprinting effects into account, and it has not been investigated whether it can be used for association analysis when such effects indeed exist. In this article, based on a novel decomposition of the genotype score according to the paternal or maternal source of the allele, we propose the generalized disequilibrium test with imprinting (GDTI) for complete pedigrees without any missing genotypes. Then, we extend GDTI and GDT-ME to accommodate incomplete pedigrees in which some pedigrees have missing genotypes, by using a Monte Carlo (MC) sampling and estimation scheme to infer missing genotypes given the available genotypes in each pedigree; the resulting tests are denoted MCGDTI and MCGDT-ME, respectively. The proposed GDTI and MCGDTI methods evaluate the differences of the paternal as well as maternal allele scores for all discordant relative pairs in a pedigree, including relative pairs beyond first degree. Advantages of the proposed GDTI and MCGDTI test statistics over existing methods are demonstrated by simulation studies under various settings and by application to a rheumatoid arthritis dataset. Simulation results show that the proposed tests control the size well under the null hypothesis of no association and outperform the existing methods under various imprinting effect models. The existing GDT-ME and the proposed MCGDT-ME can be used to test for association even when imprinting effects exist. In the application to the rheumatoid arthritis data, compared to the existing methods, MCGDTI identifies more loci statistically significantly associated with the disease.
Under complete and incomplete imprinting effect models, our proposed GDTI and MCGDTI methods, by considering the information on imprinting effects and all discordant relative pairs within each pedigree, outperform all the existing test statistics and MCGDTI can recapture much of the missing information. Therefore, MCGDTI is recommended in practice.
Frappier, Vincent; Najmanovich, Rafael J.
2014-01-01
Normal mode analysis (NMA) methods are widely used to study dynamic aspects of protein structures. Two critical components of NMA methods are the level of coarse-graining used to represent protein structures and the choice of functional form for the potential energy. There is a trade-off between speed and accuracy among the different choices. At one extreme one finds accurate but slow molecular-dynamics based methods with all-atom representations and detailed atomic potentials. At the other extreme are fast elastic network model (ENM) methods with Cα-only representations and simplified potentials based on geometry alone, and thus oblivious to protein sequence. Here we present ENCoM, an Elastic Network Contact Model that employs a potential energy function that includes a pairwise atom-type non-bonded interaction term and thus makes it possible to consider the effect of the specific nature of amino acids on dynamics within the context of NMA. ENCoM is as fast as existing ENM methods and outperforms such methods in the generation of conformational ensembles. Here we introduce a new application for NMA methods with the use of ENCoM in the prediction of the effect of mutations on protein stability. While existing methods are based on machine learning or enthalpic considerations, the use of ENCoM, based on vibrational normal modes, rests on entropic considerations. This represents a novel area of application for NMA methods and a novel approach for the prediction of the effect of mutations. We compare ENCoM to a large number of methods in terms of accuracy and self-consistency. We show that the accuracy of ENCoM is comparable to that of the best existing methods. We show that existing methods are biased towards the prediction of destabilizing mutations and that ENCoM is less biased at predicting stabilizing mutations. PMID:24762569
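To illustrate the ENM machinery that ENCoM builds on, here is a minimal geometry-only anisotropic network model of the kind the abstract contrasts with (Cα-only, single spring constant). ENCoM's atom-type pairwise term is not reproduced, and the coordinates below are random stand-ins for a protein structure, not real Cα positions.

```python
import numpy as np

# Hedged sketch of an anisotropic network model (ANM) Hessian: springs
# between all bead pairs within a cutoff, potential depending on the
# pair distance only. Eigenvectors are the normal modes.
def anm_hessian(coords, cutoff=12.0, gamma=1.0):
    n = len(coords)
    H = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            d = coords[j] - coords[i]
            r2 = d @ d
            if r2 > cutoff ** 2:
                continue
            block = -gamma * np.outer(d, d) / r2
            H[3*i:3*i+3, 3*j:3*j+3] = block
            H[3*j:3*j+3, 3*i:3*i+3] = block
            H[3*i:3*i+3, 3*i:3*i+3] -= block
            H[3*j:3*j+3, 3*j:3*j+3] -= block
    return H

rng = np.random.default_rng(0)
xyz = rng.normal(size=(10, 3)) * 5.0        # 10 random "Calpha" beads
H = anm_hessian(xyz, cutoff=100.0)          # large cutoff: fully connected
evals = np.linalg.eigvalsh(H)
print(int(np.sum(np.abs(evals) < 1e-8)))    # 6 zero modes: rigid-body motions
```

The six zero eigenvalues (three translations, three rotations) are the standard sanity check for any ENM Hessian; the low-frequency nonzero modes are the collective motions NMA methods analyze.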
Simulation-Based Prediction of Equivalent Continuous Noises during Construction Processes
Zhang, Hong; Pei, Yun
2016-01-01
Quantitative prediction of construction noise is crucial to evaluate construction plans to help make decisions to address noise levels. Considering limitations of existing methods for measuring or predicting the construction noise and particularly the equivalent continuous noise level over a period of time, this paper presents a discrete-event simulation method for predicting the construction noise in terms of equivalent continuous level. The noise-calculating models regarding synchronization, propagation and equivalent continuous level are presented. The simulation framework for modeling the noise-affected factors and calculating the equivalent continuous noise by incorporating the noise-calculating models into simulation strategy is proposed. An application study is presented to demonstrate and justify the proposed simulation method in predicting the equivalent continuous noise during construction. The study contributes to provision of a simulation methodology to quantitatively predict the equivalent continuous noise of construction by considering the relevant uncertainties, dynamics and interactions. PMID:27529266
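The equivalent continuous level at the core of this record reduces to an energy average of time-weighted sound levels. A minimal sketch, assuming an illustrative two-state duty cycle rather than the paper's simulated construction schedules:

```python
import math

# Hedged sketch of the equivalent continuous sound level L_eq over a
# period: energy-average the levels, weighted by their durations.
def leq(levels_db, durations_s):
    """Equivalent continuous level (dB) of piecewise-constant noise."""
    total = sum(durations_s)
    energy = sum(t * 10 ** (L / 10) for L, t in zip(levels_db, durations_s))
    return 10 * math.log10(energy / total)

# Illustrative duty cycle: machine at 85 dB for 20 min, idle at 60 dB
# for 40 min. The loud phase dominates despite its shorter duration.
print(round(leq([85, 60], [1200, 2400]), 1))  # -> 80.3
```

A discrete-event simulation like the one described would emit such (level, duration) pairs per activity state; the same energy average then yields the predicted equivalent continuous noise.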
On Application of Model Predictive Control to Power Converter with Switching
NASA Astrophysics Data System (ADS)
Zanma, Tadanao; Fukuta, Junichi; Doki, Shinji; Ishida, Muneaki; Okuma, Shigeru; Matsumoto, Takashi; Nishimori, Eiji
This paper concerns DC-DC converter control. A DC-DC converter contains both continuous components, such as inductance, conductance and resistance, and discrete ones, namely semiconductor switching elements such as IGBTs and MOSFETs. Such a system can be regarded as a hybrid dynamical system. This paper therefore presents a DC-DC control technique based on model predictive control. Specifically, the case in which the load of the DC-DC converter changes from active to sleep is considered. For this case, a control method is proposed which makes the output voltage follow the reference quickly during transients and keeps the switching frequency constant in steady state. In addition, in applying model predictive control to power electronics circuits, the switching characteristics of the devices and the restriction conditions for protection are also considered. The effectiveness of the proposed method is illustrated through simulation results comparing it with a conventional method.
Numerical noise prediction in fluid machinery
NASA Astrophysics Data System (ADS)
Pantle, Iris; Magagnato, Franco; Gabi, Martin
2005-09-01
Numerical methods have become increasingly important in the design and optimization of fluid machinery. However, where noise emission is concerned, one can hardly find standardized prediction methods combining flow and acoustical optimization. Several numerical field methods for sound calculation have been developed. Due to the complexity of the flows considered, approaches must be chosen that avoid exhaustive computation. In this contribution the noise of a simple propeller is investigated. The configurations of the calculations comply with an existing experimental setup chosen for evaluation. The in-house CFD solver SPARC contains an acoustic module based on the Ffowcs Williams-Hawkings acoustic analogy. From the flow results of the time-dependent large eddy simulation, the time-dependent acoustic sources are extracted and passed to the acoustic module, where the relevant sound pressure levels are calculated. The difficulties that arise when proceeding from open to closed rotors and from gas to liquid are discussed.
Xu, Jingyang; Zhang, Ziyuan; Zheng, Xiaochun; Bond, John W
2017-05-01
Visualization of latent fingerprints on metallic surfaces by applying electrostatic charging and adsorption is considered a promising chemical-free method; it has the merit of being nondestructive and is considered effective in some difficult situations, such as aged fingerprint deposits or those exposed to environmental extremes. In practice, a portable electrostatic generator is easily accessible in a local forensic technology laboratory, as such devices are already widely used in the visualization of footwear impressions. In this study, a modified version of this electrostatic apparatus is proposed for latent fingerprint development and has shown great potential in visualizing fingerprints on metallic surfaces such as cartridge cases. Results indicate that this experimental arrangement can successfully develop aged latent fingerprints on metal surfaces, and we demonstrate its effectiveness compared with existing conventional fingerprint recovery methods. © 2016 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Mercier, Sylvain; Gratton, Serge; Tardieu, Nicolas; Vasseur, Xavier
2017-12-01
Many applications in structural mechanics require the numerical solution of sequences of linear systems, typically arising from a finite element discretization of the governing equations on fine meshes. The method of Lagrange multipliers is often used to take mechanical constraints into account. The resulting matrices then exhibit a saddle point structure, and the iterative solution of such linear systems is considered challenging. A popular strategy is to combine preconditioning and deflation to yield an efficient method. We propose an alternative that is applicable to the general case and not only to matrices with a saddle point structure. In this approach, we update an existing algebraic or application-based preconditioner using specific available information, exploiting the knowledge of an approximate invariant subspace or of matrix-vector products. The resulting preconditioner has the form of a limited memory quasi-Newton matrix and requires a small number of linearly independent vectors. Numerical experiments performed on three large-scale applications in elasticity highlight the relevance of the new approach. We show that the proposed method outperforms the deflation method when considering sequences of linear systems with varying matrices.
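A hedged sketch of a limited-memory preconditioner update of the kind described here, in its spectral quasi-Newton form: a first-level preconditioner P is corrected with a few stored directions S so that the updated operator solves exactly along those directions. The random SPD matrix and Jacobi first-level preconditioner below are illustrative assumptions, not the paper's elasticity test cases.

```python
import numpy as np

# Sketch of a spectral limited-memory preconditioner (LMP) update:
# H = (I - S E^{-1} (AS)^T) P (I - AS E^{-1} S^T) + S E^{-1} S^T,
# with E = S^T A S. By construction H A S = S.
def lmp_update(A, P, S):
    """Update preconditioner P with directions S so that H A S = S."""
    W = A @ S
    E = S.T @ W                      # small k-by-k matrix, cheap to factor
    Einv = np.linalg.inv(E)
    L = np.eye(len(A)) - S @ Einv @ W.T
    return L @ P @ L.T + S @ Einv @ S.T

rng = np.random.default_rng(1)
M = rng.normal(size=(30, 30))
A = M @ M.T + 30 * np.eye(30)        # SPD system matrix (illustrative)
P = np.diag(1.0 / np.diag(A))        # assumed Jacobi first-level preconditioner
S = rng.normal(size=(30, 4))         # a few stored, linearly independent vectors
H = lmp_update(A, P, S)
print(np.allclose(H @ A @ S, S))     # stored directions are solved exactly
```

In a sequence of slowly varying systems, S would typically hold approximate invariant-subspace vectors or past Krylov directions, which is the information the abstract says the update exploits.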
Salústio, P J; Feio, G; Figueirinhas, J L; Pinto, J F; Cabral Marques, H M
2009-02-01
The work aims to prove the complexation of two model drugs (ibuprofen, IB, and indomethacin, IN) by beta-cyclodextrin (betaCD) and the effect of water on the process, and to compare their complexation yields. Two methods were considered: kneading of a binary mixture of the drug and betaCD, and inclusion of either IB or IN in aqueous solutions of betaCD. In the latter method water was removed by air stream, spray-drying or freeze-drying. To prove the formation of complexes in the final products, optical microscopy, UV spectroscopy, IR spectroscopy, DSC, X-ray and NMR were considered. Each powder was added to an acidic solution (pH=2) to quantify the concentration of the drug inside the betaCD cavity. Other media (pH=5 and 7) were used to prove the existence of uncomplexed drug in each powder, as the drugs' solubility increases with pH. It was observed that complexation occurred in all powders, and that the fraction of drug inside the betaCD depended neither on the method of complexation nor on the drying process considered.
Scales and Exercises with Aksak Meters in Flute Education: A Study with Turkish and Italian Students
ERIC Educational Resources Information Center
Sakin, Ajda Senol; Öztürk, Ferda Gürgan
2016-01-01
Musical scale and exercise studies in instrumental education are considered as a fundamental component of music education. During an analysis of methods prepared for instrumental education, it was detected that scale and exercise studies for Aksak meters generally did not exist. This study was conducted to identify the effects of scales and…
Issues to consider in the derivation of water quality benchmarks for the protection of aquatic life.
Schneider, Uwe
2014-01-01
While water quality benchmarks for the protection of aquatic life have been in use in some jurisdictions for several decades (USA, Canada, several European countries), more and more countries are now setting up their own national water quality benchmark development programs. In doing so, they either adopt an existing method from another jurisdiction, update on an existing approach, or develop their own new derivation method. Each approach has its own advantages and disadvantages, and many issues have to be addressed when setting up a water quality benchmark development program or when deriving a water quality benchmark. Each of these tasks requires a special expertise. They may seem simple, but are complex in their details. The intention of this paper was to provide some guidance for this process of water quality benchmark development on the program level, for the derivation methodology development, and in the actual benchmark derivation step, as well as to point out some issues (notably the inclusion of adapted populations and cryptic species and points to consider in the use of the species sensitivity distribution approach) and future opportunities (an international data repository and international collaboration in water quality benchmark development).
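One of the issues the paper flags is careful use of the species sensitivity distribution (SSD) approach. A minimal sketch of the core computation, assuming a log-normal SSD fitted by moments; the toxicity endpoints below are invented, and real benchmark derivations involve data-quality screening and uncertainty factors not shown here:

```python
import math

# Hedged sketch of the SSD approach: fit a log-normal distribution to
# species toxicity endpoints and read off the hazardous concentration
# for 5% of species (HC5), a common benchmark starting point.
def hc5_lognormal(toxicity_values):
    """5th percentile (HC5) of a log-normal SSD fitted by moments."""
    logs = [math.log10(v) for v in toxicity_values]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((x - mu) ** 2 for x in logs) / (n - 1) if n > 1 else 0.0
    z05 = -1.6449  # standard normal 5th-percentile quantile
    return 10 ** (mu + z05 * math.sqrt(var))

# Illustrative chronic endpoints (ug/L) for eight species.
data = [12.0, 35.0, 48.0, 90.0, 150.0, 260.0, 400.0, 900.0]
print(round(hc5_lognormal(data), 1))
```

Points the paper raises, such as adapted populations or cryptic species, enter exactly here: they change which endpoints are eligible for the fit, and hence the resulting benchmark.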
Testing for genetic association taking into account phenotypic information of relatives.
Uh, Hae-Won; Wijk, Henk Jan van der; Houwing-Duistermaat, Jeanine J
2009-12-15
We investigated efficient case-control association analysis using family data. The outcome of interest was coronary heart disease. We employed existing and new methods that take into account the correlations among related individuals to obtain the proper type I error rates. The methods considered for autosomal single-nucleotide polymorphisms were: 1) generalized estimating equations-based methods, 2) variance-modified Cochran-Armitage (MCA) trend test incorporating kinship coefficients, and 3) genotypic modified quasi-likelihood score test. Additionally, for X-linked single-nucleotide polymorphisms we proposed a two-degrees-of-freedom test. Performance of these methods was tested using Framingham Heart Study 500k array data.
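The unmodified Cochran-Armitage trend test underlying method 2 can be sketched as follows; the kinship-coefficient variance modification described in the record is omitted, and the genotype counts are illustrative, not Framingham data:

```python
import math

# Hedged sketch of the plain (independent-samples) Cochran-Armitage
# trend test with additive scores (0, 1, 2) for genotypes aa/Aa/AA.
def ca_trend_chi2(cases, controls, scores=(0, 1, 2)):
    """1-df chi-square statistic for a linear trend in proportions."""
    N = sum(cases) + sum(controls)
    R = sum(cases)
    cols = [a + b for a, b in zip(cases, controls)]
    sa = sum(s * a for s, a in zip(scores, cases))
    sc = sum(s * c for s, c in zip(scores, cols))
    s2c = sum(s * s * c for s, c in zip(scores, cols))
    num = N * (N * sa - R * sc) ** 2
    den = R * (N - R) * (N * s2c - sc ** 2)
    return num / den

# Illustrative genotype counts with a strong allele-dose trend.
chi2 = ca_trend_chi2(cases=(10, 30, 60), controls=(60, 30, 10))
print(round(chi2, 1))  # -> 71.4, far beyond the 1-df 5% critical value 3.84
```

The variance-modified version replaces the denominator's variance with one inflated by the kinship coefficients among relatives, which restores the type I error rate in family data.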
Paradigms for machine learning
NASA Technical Reports Server (NTRS)
Schlimmer, Jeffrey C.; Langley, Pat
1991-01-01
Five paradigms are described for machine learning: connectionist (neural network) methods, genetic algorithms and classifier systems, empirical methods for inducing rules and decision trees, analytic learning methods, and case-based approaches. The dimensions along which these paradigms vary in their approach to learning are considered, and the basic methods used within each framework are reviewed, together with open research issues. It is argued that the similarities among the paradigms are more important than their differences, and that future work should attempt to bridge the existing boundaries. Finally, some recent developments in the field of machine learning are discussed, and their impact on both research and applications is examined.
Legarra, Andres; Christensen, Ole F.; Vitezica, Zulma G.; Aguilar, Ignacio; Misztal, Ignacy
2015-01-01
Recent use of genomic (marker-based) relationships shows that relationships exist within and across base populations (breeds or lines). However, current treatment of pedigree relationships is unable to consider relationships within or across base populations, although such relationships must exist due to the finite size of the ancestral population and connections between populations. This complicates the conciliation of both approaches and, in particular, combining pedigree with genomic relationships. We present a coherent theoretical framework to consider base populations in pedigree relationships. We suggest a conceptual framework that considers each ancestral population as a finite-sized pool of gametes. This generates across-individual relationships and contrasts with the classical view, in which each population is considered an infinite, unrelated pool. Several ancestral populations may be connected and therefore related. Each ancestral population can be represented as a “metafounder,” a pseudo-individual included as a founder of the pedigree and similar to an “unknown parent group.” Metafounders have self- and across relationships according to a set of parameters, which measure ancestral relationships, i.e., homozygosities within populations and relationships across populations. These parameters can be estimated from existing pedigree and marker genotypes using maximum likelihood or a method based on summary statistics, for arbitrarily complex pedigrees. Equivalences of genetic variance and variance components between the classical and this new parameterization are shown. Segregation variance in crosses of populations is modeled. Efficient algorithms for computation of relationship matrices, their inverses, and inbreeding coefficients are presented. Use of metafounders leads to compatibility of genomic and pedigree relationship matrices and to simple computing algorithms. Examples and code are given. PMID:25873631
Exploring 3D Human Action Recognition: from Offline to Online.
Liu, Zhenyu; Li, Rui; Tan, Jianrong
2018-02-20
With the introduction of cost-effective depth sensors, a tremendous amount of research has been devoted to studying human action recognition using 3D motion data. However, most existing methods work in an offline fashion, i.e., they operate on a segmented sequence. There are a few methods specifically designed for online action recognition, which continually predict action labels as a stream sequence proceeds. In view of this fact, we propose a question: can we draw inspiration from, and borrow techniques or descriptors of, existing offline methods, and then apply these to online action recognition? Note that extending offline techniques or descriptors to online applications is not straightforward, since at least two problems, real-time performance and sequence segmentation, are usually not considered in offline action recognition. In this paper, we give a positive answer to the question. To develop applicable online action recognition methods, we carefully explore feature extraction, sequence segmentation, computational costs, and classifier selection. The effectiveness of the developed methods is validated on the MSR 3D Online Action dataset and the MSR Daily Activity 3D dataset.
Role of short-range correlation in facilitation of wave propagation in a long-range ladder chain
NASA Astrophysics Data System (ADS)
Farzadian, O.; Niry, M. D.
2018-09-01
We extend a method for generating a random chain that has a kind of short-range correlation, induced by a repeated sequence, while retaining long-range correlation. Three distinct methods are considered to study the localization-delocalization transition of mechanical waves in one-dimensional disordered media with the simultaneous existence of short- and long-range correlation. First, a transfer-matrix method was used to calculate numerically the localization length of a wave in a binary chain. We found that the existence of short-range correlation in a long-range correlated chain can increase the localization length at the resonance frequency Ωc. Then, we carried out an analytical study of the delocalization properties of the waves in correlated disordered media around Ωc. Finally, we apply a dynamical method based on the direct numerical simulation of the wave equation to study the propagation of waves in the correlated chain. Imposing short-range correlation on the long-range correlated background leads the propagation to super-diffusive transport. The results obtained with all three methods are in agreement with each other.
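The transfer-matrix estimate of the localization length used in the first method can be sketched for an uncorrelated binary mass chain; the masses, frequencies, and spring constant below are illustrative assumptions, and the paper's correlated disorder is not reproduced.

```python
import math, random

# Hedged sketch: localization length of harmonic waves on a random
# binary mass chain, m_n u'' = k(u_{n+1} - 2 u_n + u_{n-1}), via the
# growth rate (Lyapunov exponent) of the 2x2 transfer-matrix product
# u_{n+1} = (2 - m_n w^2 / k) u_n - u_{n-1}.
def localization_length(masses, omega, k=1.0):
    """Inverse Lyapunov exponent of the transfer-matrix product."""
    u_prev, u = 1.0, 1.0
    log_growth = 0.0
    for m in masses:
        u_next = (2.0 - m * omega ** 2 / k) * u - u_prev
        u_prev, u = u, u_next
        norm = max(abs(u), abs(u_prev))
        log_growth += math.log(norm)     # accumulate growth, then renormalize
        u_prev, u = u_prev / norm, u / norm
    return len(masses) / log_growth

random.seed(42)
chain = [random.choice((1.0, 2.0)) for _ in range(200000)]
xi_low = localization_length(chain, omega=0.5)   # long wavelengths
xi_high = localization_length(chain, omega=1.5)  # short wavelengths
print(xi_low > xi_high)  # disorder localizes short wavelengths far more strongly
```

Imposing correlations on the mass sequence, as the paper does, changes exactly this Lyapunov exponent, which is how correlated disorder can enlarge the localization length near Ωc.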
Selection of remedial alternatives for mine sites: a multicriteria decision analysis approach.
Betrie, Getnet D; Sadiq, Rehan; Morin, Kevin A; Tesfamariam, Solomon
2013-04-15
The selection of remedial alternatives for mine sites is a complex task because it involves multiple criteria, often with conflicting objectives. However, an existing framework used to select remedial alternatives lacks multicriteria decision analysis (MCDA) aids and does not consider uncertainty in the selection of alternatives. The objective of this paper is to improve the existing framework by introducing deterministic and probabilistic MCDA methods. The Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) methods have been implemented in this study. The MCDA analysis involves preparing the inputs to the PROMETHEE methods, that is, identifying the alternatives, defining the criteria, deriving the criteria weights using the analytic hierarchy process (AHP), defining the probability distributions of the criteria weights, and conducting Monte Carlo simulation (MCS); running the PROMETHEE methods on these inputs; and conducting a sensitivity analysis. A case study is presented to demonstrate the improved framework at a mine site. The results showed that the improved framework provides a reliable way of selecting remedial alternatives as well as quantifying the impact of different criteria on the selected alternatives. Copyright © 2013 Elsevier Ltd. All rights reserved.
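A minimal PROMETHEE II sketch in the spirit of this framework: pairwise preference degrees are aggregated into net outranking flows, whose ordering ranks the alternatives. The alternatives, criteria scores, and AHP-style weights are invented, and the simple "usual" (step) preference function is assumed rather than the paper's choices.

```python
import numpy as np

# Hedged PROMETHEE II sketch with the usual criterion: P(d) = 1 if d > 0.
def promethee_ii(scores, weights):
    """Net outranking flows; a higher flow means a preferred alternative."""
    n = len(scores)
    phi = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = scores[i] - scores[j]
            pref_ij = weights[d > 0].sum()   # criteria where i beats j
            pref_ji = weights[d < 0].sum()   # criteria where j beats i
            phi[i] += (pref_ij - pref_ji) / (n - 1)
    return phi

# Rows: three hypothetical remedial alternatives; columns: cost,
# effectiveness, community acceptance (all rescaled so higher is better).
scores = np.array([[0.8, 0.5, 0.7],
                   [0.6, 0.9, 0.6],
                   [0.2, 0.95, 0.3]])
weights = np.array([0.5, 0.3, 0.2])          # AHP-style weights (illustrative)
phi = promethee_ii(scores, weights)
print(int(np.argmax(phi)))                   # index of the best-ranked alternative
```

The probabilistic variant described in the abstract would redraw `weights` from their distributions inside a Monte Carlo loop and report how often each alternative ranks first.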
Global Existence Analysis of Cross-Diffusion Population Systems for Multiple Species
NASA Astrophysics Data System (ADS)
Chen, Xiuqing; Daus, Esther S.; Jüngel, Ansgar
2018-02-01
The existence of global-in-time weak solutions to reaction-cross-diffusion systems for an arbitrary number of competing population species is proved. The equations can be derived from an on-lattice random-walk model with general transition rates. In the case of linear transition rates, it extends the two-species population model of Shigesada, Kawasaki, and Teramoto. The equations are considered in a bounded domain with homogeneous Neumann boundary conditions. The existence proof is based on a refined entropy method and a new approximation scheme. Global existence follows under a detailed balance or weak cross-diffusion condition. The detailed balance condition is related to the symmetry of the mobility matrix, which mirrors Onsager's principle in thermodynamics. Under detailed balance (and without reaction) the entropy is nonincreasing in time, but counter-examples show that the entropy may increase initially if detailed balance does not hold.
Melnikov processes and chaos in randomly perturbed dynamical systems
NASA Astrophysics Data System (ADS)
Yagasaki, Kazuyuki
2018-07-01
We consider a wide class of randomly perturbed systems subjected to stationary Gaussian processes and show that chaotic orbits exist almost surely under some nondegeneracy condition, no matter how small the random forcing terms are. This result contrasts sharply with the deterministic forcing case, in which chaotic orbits exist only if the influence of the forcing terms overcomes that of the other terms in the perturbations. To obtain the result, we extend Melnikov’s method and prove that the corresponding Melnikov functions, which we call the Melnikov processes, have infinitely many zeros, so that infinitely many transverse homoclinic orbits exist. In addition, a theorem on the existence and smoothness of stable and unstable manifolds is given and the Smale–Birkhoff homoclinic theorem is extended in an appropriate form for randomly perturbed systems. We illustrate our theory for the Duffing oscillator subjected parametrically to the Ornstein–Uhlenbeck process.
Considering the Lives of Microbes in Microbial Communities.
Shank, Elizabeth A
2018-01-01
Over the last decades, sequencing technologies have transformed our ability to investigate the composition and functional capacity of microbial communities. Even so, critical questions remain about these complex systems that cannot be addressed by the bulk, community-averaged data typically provided by sequencing methods. In this Perspective, I propose that future advances in microbiome research will emerge from considering "the lives of microbes": we need to create methods to explicitly interrogate how microbes exist and interact in native-setting-like microenvironments. This approach includes developing approaches that expose the phenotypic heterogeneity of microbes; exploring the effects of coculture cues on cellular differentiation and metabolite production; and designing visualization systems that capture features of native microbial environments while permitting the nondestructive observation of microbial interactions over space and time with single-cell resolution.
[Thinking on the design of sham acupuncture in clinical research].
Pan, Li-Jia; Chen, Bo; Zhao, Xue; Guo, Yi
2014-01-01
Randomized controlled trials (RCTs) are the source of the raw data of evidence-based medicine, and blinding is adopted in most high-quality RCTs. Sham acupuncture is the main form of blinding in acupuncture clinical trials. In order to improve the quality of acupuncture clinical trials, and based on the necessity of sham acupuncture in clinical research as well as its current situation and existing problems, suggestions are put forward regarding new approaches and new design methods that can be adopted as references, and regarding factors that have to be considered during implementation. The various subjective and objective factors involved in the trial process should be considered, current international standards should be used, quantification should be pursued as far as possible, and strict quality monitoring should be carried out.
Prioritizing Health: A Systematic Approach to Scoping Determinants in Health Impact Assessment.
McCallum, Lindsay C; Ollson, Christopher A; Stefanovic, Ingrid L
2016-01-01
The determinants of health are those factors that have the potential to affect health, either positively or negatively, and include a range of personal, social, economic, and environmental factors. In the practice of health impact assessment (HIA), the stage at which the determinants of health are considered for inclusion is the scoping step. The scoping step is intended to identify how the HIA will be carried out and to set the boundaries (e.g., temporal and geographical) for the assessment. There are several factors that can help to inform the scoping process, many of which are considered in existing HIA tools and guidance; however, a systematic method of prioritizing determinants was found to be lacking. In order to analyze the HIA scoping tools that are available, a systematic literature review was conducted, including both primary and gray literature. A total of 10 HIA scoping tools met the inclusion/exclusion criteria and were carried forward for comparative analysis. The analysis focused on minimum elements and practice standards of HIA scoping that have been established in the field. The analysis determined that existing approaches lack a clear, systematic method of prioritizing health determinants for inclusion in HIA. This finding led to the development of a Systematic HIA Scoping tool that addresses this gap. The decision matrix tool uses factors such as impact, public concern, and data availability to prioritize health determinants. Additionally, the tool allows for identification of data gaps and provides a transparent method for budget allocation and assessment planning. In order to increase efficiency and improve utility, the tool was programmed into Microsoft Excel. Future work in the area of HIA methodology development is vital to the ongoing success of the practice and the utilization of HIA as a reliable decision-making tool.
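The decision-matrix prioritization described here can be sketched as a weighted score per determinant. The determinant names, factor weights, and scores below are invented for illustration, and the actual tool was implemented in Microsoft Excel rather than code.

```python
# Hedged sketch of a decision-matrix scoping tool: score each health
# determinant on weighted factors and rank by total score. All names,
# weights, and scores are hypothetical.
FACTORS = {"impact": 0.5, "public_concern": 0.3, "data_availability": 0.2}

determinants = {
    "air quality": {"impact": 5, "public_concern": 4, "data_availability": 3},
    "noise":       {"impact": 3, "public_concern": 5, "data_availability": 4},
    "employment":  {"impact": 4, "public_concern": 2, "data_availability": 2},
}

def priority(scores):
    """Weighted sum of factor scores for one determinant."""
    return sum(FACTORS[f] * scores[f] for f in FACTORS)

ranked = sorted(determinants, key=lambda d: priority(determinants[d]),
                reverse=True)
print(ranked)  # highest-priority determinant first
```

A zero or missing data-availability score immediately flags a data gap, which is one of the secondary uses of the tool the abstract mentions.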
Naturalness preservation image contrast enhancement via histogram modification
NASA Astrophysics Data System (ADS)
Tian, Qi-Chong; Cohen, Laurent D.
2018-04-01
Contrast enhancement is a technique for enhancing image contrast to obtain better visual quality. Since many existing contrast enhancement algorithms produce over-enhanced results, naturalness preservation needs to be considered in the framework of image contrast enhancement. This paper proposes a naturalness-preserving contrast enhancement method, which adopts histogram matching to improve the contrast and uses image quality assessment to automatically select the optimal target histogram. Both contrast improvement and naturalness preservation are considered in the target histogram, so the method avoids the over-enhancement problem. In the proposed method, the optimal target histogram is a weighted sum of the original histogram, the uniform histogram, and a Gaussian-shaped histogram. A structural metric and a statistical naturalness metric are then used to determine the weights of the corresponding histograms. Finally, the contrast-enhanced image is obtained by matching the optimal target histogram. Experiments demonstrate that the proposed method outperforms the compared histogram-based contrast enhancement algorithms.
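The histogram-matching core of such a method can be sketched as a CDF lookup table. Construction of the optimal target histogram via the quality metrics is not reproduced; a plain uniform target (i.e., histogram equalization as a special case) and a synthetic low-contrast image are assumed for illustration.

```python
import numpy as np

# Hedged sketch of histogram matching for 8-bit grayscale images: map
# each source gray level to the target level with the closest CDF value.
def match_histogram(image, target_hist):
    """Remap gray levels so the result approximately follows target_hist."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    src_cdf = np.cumsum(hist) / hist.sum()
    tgt_cdf = np.cumsum(target_hist) / np.sum(target_hist)
    # Build a 256-entry lookup table from source CDF into target CDF.
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[image]

rng = np.random.default_rng(0)
img = rng.integers(90, 160, size=(64, 64), dtype=np.uint8)  # low-contrast image
uniform = np.ones(256)                                      # equalizing target
out = match_histogram(img, uniform)
print(out.min() < 10 and out.max() > 245)  # contrast stretched to the full range
```

In the proposed method the target would instead be the metric-selected weighted blend of the original, uniform, and Gaussian-shaped histograms, which tempers this full stretch to preserve naturalness.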
Robust estimation procedure in panel data model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shariff, Nurul Sima Mohamad; Hamzah, Nor Aishah
2014-06-19
The panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross-sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
Domain decomposition methods for nonconforming finite element spaces of Lagrange-type
NASA Technical Reports Server (NTRS)
Cowsar, Lawrence C.
1993-01-01
In this article, we consider the application of three popular domain decomposition methods to Lagrange-type nonconforming finite element discretizations of scalar, self-adjoint, second order elliptic equations. The additive Schwarz method of Dryja and Widlund, the vertex space method of Smith, and the balancing method of Mandel applied to nonconforming elements are shown to converge at a rate no worse than their applications to the standard conforming piecewise linear Galerkin discretization. Essentially, the theory for the nonconforming elements is inherited from the existing theory for the conforming elements with only modest modification by constructing an isomorphism between the nonconforming finite element space and a space of continuous piecewise linear functions.
Mokhtari Azar, Akbar; Ghadirpour Jelogir, Ali; Nabi Bidhendi, Gholam Reza; Zaredar, Narges
2011-04-01
The operator is without doubt one of the key elements of a wastewater treatment plant: by identifying inadequacies, the operator serves an essential role in plant performance. Several methods are used for wastewater treatment, all requiring considerable cost; however, investments in treatment facilities pay off only when the expected efficiency of the treatment plant is obtained, and with an experienced operator this goal is more easily reached. In this research, the wastewater of an urban community, at moderately, dilutely and highly concentrated pollution levels, was treated using surface and deep aeration treatment methods. Sampling of these pilots was performed from winter 2008 to summer 2009. The results indicate that all analyzed parameters were reduced using the activated sludge and surface aeration methods; however, with activated sludge and deep aeration combined with proper operator performance, more pollutants could be eliminated. Hence, the presence of an operator in wastewater treatment plants is a basic principle for achieving the intended efficiency. A wastewater treatment system is not intelligent in itself; it is the operator who, by continuous presence, can organize even an inefficient system. The converse also holds: despite varied units and an appropriate design, a wastewater treatment plant without an operator cannot be expected to be highly efficient. In places frequently affected by shocks of organic and hydraulic load, a compensator tank is important for buffering the wastewater treatment process. Finally, with regard to microbial parameters, the existence of a disinfection unit is very useful.
Structural design using equilibrium programming formulations
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1995-01-01
Solutions to increasingly larger structural optimization problems are desired. However, computational resources are strained to meet this need. New methods will be required to solve increasingly larger problems. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions), and be used as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first new formulation is another approximation technique which is a general updating scheme for the sensitivity derivatives of design constraints. The second new formulation uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
Experiment study on RC frame retrofitted by the external structure
NASA Astrophysics Data System (ADS)
Liu, Chunyang; Shi, Junji; Hiroshi, Kuramoto; Taguchi, Takashi; Kamiya, Takashi
2016-09-01
A new retrofitting method is proposed herein for reinforced concrete (RC) structures through attachment of an external structure. The external structure consists of a fiber concrete encased steel frame, connection slab and transverse beams. The external structure is connected to the existing structure through a connection slab and transverse beams. Pseudostatic experiments were carried out on one unretrofitted specimen and three retrofitted frame specimens. The characteristics, including failure mode, crack pattern, hysteresis loops behavior, relationship of strain and displacement of the concrete slab, are demonstrated. The results show that the load carrying capacity is obviously increased, and the extension length of the slab and the number of columns within the external frame are important influence factors on the working performance of the existing structure. In addition, the displacement difference between the existing structure and the outer structure was caused mainly by three factors: shear deformation of the slab, extraction of transverse beams, and drift of the conjunction part between the slab and the existing frame. Furthermore, the total deformation determined by the first two factors accounted for approximately 80% of the damage, therefore these factors should be carefully considered in engineering practice to enhance the effects of this new retrofitting method.
Experimental and CFD evidence of multiple solutions in a naturally ventilated building.
Heiselberg, P; Li, Y; Andersen, A; Bjerre, M; Chen, Z
2004-02-01
This paper considers the existence of multiple solutions to natural ventilation of a simple one-zone building, driven by combined thermal and opposing wind forces. The present analysis is an extension of an earlier analytical study of natural ventilation in a fully mixed building, and includes the effect of thermal stratification. Both computational and experimental investigations were carried out in parallel with an analytical investigation. When flow is dominated by thermal buoyancy, it was found experimentally that there is thermal stratification. When the flow is wind-dominated, the room is fully mixed. Results from all three methods have shown that the hysteresis phenomena exist. Under certain conditions, two different stable steady-state solutions are found to exist by all three methods for the same set of parameters. As shown by both the computational fluid dynamics (CFD) and experimental results, one of the solutions can shift to another when there is a sufficient perturbation. These results have probably provided the strongest evidence so far for the conclusion that multiple states exist in natural ventilation of simple buildings. Different initial conditions in the CFD simulations led to different solutions, suggesting that caution must be taken when adopting the commonly used 'zero initialization'.
Feasibility study of a single, elliptical heliocentric Earth-Mars trajectory
NASA Technical Reports Server (NTRS)
Blake, M.; Fulgham, K.; Westrup, S.
1989-01-01
The initial intent of this design project was to evaluate the existence and feasibility of a single elliptical heliocentric Earth/Mars trajectory. This trajectory was constrained to encounter Mars twice in its orbit, within a time interval of 15 to 180 Earth days between encounters. The single ellipse restriction was soon found to be prohibitive for reasons shown later. Therefore, the approach taken in the design of the round-trip mission to Mars was to construct single-leg trajectories which connected two planets on two prescribed dates. Three methods of trajectory design were developed. Method 1 is an eclectic approach and employs Gaussian Orbit Determination (Method 1A) and Lambert-Euler Preliminary Orbit Determination (Method 1B) in conjunction with each other. Method 2 is an additional version of Lambert's Solution to orbit determination, and both a coplanar and a noncoplanar solution were developed within Method 2. In each of these methods, the fundamental variables are two position vectors and the time between the position vectors. In all methods, the motion was considered Keplerian motion and the reference frame origin was located at the sun. Perturbative effects were not considered in Method 1. The feasibility study of round-trip Earth/Mars trajectories maintains generality by considering only heliocentric trajectory parameters and planetary approach conditions. The coordinates and velocity components of the planets, for the standard epoch J2000, were computed from an approximate set of osculating elements by the procedure outlined in an ephemeris of coordinates.
Community mental health care in India.
Padmavati, R
2005-04-01
Recent times have witnessed various forms of community care for the mentally ill in India. Non-governmental organizations (NGOs) play a pivotal role in filling the gap between the existing mental health services in India and the substantial need for these services. The various strategies employed in community care have attempted to utilize existing community resources for implementation. Incorporating informal manpower resources with specialist psychiatric care, and integrating them with existing health care facilities, have been the general strategies. While the feasibility and cost-effectiveness of NGO-operated community outreach programs for the mentally ill have been demonstrated, various factors are seen to influence the planning and execution of such programs. This paper elucidates some critical factors that need to be considered in community mental health care in India.
Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.
Ye, Jun
2015-03-01
In pattern recognition and medical diagnosis, the similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposes improved cosine similarity measures of SNSs based on the cosine function, including single valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Weighted cosine similarity measures of SNSs are then introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures is proposed to solve medical diagnosis problems with simplified neutrosophic information. The improved cosine similarity measures of SNSs were compared with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality in overcoming some shortcomings of the existing measures in certain cases. In the medical diagnosis method, a proper diagnosis is found from the cosine similarity measures between the symptoms and the considered diseases, each represented by SNSs. The method was then applied to two medical diagnosis problems to show its applicability and effectiveness. Both numerical examples demonstrated that the improved cosine similarity measures based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. In the two medical diagnosis problems, the diagnoses obtained using various similarity measures of SNSs were identical, demonstrating the effectiveness and rationality of the proposed diagnosis method.
The improved cosine measures of SNSs based on the cosine function overcome some drawbacks of existing cosine similarity measures of SNSs in vector space; the resulting diagnosis method is thus well suited to handling medical diagnosis problems with simplified neutrosophic information. Copyright © 2014 Elsevier B.V. All rights reserved.
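A hedged sketch of a cosine-function-based similarity between single valued neutrosophic sets of the kind this abstract describes. The two weightings below (max of component distances vs. their sum over 6) are plausible forms of such measures; the exact definitions in the published paper may differ:

```python
import math

def improved_cosine_similarity(A, B, use_max=True):
    """Cosine-function-based similarity between two single valued
    neutrosophic sets A and B, each given as a list of (T, I, F)
    membership triples in [0, 1]. Returns a value in [0, 1]."""
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        dt, di, df = abs(ta - tb), abs(ia - ib), abs(fa - fb)
        if use_max:
            # Largest component distance drives the similarity down.
            total += math.cos(math.pi * max(dt, di, df) / 2)
        else:
            # All three component distances contribute equally.
            total += math.cos(math.pi * (dt + di + df) / 6)
    return total / len(A)

# Identical sets give similarity 1 (cos 0 for every element).
A = [(0.8, 0.1, 0.1), (0.4, 0.3, 0.5)]
B = [(0.6, 0.2, 0.2), (0.5, 0.2, 0.4)]
print(improved_cosine_similarity(A, A))
print(improved_cosine_similarity(A, B))
```

In a diagnosis setting, each candidate disease would be an SNS like `B`, the patient's symptoms an SNS like `A`, and the disease with the largest similarity would be selected.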
Review of methodology and technology available for the detection of extrasolar planetary systems
NASA Technical Reports Server (NTRS)
Tarter, J. C.; Black, D. C.; Billingham, J.
1985-01-01
Four approaches exist for the detection of extrasolar planets. According to the only direct method, the planet is imaged at some wavelength in a manner which makes it possible to differentiate its own feeble luminosity (internal energy source plus reflected starlight) from that of the nearby host star. The three indirect methods involve the detection of a planetary mass companion on the basis of the observable effects it has on the host star. A search is conducted regarding the occurrence of regular, periodic changes in the stellar spatial motion (astrometric method) or the velocity of stellar emission line spectra (spectroscopic method) or in the apparent total stellar luminosity (photometric method). Details regarding the approaches employed for implementing the considered methods are discussed.
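The relative sizes of the three indirect signals can be illustrated with rough back-of-the-envelope numbers for a Jupiter-like planet around a Sun-like star (the constants and distances below are illustrative, not drawn from the review itself):

```python
import math

# Rough signal sizes of the three indirect detection methods for a
# Jupiter analog orbiting a Sun-like star (SI units, circular orbit).
G = 6.674e-11
M_sun, M_jup = 1.989e30, 1.898e27      # kg
R_sun, R_jup = 6.957e8, 6.991e7        # m
P = 11.86 * 3.156e7                    # orbital period, s
a_AU, d_pc = 5.2, 10.0                 # orbit radius (AU), stellar distance (pc)

# Spectroscopic: radial-velocity semi-amplitude of the star (sin i = 1).
K = (2 * math.pi * G / P) ** (1 / 3) * M_jup / (M_sun + M_jup) ** (2 / 3)

# Astrometric: angular wobble of the star in arcseconds (a[AU]/d[pc] rule).
theta = (a_AU * M_jup / M_sun) / d_pc

# Photometric: fractional dimming during a transit.
depth = (R_jup / R_sun) ** 2

print(f"RV semi-amplitude  ~ {K:.1f} m/s")
print(f"astrometric wobble ~ {theta * 1e3:.2f} mas")
print(f"transit depth      ~ {depth:.4f}")
```

The resulting figures (roughly 12 m/s, half a milliarcsecond at 10 pc, and a 1% dimming) show why each indirect method demands instrumentation at the edge of what was available when this review was written.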
Travelling waves for a Frenkel-Kontorova chain
NASA Astrophysics Data System (ADS)
Buffoni, Boris; Schwetlick, Hartmut; Zimmer, Johannes
2017-08-01
In this article, the Frenkel-Kontorova model for dislocation dynamics is considered, where the on-site potential consists of quadratic wells joined by small arcs, which can be spinodal (concave) as commonly assumed in physics. The existence of heteroclinic waves-making a transition from one well of the on-site potential to another-is proved by means of a Schauder fixed point argument. The setting developed here is general enough to treat such a Frenkel-Kontorova chain with smooth (C2) on-site potential. It is shown that the method can also establish the existence of two-transition waves for a piecewise quadratic on-site potential.
Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.
2015-01-01
This work deals with the three-dimensional flow of a nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as the Keller-box method. The results are compared with existing studies in some limiting cases and found to be in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills in the temperature distribution for some ranges of parameter values. PMID:25785857
New meteor showers – yes or not?
NASA Astrophysics Data System (ADS)
Koukal, Jakub
2018-01-01
The development of meteor astronomy associated with advances in CCD technology is reflected in a huge increase in the size of meteor orbit databases. Never before in the history of meteor astronomy has it been possible to examine the properties of meteors and meteor showers on such a scale. Existing methods for detecting new meteor showers seem inadequate in these circumstances: the spontaneous discovery of new meteor showers leads to ambiguous specifications, to duplication of already discovered showers, and to the division of existing showers based on each author's own criteria. The analysis in this article considers some new meteor showers in the IAU MDC database.
Uptake of explosives from contaminated soil by existing vegetation at the Iowa Army Ammunition Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, J.F.; Zellmer, S.D.; Tomczyk, N.A.
This study examines the uptake of explosives by existing vegetation growing in soils contaminated with 2,4,6-trinitrotoluene (TNT) and hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) in three areas at the Iowa Army Ammunition Plant (IAAP). To determine explosives uptake under natural environmental conditions, existing plant materials and soil from the root zone were sampled at different locations in each area, and plant materials were separated by species. Standard methods were used to determine the concentrations of explosives, their derivatives, and metabolites in the soil samples. Plant materials were also analyzed. The compound TNT was not detected in the aboveground portion of plants, and vegetation growing on TNT-contaminated soils is not considered a health hazard. However, soil and plant roots may contain TNT degradation products that may be toxic; hence, their consumption is not advised. The compound RDX was found in the tops and roots of plants growing on RDX-contaminated soils at all surveyed sites. Although RDX is not a listed carcinogen, several of its potentially present degradation products are carcinogens. Therefore, the consumption of any plant tissues growing on RDX-contaminated sites should be considered a potential health hazard.
The independent relationship between triglycerides and coronary heart disease
Morrison, Alan; Hokanson, John E
2009-01-01
Aims: The aim was to review epidemiologic studies to reassess whether serum levels of triglycerides should be considered independently of high-density lipoprotein-cholesterol (HDL-C) as a predictor of coronary heart disease (CHD). Methods and results: We systematically reviewed population-based cohort studies in which baseline serum levels of triglycerides and HDL-C were included as explanatory variables in multivariate analyses with the development of CHD (coronary events or coronary death) as dependent variable. A total of 32 unique reports describing 38 cohorts were included. The independent association between elevated triglycerides and risk of CHD was statistically significant in 16 of 30 populations without pre-existing CHD. Among populations with diabetes mellitus or pre-existing CHD, or the elderly, triglycerides were not significantly independently associated with CHD in any of 8 cohorts. Triglycerides and HDL-C were mutually exclusive predictors of coronary events in 12 of 20 analyses of patients without pre-existing CHD. Conclusions: Epidemiologic studies provide evidence of an association between triglycerides and the development of primary CHD independently of HDL-C. Evidence of an inverse relationship between triglycerides and HDL-C suggests that both should be considered in CHD risk estimation and as targets for intervention. PMID:19436658
Economic evaluation: Concepts, selected studies, system costs, and a proposed program
NASA Technical Reports Server (NTRS)
Osterhoudt, F. H. (Principal Investigator)
1979-01-01
The more usual approaches to valuing crop information are reviewed and an integrated approach is recommended. Problems associated with implementation are examined. What has already been accomplished in the economic evaluation of LACIE-type information is reported including various studies of benefits. The costs of the existing and proposed systems are considered. A method and approach is proposed for further studies.
Control problem for a system of linear loaded differential equations
NASA Astrophysics Data System (ADS)
Barseghyan, V. R.; Barseghyan, T. V.
2018-04-01
The problem of control and optimal control for a system of linear loaded differential equations is considered. Necessary and sufficient conditions for complete controllability and conditions for the existence of a program control and the corresponding motion are formulated. The explicit form of control action for the control problem is constructed and a method for solving the problem of optimal control is proposed.
ERIC Educational Resources Information Center
California State Postsecondary Education Commission, Sacramento.
Budgeting for instructional equipment at California's public colleges and universities is considered. Information is provided on how the University of California and the California State University budget the replacement of existing instructional equipment as well as instructional equipment in new or altered facilities. Methods used by the…
Evaluation of cancer mortality in a cohort of workers exposed to low-level radiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lea, C.S.
1995-12-01
The purpose of this dissertation was to re-analyze existing data to explore methodologic approaches that may determine whether excess cancer mortality in the ORNL cohort can be explained by time-related factors not previously considered; grouping of cancer outcomes; selection bias due to choice of method selected to incorporate an empirical induction period; or the type of statistical model chosen.
Autoimmune hepatitis related autoantibodies in children with type 1 diabetes
2014-01-01
Background and objectives The frequency of Type 1 diabetes (T1D)-related autoantibodies has been determined in children with autoimmune hepatitis; however, the incidence of autoimmune hepatitis-related autoantibodies in children with T1D has been poorly investigated. The aim of the present cross sectional prospective study was to determine the occurrence of autoimmune hepatitis-related autoantibodies in children with T1D. Methods Children with T1D followed in the diabetes clinic at our center were screened for liver-related autoantibodies from November 2010 to November 2011. The patients’ sera were analyzed for autoantibodies such as anti-nuclear antibody, anti-smooth muscle antibody, and anti-liver kidney microsomal antibody, using enzyme linked immunoassay and indirect immunofluorescence methods. A titer of anti-nuclear antibody ≥1/40 was considered positive and a titer of <1/40 negative. An anti-liver kidney microsomal antibody titer of <3 U/ml was considered negative, 3-5 U/ml borderline, and >5 U/ml positive. Results 106 children with T1D were examined over a one-year period; ages ranged from 8 months to 15.5 years, and 62 patients were female. The autoantibody screen revealed one girl with positive anti-liver kidney microsomal antibody (1%) and 8 children with positive anti-nuclear antibody (7.5%), without clinical, biochemical or radiologic evidence of liver disease. None of the patients had positive smooth muscle antibody. In conclusion, anti-liver kidney microsomal antibody is rarely found in the sera of children with T1D; its clinical significance is unknown. PMID:24636465
NASA Technical Reports Server (NTRS)
1971-01-01
Two alternative technical approaches were studied for application of an electrochemical process using a solid oxide electrolyte (zirconia stabilized by yttria or scandia) to oxygen reclamation from carbon dioxide and water, for spacecraft life support systems. Among the topics considered are the advisability of proceeding to engineering prototype development and fabrication of a full scale model for the system concept, the optimum choice of method or approach to be carried into prototype development, and the technical problem areas which exist.
Parasitic modes removal out of operating mode neighbourhood in the DAW accelerating structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreev, V.G.; Belugin, V.M.; Esin, S.K.
1983-08-01
The disk and washer (DAW) accelerating structure finds use in a number of new projects (PIGMI, SNQ, etc.). It forms the main part of the accelerating structure of the meson factory now under construction at the Institute for Nuclear Research (INR), Moscow. It is known that parasitic modes with azimuthal field variations exist in the operating mode region. In this report, different methods of shifting the parasitic mode frequencies are considered. The main attention is given to resonant methods, which are the most efficient.
Reflected stochastic differential equation models for constrained animal movement
Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.
2017-01-01
Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumatopias jubatus) in southeast Alaska.
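A minimal one-dimensional sketch of simulating a reflected stochastic differential equation, using Euler-Maruyama with each proposed step folded back across the barrier it crosses (the paper works with a latent unconstrained path and richer telemetry models; this toy scheme only illustrates the reflection idea):

```python
import numpy as np

def simulate_reflected_path(x0, mu, sigma, dt, n_steps, lo, hi, rng):
    """Euler-Maruyama for dX = mu dt + sigma dW on [lo, hi], with the
    proposal reflected (folded) back into the interval at each step --
    a simple numerical scheme for a reflected SDE."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        step = x[k] + mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        # Fold the proposal across whichever barrier it crossed.
        while step < lo or step > hi:
            if step < lo:
                step = 2 * lo - step
            else:
                step = 2 * hi - step
        x[k + 1] = step
    return x

rng = np.random.default_rng(42)
# Drift pushes the path toward the upper barrier; reflection keeps it inside.
path = simulate_reflected_path(x0=0.5, mu=0.8, sigma=0.5, dt=0.01,
                               n_steps=2000, lo=0.0, hi=1.0, rng=rng)
print(path.min(), path.max())
```

In the movement-ecology setting, `lo` and `hi` would be replaced by a spatial barrier such as a shoreline, and the reflection by a projection onto the constraint set.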
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCombes, Lucy, E-mail: l.mccombes@leedsbeckett.ac.uk; Vanclay, Frank, E-mail: frank.vanclay@rug.nl; Evers, Yvette, E-mail: y.evers@tft-earth.org
The discourse on the social impacts of tourism needs to shift from the current descriptive critique of tourism to considering what can be done in actual practice to embed the management of tourism's social impacts into the existing planning, product development and operational processes of tourism businesses. A pragmatic approach for designing research methodologies, social management systems and initial actions, shaped by real-world operational constraints and the existing systems used in the tourism industry, is needed. Our pilot study with a small Bulgarian travel company put social impact assessment (SIA) to the test to see if it could provide this desired approach and assist in implementing responsible tourism development practice, especially in small tourism businesses. Our findings showed that our adapted SIA method has value as a practical method for embedding a responsible tourism approach. While there were some challenges, SIA proved effective in helping the staff of our test-case tourism business to better understand their social impacts on their local communities and to identify actions to take. - Highlights: • A pragmatic approach is needed for the responsible management of the social impacts of tourism. • Our adapted social impact assessment (SIA) method has value as a practical method. • SIA can be embedded into tourism businesses' existing ‘ways of doing things’. • We identified challenges and ways to improve our method to better suit the small tourism business context.
Mehl, S.; Hill, M.C.
2002-01-01
Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods and the effect of the accuracy of sensitivity calculations are evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding the convergence failure, calculation error, and disproportionate computational effort encountered with conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety, at about one-third of the convergence time and computational cost of the existing method. This paper provides a scientific and effective design approach for the metallic structures of cranes.
Du, Pufeng; Wang, Lusheng
2014-01-01
One of the fundamental tasks in biology is to identify the functions of all proteins to reveal the primary machinery of a cell. Knowledge of the subcellular locations of proteins provides key hints to their functions and to understanding the intricate pathways that regulate biological processes at the cellular level. Protein subcellular location prediction has been extensively studied over the past two decades, and many methods have been developed based on protein primary sequences as well as on the protein-protein interaction network. In this paper, we propose to use the protein-protein interaction network as an infrastructure to integrate existing sequence-based predictors. When predicting the subcellular locations of a given protein, not only the protein itself but also all its interacting partners are considered. Unlike existing methods, our method requires neither comprehensive knowledge of the protein-protein interaction network nor experimentally annotated subcellular locations for most proteins in the network. Besides, our method can be used as a framework to integrate multiple predictors. Our method achieved an absolute-true rate of 56% on the human proteome, which is higher than that of the state-of-the-art methods. PMID:24466278
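A toy sketch of the integration idea described above: combine a sequence-based predictor's location scores for a protein with the average scores of its interaction partners. The weighting scheme and data below are hypothetical stand-ins for the paper's actual integration method:

```python
from collections import defaultdict

def integrate_neighbor_scores(seq_scores, edges, alpha=0.6):
    """Combine each protein's own sequence-based location scores with the
    mean scores of its interaction partners, then pick the best location.
    seq_scores: {protein: {location: score}}; edges: (protein, protein) pairs.
    alpha weighs the protein's own prediction against its neighbors'."""
    nbrs = defaultdict(set)
    for p, q in edges:
        nbrs[p].add(q)
        nbrs[q].add(p)
    combined = {}
    for p, own in seq_scores.items():
        locs = set(own)
        for q in nbrs[p]:
            locs |= set(seq_scores.get(q, {}))
        scores = {}
        for loc in locs:
            nbr_vals = [seq_scores[q].get(loc, 0.0)
                        for q in nbrs[p] if q in seq_scores]
            nbr_mean = sum(nbr_vals) / len(nbr_vals) if nbr_vals else 0.0
            scores[loc] = alpha * own.get(loc, 0.0) + (1 - alpha) * nbr_mean
        combined[p] = max(scores, key=scores.get)
    return combined

# Hypothetical predictor output for three interacting proteins.
seq_scores = {
    "P1": {"nucleus": 0.55, "cytoplasm": 0.45},
    "P2": {"nucleus": 0.9},
    "P3": {"nucleus": 0.8, "membrane": 0.2},
}
result = integrate_neighbor_scores(seq_scores, [("P1", "P2"), ("P1", "P3")])
print(result)
```

Note how P1's own weak preference for the nucleus is reinforced by its two nuclear neighbors, which is the effect the network-based integration is designed to capture.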
A Hyper-Heuristic Ensemble Method for Static Job-Shop Scheduling.
Hart, Emma; Sim, Kevin
2016-01-01
We describe a new hyper-heuristic method NELLI-GP for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.
Analytical Method to Evaluate Failure Potential During High-Risk Component Development
NASA Technical Reports Server (NTRS)
Tumer, Irem Y.; Stone, Robert B.; Clancy, Daniel (Technical Monitor)
2001-01-01
Communicating failure mode information during design and manufacturing is a crucial task for failure prevention. Most processes use Failure Modes and Effects types of analyses, as well as prior knowledge and experience, to determine the potential modes of failures a product might encounter during its lifetime. When new products are being considered and designed, this knowledge and information is expanded upon to help designers extrapolate based on their similarity with existing products and the potential design tradeoffs. This paper makes use of similarities and tradeoffs that exist between different failure modes based on the functionality of each component/product. In this light, a function-failure method is developed to help the design of new products with solutions for functions that eliminate or reduce the potential of a failure mode. The method is applied to a simplified rotating machinery example in this paper, and is proposed as a means to account for helicopter failure modes during design and production, addressing stringent safety and performance requirements for NASA applications.
Reevaluation of air surveillance station siting
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbott, K.; Jannik, T.
2016-07-06
DOE Technical Standard HDBK-1216-2015 (DOE 2015) recommends evaluating air-monitoring station placement using the analytical method developed by Waite. The technique utilizes wind rose and population distribution data to determine a weighting factor for each directional sector surrounding a nuclear facility. Based on the available resources (number of stations) and a scaling factor, this weighting factor is used to determine the number of stations recommended to be placed in each sector considered. An assessment utilizing this method was performed in 2003 to evaluate the effectiveness of the existing SRS air-monitoring program. The resulting recommended distribution of air-monitoring stations was then compared to that of the existing site perimeter surveillance program. The assessment demonstrated that the distribution of air-monitoring stations at the time generally agreed with the results obtained using the Waite method; however, at the time new stations were established in Barnwell and in Williston in order to meet requirements of DOE guidance document EH-0173T.
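The sector-weighting allocation described above can be sketched as follows. The weighting formula (wind frequency × sector population) and all numbers are illustrative assumptions, not SRS data or the handbook's exact prescription:

```python
# Illustrative Waite-style allocation: weight each directional sector by
# wind frequency times downwind population, normalize, and scale by the
# station budget. All values are made up for the example.
wind_freq = {"N": 0.10, "NE": 0.05, "E": 0.20, "SE": 0.15,
             "S": 0.25, "SW": 0.10, "W": 0.10, "NW": 0.05}
population = {"N": 1200, "NE": 300, "E": 5000, "SE": 800,
              "S": 9000, "SW": 400, "W": 2500, "NW": 100}
n_stations = 12  # available resources

raw = {sec: wind_freq[sec] * population[sec] for sec in wind_freq}
total = sum(raw.values())
stations = {sec: round(n_stations * raw[sec] / total) for sec in raw}
```

Rounding each sector independently means the allocated total can drift slightly from the budget; a real implementation would redistribute the remainder.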
Ortho Image and DTM Generation with Intelligent Methods
NASA Astrophysics Data System (ADS)
Bagheri, H.; Sadeghian, S.
2013-10-01
Nowadays, artificial intelligence algorithms are being considered in GIS and remote sensing. Genetic algorithms and artificial neural networks are two intelligent methods used to optimize image processing tasks such as edge extraction; these algorithms are very useful for solving complex problems. In this paper, the ability and application of genetic algorithms and artificial neural networks in geospatial production processes, such as geometric modelling of satellite images for ortho photo generation and height interpolation in raster Digital Terrain Model production, are discussed. First, the geometric potential of Ikonos-2 and Worldview-2 with rational functions and 2D & 3D polynomials was tested. Comprehensive experiments were also carried out to evaluate the viability of the genetic algorithm for the optimization of rational functions and 2D & 3D polynomials. Considering the quality of the Ground Control Points, the accuracy (RMSE) with the genetic algorithm and the 3D polynomial method for the Ikonos-2 Geo image was 0.508 pixels, and the accuracy (RMSE) with the genetic algorithm and the rational function method for the Worldview-2 image was 0.930 pixels. As a further artificial intelligence optimization method, neural networks were used: with a perceptron network on the Worldview-2 image, a result of 0.84 pixels was obtained using 4 neurons in the middle layer. The conclusion was that artificial intelligence algorithms make it possible to optimize the existing models and obtain better results than the usual ones. Finally, the artificial intelligence methods, genetic algorithms as well as neural networks, were examined on sample data for optimizing interpolation and for generating Digital Terrain Models.
The results were then compared with existing conventional methods, and it appeared that these methods have a high capacity for height interpolation, and that using these networks for interpolating and for optimizing inverse-distance weighting methods leads to highly accurate estimation of heights.
Analysis of Classes of Superlinear Semipositone Problems with Nonlinear Boundary Conditions
NASA Astrophysics Data System (ADS)
Morris, Quinn A.
We study positive radial solutions for classes of steady state reaction diffusion problems on the exterior of a ball with both Dirichlet and nonlinear boundary conditions. We consider p-Laplacian problems (p > 1) with reaction terms which are superlinear at infinity and semipositone. In the case p = 2, using variational methods, we establish the existence of a solution, and via detailed analysis of the Green's function, we prove the positivity of the solution. In the case p ≠ 2, we again use variational methods to establish the existence of a solution, but the positivity of the solution is achieved via sophisticated a priori estimates. In the case p ≠ 2, the Green's function analysis is no longer available. Our results significantly enhance the literature on superlinear semipositone problems. Finally, we provide algorithms for the numerical generation of exact bifurcation curves for one-dimensional problems. In the autonomous case, we extend and analyze a quadrature method, and using nonlinear solvers in Mathematica, generate bifurcation curves. In the nonautonomous case, we employ shooting methods in Mathematica to generate bifurcation curves.
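For the one-dimensional autonomous problem in the case p = 2 with Dirichlet conditions, the quadrature method mentioned above is typically built on the classical time-map identity (our summary of the standard Laetsch-type result, not the paper's exact statement): for positive solutions of -u'' = λ f(u), u(0) = u(1) = 0, with maximum ρ = u(1/2),

```latex
\sqrt{\lambda(\rho)} \;=\; \sqrt{2}\int_0^{\rho}\frac{du}{\sqrt{F(\rho)-F(u)}},
\qquad F(s) \;=\; \int_0^{s} f(t)\,dt ,
```

so the bifurcation curve λ(ρ) can be generated numerically by evaluating this integral over a grid of ρ values.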
Yu, Liyang; Han, Qi; Niu, Xiamu; Yiu, S M; Fang, Junbin; Zhang, Ye
2016-02-01
Most existing image modification detection methods based on DCT coefficient analysis model the distribution of DCT coefficients as a mixture of a modified and an unchanged component. To separate the two components, two parameters have to be estimated: the primary quantization step, Q1, and the portion of the modified region, α; more accurate estimates of α and Q1 lead to better detection and localization results. Existing methods estimate α and Q1 in a completely blind manner, without considering the characteristics of the mixture model or the constraints to which α should conform. In this paper, we propose a more effective scheme for estimating α and Q1, based on the observations that the curves on the surface of the likelihood function corresponding to the mixture model are largely smooth, and that α can take values only in a discrete set. We conduct extensive experiments to evaluate the proposed method, and the experimental results confirm its efficacy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Improving Medical Device Regulation: The United States and Europe in Perspective
SORENSON, CORINNA; DRUMMOND, MICHAEL
2014-01-01
Context: Recent debates and events have brought into question the effectiveness of existing regulatory frameworks for medical devices in the United States and Europe in ensuring their performance, safety, and quality. This article provides a comparative analysis of medical device regulation in the two jurisdictions, explores current reforms to improve the existing systems, and discusses additional actions that should be considered to fully meet this aim. Medical device regulation must be improved to safeguard public health and ensure that high-quality and effective technologies reach patients. Methods: We explored and analyzed medical device regulatory systems in the United States and Europe in accordance with the available gray and peer-reviewed literature and legislative documents. Findings: The two regulatory systems differ in their mandate and orientation, organization, pre- and postmarket evidence requirements, and transparency of process. Despite these differences, both jurisdictions face similar challenges in ensuring that only safe and effective devices reach the market, monitoring real-world use, and exchanging pertinent information on devices with key users such as clinicians and patients. To address these issues, reforms have recently been introduced or debated in the United States and Europe that are principally focused on strengthening regulatory processes, enhancing postmarket regulation through more robust surveillance systems, and improving the traceability and monitoring of devices. Some changes in premarket requirements for devices are being considered. Conclusions: Although the current reforms address some of the outstanding challenges in device regulation, additional steps are needed to improve existing policy.
We examine a number of actions to be considered, such as requiring high-quality evidence of benefit for medium- and high-risk devices; moving toward greater centralization and coordination of regulatory approval in Europe; creating links between device identifier systems and existing data collection tools, such as electronic health records; and fostering increased and more effective use of registries to ensure safe postmarket use of new and existing devices. PMID:24597558
Paths of Improving the Technological Process of Manufacture of GTE Turbine Blades
NASA Astrophysics Data System (ADS)
Vdovin, R. A.; Smelov, V. G.; Bolotov, M. A.; Pronichev, N. D.
2016-08-01
The article analyzes problems in the manufacture of turbine blades for gas-turbine engines and power stations, and proposes ways of improving the blade manufacturing process. The main systems for locating blades during machining, and the inspection methods for machined blades currently used at manufacturing enterprises, are analyzed, with their merits and drawbacks indicated. Criteria are developed in the form of mathematical models of the spatial distribution of the machining allowance that account for a uniform distribution of the allowance over the airfoil profile. The considered methods make it possible to reduce the reject rate and the labor required for polishing the airfoil section of turbine blades.
NASA Technical Reports Server (NTRS)
Riley, Donald R.
2016-01-01
Numerical values for some aerodynamic terms and stability derivatives for several different wings in unseparated inviscid incompressible flow were calculated using a discrete vortex method involving a limited number of horseshoe vortices. Both longitudinal and lateral-directional derivatives were calculated for steady conditions as well as for sinusoidal oscillatory motions. Variables included the number of vortices used and the chordwise location of the rotation axis/moment center. Frequencies considered were limited to the range of interest for vehicle dynamic stability (kb < 0.24). Some calculated numerical results were in reasonable agreement with experimental wind-tunnel measurements in the low angle-of-attack range, considering the differences between the mathematical representation and the experimental wind-tunnel models tested. Of particular interest was the presence of induced drag for the oscillatory condition.
Determination of power and moment on shaft of special asynchronous electric drives
NASA Astrophysics Data System (ADS)
Karandey, V. Yu; Popov, B. K.; Popova, O. B.; Afanasyev, V. L.
2018-03-01
In this article, the determination of the power and torque on the shaft of special asynchronous electric drives is considered. The use of special asynchronous electric drives in mechanical engineering and other industries is relevant: the considered types of electric drives possess improved mass and dimensional characteristics in comparison with single-motor systems, and their constructive advantages and improved characteristics make it possible to realize the technological process. However, the creation and design of new electric drives demand the adjustment of existing methods, or the development of new methods and approaches, for calculating their parameters. Determining the power and torque on the shaft of special asynchronous electric drives is the main objective during the design of such drives. This task has been solved based on a method of electromechanical energy conversion.
Estimation of the behavior factor of existing RC-MRF buildings
NASA Astrophysics Data System (ADS)
Vona, Marco; Mastroberti, Monica
2018-01-01
In recent years, several research groups have studied a new generation of analysis methods for seismic response assessment of existing buildings. Nevertheless, many important developments are still needed in order to define more reliable and effective assessment procedures. Moreover, regarding existing buildings, it should be highlighted that, due to the low knowledge level, linear elastic analysis is the only analysis method allowed. The codes themselves (such as NTC2008 and EC8) consider linear dynamic analysis with a behavior factor as the reference method for the evaluation of seismic demand. This type of analysis is based on a linear-elastic structural model subject to a design spectrum, obtained by reducing the elastic spectrum through a behavior factor. The behavior factor (reduction factor, or q factor in some codes) is used to reduce the elastic spectrum ordinate, or the forces obtained from a linear analysis, in order to take into account the nonlinear structural capacities. Behavior factors should be defined based on the several parameters that influence the seismic nonlinear capacity, such as the mechanical characteristics of materials, the structural system, irregularity, and design procedures. In practical applications, there is still an evident lack of detailed rules and accurate behavior factor values adequate for existing buildings. In this work, investigations of the seismic capacity of the main existing RC-MRF building types have been carried out. In order to make a correct evaluation of the seismic force demand, actual behavior factor values coherent with a force-based seismic safety assessment procedure have been proposed and compared with the values reported in the Italian seismic code, NTC08.
On A Nonlinear Generalization of Sparse Coding and Dictionary Learning.
Xie, Yuchen; Ho, Jeffrey; Vemuri, Baba
2013-01-01
Existing dictionary learning algorithms are based on the assumption that the data are vectors in a Euclidean vector space ℝd, and the dictionary is learned from the training data using the vector space structure of ℝd and its Euclidean L2-metric. However, in many applications, features and data often originate from a Riemannian manifold that does not support a global linear (vector space) structure. Furthermore, the extrinsic viewpoint of existing dictionary learning algorithms becomes inappropriate for modeling and incorporating the intrinsic geometry of the manifold, which is potentially important and critical to the application. This paper proposes a novel framework for sparse coding and dictionary learning for data on a Riemannian manifold, and it shows that the existing sparse coding and dictionary learning methods can be considered as special (Euclidean) cases of the more general framework proposed here. We show that both the dictionary and sparse coding can be effectively computed for several important classes of Riemannian manifolds, and we validate the proposed method using two well-known classification problems in computer vision and medical imaging analysis.
NASA Astrophysics Data System (ADS)
Ditscherlein, L.; Peuker, U. A.
2017-04-01
For the application of colloidal probe atomic force microscopy at high temperatures (>500 K), stable colloidal probe cantilevers are essential. In this study, two new methods for gluing alumina particles onto temperature-stable cantilevers are presented and compared with an existing method for borosilicate particles at elevated temperatures, as well as with cp-cantilevers prepared with epoxy resin at room temperature. The durability of the particle attachment is quantified with a test method applying high shear forces; the force is calculated with a mechanical model considering both the bending and the torsion of the colloidal probe.
ADM For Solving Linear Second-Order Fredholm Integro-Differential Equations
NASA Astrophysics Data System (ADS)
Karim, Mohd F.; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini; Khalid, Kamil
2018-04-01
In this paper, we apply the Adomian Decomposition Method (ADM) to numerically analyse linear second-order Fredholm integro-differential equations. The approximate solutions of the problems are calculated with the Maple package. Some numerical examples are considered to illustrate the ADM for solving this type of equation, and the results are compared with the existing exact solutions. The Adomian decomposition method thus proves a good alternative for solving linear second-order Fredholm integro-differential equations: it converges to the exact solution quickly and at the same time reduces the computational work.
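A minimal ADM sketch for a linear second-order Fredholm integro-differential equation, using a toy problem of our own construction with known exact solution u(x) = x (computed symbolically with SymPy rather than Maple):

```python
import sympy as sp

# Toy problem (our own, not one of the paper's examples):
#   u''(x) = f(x) + \int_0^1 x*s*u(s) ds,   u(0) = 0, u'(0) = 1,
# with f chosen so that the exact solution is u(x) = x.
x, s = sp.symbols('x s')
f = -x / sp.Integer(3)
K = x * s

def L_inv(g):
    """Inverse of d^2/dx^2 with zero initial conditions (double integral)."""
    t = sp.Dummy('t')
    return sp.integrate(sp.integrate(g, (x, 0, t)), (t, 0, x))

# u0 carries the initial conditions and the source term; each subsequent
# Adomian component feeds the previous one through the Fredholm operator.
terms = [x + L_inv(f)]
for _ in range(5):
    fredholm_part = sp.integrate(K * terms[-1].subs(x, s), (s, 0, 1))
    terms.append(sp.expand(L_inv(fredholm_part)))

approx = sp.expand(sum(terms))
err = abs(float((approx - x).subs(x, 1)))   # compare with exact u(x) = x
```

For this kernel each Adomian component shrinks by roughly a constant factor, so a handful of terms already reproduces the exact solution to high accuracy, which mirrors the fast convergence claimed in the abstract.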
System and algorithm for evaluation of human auditory analyzer state
NASA Astrophysics Data System (ADS)
Bachynskiy, Mykhaylo V.; Azarkhov, Oleksandr Yu.; Shtofel, Dmytro Kh.; Horbatiuk, Svitlana M.; Ławicki, Tomasz; Kalizhanova, Aliya; Smailova, Saule; Askarova, Nursanat
2017-08-01
The paper discusses the evaluation of the state of the human auditory analyzer with technical means and considers the disadvantages of existing clinical audiometry methods and systems. A method is proposed for evaluating the state of the auditory analyzer by means of pulsometry, making the medical study more objective and efficient. It uses two optoelectronic sensors located on the carotid artery and the ear lobe. Using this method, a biotechnical system for evaluation and stimulation of the state of the human auditory analyzer was developed, and its hardware and software were substantiated. Different simulation modes of the designed system were tested, and the influence of the procedure on the patient was studied.
Wireless Monitoring of Automobile Tires for Intelligent Tires
Matsuzaki, Ryosuke; Todoroki, Akira
2008-01-01
This review discusses key technologies of intelligent tires, focusing on sensors and wireless data transmission. Intelligent automobile tires, which monitor their pressure, deformation, wheel loading, friction, or tread wear, are expected to improve the reliability of tires and tire control systems. However, in installing sensors in a tire, many problems have to be considered, such as compatibility of the sensors with tire rubber, wireless transmission, and battery installation. As regards sensing, this review discusses indirect methods using existing sensors, such as wheel-speed sensors, and direct methods, such as surface acoustic wave sensors and piezoelectric sensors. For wireless transmission, passive wireless methods and energy harvesting are also discussed. PMID:27873979
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armour, E.A.G.
1982-06-07
It has been known since the work of Aronson, Kleinman and Spruch, and Armour that, if the proton is considered to be infinitely massive, no bound state of a system made up of a positron and a hydrogen atom can exist. In this Letter a new method is introduced for taking into account finite nuclear mass. With use of this method it is shown that the inclusion of the finite mass of the proton does not result in the appearance of a bound state. This is the first time that this result has been established.
Legarra, Andres; Christensen, Ole F; Vitezica, Zulma G; Aguilar, Ignacio; Misztal, Ignacy
2015-06-01
Recent use of genomic (marker-based) relationships shows that relationships exist within and across base populations (breeds or lines). However, the current treatment of pedigree relationships is unable to consider relationships within or across base populations, although such relationships must exist due to the finite size of the ancestral population and connections between populations. This complicates the conciliation of both approaches, in particular combining pedigree with genomic relationships. We present a coherent theoretical framework to consider base populations in pedigree relationships. We suggest a conceptual framework that considers each ancestral population as a finite-sized pool of gametes. This generates across-individual relationships and contrasts with the classical view, in which each population is considered as an infinite, unrelated pool. Several ancestral populations may be connected and therefore related. Each ancestral population can be represented as a "metafounder," a pseudo-individual included as a founder of the pedigree, similar to an "unknown parent group." Metafounders have self- and across relationships according to a set of parameters which measure ancestral relationships, i.e., homozygosities within populations and relationships across populations. These parameters can be estimated from existing pedigree and marker genotypes using maximum likelihood or a method based on summary statistics, for arbitrarily complex pedigrees. Equivalences of genetic variance and variance components between the classical and this new parameterization are shown. Segregation variance in crosses of populations is modeled. Efficient algorithms for computation of relationship matrices, their inverses, and inbreeding coefficients are presented. The use of metafounders leads to compatibility of genomic and pedigree relationship matrices and to simple computing algorithms. Examples and code are given. Copyright © 2015 by the Genetics Society of America.
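The tabular pedigree-relationship computation with a metafounder can be sketched as follows. The pedigree, the value of the ancestral parameter γ, and the single-metafounder simplification are illustrative assumptions, not the paper's general multi-population machinery:

```python
import numpy as np

# Tabular method for the pedigree relationship matrix with a single
# "metafounder" (individual 0) whose self-relationship is gamma.
# Pedigree and gamma are made up for illustration.
gamma = 0.2
# (sire, dam) per individual; unknown parents point to the metafounder 0
ped = {1: (0, 0), 2: (0, 0), 3: (1, 2), 4: (1, 3)}

n = len(ped) + 1              # +1 for the metafounder itself
A = np.zeros((n, n))
A[0, 0] = gamma               # metafounder self-relationship
for j in sorted(ped):         # parents must precede offspring
    sire, dam = ped[j]
    for i in range(j):
        A[i, j] = A[j, i] = 0.5 * (A[i, sire] + A[i, dam])
    A[j, j] = 1.0 + 0.5 * A[sire, dam]
```

Note the effect the abstract describes: with γ > 0, even "unrelated" founders 1 and 2 acquire a nonzero relationship (here 0.2) through the shared finite ancestral gamete pool, instead of the classical zero.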
Impact of New Water Sources on the Overall Water Network: An Optimisation Approach
Jones, Brian C.; Hove-Musekwa, Senelani D.
2014-01-01
A mathematical programming problem is formulated for a water network with new water sources included. Salinity and water hardness are considered in the model, which is then solved using the Max-Min Ant System (MMAS) to assess the impact of new water sources on the total cost of the existing network. It is efficient to include new water sources if the distances to them are short or if there is a high penalty associated with failure to meet demand. Desalination unit costs also significantly affect the decision whether to install new water sources into the existing network, while softening costs are generally negligible in making such decisions. Experimental results show that, in the example considered, it is efficient to reduce the number of desalination plants and retain one central plant. The Max-Min Ant System algorithm appears to be an effective method, as shown by its lower computational time compared to the commercial solver Cplex. PMID:27382617
NASA Astrophysics Data System (ADS)
Wu, Linqin; Xu, Sheng; Jiang, Dezhi
2015-12-01
Industrial wireless networked control systems are widely used, so evaluating the performance of the wireless network is of great significance. In this paper, considering the shortcomings of existing performance evaluation methods, a comprehensive network performance evaluation method, the multi-index fuzzy analytic hierarchy process (MFAHP), which combines fuzzy mathematics with the traditional analytic hierarchy process (AHP), is presented. The method overcomes evaluations that would otherwise be incomplete or subjective. Experiments show that the method reflects the real network performance. It provides direct guidance for protocol selection, network cabling, and node placement, and can meet the requirements of different occasions by modifying the underlying parameters.
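The classical AHP weighting step that MFAHP builds on can be sketched as follows. The index names and comparison matrix are hypothetical, and the fuzzy-mathematics extension from the paper is not shown:

```python
import numpy as np

# Classical AHP: derive priority weights for network indexes from a
# pairwise comparison matrix via its principal eigenvector, then check
# consistency. Comparisons below (delay vs. loss vs. throughput) are
# invented for illustration.
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(M)
k = np.argmax(eigvals.real)          # principal (Perron) eigenvalue
w = eigvecs[:, k].real
w = w / w.sum()                      # priority weights, normalized to 1

# Saaty consistency check (random index RI = 0.58 for a 3x3 matrix)
n = M.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58                       # acceptable when below 0.1
```

In an MFAHP-style method these crisp weights would then be combined with fuzzy membership grades of the measured indexes to produce the overall network score.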
Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems
NASA Astrophysics Data System (ADS)
Razzak, M. A.; Alam, M. Z.; Sharif, M. N.
2018-03-01
In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered in order to avoid complexity. The formulation and the determination of the solution are easy and straightforward. The classical multiple scales (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly damped forced vibration systems; the main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and improve on other existing results. For weak nonlinearities with a weak damping effect, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the relative error given by the MSLP method is a surprising 28.81%. Furthermore, for strong nonlinearities with a strong damping effect, the absolute relative error found in this article is only 0.02%, whereas that obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.
NASA Astrophysics Data System (ADS)
Yaşar, Elif; Yıldırım, Yakup; Yaşar, Emrullah
2018-06-01
This paper is devoted to the conformable fractional space-time perturbed Gerdjikov-Ivanov (GI) equation, which appears in nonlinear fiber optics and photonic crystal fibers (PCF). We consider the model with full nonlinearity in order to give it a generalized flavor. The sine-Gordon equation approach is applied to the model equation to retrieve dark, bright, dark-bright, singular, and combined singular optical solitons. Constraint conditions guaranteeing the existence of these solitons are also reported. We also present graphical simulations of the solutions for a better understanding of the physical phenomena behind the considered model.
Yee, Kwang Chien; Bettiol, Silvana; Nash, Rosie; Macintyrne, Kate; Wong, Ming Chao; Nøhr, Christian
2018-01-01
Advances in medicine have improved health and healthcare for many around the world. The challenge is achieving the best health outcomes via healthcare delivery to every individual. Healthcare inequalities exist within a country and between countries. Health information technology (HIT) has provided a means to deliver equal access to healthcare services regardless of social context and physical location. In order to achieve better health outcomes for every individual, socio-cultural factors such as literacy and social context need to be considered. This paper argues that HIT, while it reduces healthcare inequality by providing access, might worsen healthcare inequity. To improve healthcare inequity using HIT, this paper argues that we need to consider patients and context, and hence the concept of context-driven care. To improve healthcare inequity, we need to conceptually consider the patient's view and methodologically consider design methods that achieve participatory outcomes.
Existence of solution for a general fractional advection-dispersion equation
NASA Astrophysics Data System (ADS)
Torres Ledesma, César E.
2018-05-01
In this work, we consider the existence of a solution to the following fractional advection-dispersion equation: -(d/dt)( p·_{-∞}I_t^β(u'(t)) + q·_tI_∞^β(u'(t)) ) + b(t)u = f(t, u(t)), t ∈ ℝ, where β ∈ (0,1), _{-∞}I_t^β and _tI_∞^β denote the left and right Liouville-Weyl fractional integrals of order β respectively, 0
Horsetail matching: a flexible approach to optimization under uncertainty
NASA Astrophysics Data System (ADS)
Cook, L. W.; Jarrett, J. P.
2018-04-01
It is important to design engineering systems to be robust with respect to uncertainties in the design process. Often, this is done by considering statistical moments, but over-reliance on statistical moments when formulating a robust optimization can produce designs that are stochastically dominated by other feasible designs. This article instead proposes a formulation for optimization under uncertainty that minimizes the difference between a design's cumulative distribution function and a target. A standard target is proposed that produces stochastically non-dominated designs, but the formulation also offers enough flexibility to recover existing approaches for robust optimization. A numerical implementation is developed that employs kernels to give a differentiable objective function. The method is applied to algebraic test problems and a robust transonic airfoil design problem where it is compared to multi-objective, weighted-sum and density matching approaches to robust optimization; several advantages over these existing methods are demonstrated.
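A sketch of the core objective, as we read the abstract: minimize a norm of the difference between a kernel-smoothed empirical CDF of the quantity of interest and a target CDF. The sigmoid kernel, bandwidth, target, and sample distributions below are our own choices, not the paper's formulation:

```python
import numpy as np

# Horsetail-matching-style objective sketch: L2 distance between a
# smoothed empirical CDF and a target CDF. Kernel smoothing keeps the
# objective differentiable with respect to the samples.
def smooth_cdf(q_samples, t_grid, h=0.05):
    """CDF estimate as the mean of sigmoid kernels centered at samples."""
    z = (t_grid[:, None] - q_samples[None, :]) / h
    return (1.0 / (1.0 + np.exp(-z))).mean(axis=1)

def horsetail_metric(q_samples, target_cdf, t_grid):
    diff = smooth_cdf(q_samples, t_grid) - target_cdf(t_grid)
    dt = t_grid[1] - t_grid[0]
    return np.sqrt(np.sum(diff ** 2) * dt)       # discrete L2 norm

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 201)
target = lambda x: np.clip(x / 1.2, 0.0, 1.0)    # hypothetical target CDF
d = horsetail_metric(rng.normal(1.0, 0.1, 2000), target, t)
```

A design whose output distribution is closer to the target (here, uniform on [0, 1.2]) yields a smaller metric, which is what an outer optimizer would exploit.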
Structural Optimization of a Knuckle with Consideration of Stiffness and Durability Requirements
Kim, Geun-Yeon
2014-01-01
The automobile's knuckle is connected to parts of the steering and suspension systems and, through its attachment to the wheel, adjusts the direction of rotation. This study changes the existing material, GCD450, to Al6082M and recommends a lightweight design of the knuckle, obtained with an optimal design technique, for installation in small cars. Six shape design variables were selected for the optimization of the knuckle, and criteria relevant to stiffness and durability were considered as design requirements during the optimization process. A metamodel-based optimization method using kriging interpolation was applied. The result shows that all constraints on stiffness and durability are satisfied using Al6082M, while the weight of the knuckle is reduced by 60% compared to that of the existing GCD450 design. PMID:24995359
Cheng, Ching-Min; Hwang, Sheue-Ling
2015-03-01
This paper outlines the human error identification (HEI) techniques that currently exist to assess latent human errors. Many formal error identification techniques have existed for years, but few have been validated to cover latent human error analysis in different domains. This study considers many possible error modes and influential factors, including external error modes, internal error modes, psychological error mechanisms, and performance shaping factors, and integrates several execution procedures and frameworks of HEI techniques. The case study in this research was the operational process of changing chemical cylinders in a factory. In addition, the integrated HEI method was used to assess the operational processes and the system's reliability. It was concluded that the integrated method is a valuable aid to develop much safer operational processes and can be used to predict human error rates on critical tasks in the plant. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Aircraft interior noise reduction by alternate resonance tuning
NASA Technical Reports Server (NTRS)
Bliss, Donald B.; Gottwald, James A.; Srinivasan, Ramakrishna; Gustaveson, Mark B.
1990-01-01
Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies, but are inadequate at lower frequencies, particularly with respect to the low blade-passage harmonics with high forcing levels found in propeller aircraft. A method is being studied in which the aircraft fuselage is lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. Adjacent panels would oscillate at equal amplitude, to give equal source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to become cut off, and therefore non-propagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is currently being investigated both theoretically and experimentally. The new concept has potential application to reducing interior noise due to the propellers in advanced turboprop aircraft as well as in existing aircraft configurations.
NASA Astrophysics Data System (ADS)
Basten, Van; Latief, Yusuf; Berawi, Mohammed Ali; Budiman, Rachmat; Riswanto
2017-03-01
The total value of completed building construction in Indonesia increased by 116% from 2009 to 2011. This was accompanied by an 11% increase in energy consumption in Indonesia over the last three years, with commercial buildings accounting for 70% of electricity demand. In addition, the limited application of the green building concept in Indonesia has contributed to a 25% increase in greenhouse gas (CO2) emissions. The costs of constructing, operating, and maintaining buildings are considered relatively high. The evaluation in this research is used to improve building performance through several green-concept alternatives. The research methodology combines qualitative and quantitative approaches through interviews and a case study. The success of the optimized functions in the existing green building is assessed over the operational and maintenance phase using the Life Cycle Assessment (LCA) method. The result of the optimization is the most efficient and effective building life cycle.
NASA Astrophysics Data System (ADS)
Huang, Di; Duan, Zhisheng
2018-03-01
This paper addresses multi-objective fault detection observer design problems for a hypersonic vehicle. Because parameter variations, modelling errors and disturbances are inevitable in practical situations, system uncertainty is considered in this study. By fully utilising the orthogonal space information of the output matrix, some new insights are proposed for the construction of the Lyapunov matrix. Sufficient conditions are presented for the existence of observers that guarantee fault sensitivity and disturbance robustness in the infinite frequency domain. To further relax the conservativeness, slack matrices are introduced to fully decouple the observer gain from the Lyapunov matrices in the finite frequency range. Iterative linear matrix inequality algorithms are proposed to obtain the solutions. The simulation examples, which include a Monte Carlo campaign, illustrate that the new methods effectively reduce design conservativeness compared with existing methods.
NASA Astrophysics Data System (ADS)
Boubir, Badreddine
2018-06-01
In this paper, we investigate the dynamics of bright optical solitons in nonlinear metamaterials governed by a (2 + 1)-dimensional nonlinear Schrödinger equation. Three types of nonlinearities are considered: Kerr law, power law and parabolic law. We use the solitary wave ansatz method to find these optical soliton solutions. All parametric conditions necessary for their existence are derived.
Investigation of Women with Postmenopausal Uterine Bleeding: Clinical Practice Recommendations
Munro, Malcolm G
2014-01-01
Postmenopausal uterine bleeding is defined as uterine bleeding after permanent cessation of menstruation resulting from loss of ovarian follicular activity. Bleeding can be spontaneous or related to ovarian hormone replacement therapy or to use of selective estrogen receptor modulators (eg, tamoxifen adjuvant therapy for breast carcinoma). Because anovulatory “cycles” with episodes of multimonth amenorrhea frequently precede menopause, no consensus exists regarding the appropriate interval of amenorrhea before an episode of bleeding that allows for the definition of postmenopausal bleeding. The clinician faces the possibility that an underlying malignancy exists, knowing that most often the bleeding comes from a benign source. Formerly, the gold-standard clinical investigation of postmenopausal uterine bleeding was institution-based dilation and curettage, but there now exist office-based methods for the evaluation of women with this complaint. Strategies designed to implement these diagnostic methods must be applied in a balanced way considering the resource utilization issues of overinvestigation and the risk of missing a malignancy with underinvestigation. Consequently, guidelines and recommendations were developed to consider these issues and the diverse spectrum of practitioners who evaluate women with postmenopausal bleeding. The guideline development group determined that, for initial management of spontaneous postmenopausal bleeding, primary assessment may be with either endometrial sampling or transvaginal ultrasonography, allowing patients with an endometrial echo complex thickness of 4 mm or less to be managed expectantly. Guidelines are also provided for patients receiving selective estrogen receptor modulators or hormone replacement therapy, and for an endometrial echo complex with findings consistent with fluid in the endometrial cavity. PMID:24377427
Jung, Ji-Young; Seo, Dong-Yoon; Lee, Jung-Ryun
2018-01-04
A wireless sensor network (WSN) is emerging as an innovative method for gathering information that will significantly improve the reliability and efficiency of infrastructure systems. Broadcast is a common method to disseminate information in WSNs. A variety of counter-based broadcast schemes have been proposed to mitigate broadcast-storm problems using a count threshold value and a random access delay. However, because propagation of the broadcast message is limited, a trade-off exists: redundant retransmissions of the broadcast message are reduced and the energy efficiency of a node is enhanced, but reachability becomes low. It is therefore necessary to study an efficient counter-based broadcast scheme that can dynamically adjust the random access delay and count threshold value to ensure high reachability, few redundant broadcast messages, and low energy consumption. Thus, in this paper, we first measure the additional coverage provided by a node that receives the same broadcast message from two neighbor nodes, in order to achieve high reachability with few redundant retransmissions. Second, we propose a new counter-based broadcast scheme that considers the size of the additional coverage area, the distance between the node and the broadcasting node, the remaining battery of the node, and variations in node density. Finally, we evaluate the performance of the proposed scheme against existing counter-based broadcast schemes. Simulation results show that the proposed scheme outperforms the existing schemes in terms of saved rebroadcasts, reachability, and total energy consumption.
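The geometric core of such a scheme, the additional coverage gained by rebroadcasting, can be sketched with the standard two-circle intersection formula. The decision rule below is a toy: the threshold scaling by coverage and battery is an illustrative assumption, not the paper's actual scheme.

```python
import math

def extra_coverage_ratio(d, r):
    """Fraction of a node's radio disc (radius r) NOT already covered by a
    broadcaster at distance d -- a proxy for the benefit of rebroadcasting."""
    if d >= 2 * r:
        return 1.0
    if d <= 0:
        return 0.0
    # area of the lens-shaped intersection of two equal circles of radius r
    inter = 2 * r * r * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r * r - d * d)
    return 1.0 - inter / (math.pi * r * r)

def should_rebroadcast(duplicates_heard, d, r, battery, base_threshold=3.0):
    """Toy counter-based rule: shrink the count threshold when little new area
    would be covered or the battery is low (weights here are illustrative)."""
    threshold = base_threshold * extra_coverage_ratio(d, r) * battery
    return duplicates_heard < threshold
```

A node far from the broadcaster covers more new area, so it keeps a higher threshold and rebroadcasts more readily; a nearby or energy-depleted node suppresses its retransmission.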
NASA Astrophysics Data System (ADS)
Sanskrityayn, Abhishek; Suk, Heejun; Kumar, Naveen
2017-04-01
In this study, analytical solutions for one-dimensional pollutant transport originating from instantaneous and continuous point sources were developed for groundwater and riverine flow using both the Green's Function Method (GFM) and a pertinent coordinate transformation method. The dispersion coefficient and flow velocity are considered spatially and temporally dependent. The spatial dependence of the velocity is linear and non-homogeneous, and the dispersion coefficient varies as the square of the velocity, while the temporal dependence may be linear, or exponentially or asymptotically decelerating or accelerating. The proposed analytical solutions are derived for three different situations, depending on the variations of the dispersion coefficient and velocity, which represent real physical processes occurring in groundwater and riverine systems. The first case refers to steady solute transport in steady flow, in which the dispersion coefficient and velocity are only spatially dependent. The second case represents transient solute transport in steady flow, in which the dispersion coefficient is spatially and temporally dependent while the velocity is spatially dependent. Finally, the third case describes transient solute transport in unsteady flow, in which both the dispersion coefficient and velocity are spatially and temporally dependent. The paper demonstrates the concentration distribution behavior from a point source in realistically occurring flow domains of hydrological systems, including groundwater and riverine water, in which the dispersivity of the pollutant's mass is affected by the heterogeneity of the medium as well as by other factors such as velocity fluctuations, while the velocity is influenced by the water-table slope and recharge rate. These capabilities make the proposed method applicable to a wider range of hydrological problems than previously existing analytical solutions.
In particular, to the authors' knowledge, no other solution exists for a dispersion coefficient and velocity that vary in both space and time. In this study, existing analytical solutions from widely known previous studies are used as validation tools to verify the proposed analytical solutions, as well as the numerical Two-Dimensional Subsurface Flow, Fate and Transport of Microbes and Chemicals (2DFATMIC) code and the 1D finite difference (FDM) code developed here. All such solutions show a perfect match with the respective proposed solutions.
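A finite difference check of the kind mentioned above can be sketched for the advection-dispersion equation with spatially variable coefficients. The grid, source location, and coefficient values below are illustrative assumptions; the scheme uses a simple non-conservative upwind form adequate for slowly varying velocity.

```python
def fdm_ade_step(c, u, D, dx, dt):
    """One explicit step of  dc/dt = d/dx(D dc/dx) - u dc/dx  on a 1D grid,
    with upwind advection (u > 0) and zero-gradient boundaries."""
    n = len(c)
    new = c[:]
    for i in range(1, n - 1):
        disp = D[i] * (c[i + 1] - 2.0 * c[i] + c[i - 1]) / dx ** 2
        adv = u[i] * (c[i] - c[i - 1]) / dx
        new[i] = c[i] + dt * (disp - adv)
    new[0], new[-1] = new[1], new[-2]
    return new

# Instantaneous point source: a concentration spike at one node, with velocity
# varying linearly in space and D proportional to u**2, as in the cases above.
n, dx, dt = 101, 1.0, 0.2
c = [0.0] * n
c[20] = 1.0 / dx
u = [0.5 + 0.001 * i * dx for i in range(n)]
D = [0.5 * ui ** 2 for ui in u]
for _ in range(200):
    c = fdm_ade_step(c, u, D, dx, dt)
```

After 200 steps (40 time units) the plume centre has advected roughly u*t, about 20 nodes downstream, while spreading by dispersion, which is the qualitative behaviour the analytical solutions describe.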
Lucas, Patricia J; Baird, Janis; Arai, Lisa; Law, Catherine; Roberts, Helen M
2007-01-01
Background The inclusion of qualitative studies in systematic reviews poses methodological challenges. This paper presents worked examples of two methods of data synthesis (textual narrative and thematic), used in relation to one review, with the aim of enabling researchers to consider the strength of different approaches. Methods A systematic review of lay perspectives of infant size and growth was conducted, locating 19 studies (including both qualitative and quantitative). The data extracted from these were synthesised using both a textual narrative and a thematic synthesis. Results The processes of both methods are presented, showing a stepwise progression to the final synthesis. Both methods led us to similar conclusions about lay views toward infant size and growth. Differences between methods lie in the way they dealt with study quality and heterogeneity. Conclusion On the basis of the work reported here, we consider textual narrative and thematic synthesis have strengths and weaknesses in relation to different research questions. Thematic synthesis holds most potential for hypothesis generation, but may obscure heterogeneity and quality appraisal. Textual narrative synthesis is better able to describe the scope of existing research and account for the strength of evidence, but is less good at identifying commonality. PMID:17224044
SAR Image Change Detection Based on Fuzzy Markov Random Field Model
NASA Astrophysics Data System (ADS)
Zhao, J.; Huang, G.; Zhao, Z.
2018-04-01
Most existing SAR image change detection algorithms consider only single-pixel information from the two images and ignore the spatial dependencies between image pixels, so the change detection results are susceptible to image noise and the detection effect is not ideal. A Markov Random Field (MRF) can make full use of the spatial dependence of image pixels and improve detection accuracy. When the difference image is segmented, regions of different categories are highly similar near their junctions, and it is difficult to clearly assign labels to the pixels near these boundaries. In the traditional MRF method, each pixel is given a hard label during each iteration; the process is thus a hard decision and causes a loss of information. This paper applies a combination of fuzzy theory and MRF to the change detection of SAR images. The experimental results show that the proposed method achieves a better detection effect than the traditional MRF method.
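The soft-labelling idea can be illustrated on a toy difference image. The log-ratio operator is a common (though here assumed) choice of difference image for SAR, and the linear membership function stands in for whichever fuzzy membership the paper actually uses.

```python
import math

def log_ratio(img1, img2, eps=1e-6):
    """Log-ratio difference image, a common starting point for SAR change detection."""
    return [[abs(math.log((b + eps) / (a + eps))) for a, b in zip(r1, r2)]
            for r1, r2 in zip(img1, img2)]

def fuzzy_changed(d, lo, hi):
    """Soft 'changed' membership in [0, 1] instead of a hard 0/1 label, so pixels
    near the decision boundary keep partial membership during MRF iterations."""
    if d <= lo:
        return 0.0
    if d >= hi:
        return 1.0
    return (d - lo) / (hi - lo)

t1 = [[10.0, 10.0], [10.0, 10.0]]
t2 = [[10.0, 40.0], [10.0, 10.0]]   # one pixel brightened between acquisitions
diff = log_ratio(t1, t2)
memberships = [[fuzzy_changed(d, 0.2, 1.0) for d in row] for row in diff]
```

Pixels with intermediate difference values receive fractional memberships rather than a hard label, preserving the boundary information that a hard-decision MRF iteration would discard.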
NASA Astrophysics Data System (ADS)
Parkin, G.; O'Donnell, G.; Ewen, J.; Bathurst, J. C.; O'Connell, P. E.; Lavabre, J.
1996-02-01
Validation methods commonly used to test catchment models are not capable of demonstrating a model's fitness for making predictions for catchments where the catchment response is not known (including hypothetical catchments, and future conditions of existing catchments which are subject to land-use or climate change). This paper describes the first use of a new method of validation (Ewen and Parkin, 1996. J. Hydrol., 175: 583-594) designed to address these types of application; the method involves making 'blind' predictions of selected hydrological responses which are considered important for a particular application. SHETRAN (a physically based, distributed catchment modelling system) is tested on a small Mediterranean catchment. The test involves quantification of the uncertainty in four predicted features of the catchment response (continuous hydrograph, peak discharge rates, monthly runoff, and total runoff), and comparison of observations with the predicted ranges for these features. The results of this test are considered encouraging.
Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.
2017-01-01
Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed considering both acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better when compared with static algorithms such as least square method, algebraic reconstruction technique and standard Tikhonov regularization algorithms. An effective method is provided for temperature field reconstruction by acoustic tomography. PMID:28895930
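One of the static baselines named above, standard Tikhonov regularization, can be sketched on a toy travel-time system. The 3x3 path matrix and the mapping from path integrals to pixel values are assumptions made purely for illustration.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Standard Tikhonov-regularised least squares: argmin ||Ax - b||^2 + lam*||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy travel-time system: each row of A weights the pixels crossed by one
# acoustic path, x holds per-pixel values (which map to temperature), and b
# holds the measured travel times.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true
x_hat = tikhonov(A, b, 1e-8)
```

With noise-free data and a tiny regularization weight the reconstruction recovers the field exactly; the dynamic algorithm in the paper augments this data-fit term with temperature-evolution information, which static formulations like this one lack.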
DOE Office of Scientific and Technical Information (OSTI.GOV)
N /A
The proposed action and three alternatives, including a No Build alternative, were evaluated along the existing RWIPL alignment to accommodate the placement of the proposed RWIPL. Construction feasibility, reasonableness and potential environmental impacts were considered during the evaluation of the four actions (and action alternatives) for the proposed RWIPL activities. Reasonable actions were identified as those actions which were considered to be supported by common sense and sound technical principles. Feasible actions were those actions which were considered to be capable of being accomplished, practicable and non-excessive in terms of cost. The evaluation process considered the following design specifications, which were determined to be important to the feasibility of the overall project. The proposed RWIPL replacement project must therefore: (1) Comply with the existing design basis and criteria, (2) Maintain continuity of operation of the facility during construction, (3) Provide the required service life, (4) Be cost effective, (5) Improve the operation and maintenance of the pipeline, and (6) Maintain minimal environmental impact while meeting the performance requirements. Sizing of the pipe, piping construction materials, construction method (e.g., open-cut trench, directional drilling, etc.) and the acquisition of new Right-of-Way (ROW) were additionally evaluated in the preliminary alternative identification, selection and screening process.
Szatkiewicz, Jin P; Wang, WeiBo; Sullivan, Patrick F; Wang, Wei; Sun, Wei
2013-02-01
Structural variation is an important class of genetic variation in mammals. High-throughput sequencing (HTS) technologies promise to revolutionize copy-number variation (CNV) detection but present substantial analytic challenges. Converging evidence suggests that multiple types of CNV-informative data (e.g. read-depth, read-pair, split-read) need be considered, and that sophisticated methods are needed for more accurate CNV detection. We observed that various sources of experimental biases in HTS confound read-depth estimation, and note that bias correction has not been adequately addressed by existing methods. We present a novel read-depth-based method, GENSENG, which uses a hidden Markov model and negative binomial regression framework to identify regions of discrete copy-number changes while simultaneously accounting for the effects of multiple confounders. Based on extensive calibration using multiple HTS data sets, we conclude that our method outperforms existing read-depth-based CNV detection algorithms. The concept of simultaneous bias correction and CNV detection can serve as a basis for combining read-depth with other types of information such as read-pair or split-read in a single analysis. A user-friendly and computationally efficient implementation of our method is freely available.
NASA Astrophysics Data System (ADS)
Golinko, I. M.; Kovrigo, Yu. M.; Kubrak, A. I.
2014-03-01
An express method for optimally tuning analog PI and PID controllers is considered. An integral quality criterion that also penalizes the control output is proposed for optimizing control systems. The suggested criterion differs from existing ones in that the control output applied to the technological process is taken into account in a correct manner, making it possible to minimize the expenditure of material and/or energy resources in controlling industrial equipment. With control organized in this manner, less wear and a longer service life of control devices are achieved. The unimodal nature of the proposed tuning criterion is demonstrated numerically using methods of optimization theory. A functional interrelation between the optimal controller parameters and the dynamic properties of the controlled plant is determined numerically for a single-loop control system. Simulation results for transients obtained with the proposed and existing functional dependences are compared. The proposed calculation formulas differ from existing ones in their simple structure and highly accurate search for the optimal controller tuning parameters, and they are recommended for use by automation specialists in the design and optimization of control systems.
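An integral criterion of this kind, error plus a penalty on control effort, can be sketched for a PI loop. The first-order plant, the weight `rho`, and the candidate gain grid are all assumptions for illustration; the paper's own criterion and plant models may differ.

```python
def pi_criterion(kp, ki, rho=0.05, K=1.0, T=5.0, dt=0.01, t_end=30.0, setpoint=1.0):
    """Integral criterion J = integral of (e^2 + rho*u^2) dt for a PI loop
    around a first-order plant dy/dt = (K*u - y)/T; rho penalises control effort."""
    y = integ = J = t = 0.0
    while t < t_end:
        e = setpoint - y
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        J += (e * e + rho * u * u) * dt  # accumulate the quality criterion
        y += (K * u - y) / T * dt        # explicit Euler step of the plant
        t += dt
    return J

# A coarse grid search stands in for a proper optimiser of the criterion.
candidates = [(kp, ki) for kp in (0.5, 1.0, 2.0, 4.0) for ki in (0.1, 0.3, 1.0)]
best = min(candidates, key=lambda g: pi_criterion(*g))
```

Including `rho*u**2` in the criterion is what trades tracking accuracy against actuator effort, the mechanism the abstract credits for reduced wear and resource expenditure.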
NASA Astrophysics Data System (ADS)
Doležel, Jiří; Novák, Drahomír; Petrů, Jan
2017-09-01
Transportation routes for oversize and excessive loads are currently planned to ensure the transit of a vehicle through critical points on the road, such as level intersections and bridges. This article presents a comprehensive procedure to determine the reliability and load-bearing capacity level of existing bridges on highways and roads using advanced reliability analysis based on Monte Carlo-type simulation techniques combined with nonlinear finite element analysis. The safety index, as described in current structural design standards such as ISO and the Eurocodes, is considered the main criterion of the reliability level of existing structures. As an example, the load-bearing capacity of a 60-year-old single-span slab bridge made of precast prestressed concrete girders is determined for the ultimate and serviceability limit states. The design load capacity of the structure was estimated by fully probabilistic nonlinear finite element analysis using the Latin Hypercube Sampling (LHS) simulation technique. Load-bearing capacity values based on the fully probabilistic analysis are compared with load-bearing capacity levels estimated by deterministic methods for a critical section of the most heavily loaded girders.
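The Latin Hypercube Sampling step can be sketched in a few lines: each input dimension is split into `n` equal-probability strata and exactly one sample is drawn per stratum. The two variable ranges below (concrete strength, prestress loss) are purely illustrative placeholders, not values from the study.

```python
import random

def latin_hypercube(n, bounds, rng=random):
    """n samples over [lo, hi] bounds, exactly one sample per stratum per dimension."""
    dims = len(bounds)
    strata = [list(range(n)) for _ in range(dims)]
    for s in strata:
        rng.shuffle(s)                    # decouple strata across dimensions
    pts = []
    for i in range(n):
        pt = []
        for d, (lo, hi) in enumerate(bounds):
            u = (strata[d][i] + rng.random()) / n   # uniform draw within the stratum
            pt.append(lo + u * (hi - lo))
        pts.append(pt)
    return pts

# e.g. two random variables of the girder model (ranges purely illustrative):
# concrete strength [30, 50] MPa and prestress loss [5, 15] %.
pts = latin_hypercube(8, [(30.0, 50.0), (5.0, 15.0)])
```

Compared with plain Monte Carlo, the stratification guarantees that even a small sample spreads across the whole range of each random variable, which is why LHS keeps the number of expensive nonlinear FE runs manageable.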
3D Reconstruction from Multi-View Medical X-Ray Images - Review and Evaluation of Existing Methods
NASA Astrophysics Data System (ADS)
Hosseinian, S.; Arefi, H.
2015-12-01
The 3D concept is extremely important in clinical studies of the human body. Accurate 3D models of bony structures are currently required in clinical routine for diagnosis, patient follow-up, surgical planning, computer-assisted surgery and biomechanical applications. However, conventional 3D medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) have serious limitations, including acquisition in non-weight-bearing positions, cost and, for CT, high radiation dose. Therefore, 3D reconstruction methods from biplanar X-ray images have been taken into consideration as reliable alternatives for achieving accurate 3D models with low radiation dose in weight-bearing positions. Various photogrammetry-based methods have been offered for 3D reconstruction from X-ray images, and these need to be assessed. In this paper, after demonstrating the principles of 3D reconstruction from X-ray images, different existing methods of 3D reconstruction of bony structures from radiographs are classified and evaluated with various metrics, and their advantages and disadvantages are discussed. Finally, the presented methods are compared with respect to several metrics such as accuracy, reconstruction time and applications. Each method has advantages and disadvantages that should be weighed for a specific application.
Initial Results of an MDO Method Evaluation Study
NASA Technical Reports Server (NTRS)
Alexandrov, Natalia M.; Kodiyalam, Srinivas
1998-01-01
The NASA Langley MDO method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. In the first phase of the study, three MDO methods were implemented in the iSIGHT framework and used to solve a set of ten relatively simple problems. In this paper, we comment on the general considerations for conducting method evaluation studies and report some initial results obtained to date, although the results are not conclusive because of the small initial test set. An MDO formulation can be analyzed in terms of optimality conditions and the sensitivity of solutions to various perturbations. Optimization algorithms are used to solve a particular MDO formulation. It is then appropriate to speak of local convergence rates and of global convergence properties of an optimization algorithm applied to a specific formulation. An analogous distinction exists in the field of partial differential equations. On the one hand, equations are analyzed in terms of regularity, well-posedness, and the existence and uniqueness of solutions. On the other, one considers numerous algorithms for solving differential equations. The area of MDO methods studies MDO formulations combined with optimization algorithms, although at times the distinction is blurred.
Convex Formulations of Learning from Crowds
NASA Astrophysics Data System (ADS)
Kajino, Hiroshi; Kashima, Hisashi
The use of crowdsourcing services to collect large amounts of labeled data for machine learning has attracted considerable attention, since such services allow one to ask the general public to label data at very low cost through the Internet. The use of crowdsourcing has introduced a new challenge in machine learning: coping with the low quality of crowd-generated data. There have been many recent attempts to address the quality problem of multiple labelers; however, there are two serious drawbacks in the existing approaches, namely (i) non-convexity and (ii) task homogeneity. Most of the existing methods treat true labels as latent variables, which results in non-convex optimization problems. Also, the existing models assume only single homogeneous tasks, while in realistic situations, clients can offer multiple tasks to crowds and crowd workers can work on different tasks in parallel. In this paper, we propose a convex optimization formulation of learning from crowds by introducing personal models of individual crowd workers without estimating true labels. We further extend the proposed model to multi-task learning, based on the resemblance between the proposed formulation and an existing multi-task learning model. We also devise efficient iterative methods for solving the convex optimization problems by exploiting conditional independence structures among multiple classifiers.
Respiratory Artefact Removal in Forced Oscillation Measurements: A Machine Learning Approach.
Pham, Thuy T; Thamrin, Cindy; Robinson, Paul D; McEwan, Alistair L; Leong, Philip H W
2017-08-01
Respiratory artefact removal for the forced oscillation technique can be treated as an anomaly detection problem. Manual removal is currently considered the gold standard, but this approach is laborious and subjective. Most existing automated techniques use simple statistics and/or reject anomalous data points. Unfortunately, simple statistics are insensitive to numerous artefacts, leading to low reproducibility of results. Furthermore, rejecting anomalous data points causes an imbalance between the inspiratory and expiratory contributions. From a machine learning perspective, such methods are unsupervised and amount to simple feature extraction. We hypothesize that supervised techniques can be used to find improved features that are more discriminative and more highly correlated with the desired output. Features thus found are then used for anomaly detection by applying quartile thresholding, which rejects a complete breath if any of its features is out of range. The thresholds are determined by both saliency and performance metrics rather than by qualitative assumptions as in previous works. Feature ranking indicates that our new landmark features are among the highest-scoring candidates across saliency criteria, regardless of age. F1-scores, receiver operating characteristic curves, and variability of the mean resistance metrics show that the proposed scheme outperforms previous simple feature extraction approaches. Our subject-independent detector, 1IQR-SU, demonstrated approval rates of 80.6% for adults and 98% for children, higher than existing methods. Our new features are more relevant, and our removal is objective and comparable to the manual method. This work is a critical step toward automating forced oscillation technique quality control.
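The quartile thresholding step, rejecting a whole breath when any feature falls outside its interquartile-range limits, can be sketched as follows. The feature values and the multiplier `k` are illustrative assumptions, not data or settings from the study.

```python
def iqr_limits(values, k=1.0):
    """Quartile-based limits: anything outside [Q1 - k*IQR, Q3 + k*IQR] is anomalous."""
    s = sorted(values)
    def q(p):
        # linear interpolation between order statistics
        idx = p * (len(s) - 1)
        lo = int(idx)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (idx - lo) * (s[hi] - s[lo])
    q1, q3 = q(0.25), q(0.75)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def keep_breath(features, limits):
    """Keep a breath only if every one of its features lies within its limits."""
    return all(lo <= f <= hi for f, (lo, hi) in zip(features, limits))

# One feature (e.g. mean resistance) per breath; numbers purely illustrative.
resistance = [4.1, 4.3, 4.2, 4.5, 4.0, 9.8]   # last breath contains an artefact
limits = [iqr_limits(resistance)]
kept = [keep_breath([r], limits) for r in resistance]
```

Because whole breaths are kept or rejected, rather than individual data points, the inspiratory/expiratory balance of each retained breath is preserved, which is the imbalance problem the abstract raises against point-wise rejection.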
Methods for cost estimation in software project management
NASA Astrophysics Data System (ADS)
Briciu, C. V.; Filip, I.; Indries, I. I.
2016-02-01
The speed at which processes in the software development field have changed makes forecasting the overall cost of a software project very difficult. Many researchers have considered this task unachievable, while others hold that it can be solved using known mathematical methods (e.g. multiple linear regression) together with newer techniques such as genetic programming and neural networks. The paper presents a solution for building a cost estimation model for software project management using genetic algorithms, starting from the PROMISE datasets related to the COCOMO 81 model. The first part of the paper summarizes the major achievements in research on estimating overall project costs and describes the existing software development process models. The last part proposes a basic mathematical model for genetic programming, including a description of the chosen fitness function and chromosome representation. The perspective of the described model is linked to the current reality of software development, taking the software product life cycle as a basis together with current challenges and innovations in the area. Based on the authors' experience and an analysis of the existing models and product life cycles, it is concluded that estimation models should be adapted to new technologies and emerging systems, and that they depend largely on the chosen software development method.
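A typical fitness function for such a genetic search can be sketched with the mean magnitude of relative error (MMRE) over a project dataset. The COCOMO-like candidate form, its coefficients, and the (KLOC, effort) pairs below are illustrative assumptions, not the paper's evolved model or PROMISE data.

```python
def mmre(model, projects):
    """Mean magnitude of relative error -- a common fitness for effort models."""
    errs = [abs(model(kloc) - actual) / actual for kloc, actual in projects]
    return sum(errs) / len(errs)

def cocomo_like(a, b):
    """COCOMO-81-style candidate: effort = a * KLOC**b.  In a genetic programme
    the expression itself would be evolved; a and b are placeholders here."""
    return lambda kloc: a * kloc ** b

# (KLOC, actual person-months) pairs -- purely illustrative, not PROMISE data.
projects = [(10, 24.0), (50, 145.0), (100, 306.0)]
fitness = mmre(cocomo_like(2.4, 1.05), projects)
```

The genetic algorithm would minimise this fitness over candidate chromosomes (encoded model expressions), so a lower MMRE directly means a more accurate cost estimator on the calibration projects.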
Boundary based on exchange symmetry theory for multilevel simulations. I. Basic theory.
Shiga, Motoyuki; Masia, Marco
2013-07-28
In this paper, we lay the foundations for a new method that allows multilevel simulations of a diffusive system, i.e., a system where a flux of particles through the boundaries might disrupt the primary region. The method is based on the use of flexible restraints that maintain the separation between inner and outer particles. It is shown that, by introducing a bias potential that accounts for the exchange symmetry of the system, the correct statistical distribution is preserved. Using a toy model consisting of non-interacting particles in an asymmetric potential well, we prove that the method is formally exact, and that it could be simplified by considering only up to a couple of particle exchanges without a loss of accuracy. A real-world test is then made by considering a hybrid MM(∗)/MM calculation of cesium ion in water. In this case, the single exchange approximation is sound enough that the results superimpose to the exact solutions. Potential applications of this method to many different hybrid QM/MM systems are discussed, as well as its limitations and strengths in comparison to existing approaches.
NASA Astrophysics Data System (ADS)
Köktan, Utku; Demir, Gökhan; Kerem Ertek, M.
2017-04-01
The earthquake behavior of retaining walls is commonly calculated with pseudo-static approaches based on the Mononobe-Okabe method. The Mononobe-Okabe method does not give a definite picture of the distribution of the seismic earth pressure acting on the retaining wall, because the pressure is obtained by balancing the forces acting on the active wedge behind the wall; wave propagation effects and soil-structure interaction are neglected. The purpose of this study is to examine the earthquake behavior of a retaining wall taking soil-structure interaction into account. For this purpose, a time-history seismic analysis of the soil-structure interaction system has been carried out by the finite element method for three different soil conditions. The seismic analysis of the soil-structure model was performed using the earthquake record "1971, San Fernando Pacoima Dam, 196 degree" from the library of the MIDAS GTS NX software. The results obtained from the analyses show that soil-structure interaction is very important for the seismic design of a retaining wall. Keywords: Soil-structure interaction, Finite element model, Retaining wall
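For reference, a commonly quoted simplified form of the Mononobe-Okabe active earth-pressure coefficient (vertical wall, level, cohesionless backfill) can be sketched as follows; the formula and input values are textbook-style assumptions to be checked against a geotechnical reference, not taken from this study:

```python
from math import atan, cos, sin, sqrt, radians

def mononobe_okabe_kae(phi, delta, kh, kv=0.0):
    """Seismic active earth-pressure coefficient K_AE for a vertical wall and
    level backfill. phi: soil friction angle (rad), delta: wall friction
    angle (rad), kh/kv: horizontal/vertical seismic coefficients."""
    theta = atan(kh / (1.0 - kv))          # seismic inertia angle
    num = cos(phi - theta) ** 2
    den = (cos(theta) * cos(delta + theta)
           * (1 + sqrt(sin(phi + delta) * sin(phi - theta)
                       / cos(delta + theta))) ** 2)
    return num / den

# With kh = 0 this reduces to the static Coulomb coefficient.
k_static = mononobe_okabe_kae(radians(30), radians(15), kh=0.0)   # ~0.30
k_seismic = mononobe_okabe_kae(radians(30), radians(15), kh=0.2)  # ~0.45
# Total pseudo-static active thrust: P_AE = 0.5 * K_AE * gamma * H**2 * (1 - kv)
```

The point made in the abstract is that this closed-form thrust says nothing about how the pressure is distributed over the wall height, which is what the finite element soil-structure model resolves.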
NASA Astrophysics Data System (ADS)
Wang, Zhaopeng; Cuntz, Manfred
2017-10-01
We derive fitting formulae for the quick determination of the existence of S-type and P-type habitable zones (HZs) in binary systems. Based on previous work, we consider the limits of the climatological HZ in binary systems (which sensitively depend on the system parameters) based on a joint constraint encompassing planetary orbital stability and a habitable region for a possible system planet. Additionally, we employ updated results on planetary climate models obtained by Kopparapu and collaborators. Our results are applied to four P-type systems (Kepler-34, Kepler-35, Kepler-413, and Kepler-1647) and two S-type systems (TrES-2 and KOI-1257). Our method allows us to gauge the existence of climatological HZs for these systems in a straightforward manner with detailed consideration of the observational uncertainties. Further applications may include studies of other existing systems as well as systems to be identified through future observational campaigns.
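For a single star, such fitting formulae reduce to scaling orbital distance with the square root of luminosity; a minimal sketch follows, with rough placeholder flux limits in the spirit of the Kopparapu et al. climate models (the real limits vary with stellar effective temperature, and the paper's joint constraint with binary orbital stability is not reproduced here):

```python
from math import sqrt

# Approximate solar-normalized effective fluxes at the HZ edges for a
# Sun-like star -- rough, illustrative assumptions only.
S_INNER = 1.05   # runaway-greenhouse limit
S_OUTER = 0.36   # maximum-greenhouse limit

def hz_limits(luminosity):
    """Inner and outer climatological HZ distances in AU for a star of the
    given luminosity (in solar units), from d = sqrt(L / S_eff)."""
    return sqrt(luminosity / S_INNER), sqrt(luminosity / S_OUTER)

inner, outer = hz_limits(1.0)   # Sun-like star: roughly 0.98 to 1.67 AU
```

In a binary, the fluxes from both stars must be combined at each location and the result intersected with the orbital-stability region, which is what the paper's joint constraint formalizes.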
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Zhaopeng; Cuntz, Manfred, E-mail: zhaopeng.wang@mavs.uta.edu, E-mail: cuntz@uta.edu
Determination of vertical pressures on running wheels of freight trolleys of bridge type cranes
NASA Astrophysics Data System (ADS)
Goncharov, K. A.; Denisov, I. A.
2018-03-01
The problematic issues in the design of freight trolleys of bridge-type cranes related to ensuring uniform load distribution between the running wheels are considered. The shortcomings of the existing methods for calculating support pressures are described. The results of an analytical calculation of the support wheel pressures are compared with the results of a numerical solution of this problem for various schemes of trolley supporting frames. Conclusions are drawn on the applicability of the various methods for calculating vertical pressures, depending on the type of metal structures used in the trolley.
NASA Technical Reports Server (NTRS)
Dash, S.; Delguidice, P.
1972-01-01
A second order numerical method employing reference plane characteristics has been developed for the calculation of geometrically complex three dimensional nozzle-exhaust flow fields, heretofore uncalculable by existing methods. The nozzles may have irregular cross sections with swept throats and may be stacked in modules using the vehicle undersurface for additional expansion. The nozzles may have highly nonuniform entrance conditions, the medium considered being an equilibrium hydrogen-air mixture. The program calculates and carries along the underexpansion shock and contact as discrete discontinuity surfaces, for a nonuniform vehicle external flow.
On the magnetic attitude control for spacecraft via the ɛ-strategies method
NASA Astrophysics Data System (ADS)
Smirnov, Georgi V.; Ovchinnikov, Mikhail; Miranda, Francisco
2008-09-01
We develop a new approach to stabilization problems based on a combination of the Lyapunov functions method with local controllability properties. The stabilizability is understood in the sense of ɛ-strategies introduced by Pontryagin in the frame of differential games theory. To illustrate the possibilities of our approach we consider a satellite with two magnetic coils directed along its principal inertia axes. Its circular orbit is neither polar nor equatorial. We show that there exists an ɛ-strategy stabilizing an Earth pointing satellite, whenever the deviations from the equilibrium position are small enough.
On the wavelet optimized finite difference method
NASA Technical Reports Server (NTRS)
Jameson, Leland
1994-01-01
When one considers the effect in the physical space, Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet optimized finite difference method which is equivalent to a wavelet method in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. With this method one can obtain an arbitrarily good approximation to a conservative difference method for solving nonlinear conservation laws.
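The correspondence between a large wavelet coefficient and a needed grid point can be sketched with the simplest (Haar) wavelet; Daubechies wavelets behave analogously with wider stencils (the sample profile and tolerance below are illustrative):

```python
def haar_details(u):
    """One level of Haar wavelet detail coefficients of a sampled function."""
    return [(u[2 * i] - u[2 * i + 1]) / 2 for i in range(len(u) // 2)]

def refine_flags(u, tol):
    """Flag cells whose detail coefficient (local small-scale content) exceeds
    tol; these are the locations where a wavelet method would add basis
    functions, or equivalently where a finite difference grid is refined."""
    return [abs(d) > tol for d in haar_details(u)]

# A profile that is flat except for a sharp jump: only the cell containing
# the jump gets flagged for refinement.
u = [0.0] * 7 + [1.0] * 9
flags = refine_flags(u, tol=0.1)   # True only at the jump
```

In the wavelet optimized finite difference method, the flagged locations drive grid-point placement while all updates stay in physical space, avoiding the wavelet-space treatment of nonlinear terms and boundary conditions.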
NASA Technical Reports Server (NTRS)
Muravyov, Alexander A.; Turner, Travis L.; Robinson, Jay H.; Rizzi, Stephen A.
1999-01-01
In this paper, the problem of random vibration of geometrically nonlinear MDOF structures is considered. The solutions obtained by application of two different versions of a stochastic linearization method are compared with exact (F-P-K) solutions. The formulation of a relatively new version of the stochastic linearization method (energy-based version) is generalized to the MDOF system case. Also, a new method for determination of nonlinear stiffness coefficients for MDOF structures is demonstrated. This method in combination with the equivalent linearization technique is implemented in a new computer program. Results in terms of root-mean-square (RMS) displacements obtained by using the new program and an existing in-house code are compared for two examples of beam-like structures.
A Bayesian Scoring Technique for Mining Predictive and Non-Spurious Rules
Batal, Iyad; Cooper, Gregory; Hauskrecht, Milos
2015-01-01
Rule mining is an important class of data mining methods for discovering interesting patterns in data. The success of a rule mining method heavily depends on the evaluation function that is used to assess the quality of the rules. In this work, we propose a new rule evaluation score - the Predictive and Non-Spurious Rules (PNSR) score. This score relies on Bayesian inference to evaluate the quality of the rules and considers the structure of the rules to filter out spurious rules. We present an efficient algorithm for finding rules with high PNSR scores. The experiments demonstrate that our method is able to cover and explain the data with a much smaller rule set than existing methods. PMID:25938136
Hybrid methods for simulating hydrodynamics and heat transfer in multiscale (1D-3D) models
NASA Astrophysics Data System (ADS)
Filimonov, S. A.; Mikhienkova, E. I.; Dekterev, A. A.; Boykov, D. V.
2017-09-01
The work is devoted to application of different-scale models in the simulation of hydrodynamics and heat transfer of large and/or complex systems, which can be considered as a combination of extended and “compact” elements. The model consisting of simultaneously existing three-dimensional and network (one-dimensional) elements is called multiscale. The paper examines the relevance of building such models and considers three main options for their implementation: the spatial and the network parts of the model are calculated separately; spatial and network parts are calculated simultaneously (hydraulically unified model); network elements “penetrate” the spatial part and are connected through the integral characteristics at the tube/channel walls (hydraulically disconnected model). Each proposed method is analyzed in terms of advantages and disadvantages. The paper presents a number of practical examples demonstrating the application of multiscale models.
Donatello, S; Tyrer, M; Cheeseman, C R
2010-01-01
A hazardous waste assessment has been completed on ash samples obtained from seven sewage sludge incinerators operating in the UK, using the methods recommended in the EU Hazardous Waste Directive. Using these methods, the assumed speciation of zinc (Zn) ultimately determines whether the samples are hazardous due to ecotoxicity hazard. Leaching test results showed that two of the seven sewage sludge ash samples would require disposal in a hazardous waste landfill because they exceed EU landfill waste acceptance criteria for stabilised non-reactive hazardous waste cells for soluble selenium (Se). Because Zn cannot be proven to exist predominantly as a phosphate or oxide in the ashes, it is recommended they be considered as non-hazardous waste. However, leaching test results demonstrate that these ashes cannot be considered as inert waste, and this has significant implications for the management, disposal and re-use of sewage sludge ash.
Precision digital control systems
NASA Astrophysics Data System (ADS)
Vyskub, V. G.; Rozov, B. S.; Savelev, V. I.
This book is concerned with the characteristics of digital control systems of great accuracy. A classification of such systems is considered along with aspects of stabilization, programmable control applications, digital tracking systems and servomechanisms, and precision systems for the control of a scanning laser beam. Other topics explored are related to systems of proportional control, linear devices and methods for increasing precision, approaches for further decreasing the response time in the case of high-speed operation, possibilities for the implementation of a logical control law, and methods for the study of precision digital control systems. A description is presented of precision automatic control systems which make use of electronic computers, taking into account the existing possibilities for an employment of computers in automatic control systems, approaches and studies required for including a computer in such control systems, and an analysis of the structure of automatic control systems with computers. Attention is also given to functional blocks in the considered systems.
Stability and Optimal Harvesting of Modified Leslie-Gower Predator-Prey Model
NASA Astrophysics Data System (ADS)
Toaha, S.; Azis, M. I.
2018-03-01
This paper studies the dynamics of a modified Leslie-Gower predator-prey population model. The model is stated as a system of first-order differential equations and consists of one predator and one prey, with a Holling type II predation function. Both populations are assumed to be commercially valuable and are harvested with constant efforts. The existence and stability of the interior equilibrium point are analysed: the model is linearized, and the eigenvalues are used to determine the stability of the interior equilibrium point. From the analysis, we show that under a certain condition the interior equilibrium point exists and is locally asymptotically stable. For the model with constant harvesting efforts, cost, revenue, and profit functions are considered, and the stable interior equilibrium point is then related to the maximum-profit problem as well as to the net-present-value-of-revenues problem. We show that there exists a value of the efforts that maximizes the profit function and the net present value of revenues while the interior equilibrium point remains stable. This means that the populations can coexist for a long time and the benefit is maximized even though the populations are harvested with constant efforts.
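A numerical sketch of such a model (all parameter values are illustrative assumptions, not taken from the paper) locates the stable interior equilibrium by direct forward-Euler integration:

```python
def rhs(x, y, r1=1.0, K=10.0, a=0.6, m=1.0, r2=0.5, c=0.4, d=0.4,
        E1=0.1, E2=0.05):
    """Modified Leslie-Gower model: logistic prey with Holling type II
    predation, a predator whose carrying capacity grows with prey abundance,
    and constant-effort harvesting terms E1*x and E2*y."""
    dx = r1 * x * (1 - x / K) - a * x * y / (m + x) - E1 * x
    dy = r2 * y * (1 - y / (c * x + d)) - E2 * y
    return dx, dy

def simulate(x0=2.0, y0=1.0, dt=0.01, steps=200_000):
    """Forward-Euler integration; the trajectory settles on the interior
    equilibrium when that equilibrium is locally asymptotically stable."""
    x, y = x0, y0
    for _ in range(steps):
        dx, dy = rhs(x, y)
        x, y = x + dt * dx, y + dt * dy
    return x, y

x_eq, y_eq = simulate()   # ~ (6.84, 2.8224) for these parameters
```

For these values the nullclines intersect at x = 6.84, y = 0.36(x + 1) = 2.8224, and the Jacobian there has negative trace and positive determinant, so the simulated trajectory converges to it.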
Optimal control of thermally coupled Navier Stokes equations
NASA Technical Reports Server (NTRS)
Ito, Kazufumi; Scroggs, Jeffrey S.; Tran, Hien T.
1994-01-01
The optimal boundary temperature control of the stationary thermally coupled incompressible Navier-Stokes equations is considered. Well-posedness and existence of the optimal control and a necessary optimality condition are obtained. Optimization algorithms based on the augmented Lagrangian method with a second-order update are discussed. A test example motivated by control of the transport process in a high pressure vapor transport (HPVT) reactor is presented to demonstrate the applicability of our theoretical results and the proposed algorithm.
Peopling the past: new perspectives on the ancient Maya.
Robin, C
2001-01-02
The new direction in Maya archaeology is toward achieving a greater understanding of people, their roles, and their relations in the past. To answer emerging humanistic questions about ancient people's lives, Mayanists are increasingly making use of new and existing scientific methods from archaeology and other disciplines. Maya archaeology is bridging the divide between the humanities and sciences to answer questions about ancient people previously considered beyond the realm of archaeological knowledge.
Glenn, Edward P.; Nagler, Pamela L.; Morino, Kiyomi; Hultine, Kevin
2013-01-01
Conclusions: Salts accumulated in the vadose zone at both sites so usable water was confined to the saturated capillary fringe above the aquifer. Existence of a saline aquifer imposes several types of constraints on phreatophyte EG, which need to be considered in models of plant water uptake. The heterogeneous nature of saltcedar EG over river terraces introduces potential errors into estimates of ET by wide-area methods.
Dynamical behavior for the three-dimensional generalized Hasegawa-Mima equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang Ruifeng; Guo Boling; Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088
2007-01-15
The long-time behavior of solutions of the three-dimensional generalized Hasegawa-Mima [Phys. Fluids 21, 87 (1978)] equations with a dissipation term is considered. The global attractor problem for the three-dimensional generalized Hasegawa-Mima equations with periodic boundary conditions is studied. Applying the method of uniform a priori estimates, the existence of a global attractor for this problem is proven, and the dimensions of the global attractor are estimated.
Automatic image enhancement based on multi-scale image decomposition
NASA Astrophysics Data System (ADS)
Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong
2014-01-01
In image processing and computational photography, automatic image enhancement is one of the long-standing objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our automatic image enhancement method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
NASA Astrophysics Data System (ADS)
Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.
2005-05-01
Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when a sufficient number of representative training samples is available. In many real-life applications such as passport identification, however, only one well-controlled frontal sample image is available for training. In this situation, the performance of existing algorithms degrades dramatically, or they may not be applicable at all. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples of lower dimension than the original image, but also account for face detection localization error during training. We then propose a subspace LDA method, tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to reach the recognition decision. The FERET database is used to evaluate the proposed method, and the results are encouraging.
An integrative approach for measuring semantic similarities using gene ontology.
Peng, Jiajie; Li, Hongxiang; Jiang, Qinghua; Wang, Yadong; Chen, Jin
2014-01-01
Gene Ontology (GO) provides rich information and a convenient way to study gene functional similarity, which has been successfully used in various applications. However, the existing GO-based similarity measures are limited in that each considers only a subset of the information in GO. An appropriate integration of the existing measures that takes more of the GO information into account is needed. We propose a novel integrative measure called InteGO2 that automatically selects appropriate seed measures and then integrates them using a metaheuristic search method. The experimental results show that InteGO2 significantly improves the accuracy of gene similarity measurement in human, Arabidopsis and yeast on both the molecular function and biological process GO categories. InteGO2 computes gene-to-gene similarities more accurately than the tested existing measures and has high robustness. The supplementary document and software are available at http://mlg.hit.edu.cn:8082/.
Polynomial solutions of the Monge-Ampère equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aminov, Yu A
2014-11-30
The question of the existence of polynomial solutions to the Monge-Ampère equation z_{xx}z_{yy} − z_{xy}^2 = f(x,y) is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree, which is positive for all values of its arguments and has a positive quadratic part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x, y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.
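Low-degree candidate solutions can be tested mechanically by evaluating the Monge-Ampère operator on polynomials; a minimal sketch with polynomials encoded as {(i, j): coeff} dictionaries follows (this illustrates the operator only, not the paper's proofs):

```python
# A polynomial in x, y is represented as {(i, j): c} meaning sum c * x**i * y**j.

def diff(p, var):
    """Partial derivative of a polynomial dict with respect to 'x' or 'y'."""
    out = {}
    for (i, j), c in p.items():
        if var == 'x' and i > 0:
            out[(i - 1, j)] = out.get((i - 1, j), 0) + c * i
        if var == 'y' and j > 0:
            out[(i, j - 1)] = out.get((i, j - 1), 0) + c * j
    return out

def mul(p, q):
    out = {}
    for (i, j), c in p.items():
        for (k, l), d in q.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + c * d
    return out

def sub(p, q):
    out = dict(p)
    for key, c in q.items():
        out[key] = out.get(key, 0) - c
    return {key: c for key, c in out.items() if c != 0}

def monge_ampere(z):
    """Hessian determinant f = z_xx * z_yy - z_xy**2 of a polynomial z."""
    zxx = diff(diff(z, 'x'), 'x')
    zyy = diff(diff(z, 'y'), 'y')
    zxy = diff(diff(z, 'x'), 'y')
    return sub(mul(zxx, zyy), mul(zxy, zxy))

# z = (x**2 + y**2) / 2 solves z_xx z_yy - z_xy**2 = 1.
f = monge_ampere({(2, 0): 0.5, (0, 2): 0.5})
```

Equating coefficients of monge_ampere(z) with those of a given polynomial f yields the algebraic systems behind the degree-up-to-4 existence conditions.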
Instabilities in the Sun-Jupiter-Asteroid three body problem
NASA Astrophysics Data System (ADS)
Urschel, John C.; Galante, Joseph R.
2013-03-01
We consider dynamics of a Sun-Jupiter-Asteroid system, and, under some simplifying assumptions, show the existence of instabilities in the motions of an asteroid. In particular, we show that an asteroid whose initial orbit is far from the orbit of Mars can be gradually perturbed into one that crosses Mars' orbit. Properly formulated, the motion of the asteroid can be described as a Hamiltonian system with two degrees of freedom, with the dynamics restricted to a "large" open region of the phase space reduced to an exact area preserving map. Instabilities arise in regions where the map has no invariant curves. The method of MacKay and Percival is used to explicitly rule out the existence of these curves, and results of Mather abstractly guarantee the existence of diffusing orbits. We emphasize that finding such diffusing orbits numerically is quite difficult, and is outside the scope of this paper.
Method of App Selection for Healthcare Providers Based on Consumer Needs.
Lee, Jisan; Kim, Jeongeun
2018-01-01
Mobile device applications can be used to manage health. However, healthcare providers hesitate to use them because selection methods that consider the needs of health consumers and identify the most appropriate application are rare. This study aimed to create an effective method of identifying applications that address user needs. Women experiencing dysmenorrhea and premenstrual syndrome were the targeted users. First, we searched for related applications from two major sources of mobile applications. Brainstorming, mind mapping, and persona and scenario techniques were used to create a checklist of relevant criteria, which was used to rate the applications. Of the 2784 applications found, 369 were analyzed quantitatively. Of those, five of the top candidates were evaluated by three groups: application experts, clinical experts, and potential users. All three groups ranked one application the highest; however, the remaining rankings differed. The results of this study suggest that the method created is useful because it considers not only the needs of various users but also the knowledge of application and clinical experts. This study proposes a method for finding and using the best among existing applications and highlights the need for nurses who can understand and combine opinions of users and application and clinical experts.
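The final weighted-combination step can be sketched as follows; the app names, group weights, and checklist scores are hypothetical placeholders, not values from the study:

```python
# Hypothetical checklist scores (0-5) from the three rater groups.
RATINGS = {
    "AppA": {"app_experts": 4.5, "clinical_experts": 4.0, "users": 4.8},
    "AppB": {"app_experts": 4.0, "clinical_experts": 3.2, "users": 3.9},
    "AppC": {"app_experts": 3.1, "clinical_experts": 4.4, "users": 3.0},
}
WEIGHTS = {"app_experts": 0.3, "clinical_experts": 0.3, "users": 0.4}

def rank(ratings, weights):
    """Rank apps by the weighted sum of their group scores, best first."""
    scores = {app: sum(weights[g] * s for g, s in by_group.items())
              for app, by_group in ratings.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranking = rank(RATINGS, WEIGHTS)   # ["AppA", "AppB", "AppC"]
```

When the groups disagree below the top choice, as they did in the study, the weights encode whose judgment should dominate the tie-breaking.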
Lucas, Patricia J; Baird, Janis; Arai, Lisa; Law, Catherine; Roberts, Helen M
2007-01-15
The inclusion of qualitative studies in systematic reviews poses methodological challenges. This paper presents worked examples of two methods of data synthesis (textual narrative and thematic), used in relation to one review, with the aim of enabling researchers to consider the strength of different approaches. A systematic review of lay perspectives of infant size and growth was conducted, locating 19 studies (including both qualitative and quantitative). The data extracted from these were synthesised using both a textual narrative and a thematic synthesis. The processes of both methods are presented, showing a stepwise progression to the final synthesis. Both methods led us to similar conclusions about lay views toward infant size and growth. Differences between methods lie in the way they dealt with study quality and heterogeneity. On the basis of the work reported here, we consider textual narrative and thematic synthesis have strengths and weaknesses in relation to different research questions. Thematic synthesis holds most potential for hypothesis generation, but may obscure heterogeneity and quality appraisal. Textual narrative synthesis is better able to describe the scope of existing research and account for the strength of evidence, but is less good at identifying commonality.
Capability maturity models for offshore organisational management.
Strutt, J E; Sharp, J V; Terry, E; Miles, R
2006-12-01
The goal setting regime imposed by the UK safety regulator has important implications for an organisation's ability to manage health and safety related risks. Existing approaches to safety assurance based on risk analysis and formal safety assessments are increasingly considered unlikely to create the step change improvement in safety to which the offshore industry aspires and alternative approaches are being considered. One approach, which addresses the important issue of organisational behaviour and which can be applied at a very early stage of design, is the capability maturity model (CMM). The paper describes the development of a design safety capability maturity model, outlining the key processes considered necessary to safety achievement, definition of maturity levels and scoring methods. The paper discusses how CMM is related to regulatory mechanisms and risk based decision making together with the potential of CMM to environmental risk management.
ENGINEERING ECONOMIC ANALYSIS OF A PROGRAM FOR ARTIFICIAL GROUNDWATER RECHARGE.
Reichard, Eric G.; Bredehoeft, John D.
1984-01-01
This study describes and demonstrates two alternative methods for evaluating the relative costs and benefits of artificial groundwater recharge using percolation ponds. The first analysis takes the benefits to be the reduction of pumping lifts and land subsidence; the second takes them to be the alternative costs of a comparable surface delivery system. Example computations are carried out for an existing artificial recharge program in the Santa Clara Valley in California. A computer groundwater model is used to estimate both the average long-term and the drought-period effects of artificial recharge in the study area. Results indicate that the costs of artificial recharge are considerably smaller than the alternative costs of an equivalent surface system.
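The alternative-cost comparison reduces to discounting two annual cost streams to present value; the numbers below are illustrative placeholders, not the study's figures:

```python
def present_value(annual_amount, rate, years):
    """Present value of a constant annual cost stream at a given discount rate."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

# Illustrative annual costs in $ million over a 30-year horizon at 6%.
recharge_cost = present_value(2.0, 0.06, 30)             # percolation ponds
surface_alternative_cost = present_value(7.5, 0.06, 30)  # equivalent pipeline
net_benefit = surface_alternative_cost - recharge_cost   # > 0 favors recharge
```

Under the first method, the annual benefit stream would instead be estimated from the groundwater model's predicted reductions in pumping lifts and subsidence damage, then discounted the same way.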
Using DNA barcoding to differentiate invasive Dreissena species (Mollusca, Bivalvia)
Marescaux, Jonathan; Van Doninck, Karine
2013-01-01
The zebra mussel (Dreissena polymorpha) and the quagga mussel (Dreissena rostriformis bugensis) are considered the most competitive invaders in the freshwaters of Europe and North America. Although shell characteristics exist to differentiate both species, phenotypic plasticity in the genus Dreissena does not always allow a clear identification, so an accurate identification method is essential. DNA barcoding has been proven to be an adequate procedure to discriminate species, and the cytochrome c oxidase subunit I mitochondrial gene (COI) is considered the standard barcode for animals. We tested the use of this gene as a DNA barcode and found that it allows rapid and accurate identification of adult Dreissena individuals. PMID:24453560
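The barcoding decision itself is a nearest-neighbor comparison of aligned COI fragments; a sketch with short synthetic sequences follows (the sequences and the ~3% divergence threshold are illustrative assumptions, not data from this study):

```python
def p_distance(seq1, seq2):
    """Proportion of differing sites between two aligned sequences."""
    assert len(seq1) == len(seq2), "sequences must be aligned"
    return sum(a != b for a, b in zip(seq1, seq2)) / len(seq1)

def assign_species(query, references, threshold=0.03):
    """Assign the query to its nearest reference barcode, if the divergence
    falls within a (rule-of-thumb) intraspecific threshold."""
    best = min(references, key=lambda name: p_distance(query, references[name]))
    return best if p_distance(query, references[best]) <= threshold else None

# Toy aligned 40-bp COI fragments (synthetic, for illustration only).
refs = {
    "D. polymorpha":  "ATGCTAGCTAGGCTTACGATCGATACGGAATTCCGGATTA",
    "D. r. bugensis": "ATGCTGGCTAGCCTTACGATTGGTACGGAGTTCCGGGTTA",
}
# The query equals the D. polymorpha reference with one substitution.
query = "ATGCTAGCTAGGCTTACGATCGATACGGAATTCCGGATTG"
```

Real COI barcodes are ~650 bp, and assignment is usually made against curated reference libraries rather than a two-entry dictionary.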
Identification and isolation of adult liver stem/progenitor cells.
Tanaka, Minoru; Miyajima, Atsushi
2012-01-01
Hepatoblasts are considered to be liver stem/progenitor cells in the fetus because they propagate and differentiate into the two types of liver epithelial cells, hepatocytes and cholangiocytes. In adults, the oval cells that emerge in severely injured liver are considered facultative hepatic stem/progenitor cells. However, the nature of oval cells remained unclear for a long time due to the lack of a method to isolate them, and it has also been unclear whether liver stem/progenitor cells exist in normal adult liver. Recently, we and others have successfully identified oval cells and adult liver stem/progenitor cells. Here, we describe the identification and isolation of mouse liver stem/progenitor cells using antibodies against specific cell-surface marker molecules.
Do regional methods really help reduce uncertainties in flood frequency analyses?
NASA Astrophysics Data System (ADS)
Cong Nguyen, Chi; Payrastre, Olivier; Gaume, Eric
2013-04-01
Flood frequency analyses are often based on continuous measured series at gauge sites. However, the length of the available data sets is usually too short to provide reliable estimates of extreme design floods. To reduce the estimation uncertainties, the analyzed data sets have to be extended either in time, making use of historical and paleoflood data, or in space, merging data sets considered as statistically homogeneous to build large regional data samples. Nevertheless, the advantage of regional analyses, the large increase in the size of the studied data sets, may be counterbalanced by possible heterogeneities of the merged sets. The application and comparison of four different flood frequency analysis methods in two regions affected by flash floods in the south of France (Ardèche and Var) illustrates how this balance between the number of records and possible heterogeneities plays out in real-world applications. The four tested methods are: (1) a local statistical analysis based on the existing series of measured discharges, (2) a local analysis that also exploits existing information on historical floods, (3) a standard regional flood frequency analysis based on existing measured series at gauged sites and (4) a modified regional analysis including estimated extreme peak discharges at ungauged sites. Monte Carlo simulations are conducted to simulate a large number of discharge series with characteristics similar to the observed ones (type of statistical distribution, number of sites and records) to evaluate the extent to which the results obtained in these case studies can be generalized. The two case studies indicate that even small statistical heterogeneities, which are not detected by the standard homogeneity tests implemented in regional flood frequency studies, may drastically limit the usefulness of such approaches.
On the other hand, these results show that exploiting information on extreme events, either historical flood events at gauged sites or estimated extremes at ungauged sites in the considered region, is an efficient way to reduce uncertainties in flood frequency studies.
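The local-versus-regional tradeoff that the authors quantify by Monte Carlo can be sketched in miniature: fit a Gumbel distribution by the method of moments to a short gauged record and to a pooled regional sample from the same parent distribution, and compare the 100-year quantile estimates. All numbers are illustrative, and the perfectly homogeneous region assumed here is exactly the idealization the paper questions:

```python
from math import pi, sqrt, log
import random

def gumbel_quantile(sample, T):
    """T-year quantile from a Gumbel fit by the method of moments."""
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    beta = sqrt(6 * var) / pi      # Gumbel scale from the sample variance
    mu = mean - 0.5772 * beta      # location, via the Euler-Mascheroni constant
    return mu - beta * log(-log(1 - 1 / T))

rng = random.Random(0)

def gumbel_draw(mu, beta):
    """One annual-maximum discharge drawn from a Gumbel(mu, beta) parent."""
    return mu - beta * log(-log(rng.random()))

# Short local record (15 yr) vs. a pooled regional sample (10 sites x 15 yr),
# all drawn from the same hypothetical, perfectly homogeneous parent.
local = [gumbel_draw(100.0, 30.0) for _ in range(15)]
regional = [gumbel_draw(100.0, 30.0) for _ in range(150)]
q_local = gumbel_quantile(local, 100)
q_regional = gumbel_quantile(regional, 100)
true_q100 = 100.0 - 30.0 * log(-log(0.99))   # parent 100-yr quantile, ~238
```

Repeating the draw many times shows the regional estimator's much smaller sampling spread; the paper's point is that undetected heterogeneity between sites can eat up that advantage.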
Single-cell level methods for studying the effect of antibiotics on bacteria during infection.
Kogermann, Karin; Putrinš, Marta; Tenson, Tanel
2016-12-01
Considerable evidence about phenotypic heterogeneity among bacteria during infection has accumulated during recent years. This heterogeneity has to be considered if the mechanisms of infection and antibiotic action are to be understood, so we need to implement existing and find novel methods to monitor the effects of antibiotics on bacteria at the single-cell level. This review provides an overview of methods by which this aim can be achieved. Fluorescence label-based methods and Raman scattering as a label-free approach are discussed in particular detail. Other label-free methods that can provide single-cell level information, such as impedance spectroscopy and surface plasmon resonance, are briefly summarized. The advantages and disadvantages of these different methods are discussed in light of a challenging in vivo environment. Copyright © 2016 Elsevier B.V. All rights reserved.
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical properties and is computed in the same manner as the previous method. Generally, a small number of arithmetic operations, which results in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
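The key property the abstract attributes to ADI- and LOD-FDTD schemes, the unconditional stability of implicit time stepping, can be illustrated on a much simpler model problem. The sketch below applies a fully implicit (backward-Euler) step to 1D diffusion with a time-step parameter far above the explicit stability limit; the grid size and r value are hypothetical, and this is not the ADI- or LOD-FDTD update itself.

```python
import numpy as np

# Backward-Euler (fully implicit) 1D diffusion step: (I - r*L) u_new = u_old.
# Implicit schemes remain stable for any time step; this is the property
# that ADI- and LOD-FDTD exploit to relax the explicit CFL limit.
N, r = 50, 10.0            # grid points; r = D*dt/dx^2, far above explicit limit 0.5
L = np.zeros((N, N))
for i in range(1, N - 1):  # interior second-difference operator
    L[i, i - 1], L[i, i], L[i, i + 1] = 1.0, -2.0, 1.0
A = np.eye(N) - r * L      # implicit system matrix (Dirichlet boundaries)

u = np.zeros(N)
u[N // 2] = 1.0            # initial spike
for _ in range(100):
    u = np.linalg.solve(A, u)

print(abs(u).max())        # stays bounded despite r >> 0.5
```

An explicit update with the same r would blow up within a few steps; the implicit solve trades that restriction for a linear system per step, which ADI/LOD factorizations make cheap.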
Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images
Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu
2013-01-01
With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has a reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior to existing thresholding methods. PMID:23525856
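For context on the kind of local thresholding discussed above, here is a generic Niblack-style local-mean threshold in numpy. It is a conventional baseline of the sort such papers compare against, not the DHR-RP algorithm; the window size and k parameter are illustrative choices.

```python
import numpy as np

def local_mean_threshold(img, win=5, k=0.0):
    """Niblack-style local threshold: a pixel is foreground if it exceeds
    the local window mean by k standard deviations. A generic baseline,
    not the DHR-RP algorithm itself."""
    h, w = img.shape
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=bool)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + win, j:j + win]
            out[i, j] = img[i, j] > window.mean() + k * window.std()
    return out

# Bright "vessel" line on a dark background
img = np.zeros((9, 9)); img[4, :] = 1.0
mask = local_mean_threshold(img)
print(mask[4].all(), mask[0].any())  # vessel row detected, background clean
```

Methods of this family use only intensity statistics; DHR-RP's contribution, per the abstract, is adding geometric (tube-like) information that plain local statistics ignore.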
Algicidal bacteria in the sea and their impact on algal blooms.
Mayali, Xavier; Azam, Farooq
2004-01-01
Over the past two decades, many reports have revealed the existence of bacteria capable of killing phytoplankton. These algicidal bacteria sometimes increase in abundance concurrently with the decline of algal blooms, suggesting that they may affect algal bloom dynamics. Here, we synthesize the existing knowledge on algicidal bacteria interactions with marine eukaryotic microalgae. We discuss the effectiveness of the current methods to characterize the algicidal phenotype in an ecosystem context. We briefly consider the literature on the phylogenetic identification of algicidal bacteria, their interaction with their algal prey, the characterization of algicidal molecules, and the enumeration of algicidal bacteria during algal blooms. We conclude that, due to limitations of current methods, the evidence for algicidal bacteria causing algal bloom decline is circumstantial. New methods and an ecosystem approach are needed to test hypotheses on the impact of algicidal bacteria in algal bloom dynamics. This will require enlarging the scope of inquiry from its current focus on the potential utility of algicidal bacteria in the control of harmful algal blooms. We suggest conceptualizing bacterial algicidy within the general problem of bacterial regulation of algal community structure in the ocean.
Determination of N epsilon-(carboxymethyl)lysine in foods and related systems.
Ames, Jennifer M
2008-04-01
The sensitive and specific determination of advanced glycation end products (AGEs) is of considerable interest because these compounds have been associated with pro-oxidative and proinflammatory effects in vivo. AGEs form when carbonyl compounds, such as glucose and its oxidation products, glyoxal and methylglyoxal, react with the epsilon-amino group of lysine and the guanidino group of arginine to give structures including N epsilon-(carboxymethyl)lysine (CML), N epsilon-(carboxyethyl)lysine, and hydroimidazolones. CML is frequently used as a marker for AGEs in general. It exists in both free and peptide-bound forms. Analysis of CML involves its extraction from the food (including protein hydrolysis to release any peptide-bound adduct) and determination by immunochemical or instrumental means. Various factors must be considered at each step of the analysis. Extraction, hydrolysis, and sample clean-up are all less straightforward for food samples, compared to plasma and tissue. The immunochemical and instrumental methods all have their advantages and disadvantages, and no perfect method exists. Currently, different procedures are being used in different laboratories, and there is an urgent need to compare, improve, and validate methods.
A protocol for lifetime energy and environmental impact assessment of building insulation materials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shrestha, Som S., E-mail: shresthass@ornl.gov; Biswas, Kaushik; Desjarlais, Andre O.
This article describes a proposed protocol that is intended to provide a comprehensive list of factors to be considered in evaluating the direct and indirect environmental impacts of building insulation materials, as well as detailed descriptions of standardized calculation methodologies to determine those impacts. The energy and environmental impacts of insulation materials can generally be divided into two categories: (1) direct impacts due to the embodied energy of the insulation materials and other factors and (2) indirect or environmental impacts avoided as a result of reduced building energy use due to the addition of insulation. Standards and product category rules exist which provide guidelines about the life cycle assessment (LCA) of materials, including building insulation products. However, critical reviews have suggested that these standards fail to provide complete guidance to LCA studies and suffer from ambiguities regarding the determination of the environmental impacts of building insulation and other products. The focus of the assessment protocol described here is to identify all factors that contribute to the total energy and environmental impacts of different building insulation products and, more importantly, to provide standardized determination methods that will allow comparison of different insulation material types. Further, the intent is not to replace current LCA standards but to provide a well-defined, easy-to-use comparison method for insulation materials using existing LCA guidelines. - Highlights: • We propose a protocol to evaluate the environmental impacts of insulation materials. • The protocol considers all life cycle stages of an insulation material. • Both the direct environmental impacts and the indirect impacts are defined. • Standardized calculation methods for the ‘avoided operational energy’ are defined. • Standardized calculation methods for the ‘avoided environmental impact’ are defined.
Pseudorange Measurement Method Based on AIS Signals.
Zhang, Jingbo; Zhang, Shufang; Wang, Jinpeng
2017-05-22
In order to use the existing automatic identification system (AIS) to provide additional navigation and positioning services, a complete pseudorange measurement solution is presented in this paper. Through mathematical analysis of the AIS signal, the bit-0-phases in the digital sequences were determined as the timestamps. Monte Carlo simulation was carried out to compare the accuracy of zero-crossing and differential-peak detection, the two timestamp detection methods, in the additive white Gaussian noise (AWGN) channel. Considering the low-speed and low-dynamic motion characteristics of ships, an optimal estimation method based on the minimum mean square error is proposed to improve detection accuracy. Furthermore, the α difference filter algorithm was used to achieve fusion of the optimal estimation results of the two detection methods. The results show that the algorithm can greatly improve the accuracy of pseudorange estimation under low signal-to-noise ratio (SNR) conditions. In order to verify the effectiveness of the scheme, prototypes containing the measurement scheme were developed and field tests in Xinghai Bay of Dalian (China) were performed. The test results show that the pseudorange measurement accuracy was better than 28 m (σ) without any modification of the existing AIS system.
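The zero-crossing timestamp detector compared in the abstract can be sketched generically: find adjacent sample pairs of opposite sign and interpolate linearly between them. The sample rate, tone frequency and phase below are hypothetical, and this is not the authors' implementation.

```python
import numpy as np

def zero_crossing_times(signal, t):
    """Estimate zero-crossing instants of a sampled waveform by linear
    interpolation between samples of opposite sign. A generic sketch of
    a zero-crossing timestamp detector, not the paper's algorithm."""
    s = np.asarray(signal, dtype=float)
    idx = np.where(np.signbit(s[:-1]) != np.signbit(s[1:]))[0]
    # linear interpolation: t_c = t[i] - s[i]*(t[i+1]-t[i])/(s[i+1]-s[i])
    return t[idx] - s[idx] * (t[idx + 1] - t[idx]) / (s[idx + 1] - s[idx])

fs = 1000.0                              # hypothetical sample rate, Hz
t = np.arange(10) / fs                   # 10 ms of samples
sig = np.sin(2 * np.pi * 250 * t + 0.1)  # 250 Hz tone, arbitrary phase
print(zero_crossing_times(sig, t))       # four crossings, ~2 ms apart
```

Noise perturbs the sign-change positions, which is why the paper fuses this detector with differential-peak detection through an optimal estimator before forming pseudoranges.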
A Statistical Approach for Testing Cross-Phenotype Effects of Rare Variants
Broadaway, K. Alaine; Cutler, David J.; Duncan, Richard; Moore, Jacob L.; Ware, Erin B.; Jhun, Min A.; Bielak, Lawrence F.; Zhao, Wei; Smith, Jennifer A.; Peyser, Patricia A.; Kardia, Sharon L.R.; Ghosh, Debashis; Epstein, Michael P.
2016-01-01
Increasing empirical evidence suggests that many genetic variants influence multiple distinct phenotypes. When cross-phenotype effects exist, multivariate association methods that consider pleiotropy are often more powerful than univariate methods that model each phenotype separately. Although several statistical approaches exist for testing cross-phenotype effects for common variants, there is a lack of similar tests for gene-based analysis of rare variants. In order to fill this important gap, we introduce a statistical method for cross-phenotype analysis of rare variants using a nonparametric distance-covariance approach that compares similarity in multivariate phenotypes to similarity in rare-variant genotypes across a gene. The approach can accommodate both binary and continuous phenotypes and further can adjust for covariates. Our approach yields a closed-form test whose significance can be evaluated analytically, thereby improving computational efficiency and permitting application on a genome-wide scale. We use simulated data to demonstrate that our method, which we refer to as the Gene Association with Multiple Traits (GAMuT) test, provides increased power over competing approaches. We also illustrate our approach using exome-chip data from the Genetic Epidemiology Network of Arteriopathy. PMID:26942286
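The nonparametric distance-covariance machinery underlying GAMuT can be illustrated with the basic sample statistic of Székely et al. for two univariate samples. This is a generic sketch only, not the published test (no kernel similarity matrices, covariate adjustment, or analytic p-values):

```python
import numpy as np

def dcov2(x, y):
    """Sample squared distance covariance: double-center each pairwise
    distance matrix and average their elementwise product. Captures
    dependence of any form, the idea GAMuT builds on."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])  # pairwise distances in x
    b = np.abs(y[:, None] - y[None, :])  # pairwise distances in y
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    return (A * B).mean()

x = np.array([0., 1., 2., 3., 4., 5.])
# dependence (y = 2x) yields a larger statistic than a shuffled pairing
print(dcov2(x, 2 * x), dcov2(x, np.array([3., 0., 5., 1., 4., 2.])))
```

In GAMuT the same comparison-of-similarities idea is applied between a multivariate phenotype similarity matrix and a rare-variant genotype similarity matrix across a gene.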
Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.
Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong
2018-01-01
Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). Features of directional antennas and the visual data make WVSNs more complex than the conventional Wireless Sensor Network (WSN). The virtual backbone is a technique capable of constructing clusters; the version associated with the aggregation operation is also referred to as the virtual backbone tree. In most of the existing literature, the main focus is on the efficiency brought by the construction of clusters, and existing methods generally neglect local-balance problems. To fill this gap, a Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measurement called energy consumption density is proposed for evaluating the adequacy of results in cluster-based construction problems. Moreover, a directional virtual backbone construction scheme is proposed by considering the local-balance factor. Furthermore, an associated network coding mechanism is utilized to construct DVBDAS. Finally, both theoretical analysis of the proposed DVBDAS and simulations are given for evaluating its performance. The experimental results prove that the proposed DVBDAS achieves higher performance in terms of both energy preservation and network lifetime extension than the existing methods.
Health systems around the world - a comparison of existing health system rankings.
Schütte, Stefanie; Acevedo, Paula N Marin; Flahault, Antoine
2018-06-01
Existing health systems all over the world differ due to the different combinations of components that can be considered for their establishment. The ranking of health systems has been a focal point for many years, especially the issue of performance. In 2000 the World Health Organization (WHO) performed a ranking to compare the performance of the health systems of its member countries. Since then, other health system rankings have been performed, and the topic became an issue of public discussion. A point of contention regarding these rankings is the methodology employed by each of them, since no gold standard exists. Therefore, this review focuses on evaluating the methodologies of each existing health system performance ranking to assess their reproducibility and transparency. A search was conducted to identify existing health system rankings, and a questionnaire was developed for the comparison of the methodologies based on the following indicators: (1) general information, (2) statistical methods, (3) data, and (4) indicators. Overall, nine rankings were identified, of which six focused on the measurement of population health without any financial component and were therefore excluded. Finally, three health system rankings were selected for this review: "Health Systems: Improving Performance" by the WHO, "Mirror, Mirror on the Wall: How the Performance of the US Health Care System Compares Internationally" by the Commonwealth Fund, and "The Most Efficient Health Care" by Bloomberg. After the rankings were compared by scoring them according to the indicators, the ranking performed by the WHO was considered the most complete regarding reproducibility and transparency of methodology. This review and comparison could help in establishing consensus in the field of health system research. It may also help in giving recommendations for future health rankings and in evaluating the current gap in the literature.
NASA Astrophysics Data System (ADS)
Wang, Fu; Ma, Dexin; Bührig-Polaczek, Andreas
2017-11-01
The nucleation behavior of γ/γ' eutectics during the solidification of a single-crystal superalloy with additional carbon was investigated using the directional solidification quenching method. The results show that nucleation of the γ/γ' eutectics can occur directly on the existing γ dendrites, directly in the remaining liquid, or on the primary MC-type carbides. The γ/γ' eutectics formed through the latter two mechanisms have crystal orientations different from that of the γ matrix. This suggests that conventional Ni-based single-crystal superalloy castings with additional carbon only guarantee the monocrystallinity of the γ matrix and some γ/γ' eutectics and that, in addition to the carbides, other misoriented polycrystalline microstructures exist in macroscopically considered "single-crystal" superalloy castings.
Graph theoretical stable allocation as a tool for reproduction of control by human operators
NASA Astrophysics Data System (ADS)
van Nooijen, Ronald; Ertsen, Maurits; Kolechkina, Alla
2016-04-01
During the design of central control algorithms for existing water resource systems under manual control it is important to consider the interaction with parts of the system that remain under manual control and to compare the proposed new system with the existing manual methods. In graph theory the "stable allocation" problem has good solution algorithms and allows for formulation of flow distribution problems in terms of priorities. As a test case for the use of this approach we used the algorithm to derive water allocation rules for the Gezira Scheme, an irrigation system located between the Blue and White Niles south of Khartoum. In 1925, Gezira started with 300,000 acres; currently it covers close to two million acres.
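The stable-allocation family the authors draw on generalizes the classical stable matching problem, whose deferred-acceptance (Gale-Shapley) solution is easy to sketch. The canal and field names below are hypothetical illustrations, not the Gezira data:

```python
def gale_shapley(prop_prefs, acc_prefs):
    """Classical deferred-acceptance algorithm for stable matching, the
    simplest member of the stable-allocation family used in the paper.
    Preferences are dicts: agent -> list of partners, best first."""
    free = list(prop_prefs)                 # unmatched proposers
    next_choice = {p: 0 for p in prop_prefs}
    engaged = {}                            # acceptor -> proposer
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acc_prefs.items()}
    while free:
        p = free.pop()
        a = prop_prefs[p][next_choice[p]]   # best acceptor not yet tried
        next_choice[p] += 1
        if a not in engaged:
            engaged[a] = p
        elif rank[a][p] < rank[a][engaged[a]]:
            free.append(engaged[a])         # acceptor trades up
            engaged[a] = p
        else:
            free.append(p)                  # rejected, try next choice
    return {p: a for a, p in engaged.items()}

# Hypothetical canals (proposers) and fields (acceptors)
canals = {"c1": ["f1", "f2"], "c2": ["f1", "f2"]}
fields = {"f1": ["c2", "c1"], "f2": ["c1", "c2"]}
print(gale_shapley(canals, fields))  # {'c2': 'f1', 'c1': 'f2'}
```

The stable-allocation variant additionally handles divisible quantities (e.g., flow volumes), which is what makes the priority-based formulation of water distribution possible.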
CFD-Predicted Tile Heating Bump Factors Due to Tile Overlay Repairs
NASA Technical Reports Server (NTRS)
Lessard, Victor R.
2006-01-01
A Computational Fluid Dynamics investigation of the Orbiter's Tile Overlay Repair (TOR) is performed to assess the aeroheating Damage Assessment Team's (DAT) existing heating correlation method for protuberance interference heating on the surrounding thermal protection system. Aerothermodynamic heating analyses are performed for TORs at the design reference damage locations, body points 1800 and 1075, for a Mach 17.9, α = 39 deg STS-107 flight trajectory point with laminar flow. Six different cases are considered. The computed peak heating bump factors on the surrounding tiles are below the DAT's heating bump factor values for the smooth-tile cases. However, for the uneven-tile cases the peak interference heating is shown to be considerably higher than the existing correlation predicts.
Liquefaction assessment based on combined use of CPT and shear wave velocity measurements
NASA Astrophysics Data System (ADS)
Bán, Zoltán; Mahler, András; Győri, Erzsébet
2017-04-01
Soil liquefaction is one of the most devastating secondary effects of earthquakes and can cause significant damage to built infrastructure. For this reason, liquefaction hazard shall be considered in all regions where moderate-to-high seismic activity coincides with saturated, loose, granular soil deposits. Several approaches exist to take this hazard into account, of which the in-situ-test-based empirical methods are the most commonly used in practice. These methods are generally based on the results of CPT, SPT or shear wave velocity measurements. In more complex or high-risk projects, CPT and VS measurements are often performed at the same location, commonly in the form of seismic CPT. Furthermore, the VS profile determined by surface wave methods can also supplement the standard CPT measurement. However, combined use of both in-situ indices in one single empirical method is limited. For this reason, the goal of this research was to develop such an empirical method within the framework of simplified empirical procedures, in which the results of CPT and VS measurements are used in parallel and can supplement each other. The combination of two in-situ indices, a small-strain property measurement with a large-strain measurement, can reduce the uncertainty of empirical methods. In the first step, the existing liquefaction case history databases were carefully reviewed to select sites where records of both CPT and VS measurements are available. After implementing the necessary corrections on the 98 gathered case histories with respect to fines content, overburden pressure and magnitude, a logistic regression was performed to obtain the probability contours of liquefaction occurrence. Logistic regression is often used to explore the relationship between a binary response and a set of explanatory variables.
The occurrence or absence of liquefaction can be considered a binary outcome, and the equivalent clean-sand value of the normalized, overburden-corrected cone tip resistance (qc1Ncs), the overburden-corrected shear wave velocity (VS1), and the magnitude- and effective-stress-corrected cyclic stress ratio (CSR at M = 7.5 and σv' = 1 atm) were considered as input variables. In this case, the graphical representation of the cyclic resistance ratio curve for a given probability has been replaced by a surface that separates the liquefaction and non-liquefaction cases.
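A minimal version of the fitting step, logistic regression by plain gradient descent on two liquefaction-style predictors, can be sketched as follows. The data points are invented for illustration and are not from the paper's 98-case database:

```python
import numpy as np

# Logistic regression by batch gradient descent on toy, hypothetical data
# mimicking the setup above: the two features stand in for (qc1Ncs, CSR);
# label 1 = liquefaction observed. Not the authors' database or fit.
X = np.array([[60, .30], [80, .28], [100, .25], [140, .22],
              [170, .18], [190, .12]], float)
y = np.array([1, 1, 1, 0, 0, 0], float)   # liquefied at low resistance

Xs = (X - X.mean(0)) / X.std(0)           # standardize features
Xb = np.hstack([np.ones((len(Xs), 1)), Xs])

w = np.zeros(3)
for _ in range(5000):
    p = 1 / (1 + np.exp(-Xb @ w))         # predicted P(liquefaction)
    w -= 0.1 * Xb.T @ (p - y) / len(y)    # gradient of the log-loss

p = 1 / (1 + np.exp(-Xb @ w))
print(np.round(p, 2))  # high probabilities for weak, heavily loaded soils
```

With three predictors, the fitted model's iso-probability contours become the surfaces in (qc1Ncs, VS1, CSR) space that the abstract describes.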
Linear quadratic optimization for positive LTI system
NASA Astrophysics Data System (ADS)
Muhafzan, Yenti, Syafrida Wirma; Zulakmal
2017-05-01
Nowadays, linear quadratic optimization subject to a positive linear time-invariant (LTI) system constitutes an interesting study, as it can serve as a mathematical model for a variety of real problems whose variables must be nonnegative and whose trajectories must remain nonnegative. In this paper we propose a method to generate an optimal control for linear quadratic optimization subject to a positive LTI system. A sufficient condition that guarantees the existence of such an optimal control is discussed.
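For the standard (unconstrained) linear quadratic problem, the optimal gain follows from the continuous algebraic Riccati equation; the sketch below uses SciPy on a hypothetical Metzler A matrix, the hallmark of positive LTI systems. It illustrates ordinary LQR synthesis only, not the paper's sufficient condition or positivity-preserving construction:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Standard LQR: minimize the integral of x'Qx + u'Ru for dx/dt = Ax + Bu.
# A is Metzler (nonnegative off-diagonal entries), as in positive systems;
# all matrices here are hypothetical illustrations.
A = np.array([[-1.0, 0.5], [0.2, -2.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)   # solves A'P + PA - PBR^-1B'P + Q = 0
K = np.linalg.solve(R, B.T @ P)        # optimal state feedback u = -Kx
eigs = np.linalg.eigvals(A - B @ K)
print(K, eigs.real)                    # closed loop is stable
```

The difficulty the paper addresses is that this unconstrained gain need not keep the closed-loop trajectories nonnegative, hence the need for an existence condition specific to positive systems.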
Research on Energy-saving Shape Design of High School Library Building in Cold Region
NASA Astrophysics Data System (ADS)
Hui, Zhao; Weishuang, Xie; Zirui, Tong
2017-11-01
Considering the climatic characteristics of cold regions, existing high school libraries in Changchun are studied through field investigation of their actual conditions. Mathematical analysis and CAD methods are used to summarize the relation between building shape and energy saving for high school libraries. Strategies are put forward for the sustainable development of high school library buildings in cold regions, providing a reliable design basis for the construction of high school libraries in Changchun.
Peopling the past: New perspectives on the ancient Maya
Robin, Cynthia
2001-01-01
The new direction in Maya archaeology is toward achieving a greater understanding of people and their roles and their relations in the past. To answer emerging humanistic questions about ancient people's lives, Mayanists are increasingly making use of new and existing scientific methods from archaeology and other disciplines. Maya archaeology is bridging the divide between the humanities and sciences to answer questions about ancient people previously considered beyond the realm of archaeological knowledge. PMID:11136245
Global solutions to the equation of thermoelasticity with fading memory
NASA Astrophysics Data System (ADS)
Okada, Mari; Kawashima, Shuichi
2017-07-01
We consider the initial-history value problem for the one-dimensional equation of thermoelasticity with fading memory. It is proved that if the data are smooth and small, then a unique smooth solution exists globally in time and converges to the constant equilibrium state as time goes to infinity. Our proof is based on a technical energy method which makes use of the strict convexity of the entropy function and the properties of strongly positive definite kernels.
Antimatter Requirements and Energy Costs for Near-Term Propulsion Applications
NASA Technical Reports Server (NTRS)
Schmidt, G. R.; Gerrish, H. P.; Martin, J. J.; Smith, G. A.; Meyer, K. J.
1999-01-01
The superior energy density of antimatter annihilation has often been pointed to as the ultimate source of energy for propulsion. However, the limited capacity and very low efficiency of present-day antiproton production methods suggest that antimatter may be too costly to consider for near-term propulsion applications. We address this issue by assessing the antimatter requirements for six different types of propulsion concepts, including two in which antiprotons are used to drive energy release from combined fission/fusion. These requirements are compared against the capacity of both the current antimatter production infrastructure and the improved capabilities that could exist within the early part of next century. Results show that although it may be impractical to consider systems that rely on antimatter as the sole source of propulsive energy, the requirements for propulsion based on antimatter-assisted fission/fusion do fall within projected near-term production capabilities. In fact, a new facility designed solely for antiproton production but based on existing technology could feasibly support interstellar precursor missions and omniplanetary spaceflight with antimatter costs ranging up to $6.4 million per mission.
Correlated Topic Vector for Scene Classification.
Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang
2017-07-01
Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector intends to naturally utilize the correlations among topics, which are seldom considered in conventional feature encoding, e.g., the Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions to the topics of visual words have been further employed by incorporating the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on the deep CNN features and outperforms existing Fisher kernel-based features.
Alexander, Gregory L.; Popejoy, Lori; Lyons, Vanessa; Shumate, Sue; Mueller, Jessica; Galambos, Colleen; Vogelsmeier, Amy; Rantz, Marilyn; Flesner, Marcia
2016-01-01
Objectives Limited research exists on nursing home information technologies, such as health information exchange (HIE) systems. Capturing the experiences of early HIE adopters provides vital information about how these systems are used. In this study, we conduct a secondary analysis of qualitative data captured during interviews with 15 nursing home leaders representing 14 nursing homes in the midwestern United States that are part of the Missouri Quality Improvement Initiative (MOQI) national demonstration project. Methods The interviews were conducted as part of an external evaluation of the HIE vendor contracting with the MOQI initiative with the purpose of understanding the challenges and successes of HIE implementation, with a particular focus on Direct HIE services. Results Emerging themes included (1) incorporating HIE into existing work processes, (2) participation inside and outside the facility, (3) appropriate training and retraining, (4) getting others to use the HIE, (5) getting the HIE operational, and (6) putting policies for technology into place. Discussion Three essential areas should be considered for nursing homes considering HIE adoption: readiness to adopt technology, availability of technology resources, and matching of new clinical workflows. PMID:27843423
Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.
Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui
2018-02-01
In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most of the multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method aiming to capture the correlations among multiple concepts by leveraging hypergraph that is proved to be beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better show the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.
Quantitative Evaluation Method of Each Generation Margin for Power System Planning
NASA Astrophysics Data System (ADS)
Su, Su; Tanaka, Kazuyuki
As power system deregulation advances, competition among power companies becomes heated, and they seek more efficient system planning using existing facilities. Therefore, an efficient system planning method is needed. This paper proposes a quantitative evaluation method for the (N-1) generation margin considering the overload and voltage stability restrictions. Concerning the generation margin related to the overload, a fast solution method without recalculation of the (N-1) Y-matrix is proposed. Regarding voltage stability, this paper proposes an efficient method to search for the stability limit. The IEEE30 model system, which is composed of 6 generators and 14 load nodes, is employed to validate the proposed method. According to the results, the proposed method can reduce the computational cost of the generation margin related to the overload under the (N-1) condition, and specify the value quantitatively.
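An (N-1) overload check of the kind described can be sketched with a DC power flow on a toy network: remove one generator, let the slack bus absorb the imbalance, and recompute line-flow margins. The 3-bus data are hypothetical, and this brute-force recomputation is exactly what the paper's fast method avoids:

```python
import numpy as np

# DC power-flow sketch of an (N-1) generation-outage overload check.
# Toy 3-bus network; not the paper's method, which skips rebuilding the
# (N-1) network matrices.
lines = [(0, 1, 0.1), (1, 2, 0.1), (0, 2, 0.2)]  # (from, to, reactance)
limit = np.array([1.0, 1.0, 1.0])                # thermal limits, p.u.

def dc_flows(p_inj):
    """Line flows from a DC power flow; bus 0 is the slack."""
    B = np.zeros((3, 3))
    for i, j, x in lines:
        b = 1.0 / x
        B[i, i] += b; B[j, j] += b
        B[i, j] -= b; B[j, i] -= b
    theta = np.zeros(3)
    theta[1:] = np.linalg.solve(B[1:, 1:], p_inj[1:])
    return np.array([(theta[i] - theta[j]) / x for i, j, x in lines])

base   = np.array([-1.5, 0.9, 0.6])   # load at bus 0, generators at 1 and 2
outage = np.array([-1.5, 0.0, 0.6])   # generator at bus 1 lost
margin = limit - np.abs(dc_flows(outage))
print(margin)                          # all positive: no post-outage overload
```

The generation margin is then the largest additional dispatch change the network can absorb before any such margin reaches zero.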
NASA Technical Reports Server (NTRS)
Flores, C. C.; Gurkin, L. W.
1982-01-01
The three-stage Taurus-Nike-Tomahawk launch vehicle is being considered for performance enhancement of the existing Taurus-Tomahawk flight system. In addition, performance enhancement of other existing two-stage launch vehicles is being considered through the use of tandem booster systems. Aeroballistic characteristics of the proposed Taurus-Nike-Tomahawk vehicle are presented, as are overall performance capabilities of other potential three-stage flight systems.
A Weighted Multipath Measurement Based on Gene Ontology for Estimating Gene Products Similarity
Liu, Lizhen; Dai, Xuemin; Song, Wei; Lu, Jingli
2014-01-01
Many different methods have been proposed for calculating the semantic similarity of term pairs based on the gene ontology (GO). Most existing methods are based on information content (IC), and the methods based on IC are used more commonly than those based on the structure of the GO. However, most IC-based methods not only fail to handle identical annotations but also show a strong bias toward well-annotated proteins. We propose a new method called weighted multipath measurement (WMM) for estimating the semantic similarity of gene products based on the structure of the GO. We not only considered the contribution of every path between two GO terms but also took the depth of the lowest common ancestors into account. We assigned different weights to different kinds of edges in the GO graph. The similarity values calculated by WMM can be reused because they depend only on the characteristics of the GO terms. Experimental results showed that the similarity values obtained by WMM have a higher accuracy. We compared the performance of WMM with that of other methods using GO data and gene annotation datasets for yeast and humans downloaded from the GO database. We found that WMM is better suited for prediction of gene function than most existing IC-based methods and that it can distinguish proteins with identical annotations (two proteins annotated with the same terms) from each other. PMID:25229994
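One structural ingredient WMM builds on, similarity that grows with the depth of the lowest common ancestor, can be sketched with a Wu-Palmer-style measure on a toy GO-like DAG. The terms below are invented, and this is not WMM's multipath weighting itself:

```python
# Wu-Palmer-style structural term similarity on a toy GO-like DAG:
# similarity grows with the depth of the deepest common ancestor.
TOY_DAG = {                      # child -> parents (hypothetical terms)
    "root": [],
    "binding": ["root"], "catalysis": ["root"],
    "dna_binding": ["binding"], "rna_binding": ["binding"],
    "tf_activity": ["dna_binding"],
}

def ancestors(term, dag):
    seen, stack = set(), [term]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(dag[t])
    return seen                  # includes the term itself

def depth(term, dag):            # shortest distance up to the root
    d, frontier = 0, {term}
    while "root" not in frontier:
        frontier = {p for t in frontier for p in dag[t]}
        d += 1
    return d

def wu_palmer(a, b, dag):
    common = ancestors(a, dag) & ancestors(b, dag)
    lca_depth = max(depth(t, dag) for t in common)
    return 2 * lca_depth / (depth(a, dag) + depth(b, dag))

print(wu_palmer("dna_binding", "rna_binding", TOY_DAG))  # siblings: 0.5
print(wu_palmer("dna_binding", "catalysis", TOY_DAG))    # only root shared: 0.0
```

WMM departs from this single-path picture by aggregating over every path between two terms and weighting edge types differently, which is what lets it separate proteins with identical annotations.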
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-03-09
This work represents a first-of-its-kind successful application of advanced numerical methods to realistic two-phase flow problems with the two-fluid six-equation two-phase flow model. These advanced numerical methods include a high-resolution spatial discretization scheme on staggered grids, high-order fully implicit time integration schemes, and the Jacobian-free Newton–Krylov (JFNK) method as the nonlinear solver. The computer code developed in this work has been extensively validated against existing experimental flow boiling data in vertical pipes and rod bundles, which cover wide ranges of experimental conditions such as pressure, inlet mass flux, wall heat flux and exit void fraction. An additional code-to-code benchmark with the RELAP5-3D code further verifies the correct code implementation. The combined methods employed in this work exhibit strong robustness in solving two-phase flow problems even when phase appearance (boiling) and realistic discrete flow regimes are considered. Transitional flow regimes used in existing system analysis codes, normally introduced to overcome numerical difficulty, were completely removed in this work. This in turn opens the possibility of utilizing more sophisticated flow regime maps in the future to further improve simulation accuracy.
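The JFNK mechanic, a Newton loop whose linear solves use only matrix-free Jacobian-vector products, can be sketched on a toy algebraic system rather than the six-equation flow model; the finite-difference parameter and the hand-rolled GMRES are assumptions for illustration:

```python
import numpy as np

def gmres_matfree(Av, b, x0, m):
    """Basic (unrestarted) GMRES using only the action v -> A v."""
    r0 = b - Av(x0)
    beta = np.linalg.norm(r0)
    if beta == 0.0:
        return x0
    V = [r0 / beta]
    H = np.zeros((m + 1, m))
    k = 0
    for j in range(m):                      # Arnoldi process
        w = Av(V[j])
        for i in range(j + 1):
            H[i, j] = w @ V[i]
            w = w - H[i, j] * V[i]
        H[j + 1, j] = np.linalg.norm(w)
        k = j + 1
        if H[j + 1, j] < 1e-12:             # happy breakdown
            break
        V.append(w / H[j + 1, j])
    e1 = np.zeros(k + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
    return x0 + np.stack(V[:k], axis=1) @ y

def jfnk(F, u0, iters=30, eps=1e-7):
    """Newton iteration; J*v is approximated by a finite difference of F,
    so the Jacobian is never formed (the 'Jacobian-free' part)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(iters):
        Fu = F(u)
        if np.linalg.norm(Fu) < 1e-10:
            break
        def Av(v, u=u, Fu=Fu):
            return (F(u + eps * v) - Fu) / eps
        du = gmres_matfree(Av, -Fu, np.zeros_like(u), m=u.size)
        u = u + du
    return u

# Toy nonlinear system F(u) = A u + u^3 - b with a known solution u_star.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
u_star = np.array([0.5, -0.3, 0.2])
b = A @ u_star + u_star ** 3

def F(u):
    return A @ u + u ** 3 - b

u = jfnk(F, np.zeros(3))
```

In production codes the GMRES iteration is preconditioned; the sketch omits that, which is affordable only because the toy system is tiny.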
NASA Astrophysics Data System (ADS)
Yoshida, T.; Sato, T.; Oyama, H.
2014-12-01
Methane hydrates in subsea environments near Japan are believed to be a new natural gas resource. These methane hydrate crystals are very small and exist in the intergranular pores of sandy sediments in alternating sand-mud layers. Several processes for recovering natural gas from methane hydrate in a sedimentary reservoir have been proposed, and almost all of them obtain dissociated gas from the hydrates. When methane hydrates dissociate, gas and water are released; they flow through the pore space of the alternating sand-mud layers, and there is a possibility that the mud layers are eroded by these flows. Such mud erosion may cause production troubles such as skin formation or well instability. In this study, we carried out pore-scale numerical simulations to represent mud erosion. This research aims to develop a fundamental simulation method based on the Lattice Boltzmann Method (LBM). In the simulation, sand particles are generated numerically in a simulation domain of approximately 200×200×200 μm³. Periodic boundary conditions are used except at the mud layers. The water/gas flow in the pore space is calculated by LBM, and the shear stress distribution is obtained where the flow interacts with the mud surface. We regard this shear stress as the driving force of mud erosion. As a result, mud erosion can be reproduced numerically by adjusting parameters such as the critical shear stress. We confirmed that simulation using LBM is appropriate for modelling mud erosion.
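The critical-shear-stress criterion can be sketched as a simple excess-shear-stress law; the law, the critical stress and the rate constant below are assumed illustrative values, not the paper's calibration:

```python
import numpy as np

# Excess-shear-stress erosion law (a common closure, assumed here):
#   E = k * max(tau - tau_c, 0)
def erosion_rate(tau, tau_c=1.5, k=2.0e-6):
    """Erosion rate [kg/m^2/s] on the mud surface from wall shear stress
    tau [Pa]; cells below the critical stress tau_c do not erode."""
    return k * np.maximum(tau - tau_c, 0.0)

tau = np.array([0.5, 1.5, 2.0, 4.0])   # sample wall shear stresses [Pa]
rate = erosion_rate(tau)
eroded = rate > 0                       # which surface cells erode
```

In a pore-scale coupling, the LBM flow solve would supply `tau` at each mud-surface cell and the eroded cells would be removed from the solid geometry before the next flow step.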
Modelling of non-equilibrium flow in the branched pipeline systems
NASA Astrophysics Data System (ADS)
Sumskoi, S. I.; Sverchkov, A. M.; Lisanov, M. V.; Egorov, A. F.
2016-09-01
This article presents a mathematical model and a numerical method for solving the water hammer problem in a branched pipeline system. The problem is considered in a one-dimensional non-stationary formulation taking into account realities such as changes in the diameter of the pipeline and its branches. Comparison with an existing analytic solution shows that the proposed method possesses good accuracy. With the help of the developed model and numerical method, the problem of the transmission of a complex of compression waves in a branching pipeline system when several shut-down valves operate has been solved. The proposed model and method may be easily extended to a number of other problems, for example, describing the flow of blood in vessels.
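A minimal frictionless method-of-characteristics sketch for a single pipe (not the branched system of the paper) reproduces the classic Joukowsky surge head aΔV/g at a suddenly closed valve; all geometry and fluid values below are illustrative assumptions:

```python
import numpy as np

# Single pipe: reservoir upstream, valve downstream closed instantly at t=0.
a, g = 1000.0, 9.81          # wave speed [m/s], gravity [m/s^2]
L, n = 500.0, 50             # pipe length [m], number of reaches
H0, V0 = 50.0, 2.0           # reservoir head [m], initial velocity [m/s]
dx = L / n
dt = dx / a                  # Courant number exactly 1

H = np.full(n + 1, H0)       # initial steady state (friction neglected)
V = np.full(n + 1, V0)
max_head = H0
for _ in range(2 * n):       # march one full wave period 2L/a
    Hn, Vn = H.copy(), V.copy()
    # interior points: intersection of the C+ and C- characteristics
    Hn[1:-1] = 0.5 * (H[:-2] + H[2:]) + (a / (2 * g)) * (V[:-2] - V[2:])
    Vn[1:-1] = 0.5 * (V[:-2] + V[2:]) + (g / (2 * a)) * (H[:-2] - H[2:])
    # upstream reservoir: fixed head, velocity from the C- characteristic
    Hn[0] = H0
    Vn[0] = V[1] + (g / a) * (H0 - H[1])
    # downstream closed valve: zero flow, head from the C+ characteristic
    Vn[-1] = 0.0
    Hn[-1] = H[-2] + (a / g) * V[-2]
    H, V = Hn, Vn
    max_head = max(max_head, H[-1])
```

The peak head at the valve equals H0 + aV0/g (about 254 m here), which is the analytic Joukowsky value; branching and diameter changes add junction boundary conditions to this same scheme.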
Mean Field Type Control with Congestion (II): An Augmented Lagrangian Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achdou, Yves, E-mail: achdou@ljll.univ-paris-diderot.fr; Laurière, Mathieu
This work deals with a numerical method for solving a mean-field type control problem with congestion. It is the continuation of an article by the same authors, in which suitably defined weak solutions of the system of partial differential equations arising from the model were discussed and existence and uniqueness were proved. Here, the focus is put on numerical methods: a monotone finite difference scheme is proposed and shown to have a variational interpretation. Then an Alternating Direction Method of Multipliers for solving the variational problem is addressed; it is based on an augmented Lagrangian. Two kinds of boundary conditions are considered: periodic conditions and more realistic boundary conditions associated with state-constrained problems. Various test cases and numerical results are presented.
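The augmented-Lagrangian ADMM loop has a familiar three-step shape. As a finite-dimensional stand-in for the variational problem, it is sketched here on a lasso-type toy objective (our choice, not the paper's mean-field functional):

```python
import numpy as np

# ADMM for  min_x 0.5||Ax - b||^2 + lam*||z||_1  s.t.  x = z,
# via the augmented Lagrangian with penalty rho.
rng = np.random.default_rng(0)
n, p = 40, 10
A = rng.standard_normal((n, p))
x_true = np.zeros(p)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true

lam, rho = 0.1, 1.0
x = np.zeros(p)
z = np.zeros(p)
u = np.zeros(p)                                # scaled multiplier
Q = np.linalg.inv(A.T @ A + rho * np.eye(p))   # cached for the x-update
for _ in range(300):
    x = Q @ (A.T @ b + rho * (z - u))          # smooth quadratic subproblem
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft threshold
    u = u + x - z                              # multiplier (dual) update
```

The splitting isolates the non-smooth term into a cheap proximal step, the same structural trick the paper exploits for the congestion term.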
Revisiting the Schönbein ozone measurement methodology
NASA Astrophysics Data System (ADS)
Ramírez-González, Ignacio A.; Añel, Juan A.; Saiz-López, Alfonso; García-Feal, Orlando; Cid, Antonio; Mejuto, Juan Carlos; Gimeno, Luis
2017-04-01
Through the 19th century, the Schönbein method gained considerable popularity because of the ease with which it measures tropospheric ozone. Traditionally it has been considered that Schönbein measurements are not accurate enough to be useful. Detractors of this method argue that it is sensitive to meteorological conditions, the most important being the influence of relative humidity. As a consequence, the data obtained by this method have usually been discarded. Here we revisit this method, taking into account that values measured during the 19th century were taken using different measurement papers. We explore several concentrations of starch and potassium iodide, the basis for this measurement method. Our results are compared with previous ones existing in the literature. The validity of the Schönbein methodology is discussed, taking into account humidity and other meteorological variables.
Spacelike matching to null infinity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zenginoglu, Anil; Tiglio, Manuel
2009-07-15
We present two methods to include the asymptotic domain of a background spacetime in null directions for numerical solutions of evolution equations, so that both the radiation extraction problem and the outer boundary problem are solved. The first method is based on the geometric conformal approach; the second is a coordinate-based approach. We apply these methods to the case of a massless scalar wave equation on a Kerr spacetime. Our methods are designed to allow existing codes to reach the radiative zone by including future null infinity in the computational domain with relatively minor modifications. We demonstrate the flexibility of the methods by considering both Boyer-Lindquist and ingoing Kerr coordinates near the black hole. We also confirm numerically, for the first time, predictions due to Hod concerning tail decay rates for scalar fields at null infinity in Kerr spacetime.
Dyas, Jane V; Apekey, Tanefa; Tilling, Michelle; Siriwardena, A Niroshan
2009-09-22
Recruiting to primary care studies is complex. With the current drive to increase numbers of patients involved in primary care studies, we need to know more about successful recruitment approaches. There is limited evidence on recruitment to focus group studies, particularly when no natural grouping exists and where participants do not regularly meet. The aim of this paper is to reflect on recruitment to a focus group study, comparing the methods used with existing evidence using a resource for research recruitment, PROSPeR (Planning Recruitment Options: Strategies for Primary Care). The focus group formed part of modelling a complex intervention in primary care in the Resources for Effective Sleep Treatment (REST) study. Despite a considered approach at the design stage, there were a number of difficulties with recruitment. The recruitment strategy and subsequent revisions are detailed. The researchers' modifications to recruitment, justifications and evidence from the literature in support of them are presented. Contrary evidence is used to analyse why some aspects were unsuccessful, and evidence is used to suggest improvements. Recruitment to focus group studies should be considered in two distinct phases: getting potential participants to contact the researcher, and converting those contacts into attendance. The difficulty of recruitment in primary care is underemphasised in the literature, especially where people do not regularly come together, typified by this case study of patients with sleep problems. We recommend training GPs and nurses to recruit patients during consultations. Multiple recruitment methods should be employed from the outset, and the need to build topic-related non-financial incentives into the group meeting should be considered. Recruitment should be monitored regularly, with barriers addressed iteratively as a study progresses.
Data based abnormality detection
NASA Astrophysics Data System (ADS)
Purwar, Yashasvi
Data-based abnormality detection is a growing research field focussed on extracting information from feature-rich data. Such methods are considered non-intrusive and non-destructive in nature, which gives them a clear advantage over conventional methods. In this study, we explore different streams of data-based anomaly detection. We propose extensions and revisions to an existing valve stiction detection algorithm, supported by an industrial case study. We also explore the area of image analysis and propose a complete solution for malaria diagnosis. The proposed method is tested on images provided by the pathology laboratory at Alberta Health Service. We also address the robustness and practicality of the proposed solution.
An algorithm for spatial hierarchy clustering
NASA Technical Reports Server (NTRS)
Dejesusparada, N. (Principal Investigator); Velasco, F. R. D.
1981-01-01
A method for utilizing both spectral and spatial redundancy in compacting and preclassifying images is presented. In multispectral satellite images, a high correlation exists between neighboring image points which tend to occupy dense and restricted regions of the feature space. The image is divided into windows of the same size where the clustering is made. The classes obtained in several neighboring windows are clustered, and then again successively clustered until only one region corresponding to the whole image is obtained. By employing this algorithm only a few points are considered in each clustering, thus reducing computational effort. The method is illustrated as applied to LANDSAT images.
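The two-stage idea, cluster within windows and then cluster the resulting classes, can be sketched as follows; the single-link scalar clustering and the synthetic two-class image are illustrative assumptions, not the LANDSAT procedure itself:

```python
import numpy as np

def cluster_1d(values, tol=10.0):
    """Toy single-link clustering of scalar values: sort, split where the
    gap exceeds tol, and return one centroid per cluster."""
    v = np.sort(np.asarray(values, dtype=float))
    splits = np.where(np.diff(v) > tol)[0] + 1
    return [c.mean() for c in np.split(v, splits)]

# Synthetic 8x8 "image" with two spectral classes (values near 20 and 200).
rng = np.random.default_rng(1)
img = rng.choice([20.0, 200.0], size=(8, 8)) + rng.normal(0, 2.0, (8, 8))

# Stage 1: cluster inside each 4x4 window (few points per clustering).
window_centroids = []
for i in (0, 4):
    for j in (0, 4):
        window_centroids += cluster_1d(img[i:i+4, j:j+4].ravel())

# Stage 2: cluster the window centroids, yielding image-level classes.
classes = cluster_1d(window_centroids)
```

Each clustering pass touches only a handful of points (pixels in stage 1, centroids in stage 2), which is the computational saving the abstract describes; the real algorithm repeats stage 2 over successively larger neighbourhoods.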
Stability of discrete time recurrent neural networks and nonlinear optimization problems.
Singh, Jayant; Barabanov, Nikita
2016-02-01
We consider the method of Reduction of Dissipativity Domain to prove global Lyapunov stability of Discrete Time Recurrent Neural Networks. The standard and advanced criteria for Absolute Stability of these essentially nonlinear systems produce rather weak results. The method mentioned above is proved to be more powerful. It involves a multi-step procedure with maximization of special nonconvex functions over polytopes at every step. We derive conditions which guarantee the existence of at most one point of local maximum for such functions over every hyperplane. This nontrivial result is valid for a wide range of neuron transfer functions.
NASA Astrophysics Data System (ADS)
Wang, Qian; Xue, Anke
2018-06-01
This paper proposes a robust controller for the spacecraft rendezvous system, considering parameter uncertainties and unsymmetrical actuator saturation, based on a discrete gain-scheduling approach. By a change of variables, we transform the unsymmetrical actuator saturation control problem into a symmetrical one. The main advantage of the proposed method is that it improves the dynamic performance of the closed-loop system with a region of attraction as large as possible. By the Lyapunov approach and the scheduling technique, the existence conditions for an admissible controller are formulated in the form of linear matrix inequalities. A numerical simulation illustrates the effectiveness of the proposed method.
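The change of variables that symmetrizes the saturation can be made concrete; the bounds below are illustrative, but the identity holds for any u_min < u_max:

```python
import numpy as np

# Unsymmetrical saturation sat[u_min, u_max] rewritten as a symmetrical
# saturation (levels +/- u_s) plus a constant offset delta.
u_min, u_max = -1.0, 3.0
delta = 0.5 * (u_max + u_min)    # offset removed by the substitution
u_s = 0.5 * (u_max - u_min)      # symmetric saturation level

def sat_asym(u):
    return np.clip(u, u_min, u_max)

def sat_sym(v):
    return np.clip(v, -u_s, u_s)

# Identity: sat_asym(u) == sat_sym(u - delta) + delta for all u,
# so a controller designed for the symmetric case can absorb delta
# as a known constant input.
u = np.linspace(-5.0, 7.0, 200)
```

This is the standard trick for reusing symmetric-saturation stability results; whether the paper uses exactly this substitution is our assumption from the abstract.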
The Psychology Experiment Building Language (PEBL) and PEBL Test Battery
Mueller, Shane T.; Piper, Brian J.
2014-01-01
Background: We briefly describe the Psychology Experiment Building Language (PEBL), an open source software system for designing and running psychological experiments. New Method: We describe the PEBL test battery, a set of approximately 70 behavioral tests which can be freely used, shared, and modified. Included is a comprehensive set of past research upon which tests in the battery are based. Results: We report the results of benchmark tests that establish the timing precision of PEBL. Comparison with Existing Method: We consider alternatives to the PEBL system and battery tests. Conclusions: We conclude with a discussion of the ethical factors involved in the open source testing movement. PMID:24269254
NASA Astrophysics Data System (ADS)
Feehan, Paul M. N.
2017-09-01
We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with "mixed" boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the "degenerate" and "non-degenerate" boundaries touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite "slab". The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16].
Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].
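The two-dimensional map from the half-disk onto a strip mentioned above can be written explicitly; the following is the classical construction in our notation, not necessarily the authors' normalization:

```latex
% Classical conformal map: upper half-disk -> horizontal strip.
\[
  w \;=\; \varphi(z) \;=\; \log\frac{1+z}{1-z},
\]
% The Moebius factor (1+z)/(1-z) sends the upper half-disk
%   \{ z \in \mathbb{C} : |z| < 1,\ \operatorname{Im} z > 0 \}
% onto the first quadrant, and the logarithm flattens the quadrant
% into the strip
%   \{ w : 0 < \operatorname{Im} w < \pi/2 \},
% with the diameter (-1,1) mapped to the real axis and the
% semicircle to the line \operatorname{Im} w = \pi/2.
```

The exponential growth of the transformed operator's coefficients mentioned in the abstract comes from the derivative of the inverse map, which decays exponentially toward the ends of the strip.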
A scoping review of spatial cluster analysis techniques for point-event data.
Fritz, Charles E; Schuurman, Nadine; Robertson, Colin; Lear, Scott
2013-05-01
Spatial cluster analysis is a uniquely interdisciplinary endeavour, and so it is important to communicate and disseminate ideas, innovations, best practices and challenges across practitioners, applied epidemiology researchers and spatial statisticians. In this research we conducted a scoping review to systematically search peer-reviewed journal databases for research that has employed spatial cluster analysis methods on individual-level, address location, or x and y coordinate derived data. To illustrate the thematic issues raised by our results, methods were tested using a dataset where known clusters existed. Point pattern methods, spatial clustering and cluster detection tests, and a locally weighted spatial regression model were most commonly used for individual-level, address location data (n = 29). The spatial scan statistic was the most popular method for address location data (n = 19). Six themes were identified relating to the application of spatial cluster analysis methods and subsequent analyses, which we recommend researchers consider: exploratory analysis, visualization, spatial resolution, aetiology, scale and spatial weights. It is our intention that researchers seeking direction for using spatial cluster analysis methods consider the caveats and strengths of each approach, but also explore the numerous other methods available for this type of analysis. Applied spatial epidemiology researchers and practitioners should give special consideration to applying multiple tests to a dataset. Future research should focus on developing frameworks for selecting appropriate methods and the corresponding spatial weighting schemes.
Efficient design of CMOS TSC checkers
NASA Technical Reports Server (NTRS)
Biddappa, Anita; Shamanna, Manjunath K.; Maki, Gary; Whitaker, Sterling
1990-01-01
This paper considers the design of an efficient, robustly testable, CMOS Totally Self-Checking (TSC) Checker for k-out-of-2k codes. Most existing implementations use primitive gates and assume the single stuck-at fault model. The self-testing property has been found to fail for CMOS TSC checkers under the stuck-open fault model due to timing skews and arbitrary delays in the circuit. A new four level design using CMOS primitive gates (NAND, NOR, INVERTERS) is presented. This design retains its properties under the stuck-open fault model. Additionally, this method offers an impressive reduction (greater than 70 percent) in gate count, gate inputs, and test set size when compared to the existing method. This implementation is easily realizable and is based on Anderson's technique. A thorough comparative study has been made on the proposed implementation and Kundu's implementation and the results indicate that the proposed one is better than Kundu's in all respects for k-out-of-2k codes.
NASA Technical Reports Server (NTRS)
Hague, D. S.; Merz, A. W.
1975-01-01
Multivariable search techniques are applied to a particular class of airfoil optimization problems: the maximization of lift and the minimization of disturbance pressure magnitude in an inviscid nonlinear flow field. A variety of multivariable search techniques contained in an existing nonlinear optimization code, AESOP, are applied to this design problem. These techniques include elementary single-parameter perturbation methods, organized searches such as steepest-descent, quadratic, and Davidon methods, randomized procedures, and a generalized search acceleration technique. Airfoil design variables are seven in number and define perturbations to the profile of an existing NACA airfoil. The relative efficiency of the techniques is compared. It is shown that elementary one-parameter-at-a-time and random techniques compare favorably with organized searches in the class of problems considered. It is also shown that significant reductions in disturbance pressure magnitude can be made while retaining reasonable lift coefficient values at low free-stream Mach numbers.
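Two of the elementary strategies mentioned above can be sketched on a toy seven-variable problem; the smooth test objective stands in for the airfoil objective, which in the paper requires a flow solver:

```python
import numpy as np

# Smooth 7-variable test function (illustrative stand-in for the
# lift/disturbance-pressure objective).
def objective(x):
    return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sum(x[:-1] * x[1:]))

rng = np.random.default_rng(2)
x0 = np.zeros(7)

def one_at_a_time(x, step=0.05, sweeps=200):
    """Elementary single-parameter perturbation search."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(x)):          # perturb one variable at a time
            for d in (+step, -step):
                trial = x.copy()
                trial[i] += d
                if objective(trial) < objective(x):
                    x = trial
    return x

def random_search(x, n_trials=4000, scale=0.1):
    """Randomized procedure: keep any random move that improves."""
    x = x.copy()
    for _ in range(n_trials):
        trial = x + scale * rng.standard_normal(len(x))
        if objective(trial) < objective(x):
            x = trial
    return x

x_oaat = one_at_a_time(x0)
x_rand = random_search(x0)
```

On smooth low-dimensional objectives both simple strategies close most of the gap to the optimum, consistent with the paper's finding that they compare favorably with organized searches.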
A modified belief entropy in Dempster-Shafer framework.
Zhou, Deyun; Tang, Yongchuan; Jiang, Wen
2017-01-01
How to quantify uncertain information in the framework of Dempster-Shafer evidence theory is still an open issue. Quite a few uncertainty measures have been proposed in the Dempster-Shafer framework; however, existing studies mainly focus on the mass function itself, and the available information represented by the scale of the frame of discernment (FOD) in the body of evidence is ignored. Without taking full advantage of the information in the body of evidence, the existing methods are not fully efficient. In this paper, a modified belief entropy is proposed that considers the scale of the FOD and the relative scale of a focal element with respect to the FOD. Inspired by Deng entropy, the new belief entropy is consistent with Shannon entropy in the sense of probability consistency. Moreover, with less information loss, the new measure can overcome the shortcomings of some other uncertainty measures. A few numerical examples and a case study are presented to show the efficiency and superiority of the proposed method.
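Deng entropy has a standard closed form; the extra FOD-scale factor in the modified version below is our reading of the construction and is labeled as an assumption. Note that for Bayesian (singleton-only) mass functions the factor vanishes, which is exactly the Shannon consistency the abstract claims:

```python
from math import log2, exp

def deng_entropy(m):
    """Deng entropy of a mass function m: dict mapping focal elements
    (frozensets) to masses.  E_d = -sum m(A) log2(m(A) / (2^|A| - 1))."""
    return -sum(v * log2(v / (2 ** len(A) - 1)) for A, v in m.items() if v > 0)

def modified_belief_entropy(m, fod_size):
    """Modified belief entropy with an assumed FOD-scale factor
    e^{(|A|-1)/|X|}; the factor is 1 for singletons, preserving
    consistency with Shannon entropy for Bayesian mass functions."""
    return -sum(v * log2(v / (2 ** len(A) - 1) * exp((len(A) - 1) / fod_size))
                for A, v in m.items() if v > 0)

# Bayesian mass function: both entropies reduce to Shannon entropy (1 bit).
m_bayes = {frozenset("a"): 0.5, frozenset("b"): 0.5}
```

For a vacuous-like mass m({a,b}) = 1, Deng entropy gives log2(3) bits, and the modified measure gives a strictly smaller value, reflecting the claimed reduction in information loss for multi-element focal sets.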
A modified belief entropy in Dempster-Shafer framework
Zhou, Deyun; Jiang, Wen
2017-01-01
How to quantify uncertain information in the framework of Dempster-Shafer evidence theory is still an open issue. Quite a few uncertainty measures have been proposed in the Dempster-Shafer framework; however, existing studies mainly focus on the mass function itself, and the available information represented by the scale of the frame of discernment (FOD) in the body of evidence is ignored. Without taking full advantage of the information in the body of evidence, the existing methods are not fully efficient. In this paper, a modified belief entropy is proposed that considers the scale of the FOD and the relative scale of a focal element with respect to the FOD. Inspired by Deng entropy, the new belief entropy is consistent with Shannon entropy in the sense of probability consistency. Moreover, with less information loss, the new measure can overcome the shortcomings of some other uncertainty measures. A few numerical examples and a case study are presented to show the efficiency and superiority of the proposed method. PMID:28481914
Aircraft interior noise reduction by alternate resonance tuning
NASA Technical Reports Server (NTRS)
Bliss, Donald B.; Gottwald, James A.; Gustaveson, Mark B.; Burton, James R., III; Castellino, Craig
1989-01-01
Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies but are inadequate at lower frequencies, particularly with respect to the low blade-passage harmonics with high forcing levels found in propeller aircraft. A method is being studied which considers aircraft fuselages lined with panels alternately tuned to frequencies above and below the frequency to be attenuated. Adjacent panels would oscillate with equal amplitude, to give equal source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to become cut off and therefore non-propagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is currently being investigated both theoretically and experimentally. The new concept has potential application to reducing interior noise due to the propellers in advanced turboprop aircraft as well as in existing aircraft configurations. This report summarizes the work carried out at Duke University during the third semester of a contract supported by the Structural Acoustics Branch at NASA Langley Research Center.
NASA Astrophysics Data System (ADS)
Li, Haifeng; Zhu, Qing; Yang, Xiaoxia; Xu, Linrong
2012-10-01
Typical characteristics of remote sensing applications are concurrent tasks, such as those found in disaster rapid response. The existing composition approach for geographical information processing service chains searches for an optimal solution in what can be deemed a "selfish" way, which leads to conflicts amongst concurrent tasks and decreases the performance of all service chains. In this study, a non-cooperative game-based mathematical model to analyse the competitive relationships between tasks is proposed. A best response function is used to ensure that each task maintains utility optimisation by considering the composition strategies of other tasks and quantifying the conflicts between tasks. Based on this, an iterative algorithm that converges to a Nash equilibrium is presented, the aim being to provide good convergence and maximise the utility of all tasks under concurrent task conditions. Theoretical analyses and experiments showed that the newly proposed method, when compared to existing service composition methods, has better practical utility in all tasks.
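Best-response iteration is easy to sketch on a two-task toy congestion game; the payoff below is our illustrative choice, not the paper's service-chain model:

```python
# Two tasks share a resource: task i requests an amount x_i and receives
# utility u_i = x_i * (a - x_i - x_j), a toy congestion payoff in which
# heavier total load degrades everyone's return.
a = 3.0

def best_response(x_other):
    # argmax over x of  x * (a - x - x_other)  =>  x = (a - x_other) / 2
    return max(0.0, (a - x_other) / 2.0)

x1, x2 = 0.0, 0.0
for _ in range(60):          # alternate best responses until they settle
    x1 = best_response(x2)
    x2 = best_response(x1)
# Fixed point: neither task can improve unilaterally, i.e. a Nash
# equilibrium; for this game it is x1 = x2 = a / 3.
```

The iteration is a contraction here (each pass quarters the distance to equilibrium), which is the kind of convergence guarantee the paper establishes for its richer service-composition game.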
High-Dimensional Heteroscedastic Regression with an Application to eQTL Data Analysis
Daye, Z. John; Chen, Jinbo; Li, Hongzhe
2011-01-01
We consider the problem of high-dimensional regression under non-constant error variances. Despite being a common phenomenon in biological applications, heteroscedasticity has, so far, been largely ignored in high-dimensional analysis of genomic data sets. We propose a new methodology that allows non-constant error variances for high-dimensional estimation and model selection. Our method incorporates heteroscedasticity by simultaneously modeling both the mean and variance components via a novel doubly regularized approach. Extensive Monte Carlo simulations indicate that our proposed procedure can result in better estimation and variable selection than existing methods when heteroscedasticity arises from the presence of predictors explaining error variances and outliers. Further, we demonstrate the presence of heteroscedasticity in and apply our method to an expression quantitative trait loci (eQTL) study of 112 yeast segregants. The new procedure can automatically account for heteroscedasticity in identifying the eQTLs that are associated with gene expression variations and lead to smaller prediction errors. These results demonstrate the importance of considering heteroscedasticity in eQTL data analysis. PMID:22547833
What can acute medicine learn from qualitative methods?
Heasman, Brett; Reader, Tom W
2015-10-01
The contribution of qualitative methods to evidence-based medicine is growing, with qualitative studies increasingly used to examine patient experience and unsafe organizational cultures. The present review considers qualitative research recently conducted on teamwork and organizational culture in the ICU and other acute domains. Qualitative studies have highlighted the importance of interpersonal and social aspects of healthcare in managing and responding to patient care needs. Clear and consistent communication, compassion, and trust underpin successful patient-physician interactions, with improved patient experiences linked to patient safety and clinical effectiveness across a wide range of measures and outcomes. Across multidisciplinary teams, good communication facilitates shared understanding, decision-making and coordinated action, reducing patient risk in the process. Qualitative methods highlight the complex nature of risk management in hospital wards, which is highly contextualized to the demands and resources available and influenced by multilayered social contexts. In addition to augmenting quantitative research, qualitative investigations enable the study of questions on social behaviour that are beyond the scope of quantitative assessment alone. To develop improved patient-centred care, health professionals should therefore consider integrating qualitative procedures into their existing assessments of patient/staff satisfaction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Paul T.; Shadid, John N.; Tsuji, Paul H.
Here, this study explores the performance and scaling of a GMRES Krylov method employed as a smoother for an algebraic multigrid (AMG) preconditioned Newton-Krylov solution approach applied to a fully-implicit variational multiscale (VMS) finite element (FE) resistive magnetohydrodynamics (MHD) formulation. In this context a Newton iteration is used for the nonlinear system and a Krylov (GMRES) method is employed for the linear subsystems. The efficiency of this approach is critically dependent on the scalability and performance of the AMG preconditioner for the linear solutions, and the performance of the smoothers plays a critical role. Krylov smoothers are considered in an attempt to reduce the time and memory requirements of existing robust smoothers based on additive Schwarz domain decomposition (DD) with incomplete LU factorization solves on each subdomain. Three time-dependent resistive MHD test cases are considered to evaluate the method. The results demonstrate that the GMRES smoother can be faster due to a decrease in the preconditioner setup time and a reduction in outer GMRES solver iterations, and requires less memory (typically 35% less memory for the global GMRES smoother) than the DD ILU smoother.
QR images: optimized image embedding in QR codes.
Garateguy, Gonzalo J; Arce, Gonzalo R; Lau, Daniel L; Villarreal, Ofelia P
2014-07-01
This paper introduces the concept of QR images, an automatic method to embed QR codes into color images with a bounded probability of detection error. These embeddings are compatible with standard decoding applications and can be applied to any color image with full area coverage. The QR information bits are encoded into the luminance values of the image, taking advantage of the immunity of QR readers against local luminance disturbances. To mitigate the visual distortion of the QR image, the algorithm utilizes halftoning masks for the selection of modified pixels and nonlinear programming techniques to locally optimize luminance levels. A tractable model for the probability of error is developed, and models of the human visual system are considered in the quality metric used to optimize the luminance levels of the QR image. To minimize the processing time, the proposed optimization techniques consider the mechanics of a common binarization method and are designed to be amenable to parallel implementation. Experimental results show the graceful degradation of the decoding rate and the perceptual quality as functions of the embedding parameters. A visual comparison between the proposed and existing methods is presented.
A decoy chain deployment method based on SDN and NFV against penetration attack
Zhao, Qi; Zhang, Chuanhao
2017-01-01
Penetration attacks are one of the most serious network security threats. However, existing network defense technologies do not have the ability to entirely block the penetration behavior of intruders. Therefore, the network needs additional defenses. In this paper, a decoy chain deployment (DCD) method based on SDN+NFV is proposed to address this problem. This method considers the security status of the network and deploys decoy chains subject to resource constraints. DCD changes the attack surface of the network and makes it difficult for intruders to discern its current state. Simulation experiments and analyses show that DCD can effectively resist penetration attacks by increasing the time cost and complexity of a penetration attack. PMID:29216257
A decoy chain deployment method based on SDN and NFV against penetration attack.
Zhao, Qi; Zhang, Chuanhao; Zhao, Zheng
2017-01-01
Penetration attacks are one of the most serious network security threats. However, existing network defense technologies do not have the ability to entirely block the penetration behavior of intruders. Therefore, the network needs additional defenses. In this paper, a decoy chain deployment (DCD) method based on SDN+NFV is proposed to address this problem. This method considers the security status of the network and deploys decoy chains subject to resource constraints. DCD changes the attack surface of the network and makes it difficult for intruders to discern its current state. Simulation experiments and analyses show that DCD can effectively resist penetration attacks by increasing the time cost and complexity of a penetration attack.
A general method for the inclusion of radiation chemistry in astrochemical models.
Shingledecker, Christopher N; Herbst, Eric
2018-02-21
In this paper, we propose a general formalism that allows for the estimation of radiolysis decomposition pathways and rate coefficients suitable for use in astrochemical models, with a focus on solid phase chemistry. Such a theory can help increase the connection between laboratory astrophysics experiments and astrochemical models by providing a means for modelers to incorporate radiation chemistry into chemical networks. The general method proposed here is targeted particularly at the majority of species now included in chemical networks for which little radiochemical data exist; however, the method can also be used as a starting point for considering better studied species. We here apply our theory to the irradiation of H2O ice and compare the results with previous experimental data.
Sub-pattern based multi-manifold discriminant analysis for face recognition
NASA Astrophysics Data System (ADS)
Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen
2018-04-01
In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and then extracts discriminative local features from each sub-image separately. Moreover, the structure information of the different sub-images from the same face image is considered in the proposed method with the aim of further improving recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms other sub-pattern based face recognition methods.
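The sub-pattern preprocessing step that all such methods share can be sketched as follows: partition the face image into non-overlapping sub-images and vectorize each one, so that the discriminant analysis can later run on each sub-pattern separately. (A minimal sketch of the partitioning only, not the SpMMDA algorithm itself.)

```python
import numpy as np

def to_subpatterns(image, rows, cols):
    """Partition an image into rows x cols non-overlapping sub-images
    and vectorize each one, as in sub-pattern based methods."""
    h, w = image.shape
    sh, sw = h // rows, w // cols
    subs = []
    for i in range(rows):
        for j in range(cols):
            subs.append(image[i*sh:(i+1)*sh, j*sw:(j+1)*sw].ravel())
    return np.stack(subs)              # shape: (rows*cols, sh*sw)

face = np.arange(16).reshape(4, 4)     # stand-in for a face image
print(to_subpatterns(face, 2, 2))
```

Each row of the result is one sub-pattern; local features are then extracted per row rather than from the holistic image vector.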
NASA Astrophysics Data System (ADS)
Alizadeh, Mohammad Reza; Nikoo, Mohammad Reza; Rakhshandehroo, Gholam Reza
2017-08-01
Sustainable management of water resources necessitates close attention to social, economic and environmental aspects such as water quality and quantity concerns and potential conflicts. This study presents a new fuzzy-based multi-objective compromise methodology to determine socio-optimal and sustainable policies for hydro-environmental management of groundwater resources, which simultaneously considers the conflicts and negotiation of the involved stakeholders, uncertainties in decision makers' preferences, existing uncertainties in the groundwater parameters, and groundwater quality and quantity issues. The fuzzy multi-objective simulation-optimization model is developed based on qualitative and quantitative groundwater simulation models (MODFLOW and MT3D), a multi-objective optimization model (NSGA-II), Monte Carlo analysis and the Fuzzy Transformation Method (FTM). Best compromise solutions (best management policies) on trade-off curves are determined using four different Fuzzy Social Choice (FSC) methods. Finally, a unanimity fallback bargaining method is utilized to suggest the most preferred FSC method. The Kavar-Maharloo aquifer system in Fars, Iran, as a typical multi-stakeholder multi-objective real-world problem, is considered to verify the proposed methodology. Results showed the effective performance of the framework in determining the most sustainable allocation policy in groundwater resource management.
Reserves in load capacity assessment of existing bridges
NASA Astrophysics Data System (ADS)
Žitný, Jan; Ryjáček, Pavel
2017-09-01
A high percentage of all railway bridges in the Czech Republic are made of structural steel. The majority of these bridges were designed according to historical codes and, given their deterioration, they have to be assessed to determine whether they satisfy the needs of modern railway traffic. The load capacity assessment of existing bridges according to the Eurocodes is, however, often too conservative; in particular, braking and acceleration forces cause severe problems for structural elements of the bridge superstructure. The aim of this paper is to review the different approaches for the determination of braking and acceleration forces. Both current and historical theoretical models and in-situ measurements are considered. A survey of several local European national standards, superior to the Eurocode for the assessment of existing railway bridges, shows the wide diversity of local approaches and the conservatism of the Eurocode. This paper should also serve as an overview for designers dealing with load capacity assessment, revealing the reserves available for existing bridges. Based on these different approaches, theoretical models and data obtained from the measurements, a method for determining braking and acceleration forces on the basis of real traffic data should be proposed.
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
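Of the three propagation methods named above, Monte Carlo simulation is the simplest to sketch: sample the uncertain design variables from their input distributions, push each sample through the analysis code, and summarize the output distribution. The weight function and all numbers below are invented stand-ins, not the NASA program's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def aircraft_weight(span, chord):
    # hypothetical stand-in for the aircraft analysis code
    return 1000 + 120 * span ** 1.5 + 80 * span * chord

# uncertain inputs: normal distributions on two configuration variables
span = rng.normal(30.0, 0.5, 100_000)    # wing span, mean 30, sd 0.5
chord = rng.normal(4.0, 0.1, 100_000)    # chord, mean 4, sd 0.1

# propagate: evaluate the analysis at every sampled input
w = aircraft_weight(span, chord)
print(f"weight: mean={w.mean():.0f}, std={w.std():.0f}")
```

The method of moments would instead linearize `aircraft_weight` about the input means; Monte Carlo avoids that linearization, which matters when the design space is discontinuous as in the examples above.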
Krishan, Kewal; Chatterjee, Preetika M; Kanchan, Tanuj; Kaur, Sandeep; Baryah, Neha; Singh, R K
2016-04-01
Sex estimation is considered one of the essential parameters in forensic anthropology casework and requires foremost consideration in the examination of skeletal remains. Forensic anthropologists frequently employ morphologic and metric methods for sex estimation of human remains. These methods remain imperative to the identification process despite the advent and success of molecular techniques. A steady increase in the use of imaging techniques in forensic anthropology research has helped to derive as well as revise the available population data. These methods are, however, less reliable owing to high variance and indistinct landmark details. The present review discusses the reliability and reproducibility of various analytical approaches, namely morphological, metric, molecular and radiographic methods, in sex estimation of skeletal remains. Numerous studies have shown higher reliability and reproducibility of measurements taken directly on the bones; hence, such direct methods of sex estimation are considered more reliable than the others. The geometric morphometric (GM) method and the Diagnose Sexuelle Probabiliste (DSP) method are emerging as valid and widely used techniques in forensic anthropology in terms of accuracy and reliability. Besides, the newer 3D methods have been shown to exhibit specific sexual dimorphism patterns not readily revealed by traditional methods. The development of newer and better methodologies for sex estimation, as well as the re-evaluation of existing ones, will continue in the endeavour of forensic researchers to obtain more accurate results. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Experimental study of near-field air entrainment by subsonic volcanic jets
Solovitz, Stephen A.; Mastin, Larry G.
2009-01-01
The flow structure in the developing region of a turbulent jet has been examined using particle image velocimetry methods, considering the flow at steady state conditions. The velocity fields were integrated to determine the ratio of the entrained air speed to the jet speed, which was approximately 0.03 for a range of Mach numbers up to 0.89 and Reynolds numbers up to 217,000. This range of experimental Mach and Reynolds numbers is higher than previously considered for high-accuracy entrainment measures, particularly in the near-vent region. The entrainment values are below those commonly used for geophysical analyses of volcanic plumes, suggesting that existing 1-D models are likely to understate the tendency for column collapse.
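The headline quantity here, the ratio of entrained air speed to jet speed, reduces to a simple post-processing step once the velocity field is known. The sketch below uses an invented radial-inflow profile (not the paper's PIV data) merely to show the calculation; the profile is tuned so the ratio lands near the reported 0.03.

```python
import numpy as np

# illustrative near-field profile (hypothetical numbers, not the
# paper's measurements): radial inflow decaying away from the jet edge
u_jet = 250.0                      # jet speed, m/s
r = np.linspace(0.02, 0.05, 50)    # radial positions outside the jet, m
u_r = -12.3 * (0.02 / r)           # entrained (inward) radial velocity, m/s

u_entrained = np.abs(u_r).mean()   # mean entrained air speed at the edge
ratio = u_entrained / u_jet
print(f"entrainment ratio: {ratio:.3f}")
```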
Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel
NASA Astrophysics Data System (ADS)
Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele
2009-12-01
An accurate approach to computing the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results at low computational cost compared with other computation methods in the literature. Perfect estimation of the channel coefficients with the associated delays, and chaos synchronization, are assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which confirm the accuracy of our approach.
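The core idea of "BER in terms of the bit energy distribution" can be sketched in a few lines: average the Gaussian-channel error probability Q(sqrt(2*Eb/N0)) over an empirical distribution of per-bit energies. The gamma distribution below is an illustrative stand-in for the energy spread a chaotic spreading sequence produces, not the paper's derived distribution, and fading/RAKE combining is omitted.

```python
import numpy as np
from math import erfc, sqrt

def ber_from_energy(eb_samples, n0):
    """Average the AWGN bit error probability Q(sqrt(2 Eb/N0))
    over an empirical bit-energy distribution."""
    q = lambda x: 0.5 * erfc(x / sqrt(2.0))        # Gaussian Q-function
    return float(np.mean([q(sqrt(2.0 * eb / n0)) for eb in eb_samples]))

# chaotic spreading makes the bit energy vary from bit to bit;
# model that spread with a gamma distribution (illustrative choice)
rng = np.random.default_rng(1)
eb = rng.gamma(shape=20.0, scale=1.0 / 20.0, size=50_000)   # mean Eb = 1
print(f"BER at Eb/N0 = 0 dB: {ber_from_energy(eb, n0=1.0):.4f}")
```

With a constant bit energy this collapses to the textbook BPSK formula; the spread in energies is exactly what distinguishes chaos-based spreading from classical DS-CDMA.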
Towards a detailed anthropometric body characterization using the Microsoft Kinect.
Domingues, Ana; Barbosa, Filipa; Pereira, Eduardo M; Santos, Márcio Borgonovo; Seixas, Adérito; Vilas-Boas, João; Gabriel, Joaquim; Vardasca, Ricardo
2016-01-01
Anthropometry has been widely used in different fields, providing relevant information for medicine, ergonomics and biometric applications. However, existing solutions have marked disadvantages that limit the adoption of this type of evaluation. Studies have been conducted on easily determining anthropometric measures from data provided by low-cost sensors, such as the Microsoft Kinect. In this work, a methodology is proposed and implemented for estimating anthropometric measures from the information acquired with this sensor. The measures obtained with this method were compared with those from a validation system, Qualisys. Comparing the relative errors against state-of-the-art references, lower errors were obtained for some of the estimated measures, and a more complete characterization of the whole body structure was achieved.
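The simplest anthropometric measures derivable from a Kinect skeleton are segment lengths between tracked joints. A minimal sketch (the joint coordinates below are invented, and real pipelines average over many frames and correct for sensor noise):

```python
import numpy as np

def segment_length(joints, a, b):
    """Euclidean distance between two skeleton joints (metres)."""
    return float(np.linalg.norm(np.asarray(joints[a]) - np.asarray(joints[b])))

# hypothetical joint positions from one Kinect skeleton frame (x, y, z in m)
joints = {
    "shoulder_left": (-0.18, 0.45, 2.10),
    "elbow_left":    (-0.22, 0.20, 2.12),
    "wrist_left":    (-0.24, -0.02, 2.15),
}
upper_arm = segment_length(joints, "shoulder_left", "elbow_left")
forearm = segment_length(joints, "elbow_left", "wrist_left")
print(f"upper arm = {upper_arm:.3f} m, forearm = {forearm:.3f} m")
```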
NASA Technical Reports Server (NTRS)
Johannsen, G.; Rouse, W. B.
1978-01-01
A hierarchy of human activities is derived by analyzing automobile driving in general terms. A structural description leads to a block diagram and a time-sharing computer analogy. The range of applicability of existing mathematical models is considered with respect to the hierarchy of human activities in actual complex tasks. Other mathematical tools so far not often applied to man-machine systems are also discussed. The mathematical descriptions at least briefly considered here include utility, estimation, control, queueing, and fuzzy set theory, as well as artificial intelligence techniques. Some thoughts are given as to how these methods might be integrated and how further work might be pursued.
Design issues for grid-connected photovoltaic systems
NASA Astrophysics Data System (ADS)
Ropp, Michael Eugene
1998-08-01
Photovoltaics (PV) is the direct conversion of sunlight to electrical energy. In areas without centralized utility grids, the benefits of PV easily overshadow the present shortcomings of the technology. However, in locations with centralized utility systems, significant technical challenges remain before utility-interactive PV (UIPV) systems can be integrated into the mix of electricity sources. One challenge is that the computer design tools needed for optimal design of PV systems with curved PV arrays are not available, and even those that are available do not facilitate monitoring of the system once it is built. Another arises from the issue of islanding. Islanding occurs when a UIPV system continues to energize a section of a utility system after that section has been isolated from the utility voltage source. Islanding, which is potentially dangerous to both personnel and equipment, is difficult to prevent completely. The work contained within this thesis targets both of these technical challenges. In Task 1, a method for modeling a PV system with a curved PV array using only existing computer software is developed. This methodology also facilitates comparison of measured and modeled data for use in system monitoring. The procedure is applied to the Georgia Tech Aquatic Center (GTAC) PV system. In the work contained under Task 2, islanding prevention is considered, and the existing state-of-the-art is thoroughly reviewed. In Subtask 2.1, an analysis is performed which suggests that standard protective relays are in fact insufficient to guarantee protection against islanding. In Subtask 2.2, several existing islanding prevention methods are compared in a novel way, and the superiority of this new comparison over those used previously is demonstrated. A new islanding prevention method is the subject of Subtask 2.3; it is shown that it does not compare favorably with other existing techniques. However, in Subtask 2.4, a novel method for dramatically improving this new islanding prevention method is described. It is shown, both by computer modeling and experiment, that this new method is one of the most effective available today. Finally, under Subtask 2.5, the effects of certain types of loads on the effectiveness of islanding prevention methods are discussed.
Compare diagnostic tests using transformation-invariant smoothed ROC curves⋆
Tang, Liansheng; Du, Pang; Wu, Chengqing
2012-01-01
The receiver operating characteristic (ROC) curve, which plots true positive rates against false positive rates as the threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, the ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of test results, and it is often smooth to some degree when test results from the diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One of the exceptions is Du and Tang (2009), which applies a monotone spline regression procedure to empirical ROC estimates. However, their method does not consider the inherent correlations between empirical ROC estimates, which makes the derivation of the asymptotic properties very difficult. In this paper we propose a penalized weighted least squares estimation method, which incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all the aforementioned properties, and we show that it is also consistent. A resampling approach is then used to extend our method to comparisons of two or more diagnostic tests. Our simulations show a significantly improved performance over the existing method, especially for steep ROC curves. We then apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one. PMID:22639484
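The computational core of penalized weighted least squares can be sketched directly: solve (B'W⁻¹B + λP)c = B'W⁻¹y, where W is the covariance of the empirical estimates and P the roughness penalty. This sketch is generic, with an invented polynomial basis and exponential error covariance; it omits the paper's spline basis and monotonicity constraint.

```python
import numpy as np

def pwls_fit(B, y, W, P, lam):
    """Penalized weighted least squares:
    minimize (y - B c)' W^{-1} (y - B c) + lam * c' P c."""
    Wi = np.linalg.inv(W)
    return np.linalg.solve(B.T @ Wi @ B + lam * P, B.T @ Wi @ y)

# toy example: smooth a noisy monotone curve with correlated errors
t = np.linspace(0, 1, 30)
B = np.vander(t, 4, increasing=True)               # cubic polynomial basis
W = 0.01 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)   # error covariance
P = np.diag([0.0, 0.0, 1.0, 1.0])                  # penalize curvature terms
rng = np.random.default_rng(2)
y = np.sqrt(t) + rng.multivariate_normal(np.zeros(30), W)

c = pwls_fit(B, y, W, P, lam=0.1)
fit = B @ c                                        # smoothed curve estimate
```

Using W⁻¹ as the weight is what lets the correlated empirical estimates be handled explicitly, which is the feature the abstract says the earlier monotone-spline approach lacked.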
Reducing usage of the computational resources by event driven approach to model predictive control
NASA Astrophysics Data System (ADS)
Misik, Stefan; Bradac, Zdenek; Cela, Arben
2017-08-01
This paper deals with real-time, optimal control of dynamic systems while also considering the constraints to which these systems may be subject. The main objective of this work is to propose a simple modification of the existing Model Predictive Control approach to better suit the needs of computationally resource-constrained real-time systems. An example using a model of a mechanical system is presented, and the performance of the proposed method is evaluated in a simulated environment.
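The event-driven idea can be sketched on a toy scalar plant: re-run the (expensive) predictive optimization only when an event condition fires, otherwise hold the previously computed input. Everything below is a hypothetical illustration, with a one-step deadbeat controller standing in for the full MPC solve; it is not the paper's formulation.

```python
import numpy as np

# toy scalar plant x+ = a x + b u with a saturated input
a, b = 0.9, 0.5
u_min, u_max = -1.0, 1.0

def solve_mpc(x):
    # stand-in for the full predictive optimization: the input that
    # would drive the state to the origin in one step, clipped
    return float(np.clip(-a * x / b, u_min, u_max))

x, u = 2.0, 0.0
threshold, solves = 0.05, 0
for k in range(40):
    # event condition: re-run the optimizer only when the predicted
    # next state drifts from the setpoint; otherwise reuse the old input
    if abs(a * x + b * u) > threshold:
        u = solve_mpc(x)
        solves += 1
    x = a * x + b * u
print(f"|x| = {abs(x):.4f} after {solves} optimizer calls in 40 steps")
```

The optimizer fires only a handful of times out of 40 steps while the state still converges, which is exactly the computational saving an event-driven scheme trades against a bounded steady-state deviation.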
NASA Astrophysics Data System (ADS)
Burtyka, Filipp
2018-01-01
The paper considers algorithms for finding diagonalizable and non-diagonalizable roots (so-called solvents) of an arbitrary monic unilateral second-order matrix polynomial over a prime finite field. These algorithms are based on polynomial matrices (lambda-matrices). This is an extension of existing general methods for computing solvents of matrix polynomials over the field of complex numbers. We analyze how the techniques for complex numbers can be adapted to a finite field and estimate the asymptotic complexity of the resulting algorithms.
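To make the object concrete: a solvent is a matrix X satisfying X² + A₁X + A₀ = 0, with all arithmetic modulo a prime p. For tiny p and matrix size, brute force over all candidate matrices finds every solvent; the paper's lambda-matrix algorithms exist precisely because this search is exponential in general.

```python
import numpy as np
from itertools import product

def solvents(A1, A0, p, n=2):
    """Brute-force all solvents X of X^2 + A1 X + A0 = 0 over GF(p).
    Feasible only for tiny p and n (p**(n*n) candidates)."""
    sols = []
    for entries in product(range(p), repeat=n * n):
        X = np.array(entries).reshape(n, n)
        if ((X @ X + A1 @ X + A0) % p == 0).all():
            sols.append(X)
    return sols

# example over GF(2): X^2 + X = 0, i.e. (in characteristic 2) X^2 = X,
# so the solvents are exactly the idempotent matrices
A1 = np.eye(2, dtype=int)
A0 = np.zeros((2, 2), dtype=int)
for X in solvents(A1, A0, p=2):
    print(X.tolist())
```

Over GF(2) this example yields the eight idempotent 2x2 matrices, including the zero and identity matrices.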
Research of cost aspects of cement pavements construction
NASA Astrophysics Data System (ADS)
Bezuglyi, Artem; Illiash, Sergii; Tymoshchuk, Oleksandr
2017-09-01
The tendency toward increasing traffic volumes on public roads and increased axle loads of vehicles compels road scientists to develop scientifically justified methods for preserving the existing transport network of Ukraine and developing new ones. One option for addressing these issues is the construction of roads with rigid (cement concrete) pavement. However, any solution must be justified with respect to its technical and economic components. This paper presents the results of research into the cost aspects of cement pavement construction.
Summary of vulnerability related technologies based on machine learning
NASA Astrophysics Data System (ADS)
Zhao, Lei; Chen, Zhihao; Jia, Qiong
2018-04-01
As the scale of information systems increases by orders of magnitude, the complexity of system software grows accordingly. Vulnerability interactions from the design, development and deployment stages through to implementation greatly increase the risk of the entire information system being attacked successfully. Considering the limitations and lags of the existing mainstream security vulnerability detection techniques, this paper summarizes the development and current status of technologies based on machine learning methods applied to massive and irregular data for handling security vulnerabilities.
Evolutionary variational-hemivariational inequalities
NASA Astrophysics Data System (ADS)
Carl, Siegfried; Le, Vy K.; Motreanu, Dumitru
2008-09-01
We consider an evolutionary quasilinear hemivariational inequality under constraints represented by some closed and convex subset. Our main goal is to systematically develop the method of sub-supersolution on the basis of which we then prove existence, comparison, compactness and extremality results. The obtained results are applied to a general obstacle problem. We improve the corresponding results in the recent monograph [S. Carl, V.K. Le, D. Motreanu, Nonsmooth Variational Problems and Their Inequalities. Comparison Principles and Applications, Springer Monogr. Math., Springer, New York, 2007].
Periodicity and positivity of a class of fractional differential equations.
Ibrahim, Rabha W; Ahmad, M Z; Mohammed, M Jasim
2016-01-01
Fractional differential equations are discussed in this study. We utilize the Riemann-Liouville fractional calculus to generalize a well-known class of differential equations: the Rayleigh differential equation is generalized to fractional second order. The existence of a periodic and positive solution is established by a new method. The solution is described in a fractional periodic Sobolev space, and positivity of solutions is considered under certain requirements. We develop and extend some recent works, and an example is constructed.
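For reference, the Riemann-Liouville derivative on which this generalization rests, together with one illustrative way of writing a fractional Rayleigh equation (the paper's exact generalization may differ), is:

```latex
% Riemann--Liouville fractional derivative, n-1 < \alpha < n:
D^{\alpha} f(t) \;=\; \frac{1}{\Gamma(n-\alpha)}\,
  \frac{d^{n}}{dt^{n}} \int_{0}^{t} (t-s)^{n-\alpha-1} f(s)\,ds .
%
% Illustrative fractional Rayleigh equation of order 1 < \alpha \le 2
% (one common form; not necessarily the paper's):
D^{\alpha} x(t) \;-\; \mu\bigl(1-\dot{x}(t)^{2}\bigr)\,\dot{x}(t)
  \;+\; x(t) \;=\; 0 .
```

Setting α = 2 recovers the classical Rayleigh equation.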
Real gas flow fields about three dimensional configurations
NASA Technical Reports Server (NTRS)
Balakrishnan, A.; Lombard, C. K.; Davy, W. C.
1983-01-01
Real gas, inviscid supersonic flow fields over a three-dimensional configuration are determined using a factored implicit algorithm. Air in chemical equilibrium is considered and its local thermodynamic properties are computed by an equilibrium composition method. Numerical solutions are presented for both real and ideal gases at three different Mach numbers and at two different altitudes. Selected results are illustrated by contour plots and are also tabulated for future reference. Results obtained compare well with existing tabulated numerical solutions and hence validate the solution technique.
NASA Astrophysics Data System (ADS)
Woolfitt, Adrian R.; Boyer, Anne E.; Quinn, Conrad P.; Hoffmaster, Alex R.; Kozel, Thomas R.; de, Barun K.; Gallegos, Maribel; Moura, Hercules; Pirkle, James L.; Barr, John R.
A range of mass spectrometry-based techniques have been used to identify, characterize and differentiate Bacillus anthracis, both in culture for forensic applications and for diagnosis during infection. This range of techniques could usefully be considered to exist as a continuum, based on the degrees of specificity involved. We show two examples here, a whole-organism fingerprinting method and a high-specificity assay for one unique protein, anthrax lethal factor.
Natural photonics for industrial inspiration.
Parker, Andrew R
2009-05-13
There are two considerations for optical biomimetics: the diversity of submicrometre architectures found in the natural world, and the industrial manufacture of these. A review exists on the latter subject, where current engineering methods are considered along with those of the natural cells. Here, on the other hand, I will provide a modern review of the different categories of reflectors and antireflectors found in animals, including their optical characterization. The purpose of this is to inspire designers within the $2 billion annual optics industry.
Variance Component Selection With Applications to Microbiome Taxonomic Data.
Zhai, Jing; Kim, Juhyun; Knox, Kenneth S; Twigg, Homer L; Zhou, Hua; Zhou, Jin J
2018-01-01
High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Microbiome data are summarized as counts or composition of the bacterial taxa at different taxonomic levels. An important problem is to identify the bacterial taxa that are associated with a response. One method is to test the association of a specific taxon with phenotypes in a linear mixed effects model, which incorporates phylogenetic information among bacterial communities. Another type of approach considers all taxa in a joint model and achieves selection via a penalization method, which ignores phylogenetic information. In this paper, we consider regression analysis treating bacterial taxa at different levels as multiple random effects. For each taxon, a kernel matrix is calculated from distance measures in the phylogenetic tree and acts as one variance component in the joint model. Taxonomic selection is then achieved by the lasso (least absolute shrinkage and selection operator) penalty on the variance components. Our method integrates biological information into the variable selection problem and greatly improves selection accuracy. Simulation studies demonstrate the superiority of our method over existing methods such as the group lasso. Finally, we apply our method to a longitudinal microbiome study of Human Immunodeficiency Virus (HIV) infected patients. We implement our method in the high-performance computing language Julia. Software and detailed documentation are freely available at https://github.com/JingZhai63/VCselection.
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys
Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello
2015-01-01
Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967
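The binomial decision rule that the cluster designs above build on can be computed directly: for sample size n and lower/upper coverage thresholds, search for the smallest acceptance threshold d that controls both misclassification risks. A minimal sketch of this standard (non-cluster) LQAS rule; the numbers are the textbook 50%/80% example, not taken from the paper.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_rule(n, p_low, p_high, alpha=0.10, beta=0.10):
    """Smallest d such that accepting the lot when successes >= d keeps
    P(accept | p_low) <= alpha and P(reject | p_high) <= beta."""
    for d in range(n + 1):
        risk_low = 1 - binom_cdf(d - 1, n, p_low)    # accept a bad lot
        risk_high = binom_cdf(d - 1, n, p_high)      # reject a good lot
        if risk_low <= alpha and risk_high <= beta:
            return d, risk_low, risk_high
    return None

# classic vaccination-survey example: n = 19, thresholds 50% and 80%
print(lqas_rule(n=19, p_low=0.50, p_high=0.80))
```

With n = 19 this returns the familiar threshold d = 13 with both risks under 10%; the cluster designs compared in the paper modify this calculation to account for within-cluster correlation.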
The score statistic of the LD-lod analysis: detecting linkage adaptive to linkage disequilibrium.
Huang, J; Jiang, Y
2001-01-01
We study the properties of a modified lod score method for testing linkage that incorporates linkage disequilibrium (LD-lod). By examination of its score statistic, we show that the LD-lod score method adaptively combines two sources of information: (a) the IBD sharing score which is informative for linkage regardless of the existence of LD and (b) the contrast between allele-specific IBD sharing scores which is informative for linkage only in the presence of LD. We also consider the connection between the LD-lod score method and the transmission-disequilibrium test (TDT) for triad data and the mean test for affected sib pair (ASP) data. We show that, for triad data, the recessive LD-lod test is asymptotically equivalent to the TDT; and for ASP data, it is an adaptive combination of the TDT and the ASP mean test. We demonstrate that the LD-lod score method has relatively good statistical efficiency in comparison with the ASP mean test and the TDT for a broad range of LD and the genetic models considered in this report. Therefore, the LD-lod score method is an interesting approach for detecting linkage when the extent of LD is unknown, such as in a genome-wide screen with a dense set of genetic markers. Copyright 2001 S. Karger AG, Basel
Guo, Ying; Little, Roderick J; McConnell, Daniel S
2012-01-01
Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
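The attenuation problem, and the classical regression-calibration correction that the paper compares against, can be sketched with simulated data (all distributions and sample sizes below are invented; no covariate Z is included, and the paper's multiple-imputation proposal is not shown):

```python
import numpy as np

rng = np.random.default_rng(3)

# main sample: X unobserved, W = X + error observed, Y observed
n_main, n_cal = 2000, 500
X_main = rng.normal(0, 1, n_main)
W_main = X_main + rng.normal(0, 0.8, n_main)
Y = 2.0 + 1.5 * X_main + rng.normal(0, 0.5, n_main)

# external calibration sample: X and W observed, but not Y
X_cal = rng.normal(0, 1, n_cal)
W_cal = X_cal + rng.normal(0, 0.8, n_cal)

# naive analysis: regress Y on W directly (slope attenuated toward 0)
b_naive = np.polyfit(W_main, Y, 1)[0]

# regression calibration: replace W by E[X | W] fitted on calibration data
g = np.polyfit(W_cal, X_cal, 1)                  # X ~ g[0] * W + g[1]
b_rc = np.polyfit(np.polyval(g, W_main), Y, 1)[0]

print(f"naive slope {b_naive:.2f}, calibrated slope {b_rc:.2f} (true 1.5)")
```

The naive slope is biased well below the true 1.5 while regression calibration roughly recovers it in this simple setting; the paper's point is that such corrections break down in richer settings (covariates Z, substantial error), which is what the proposed imputation method addresses.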
A needs assessment study of undergraduate surgical education.
Kaur, Navneet; Gupta, Ankit; Saini, Pradeep
2011-01-01
A needs assessment is the process of identifying performance requirements or 'gaps' between what is required and what exists at present. To identify these gaps, the inputs of all stakeholders are needed. In medical education, graduating medical students are important stakeholders who can provide valuable feedback on deficiencies in their training. To know the students' perceptions about effectiveness of their surgical training, an anonymous questionnaire seeking their opinion on the duration, content, methods of teaching and assessment was administered. Their responses were analysed using descriptive statistics. The students were largely in favour of active methods of learning and there was very little preference for didactic lectures. For clinical teaching, involvement in ward rounds and patient care activities, in addition to case discussions, was considered to facilitate learning. A clerkship model of clinical training was favoured. Any teaching-learning activity in small groups of 8-10 students were preferred. As regards their evaluation, besides internal assessment, the students felt the need for direct constructive feedback from teachers on how to improve their performance. A large number (73.5%) were opposed to attendance being considered a qualifying criterion for taking the examination. Students' feedback about their 'perceived needs' should be considered when revising training programmes.
Malone, Matthew; Goeres, Darla M; Gosbell, Iain; Vickery, Karen; Jensen, Slade; Stoodley, Paul
2017-02-01
The concept of biofilms in human health and disease is now widely accepted as a cause of chronic infection. Typically, biofilms show remarkable tolerance to many forms of treatment and to the host immune response. This has led to a vast increase in research to identify new (and sometimes old) anti-biofilm strategies that demonstrate effectiveness against these tolerant phenotypes. Areas covered: Unfortunately, a standardized methodological approach to biofilm models has not been adopted, leading to a large disparity between testing conditions. This has made it almost impossible to compare data across multiple laboratories, leaving large gaps in the evidence. Furthermore, many biofilm models testing anti-biofilm strategies aimed at the medical arena have not considered relevance to an intended application. This may explain why some in vitro models based on methodological designs that do not consider relevance to an intended application fail when applied in vivo at the clinical level. Expert commentary: This review explores the issues that need to be considered in developing performance standards for anti-biofilm therapeutics and provides a rationale for the need to standardize models/methods that are clinically relevant. We also provide some rationale as to why no standards currently exist.
Le, Duc-Hau; Verbeke, Lieven; Son, Le Hoang; Chu, Dinh-Toi; Pham, Van-Huy
2017-11-14
MicroRNAs (miRNAs) have been shown to play an important role in pathological initiation, progression and maintenance. Because identification in the laboratory of disease-related miRNAs is not straightforward, numerous network-based methods have been developed to predict novel miRNAs in silico. Homogeneous networks (in which every node is a miRNA) based on the targets shared between miRNAs have been widely used to predict their role in disease phenotypes. Although such homogeneous networks can predict potential disease-associated miRNAs, they do not consider the roles of the target genes of the miRNAs. Here, we introduce a novel method based on a heterogeneous network that not only considers miRNAs but also the corresponding target genes in the network model. Instead of constructing homogeneous miRNA networks, we built heterogeneous miRNA networks consisting of both miRNAs and their target genes, using databases of known miRNA-target gene interactions. In addition, as recent studies demonstrated reciprocal regulatory relations between miRNAs and their target genes, we considered these heterogeneous miRNA networks to be undirected, assuming mutual miRNA-target interactions. Next, we introduced a novel method (RWRMTN) operating on these mutual heterogeneous miRNA networks to rank candidate disease-related miRNAs using a random walk with restart (RWR) based algorithm. Using both known disease-associated miRNAs and their target genes as seed nodes, the method can identify additional miRNAs involved in the disease phenotype. Experiments indicated that RWRMTN outperformed two existing state-of-the-art methods: RWRMDA, a network-based method that also uses a RWR on homogeneous (rather than heterogeneous) miRNA networks, and RLSMDA, a machine learning-based method. Interestingly, we could relate this performance gain to the emergence of "disease modules" in the heterogeneous miRNA networks used as input for the algorithm. 
Moreover, we could demonstrate that RWRMTN is stable, performing well when using both experimentally validated and predicted miRNA-target gene interaction data for network construction. Finally, using RWRMTN, we identified 76 novel miRNAs associated with 23 disease phenotypes which were present in a recent database of known disease-miRNA associations. Summarizing, using random walks on mutual miRNA-target networks improves the prediction of novel disease-associated miRNAs because of the existence of "disease modules" in these networks.
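The random walk with restart on a mutual (undirected) miRNA-target network can be sketched on a toy graph. The node names below are hypothetical, not taken from the databases the study used; the restart probability is an illustrative choice.

```python
import numpy as np

# Toy heterogeneous network: miRNA nodes and target-gene nodes (hypothetical names)
nodes = ["miR-a", "miR-b", "miR-c", "gene-1", "gene-2", "gene-3"]
edges = [("miR-a", "gene-1"), ("miR-a", "gene-2"),
         ("miR-b", "gene-2"), ("miR-b", "gene-3"),
         ("miR-c", "gene-3")]

idx = {n: i for i, n in enumerate(nodes)}
A = np.zeros((len(nodes), len(nodes)))
for u, v in edges:                    # undirected: mutual miRNA-target regulation
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0

W = A / A.sum(axis=0)                 # column-stochastic transition matrix

def rwr(seeds, restart=0.7, tol=1e-10):
    """Random walk with restart; returns the stationary probability vector."""
    p0 = np.zeros(len(nodes))
    p0[[idx[s] for s in seeds]] = 1.0 / len(seeds)
    p = p0.copy()
    while True:
        p_new = (1 - restart) * W @ p + restart * p0
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new

# Seed with a known disease miRNA AND one of its target genes, as in RWRMTN
p = rwr(["miR-a", "gene-1"])
candidates = sorted((n for n in nodes if n.startswith("miR") and n != "miR-a"),
                    key=lambda n: -p[idx[n]])
print(candidates)
```

Here miR-b shares a target gene with the seed miRNA, so it accumulates more probability mass than miR-c, which is only reachable through a longer path; this is the mechanism by which target genes mediate the ranking.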
What do international pharmacoeconomic guidelines say about economic data transferability?
Barbieri, Marco; Drummond, Michael; Rutten, Frans; Cook, John; Glick, Henry A; Lis, Joanna; Reed, Shelby D; Sculpher, Mark; Severens, Johan L
2010-12-01
The objectives of this article were to assess the positions of the various national pharmacoeconomic guidelines on the transferability (or lack of transferability) of clinical and economic data and to review the methods suggested in the guidelines for addressing issues of transferability. A review of existing national pharmacoeconomic guidelines was conducted to assess recommendations on the transferability of clinical and economic data, whether there are important differences between countries, and whether common methodologies have been suggested to address key transferability issues. Pharmacoeconomic guidelines were initially identified through the ISPOR Web site. In addition, those national guidelines not included in the ISPOR Web site, but known to us, were also considered. Across 27 sets of guidelines, baseline risk and unit costs were uniformly considered to be of low transferability, while treatment effect was classified as highly transferable. Results were more variable for resource use and utilities, which were considered to have low transferability in 63% and 45% of cases, respectively. There were some differences between older and more recent guidelines in the treatment of transferability issues. A growing number of jurisdictions are using guidelines for the economic evaluation of pharmaceuticals. The recommendations in existing guidelines regarding the transferability of clinical and economic data are quite diverse. There is a case for standardization in dealing with transferability issues. One important step would be to update guidelines more frequently. © 2010, International Society for Pharmacoeconomics and Outcomes Research (ISPOR).
Sasao, Toshiaki
2014-11-01
Waste taxes, such as landfill and incineration taxes, have emerged as a popular option in developed countries to promote the 3Rs (reduce, reuse, and recycle). However, few studies have examined the effectiveness of waste taxes. In addition, few studies have considered both dynamic relationships among dependent variables and unobserved individual heterogeneity among the jurisdictions. If dependent variables are persistent, omitted variables cause a bias, or common characteristics exist across the jurisdictions that have introduced waste taxes, the standard fixed effects model may lead to biased estimation results and misunderstood causal relationships. In addition, most existing studies have examined waste in terms of total amounts rather than by categories. Even if significant reductions in total waste amounts are not observed, some reduction within each category may, nevertheless, become evident. Therefore, this study analyzes the effects of industrial waste taxation on quantities of waste in landfill in Japan by applying the bias-corrected least-squares dummy variable (LSDVC) estimators, the difference generalized method of moments (GMM), and the system GMM. In addition, the study investigates effect differences attributable to industrial waste categories and taxation types. This paper shows that industrial waste taxes in Japan have had minimal significant effects on the reduction of final disposal amounts thus far, considering dynamic relationships and waste categories. Copyright © 2014 Elsevier Ltd. All rights reserved.
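The dynamic-panel bias that motivates the LSDVC and GMM estimators can be seen in a short simulation. The sketch below (all parameters illustrative) fits the standard within (fixed effects) estimator to data generated with a lagged dependent variable and a short panel, and exhibits the well-known downward Nickell bias that those estimators are designed to correct.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, rho = 200, 5, 0.5            # many jurisdictions, short panel

# Simulate y_it = alpha_i + rho * y_i,t-1 + eps_it
alpha = rng.normal(0, 1, N)
y = np.zeros((N, T + 1))
y[:, 0] = alpha / (1 - rho) + rng.normal(0, 1, N)
for t in range(1, T + 1):
    y[:, t] = alpha + rho * y[:, t - 1] + rng.normal(0, 1, N)

# Within (fixed effects) estimator: demean per unit, regress y_t on y_{t-1}
y_lag, y_cur = y[:, :-1], y[:, 1:]
yl = y_lag - y_lag.mean(axis=1, keepdims=True)
yc = y_cur - y_cur.mean(axis=1, keepdims=True)
rho_fe = (yl * yc).sum() / (yl * yl).sum()
print(f"true rho = {rho}, FE estimate = {rho_fe:.3f}")   # biased downward
```

With T this small the bias is of order (1 + rho)/(T - 1), which is why persistence in waste amounts makes the standard fixed effects model unreliable here.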
Shape space figure-8 solution of three body problem with two equal masses
NASA Astrophysics Data System (ADS)
Yu, Guowei
2017-06-01
In a preprint by Montgomery (https://people.ucsc.edu/~rmont/Nbdy.html), the author attempted to prove the existence of a shape space figure-8 solution of the Newtonian three body problem with two equal masses (it looks like a figure 8 in the shape space, which is different from the famous figure-8 solution with three equal masses (Chenciner and Montgomery 2000 Ann. Math. 152 881-901)). Unfortunately there is an error in the proof and the problem is still open. Considering the α-homogeneous Newton-type potential, 1/r^α, and using an action minimization method, we prove the existence of this solution for α ∈ (1, 2); for α = 1 (the Newtonian potential), an extra condition is required, which unfortunately seems hard to verify at this moment.
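For context, the action functional minimized in this type of argument has the standard form below (a sketch in conventional notation for the α-homogeneous potential, not necessarily the preprint's exact setup):

```latex
\mathcal{A}(q) \;=\; \int_0^T \left[ \sum_{i=1}^{3} \frac{m_i}{2}\,\lvert \dot{q}_i(t) \rvert^{2}
\;+\; \sum_{1 \le i < j \le 3} \frac{m_i m_j}{\lvert q_i(t) - q_j(t) \rvert^{\alpha}} \right] dt
```

Minimizers over paths satisfying suitable symmetry and boundary constraints are candidate solutions; the analytic difficulty is ruling out collisions of the minimizer, which is where the condition for α = 1 enters.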
Stability analysis of the Euler discretization for SIR epidemic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Suryanto, Agus
2014-06-19
In this paper we consider a discrete SIR epidemic model obtained by the Euler method. For that discrete model, the existence of a disease-free equilibrium and an endemic equilibrium is established. Sufficient conditions for the local asymptotic stability of both the disease-free and the endemic equilibrium are also derived. It is found that the local asymptotic stability of the existing equilibria is achieved only for a small time step size h. If h is further increased and passes the critical value, then both equilibria will lose their stability. Our numerical simulations show that complex dynamical behavior, such as bifurcation or chaos phenomena, will appear for relatively large h. Both analytical and numerical results show that the discrete SIR model has a richer dynamical behavior than its continuous counterpart.
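The step-size dependence described in this abstract is easy to reproduce. The sketch below (parameter values illustrative, not from the paper) uses an SIR model with vital dynamics so that an endemic equilibrium exists: with a small h the Euler iteration settles on the equilibrium, while a step beyond the critical value makes the same equilibrium repelling.

```python
import numpy as np

def sir_euler(h, steps, beta=1.0, gamma=0.2, mu=0.1, s0=0.8, i0=0.1):
    """Euler discretization of an SIR model with vital dynamics (N normalized to 1)."""
    s, i = np.float64(s0), np.float64(i0)
    for _ in range(steps):
        ds = mu - beta * s * i - mu * s          # susceptibles
        di = beta * s * i - (gamma + mu) * i     # infectives
        s, i = s + h * ds, i + h * di
    return s, i

beta, gamma, mu = 1.0, 0.2, 0.1
s_star = (gamma + mu) / beta                     # endemic equilibrium (R0 > 1 here)
i_star = mu * (beta / (gamma + mu) - 1) / beta

# Small step: the iteration converges to the endemic equilibrium
s, i = sir_euler(h=0.1, steps=5000)

# Large step: starting near the same equilibrium, the iteration moves away
s2, i2 = sir_euler(h=6.0, steps=20, s0=s_star + 0.01, i0=i_star)
print((s, i), (s2, i2))
```

The fixed points of the Euler map coincide with the equilibria of the continuous model; only their stability depends on h, which is exactly the effect the paper analyzes.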
Summary of Activities for Nondestructive Evaluation of Insulation in Cryogenic Tanks
NASA Technical Reports Server (NTRS)
Arens, Ellen
2012-01-01
This project was undertaken to investigate methods to non-intrusively determine the existence and density of perlite insulation in the annular region of the cryogenic storage vessels, specifically considering the Launch Complex 39 hydrogen tanks at Kennedy Space Center. Lack of insulation in the tanks (as existed in the pad B hydrogen tank at Kennedy Space Center) results in an excessive loss of commodity and can pose operational and safety risks if precautions are not taken to relieve the excessive gas build-up. Insulation with a density that is higher than normal (due to settling or compaction) may also pose an operational and safety risk if the insulation prevents the system from moving and responding to expansions and contractions as fluid is removed and added to the tank.
Some series of intuitionistic fuzzy interactive averaging aggregation operators.
Garg, Harish
2016-01-01
In this paper, some series of new intuitionistic fuzzy averaging aggregation operators have been presented under the intuitionistic fuzzy set environment. For this, some shortcomings of the existing operators are first highlighted, and then new operational laws, which consider the hesitation degree between the membership functions, have been proposed to overcome them. Based on these new operational laws, some new averaging aggregation operators, namely the intuitionistic fuzzy Hamacher interactive weighted averaging, ordered weighted averaging, and hybrid weighted averaging operators, labeled IFHIWA, IFHIOWA and IFHIHWA respectively, have been proposed. Furthermore, some desirable properties such as idempotency, boundedness, and homogeneity are studied. Finally, a multi-criteria decision making method based on the proposed operators for selecting the best alternative has been presented. A comparison between the proposed operators and the existing operators is investigated in detail.
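For orientation, the classical (non-interactive) IFWA operator that papers of this kind take as their baseline can be written in a few lines. One known shortcoming, which interactive operators address, is visible directly: a single zero non-membership value forces the aggregated non-membership to zero regardless of the other arguments. The alternatives and weights below are hypothetical; this is not the paper's IFHIWA operator.

```python
from math import prod

def ifwa(pairs, weights):
    """Classical intuitionistic fuzzy weighted averaging (IFWA) operator.

    Each pair is (membership mu, non-membership nu) with mu + nu <= 1.
    The interactive Hamacher operators modify these product rules to
    account for hesitation; this is the baseline they improve on."""
    mu = 1 - prod((1 - m) ** w for (m, _), w in zip(pairs, weights))
    nu = prod(n ** w for (_, n), w in zip(pairs, weights))
    return mu, nu

def score(mu, nu):
    """Score function used to rank alternatives."""
    return mu - nu

# Hypothetical multi-criteria decision: two alternatives, three criteria
alts = {
    "A1": [(0.6, 0.3), (0.5, 0.4), (0.7, 0.2)],
    "A2": [(0.4, 0.5), (0.8, 0.1), (0.5, 0.3)],
}
w = [0.3, 0.4, 0.3]
ranked = sorted(alts, key=lambda a: -score(*ifwa(alts[a], w)))
print(ranked)
```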
Green, Linda V; Savin, Sergei; Lu, Yina
2013-01-01
Most existing estimates of the shortage of primary care physicians are based on simple ratios, such as one physician for every 2,500 patients. These estimates do not consider the impact of such ratios on patients' ability to get timely access to care. They also do not quantify the impact of changing patient demographics on the demand side and alternative methods of delivering care on the supply side. We used simulation methods to provide estimates of the number of primary care physicians needed, based on a comprehensive analysis considering access, demographics, and changing practice patterns. We show that the implementation of some increasingly popular operational changes in the ways clinicians deliver care-including the use of teams or "pods," better information technology and sharing of data, and the use of nonphysicians-have the potential to offset completely the increase in demand for physician services while improving access to care, thereby averting a primary care physician shortage.
Examining Menstrual Tracking to Inform the Design of Personal Informatics Tools
Epstein, Daniel A.; Lee, Nicole B.; Kang, Jennifer H.; Agapie, Elena; Schroeder, Jessica; Pina, Laura R.; Fogarty, James; Kientz, Julie A.; Munson, Sean A.
2017-01-01
We consider why and how women track their menstrual cycles, examining their experiences to uncover design opportunities and extend the field's understanding of personal informatics tools. To understand menstrual cycle tracking practices, we collected and analyzed data from three sources: 2,000 reviews of popular menstrual tracking apps, a survey of 687 people, and follow-up interviews with 12 survey respondents. We find that women track their menstrual cycle for varied reasons that include remembering and predicting their period as well as informing conversations with healthcare providers. Participants described six methods of tracking their menstrual cycles, including use of technology, awareness of their premenstrual physiological states, and simply remembering. Although women find apps and calendars helpful, these methods are ineffective when predictions of future menstrual cycles are inaccurate. Designs can create feelings of exclusion for gender and sexual minorities. Existing apps also generally fail to consider life stages that women experience, including young adulthood, pregnancy, and menopause. Our findings encourage expanding the field's conceptions of personal informatics. PMID:28516176
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, Yaosuo
The battery energy stored quasi-Z-source (BES-qZS) based photovoltaic (PV) power generation system combines the advantages of the qZS inverter and the battery energy storage system. However, the second harmonic (2ω) power ripple will degrade the system's performance and affect the system's design. An accurate model to analyze the 2ω ripple is therefore very important. The existing models did not consider the battery and assumed L1=L2 and C1=C2, which leads to a non-optimized design of the impedance parameters of the qZS network. This paper proposes a comprehensive model for the single-phase BES-qZS-PV inverter system, in which the battery is considered and without any restriction on L1, L2, C1, and C2. A BES-qZS impedance design method based on the built model is proposed to mitigate the 2ω ripple. Simulation and experimental results verify the proposed 2ω ripple model and design method.
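Why a single-phase stage carries a 2ω component: with v = √2·V·cos(ωt) and i = √2·I·cos(ωt − φ), the instantaneous power is p = VI·cos(φ) + VI·cos(2ωt − φ), a DC term plus a double-frequency ripple that the qZS network must absorb. A short numerical check (all values illustrative):

```python
import numpy as np

f = 50.0                       # grid frequency in Hz (assumed)
w = 2 * np.pi * f
t = np.linspace(0, 1 / f, 1024, endpoint=False)   # one fundamental period

V, I, phi = 220.0, 10.0, np.pi / 6
v = np.sqrt(2) * V * np.cos(w * t)
i = np.sqrt(2) * I * np.cos(w * t - phi)
p = v * i                      # instantaneous single-phase power

# Fourier decomposition: only DC and the 2nd harmonic are present
spec = np.abs(np.fft.rfft(p)) / len(t)
dc, second = spec[0], 2 * spec[2]
print(dc, second)              # DC ≈ VI·cos(phi), ripple amplitude ≈ VI
```

The fundamental-frequency bin is empty; the entire AC content of the power sits at 2ω, which is the ripple the proposed impedance design targets.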
Van Laere, Sven; Nyssen, Marc; Verbeke, Frank
2017-01-01
Clinical coding is a requirement to provide valuable data for billing, epidemiology and health care resource allocation. In sub-Saharan Africa, we observe a growing awareness of the need for coding of clinical data, not only among health insurers, but also in governments and hospitals. Presently, coding systems in sub-Saharan Africa are often used for billing purposes. In this paper we consider the use of a nomenclature to also have a clinical impact. Coding systems are often assumed to be complex and too extensive to be used in daily practice. Here, we present a method for constructing a new nomenclature based on existing coding systems by considering a minimal subset for the sub-Saharan region. Evaluation of completeness will be done nationally using the requirements of national registries. The nomenclature requires an extension character for dealing with codes that have to be used for multiple registries. Hospitals will benefit most by using this extension character.
Celestial mechanics - Methods of the theory of motion of 'artificial' celestial bodies
NASA Astrophysics Data System (ADS)
Duboshin, G. N.
This book is concerned with the translational motion of 'artificial' celestial bodies. The difference between natural celestial bodies, which are ordinarily considered by celestial mechanics, and 'artificial' celestial bodies is discussed, taking into account hypothetical celestial bodies introduced in connection with mathematical developments and problems, invisible celestial bodies whose existence can be assumed on the basis of some plausible hypothesis, and man-made satellites of the earth. The book consists of two parts. The first part presents introductory material, and examines a number of general mathematical questions to provide a basis for the studies conducted in the second part. Subjects considered in the first part are related to basic problems, integration methods, and perturbation theory. In the second part, attention is given to the motion of artificial celestial bodies in the gravitational field of the basic planet, external perturbations regarding the motion of these bodies, the motion of the bodies in the earth-moon system, and periodic solutions.
Structures and Materials Working Group report
NASA Technical Reports Server (NTRS)
Torczyner, Robert; Hanks, Brantley R.
1986-01-01
The appropriateness of the selection of four issues (advanced materials development, analysis/design methods, tests of large flexible structures, and structural concepts) was evaluated. A cross-check of the issues and their relationship to the technology drivers is presented. Although all of the issues addressed numerous drivers, the advanced materials development issue impacts six out of the seven drivers and is considered to be the most crucial. The advanced materials technology development and the advanced design/analysis methods development were determined to be enabling technologies, with the testing issues and development of structural concepts considered to be of great importance, although not enabling technologies. In addition, and of more general interest and criticality, the need for a Government/Industry commitment, which does not now exist, was established. This commitment would call for the establishment of the required infrastructure to facilitate the development of the capabilities highlighted, through the availability of resources and testbed facilities, including a national testbed in space to be in place in ten years.
Creation of quantum steering by interaction with a common bath
NASA Astrophysics Data System (ADS)
Sun, Zhe; Xu, Xiao-Qiang; Liu, Bo
2018-05-01
By applying the hierarchy equation method, we computationally study the creation of quantum steering in a two-qubit system interacting with a common bosonic bath. The calculation does not adopt conventional approximate approaches, such as the Born, Markov, rotating-wave, and other perturbative approximations. Three kinds of quantum steering, i.e., Einstein-Podolsky-Rosen steering (EPRS), temporal steering (TS), and spatiotemporal steering (STS), are considered. Since the initial state of the two qubits is chosen as a product state, no EPRS exists at the beginning. During the evolution, we find that STS as well as EPRS are generated at the same time. An inversion relationship between STS and TS is revealed. By varying the system-bath coupling strength from the weak to the ultrastrong regime, we find a nonmonotonic dependence of STS, TS, and EPRS on the coupling strength. It is interesting to study the dynamics of the three kinds of quantum steering using an exact numerical method, which was not considered in previous research.
NASA Astrophysics Data System (ADS)
Eriçok, Ozan Burak; Ertürk, Hakan
2018-07-01
Optical characterization of nanoparticle aggregates is a complex inverse problem that can be solved by deterministic or statistical methods. Previous studies showed that there exists a different lower size limit of reliable characterization for each wavelength of the light source used. In this study, these characterization limits are determined for light source wavelengths ranging from the ultraviolet to the near infrared (266-1064 nm), relying on numerical light scattering experiments. Two different measurement ensembles are considered: collections of well-separated aggregates made up of same-sized particles, and collections with a particle size distribution. Filippov's cluster-cluster algorithm is used to generate the aggregates, and the light scattering behavior is calculated by the discrete dipole approximation. A likelihood-free Approximate Bayesian Computation, relying on the Adaptive Population Monte Carlo method, is used for characterization. It is found that when the wavelength range of 266-1064 nm is used, the successful characterization limit changes from 21 to 62 nm effective radius for monodisperse and polydisperse soot aggregates.
Pathway analysis with next-generation sequencing data.
Zhao, Jinying; Zhu, Yun; Boerwinkle, Eric; Xiong, Momiao
2015-04-01
Although pathway analysis methods have been developed and successfully applied to association studies of common variants, statistical methods for pathway-based association analysis of rare variants have not been well developed. Many investigators have observed highly inflated false-positive rates and low power in pathway-based tests of association of rare variants. The inflated false-positive rates and low true-positive rates of the current methods are mainly due to their inability to account for gametic phase disequilibrium. To overcome these serious limitations, we develop a novel statistic based on smoothed functional principal component analysis (SFPCA) for pathway association tests with next-generation sequencing data. The developed statistic has the ability to capture position-level variant information and account for gametic phase disequilibrium. Through intensive simulations, we demonstrate that the SFPCA-based statistic for testing pathway association with rare, common, or both rare and common variants has the correct type 1 error rates. We also evaluate the power of the SFPCA-based statistic and of 22 additional existing statistics. We found that the SFPCA-based statistic has much higher power than the other existing statistics in all the scenarios considered. To further evaluate its performance, the SFPCA-based statistic is applied to pathway analysis of exome sequencing data in the early-onset myocardial infarction (EOMI) project. We identify three pathways significantly associated with EOMI after Bonferroni correction. In addition, our preliminary results show that the SFPCA-based statistic yields much smaller P-values for identifying pathway associations than other existing methods.
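A much-simplified sketch of the smoothing-plus-functional-PCA idea (not the authors' statistic): each individual's variants are viewed as a function of genomic position, smoothed with a kernel, decomposed by SVD, and the component scores are tested against the phenotype. The data, bandwidth, and score-type test below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: rare-variant genotypes at known positions within a pathway
n = 300
positions = np.sort(rng.uniform(0, 1, 40))
G = (rng.uniform(size=(n, 40)) < 0.05).astype(float)      # rare variants
risk = G[:, positions < 0.3].sum(axis=1)                  # toy causal region
y = (risk + rng.normal(0, 0.3, n) > 0.5).astype(float)    # binary phenotype

# 1) Smooth each individual's genotype profile over genomic position
grid = np.linspace(0, 1, 100)
K = np.exp(-((grid[:, None] - positions[None, :]) ** 2) / (2 * 0.05 ** 2))
Xs = G @ K.T                      # n x grid: smoothed genotype "functions"

# 2) Functional PCA via SVD of the centered smoothed profiles
Xc = Xs - Xs.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :3] * s[:3]         # top 3 functional PC scores per individual

# 3) Score-type association statistic of PC scores with the phenotype
yc = y - y.mean()
T = 0.0
for j in range(3):
    z = scores[:, j]
    T += (z @ yc) ** 2 / ((z @ z) * yc.var())
print(T)                          # roughly chi-square(3) under no association
```

The smoothing step is what lets position-level information enter the test; without it each rare variant would contribute an essentially independent, noisy column.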
[Indication for exercise therapy in infancy in the prevention of childhood cerebral palsy].
Weber, S
1983-01-01
Since early physiotherapy for cerebral palsy is desirable, if possible even in early infancy, a period when a reliable diagnosis does not yet exist, infants at risk have to be identified. The resulting difficulties in early diagnosis, and the inevitability of treating a considerable number of unaffected infants, are discussed. The most common methods of physiotherapy are briefly described and evaluated critically, including possible side effects. The superiority of one method over another in improving motor efficiency cannot be proven. It is shown, however, that with the Vojta method adverse psychological side effects cannot be excluded. Therefore, the fact that physiotherapy is in most cases a purely prophylactic rather than a therapeutic procedure should be considered when ordering and selecting a particular method, and the method according to Bobath should be favoured.
Synthesis and optimization of four bar mechanism with six design parameters
NASA Astrophysics Data System (ADS)
Jaiswal, Ankur; Jawale, H. P.
2018-04-01
Function generation is the synthesis of a mechanism for a specific task; it becomes especially complex when more than five precision points of the coupler are specified, and it then involves large structural error. The methodology for arriving at a more precise solution is to use optimization techniques. The work presented herein considers methods of optimizing the structural error in a closed kinematic chain with a single degree of freedom, for generating functions such as log(x), e^x, tan(x), and sin(x) with five precision points. The equation in the Freudenstein-Chebyshev method is used to develop the five-point synthesis of the mechanism. An extended formulation is proposed, and results are obtained to verify existing results in the literature. Optimization of the structural error is carried out using a least-squares approach. A comparative structural-error analysis is presented for the error optimized through the least-squares method and through the extended Freudenstein-Chebyshev method.
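The Freudenstein equation is linear in three link-ratio parameters, which is what makes least-squares synthesis over more than three accuracy points straightforward. The sketch below fits a four-bar function generator to y = log10(x); the angle ranges and the sign convention are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

# Map y = log10(x), x in [1, 10], to crank/rocker angles via assumed
# linear mappings (the angle ranges below are illustrative choices)
x = np.linspace(1, 10, 7)                       # more points than the 3 unknowns
y = np.log10(x)
phi = np.radians(30 + (x - 1) / 9 * 90)         # input angle: 30..120 deg
psi = np.radians(60 + y * 90)                   # output angle: 60..150 deg

# Freudenstein equation (one common sign convention):
#   K1*cos(phi) - K2*cos(psi) + K3 = cos(phi - psi)
# Linear in K1, K2, K3, so the over-determined system solves by least squares
A = np.column_stack([np.cos(phi), -np.cos(psi), np.ones_like(phi)])
b = np.cos(phi - psi)
K, *_ = np.linalg.lstsq(A, b, rcond=None)

# Structural error: residual of the Freudenstein equation at each point
err = A @ K - b
print(K, np.abs(err).max())
```

With exactly three precision points the residual vanishes; with more points, minimizing this residual is precisely the structural-error optimization the abstract describes.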
Finger Vein Recognition Based on Local Directional Code
Meng, Xianjing; Yang, Gongping; Yin, Yilong; Xiao, Rongyang
2012-01-01
Finger vein patterns are considered as one of the most promising biometric authentication methods for its security and convenience. Most of the current available finger vein recognition methods utilize features from a segmented blood vessel network. As an improperly segmented network may degrade the recognition accuracy, binary pattern based methods are proposed, such as Local Binary Pattern (LBP), Local Derivative Pattern (LDP) and Local Line Binary Pattern (LLBP). However, the rich directional information hidden in the finger vein pattern has not been fully exploited by the existing local patterns. Inspired by the Webber Local Descriptor (WLD), this paper represents a new direction based local descriptor called Local Directional Code (LDC) and applies it to finger vein recognition. In LDC, the local gradient orientation information is coded as an octonary decimal number. Experimental results show that the proposed method using LDC achieves better performance than methods using LLBP. PMID:23202194
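The core of a direction-coding descriptor can be sketched compactly. The version below is an illustrative reading of the idea, quantizing the local gradient orientation into one of 8 directions (an octonary code) and histogramming the codes; it is not the authors' exact LDC formulation.

```python
import numpy as np

def ldc(image):
    """Simplified local directional code: quantize the gradient orientation
    at each pixel into one of 8 directions (an octonary code)."""
    gy, gx = np.gradient(image.astype(float))
    theta = np.arctan2(gy, gx)                    # orientation in (-pi, pi]
    return ((theta + np.pi) / (2 * np.pi / 8)).astype(int) % 8

def ldc_histogram(image, bins=8):
    """Normalized histogram of directional codes, usable as a match feature."""
    codes = ldc(image)
    h = np.bincount(codes.ravel(), minlength=bins).astype(float)
    return h / h.sum()

rng = np.random.default_rng(0)
img = rng.random((64, 64))          # stand-in for a finger vein image
h = ldc_histogram(img)
print(h)
```

Because the code is computed from the gradient field directly, no blood-vessel segmentation step is needed, which is the robustness argument made in the abstract.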
NASA Astrophysics Data System (ADS)
Xiao, Long; Liu, Xinggao; Ma, Liang; Zhang, Zeyin
2018-03-01
Dynamic optimisation problems with characteristic times, which exist widely in many areas, are among the frontiers and hotspots of dynamic optimisation research. This paper considers a class of dynamic optimisation problems with constraints that depend on interior points, either fixed or variable, and presents a novel direct pseudospectral method using Legendre-Gauss (LG) collocation points for solving these problems. The formula for the state at the terminal time of each subdomain is derived, which results in a linear combination of the state at the LG points in the subdomains, so as to avoid a complex nonlinear integral. The sensitivities of the state at the collocation points with respect to the variable characteristic times are derived to improve the efficiency of the method. Three well-known characteristic time dynamic optimisation problems are solved and compared in detail with the reported literature methods. The research results show the effectiveness of the proposed method.
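The "linear combination" device mentioned in the abstract is Gauss quadrature at the LG points: the state at the subdomain's terminal time is the initial state plus a weighted sum of the dynamics at the collocation nodes, so no nonlinear integral appears. A minimal sketch with known dynamics (an illustrative example, not one of the paper's test problems):

```python
import numpy as np

# Legendre-Gauss collocation nodes and weights on [-1, 1]
n = 8
tau, w = np.polynomial.legendre.leggauss(n)

# Affine map to a time subdomain [t0, tf]
t0, tf = 0.0, 1.0
t = 0.5 * (tf - t0) * tau + 0.5 * (tf + t0)
wt = 0.5 * (tf - t0) * w

# Terminal state as a linear combination of the dynamics at the LG points:
#   x(tf) ≈ x(t0) + sum_i wt_i * f(t_i)
x0 = 0.0
f = np.cos                     # dynamics x' = cos(t); exact solution x = sin(t)
x_tf = x0 + wt @ f(t)
print(x_tf, np.sin(1.0))
```

With 8 LG points the quadrature is exact for polynomials up to degree 15, so smooth dynamics are integrated essentially to machine precision; this accuracy per node is the usual argument for pseudospectral collocation.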
A risk-based approach to flood management decisions in a nonstationary world
NASA Astrophysics Data System (ADS)
Rosner, Ana; Vogel, Richard M.; Kirshen, Paul H.
2014-03-01
Traditional approaches to flood management in a nonstationary world begin with a null hypothesis test of "no trend" and its likelihood, with little or no attention given to the likelihood that we might ignore a trend if it really existed. Concluding a trend exists when it does not, or rejecting a trend when it exists are known as type I and type II errors, respectively. Decision-makers are poorly served by statistical and/or decision methods that do not carefully consider both over- and under-preparation errors, respectively. Similarly, little attention is given to how to integrate uncertainty in our ability to detect trends into a flood management decision context. We show how trend hypothesis test results can be combined with an adaptation's infrastructure costs and damages avoided to provide a rational decision approach in a nonstationary world. The criterion of expected regret is shown to be a useful metric that integrates the statistical, economic, and hydrological aspects of the flood management problem in a nonstationary world.
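The type I / type II trade-off at the center of this argument can be quantified by simulation. The sketch below (illustrative parameters, a simple OLS slope t-test rather than any particular trend test used in practice) estimates both error rates for a modest trend buried in year-to-year noise; a regret analysis would then weight these rates by the costs of over- and under-preparation.

```python
import numpy as np

rng = np.random.default_rng(3)

def detects_trend(y):
    """Two-sided t-test on the OLS slope of y against time (crit. value ~2)."""
    n = len(y)
    tc = np.arange(n, dtype=float)
    tc -= tc.mean()
    slope = (tc @ y) / (tc @ tc)
    resid = y - y.mean() - slope * tc
    se = np.sqrt(resid @ resid / (n - 2) / (tc @ tc))
    return abs(slope / se) > 2.0

n_years, n_sim, trend = 50, 2000, 0.02
# Type I: conclude a trend exists when it does not
type1 = np.mean([detects_trend(rng.normal(0, 1, n_years)) for _ in range(n_sim)])
# Type II: reject a trend when it exists
type2 = np.mean([not detects_trend(trend * np.arange(n_years)
                                   + rng.normal(0, 1, n_years))
                 for _ in range(n_sim)])
print(f"type I (false trend): {type1:.3f}, type II (missed trend): {type2:.3f}")
```

For this modest trend the miss rate is near 50% even though the false-alarm rate is held at 5%, which illustrates the paper's point: a "no trend" null hypothesis test alone is a poor basis for adaptation decisions.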
NASA Technical Reports Server (NTRS)
Fomenkova, M. N.
1997-01-01
The computer-intensive project consisted of the analysis and synthesis of existing data on composition of comet Halley dust particles. The main objective was to obtain a complete inventory of sulfur containing compounds in the comet Halley dust by building upon the existing classification of organic and inorganic compounds and applying a variety of statistical techniques for cluster and cross-correlational analyses. A student hired for this project wrote and tested the software to perform cluster analysis. The following tasks were carried out: (1) selecting the data from existing database for the proposed project; (2) finding access to a standard library of statistical routines for cluster analysis; (3) reformatting the data as necessary for input into the library routines; (4) performing cluster analysis and constructing hierarchical cluster trees using three methods to define the proximity of clusters; (5) presenting the output results in different formats to facilitate the interpretation of the obtained cluster trees; (6) selecting groups of data points common for all three trees as stable clusters. We have also considered the chemistry of sulfur in inorganic compounds.
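Steps (4)-(6) above, building hierarchical cluster trees under three proximity definitions and keeping only the groups stable across all of them, can be sketched as follows. The three-feature toy data stand in for the particle-composition vectors; the real Halley dust spectra and sulfur-compound classes are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(4)

# Toy stand-in for particle-composition vectors: three well-separated groups
X = np.vstack([rng.normal(c, 0.1, size=(20, 3))
               for c in ([1, 0, 0], [0, 1, 0], [0, 0, 1])])

# Hierarchical cluster trees under three definitions of cluster proximity
labels = {m: fcluster(linkage(X, method=m), t=3, criterion="maxclust")
          for m in ("single", "complete", "average")}

# "Stable clusters": pairs of points grouped together in all three trees
def co_member(lab):
    return lab[:, None] == lab[None, :]

agree = (co_member(labels["single"]) & co_member(labels["complete"])
         & co_member(labels["average"]))
print("pairs grouped identically by all methods:", agree.sum())
```

On cleanly separated data all three linkage rules agree; on real, noisier composition data the intersection shrinks, and the surviving co-memberships are exactly the "stable clusters" the project selected.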
Adapting a Cancer Literacy Measure for Use among Navajo Women
Yost, Kathleen J.; Bauer, Mark C.; Buki, Lydia P.; Austin-Garrison, Martha; Garcia, Linda V.; Hughes, Christine A.; Patten, Christi A.
2016-01-01
Purpose The authors designed a community-based participatory research study to develop and test a family-based behavioral intervention to improve cancer literacy and promote mammography among Navajo women. Methods Using data from focus groups and discussions with a community advisory committee, they adapted an existing questionnaire to assess cancer knowledge, barriers to mammography, and cancer beliefs for use among Navajo women. Questions measuring health literacy, numeracy, self-efficacy, cancer communication, and family support were also adapted. Results The resulting questionnaire was found to have good content validity, and to be culturally and linguistically appropriate for use among Navajo women. Conclusions It is important to consider culture and not just language when adapting existing measures for use with AI/AN populations. English-language versions of existing literacy measures may not be culturally appropriate for AI/AN populations, which could lead to a lack of semantic, technical, idiomatic, and conceptual equivalence, resulting in misinterpretation of study outcomes. PMID:26879319
Te Brake, Hans
2013-01-01
Background Internationally, several initiatives exist to describe standards for post-disaster psychosocial care. Objective This study explored the level of consensus of experts within Europe on a set of recommendations on early psychosocial intervention after shocking events (Dutch guidelines), and to what degree these standards are implemented into mental health care practice. Methods Two hundred and six (mental) health care professionals filled out a questionnaire to assess the extent to which they consider the guidelines’ scope and recommendations relevant and part of the regular practice in their own country. Forty-five European experts from 24 EU countries discussed the guidelines at an international seminar. Results The data suggest overall agreement on the standards although many of the recommendations appear not (yet) to be embedded in everyday practice. Conclusions Although large consensus exists on standards for early psychosocial care, a chasm between norms and practice appears to exist throughout the EU, stressing the general need for investments in guideline development and implementation. PMID:23393613
Pistollato, Francesca; Ohayon, Elan L; Lam, Ann; Langley, Gillian R; Novak, Thomas J; Pamies, David; Perry, George; Trushina, Eugenia; Williams, Robin S B; Roher, Alex E; Hartung, Thomas; Harnad, Stevan; Barnard, Neal; Morris, Martha Clare; Lai, Mei-Chun; Merkley, Ryan; Chandrasekera, P Charukeshi
2016-06-28
Much of Alzheimer disease (AD) research has been traditionally based on the use of animals, which have been extensively applied in an effort to both improve our understanding of the pathophysiological mechanisms of the disease and to test novel therapeutic approaches. However, decades of such research have not effectively translated into substantial therapeutic success for human patients. Here we critically discuss these issues in order to determine how existing human-based methods can be applied to study AD pathology and develop novel therapeutics. These methods, which include patient-derived cells, computational analysis and models, together with large-scale epidemiological studies represent novel and exciting tools to enhance and forward AD research. In particular, these methods are helping advance AD research by contributing multifactorial and multidimensional perspectives, especially considering the crucial role played by lifestyle risk factors in the determination of AD risk. In addition to research techniques, we also consider related pitfalls and flaws in the current research funding system. Conversely, we identify encouraging new trends in research and government policy. In light of these new research directions, we provide recommendations regarding prioritization of research funding. The goal of this document is to stimulate scientific and public discussion on the need to explore new avenues in AD research, considering outcome and ethics as core principles to reliably judge traditional research efforts and eventually undertake new research strategies.
NASA Astrophysics Data System (ADS)
Chintalapudi, V. S.; Sirigiri, Sivanagaraju
2017-04-01
In power system restructuring, pricing the electrical power plays a vital role in cost allocation between suppliers and consumers. In the optimal power dispatch problem, not only the cost of active power generation but also the cost of reactive power generated by the generators should be considered to increase the effectiveness of the formulation. As the characteristics of the reactive power cost curve are similar to those of the active power cost curve, a nonconvex reactive power cost function is formulated. In this paper, a more realistic multi-fuel total cost objective is formulated by considering both active and reactive power costs of generators. The formulated cost function is optimized subject to equality, inequality and practical constraints using the proposed uniform distributed two-stage particle swarm optimization. The proposed algorithm combines uniform distribution of control variables (to start the iterative process from a good initial value) with a two-stage initialization process (to obtain the best final value in fewer iterations), which enhances the convergence characteristics. Results obtained for the considered standard test functions and electrical systems indicate the effectiveness of the proposed algorithm, which obtains more efficient solutions than existing methods. Hence, the proposed method is promising and can be easily applied to optimize power system objectives.
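The "uniform distribution of control variables" idea can be illustrated with a minimal PSO sketch: initial particles are spread evenly over the variable bounds (a lattice shuffled per dimension) rather than sampled purely at random. The two-stage initialization and the power-system cost function and constraints of the paper are not reproduced; the sphere function is a stand-in objective:

```python
import numpy as np

def pso(f, lb, ub, n=20, iters=300, w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = np.random.default_rng(seed)
    d = len(lb)
    # Uniformly spaced (not purely random) starting values per dimension,
    # shuffled column-wise so particles cover the whole range evenly.
    x = np.linspace(lb, ub, n)
    for j in range(d):
        rng.shuffle(x[:, j])
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n, d))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)          # respect variable bounds
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

sphere = lambda z: float(np.sum(z ** 2))    # stand-in test objective
best, val = pso(sphere, np.array([-5.0] * 3), np.array([5.0] * 3))
print(val < 1e-2)
```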
Pistollato, Francesca; Ohayon, Elan L.; Lam, Ann; Langley, Gillian R.; Novak, Thomas J.; Pamies, David; Perry, George; Trushina, Eugenia; Williams, Robin S.B.; Roher, Alex E.; Hartung, Thomas; Harnad, Stevan; Barnard, Neal; Morris, Martha Clare; Lai, Mei-Chun; Merkley, Ryan; Chandrasekera, P. Charukeshi
2016-01-01
Much of Alzheimer disease (AD) research has been traditionally based on the use of animals, which have been extensively applied in an effort to both improve our understanding of the pathophysiological mechanisms of the disease and to test novel therapeutic approaches. However, decades of such research have not effectively translated into substantial therapeutic success for human patients. Here we critically discuss these issues in order to determine how existing human-based methods can be applied to study AD pathology and develop novel therapeutics. These methods, which include patient-derived cells, computational analysis and models, together with large-scale epidemiological studies represent novel and exciting tools to enhance and forward AD research. In particular, these methods are helping advance AD research by contributing multifactorial and multidimensional perspectives, especially considering the crucial role played by lifestyle risk factors in the determination of AD risk. In addition to research techniques, we also consider related pitfalls and flaws in the current research funding system. Conversely, we identify encouraging new trends in research and government policy. In light of these new research directions, we provide recommendations regarding prioritization of research funding. The goal of this document is to stimulate scientific and public discussion on the need to explore new avenues in AD research, considering outcome and ethics as core principles to reliably judge traditional research efforts and eventually undertake new research strategies. PMID:27229915
Hotspots of species richness, threat and endemism for terrestrial vertebrates in SW Europe
NASA Astrophysics Data System (ADS)
Pascual, López-López; Luigi, Maiorano; Alessandra, Falcucci; Emilio, Barba; Luigi, Boitani
2011-09-01
The Mediterranean basin, and the Iberian Peninsula in particular, represents an outstanding "hotspot" of biological diversity with a long history of integration between natural ecosystems and human activities. Using deductive distribution models, and considering both Spain and Portugal, we downscaled traditional range maps for terrestrial vertebrates (amphibians, breeding birds, mammals and reptiles) to the finest possible resolution with the data at hand, and we identified hotspots based on three criteria: i) species richness; ii) vulnerability; and iii) endemism. We also provided a first evaluation of the conservation status of biodiversity hotspots based on these three criteria considering both existing and proposed protected areas (i.e., Natura 2000). For the identification of hotspots, we used a method based on the cumulative distribution functions of species richness values. We found no clear surrogacy among the different types of hotspots in the Iberian Peninsula. The most important hotspots (considering all criteria) are located in the western and southwestern portions of the study area, in the Mediterranean biogeographical region. Existing protected areas are not specifically concentrated in areas of high species richness, with only 5.2% of the hotspots of total richness being currently protected. The Natura 2000 network can potentially constitute an important baseline for protecting vertebrate diversity in the Iberian Peninsula although further improvements are needed. We suggest taking a step forward in conservation planning in the Mediterranean basin, explicitly considering the history of the region as well as its present environmental context. This would allow moving from traditional reserve networks (conservation focused on "patterns") to considerations about the "processes" that generated present biodiversity.
NASA Astrophysics Data System (ADS)
Botha, J. D. M.; Shahroki, A.; Rice, H.
2017-12-01
This paper presents an enhanced method for predicting aerodynamically generated broadband noise produced by a Vertical Axis Wind Turbine (VAWT). The method improves on existing work for VAWT noise prediction and incorporates recently developed airfoil noise prediction models. Inflow-turbulence and airfoil self-noise mechanisms are both considered. Airfoil noise predictions are dependent on aerodynamic input data, and time-dependent Computational Fluid Dynamics (CFD) calculations are carried out to solve for the aerodynamic solution. Analytical flow methods are also benchmarked against the CFD-informed noise prediction results to quantify errors in the former approach. Comparisons to experimental noise measurements for an existing turbine are encouraging. A parameter study is performed and shows the sensitivity of overall noise levels to changes in inflow velocity and inflow turbulence. Noise sources are characterised, and the location and mechanism of the primary sources are determined; inflow-turbulence noise is seen to be the dominant source. The use of CFD calculations is seen to improve the accuracy of noise predictions when compared to the analytic flow solution, as well as showing that, for inflow-turbulence noise sources, blade-generated turbulence dominates the atmospheric inflow turbulence.
NASA Astrophysics Data System (ADS)
Liu, Chunlei; Ding, Wenrui; Li, Hongguang; Li, Jiankun
2017-09-01
Haze removal is a nontrivial work for medium-altitude unmanned aerial vehicle (UAV) image processing because of the effects of light absorption and scattering. The challenges are attributed mainly to image distortion and detail blur during the long-distance and large-scale imaging process. In our work, a metadata-assisted nonuniform atmospheric scattering model is proposed to deal with the aforementioned problems of medium-altitude UAV. First, to better describe the real atmosphere, we propose a nonuniform atmospheric scattering model according to the aerosol distribution, which directly benefits the image distortion correction. Second, considering the characteristics of long-distance imaging, we calculate the depth map, which is an essential clue to modeling, on the basis of UAV metadata information. An accurate depth map reduces the color distortion compared with the depth of field obtained by other existing methods based on priors or assumptions. Furthermore, we use an adaptive median filter to address the problem of fuzzy details caused by the global airlight value. Experimental results on both real flight and synthetic images demonstrate that our proposed method outperforms four other existing haze removal methods.
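Methods like the one above build on the standard single-scattering haze model I = J·t + A·(1 − t), with transmission t = exp(−β·d) driven by scene depth d. The sketch below synthesizes haze from a known depth map and then inverts the model; in the paper's setting, a depth map from UAV metadata would replace the synthetic one, and the nonuniform aerosol term is not modeled here:

```python
import numpy as np

beta, A = 0.8, 0.95                           # scattering coeff., airlight
J = np.random.default_rng(2).random((8, 8))   # haze-free "scene"
d = np.linspace(0.5, 3.0, 64).reshape(8, 8)   # synthetic depth map (km)

t = np.exp(-beta * d)                         # transmission from depth
I = J * t + A * (1 - t)                       # hazy observation

J_rec = (I - A * (1 - t)) / np.maximum(t, 1e-3)   # invert the model
print(np.allclose(J_rec, J))                  # exact recovery with known t
```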
Current gamma knife treatment for ophthalmic branch of primary trigeminal neuralgia
Shan, Guo-Yong; Liang, Hao-Fang; Zhang, Jian-Hua
2011-01-01
AIM To probe into problems existing in gamma knife treatment of the ophthalmic branch of primary trigeminal neuralgia (TN), and propose a safe and effective solution to the problem. METHODS Through sorting the literature reporting gamma knife treatment of refractory TN in recent years, this article analyzed the advantages and problems of gamma knife treatment of primary TN, and proposed a reasonable assessment of existing problems and possible solutions. RESULTS Gamma knife treatment of TN has drawn increasing attention from clinicians due to its unique non-invasiveness, safety and effectiveness, but there are three related issues to be considered. The first is the uncertainty of the optimal dose (70-90 Gy); the second is the difference in radiotherapy target selection (using a single isocenter or two isocenters); and the third is the large variation in pain recurrence (specific treatment methods need to be summarized and improved). CONCLUSION For patients with refractory TN, gamma knife treatment can be selected when medical treatment fails or drug side effects emerge. The analysis of a large number of TN patients receiving gamma knife treatment has shown that this is a safe and effective treatment method. PMID:22553625
Variational approach to stability boundary for the Taylor-Goldstein equation
NASA Astrophysics Data System (ADS)
Hirota, Makoto; Morrison, Philip J.
2015-11-01
Linear stability of inviscid stratified shear flow is studied by developing an efficient method for finding neutral (i.e., marginally stable) solutions of the Taylor-Goldstein equation. The classical Miles-Howard criterion states that stratified shear flow is stable if the local Richardson number JR is greater than 1/4 everywhere. In this work, the case of JR > 0 everywhere is considered by assuming strictly monotonic and smooth profiles of the ambient shear flow and density. It is shown that singular neutral modes that are embedded in the continuous spectrum can be found by solving one-parameter families of self-adjoint eigenvalue problems. The unstable ranges of wavenumber are searched for accurately and efficiently by adopting this method in a numerical algorithm. Because the problems are self-adjoint, the variational method can be applied to ascertain the existence of singular neutral modes. For certain shear flow and density profiles, linear stability can be proven by showing the non-existence of a singular neutral mode. New sufficient conditions, extensions of the Rayleigh-Fjortoft stability criterion for unstratified shear flows, are derived in this manner. This work was supported by JSPS Strategic Young Researcher Overseas Visits Program for Accelerating Brain Circulation # 55053270.
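The Miles-Howard criterion invoked above uses the local Richardson number J_R = N²/(U′)², with N² = −(g/ρ₀) dρ/dz. A finite-difference sketch on illustrative monotonic profiles (not the paper's specific flows):

```python
import numpy as np

z = np.linspace(0.0, 1.0, 201)
U = np.tanh(4.0 * (z - 0.5))             # smooth monotonic shear profile
rho = 1.0 - 0.1 * z                      # stably stratified density
g, rho0 = 9.81, 1.0

dUdz = np.gradient(U, z)
N2 = -(g / rho0) * np.gradient(rho, z)   # Brunt-Vaisala frequency squared
JR = N2 / dUdz**2                        # local Richardson number

# Miles-Howard guarantees stability only if JR > 1/4 everywhere; for this
# profile the criterion fails near the shear center, so it is inconclusive.
print(JR.min() > 0.25)
```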
A Hybrid Tabu Search Heuristic for a Bilevel Competitive Facility Location Model
NASA Astrophysics Data System (ADS)
Küçükaydın, Hande; Aras, Necati; Altınel, I. Kuban
We consider a problem in which a firm or franchise enters a market by locating new facilities where there are existing facilities belonging to a competitor. The firm aims at finding the location and attractiveness of each facility to be opened so as to maximize its profit. The competitor, on the other hand, can react by adjusting the attractiveness of its existing facilities, opening new facilities and/or closing existing ones with the objective of maximizing its own profit. The demand is assumed to be aggregated at certain points in the plane and the facilities of the firm can be located at prespecified candidate sites. We employ Huff's gravity-based rule in modeling the behavior of the customers where the fraction of customers at a demand point that visit a certain facility is proportional to the facility attractiveness and inversely proportional to the distance between the facility site and demand point. We formulate a bilevel mixed-integer nonlinear programming model where the firm entering the market is the leader and the competitor is the follower. In order to find a feasible solution of this model, we develop a hybrid tabu search heuristic which makes use of two exact methods as subroutines: a gradient ascent method and a branch-and-bound algorithm with nonlinear programming relaxation.
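Huff's gravity rule used in the model can be sketched directly: the probability that customers at demand point i visit facility j is proportional to attractiveness a_j and inversely proportional to the distance d_ij. The numbers below are illustrative only; the bilevel game and tabu search are not reproduced:

```python
import numpy as np

attract = np.array([10.0, 5.0, 8.0])            # facility attractiveness a_j
dist = np.array([[2.0, 1.0, 4.0],               # distance from demand i to j
                 [3.0, 6.0, 1.5]])

util = attract / dist                           # Huff utility a_j / d_ij
share = util / util.sum(axis=1, keepdims=True)  # patronage probabilities
print(share.round(3))                           # rows sum to 1
```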
Chlorine measurement in the jet singlet oxygen generator considering the effects of the droplets.
Goodarzi, Mohamad S; Saghafifar, Hossein
2016-09-01
A new method is presented to measure chlorine concentration more accurately than the conventional method in the exhaust gases of a jet-type singlet oxygen generator. One problem in this measurement is the existence of micrometer-sized droplets. In this article, an empirical method is reported to eliminate the effects of the droplets. Two wavelengths from a fiber-coupled LED are adopted and the measurement is made at both selected wavelengths. Chlorine is measured by the two-wavelength method more accurately than by the one-wavelength method, since the droplet term is eliminated from the equations. This method is validated without the basic hydrogen peroxide injection in the reactor. In this case, a pressure meter reading in the diagnostic cell is compared with the optically calculated pressure, which is obtained by the one-wavelength and the two-wavelength methods. It is found that chlorine measurements by the two-wavelength method and the pressure meter are nearly the same, while the one-wavelength method has a significant error due to the droplets.
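The cancellation idea can be sketched in a few lines. Assuming the droplet scattering adds a (nearly) wavelength-independent term S to each measured absorbance, A_i = ε_i·c·L + S, subtracting the two measurements cancels S and yields the concentration. The coefficients below are made-up numbers, not the paper's calibration:

```python
eps1, eps2, L = 0.25, 0.10, 5.0      # absorptivities and path length (assumed)
c_true, S = 0.8, 0.3                 # true concentration, droplet term

A1 = eps1 * c_true * L + S           # measured absorbance at wavelength 1
A2 = eps2 * c_true * L + S           # measured absorbance at wavelength 2

c_one = A1 / (eps1 * L)                     # one-wavelength: biased by S
c_two = (A1 - A2) / ((eps1 - eps2) * L)     # two-wavelength: S cancels

# one-wavelength overestimates; two-wavelength recovers c_true
print(round(c_one, 3), round(c_two, 3))
```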
Garcia, Fernando; Lopez, Francisco J; Cano, Carlos; Blanco, Armando
2009-01-01
Background Regulatory motifs describe sets of related transcription factor binding sites (TFBSs) and can be represented as position frequency matrices (PFMs). De novo identification of TFBSs is a crucial problem in computational biology which includes the issue of comparing putative motifs with one another and with motifs that are already known. The relative importance of each nucleotide within a given position in the PFMs should be considered in order to compute PFM similarities. Furthermore, biological data are inherently noisy and imprecise. Fuzzy set theory is particularly suitable for modeling imprecise data, whereas fuzzy integrals are highly appropriate for representing the interaction among different information sources. Results We propose FISim, a new similarity measure between PFMs, based on the fuzzy integral of the distance of the nucleotides with respect to the information content of the positions. Unlike existing methods, FISim is designed to consider the higher contribution of better conserved positions to the binding affinity. FISim provides excellent results when dealing with sets of randomly generated motifs, and outperforms the remaining methods when handling real datasets of related motifs. Furthermore, we propose a new cluster methodology based on kernel theory together with FISim to obtain groups of related motifs potentially bound by the same TFs, providing more robust results than existing approaches. Conclusion FISim corrects a design flaw of the most popular methods, whose measures favour similarity of low information content positions. We use our measure to successfully identify motifs that describe binding sites for the same TF and to solve real-life problems. In this study the reliability of fuzzy technology for motif comparison tasks is proven. PMID:19615102
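FISim weights positions by their information content, favoring well-conserved positions. A common definition for DNA motifs is IC(p) = 2 + Σ_b f_b·log₂ f_b; the sketch below computes IC for a toy PFM (the fuzzy-integral aggregation of FISim itself is not reproduced):

```python
import numpy as np

pfm = np.array([[0.97, 0.01, 0.01, 0.01],   # highly conserved position
                [0.25, 0.25, 0.25, 0.25],   # uninformative position
                [0.70, 0.10, 0.10, 0.10]])  # partially conserved position

# Information content per position: 2 bits minus the column entropy.
ic = 2.0 + np.sum(pfm * np.log2(pfm), axis=1)
print(ic.round(2))   # conserved positions carry the most information
```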
Quantification of ETS exposure in hospitality workers who have never smoked
2010-01-01
Background Environmental Tobacco Smoke (ETS) was classified as human carcinogen (K1) by the German Research Council in 1998. According to epidemiological studies, the relative risk especially for lung cancer might be twice as high in persons who have never smoked but who are in the highest exposure category, for example hospitality workers. In order to implement these results in the German regulations on occupational illnesses, a valid method is needed to retrospectively assess the cumulative ETS exposure in the hospitality environment. Methods A literature-based review was carried out to locate a method that can be used for the German hospitality sector. Studies assessing ETS exposure using biological markers (for example urinary cotinine, DNA adducts) or questionnaires were excluded. Biological markers are not considered relevant as they assess exposure only over the last hours, weeks or months. Self-reported exposure based on questionnaires also does not seem adequate for medico-legal purposes. Therefore, retrospective exposure assessment should be based on mathematical models to approximate past exposure. Results For this purpose a validated model developed by Repace and Lowrey was considered appropriate. It offers the possibility of retrospectively assessing exposure with existing parameters (such as environmental dimensions, average number of smokers, ventilation characteristics and duration of exposure). The relative risk of lung cancer can then be estimated based on the individual cumulative exposure of the worker. Conclusion In conclusion, having adapted it to the German hospitality sector, an existing mathematical model appears to be capable of approximating the cumulative exposure. However, the level of uncertainty of these approximations has to be taken into account, especially for diseases with a long latency period such as lung cancer. PMID:20704719
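Models of the Repace-Lowrey type build on a well-mixed room mass balance. As a hedged illustration (this is a generic steady-state balance with assumed parameter values, not the exact Repace-Lowrey parameterization), the particle concentration is emission divided by air-exchange times volume, and cumulative exposure is concentration times hours worked:

```python
emission_per_smoker = 14.0      # mg/h of particles per active smoker (assumed)
n_smokers = 4                   # average number of smokers present
ach = 1.5                       # air changes per hour (ventilation)
volume = 300.0                  # room volume, m^3

conc = emission_per_smoker * n_smokers / (ach * volume)   # mg/m^3
hours = 8 * 220 * 20            # 8 h/day, 220 days/yr, 20 years
cumulative = conc * hours       # cumulative exposure, mg*h/m^3
print(round(conc, 4), round(cumulative, 1))
```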
Hosseini, Marjan; Kerachian, Reza
2017-09-01
This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method, which is proposed to assign a removal priority number to each pre-existing station. To design temporal sampling, a new approach is also applied to consider uncertainty caused by lack of information. In this approach, different time lag values are tested by regarding another source of information, which is simulation result of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in available monitoring data, the flexibility of the BME interpolation technique is taken into account in applying soft data and improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations for a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.
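The hexagonal gridding used for spatial sampling can be sketched with a minimal lattice generator (coordinates in meters; the extent and the paper's station-removal priorities are not reproduced). For hexagons of side s, neighboring centers are √3·s apart:

```python
import numpy as np

def hex_centers(side, n_cols, n_rows):
    """Centers of a hexagonal lattice: odd columns are offset half a row."""
    pts = []
    for c in range(n_cols):
        for r in range(n_rows):
            x = c * 1.5 * side
            y = (r + 0.5 * (c % 2)) * np.sqrt(3) * side
            pts.append((x, y))
    return np.array(pts)

grid = hex_centers(3600.0, 4, 3)   # side length 3600 m, as in the abstract
print(len(grid))                   # number of candidate station locations
```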
Secure Indoor Localization Based on Extracting Trusted Fingerprint
Yin, Xixi; Zheng, Yanliu; Wang, Chun
2018-01-01
Indoor localization based on WiFi has attracted a lot of research effort because of the widespread application of WiFi. Fingerprinting techniques have received much attention due to their simplicity and compatibility with existing hardware. However, existing fingerprinting localization algorithms may not resist abnormal received signal strength indication (RSSI), such as unexpected environmental changes, impaired access points (APs) or the introduction of new APs. Traditional fingerprinting algorithms do not consider the problem of new APs and impaired APs in the environment when using RSSI. In this paper, we propose a secure fingerprinting localization (SFL) method that is robust to variable environments, impaired APs and the introduction of new APs. In the offline phase, a voting mechanism and a fingerprint database update method are proposed. We use the mutual cooperation between reference anchor nodes to update the fingerprint database, which can reduce the interference caused by the user measurement data. We analyze the standard deviation of RSSI, mobilize the reference points in the database to vote on APs and then calculate the trust factors of APs based on the voting results. In the online phase, we first make a judgment about the new APs and the broken APs, then extract the secure fingerprints according to the trusted factors of APs and obtain the localization results by using the trusted fingerprints. In the experiment section, we demonstrate the proposed method and find that the proposed strategy can resist abnormal RSSI and can improve the localization accuracy effectively compared with the existing fingerprinting localization algorithms. PMID:29401755
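A simplified version of the voting idea: each reference point votes an AP "trustworthy" when its observed RSSI deviates little from the stored fingerprint, the trust factor is the fraction of supporting votes, and only trusted APs contribute secure fingerprints. Threshold and data are illustrative, not the paper's:

```python
import numpy as np

stored = np.array([[-40, -55, -70],      # fingerprint DB: ref point x AP (dBm)
                   [-42, -60, -68],
                   [-45, -58, -72]])
observed = np.array([[-41, -54, -30],    # AP 2 behaves abnormally
                     [-43, -61, -95],
                     [-44, -57, -20]])

votes = np.abs(observed - stored) < 5.0  # per-reference-point votes
trust = votes.mean(axis=0)               # trust factor per AP
secure = trust > 0.5                     # keep only trusted APs
print(trust, secure)
```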
Secure Indoor Localization Based on Extracting Trusted Fingerprint.
Luo, Juan; Yin, Xixi; Zheng, Yanliu; Wang, Chun
2018-02-05
Indoor localization based on WiFi has attracted a lot of research effort because of the widespread application of WiFi. Fingerprinting techniques have received much attention due to their simplicity and compatibility with existing hardware. However, existing fingerprinting localization algorithms may not resist abnormal received signal strength indication (RSSI), such as unexpected environmental changes, impaired access points (APs) or the introduction of new APs. Traditional fingerprinting algorithms do not consider the problem of new APs and impaired APs in the environment when using RSSI. In this paper, we propose a secure fingerprinting localization (SFL) method that is robust to variable environments, impaired APs and the introduction of new APs. In the offline phase, a voting mechanism and a fingerprint database update method are proposed. We use the mutual cooperation between reference anchor nodes to update the fingerprint database, which can reduce the interference caused by the user measurement data. We analyze the standard deviation of RSSI, mobilize the reference points in the database to vote on APs and then calculate the trust factors of APs based on the voting results. In the online phase, we first make a judgment about the new APs and the broken APs, then extract the secure fingerprints according to the trusted factors of APs and obtain the localization results by using the trusted fingerprints. In the experiment section, we demonstrate the proposed method and find that the proposed strategy can resist abnormal RSSI and can improve the localization accuracy effectively compared with the existing fingerprinting localization algorithms.
Human systems dynamics: Toward a computational model
NASA Astrophysics Data System (ADS)
Eoyang, Glenda H.
2012-09-01
A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high dimension, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high dimension, and nonlinear conceptual model of the complex dynamics of human systems.
An Improved Filtering Method for Quantum Color Image in Frequency Domain
NASA Astrophysics Data System (ADS)
Li, Panchi; Xiao, Hong
2018-01-01
In this paper we investigate the use of the quantum Fourier transform (QFT) in the field of image processing. We consider QFT-based color image filtering operations and their applications in image smoothing, sharpening, and selective filtering using quantum frequency domain filters. The underlying principle used for constructing the proposed quantum filters is to use the principle of the quantum Oracle to implement the filter function. Compared with the existing methods, our method is not only suitable for color images but can also flexibly implement notch filters. We provide the quantum circuit that implements the filtering task and present the results of several simulation experiments on color images. The major advantage of quantum frequency filtering lies in the exploitation of the efficient implementation of the quantum Fourier transform.
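The classical analogue of the filtering pipeline described above is: transform, multiply by a filter mask (here a notch zeroing a symmetric frequency pair), and transform back. In the quantum version, the QFT replaces the FFT and an Oracle-style conditional operation replaces the mask; the sketch below is the classical counterpart only:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((16, 16))                 # stand-in single-channel image
F = np.fft.fftshift(np.fft.fft2(img))      # centered frequency spectrum

notch = np.ones_like(F)
notch[8 + 3, 8] = 0                        # zero a symmetric frequency pair
notch[8 - 3, 8] = 0                        # (keeps the result real-valued)

filtered = np.fft.ifft2(np.fft.ifftshift(F * notch)).real
print(filtered.shape)
```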
Cheating prevention in visual cryptography.
Hu, Chih-Ming; Tzeng, Wen-Guey
2007-01-01
Visual cryptography (VC) is a method of encrypting a secret image into shares such that stacking a sufficient number of shares reveals the secret image. Shares are usually presented in transparencies, and each participant holds a transparency. Most previous research on VC focuses on improving two parameters: pixel expansion and contrast. In this paper, we study the cheating problem in VC and extended VC. We consider the attacks of malicious adversaries who may deviate from the scheme in any way. We present three cheating methods and apply them to attack existing VC or extended VC schemes. We improve one cheat-preventing scheme, and we propose a generic method that converts a VCS to another VCS that has the property of cheating prevention. The overhead of the conversion is near optimal in both contrast degradation and pixel expansion.
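A minimal (2,2) visual cryptography scheme illustrates the setting the cheating attacks target: each secret pixel expands to two subpixels, white pixels get identical patterns on both shares, black pixels get complementary ones, so stacking the transparencies (a pixelwise OR) reveals the secret:

```python
import numpy as np

rng = np.random.default_rng(4)
secret = np.array([[0, 1],                # 0 = white, 1 = black
                   [1, 0]])

patterns = np.array([[1, 0], [0, 1]])     # the two subpixel patterns
h, w = secret.shape
s1 = np.zeros((h, 2 * w), dtype=int)      # share 1
s2 = np.zeros((h, 2 * w), dtype=int)      # share 2
for i in range(h):
    for j in range(w):
        p = patterns[rng.integers(2)]     # random pattern per pixel
        s1[i, 2*j:2*j+2] = p
        s2[i, 2*j:2*j+2] = p if secret[i, j] == 0 else 1 - p

stacked = s1 | s2                         # stacking transparencies = OR
# Black pixels stack to two dark subpixels, white ones to only one.
print(stacked.sum())
```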
Link prediction based on nonequilibrium cooperation effect
NASA Astrophysics Data System (ADS)
Li, Lanxi; Zhu, Xuzhen; Tian, Hui
2018-04-01
Link prediction in complex networks has become a common focus of many researchers. But most existing methods concentrate on neighbors and rarely consider the degree heterogeneity of the two endpoints. Node degree represents the importance or status of an endpoint. We describe large-degree heterogeneity as nonequilibrium between nodes. This nonequilibrium facilitates a stable cooperation between endpoints, so that two endpoints with large degree heterogeneity tend to connect stably. We name this phenomenon the nonequilibrium cooperation effect. Therefore, this paper proposes a link prediction method based on the nonequilibrium cooperation effect to improve accuracy. We first provide a theoretical analysis and then perform experiments on 12 real-world networks, numerically comparing our indices with mainstream methods.
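An illustrative similarity index in the spirit of the abstract scores a candidate link by its common neighbors, boosted by the degree heterogeneity |k_x − k_y| of the endpoints. This is a hedged sketch with a made-up weighting, not the paper's exact index:

```python
graph = {                                  # toy undirected network
    "a": {"b", "c", "d", "e"},             # hub node
    "b": {"a", "c"},
    "c": {"a", "b"},
    "d": {"a"},
    "e": {"a"},
}

def score(x, y, alpha=0.5):
    """Common-neighbor count weighted by endpoint degree heterogeneity."""
    cn = len(graph[x] & graph[y])          # common neighbors
    hetero = abs(len(graph[x]) - len(graph[y]))
    return cn * (1 + alpha * hetero)       # heterogeneity boosts the score

# Hub-leaf pair (heterogeneous) scores above the leaf-leaf pair.
print(score("a", "b"), score("d", "e"))
```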
Review of various treatment methods for the abatement of phenolic compounds from wastewater.
Girish, C R; Murty, V Ramachandra
2012-04-01
Phenol and its derivatives are considered among the most hazardous organic pollutants in industrial wastewater, and they are toxic even at low concentrations. Moreover, the presence of phenol in natural water sources can lead to the formation of other toxic substituted compounds, which has prompted growing concern and rigid limits on the acceptable level of phenol in the environment. The various methods for the treatment of phenol from wastewater streams are briefly reviewed. Technologies such as distillation, liquid-liquid extraction with different solvents, adsorption over activated carbons and polymeric and inorganic adsorbents, membrane pervaporation and membrane-solvent extraction are elucidated. The advantages and disadvantages of the various methods are illustrated and their performances are compared.
NASA Astrophysics Data System (ADS)
Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten
2018-06-01
This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
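The second-order dissipative flow x″ + η·x′ = −∇J(x) described above can be sketched on a Tikhonov-regularized least-squares toy problem, discretized with a simple damped symplectic (semi-implicit) Euler step. The paper's dynamically selected regularization parameter is replaced by a fixed α here, and the toy operator A is an assumption for illustration:

```python
import numpy as np

A = np.vstack([np.eye(4), np.ones((1, 4))])   # toy forward operator
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true                                # exact data
alpha, eta, dt = 1e-3, 2.0, 0.05              # fixed regularization, damping

x, v = np.zeros(4), np.zeros(4)
for _ in range(5000):
    grad = A.T @ (A @ x - b) + alpha * x      # gradient of Tikhonov functional
    v += dt * (-eta * v - grad)               # velocity step (damped dynamics)
    x += dt * v                               # position step (symplectic Euler)

# The flow converges to the stationary Tikhonov solution.
x_ref = np.linalg.solve(A.T @ A + alpha * np.eye(4), A.T @ b)
print(np.allclose(x, x_ref, atol=1e-8))
```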
Rozenbaum, H
Early sexual activity in young women has created new problems in contraception and gynecologic pathology for physicians. None of the existing birth control methods seems ideally adapted to the young: oral contraceptives, the only infallible method, may present adverse effects; intrauterine devices may result in expulsion or infection; diaphragms and spermicides are less effective and not always well accepted by young girls. The physician, however, must bear in mind that whatever inconvenience may result, birth control is always preferable to an unwanted pregnancy or to abortion. Given the seemingly growing incidence of venereal disease and of abnormalities of cervical cytology, physicians must exercise the utmost care and consider a birth control consultation by a young girl as a full medical act.
Wang, Tianbo; Zhou, Wuneng; Zhao, Shouwei; Yu, Weiqin
2014-03-01
In this paper, the robust exponential synchronization problem for a class of uncertain delayed master-slave dynamical systems is investigated using the adaptive control method. Unlike some existing master-slave models, the considered master-slave system includes bounded unmodeled dynamics. In order to compensate for the effect of the unmodeled dynamics and effectively achieve synchronization, a novel adaptive controller with simple update laws is proposed. Moreover, the results are given in terms of LMIs, which can be easily solved by the LMI Toolbox in Matlab. A numerical example is given to illustrate the effectiveness of the method.
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
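The key reformulation above, regression with positivity restrictions on the parameters, can be sketched with a simple projected-gradient nonnegative least-squares solver. The feature matrix and weights below are hypothetical toy data, not the paper's network features.

```python
import numpy as np

# Illustrative sketch: network distances modelled as a linear regression
# with nonnegativity constraints on the feature weights, solved here by
# plain projected gradient descent (data and names are hypothetical).

def nnls_pg(X, y, steps=5000):
    """min ||X w - y||^2 subject to w >= 0, by projected gradient."""
    lr = 1.0 / np.linalg.norm(X, 2) ** 2          # safe step size
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y)               # gradient step
        w = np.maximum(w, 0.0)                    # project onto w >= 0
    return w

# Feature (incidence) matrix for 4 object pairs and 3 binary features.
X = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 0.],
              [1., 1., 1.]])
y = X @ np.array([0.5, 0.0, 1.2])                 # noise-free distances
w = nnls_pg(X, y)
print(np.allclose(w, [0.5, 0.0, 1.2], atol=1e-6))  # -> True
```

Standard errors for such constrained estimates are exactly what the abstract says plain fitting routines do not provide, hence the paper's theoretical and Monte Carlo treatment.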
Creep rupture of polymer-matrix composites
NASA Technical Reports Server (NTRS)
Brinson, H. F.; Morris, D. H.; Griffith, W. I.
1981-01-01
The time-dependent creep-rupture process in graphite-epoxy laminates is examined as a function of temperature and stress level. Moisture effects are not considered. An accelerated characterization method of composite-laminate viscoelastic modulus and strength properties is reviewed. It is shown that lamina-modulus master curves can be obtained using a minimum of normally performed quality-control-type testing. Lamina-strength master curves, obtained by assuming a constant-strain-failure criterion, are presented along with experimental data, and reasonably good agreement is shown to exist between the two. Various phenomenological delayed failure models are reviewed and two (the modified rate equation and the Larson-Miller parameter method) are compared to creep-rupture data with poor results.
Local SIMPLE multi-atlas-based segmentation applied to lung lobe detection on chest CT
NASA Astrophysics Data System (ADS)
Agarwal, M.; Hendriks, E. A.; Stoel, B. C.; Bakker, M. E.; Reiber, J. H. C.; Staring, M.
2012-02-01
For multi-atlas-based segmentation approaches, a segmentation fusion scheme that considers local performance measures may be more accurate than a method that uses a global performance measure. We improve upon an existing segmentation fusion method called SIMPLE and extend it to be localized and suitable for multi-labeled segmentations. We demonstrate the algorithm's performance on 23 CT scans of COPD patients using a leave-one-out experiment. Our algorithm performs significantly better (p < 0.01) than majority voting, STAPLE, and SIMPLE, with a median fissure overlap of 0.45, 0.48, 0.55 and 0.6 for majority voting, STAPLE, SIMPLE, and the proposed algorithm, respectively.
Network Reconstruction From High-Dimensional Ordinary Differential Equations.
Chen, Shizhe; Shojaie, Ali; Witten, Daniela M
2017-01-01
We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.
Wang, Jue; Kwan, Mei-Po; Chai, Yanwei
2018-04-09
Scholars in the fields of health geography, urban planning, and transportation studies have long attempted to understand the relationships among human movement, environmental context, and accessibility. One fundamental question for this research area is how to measure individual activity space, which is an indicator of where and how people have contact with their social and physical environments. Conventionally, standard deviational ellipses, road network buffers, minimum convex polygons, and kernel density surfaces have been used to represent people's activity space, but they all have shortcomings. Inconsistent findings of the effects of environmental exposures on health behaviors/outcomes suggest that the reliability of existing studies may be affected by the uncertain geographic context problem (UGCoP). This paper proposes the context-based crystal-growth activity space as an innovative method for generating individual activity space based on both GPS trajectories and the environmental context. This method not only considers people's actual daily activity patterns based on GPS tracks but also takes into account the environmental context which either constrains or encourages people's daily activity. Using GPS trajectory data collected in Chicago, the results indicate that the proposed new method generates more reasonable activity space when compared to other existing methods. This can help mitigate the UGCoP in environmental health studies.
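The growth idea can be illustrated with a toy grid model. This is a hedged stand-in for the authors' crystal-growth activity space: GPS points are rasterized onto a grid and the region grows outward from visited cells, but only into cells that an accessibility mask allows (a proxy for the constraining environmental context). All names and parameters are illustrative.

```python
import numpy as np
from collections import deque

# Hedged illustration (not the authors' algorithm): grow the activity
# space outward from GPS-visited grid cells, restricted by an
# accessibility mask standing in for the environmental context.

def grow_activity_space(visited, accessible, rings=1):
    """4-neighbour region growing limited to accessible cells."""
    space = visited.copy()
    frontier = deque([(r, c, 0) for r, c in zip(*np.nonzero(visited))])
    while frontier:
        r, c, d = frontier.popleft()
        if d == rings:
            continue
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < space.shape[0] and 0 <= nc < space.shape[1]
                    and accessible[nr, nc] and not space[nr, nc]):
                space[nr, nc] = True
                frontier.append((nr, nc, d + 1))
    return space

visited = np.zeros((5, 5), bool); visited[2, 2] = True   # one GPS cell
accessible = np.ones((5, 5), bool); accessible[:, 3:] = False  # e.g. a river
space = grow_activity_space(visited, accessible, rings=1)
print(space.sum())  # -> 4: the visited cell plus 3 accessible neighbours
```

A buffer-based activity space would have included the inaccessible cells too, which is exactly the kind of over-coverage the context-based method avoids.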
1983-01-01
considered important, complete, and a lasting contribution to existing knowledge. Mechanical Engineering Reports (MS): scientific and technical information pertaining to investigations outside aeronautics considered important, complete, and a lasting contribution to existing knowledge. Aeronautical Notes (AN): information less broad in scope but nevertheless of importance as a contribution to existing knowledge. Laboratory Technical Reports (LTR):
Multiple positive solutions to a coupled systems of nonlinear fractional differential equations.
Shah, Kamal; Khan, Rahmat Ali
2016-01-01
In this article, we study the existence, uniqueness and nonexistence of positive solutions to a highly nonlinear coupled system of fractional order differential equations. Necessary and sufficient conditions for the existence and uniqueness of a positive solution are developed by using Perov's fixed point theorem for the considered problem. Further, we also establish sufficient conditions for the existence of multiple positive solutions, and we develop some conditions under which the considered coupled system of fractional order differential equations has no positive solution. Appropriate examples are provided to demonstrate our results.
Mosecker, Linda; Saeed-Akbari, Alireza
2013-06-01
Nitrogen in austenitic stainless steels and its effect on the stacking fault energy (SFE) has been the subject of intense discussion in the literature. To date, no generally accepted method for SFE calculation exists that can be applied to a wide range of chemical compositions in these systems. Besides the different types of models in use, from first-principles to thermodynamics-based approaches, one main reason is the general lack of experimentally measured SFE values for these steels. Moreover, the respective studies analyzed not only different alloying systems but also different domains of nitrogen content, resulting in contrary conclusions on the effect of nitrogen on the SFE. This work reviews the current state of SFE calculation by computational thermodynamics for the Fe-Cr-Mn-N system. An assessment of the thermodynamic effective Gibbs free energy, [Formula: see text], model for the [Formula: see text] phase transformation considering existing data from different literature and commercial databases is given. Furthermore, we introduce the application of a non-constant, composition-dependent interfacial energy, σγ/ε, required to account for the effect of nitrogen on the SFE in these systems.
Infrastructure optimisation via MBR retrofit: a design guide.
Bagg, W K
2009-01-01
Wastewater management is continually evolving with the development and implementation of new, more efficient technologies. One of these is the Membrane Bioreactor (MBR). Although a relatively new technology in Australia, MBR wastewater treatment has been widely used elsewhere for over 20 years, with thousands of MBRs now in operation worldwide. Over the past 5 years, MBR technology has been enthusiastically embraced in Australia as a potential treatment upgrade option, and via retrofit typically offers two major benefits: (1) more capacity using mostly existing facilities, and (2) very high quality treated effluent. However, infrastructure optimisation via MBR retrofit is not a simple or low-cost solution, and many factors should be carefully evaluated before deciding on this method of plant upgrade. The paper reviews a range of design parameters to consider when evaluating an MBR retrofit solution. Several actual and conceptual case studies are examined to demonstrate both advantages and disadvantages. Whilst optimising existing facilities and producing high-quality water for reuse are powerful drivers, it is suggested that MBRs are perhaps not always the most sustainable whole-of-life solution for a wastewater treatment plant upgrade, especially by way of a retrofit.
Quasilinear parabolic variational inequalities with multi-valued lower-order terms
NASA Astrophysics Data System (ADS)
Carl, Siegfried; Le, Vy K.
2014-10-01
In this paper, we provide an analytical framework for a multi-valued parabolic variational inequality in a cylindrical domain: find a solution within some closed and convex subset, together with a selection of the multi-valued lower-order term, satisfying the inequality, where A is a time-dependent quasilinear elliptic operator and the multi-valued function is assumed to be upper semicontinuous only, so that Clarke's generalized gradient is included as a special case. Thus, parabolic variational-hemivariational inequalities are special cases of the problem considered here. The extension of parabolic variational-hemivariational inequalities to the general class of multi-valued problems considered in this paper is not only of disciplinary interest, but is motivated by the needs of applications. The main goals are as follows. First, we provide an existence theory for the above-stated problem under coercivity assumptions. Second, in the noncoercive case, we establish an appropriate sub-supersolution method that allows us to obtain existence, comparison, and enclosure results. Third, the order structure of the solution set enclosed by sub-supersolutions is revealed. In particular, it is shown that the solution set within the sector of sub-supersolutions is a directed set. As an application, a multi-valued parabolic obstacle problem is treated.
NASA Astrophysics Data System (ADS)
Sharma, Anuj K.; Gupta, Jyoti; Basu, Rikmantra
2018-01-01
A fiber optic sensor is proposed for the identification of healthy and cancerous liver tissues through determination of their corresponding refractive index values. Existing experimental results describing variation of complex refractive index of liver tissues in near infrared (NIR) spectral region are considered for theoretical calculations. The intensity interrogation method with chalcogenide fiber is considered. The sensor's performance is closely analyzed in terms of its sensitivity at multiple operating wavelengths falling in NIR region. Operating at shorter NIR wavelengths leads to greater sensitivity. The effect of design parameters (sensing region length and fiber core diameter), different launching conditions, and fiber glass materials on sensor's performance is examined. The proposed sensor has the potential to provide high sensitivity of liver tissue detection.
Vangaveti, S; Travesset, A
2014-12-28
We present here a method to separate the Stern and diffuse layer in general systems into two regions that can be analyzed separately. The Stern layer can be described in terms of Bjerrum pairing and the diffuse layer in terms of Poisson-Boltzmann theory (monovalent) or strong coupling theory plus a slowly decaying tail (divalent). We consider three anionic phospholipids: phosphatidyl serine, phosphatidic acid, and phosphatidylinositol(4,5)bisphosphate (PIP2), which we describe within a minimal coarse-grained model as a function of ionic concentration. The case of mixed lipid systems is also considered, which shows a high level of binding cooperativity as a function of PIP2 localization. Implications for existing experimental systems of lipid heterogeneities are also discussed.
Three-dimensional flow of Prandtl fluid with Cattaneo-Christov double diffusion
NASA Astrophysics Data System (ADS)
Hayat, Tasawar; Aziz, Arsalan; Muhammad, Taseer; Alsaedi, Ahmed
2018-06-01
This paper investigates the three-dimensional flow of a Prandtl liquid in the presence of improved heat conduction and mass diffusion models. The flow is created by a linearly bidirectional stretchable sheet. Thermal and concentration diffusions are modeled by employing the Cattaneo-Christov double diffusion framework. The boundary layer approach is used to simplify the governing PDEs, and suitable nondimensional similarity variables transform them into strongly nonlinear ODEs. The optimal homotopy analysis method (OHAM) is employed to develop solutions. The roles of various pertinent variables on temperature and concentration are analyzed through graphs. Physical quantities such as surface drag coefficients and heat and mass transfer rates at the wall are also plotted and discussed. Our results indicate that temperature and concentration are decreasing functions of the thermal and concentration relaxation parameters, respectively.
Fraysse, François; Thewlis, Dominic
2014-11-07
Numerous methods exist to estimate the pose of the axes of rotation of the forearm. These include anatomical definitions, such as the conventions proposed by the ISB, and functional methods based on instantaneous helical axes, which are commonly accepted as the modelling gold standard for non-invasive, in-vivo studies. We investigated the validity of a third method, based on regression equations, to estimate the rotation axes of the forearm, and also assessed the accuracy of both ISB methods. Axes obtained from a functional method were considered the reference. Results indicate a large inter-subject variability in axis positions, in accordance with previous studies. Both ISB methods gave the same level of accuracy in axis position estimation. Regression equations seem to improve estimation of the flexion-extension axis but not the pronation-supination axis. Overall, given the large inter-subject variability, the use of regression equations cannot be recommended.
Code of Federal Regulations, 2010 CFR
2010-07-01
40 CFR Part 403, Appendix D (2010-07-01): Selected Industrial Subcategories Considered Dilute for Purposes of the Combined Wastestream Formula, under the General Pretreatment Regulations for Existing and New Sources of Pollution.
NASA Astrophysics Data System (ADS)
Chen, Huabin
2013-08-01
In this paper, the existence, uniqueness, and attraction of strong solutions of stochastic age-structured population systems with diffusion and Poisson jumps are considered. Under a non-Lipschitz condition, with the Lipschitz condition as a special case, existence and uniqueness for such systems are first proved using the Burkholder-Davis-Gundy inequality (B-D-G inequality) and Itô's formula. Then, using a novel inequality technique, some sufficient conditions ensuring the existence of a domain of attraction are established. As a by-product, the exponential stability in mean-square moment of the strong solution for such systems is also discussed.
NASA Astrophysics Data System (ADS)
Sivarami Reddy, N.; Ramamurthy, D. V., Dr.; Prahlada Rao, K., Dr.
2017-08-01
This article addresses simultaneous scheduling of machines, AGVs and tools, where machines are allowed to share tools, considering transfer times of jobs and tools between machines, to generate the best optimal sequences that minimize makespan in a multi-machine Flexible Manufacturing System (FMS). Performance of an FMS is expected to improve through effective utilization of its resources, by proper integration and synchronization of their scheduling. Symbiotic Organisms Search (SOS) is a potent algorithm that has proven to be a good alternative for solving optimization problems such as scheduling. The proposed SOS algorithm is first tested on 22 job sets with makespan as the objective for scheduling of machines and tools, where machines share tools but transfer times of jobs and tools are not considered, and the results are compared with those of existing methods; SOS outperforms them. The same SOS algorithm is then used for simultaneous scheduling of machines, AGVs and tools, where machines share tools and transfer times of jobs and tools are considered, to determine the best optimal sequences that minimize makespan.
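The structure of SOS, with its mutualism, commensalism and parasitism phases, can be sketched on a toy continuous problem. The paper applies SOS to discrete scheduling; the sphere test function, benefit factors and all parameters below are illustrative assumptions only.

```python
import numpy as np

# Hedged sketch of Symbiotic Organisms Search on a toy continuous
# minimisation problem (the paper's application is discrete scheduling).

def sos_minimise(f, dim=2, pop=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (pop, dim))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        best = X[np.argmin(fit)].copy()
        for i in range(pop):
            j = rng.choice([k for k in range(pop) if k != i])
            # Mutualism: i and j both move toward the best organism.
            mutual = (X[i] + X[j]) / 2
            bf1, bf2 = rng.integers(1, 3, size=2)        # benefit factors
            for idx, bf in ((i, bf1), (j, bf2)):
                cand = np.clip(X[idx] + rng.random(dim) * (best - mutual * bf),
                               lo, hi)
                if f(cand) < fit[idx]:
                    X[idx], fit[idx] = cand, f(cand)
            # Commensalism: i benefits from j, j is unaffected.
            cand = np.clip(X[i] + rng.uniform(-1, 1, dim) * (best - X[j]),
                           lo, hi)
            if f(cand) < fit[i]:
                X[i], fit[i] = cand, f(cand)
            # Parasitism: a mutated copy of i tries to replace j.
            parasite = X[i].copy()
            parasite[rng.integers(dim)] = rng.uniform(lo, hi)
            if f(parasite) < fit[j]:
                X[j], fit[j] = parasite, f(parasite)
    return X[np.argmin(fit)], fit.min()

sphere = lambda x: float(np.sum(x ** 2))
best_x, best_f = sos_minimise(sphere)
print(best_f < 1e-2)  # -> True
```

For scheduling, the real vectors would be replaced by job/tool sequences and the moves by sequence operators, but the three-phase greedy structure is the same.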
Raster Data Partitioning for Supporting Distributed GIS Processing
NASA Astrophysics Data System (ADS)
Nguyen Thai, B.; Olasz, A.
2015-08-01
In the geospatial sector, the big data concept has already had an impact. Several studies apply techniques originating in computer science to GIS processing of huge amounts of geospatial data; other studies treat geospatial data as always having been big data (Lee and Kang, 2015). Nevertheless, data acquisition methods have improved substantially, not only in the amount of raw data but also in its spectral, spatial and temporal resolution. A significant portion of big data is geospatial, and the size of such data is growing rapidly, by at least 20% every year (Dasgupta, 2013). Of the increasing volume of raw data, in different formats, representations and for different purposes, only the information derived from these data sets represents valuable results. However, computing capability and processing speed face limitations, even when semi-automatic or automatic procedures are applied to complex geospatial data (Kristóf et al., 2014). Recently, distributed computing has reached many interdisciplinary areas of computer science, including remote sensing and geographic information processing. Cloud computing further requires processing algorithms that can be distributed to handle geospatial big data. The Map-Reduce programming model and distributed file systems have proven their capability to process non-GIS big data, but it is sometimes inconvenient or inefficient to rewrite existing algorithms for the Map-Reduce model, and GIS raster data cannot be partitioned like text-based data, by line or by bytes. Hence, we seek an alternative solution for data partitioning, data distribution and execution of existing algorithms without rewriting them, or with only minor modifications. This paper gives a technical overview of currently available distributed computing environments, as well as of GIS (raster) data partitioning, distribution and distributed processing of GIS algorithms.
A proof-of-concept implementation has been made for raster data partitioning, distribution and processing. The first performance results have been compared against the commercial software ERDAS IMAGINE 2011 and 2014. Partitioning methods depend heavily on the application area; we may therefore consider data partitioning as a preprocessing step before applying processing services to the data. As a proof of concept, we have implemented a simple tile-based partitioning method that splits an image into smaller grids (NxM tiles) and compared the processing time to existing methods using an NDVI calculation. The concept is demonstrated using our own open-source processing framework.
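The tile-based partitioning idea can be sketched as follows. This is an illustrative stand-in for the framework described above: a two-band raster is split into an NxM grid of tiles, NDVI is computed per tile as an independent (distributable) unit of work, and the mosaic is reassembled.

```python
import numpy as np

# Proof-of-concept sketch of tile-based partitioning: split a two-band
# raster into an N x M grid, compute NDVI per tile, reassemble.

def ndvi(red, nir):
    denom = nir + red
    return (nir - red) / np.where(denom == 0, 1, denom)

def tiled_ndvi(red, nir, n=2, m=2):
    rows, cols = red.shape
    out = np.empty_like(red, dtype=float)
    r_edges = np.linspace(0, rows, n + 1, dtype=int)
    c_edges = np.linspace(0, cols, m + 1, dtype=int)
    for r0, r1 in zip(r_edges[:-1], r_edges[1:]):
        for c0, c1 in zip(c_edges[:-1], c_edges[1:]):
            # Each tile is an independent unit of work that could be
            # shipped to a different worker in a distributed setting.
            out[r0:r1, c0:c1] = ndvi(red[r0:r1, c0:c1], nir[r0:r1, c0:c1])
    return out

rng = np.random.default_rng(0)
red = rng.uniform(0.05, 0.2, (100, 100))
nir = rng.uniform(0.3, 0.6, (100, 100))
# The tiled result matches a whole-image computation exactly, because
# NDVI is purely per-pixel (no neighbourhood dependence across tiles).
print(np.allclose(tiled_ndvi(red, nir, 4, 4), ndvi(red, nir)))  # -> True
```

Operations with neighbourhood dependence (e.g. filtering) would additionally need overlapping tile borders, which is one reason partitioning is application-dependent.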
Analytical Methods of Decoupling the Automotive Engine Torque Roll Axis
NASA Astrophysics Data System (ADS)
JEONG, TAESEOK; SINGH, RAJENDRA
2000-06-01
This paper analytically examines the multi-dimensional mounting schemes of an automotive engine-gearbox system when excited by oscillating torques. In particular, the issue of torque roll axis decoupling is analyzed in significant detail since it is poorly understood. New dynamic decoupling axioms are presented and compared with the conventional elastic axis mounting and focalization methods. A linear time-invariant system assumption is made in addition to a proportionally damped system. Only rigid-body modes of the powertrain are considered and the chassis elements are assumed to be rigid. Several simplified physical systems are considered and new closed-form solutions for symmetric and asymmetric engine-mounting systems are developed. These clearly explain the design concepts for the 4-point mounting scheme. Our analytical solutions match the existing design formulations that are only applicable to symmetric geometries. Spectra for all six rigid-body motions are predicted using the alternate decoupling methods and the closed-form solutions are verified. Also, our method is validated by comparing modal solutions with prior experimental and analytical studies. Parametric design studies are carried out to illustrate the methodology. Chief contributions of this research include the development of new or refined analytical models and closed-form solutions along with improved design strategies for torque roll axis decoupling.
Global stabilisation of a class of generalised cascaded systems by homogeneous method
NASA Astrophysics Data System (ADS)
Ding, Shihong; Zheng, Wei Xing
2016-04-01
This paper considers the problem of global stabilisation of a class of generalised cascaded systems. By using the extended adding a power integrator technique, a global controller is first constructed for the driving subsystem. Then, based on the homogeneous properties and a polynomial assumption, it is shown that stabilisation of the driving subsystem implies stabilisation of the overall cascaded system. Meanwhile, by properly choosing some control parameters, the global finite-time stability of the closed-loop cascaded system is also established. The proposed control method has several new features. First, the nonlinear cascaded systems considered in this paper are more general than the conventional ones, since the powers in the nominal part of the driving subsystem are not required to be restricted to ratios of positive odd numbers. Second, the proposed method has some flexible parameters which provide the possibility of designing continuously differentiable controllers for cascaded systems, while the existing controllers designed for such cascaded systems are only continuous. Third, the homogeneous and polynomial conditions adopted for the driven subsystem are easier to verify than the matching conditions that were widely used previously. Furthermore, the efficiency of the proposed control method is validated by its application to finite-time tracking control of a non-holonomic wheeled mobile robot.
Localized transversal-rotational modes in linear chains of equal masses.
Pichard, H; Duclos, A; Groby, J-P; Tournat, V; Gusev, V E
2014-01-01
The propagation and localization of transversal-rotational waves in a two-dimensional granular chain of equal masses are analyzed in this study. The masses are infinitely long cylinders possessing one translational and one rotational degree of freedom. Two dispersive propagating modes are predicted in this granular crystal. By considering the semi-infinite chain with a boundary condition applied at its beginning, the analytical study demonstrates the existence of localized modes, each mode composed of two evanescent modes. Their existence, position (either in the gap between the propagating modes or in the gap above the upper propagating mode), and structure of spatial localization are analyzed as a function of the relative strength of the shear and bending interparticle interactions and for different boundary conditions. This demonstrates the existence of a localized mode in a semi-infinite monatomic chain when transversal-rotational waves are considered, while it is well known that these types of modes do not exist when longitudinal waves are considered.
Parental duties and untreatable genetic conditions
Clarkeburn, H.
2000-01-01
This paper considers parental duties of beneficence and non-maleficence to use prenatal genetic testing for non-treatable conditions. It is proposed that this can be a duty only if the testing is essential to protect the interests of the child, i.e., only if there is a risk of the child being born to a life worse than non-existence. It is argued here that non-existence can be rationally preferred to a severely impaired life. Uncontrollable pain and a lack of any opportunity to develop a continuous self are considered sufficient criteria for such a preference. When parents are at risk of having a child whose life would be worse than non-existence, the parents have a duty to use prenatal testing and a duty to terminate an affected pregnancy. Further, such a duty does not apply to any conditions where the resulting life can be considered better than non-existence. Key Words: Prenatal testing • parental duties • beneficence • non-maleficence PMID:11055047
NASA Astrophysics Data System (ADS)
Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu
2016-06-01
To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.
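The "decomposition and ensemble" principle can be sketched in miniature. The paper uses CEEMD to extract IMFs and SVR tuned by grey wolf optimisation; here, as a hedged stand-in, a simple moving-average split into a smooth and an oscillatory component and least-squares AR(4) models play both roles.

```python
import numpy as np

# Hedged sketch of "decompose, predict each part, ensemble the parts".
# The decomposition (moving average) and the per-component predictor
# (least-squares AR(4)) are illustrative substitutes for CEEMD and
# GWO-tuned SVR.

def ar_forecast(x, order=4):
    """Fit AR(order) by least squares and predict the next value."""
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    coef, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
    return x[-order:] @ coef

def decompose_ensemble_forecast(series, window=8):
    kernel = np.ones(window) / window
    trend = np.convolve(series, kernel, mode="valid")   # smooth component
    resid = series[window - 1:] - trend                 # oscillatory part
    return ar_forecast(trend) + ar_forecast(resid)      # ensemble of parts

t = np.arange(300)
series = 0.05 * t + np.sin(2 * np.pi * t / 12)          # trend + cycle
pred = decompose_ensemble_forecast(series)
true_next = 0.05 * 300 + np.sin(2 * np.pi * 300 / 12)
print(abs(pred - true_next) < 0.05)  # -> True
```

The point of the decomposition is that each component is simpler than the raw series, so a simple model per component can beat one model fitted to the composite signal.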
Efficiently computing exact geodesic loops within finite steps.
Xin, Shi-Qing; He, Ying; Fu, Chi-Wing
2012-06-01
Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, so the resultant loops are restricted to mesh edges and are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm to iteratively evolve an initial closed path on a given mesh into an exact geodesic loop within finite steps. Our proposed algorithm requires only O(k) space and O(mk) time (experimentally), where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to the existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and applies directly to triangular meshes without solving any differential equation with a numerical solver; it can run at interactive speed, e.g., on the order of milliseconds for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. In fact, our algorithm can run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape can also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.
Does Science Presuppose Naturalism (or Anything at All)?
NASA Astrophysics Data System (ADS)
Fishman, Yonatan I.; Boudry, Maarten
2013-05-01
Several scientists, scientific institutions, and philosophers have argued that science is committed to Methodological Naturalism (MN), the view that science, by virtue of its methods, is limited to studying `natural' phenomena and cannot consider or evaluate hypotheses that refer to supernatural entities. While they may in fact exist, gods, ghosts, spirits, and extrasensory or psi phenomena are inherently outside the domain of scientific investigation. Recently, Mahner (Sci Educ 3:357-371, 2012) has taken this position one step further, proposing the more radical view that science presupposes an a priori commitment not just to MN, but also to ontological naturalism (ON), the metaphysical thesis that supernatural entities and phenomena do not exist. Here, we argue that science presupposes neither MN nor ON and that science can indeed investigate supernatural hypotheses via standard methodological approaches used to evaluate any `non-supernatural' claim. Science, at least ideally, is committed to the pursuit of truth about the nature of reality, whatever it may be, and hence cannot exclude the existence of the supernatural a priori, be it on methodological or metaphysical grounds, without artificially limiting its scope and power. Hypotheses referring to the supernatural or paranormal should be rejected not because they violate alleged a priori methodological or metaphysical presuppositions of the scientific enterprise, but rather because they fail to satisfy basic explanatory criteria, such as explanatory power and parsimony, which are routinely considered when evaluating claims in science and everyday life. Implications of our view for science education are discussed.
Resilience as a response to the stigma of depression: a mixed methods analysis.
Boardman, Felicity; Griffiths, Frances; Kokanovic, Renata; Potiriadis, Maria; Dowrick, Christopher; Gunn, Jane
2011-12-01
Stigma has been shown to have a significant influence on help-seeking, adherence to treatment and social opportunities for those experiencing depression. There is a need for studies which examine how the stigma of depression intersects with responses to depression. 161 telephone interviews with people experiencing depressive symptoms, derived from a longitudinal cohort study, were sampled on the basis of their perceptions of stigma around depression. Interview transcripts were searched for references to stigma and analysed thematically. The frequency of the themes was calculated and cross-referenced, producing a meta-theme matrix. Stigma was closely linked to ideas about responsibility for causation and/or continuation of depressive symptoms. Stigmatized individuals felt compelled to take steps to develop their resilience, including drawing on existing support networks and building on positive emotions and personal strengths, in order to counteract this stigma. However, such strategies were burdensome for some; these participants gained relief from relinquishing their personal responsibility. The interview data were briefer than in many interview studies, which narrowed their interpretation but allowed a large sample of participants. When considering how to tailor therapies for those experiencing depressive symptoms, health professionals should consider the interaction of stigma with coping strategies. Many individuals can build on existing relationships and personal strengths to develop resilience; some, however, need first to relinquish the expectation of having sufficient pre-existing resilience within themselves. Crown Copyright © 2011. Published by Elsevier B.V. All rights reserved.
An Isometric Mapping Based Co-Location Decision Tree Algorithm
NASA Astrophysics Data System (ADS)
Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.
2018-05-01
Decision tree (DT) induction has been widely used in different pattern classification tasks. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location mining and decision trees to solve the above-mentioned problems of traditional decision trees. Cl-DT overcomes the shortcomings of existing DT algorithms, which create a node for each value of a given attribute, and achieves higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. To overcome this shortcoming, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping (Isomap) and Cl-DT. Because isometric mapping uses geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distance between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks achieves high accuracy; and (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared to Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.
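The core Isomap idea, replacing Euclidean distances with geodesic distances measured along a k-nearest-neighbour graph, can be sketched as follows. This is an illustrative stand-alone sketch, not the paper's Isomap-based Cl-DT implementation; all names are invented.

```python
# Hedged sketch of the Isomap ingredient: approximate geodesic distances by
# shortest paths on a k-nearest-neighbour graph, so points on a curved
# (non-linear) structure are compared along the structure rather than through
# straight-line Euclidean distance.
import math

def euclid(p, q):
    return math.dist(p, q)

def knn_graph(points, k):
    """Adjacency as a dict of dicts: each point linked to its k nearest."""
    n = len(points)
    g = {i: {} for i in range(n)}
    for i in range(n):
        near = sorted(range(n), key=lambda j: euclid(points[i], points[j]))
        for j in near[1:k + 1]:
            d = euclid(points[i], points[j])
            g[i][j] = d
            g[j][i] = d          # keep the graph symmetric
    return g

def geodesic_distances(points, k=2):
    """All-pairs shortest paths (Floyd-Warshall) on the k-NN graph."""
    n = len(points)
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for i, nbrs in knn_graph(points, k).items():
        for j, w in nbrs.items():
            d[i][j] = min(d[i][j], w)
    for m in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][m] + d[m][j] < d[i][j]:
                    d[i][j] = d[i][m] + d[m][j]
    return d

# Points along a quarter circle: the geodesic (along-the-arc) distance between
# the endpoints exceeds their straight-line Euclidean distance.
arc = [(math.cos(t), math.sin(t)) for t in
       [i * math.pi / 2 / 10 for i in range(11)]]
D = geodesic_distances(arc, k=2)
```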
Husch, Andreas; V Petersen, Mikkel; Gemmar, Peter; Goncalves, Jorge; Hertel, Frank
2018-01-01
Deep brain stimulation (DBS) is a neurosurgical intervention where electrodes are permanently implanted into the brain in order to modulate pathologic neural activity. The post-operative reconstruction of the DBS electrodes is important for an efficient stimulation parameter tuning. A major limitation of existing approaches for electrode reconstruction from post-operative imaging that prevents the clinical routine use is that they are manual or semi-automatic, and thus both time-consuming and subjective. Moreover, the existing methods rely on a simplified model of a straight line electrode trajectory, rather than the more realistic curved trajectory. The main contribution of this paper is that for the first time we present a highly accurate and fully automated method for electrode reconstruction that considers curved trajectories. The robustness of our proposed method is demonstrated using a multi-center clinical dataset consisting of N = 44 electrodes. In all cases the electrode trajectories were successfully identified and reconstructed. In addition, the accuracy is demonstrated quantitatively using a high-accuracy phantom with known ground truth. In the phantom experiment, the method could detect individual electrode contacts with high accuracy and the trajectory reconstruction reached an error level below 100 μm (0.046 ± 0.025 mm). An implementation of the method is made publicly available such that it can directly be used by researchers or clinicians. This constitutes an important step towards future integration of lead reconstruction into standard clinical care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2016-04-01
The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton-Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested on several two-phase flow problems with phase appearance/disappearance phenomena, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behavior in solving the two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. The high-resolution spatial discretization and second-order fully implicit method also demonstrated their capability to significantly reduce numerical errors.
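The matrix-free idea behind JFNK can be sketched on a toy 2x2 nonlinear system: Jacobian-vector products come from finite differences of residual evaluations, so no analytical Jacobian is ever written down. For clarity, the tiny linear solve uses Cramer's rule on matrix-free products with the basis vectors; a production code, as in the work above, would use a Krylov solver such as GMRES. The residual function and tolerances are invented for the example.

```python
# Hedged JFNK sketch: Newton's method where Jacobian-vector products are
# approximated by finite differences of residual evaluations.
import math

def residual(u):
    """Toy nonlinear system F(u) = 0 with a root at (1, 1)."""
    x, y = u
    return [x * x + y - 2.0, x + y * y - 2.0]

def jv(F, u, v, eps=1e-7):
    """Matrix-free J*v via a forward finite difference of F."""
    f0 = F(u)
    f1 = F([ui + eps * vi for ui, vi in zip(u, v)])
    return [(a - b) / eps for a, b in zip(f1, f0)]

def jfnk_solve(F, u, iters=20):
    for _ in range(iters):
        f = F(u)
        # Jacobian action on the basis vectors, obtained matrix-free.
        c0 = jv(F, u, [1.0, 0.0])
        c1 = jv(F, u, [0.0, 1.0])
        det = c0[0] * c1[1] - c1[0] * c0[1]
        # Newton step: solve J s = -F by Cramer's rule on the 2x2 system.
        s0 = (-f[0] * c1[1] + f[1] * c1[0]) / det
        s1 = (-c0[0] * f[1] + c0[1] * f[0]) / det
        u = [u[0] + s0, u[1] + s1]
        if math.hypot(*F(u)) < 1e-10:
            break
    return u

root = jfnk_solve(residual, [2.0, 0.5])
```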
Web mining in soft computing framework: relevance, state of the art and future directions.
Pal, S K; Talwar, V; Mitra, P
2002-01-01
The paper summarizes the different characteristics of Web data, the basic components of Web mining and its different types, and the current state of the art. The reason for considering Web mining a separate field from data mining is explained. The limitations of some of the existing Web mining methods and tools are enunciated, and the significance of soft computing (comprising fuzzy logic (FL), artificial neural networks (ANNs), genetic algorithms (GAs), and rough sets (RSs)) is highlighted. A survey of the existing literature on "soft Web mining" is provided along with the commercially available systems. The prospective areas of Web mining where the application of soft computing needs immediate attention are outlined with justification. Scope for future research in developing "soft Web mining" systems is explained. An extensive bibliography is also provided.
Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment
Seo, Aria; Kim, Yeichang
2017-01-01
As the sharing economic market grows, the number of users is also increasing but many problems arise in terms of reliability between providers and users in the processing of services. The existing methods provide shared economic systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a shared economic environment to solve existing problems. In order to implement a system that can measure and control users’ situation in a shared economic environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economic environment is implemented through analysis of the factors to consider when constructing a CPS. PMID:28805709
Uniqueness of boundary blow-up solutions on exterior domain of RN
NASA Astrophysics Data System (ADS)
Dong, Wei; Pang, Changci
2007-06-01
In this paper, we consider the existence and uniqueness of positive solutions of the degenerate logistic type elliptic equation, where N ≥ 2, D ⊂ ℝ^N is a bounded domain with smooth boundary, and a(x), b(x) are continuous functions on ℝ^N with b(x) ≥ 0, b(x) ≢ 0. We show that under rather general conditions on a(x) and b(x) for large |x|, there exists a unique positive solution. Our results improve the corresponding ones in [W. Dong, Y. Du, Unbounded principal eigenfunctions and the logistic equation on R^N, Bull. Austral. Math. Soc. 67 (2003) 413-427] and [Y. Du, L. Ma, Logistic type equations on R^N by a squeezing method involving boundary blow-up solutions, J. London Math. Soc. (2) 64 (2001) 107-124].
Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment.
Seo, Aria; Jeong, Junho; Kim, Yeichang
2017-08-13
As the sharing economic market grows, the number of users is also increasing but many problems arise in terms of reliability between providers and users in the processing of services. The existing methods provide shared economic systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a shared economic environment to solve existing problems. In order to implement a system that can measure and control users' situation in a shared economic environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economic environment is implemented through analysis of the factors to consider when constructing a CPS.
Topological photonic crystals with zero Berry curvature
NASA Astrophysics Data System (ADS)
Liu, Feng; Deng, Hai-Yao; Wakabayashi, Katsunori
2018-02-01
Topological photonic crystals are designed based on the concept of Zak's phase rather than topological invariants such as the Chern number and spin Chern number, which rely on the existence of a nonvanishing Berry curvature. Our photonic crystals (PCs) are made of pure dielectrics and sit on a square lattice obeying the C4v point-group symmetry. Two varieties of PCs are considered: one closely resembles the electronic two-dimensional Su-Schrieffer-Heeger model, and the other extends this analogy further. In both cases, the topological transitions are induced by adjusting the lattice constants. Topological edge modes (TEMs) are shown to exist within the nontrivial photonic band gaps at the terminations of these PCs. The high efficiency of these TEMs in transferring electromagnetic energy against several types of disorder is demonstrated using the finite-element method.
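The Zak-phase construction the abstract relies on can be illustrated on the 1D Su-Schrieffer-Heeger (SSH) model that one of the PCs resembles. The paper's crystals are two-dimensional; this 1D sketch only shows how a Zak phase is computed as a discretized Berry phase, i.e. the phase of a Wilson loop of eigenvector overlaps. Hopping values are illustrative.

```python
# Hedged sketch: Zak phase of the lower band of the 1D SSH two-band model,
# computed as a discretized Berry phase (Wilson loop of eigenvector overlaps).
# It is pi in the topological regime (intra-cell hopping t1 < inter-cell t2)
# and 0 otherwise.
import cmath, math

def lower_band_state(k, t1, t2):
    """Normalized lower-band eigenvector of H(k) = [[0, h], [h*, 0]]."""
    h = t1 + t2 * cmath.exp(1j * k)
    return (1 / math.sqrt(2), -h.conjugate() / abs(h) / math.sqrt(2))

def zak_phase(t1, t2, n=400):
    """Berry phase of the lower band over the Brillouin zone [0, 2*pi)."""
    ks = [2 * math.pi * i / n for i in range(n + 1)]   # closed loop
    w = 1.0 + 0.0j
    for a, b in zip(ks[:-1], ks[1:]):
        ua = lower_band_state(a, t1, t2)
        ub = lower_band_state(b, t1, t2)
        w *= ua[0].conjugate() * ub[0] + ua[1].conjugate() * ub[1]
    return -cmath.phase(w)   # gauge-invariant for a closed loop
```

A design choice worth noting: the Wilson-loop product of overlaps is gauge-invariant, so no smooth phase convention for the eigenvectors is needed.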
Müller-Schmid, A; Ganss, B; Gorr, T; Hoffmann, W
1993-06-01
Ependymins represent the predominant protein constituents in the cerebrospinal fluid of many teleost fish and are synthesized in meningeal fibroblasts. Here, we present the ependymin sequences from the herring (Clupea harengus) and the pike (Esox lucius). A comparison of ependymin homologous sequences from three different orders of teleost fish (Salmoniformes, Cypriniformes, and Clupeiformes) revealed the highest similarity between Clupeiformes and Cypriniformes. This result is unexpected because it does not reflect current systematics, in which Clupeiformes belong to a different infradivision (Clupeomorpha) from Salmoniformes and Cypriniformes (Euteleostei). Furthermore, in Salmoniformes the evolutionary rate of ependymins seems to be accelerated mainly on the protein level. However, even considering these inconstant rates, neither neighbor-joining trees nor DNA parsimony methods gave any indication that a separate euteleost infradivision exists.
Soyer, Jessica L; Möller, Mareike; Schotanus, Klaas; Connolly, Lanelle R; Galazka, Jonathan M; Freitag, Michael; Stukenbrock, Eva H
2015-06-01
The presence or absence of specific transcription factors, chromatin remodeling machineries, chromatin modification enzymes, post-translational histone modifications and histone variants all play crucial roles in the regulation of pathogenicity genes. Chromatin immunoprecipitation (ChIP) followed by high-throughput sequencing (ChIP-seq) provides an important tool to study genome-wide protein-DNA interactions to help understand gene regulation in the context of native chromatin. ChIP-seq is a convenient in vivo technique to identify, map and characterize occupancy of specific DNA fragments by proteins against which specific antibodies exist or which can be epitope-tagged in vivo. We optimized existing ChIP protocols for use in the wheat pathogen Zymoseptoria tritici and closely related sister species. Here, we provide a detailed method, underscoring which aspects of the technique are organism-specific. Library preparation for Illumina sequencing is described, as this is currently the most widely used ChIP-seq method. One approach for the analysis and visualization of representative sequence data is described; improved tools for these analyses are constantly being developed. Using ChIP-seq with antibodies against H3K4me2, which is considered a mark for euchromatin, or against H3K9me3 and H3K27me3, which are considered marks for heterochromatin, the overall distribution of euchromatin and heterochromatin in the genome of Z. tritici can be determined. Our ChIP-seq protocol was also successfully applied to Z. tritici strains with high levels of melanization or aberrant colony morphology, and to different species of the genus (Z. ardabiliae and Z. pseudotritici), suggesting that our technique is robust. The methods described here provide a powerful framework to study new aspects of chromatin biology and gene regulation in this prominent wheat pathogen. Copyright © 2015 Elsevier Inc. All rights reserved.
An adaptive reentry guidance method considering the influence of blackout zone
NASA Astrophysics Data System (ADS)
Wu, Yu; Yao, Jianyao; Qu, Xiangju
2018-01-01
Reentry guidance has been researched as a popular topic because it is critical for a successful flight. Because existing guidance methods do not take into account the accumulated navigation error of the Inertial Navigation System (INS) in the blackout zone, in this paper an adaptive reentry guidance method is proposed to obtain the optimal reentry trajectory quickly with the target of minimum aerodynamic heating rate. The terminal error in position and attitude can also be reduced with the proposed method. In this method, the whole reentry guidance task is divided into two phases, i.e., the trajectory updating phase and the trajectory planning phase. In the first phase, the idea of model predictive control (MPC) is used, and the receding optimization procedure ensures the optimal trajectory over the next few seconds. In the trajectory planning phase, after the vehicle has flown out of the blackout zone, the optimal reentry trajectory is obtained by online planning to adapt to the navigation information. An effective swarm intelligence algorithm, i.e., the pigeon-inspired optimization (PIO) algorithm, is applied to obtain the optimal reentry trajectory in both phases. Compared to the trajectory updating method, the proposed method can reduce the terminal error by about 30% considering both position and attitude; in particular, the terminal error in height has almost been eliminated. Besides, the PIO algorithm performs better than the particle swarm optimization (PSO) algorithm in both the trajectory updating and trajectory planning phases.
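A minimal pigeon-inspired optimization (PIO) loop, with its two standard operators (a map-and-compass phase and a landmark phase), can be sketched as follows. This is a generic PIO sketch on an invented test function, not the paper's trajectory optimizer; all parameter values are illustrative.

```python
# Hedged PIO sketch: map-and-compass phase (velocity pulled toward the global
# best with a decaying memory factor), then landmark phase (the flock halves
# and moves toward the centre of the better half).
import math, random

random.seed(1)

def sphere(x):
    """Stand-in objective (minimum 0 at the origin)."""
    return sum(xi * xi for xi in x)

def pio(f, dim=2, n=30, map_iters=60, land_iters=20, r_factor=0.2):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    V = [[0.0] * dim for _ in range(n)]
    best = min(X, key=f)[:]
    # Map-and-compass phase.
    for t in range(1, map_iters + 1):
        for i in range(n):
            for d in range(dim):
                V[i][d] = V[i][d] * math.exp(-r_factor * t) + \
                    random.random() * (best[d] - X[i][d])
                X[i][d] += V[i][d]
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand[:]
    # Landmark phase: halve the flock, head for the centre of the keepers.
    for _ in range(land_iters):
        X.sort(key=f)
        X = X[:max(2, len(X) // 2)]
        centre = [sum(x[d] for x in X) / len(X) for d in range(dim)]
        for x in X:
            for d in range(dim):
                x[d] += random.random() * (centre[d] - x[d])
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand[:]
    return best

opt = pio(sphere)
```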
Optimal sensors placement and spillover suppression
NASA Astrophysics Data System (ADS)
Hanis, Tomas; Hromcik, Martin
2012-04-01
A new approach to optimal placement of sensors (OSP) in mechanical structures is presented. In contrast to existing methods, the presented procedure enables a designer to seek for a trade-off between the presence of desirable modes in captured measurements and the elimination of influence of those mode shapes that are not of interest in a given situation. An efficient numerical algorithm is presented, developed from an existing routine based on the Fischer information matrix analysis. We consider two requirements in the optimal sensor placement procedure. On top of the classical EFI approach, the sensors configuration should also minimize spillover of unwanted higher modes. We use the information approach to OSP, based on the effective independent method (EFI), and modify the underlying criterion to meet both of our requirements—to maximize useful signals and minimize spillover of unwanted modes at the same time. Performance of our approach is demonstrated by means of examples, and a flexible Blended Wing Body (BWB) aircraft case study related to a running European-level FP7 research project 'ACFA 2020—Active Control for Flexible Aircraft'.
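The EFI backbone of the procedure can be sketched as follows: compute the Fisher information matrix of the candidate sensor set from the target mode shapes, then iteratively delete the sensor with the smallest effective-independence (leverage) value. This is the classical EFI step only, without the paper's added spillover-minimization term; the mode-shape numbers are invented.

```python
# Hedged EFI sketch for a 2-mode case: rows of phi are candidate sensors,
# columns are target mode shapes. Sensors contributing least to the Fisher
# information Q = Phi^T Phi are removed one at a time.

def efi_select(phi, n_keep):
    """Return indices of the n_keep retained sensors."""
    sensors = list(range(len(phi)))
    while len(sensors) > n_keep:
        # Fisher information Q = Phi^T Phi for the current candidate set.
        a = sum(phi[i][0] * phi[i][0] for i in sensors)
        b = sum(phi[i][0] * phi[i][1] for i in sensors)
        c = sum(phi[i][1] * phi[i][1] for i in sensors)
        det = a * c - b * b

        # Effective-independence value of sensor i:
        # Ed_i = row_i * Q^{-1} * row_i^T  (the leverage of that sensor).
        def ed(i):
            x, y = phi[i]
            return (x * (c * x - b * y) + y * (-b * x + a * y)) / det

        sensors.remove(min(sensors, key=ed))
    return sensors

# Illustrative mode shapes sampled at 6 candidate locations: sensors 0-1 see
# mostly mode 1, sensors 2-3 mostly mode 2, sensors 4-5 add little that is new.
phi = [[1.0, 0.1], [0.9, -0.2], [0.1, 1.0], [-0.2, 0.9],
       [0.05, 0.05], [0.5, 0.5]]
picked = efi_select(phi, 3)
```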
Efficient Computing Budget Allocation for Finding Simplest Good Designs
Jia, Qing-Shan; Zhou, Enlu; Chen, Chun-Hung
2012-01-01
In many applications some designs are easier to implement, require less training data and shorter training time, and consume less storage than the others. Such designs are called simple designs, and are usually preferred over complex ones when they all have good performance. Despite the abundant existing studies on how to find good designs in simulation-based optimization (SBO), there exist few studies on finding simplest good designs. We consider this important problem in this paper, and make the following contributions. First, we provide lower bounds for the probabilities of correctly selecting the m simplest designs with top performance, and selecting the best m such simplest good designs, respectively. Second, we develop two efficient computing budget allocation methods to find m simplest good designs and to find the best m such designs, respectively; and show their asymptotic optimalities. Third, we compare the performance of the two methods with equal allocations over 6 academic examples and a smoke detection problem in wireless sensor networks. We hope that this work brings insight to finding the simplest good designs in general. PMID:23687404
Assessing the Accuracy of Ortho-image using Photogrammetric Unmanned Aerial System
NASA Astrophysics Data System (ADS)
Jeong, H. H.; Park, J. W.; Kim, J. S.; Choi, C. U.
2016-06-01
Smart cameras can not only be operated under a network environment anytime and anywhere, but also cost less than existing photogrammetric UAV systems, since they provide high-resolution images and real-time 3D position and attitude data from a variety of built-in sensors. In this study, the proposed UAV photogrammetric method was applied using a low-cost UAV carrying a smart camera. The elements of interior orientation were acquired through camera calibration. Image triangulation was conducted both with and without consideration of the interior orientation (IO) parameters determined by camera calibration. A Digital Elevation Model (DEM) was constructed using the image data photographed at the target area and the results of the ground control point survey. This study also analyzes the proposed method's applicability by comparing an ortho-image with the results of the ground control point survey. These findings suggest that a smartphone is a very feasible payload for a UAV system, and that smartphones may be loaded onto existing UAVs to play significant direct or indirect roles.
NASA Astrophysics Data System (ADS)
Neff, Patrizio; Lankeit, Johannes; Ghiba, Ionel-Dumitrel; Martin, Robert; Steigmann, David
2015-08-01
We consider a family of isotropic volumetric-isochoric decoupled strain energies based on the Hencky-logarithmic (true, natural) strain tensor log U, where μ > 0 is the infinitesimal shear modulus, κ is the infinitesimal bulk modulus with λ the first Lamé constant, the remaining parameters of the family are dimensionless, F is the gradient of deformation, U is the right stretch tensor, and dev log U is the deviatoric part (the projection onto the traceless tensors) of the strain tensor log U. For small elastic strains, the energies reduce to first order to the classical quadratic Hencky energy, which is known to be not rank-one convex. The main result in this paper is that in plane elastostatics the energies of the family are polyconvex for a suitable range of the dimensionless parameters, extending a previous finding on their rank-one convexity. Our method uses a judicious application of Steigmann's polyconvexity criteria based on the representation of the energy in terms of the principal invariants of the stretch tensor U. These energies also satisfy suitable growth and coercivity conditions. We formulate the equilibrium equations, and we prove the existence of minimizers by the direct methods of the calculus of variations.
Building Energy Simulation Test for Existing Homes (BESTEST-EX) (Presentation)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Judkoff, R.; Neymark, J.; Polly, B.
2011-12-01
This presentation discusses the goals of NREL Analysis Accuracy R&D; BESTEST-EX goals; what BESTEST-EX is; how it works; 'Building Physics' cases; 'Building Physics' reference results; 'utility bill calibration' cases; and limitations and potential future work. The goals of NREL Analysis Accuracy R&D are: (1) provide industry with the tools and technical information needed to improve the accuracy and consistency of analysis methods; (2) reduce the risks associated with purchasing, financing, and selling energy efficiency upgrades; and (3) enhance software and input collection methods considering impacts on accuracy, cost, and time of energy assessments. The BESTEST-EX goals are: (1) test software predictions of retrofit energy savings in existing homes; (2) ensure building physics calculations and utility bill calibration procedures perform up to a minimum standard; and (3) quantify the impact of uncertainties in input audit data and occupant behavior. BESTEST-EX is a repeatable procedure that tests how well audit software predictions compare to the current state of the art in building energy simulation. There is no direct truth standard; however, the reference software have been subjected to validation testing, including comparisons with empirical data.
Identification and ranking of environmental threats with ecosystem vulnerability distributions.
Zijp, Michiel C; Huijbregts, Mark A J; Schipper, Aafke M; Mulder, Christian; Posthuma, Leo
2017-08-24
Responses of ecosystems to human-induced stress vary in space and time, because both stressors and ecosystem vulnerabilities vary in space and time. Presently, ecosystem impact assessments mainly take into account variation in stressors, without considering variation in ecosystem vulnerability. We developed a method to address ecosystem vulnerability variation by quantifying ecosystem vulnerability distributions (EVDs) based on monitoring data of local species compositions and environmental conditions. The method incorporates spatial variation of both abiotic and biotic variables to quantify variation in responses among species and ecosystems. We show that EVDs can be derived based on a selection of locations, existing monitoring data and a selected impact boundary, and can be used in stressor identification and ranking for a region. A case study on Ohio's freshwater ecosystems, with freshwater fish as target species group, showed that physical habitat impairment and nutrient loads ranked highest as current stressors, with species losses higher than 5% for at least 6% of the locations. EVDs complement existing approaches of stressor assessment and management, which typically account only for variability in stressors, by accounting for variation in the vulnerability of the responding ecosystems.
Extending existing structural identifiability analysis methods to mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2018-01-01
The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
Fan, Yue; Wang, Xiao; Peng, Qinke
2017-01-01
Gene regulatory networks (GRNs) play an important role in cellular systems and are important for understanding biological processes. Many algorithms have been developed to infer the GRNs. However, most algorithms only pay attention to the gene expression data but do not consider the topology information in their inference process, while incorporating this information can partially compensate for the lack of reliable expression data. Here we develop a Bayesian group lasso with spike and slab priors to perform gene selection and estimation for nonparametric models. B-spline basis functions are used to capture the nonlinear relationships flexibly and penalties are used to avoid overfitting. Further, we incorporate the topology information into the Bayesian method as a prior. We present the application of our method on DREAM3 and DREAM4 datasets and two real biological datasets. The results show that our method performs better than existing methods and the topology information prior can improve the result.
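The B-spline ingredient of the model can be sketched with the Cox-de Boor recursion, which evaluates the basis functions used to represent a nonlinear regulator-target relationship flexibly. The knot vector and degree below are illustrative choices, not the paper's.

```python
# Hedged sketch: Cox-de Boor recursion for B-spline basis functions, the
# building block used to capture nonlinear relationships in the abstract.

def bspline_basis(i, p, t, knots):
    """Value of basis function N_{i,p} at t for the given knot vector."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (t - knots[i]) / (knots[i + p] - knots[i]) * \
            bspline_basis(i, p - 1, t, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1]) * \
            bspline_basis(i + 1, p - 1, t, knots)
    return left + right

# Cubic basis on a clamped, uniform knot vector over [0, 1].
knots = [0.0, 0.0, 0.0, 0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0, 1.0]
n_basis = len(knots) - 3 - 1   # 7 cubic basis functions
vals = [bspline_basis(i, 3, 0.4, knots) for i in range(n_basis)]
```

In the model described above, each regulator's expression would be expanded in such a basis, and the group-lasso penalty would select or drop a regulator's whole group of basis coefficients together.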
Information Filtering via Heterogeneous Diffusion in Online Bipartite Networks
Zhang, Fu-Guo; Zeng, An
2015-01-01
The rapid expansion of Internet brings us overwhelming online information, which is impossible for an individual to go through all of it. Therefore, recommender systems were created to help people dig through this abundance of information. In networks composed by users and objects, recommender algorithms based on diffusion have been proven to be one of the best performing methods. Previous works considered the diffusion process from user to object, and from object to user to be equivalent. We show in this work that it is not the case and we improve the quality of the recommendation by taking into account the asymmetrical nature of this process. We apply this idea to modify the state-of-the-art recommendation methods. The simulation results show that the new methods can outperform these existing methods in both recommendation accuracy and diversity. Finally, this modification is checked to be able to improve the recommendation in a realistic case. PMID:26125631
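The baseline diffusion process the paper modifies can be sketched on a toy user-object network: resource placed on the target user's collected objects spreads to users and back to objects, and uncollected objects are ranked by the returned resource. The asymmetry studied in the paper would enter by weighting the two spreading steps differently; this sketch keeps both steps uniform, and the toy data are invented.

```python
# Hedged sketch of mass diffusion (the ProbS family) on a bipartite network.

def diffusion_scores(links, target):
    """links: dict user -> set of collected objects. Returns object scores."""
    objs = set().union(*links.values())
    # Step 0: unit resource on each object the target user collected.
    f = {o: (1.0 if o in links[target] else 0.0) for o in objs}
    # Step 1: each object divides its resource equally among its users.
    u_res = {u: sum(f[o] / sum(o in links[v] for v in links)
                    for o in links[u]) for u in links}
    # Step 2: each user divides the received resource among their objects.
    g = {o: 0.0 for o in objs}
    for u, r in u_res.items():
        for o in links[u]:
            g[o] += r / len(links[u])
    return g

links = {"alice": {"a", "b"}, "bob": {"b", "c"}, "carol": {"c", "d"}}
scores = diffusion_scores(links, "alice")
# Among objects alice has not collected, "c" (shared via bob) should outrank
# "d" (reachable only through carol, who shares nothing with alice).
```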
A dynamic integrated fault diagnosis method for power transformers.
Gao, Wensheng; Bai, Cuifen; Liu, Tong
2015-01-01
To diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, with the aim of identifying the most probable failure mode. Then, considering that evidence enters the diagnosis model gradually and that real-world fault diagnosis is a multistep process, a dynamic fault diagnosis mechanism is proposed on top of the integrated model. Unlike existing one-step diagnosis mechanisms, it includes a multistep evidence-selection process that identifies the most effective diagnostic test to perform in the next step. It can therefore reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and its validity is verified.
PMID:25685841
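The evidence-selection idea can be sketched with a toy discrete Bayes model: keep a posterior over failure modes and, at each step, pick the diagnostic test whose outcome is expected to reduce the posterior entropy the most. The failure modes, test names, and probabilities below are all illustrative, not the paper's model.

```python
import numpy as np

MODES = ["winding fault", "overheating", "partial discharge"]
prior = np.array([0.3, 0.4, 0.3])
# P(test result positive | failure mode); all numbers illustrative
TESTS = {"dissolved-gas analysis": np.array([0.20, 0.90, 0.40]),
         "turns-ratio test":       np.array([0.95, 0.10, 0.10]),
         "acoustic detection":     np.array([0.10, 0.20, 0.90])}

def update(p, lik, positive):
    """Bayes update of the mode posterior after one test result."""
    post = p * (lik if positive else 1.0 - lik)
    return post / post.sum()

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def best_next_test(p, tests):
    """Choose the test with the lowest expected posterior entropy."""
    def expected_entropy(lik):
        p_pos = float((p * lik).sum())
        return (p_pos * entropy(update(p, lik, True))
                + (1 - p_pos) * entropy(update(p, lik, False)))
    return min(tests, key=lambda name: expected_entropy(tests[name]))

first = best_next_test(prior, TESTS)                         # most informative first test
post = update(prior, TESTS["turns-ratio test"], positive=True)
```

Iterating "pick best test, observe, update" yields exactly the multistep mechanism described: each round narrows the posterior while skipping tests that would add little information.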
Bozan, Mahir; Akyol, Çağrı; Ince, Orhan; Aydin, Sevcan; Ince, Bahar
2017-09-01
The anaerobic digestion of lignocellulosic wastes is considered an efficient method for alleviating the world's energy shortages and resolving contemporary environmental problems. However, the recalcitrance of lignocellulosic biomass is a barrier to maximizing biogas production. The purpose of this review is to examine the extent to which sequencing methods can be employed to monitor such biofuel conversion processes. From a microbial perspective, we present a detailed insight into anaerobic digesters that utilize lignocellulosic biomass and discuss the benefits and disadvantages of the microbial sequencing techniques that are typically applied. We further evaluate the extent to which a hybrid approach combining existing methods can be used to develop a more in-depth understanding of microbial communities. It is hoped that this deeper knowledge will enhance the reliability and extent of research findings, with the end objective of improving the stability of anaerobic digesters that manage lignocellulosic biomass.
Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification.
Zhang, Jianguang; Jiang, Jianmin
2018-02-01
Existing logistic regression suffers from overfitting and often fails to consider structural information; we propose a novel matrix-based logistic regression to overcome these weaknesses. In the proposed method, 2D matrices are used directly to learn two groups of parameter vectors, one along each dimension, without vectorization, which allows the method to fully exploit the structural information embedded in the 2D matrices. Further, we add a joint [Formula: see text]-norm on the two parameter matrices, which are formed by aligning each group of parameter vectors in columns. This co-regularization term plays two roles: enhancing the effect of regularization and optimizing the rank during learning. With our fast iterative solution, we carried out extensive experiments. The results show that, compared to both traditional tensor-based methods and vector-based regression methods, the proposed solution achieves better performance for matrix data classification.
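A rank-one sketch of the idea: instead of vectorizing each 2D sample X and learning one long weight vector, learn a left vector u and a right vector v so the logit is uᵀXv. The paper learns groups of vectors per dimension plus a joint norm penalty; this stripped-down version, on synthetic data, only illustrates the bilinear, vectorization-free parametrization.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_bilinear_logistic(X, y, lr=0.1, iters=1500, seed=0):
    """Gradient descent on logistic loss with bilinear logit u^T X_i v + c."""
    rng = np.random.default_rng(seed)
    m, n = X.shape[1], X.shape[2]
    u, v, c = 0.1*rng.normal(size=m), 0.1*rng.normal(size=n), 0.0
    for _ in range(iters):
        p = sigmoid(np.einsum('i,nij,j->n', u, X, v) + c)
        r = (p - y) / len(y)                      # per-sample residuals
        u -= lr * np.einsum('n,nij,j->i', r, X, v)
        v -= lr * np.einsum('n,nij,i->j', r, X, u)
        c -= lr * r.sum()
    return u, v, c

rng = np.random.default_rng(1)
a, b = rng.normal(size=5), rng.normal(size=4)
X = rng.normal(size=(300, 5, 4))
y = (np.einsum('i,nij,j->n', a, X, b) > 0).astype(float)   # labels from a true bilinear rule
u, v, c = fit_bilinear_logistic(X, y)
acc = ((sigmoid(np.einsum('i,nij,j->n', u, X, v) + c) > 0.5) == (y > 0.5)).mean()
```

Note the parameter count is m + n rather than m × n, which is the structural-information advantage the abstract refers to.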
NASA Astrophysics Data System (ADS)
Tran, Annelise; Goutard, Flavie; Chamaillé, Lise; Baghdadi, Nicolas; Lo Seen, Danny
2010-02-01
Recent studies have highlighted the potential role of water in the transmission of avian influenza (AI) viruses and the existence of often-interacting variables that determine the survival rate of these viruses in water, the two main variables being temperature and salinity. Remote sensing has been used to map and monitor water bodies for several decades. In this paper, we review satellite image analysis methods used for water detection and characterization, focusing on the main variables that influence AI virus survival in water. Optical and radar imagery are useful for detecting water bodies at different spatial and temporal scales. Methods to monitor the temperature of large water surfaces are also available. By contrast, methods for estimating other relevant water variables, such as salinity, pH, turbidity, and water depth, are not yet considered effective.
Optimal harvesting of a stochastic delay logistic model with Lévy jumps
NASA Astrophysics Data System (ADS)
Qiu, Hong; Deng, Wenmin
2016-10-01
The optimal harvesting problem of a stochastic time delay logistic model with Lévy jumps is considered in this article. We first show that the model has a unique global positive solution and discuss the uniform boundedness of its pth moment with harvesting. Then we prove that the system is globally attractive and asymptotically stable in distribution under our assumptions. Furthermore, we obtain the existence of the optimal harvesting effort by the ergodic method, and then we give the explicit expression of the optimal harvesting policy and maximum yield.
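For concreteness, a representative model of this type (the paper's exact coefficients and jump structure may differ; this is a generic sketch) is the delay logistic equation under constant harvesting effort $E$:

```latex
\mathrm{d}x(t) = x(t^{-})\left[(r - E) - a\,x(t-\tau)\right]\mathrm{d}t
  + \sigma\,x(t^{-})\,\mathrm{d}B(t)
  + \int_{\mathbb{Y}} \gamma(u)\,x(t^{-})\,\widetilde{N}(\mathrm{d}t,\mathrm{d}u),
```

where $B(t)$ is standard Brownian motion and $\widetilde{N}$ is the compensated Poisson random measure of the Lévy jumps; the optimal harvesting problem then seeks the effort $E^{*}$ maximizing the asymptotic expected yield $Y(E)=\lim_{t\to\infty} E\,\mathbb{E}[x(t)]$.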
NASA Technical Reports Server (NTRS)
Barth, Timothy; Charrier, Pierre; Mansour, Nagi N. (Technical Monitor)
2001-01-01
We consider the discontinuous Galerkin (DG) finite element discretization of first-order systems of conservation laws derivable as moments of the kinetic Boltzmann equation. This includes well-known conservation law systems such as the Euler equations. For the class of first-order nonlinear conservation laws equipped with an entropy extension, an energy analysis of the DG method for the Cauchy initial value problem is developed. Using this DG energy analysis, several new variants of existing numerical flux functions are derived and shown to be energy stable.
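As a minimal illustration of an energy-stable flux (a generic textbook example, not one of the paper's new variants), the local Lax-Friedrichs flux adds dissipation proportional to the local wave speed; here it is written for Burgers' equation with f(u) = u²/2:

```python
def llf_flux(uL, uR, f, fprime):
    """Local Lax-Friedrichs (Rusanov) numerical flux for a scalar conservation law.
    The dissipation term -0.5*alpha*(uR - uL) is what provides energy stability."""
    alpha = max(abs(fprime(uL)), abs(fprime(uR)))   # local max wave speed
    return 0.5 * (f(uL) + f(uR)) - 0.5 * alpha * (uR - uL)

burgers = lambda u: 0.5 * u * u
dburgers = lambda u: u

consistent = llf_flux(2.0, 2.0, burgers, dburgers)  # equal states: flux equals f(u)
shock = llf_flux(1.0, 0.0, burgers, dburgers)       # dissipation added across the jump
```

Consistency (the flux reduces to f(u) when both states agree) plus the sign of the dissipation term are exactly the properties an energy analysis of a DG scheme checks.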
A dynamic unilateral contact problem with adhesion and friction in viscoelasticity
NASA Astrophysics Data System (ADS)
Cocou, Marius; Schryve, Mathieu; Raous, Michel
2010-08-01
The aim of this paper is to study an interaction law coupling recoverable adhesion, friction and unilateral contact between two viscoelastic bodies of Kelvin-Voigt type. A dynamic contact problem with adhesion and nonlocal friction is considered and its variational formulation is written as the coupling between an implicit variational inequality and a parabolic variational inequality describing the evolution of the intensity of adhesion. The existence and approximation of variational solutions are analysed, based on a penalty method, some abstract results and compactness properties. Finally, some numerical examples are presented.
A flow-control mechanism for distributed systems
NASA Technical Reports Server (NTRS)
Maitan, J.
1991-01-01
A new approach to the rate-based flow control in store-and-forward networks is evaluated. Existing methods display oscillations in the presence of transport delays. The proposed scheme is based on the explicit use of an embedded dynamic model of a store-and-forward buffer in a controller's feedback loop. It is shown that the use of the model eliminates the oscillations caused by the transport delays. The paper presents simulation examples and assesses the applicability of the scheme in the new generation of high-speed photonic networks where transport delays must be considered.
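The effect can be reproduced with a toy discrete-time simulation: a buffer fed through a fixed transport delay oscillates under plain proportional rate control, while a controller that carries an internal model of the in-flight traffic settles smoothly. A Smith-predictor-style scheme is used here as a generic stand-in for the paper's embedded buffer model; all gains and rates are illustrative.

```python
from collections import deque
import numpy as np

def simulate(predictive, T=300, d=5, service=1.0, ref=20.0, K=0.6, umax=5.0):
    """Buffer occupancy under rate control; rate changes take d steps to arrive."""
    x, pipe, hist = 0.0, deque([0.0] * d), []
    for _ in range(T):
        if predictive:
            # internal model: current level + in-flight traffic - expected service
            x_est = x + sum(pipe) - d * service
        else:
            x_est = x                          # naive: react to measured level only
        u = min(max(K * (ref - x_est), 0.0), umax)
        x = max(0.0, x + pipe.popleft() - service)
        pipe.append(u)                         # this rate arrives d steps from now
        hist.append(x)
    return np.array(hist)

naive = simulate(predictive=False)             # oscillates: delayed feedback
smith = simulate(predictive=True)              # settles: delay compensated
```

With the internal model, the closed loop behaves like its delay-free counterpart, which is the mechanism by which the embedded-model controller removes the delay-induced oscillations.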
Non-Engineered Nanoparticles of C60
Deguchi, Shigeru; Mukai, Sada-atsu; Sakaguchi, Hide; Nonomura, Yoshimune
2013-01-01
We discovered that rubbing bulk solids of C60 between fingertips generates nanoparticles including the ones smaller than 20 nm. Considering the difficulties usually associated with nanoparticle production by pulverisation, formation of nanoparticles by such a mundane method is unprecedented and noteworthy. We also found that nanoparticles of C60 could be generated from bulk solids incidentally without deliberate engineering of any sort. Our findings imply that there exist highly unusual human exposure routes to nanoparticles of C60, and elucidating formation mechanisms of nanoparticles is crucial in assessing their environmental impacts. PMID:23807024
Generalized shrunken type-GM estimator and its application
NASA Astrophysics Data System (ADS)
Ma, C. Z.; Du, Y. L.
2014-03-01
The parameter estimation problem in the linear model is considered when multicollinearity and outliers exist simultaneously. A class of new robust biased estimators, the Generalized Shrunken Type-GM estimators, together with methods for computing them, is established by combining GM estimators with biased estimators, including the ridge, principal components, and Liu estimators. A numerical example shows that the most attractive advantage of these new estimators is that they not only overcome the multicollinearity of the coefficient matrix and the influence of outliers but can also control the influence of leverage points.
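The flavor of such combined estimators can be sketched by pairing a ridge penalty (against multicollinearity) with Huber-type robust weights (against outliers), fitted by iteratively reweighted least squares. This is a generic stand-in, not the paper's exact Type-GM construction, and all data and tuning constants are illustrative.

```python
import numpy as np

def huber_ridge(X, y, lam=1.0, c=1.345, iters=50):
    """Ridge-penalized M-estimator via IRLS: outliers get weight < 1,
    while the ridge term lam*I tames the ill-conditioned X'X."""
    p = X.shape[1]
    beta = np.linalg.solve(X.T @ X + lam*np.eye(p), X.T @ y)       # ridge start
    for _ in range(iters):
        r = y - X @ beta
        scale = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust MAD scale
        w = np.minimum(1.0, c / (np.abs(r)/scale + 1e-12))         # Huber weights
        Xw = X * w[:, None]
        beta = np.linalg.solve(Xw.T @ X + lam*np.eye(p), Xw.T @ y)
    return beta

rng = np.random.default_rng(2)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 0.01*rng.normal(size=n)            # nearly collinear columns
X = np.column_stack([x1, x2])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + 0.1*rng.normal(size=n)
y[:5] += 20.0                                # five gross outliers
ols = np.linalg.lstsq(X, y, rcond=None)[0]   # breaks down on this data
robust = huber_ridge(X, y)
```

On data with both defects, ordinary least squares is destabilized by the near-singular design and dragged by the outliers, while the shrunken robust fit stays close to the truth.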
Monte Carlo simulation to investigate the formation of molecular hydrogen and its deuterated forms
NASA Astrophysics Data System (ADS)
Sahu, Dipen; Das, Ankan; Majumdar, Liton; Chakrabarti, Sandip K.
2015-07-01
H2 is the most abundant interstellar species, and its deuterated forms (HD and D2) are also present in high abundance. The high abundances of these molecules can be explained by considering the chemistry that occurs on interstellar dust. Because of its simplicity, the rate equation method is widely used to study the formation of grain-surface species. However, because the recombination efficiency for the formation of any surface species depends strongly on various physical and chemical parameters, the Monte Carlo method is best suited for addressing the randomness of the processes. We perform Monte Carlo simulations to study the formation of H2, HD and D2 on interstellar ice. The adsorption energies of surface species are the key inputs for the formation of any species on interstellar dust, but the binding energies of deuterated species have yet to be determined with certainty. A zero-point energy correction exists between hydrogenated and deuterated species, which should be considered when modeling the chemistry on interstellar dust. Following some previous studies, we consider various sets of adsorption energies to investigate the formation of these species under diverse physical conditions. As expected, notable differences between the two approaches (the rate equation method and the Monte Carlo method) are observed for the production of these simple molecules on interstellar ice. We introduce two factors, Sf and β, to explain these discrepancies: Sf is a scaling factor that can be used to correlate the discrepancies between the rate equation and Monte Carlo methods, and β indicates the formation efficiency under various conditions, with higher values of β indicating a lower production efficiency. We observe that β increases as the rate of accretion from the gas phase onto the grains decreases.
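The discrepancy between the two approaches can be demonstrated with a toy single-grain model: H atoms accrete, desorb, or recombine, and when the average population is below one atom per grain the rate equation (which uses the mean-field product n²) overestimates H2 production relative to an exact Gillespie simulation. All rates here are illustrative, not fitted astrochemical values.

```python
import numpy as np

F, kdes, krec = 0.005, 0.01, 1.0      # accretion, desorption, recombination rates (s^-1)
T = 2.0e5                             # simulated time, s

# rate-equation steady state: F = kdes*n + 2*krec*n**2 (each event removes 2 atoms)
n_ss = (-kdes + np.sqrt(kdes**2 + 8.0*krec*F)) / (4.0*krec)
rate_eq_h2 = krec * n_ss**2           # mean-field H2 production rate

rng = np.random.default_rng(3)
t, n, n_h2 = 0.0, 0, 0
while t < T:                          # exact Gillespie (kinetic Monte Carlo) simulation
    rates = np.array([F, kdes*n, krec*n*(n - 1)])
    total = rates.sum()
    t += rng.exponential(1.0/total)
    event = rng.choice(3, p=rates/total)
    if event == 0:
        n += 1                        # accretion from the gas phase
    elif event == 1:
        n -= 1                        # desorption back to the gas
    else:
        n -= 2; n_h2 += 1             # recombination into H2
mc_h2 = n_h2 / T
scaling = mc_h2 / rate_eq_h2          # plays the role of a scaling factor like Sf
```

Because recombination needs two atoms on the grain at once, the exact simulation produces H2 more slowly than the mean-field n² term predicts; the ratio of the two rates is the kind of correction factor the abstract introduces.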
Characterization of Nanoscale Gas Transport in Shale Formations
NASA Astrophysics Data System (ADS)
Chai, D.; Li, X.
2017-12-01
Non-Darcy flow behavior is commonly observed in the nano-sized pores of the matrix. Most existing gas flow models characterize non-Darcy flow by empirical or semi-empirical methods without considering the real gas effect. In this paper, a novel layered model with physical meaning is proposed for both ideal and real gas transport in nanopores. It can be further coupled with hydraulic fracturing models and consequently benefit storage evaluation and production prediction for shale gas recovery. It is hypothesized that a nanotube can be divided into a central circular zone, where viscous flow dominates due to intermolecular collisions, and an outer annular zone, where Knudsen diffusion dominates because of collisions between molecules and the wall. The flux is derived by integrating over the two zones joined at this virtual boundary. Subsequently, the model is modified by incorporating the slip effect, the real gas effect, porosity distribution, and tortuosity. Meanwhile, a multi-objective optimization method (MOP) is applied to assist the validation of the analytical model by searching for fitting parameters that are highly localized and contain significant uncertainties. The apparent permeability is finally derived and analyzed with respect to various impact factors. The developed nanoscale gas transport model is well validated by flux data collected from both laboratory experiments and molecular simulations over the entire spectrum of flow regimes. The total molar flux decreases by as much as 43.8% when the real gas effect is considered in the model, and this effect becomes more significant as pore size shrinks. Knudsen diffusion accounts for more than 60% of the total gas flux when the pressure is lower than 0.2 MPa and the pore size is smaller than 50 nm. Overall, the apparent permeability decreases with pressure, though it changes little when the pressure is higher than 5.0 MPa and the pore size is larger than 50 nm.
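The regime dependence described above can be illustrated with a generic slip/transition correction of the Beskok-Karniadakis type (not the authors' layered model); the Knudsen number compares the gas mean free path to the pore size, and large Kn means Knudsen diffusion dominates.

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K

def knudsen(p, T=350.0, d_pore=5e-9, d_mol=0.38e-9):
    """Kn = mean free path / pore diameter, ideal-gas hard-sphere estimate."""
    mfp = KB * T / (math.sqrt(2.0) * math.pi * d_mol**2 * p)
    return mfp / d_pore

def perm_enhancement(p, T=350.0, d_pore=5e-9, alpha=1.2):
    """k_apparent / k_darcy with a Beskok-Karniadakis-type correction,
    simplified to a constant rarefaction coefficient alpha and slip
    coefficient b = -1 (a common choice)."""
    kn = knudsen(p, T, d_pore)
    return (1.0 + alpha * kn) * (1.0 + 4.0 * kn / (1.0 + kn))

tight = perm_enhancement(0.2e6, d_pore=5e-9)    # low pressure, 5 nm pore
loose = perm_enhancement(5.0e6, d_pore=50e-9)   # high pressure, 50 nm pore
```

The sketch reproduces the trend in the abstract: the non-Darcy enhancement is large at low pressure in small pores and nearly vanishes (ratio close to 1) at high pressure in larger pores.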
A global optimization algorithm for protein surface alignment
2010-01-01
Background A relevant problem in drug design is the comparison and recognition of protein binding sites. Binding site recognition is generally based on geometry, often combined with the physico-chemical properties of the site, since the conformation, size, and chemical composition of the protein surface are all relevant to the interaction with a specific ligand. Several matching strategies have been designed for the recognition of protein-ligand binding sites and of protein-protein interfaces, but the problem cannot be considered solved. Results In this paper we propose a new method for local structural alignment of protein surfaces based on continuous global optimization techniques. Given the three-dimensional structures of two proteins, the method finds the isometric transformation (rotation plus translation) that best superimposes the active regions of the two structures. We draw our inspiration from the well-known Iterative Closest Point (ICP) method for three-dimensional (3D) shape registration. Our main contribution is the adoption of controlled random search as a more efficient global optimization approach, along with a new dissimilarity measure. The reported computational experience and comparisons show the viability of the proposed approach. Conclusions Our method performs well in detecting similarity between binding sites when it in fact exists. In the future we plan a more comprehensive evaluation of the method by considering large datasets of non-redundant proteins and applying a clustering technique to the results of all comparisons to classify binding sites. PMID:20920230
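The registration baseline the authors start from can be sketched in a few lines: ICP alternates nearest-neighbour matching with the closed-form (SVD/Kabsch) solution of the best rigid superposition. Their contribution replaces this local optimization with controlled random search and a new dissimilarity measure, which is not shown here; the point clouds below are synthetic.

```python
import numpy as np

def kabsch(P, Q):
    """Closed-form rotation R and translation t minimizing ||R P + t - Q||."""
    cp, cq = P.mean(0), Q.mean(0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # exclude reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(P, Q, iters=30):
    """Iterative Closest Point: match each point of P to its nearest point in Q,
    solve the rigid alignment, and repeat."""
    Pc = P.copy()
    for _ in range(iters):
        d2 = ((Pc[:, None, :] - Q[None, :, :])**2).sum(-1)   # brute-force distances
        R, t = kabsch(Pc, Q[d2.argmin(1)])
        Pc = Pc @ R.T + t
    d2 = ((Pc[:, None, :] - Q[None, :, :])**2).sum(-1)
    return Pc, float(np.sqrt(d2.min(1).mean()))              # aligned cloud and RMSD

rng = np.random.default_rng(4)
Q = rng.normal(size=(100, 3))                                # reference point cloud
ang = np.radians(10.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0, 0.0, 1.0]])
P = Q @ R_true.T + np.array([0.3, -0.2, 0.1])                # rotated, shifted copy
aligned, rmsd = icp(P, Q)
```

ICP converges only to a local optimum of the matching, which is exactly why the paper swaps in a global (controlled random search) strategy for binding-site surfaces.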
Enhancing the estimation of blood pressure using pulse arrival time and two confounding factors.
Baek, Hyun Jae; Kim, Ko Keun; Kim, Jung Soo; Lee, Boreom; Park, Kwang Suk
2010-02-01
A new method of blood pressure (BP) estimation using multiple regression with pulse arrival time (PAT) and two confounding factors was evaluated in clinical and unconstrained monitoring situations. For the first analysis with clinical data, electrocardiogram (ECG), photoplethysmogram (PPG) and invasive BP signals were obtained by a conventional patient monitoring device during surgery. In the second analysis, ECG, PPG and non-invasive BP were measured using systems developed to obtain data under conditions in which the subject was not constrained. To enhance the performance of BP estimation methods, heart rate (HR) and arterial stiffness were considered as confounding factors in regression analysis. The PAT and HR were easily extracted from ECG and PPG signals. For arterial stiffness, the duration from the maximum derivative point to the maximum of the dicrotic notch in the PPG signal, a parameter called TDB, was employed. In two experiments that normally cause BP variation, the correlation between measured BP and the estimated BP was investigated. Multiple-regression analysis with the two confounding factors improved correlation coefficients for diastolic blood pressure and systolic blood pressure to acceptable confidence levels, compared to existing methods that consider PAT only. In addition, reproducibility for the proposed method was determined using constructed test sets. Our results demonstrate that non-invasive, non-intrusive BP estimation can be obtained using methods that can be applied in both clinical and daily healthcare situations.
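The gain from the confounding factors can be illustrated on synthetic data: generate BP driven by PAT, HR, and a TDB-like stiffness surrogate, then compare the fit of a PAT-only regression against the three-factor multiple regression. All coefficients and physiological ranges below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 150
pat = rng.normal(0.25, 0.03, n)    # pulse arrival time, s (illustrative)
hr  = rng.normal(70.0, 10.0, n)    # heart rate, bpm
tdb = rng.normal(0.18, 0.02, n)    # dicrotic-notch timing (stiffness surrogate), s
# synthetic "true" systolic BP with all three effects plus noise
sbp = 190.0 - 280.0*pat + 0.3*hr - 90.0*tdb + rng.normal(0.0, 2.0, n)

def fit_predict(X, y):
    """Least-squares fit with intercept; returns in-sample fitted values."""
    A = np.column_stack([X, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

r_pat  = np.corrcoef(fit_predict(pat[:, None], sbp), sbp)[0, 1]
r_full = np.corrcoef(fit_predict(np.column_stack([pat, hr, tdb]), sbp), sbp)[0, 1]
```

As in the study, adding HR and the stiffness surrogate to the regression raises the correlation between estimated and measured BP relative to the PAT-only model.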
2013-01-01
Background Mass distribution of long-lasting insecticide treated bed nets (LLINs) has led to large increases in LLIN coverage in many African countries. As LLIN ownership levels increase, planners of future mass distributions face the challenge of deciding whether to ignore the nets already owned by households or to take these into account and attempt to target individuals or households without nets. Taking existing nets into account would reduce commodity costs but require more sophisticated, and potentially more costly, distribution procedures. The decision may also have implications for the average age of nets in use and therefore on the maintenance of universal LLIN coverage over time. Methods A stochastic simulation model based on the NetCALC algorithm was used to determine the scenarios under which it would be cost saving to take existing nets into account, and the potential effects of doing so on the age profile of LLINs owned. The model accounted for variability in timing of distributions, concomitant use of continuous distribution systems, population growth, sampling error in pre-campaign coverage surveys, variable net ‘decay’ parameters and other factors including the feasibility and accuracy of identifying existing nets in the field. Results Results indicate that (i) where pre-campaign coverage is around 40% (of households owning at least 1 LLIN), accounting for existing nets in the campaign will have little effect on the mean age of the net population and (ii) even at pre-campaign coverage levels above 40%, an approach that reduces LLIN distribution requirements by taking existing nets into account may have only a small chance of being cost-saving overall, depending largely on the feasibility of identifying nets in the field. Based on existing literature the epidemiological implications of such a strategy is likely to vary by transmission setting, and the risks of leaving older nets in the field when accounting for existing nets must be considered. 
Conclusions Where pre-campaign coverage levels established by a household survey are below 40% we recommend that planners do not take such LLINs into account and instead plan a blanket mass distribution. At pre-campaign coverage levels above 40%, campaign planners should make explicit consideration of the cost and feasibility of accounting for existing LLINs before planning blanket mass distributions. Planners should also consider restricting the coverage estimates used for this decision to only include nets under two years of age in order to ensure that old and damaged nets do not compose too large a fraction of existing net coverage. PMID:23763773
Practical Application Limits of Fuel Cells and Batteries for Zero Emission Vessels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minnehan, John J.; Pratt, Joseph William
Batteries and hydrogen fuel cells provide zero emission power at the point of use. They are studied as an alternative powerplant for maritime vessels through 14 case studies of various ship sizes and routes, ranging from small passenger vessels to the largest cargo ships. The method used was to compare the mass and volume of the required zero emission solution to the mass and volume available on an existing vessel, considering its current engine and fuel storage systems. The results show that it is practically feasible to consider these zero emission technologies for most vessels in the world's fleet. Hydrogen fuel cells proved to be the most capable, while battery systems showed an advantage for high power, short duration missions. The results provide a guide for ship designers to determine the most suitable types of zero emission powerplants to fit a ship based on its size and energy requirements.
Bim-Based Indoor Path Planning Considering Obstacles
NASA Astrophysics Data System (ADS)
Xu, M.; Wei, S.; Zlatanova, S.; Zhang, R.
2017-09-01
At present, 87% of people's activities take place indoors, and indoor navigation has become an active research topic. As the building structures used in daily life become more and more complex, many obstacles impede people's movement, so accurate and efficient indoor path planning is essential. There are still many challenges in indoor navigation: most existing path planning approaches are based on 2D plans, focus on the geometric configuration of indoor space, often ignore the rich semantic information of building components, and mostly consider simple indoor layouts without taking furniture into account. Addressing these shortcomings, this paper uses BIM (IFC) as the input data and concentrates on indoor navigation considering obstacles in multi-floor buildings. After geometric and semantic information is extracted, 2D and 3D space subdivision methods are adopted to build the indoor navigation network and to realize path planning that avoids obstacles. The 3D space subdivision is based on triangular prisms. The two approaches are verified by experiments.
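Once the floor plan has been subdivided, obstacle-avoiding path planning reduces to graph search. A minimal A* over a 2D occupancy grid (a generic sketch of the search step, independent of the paper's BIM extraction and 3D prism subdivision) looks like this:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle). Returns a path
    as a list of (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    openq = [(h(start), 0, start, None)]
    came, gbest = {}, {start: 0}
    while openq:
        _, g, cur, parent = heapq.heappop(openq)
        if cur in came:
            continue
        came[cur] = parent
        if cur == goal:                       # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1
                if ng < gbest.get(nxt, float('inf')):
                    gbest[nxt] = ng
                    heapq.heappush(openq, (ng + h(nxt), ng, nxt, cur))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],     # a wall (e.g. furniture) forces a detour
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

Marking furniture cells as obstacles in the grid is precisely what lets the planner route around them rather than through them.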
Numerical model for the locomotion of spirilla.
Ramia, M
1991-11-01
The swimming of trailing, leading, and bipolar spirilla (with realistic flagellar centerline geometries) is considered. A boundary element method is used to predict the instantaneous swimming velocity, counter-rotation angular velocity, and power dissipation of a given organism as functions of time and the geometry of the organism. Based on such velocities, swimming trajectories have been deduced enabling a realistic definition of mean swimming speeds. The power dissipation normalized in terms of the square of the mean swimming speed is considered to be a measure of hydrodynamic efficiency. In addition, kinematic efficiency is defined as the extent of deviation of the swimming motion from that of a previously proposed ideal corkscrew mechanism. The dependence of these efficiencies on the organism's geometry is examined giving estimates of its optimum dimensions. It is concluded that appreciable correlation exists between the two alternative definitions for many of the geometrical parameters considered. Furthermore, the organism having the deduced optimum dimensions closely resembles the real organism as experimentally observed.
PMID:19431804
NASA Astrophysics Data System (ADS)
Jeon, Haemin; Yu, Jaesang; Lee, Hunsu; Kim, G. M.; Kim, Jae Woo; Jung, Yong Chae; Yang, Cheol-Min; Yang, B. J.
2017-09-01
Continuous fiber-reinforced composites are among the existing advanced materials with the highest potential for future commercialization. Despite their wide use and value, their theoretical mechanisms have not been fully established due to the complexity of their compositions and their unrevealed failure mechanisms. This study proposes an effective three-dimensional damage model of a fibrous composite that combines analytical micromechanics and evolutionary computation. The interface characteristics, debonding damage, and micro-cracks are considered the most influential factors in the toughness and failure behavior of composites, and a constitutive equation considering these factors is explicitly derived in accordance with the micromechanics-based ensemble volume averaged method. The optimal set of model parameters in the analytical model is found using a modified evolutionary computation that accounts for human-induced error. The effectiveness of the proposed formulation is validated by comparing a series of numerical simulations with experimental data from available studies.
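The parameter-search component can be sketched with a minimal (1+1) evolution strategy fitting a toy saturating curve y = a(1 − e^{−bx}) to noisy data. The model, data, and adaptation constants are illustrative; the paper's modified algorithm additionally accounts for human-induced error, which is not modeled here.

```python
import numpy as np

def evolve(loss, x0, sigma=0.3, iters=800, seed=5):
    """(1+1) evolution strategy: keep the parent unless a Gaussian mutation
    improves the loss; adapt the mutation step size on success/failure."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = loss(x)
    for _ in range(iters):
        cand = x + sigma * rng.normal(size=x.size)
        fc = loss(cand)
        if fc < fx:
            x, fx, sigma = cand, fc, sigma * 1.2   # success: widen the search
        else:
            sigma *= 0.98                          # failure: narrow the search
    return x, fx

rng = np.random.default_rng(6)
xs = np.linspace(0.0, 4.0, 30)
ys = 3.0 * (1.0 - np.exp(-1.5 * xs)) + 0.02 * rng.normal(size=xs.size)  # "experimental" data

mse = lambda p: float(np.mean((p[0] * (1.0 - np.exp(-p[1] * xs)) - ys)**2))
(p_a, p_b), final_loss = evolve(mse, [1.0, 1.0])
```

Replacing the toy curve with the micromechanics constitutive equation and the synthetic data with experimental measurements gives the paper's calibration loop: the evolutionary search needs only loss evaluations, not gradients of the analytical model.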
Practice guideline summary: Treatment of restless legs syndrome in adults
Winkelman, John W.; Armstrong, Melissa J.; Allen, Richard P.; Chaudhuri, K. Ray; Ondo, William; Trenkwalder, Claudia; Zee, Phyllis C.; Gronseth, Gary S.; Gloss, David; Zesiewicz, Theresa
2016-01-01
Objective: To make evidence-based recommendations regarding restless legs syndrome (RLS) management in adults. Methods: Articles were classified per the 2004 American Academy of Neurology evidence rating scheme. Recommendations were tied to evidence strength. Results and recommendations: In moderate to severe primary RLS, clinicians should consider prescribing medication to reduce RLS symptoms. Strong evidence supports pramipexole, rotigotine, cabergoline, and gabapentin enacarbil use (Level A); moderate evidence supports ropinirole, pregabalin, and IV ferric carboxymaltose use (Level B). Clinicians may consider prescribing levodopa (Level C). Few head-to-head comparisons exist to suggest agents preferentially. Cabergoline is rarely used (cardiac valvulopathy risks). Augmentation risks with dopaminergic agents should be considered. When treating periodic limb movements of sleep, clinicians should consider prescribing ropinirole (Level A) or pramipexole, rotigotine, cabergoline, or pregabalin (Level B). For subjective sleep measures, clinicians should consider prescribing cabergoline or gabapentin enacarbil (Level A), or ropinirole, pramipexole, rotigotine, or pregabalin (Level B). For patients failing other treatments for RLS symptoms, clinicians may consider prescribing prolonged-release oxycodone/naloxone where available (Level C). In patients with RLS with ferritin ≤75 μg/L, clinicians should consider prescribing ferrous sulfate with vitamin C (Level B). When nonpharmacologic approaches are desired, clinicians should consider prescribing pneumatic compression (Level B) and may consider prescribing near-infrared spectroscopy or transcranial magnetic stimulation (Level C). Clinicians may consider prescribing vibrating pads to improve subjective sleep (Level C). In patients on hemodialysis with secondary RLS, clinicians should consider prescribing vitamin C and E supplementation (Level B) and may consider prescribing ropinirole, levodopa, or exercise (Level C). 
PMID:27856776
Gebremedhin, Daniel H; Weatherford, Charles A
2015-02-01
This is a response to the comment we received on our recent paper "Calculations for the one-dimensional soft Coulomb problem and the hard Coulomb limit." In that paper, we introduced a computational algorithm that is appropriate for solving stiff initial value problems, and which we applied to the one-dimensional time-independent Schrödinger equation with a soft Coulomb potential. We solved for the eigenpairs using a shooting method and hence turned it into an initial value problem. In particular, we examined the behavior of the eigenpairs as the softening parameter approached zero (hard Coulomb limit). The commenters question the existence of the ground state of the hard Coulomb potential, which we inferred by extrapolation of the softening parameter to zero. A key distinction between the commenters' approach and ours is that they consider only the half-line while we considered the entire x axis. Based on mathematical considerations, the commenters consider only a vanishing solution function at the origin, and they question our conclusion that the ground state of the hard Coulomb potential exists. The ground state we inferred resembles a δ(x), and hence it cannot even be addressed based on their argument. For the excited states, there is agreement with the fact that the particle is always excluded from the origin. Our discussion with regard to the symmetry of the excited states is an extrapolation of the soft Coulomb case and is further explained herein.
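The shooting procedure itself is easy to reproduce for the soft Coulomb potential V(x) = −1/√(x²+a²) in atomic units: integrate outward with Numerov from an even start (ψ(0)=1, ψ′(0)=0, so only even states appear) and bisect the energy on the sign of ψ at the box edge. Grid sizes and the energy bracket below are illustrative choices, not the paper's.

```python
import numpy as np

def psi_end(E, a=1.0, L=15.0, n=3000):
    """Numerov integration of psi'' = 2(V - E) psi from x = 0 to L,
    starting even: psi(0) = 1, psi'(0) = 0."""
    x = np.linspace(0.0, L, n + 1)
    h = x[1] - x[0]
    f = 2.0 * (-1.0/np.sqrt(x*x + a*a) - E)
    c = h*h/12.0
    psi = np.empty(n + 1)
    psi[0] = 1.0
    psi[1] = 1.0 + 0.5*h*h*f[0]            # Taylor step using psi'(0) = 0
    for i in range(1, n):
        psi[i+1] = (2.0*psi[i]*(1.0 + 5.0*c*f[i])
                    - psi[i-1]*(1.0 - c*f[i-1])) / (1.0 - c*f[i+1])
    return psi[-1]

def ground_state(a=1.0, lo=-1.5, hi=-0.2, bisections=60):
    """Bisect the energy on the sign of psi(L): below E0 the tail diverges
    with no node; just above E0 one node has appeared."""
    s_lo = np.sign(psi_end(lo, a))
    for _ in range(bisections):
        mid = 0.5*(lo + hi)
        if np.sign(psi_end(mid, a)) == s_lo:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

E0 = ground_state()   # for a = 1 this converges near the known value of about -0.67 a.u.
```

As the softening parameter a is reduced toward zero, the same bracket-and-shoot loop can be repeated to follow the eigenvalues toward the hard Coulomb limit discussed above.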
TAXONOMY OF MEDICAL DEVICES IN THE LOGIC OF HEALTH TECHNOLOGY ASSESSMENT.
Henschke, Cornelia; Panteli, Dimitra; Perleth, Matthias; Busse, Reinhard
2015-01-01
The suitability of general HTA methodology for medical devices is gaining interest as a topic of scientific discourse. Given the broad range of medical devices, there may be differences between groups of devices that affect both the necessity and the methods of their assessment. Our aim is to develop a taxonomy that provides researchers and policy makers with an orientation tool for approaching the assessment of different types of medical devices. Several classifications of medical devices, based on varying rationales and serving different regulatory and reporting purposes, were analyzed in detail to develop a comprehensive taxonomic model. The taxonomy is based on relevant aspects of existing classification schemes, incorporating elements of risk and functionality. Its 9 × 6 matrix distinguishes between the diagnostic or therapeutic nature of devices and considers whether a medical device is used directly by patients, constitutes part of a specific procedure, or can be used in a variety of procedures. We found the relevance of different device categories with regard to HTA to be considerably variable, ranging from high to low. Existing medical device classifications cannot be used for HTA as they are based on different underlying logics. The developed taxonomy combines device classification schemes used for different purposes. It aims to provide decision makers with a tool enabling them to consider device characteristics in detail across more than one dimension. The placement of device groups in the matrix can support decisions on the necessity of conducting a full HTA.
Crystal defect studies using x-ray diffuse scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larson, B.C.
1980-01-01
Microscopic lattice defects such as point (single atom) defects, dislocation loops, and solute precipitates are characterized by local electronic density changes at the defect sites and by distortions of the lattice structure surrounding the defects. The effect of these interruptions of the crystal lattice on the scattering of x-rays is considered in this paper, and examples are presented of the use of the diffuse scattering to study the defects. X-ray studies of self-interstitials in electron irradiated aluminum and copper are discussed in terms of the identification of the interstitial configuration. Methods for detecting the onset of point defect aggregation into dislocation loops are considered, and new techniques for the determination of separate size distributions for vacancy loops and interstitial loops are presented. Direct comparisons of dislocation loop measurements by x-rays with existing electron microscopy studies of dislocation loops indicate agreement for larger size loops, but x-ray measurements report higher concentrations in the smaller loop range. Methods for distinguishing between loops and three-dimensional precipitates are discussed and possibilities for detailed studies considered. A comparison of dislocation loop size distributions obtained from integral diffuse scattering measurements with those from TEM shows a discrepancy in the smaller sizes similar to that described above.
20 CFR 718.305 - Presumption of pneumoconiosis.
Code of Federal Regulations, 2011 CFR
2011-04-01
... existence of a totally disabling respiratory or pulmonary impairment, for purposes of applying the... other evidence demonstrates the existence of a totally disabling respiratory or pulmonary impairment... the miner's condition shall be considered to be sufficient to establish the existence of a totally...
20 CFR 718.305 - Presumption of pneumoconiosis.
Code of Federal Regulations, 2010 CFR
2010-04-01
... existence of a totally disabling respiratory or pulmonary impairment, for purposes of applying the... other evidence demonstrates the existence of a totally disabling respiratory or pulmonary impairment... the miner's condition shall be considered to be sufficient to establish the existence of a totally...
Groundwater pumping effects on contaminant loading management in agricultural regions.
Park, Dong Kyu; Bae, Gwang-Ok; Kim, Seong-Kyun; Lee, Kang-Kun
2014-06-15
Groundwater pumping changes the behavior of subsurface water, including the location of the water table and the characteristics of the flow system, and eventually affects the fate of contaminants such as nitrate from agricultural fertilizers. The objectives of this study were to demonstrate the importance of considering existing pumping conditions in contaminant loading management and to develop a management model that yields a contaminant loading design more appropriate and practical for agricultural regions where groundwater pumping is common. This study found that optimal designs for contaminant loading can differ once existing pumping conditions are taken into account, and that predictions of contamination and contaminant loading management that ignore pumping activities may be unrealistic. Motivated by these results, a management model optimizing the permissible on-ground contaminant loading mass together with pumping rates was developed and applied to field investigation and monitoring data from Icheon, Korea. The analytical solution for 1-D unsaturated solute transport was integrated with a 3-D saturated solute transport model in order to approximate the fate of contaminants loaded periodically from on-ground sources. The model was further extended to manage agricultural contaminant loading in regions where groundwater extraction is concentrated in a specific period of time, such as the rice-growing season, using a method that approximates contaminant leaching to a fluctuating water table. The results showed that simultaneous management of groundwater quantity and quality is effective and appropriate for agricultural contaminant loading management, and that the model developed in this study, which can account for time-variant pumping, can be used to accurately estimate and reasonably manage contaminant loading in agricultural areas. Copyright © 2014 Elsevier Ltd. All rights reserved.
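As a flavor of the kind of 1-D analytical transport solution such a model couples to a 3-D simulator, the classical Ogata-Banks solution for 1-D advection-dispersion with a continuous source can be sketched. This is an illustration only, not the study's specific unsaturated-zone solution; the velocity and dispersion values below are hypothetical.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, C0=1.0):
    """Ogata-Banks solution for 1-D advection-dispersion with a
    continuous concentration C0 applied at x = 0 for t > 0:
    C = (C0/2) [erfc((x - v t)/(2 sqrt(D t)))
              + exp(v x / D) erfc((x + v t)/(2 sqrt(D t)))]."""
    arg1 = (x - v * t) / (2.0 * np.sqrt(D * t))
    arg2 = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * C0 * (erfc(arg1) + np.exp(v * x / D) * erfc(arg2))

# Hypothetical values: v = 0.1 m/d seepage velocity, D = 0.05 m^2/d,
# evaluated along a 10 m profile after 30 days of loading.
x = np.linspace(0.0, 10.0, 101)
C = ogata_banks(x, t=30.0, v=0.1, D=0.05)
```

At the source (x = 0) the solution reproduces the boundary concentration exactly, and the profile decays monotonically downgradient, which is the behavior a periodic-loading scheme would superpose in time.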
Manganello, Jennifer A; Henderson, Vani R; Jordan, Amy; Trentacoste, Nicole; Martin, Suzanne; Hennessy, Michael; Fishbein, Martin
2010-07-01
Many studies of sexual messages in media utilize content analysis methods. At times, this research assumes that researchers and trained coders using content analysis methods interpret media content in the same way as the intended audience. This article compares adolescents' perceptions of the presence or absence of sexual content on television to those of researchers using three different coding schemes. Results from this formative research study suggest that participants and researchers are most likely to agree on content categories assessing manifest content, and that differences exist among adolescents who view sexual messages on television. Researchers using content analysis methods to examine sexual content in media and media effects on sexual behavior should consider identifying how audience characteristics may affect interpretation of content, and should account for audience perspectives in content analysis study protocols when appropriate for study goals.
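One standard way to quantify agreement between trained coders and audience members on a present/absent coding decision is Cohen's kappa, which corrects raw agreement for chance. The abstract does not state which statistic the study used, so this is an illustrative sketch with hypothetical labels.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels
    to the same items: (observed - expected) / (1 - expected)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Chance agreement from each rater's marginal label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1.0 - expected)

# Hypothetical codes: 1 = sexual content present, 0 = absent.
coders = [1, 0, 1, 1, 0, 0, 1, 0]
teens  = [1, 0, 1, 0, 0, 1, 1, 0]
print(round(cohens_kappa(coders, teens), 3))  # → 0.5
```

Here raw agreement is 75%, but because both raters use each label half the time, half of that agreement is expected by chance, leaving kappa at 0.5.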
Trapping penguins with entangled B mesons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dadisman, Ryan; Gardner, Susan; Yan, Xinshuai
2016-01-08
The first direct observation of time-reversal (T) violation in the entangled B meson system was reported by the BaBar Collaboration, employing the method of Bañuls and Bernabéu. Given this, we generalize their analysis of the time-dependent T-violating asymmetry (AT) to consider different choices of CP tags for which the dominant amplitudes have the same weak phase. As one application, we find that it is possible to measure departures from the universality of sin(2β) directly. If sin(2β) is universal, as in the Standard Model, the method permits the direct determination of penguin effects in these channels. This method, although no longer a strict test of T, can yield tests of sin(2β) universality, or, alternatively, of penguin effects, with much improved precision even with existing data sets.
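For orientation, the familiar Standard Model time-dependent CP asymmetry in b → cc̄s "golden" channels has the form A(t) ≈ sin(2β) sin(Δm_d t). The sketch below evaluates that textbook formula with approximate world-average inputs; it is not the paper's generalized AT construction.

```python
import math

def cp_asymmetry(t_ps, sin2beta=0.70, dm_d=0.51):
    """Standard-Model time-dependent CP asymmetry
    A(t) = sin(2 beta) * sin(dm_d * t) for a b -> c cbar s
    golden-mode decay; t in ps, dm_d in ps^-1 (approximate values)."""
    return sin2beta * math.sin(dm_d * t_ps)

# Asymmetry sampled over a few B-meson lifetimes (tau_B ~ 1.5 ps).
ts = [0.5 * k for k in range(9)]          # 0 .. 4 ps
curve = [cp_asymmetry(t) for t in ts]
```

The oscillation is bounded by sin(2β) itself, which is why channel-by-channel departures from a universal sin(2β), as the abstract discusses, are a sensitive probe of penguin contributions.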
A new license plate extraction framework based on fast mean shift
NASA Astrophysics Data System (ADS)
Pan, Luning; Li, Shuguang
2010-08-01
License plate extraction is considered the most crucial step of an automatic license plate recognition (ALPR) system. In this paper, a region-based hybrid license plate detection method is proposed to handle practical problems with complex backgrounds that contain large amounts of distracting information. In this method, coarse license plate location is carried out first to obtain the head part of the vehicle. Then a new fast Mean Shift method based on random sampling of the Kernel Density Estimate (KDE) is adopted to segment the color vehicle images and obtain candidate license plate regions. The remarkable speed-up it brings makes Mean Shift segmentation more suitable for this application. Feature extraction and classification are then used to accurately separate the license plate from the other candidate regions. Finally, tilted license plates are rectified for subsequent recognition steps.
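The core idea of Mean Shift segmentation is that every point climbs to a mode of the underlying density. A minimal flat-kernel sketch in a synthetic 3-D color space is shown below; the paper's contribution, random-sampling KDE for speed, is not reproduced here, and the cluster values are purely illustrative.

```python
import numpy as np

def mean_shift(points, bandwidth, n_iter=30):
    """Flat-kernel mean shift: each mode repeatedly moves to the mean
    of the data points lying within `bandwidth` of it, so points
    belonging to the same density mode converge to the same location."""
    modes = points.astype(float).copy()
    for _ in range(n_iter):
        for i in range(len(modes)):
            d = np.linalg.norm(points - modes[i], axis=1)
            modes[i] = points[d < bandwidth].mean(axis=0)
    return modes

# Two well-separated synthetic color clusters, standing in for plate
# pixels vs. background pixels (hypothetical values).
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.5, (30, 3)),
                 rng.normal(10.0, 0.5, (30, 3))])
modes = mean_shift(pts, bandwidth=3.0)
```

This naive version costs O(n²) per iteration over all points, which is exactly the bottleneck that motivates KDE subsampling in the paper.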
Fast Low-Rank Bayesian Matrix Completion With Hierarchical Gaussian Prior Models
NASA Astrophysics Data System (ADS)
Yang, Linxiao; Fang, Jun; Duan, Huiping; Li, Hongbin; Zeng, Bing
2018-06-01
The problem of low-rank matrix completion is considered in this paper. To exploit the underlying low-rank structure of the data matrix, we propose a hierarchical Gaussian prior model, where columns of the low-rank matrix are assumed to follow a Gaussian distribution with zero mean and a common precision matrix, and a Wishart distribution is specified as a hyperprior over the precision matrix. We show that such a hierarchical Gaussian prior has the potential to encourage a low-rank solution. Based on the proposed hierarchical prior model, a variational Bayesian method is developed for matrix completion, where the generalized approximate message passing (GAMP) technique is embedded into the variational Bayesian inference in order to circumvent cumbersome matrix inverse operations. Simulation results show that our proposed method demonstrates superiority over existing state-of-the-art matrix completion methods.
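The intuition behind the column prior can be seen in a toy construction: if every column shares a zero-mean Gaussian with a common covariance of rank r, each column lies in the same r-dimensional subspace, so the assembled matrix has rank at most r. This sketch only illustrates that mechanism; it is not the paper's VB-GAMP algorithm or its Wishart hyperprior.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, r = 20, 50, 3

# Common column covariance Sigma = U U^T of rank r. Drawing
# x_j ~ N(0, Sigma) is equivalent to x_j = U z_j with z_j ~ N(0, I_r),
# so every column lies in range(U) and X = [x_1 ... x_m] has rank <= r.
U = rng.normal(size=(n, r))
X = U @ rng.normal(size=(r, m))

s = np.linalg.svd(X, compute_uv=False)
numerical_rank = int(np.sum(s > 1e-10 * s[0]))
print(numerical_rank)  # → 3
```

A full matrix-completion method would infer such a structured covariance from partial observations rather than fix it, which is what the hierarchical Wishart hyperprior makes possible.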
A study on reliability of power customer in distribution network
NASA Astrophysics Data System (ADS)
Liu, Liyuan; Ouyang, Sen; Chen, Danling; Ma, Shaohua; Wang, Xin
2017-05-01
The existing power supply reliability index system is oriented to the power system itself and does not consider the actual availability of electricity on the customer side. In addition, it cannot reflect outages or customer equipment shutdowns caused by instantaneous interruptions and power quality problems. This paper therefore makes a systematic study of the reliability of power customers. By comparison with power supply reliability, the reliability of power customers is defined and its evaluation requirements are extracted. An index system, consisting of seven customer indexes and two contrast indexes, is designed to describe the reliability of power customers in terms of continuity and availability. In order to comprehensively and quantitatively evaluate the reliability of power customers in distribution networks, an evaluation method is proposed based on an improved entropy method and the penalty weighting principle. Practical application has proved that the reliability index system and evaluation method for power customers are reasonable and effective.
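The classical entropy weight method underlying such index systems can be sketched briefly: indicators whose values vary more across samples carry more information and receive larger weights. This is the textbook method only; the paper's improved entropy method and penalty weighting are not reproduced, and the indicator values below are hypothetical.

```python
import numpy as np

def entropy_weights(X):
    """Classical entropy weight method for an index system.
    X: samples x indicators matrix of positive, larger-is-better values.
    Indicators with more dispersion (lower entropy) get larger weights."""
    P = X / X.sum(axis=0)                      # column-wise proportions
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    E = -k * (P * logs).sum(axis=0)            # entropy of each indicator
    d = 1.0 - E                                # degree of divergence
    return d / d.sum()                         # normalized weights

# Hypothetical customer reliability indicators for 4 customers:
# availability, interruptions/yr, a constant (uninformative) index.
X = np.array([[0.99, 12.0, 1.0],
              [0.97,  8.0, 1.0],
              [0.95, 20.0, 1.0],
              [0.98,  5.0, 1.0]])
w = entropy_weights(X)
```

Note how the constant third indicator receives essentially zero weight, while the widely varying second indicator dominates; a penalty weighting step, as in the paper, would then adjust such purely data-driven weights.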
Prediction of forces and moments for hypersonic flight vehicle control effectors
NASA Technical Reports Server (NTRS)
Maughmer, Mark D.; Long, Lyle N.; Guilmette, Neal; Pagano, Peter
1993-01-01
This research project includes three distinct phases. For completeness, all three phases of the work are briefly described in this report. The goal was to develop methods of predicting flight control forces and moments for hypersonic vehicles that could be used in a preliminary design environment. The first phase included a preliminary assessment of subsonic/supersonic panel methods and hypersonic local flow inclination methods for such predictions. While these findings clearly indicated the usefulness of such methods for conceptual design activities, deficiencies exist in some areas. Thus, a second phase of research was conducted in which a better understanding was sought for the reasons behind the successes and failures of the methods considered, particularly for the cases at hypersonic Mach numbers. This second phase involved using computational fluid dynamics methods to examine the flow fields in detail. Through these detailed predictions, the deficiencies in the simple surface inclination methods were determined. In the third phase of this work, an improvement to the surface inclination methods was developed, using a novel approach that accounts for viscous effects by modifying the geometry to include the viscous/shock layer.
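A canonical example of a local surface inclination method is (modified) Newtonian theory, where the pressure coefficient on a windward panel depends only on its inclination to the freestream: Cp = Cp,max sin²θ. The sketch below uses this generic textbook relation, not the project's specific codes; 1.84 is a representative high-Mach stagnation value of Cp,max.

```python
import math

def modified_newtonian_cp(deflection_deg, cp_max=1.84):
    """Modified Newtonian pressure coefficient on a flat panel inclined
    at `deflection_deg` to the freestream; panels shadowed from the flow
    (non-positive deflection) are assigned Cp = 0."""
    if deflection_deg <= 0.0:
        return 0.0
    theta = math.radians(deflection_deg)
    return cp_max * math.sin(theta) ** 2

# Sweep a control-surface deflection from shadowed to normal incidence.
cps = [modified_newtonian_cp(d) for d in (-10, 0, 15, 45, 90)]
```

Because Cp here depends only on local geometry, the method is fast enough for preliminary design, but it cannot capture the viscous/shock-layer effects the report's improvement addresses by modifying the effective geometry.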
Hole filling and library optimization: application to commercially available fragment libraries.
An, Yuling; Sherman, Woody; Dixon, Steven L
2012-09-15
Compound libraries comprise an integral component of drug discovery in the pharmaceutical and biotechnology industries. While in-house libraries often contain millions of molecules, this number pales in comparison to the accessible space of drug-like molecules. Therefore, care must be taken when adding new compounds to an existing library in order to ensure that unexplored regions in the chemical space are filled efficiently while not needlessly increasing the library size. In this work, we present an automated method to fill holes in an existing library using compounds from an external source and apply it to commercially available fragment libraries. The method, called Canvas HF, uses distances computed from 2D chemical fingerprints and selects compounds that fill vacuous regions while not suffering from the problem of selecting only compounds at the edge of the chemical space. We show that the method is robust with respect to different databases and the number of requested compounds to retrieve. We also present an extension of the method where chemical properties can be considered simultaneously with the selection process to bias the compounds toward a desired property space without imposing hard property cutoffs. We compare the results of Canvas HF to those obtained with a standard sphere exclusion method and with random compound selection and find that Canvas HF performs favorably. Overall, the method presented here offers an efficient and effective hole-filling strategy to augment compound libraries with compounds from external sources. The method does not have any fit parameters and therefore it should be applicable in most hole-filling applications. Copyright © 2012 Elsevier Ltd. All rights reserved.
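Distance-based hole filling can be illustrated with a simplified greedy max-min selection over fingerprint distances: repeatedly pick the external candidate farthest from everything already in the library. This is a generic sketch, not Canvas HF itself, whose selection criterion additionally avoids the edge-of-space bias the abstract mentions; the fingerprints below are toy bit sets.

```python
def tanimoto_distance(a, b):
    """1 - Tanimoto similarity between two fingerprints
    represented as sets of 'on' bit indices."""
    inter = len(a & b)
    union = len(a | b)
    return 1.0 - inter / union if union else 0.0

def fill_holes(library, candidates, n_pick):
    """Greedy max-min selection: at each step pick the candidate whose
    nearest neighbor in (library + previous picks) is farthest away."""
    picked = []
    pool = list(candidates)
    for _ in range(min(n_pick, len(pool))):
        best = max(pool, key=lambda c: min(
            tanimoto_distance(c, ref) for ref in library + picked))
        picked.append(best)
        pool.remove(best)
    return picked

# Toy library and external candidates.
library = [{1, 2, 3}, {2, 3, 4}]
candidates = [{1, 2, 3, 4}, {10, 11, 12}, {3, 4, 5}]
picks = fill_holes(library, candidates, n_pick=1)
```

As expected, the candidate sharing no bits with the library is chosen first; a pure max-min rule like this tends to favor extreme outliers, which is precisely the behavior Canvas HF is designed to temper.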
Study of the costs and benefits of composite materials in advanced turbofan engines
NASA Technical Reports Server (NTRS)
Steinhagen, C. A.; Stotler, C. L.; Neitzel, R. E.
1974-01-01
Composite component designs were developed for a number of applicable engine parts and functions. The cost and weight of each detail component was determined, and its effect on the total engine cost to the aircraft manufacturer was ascertained. The economic benefits of engine or nacelle composite or eutectic turbine alloy substitutions were then calculated. Two time periods of engine certification were considered for this investigation, namely 1979 and 1985. Two methods of applying composites to these engines were employed. The first method simply replaced an existing metal part with a composite part, with no other change to the engine. The other method involved major engine redesign so that more efficient composite designs could be employed. Utilization of polymeric composites wherever payoffs were available indicated that a total improvement in Direct Operating Cost (DOC) of 2.82 to 4.64 percent, depending on the engine considered, could be attained. In addition, the fuel saving ranged from 1.91 to 3.53 percent. The advantages of using advanced materials in the turbine are more difficult to quantify but could go as high as an improvement in DOC of 2.33 percent and a fuel savings of 2.62 percent. Typically, based on a fleet of one hundred aircraft, a percent savings in DOC represents a savings of four million dollars per year and a percent of fuel savings equals 23,000 cu m (7,000,000 gallons) per year.
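The abstract's per-point rules of thumb (for a 100-aircraft fleet, each percent of DOC improvement ≈ four million dollars per year and each percent of fuel saving ≈ 23,000 m³ per year) imply simple worked totals. The sketch below just applies those stated factors to the quoted best-case percentages; it adds no data beyond the abstract.

```python
def fleet_savings(doc_percent, fuel_percent,
                  usd_per_doc_point=4e6, m3_per_fuel_point=23_000):
    """Annual savings for a 100-aircraft fleet using the abstract's
    per-percentage-point rules of thumb."""
    return doc_percent * usd_per_doc_point, fuel_percent * m3_per_fuel_point

# Best case quoted for polymeric composites: 4.64% DOC, 3.53% fuel.
usd, m3 = fleet_savings(4.64, 3.53)
```

At the quoted upper bound, this works out to roughly $18.6 million and about 81,000 m³ of fuel saved per year for the fleet.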
The rise of environmental analytical chemistry as an interdisciplinary activity.
Brown, Richard
2009-07-01
Modern scientific endeavour is increasingly delivered within an interdisciplinary framework. Analytical environmental chemistry is a long-standing example of an interdisciplinary approach to scientific research where value is added by the close cooperation of different disciplines. This editorial piece discusses the rise of environmental analytical chemistry as an interdisciplinary activity and outlines the scope of the Analytical Chemistry and the Environmental Chemistry domains of TheScientificWorldJOURNAL (TSWJ), and the appropriateness of TSWJ's domain format in covering interdisciplinary research. All contributions of new data, methods, case studies, and instrumentation, or new interpretations and developments of existing data, case studies, methods, and instrumentation, relating to analytical and/or environmental chemistry, to the Analytical and Environmental Chemistry domains, are welcome and will be considered equally.