Nonlinear transient analysis via energy minimization
NASA Technical Reports Server (NTRS)
Kamat, M. P.; Knight, N. F., Jr.
1978-01-01
The formulation basis for nonlinear transient analysis of finite element models of structures using energy minimization is provided. Geometric and material nonlinearities are included. The development is restricted to simple one and two dimensional finite elements which are regarded as being the basic elements for modeling full aircraft-like structures under crash conditions. The results indicate the effectiveness of the technique as a viable tool for this purpose.
Generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test.
Munir, Mohammad
2018-06-01
Generalized sensitivity functions characterize the sensitivity of the parameter estimates with respect to the nominal parameters. We observe from the generalized sensitivity analysis of the minimal model of the intravenous glucose tolerance test that the measurements of insulin, 62 min after the administration of the glucose bolus into the experimental subject's body, possess no information about the parameter estimates. The glucose measurements possess the information about the parameter estimates up to three hours. These observations have been verified by the parameter estimation of the minimal model. The standard errors of the estimates and crude Monte Carlo process also confirm this observation. Copyright © 2018 Elsevier Inc. All rights reserved.
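As an illustrative aside, the "minimal model" referred to above is commonly written as a pair of coupled ODEs for glucose and remote insulin action. The sketch below simulates the classical Bergman form with invented parameter values and an assumed insulin profile; it is not the parameterization or data of the cited study.

```python
# Sketch of the classical Bergman minimal model for the IVGTT
# (illustrative parameter values; not taken from the cited paper).
import numpy as np
from scipy.integrate import solve_ivp

p1, p2, p3 = 0.03, 0.02, 1.0e-5   # glucose effectiveness / insulin action rates (assumed)
Gb, Ib = 90.0, 10.0               # basal glucose (mg/dl) and insulin (microU/ml) (assumed)

def insulin(t):
    # crude stand-in for measured plasma insulin after the bolus (assumed profile)
    return Ib + 80.0 * np.exp(-t / 20.0)

def minimal_model(t, y):
    G, X = y
    dG = -(p1 + X) * G + p1 * Gb           # glucose kinetics
    dX = -p2 * X + p3 * (insulin(t) - Ib)  # remote insulin action
    return [dG, dX]

sol = solve_ivp(minimal_model, (0.0, 180.0), [300.0, 0.0], dense_output=True)
print(sol.y[0, -1])  # glucose concentration at t = 180 min
```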
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate design goal of an optical system subjected to dynamic loads is to minimize system level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of system WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
On the convergence of nonconvex minimization methods for image recovery.
Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei
2015-05-01
Nonconvex nonsmooth regularization methods have been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme converge to a critical point of this nonconvex nonsmooth objective function. We also extend the analysis to a nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results for the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.
General squark flavour mixing: constraints, phenomenology and benchmarks
De Causmaecker, Karen; Fuks, Benjamin; Herrmann, Bjorn; ...
2015-11-19
Here, we present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
Testing non-minimally coupled inflation with CMB data: a Bayesian analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Campista, Marcela; Benetti, Micol; Alcaniz, Jailson, E-mail: campista@on.br, E-mail: micolbenetti@on.br, E-mail: alcaniz@on.br
2017-09-01
We use the most recent cosmic microwave background (CMB) data to perform a Bayesian statistical analysis and discuss the observational viability of inflationary models with a non-minimal coupling, ξ, between the inflaton field and the Ricci scalar. We particularize our analysis to two examples of small and large field inflationary models, namely, the Coleman-Weinberg and the chaotic quartic potentials. We find that (i) the ξ parameter is closely correlated with the primordial amplitude; (ii) although improving the agreement with the CMB data in the r − n_s plane, where r is the tensor-to-scalar ratio and n_s the primordial spectral index, a non-null coupling is strongly disfavoured with respect to the minimally coupled standard ΛCDM model, since the upper bounds of the Bayes factor (odds) for the ξ parameter are greater than 150:1.
Reduced Uncertainties in the Flutter Analysis of the Aerostructures Test Wing
NASA Technical Reports Server (NTRS)
Pak, Chan-gi; Lung, Shun-fat
2010-01-01
Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. A test validated finite element model can provide a reliable flutter analysis to define the flutter placard speed to which the aircraft can be flown prior to flight flutter testing. Minimizing the difference between numerical and experimental results is a type of optimization problem. Through the use of the National Aeronautics and Space Administration Dryden Flight Research Center's (Edwards, California, USA) multidisciplinary design, analysis, and optimization tool to optimize the objective function and constraints; the mass properties, the natural frequencies, and the mode shapes are matched to the target data and the mass matrix orthogonality is retained. The approach in this study has been applied to minimize the model uncertainties for the structural dynamic model of the aerostructures test wing, which was designed, built, and tested at the National Aeronautics and Space Administration Dryden Flight Research Center. A 25-percent change in flutter speed has been shown after reducing the uncertainties.
Reduced Uncertainties in the Flutter Analysis of the Aerostructures Test Wing
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi; Lung, Shun Fat
2011-01-01
Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. A test validated finite element model can provide a reliable flutter analysis to define the flutter placard speed to which the aircraft can be flown prior to flight flutter testing. Minimizing the difference between numerical and experimental results is a type of optimization problem. Through the use of the National Aeronautics and Space Administration Dryden Flight Research Center's (Edwards, California) multidisciplinary design, analysis, and optimization tool to optimize the objective function and constraints; the mass properties, the natural frequencies, and the mode shapes are matched to the target data, and the mass matrix orthogonality is retained. The approach in this study has been applied to minimize the model uncertainties for the structural dynamic model of the aerostructures test wing, which was designed, built, and tested at the National Aeronautics and Space Administration Dryden Flight Research Center. A 25 percent change in flutter speed has been shown after reducing the uncertainties.
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
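To illustrate the kind of computation described (sensitivity matrices along a trajectory, followed by a singular value decomposition to count locally active modes), the toy sketch below uses finite-difference sensitivities of a simple Michaelis-Menten system; the model, threshold and parameter values are assumed and do not reproduce the authors' algorithm.

```python
# Toy sketch: sensitivity matrices along a trajectory and their SVD, used to
# count (locally) dominant dynamical modes. Model and thresholds are assumed.
import numpy as np
from scipy.integrate import solve_ivp

def mm_rhs(t, y, k1, km1, k2):
    # simple Michaelis-Menten mechanism: E + S <-> ES -> E + P (states: s, es)
    s, es = y
    e = 1.0 - es                       # conserved total enzyme (assumed normalization)
    return [-k1 * e * s + km1 * es, k1 * e * s - (km1 + k2) * es]

p0 = np.array([10.0, 1.0, 0.5])        # nominal parameters (assumed)
t_eval = np.linspace(0.0, 5.0, 50)

def trajectory(p):
    sol = solve_ivp(mm_rhs, (0, 5), [1.0, 0.0], args=tuple(p), t_eval=t_eval)
    return sol.y                        # shape (2, len(t_eval))

base = trajectory(p0)
eps = 1e-4
# finite-difference sensitivities d y_i(t) / d p_j, shape (2, n_times, 3)
sens = np.stack([(trajectory(p0 + eps * np.eye(3)[j]) - base) / eps for j in range(3)], axis=-1)

for k in range(0, len(t_eval), 10):
    sv = np.linalg.svd(sens[:, k, :], compute_uv=False)
    n_active = int(np.sum(sv > 1e-3 * sv.max())) if sv.max() > 0 else 0  # threshold (assumed)
    print(f"t = {t_eval[k]:4.1f}  singular values = {np.round(sv, 4)}  active modes = {n_active}")
```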
Replica Approach for Minimal Investment Risk with Cost
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-06-01
In the present work, the optimal portfolio minimizing the investment risk with cost is discussed analytically, where an objective function is constructed in terms of two negative aspects of investment, the risk and cost. We note the mathematical similarity between the Hamiltonian in the mean-variance model and the Hamiltonians in the Hopfield model and the Sherrington-Kirkpatrick model, show that we can analyze this portfolio optimization problem by using replica analysis, and derive the minimal investment risk with cost and the investment concentration of the optimal portfolio. Furthermore, we validate our proposed method through numerical simulations.
Perturbed Yukawa textures in the minimal seesaw model
NASA Astrophysics Data System (ADS)
Rink, Thomas; Schmitz, Kai
2017-03-01
We revisit the minimal seesaw model, i.e., the type-I seesaw mechanism involving only two right-handed neutrinos. This model represents an important minimal benchmark scenario for future experimental updates on neutrino oscillations. It features four real parameters that cannot be fixed by the current data: two CP -violating phases, δ and σ, as well as one complex parameter, z, that is experimentally inaccessible at low energies. The parameter z controls the structure of the neutrino Yukawa matrix at high energies, which is why it may be regarded as a label or index for all UV completions of the minimal seesaw model. The fact that z encompasses only two real degrees of freedom allows us to systematically scan the minimal seesaw model over all of its possible UV completions. In doing so, we address the following question: suppose δ and σ should be measured at particular values in the future — to what extent is one then still able to realize approximate textures in the neutrino Yukawa matrix? Our analysis, thus, generalizes previous studies of the minimal seesaw model based on the assumption of exact texture zeros. In particular, our study allows us to assess the theoretical uncertainty inherent to the common texture ansatz. One of our main results is that a normal light-neutrino mass hierarchy is, in fact, still consistent with a two-zero Yukawa texture, provided that the two texture zeros receive corrections at the level of O (10%). While our numerical results pertain to the minimal seesaw model only, our general procedure appears to be applicable to other neutrino mass models as well.
Regression Model Optimization for the Analysis of Experimental Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2009-01-01
A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
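A minimal sketch of the search metric described above (the standard deviation of the PRESS residuals, compared across candidate regression models) is given below; the data, candidate term sets, and selection loop are invented for illustration and omit the SVD screening and hierarchy constraints.

```python
# Minimal sketch of a PRESS-based candidate model comparison (illustrative data
# and candidate term sets; not the actual search algorithm of the report).
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(40, 2))
y = 1.0 + 2.0 * x[:, 0] - 0.5 * x[:, 1] + 0.3 * x[:, 0] * x[:, 1] + 0.05 * rng.standard_normal(40)

candidates = {
    "linear":         lambda x: np.column_stack([np.ones(len(x)), x]),
    "with_crossterm": lambda x: np.column_stack([np.ones(len(x)), x, x[:, 0] * x[:, 1]]),
}

def press_std(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    H = X @ np.linalg.pinv(X.T @ X) @ X.T           # hat matrix
    e_press = (y - X @ beta) / (1.0 - np.diag(H))   # leave-one-out (PRESS) residuals
    return e_press.std(ddof=1)

scores = {name: press_std(build(x), y) for name, build in candidates.items()}
print(min(scores, key=scores.get), scores)          # recommended model = smallest PRESS std
```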
The Preventive Control of a Dengue Disease Using Pontryagin Minimum Principal
NASA Astrophysics Data System (ADS)
Ratna Sari, Eminugroho; Insani, Nur; Lestari, Dwi
2017-06-01
Behaviour analysis for the host-vector model without control of dengue disease is based on the value of the basic reproduction number obtained using next-generation matrices. Furthermore, the model is further developed by involving a preventive control to minimize the contact between host and vector. The purpose is to obtain an optimal preventive strategy with minimal cost. The Pontryagin Minimum Principle is used to find the optimal control analytically. The derived optimality model is then solved numerically to investigate the control effort to reduce the infected class.
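The next-generation-matrix step mentioned above can be illustrated with a small numerical sketch: R0 is the spectral radius of F V^(-1). The transmission and transition matrices below are generic placeholders, not the host-vector model of the paper.

```python
# Sketch: basic reproduction number from the next-generation matrix, R0 = rho(F V^-1).
# The transmission (F) and transition (V) matrices below are illustrative placeholders.
import numpy as np

beta_hv, beta_vh = 0.3, 0.2   # host<->vector transmission rates (assumed)
gamma, mu_v = 0.1, 0.05       # host recovery rate, vector mortality (assumed)

F = np.array([[0.0,     beta_hv],   # new infections in hosts caused by vectors
              [beta_vh, 0.0    ]])  # new infections in vectors caused by hosts
V = np.array([[gamma, 0.0 ],
              [0.0,   mu_v]])       # removal/transition rates

ngm = F @ np.linalg.inv(V)
R0 = max(abs(np.linalg.eigvals(ngm)))
print(f"R0 = {R0:.3f}")
```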
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1989-01-01
In the design and analysis of robust control systems for uncertain plants, the technique of formulating what is termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents the transfer function matrix M(s) of the nominal system, and delta represents an uncertainty matrix acting on M(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, and for real parameter variations the diagonal elements are real. As stated in the literature, this structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the literature addresses methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty. Since having a delta matrix of minimum order would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta model would be useful. A generalized method of obtaining a minimal M-delta structure for systems with real parameter variations is given.
Applications of minimal physiologically-based pharmacokinetic models
Cao, Yanguang
2012-01-01
Conventional mammillary models are frequently used for pharmacokinetic (PK) analysis when only blood or plasma data are available. Such models depend on the quality of the drug disposition data and have vague biological features. An alternative minimal physiologically-based PK (minimal-PBPK) modeling approach is proposed which inherits and lumps major physiologic attributes from whole-body PBPK models. The body and model are represented as actual blood and tissue (usually total body weight) volumes, fractions (fd) of cardiac output with Fick's Law of Perfusion, tissue/blood partitioning (Kp), and systemic or intrinsic clearance. Analyzing only blood or plasma concentrations versus time, the minimal-PBPK models parsimoniously generate physiologically-relevant PK parameters which are more easily interpreted than those from mammillary models. The minimal-PBPK models were applied to four types of therapeutic agents and conditions. The models well captured the human PK profiles of 22 selected beta-lactam antibiotics allowing comparison of fitted and calculated Kp values. Adding a classical hepatic compartment with hepatic blood flow allowed joint fitting of oral and intravenous (IV) data for four hepatically eliminated drugs (dihydrocodeine, verapamil, repaglinide, midazolam), providing separate estimates of hepatic intrinsic clearance, non-hepatic clearance, and pre-hepatic bioavailability. The basic model was integrated with allometric scaling principles to simultaneously describe moxifloxacin PK in five species with common Kp and fd values. A basic model assigning clearance to the tissue compartment well characterized plasma concentrations of six monoclonal antibodies in human subjects, providing good concordance of predictions with expected tissue kinetics. The proposed minimal-PBPK modeling approach offers an alternative and more rational basis for assessing PK than compartmental models. PMID:23179857
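The lumping idea can be illustrated with a two-compartment, perfusion-limited sketch (blood plus one lumped tissue compartment with Kp, fd and clearance as parameters); the structure and values below are assumed for illustration and do not reproduce any of the fitted models in the paper.

```python
# Illustrative sketch of a minimal PBPK structure (blood + one lumped tissue
# compartment, perfusion-limited); parameter values are assumed, not from the paper.
import numpy as np
from scipy.integrate import solve_ivp

Vb, Vt = 5.0, 65.0      # blood and lumped tissue volumes (L), assumed
Q, fd = 300.0, 0.6      # cardiac output (L/h) and fraction of Q perfusing the tissue, assumed
Kp, CL = 2.0, 10.0      # tissue/blood partition coefficient and systemic clearance (L/h), assumed
dose = 100.0            # IV bolus (mg)

def pbpk(t, y):
    Cb, Ct = y
    dCb = (fd * Q * (Ct / Kp - Cb) - CL * Cb) / Vb   # Fick's law of perfusion + clearance
    dCt = (fd * Q * (Cb - Ct / Kp)) / Vt
    return [dCb, dCt]

sol = solve_ivp(pbpk, (0.0, 24.0), [dose / Vb, 0.0], t_eval=np.linspace(0, 24, 25))
print(np.round(sol.y[0], 3))   # blood concentration-time profile (mg/L)
```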
NASA Astrophysics Data System (ADS)
Sun, Biao; Zhao, Wenfeng; Zhu, Xinshan
2017-06-01
Objective. Data compression is crucial for resource-constrained wireless neural recording applications with limited data bandwidth, and compressed sensing (CS) theory has successfully demonstrated its potential in neural recording applications. In this paper, an analytical, training-free CS recovery method, termed group weighted analysis ℓ1-minimization (GWALM), is proposed for wireless neural recording. Approach. The GWALM method consists of three parts: (1) the analysis model is adopted to enforce sparsity of the neural signals, therefore overcoming the drawbacks of conventional synthesis models and enhancing the recovery performance. (2) A multi-fractional-order difference matrix is constructed as the analysis operator, thus avoiding the dictionary learning procedure and reducing the need for previously acquired data and computational complexities. (3) By exploiting the statistical properties of the analysis coefficients, a group weighting approach is developed to enhance the performance of analysis ℓ1-minimization. Main results. Experimental results on synthetic and real datasets reveal that the proposed approach outperforms state-of-the-art CS-based methods in terms of both spike recovery quality and classification accuracy. Significance. Energy and area efficiency of the GWALM make it an ideal candidate for resource-constrained, large scale wireless neural recording applications. The training-free feature of the GWALM further improves its robustness to spike shape variation, thus making it more practical for long term wireless neural recording.
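For orientation, a generic analysis ℓ1-minimization recovery can be written as a small convex program; in the sketch below a plain first-order difference operator stands in for the paper's multi-fractional-order analysis matrix, no group weighting is applied, and cvxpy is assumed to be available.

```python
# Generic analysis l1-minimization sketch (a plain first-order difference operator
# stands in for the paper's multi-fractional-order analysis matrix; no group weights).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m = 128, 64
x_true = np.cumsum(rng.standard_normal(n) * (rng.random(n) < 0.05))  # piecewise-constant signal
A = rng.standard_normal((m, n)) / np.sqrt(m)                         # CS measurement matrix
y = A @ x_true + 0.01 * rng.standard_normal(m)

Omega = np.eye(n) - np.eye(n, k=1)        # analysis operator (first-order differences)
x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm(Omega @ x, 1)),
                  [cp.norm(A @ x - y, 2) <= 0.02])
prob.solve()
print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```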
Sun, Biao; Zhao, Wenfeng; Zhu, Xinshan
2017-06-01
Data compression is crucial for resource-constrained wireless neural recording applications with limited data bandwidth, and compressed sensing (CS) theory has successfully demonstrated its potential in neural recording applications. In this paper, an analytical, training-free CS recovery method, termed group weighted analysis ℓ1-minimization (GWALM), is proposed for wireless neural recording. The GWALM method consists of three parts: (1) the analysis model is adopted to enforce sparsity of the neural signals, therefore overcoming the drawbacks of conventional synthesis models and enhancing the recovery performance. (2) A multi-fractional-order difference matrix is constructed as the analysis operator, thus avoiding the dictionary learning procedure and reducing the need for previously acquired data and computational complexities. (3) By exploiting the statistical properties of the analysis coefficients, a group weighting approach is developed to enhance the performance of analysis ℓ1-minimization. Experimental results on synthetic and real datasets reveal that the proposed approach outperforms state-of-the-art CS-based methods in terms of both spike recovery quality and classification accuracy. Energy and area efficiency of the GWALM make it an ideal candidate for resource-constrained, large scale wireless neural recording applications. The training-free feature of the GWALM further improves its robustness to spike shape variation, thus making it more practical for long term wireless neural recording.
MUSiC - Model-independent search for deviations from Standard Model predictions in CMS
NASA Astrophysics Data System (ADS)
Pieta, Holger
2010-02-01
We present an approach for a model independent search in CMS. Systematically scanning the data for deviations from the standard model Monte Carlo expectations, such an analysis can help to understand the detector and tune event generators. By minimizing the theoretical bias, the analysis is furthermore sensitive to a wide range of models for new physics, including the uncounted number of models not-yet-thought-of. After sorting the events into classes defined by their particle content (leptons, photons, jets and missing transverse energy), a minimally prejudiced scan is performed on a number of distributions. Advanced statistical methods are used to determine the significance of the deviating regions, rigorously taking systematic uncertainties into account. A number of benchmark scenarios, including common models of new physics and possible detector effects, have been used to gauge the power of such a method.
Non-minimally coupled condensate cosmologies: a phase space analysis
NASA Astrophysics Data System (ADS)
Carloni, Sante; Vignolo, Stefano; Cianci, Roberto
2014-09-01
We present an analysis of the phase space of cosmological models based on a non-minimal coupling between the geometry and a fermionic condensate. We observe that the strong constraint coming from the Dirac equations allows a detailed design of the cosmology of these models, and at the same time guarantees an evolution towards a state indistinguishable from general relativistic cosmological models. In this light, we show in detail how the use of some specific potentials can naturally reproduce a phase of accelerated expansion. In particular, we find for the first time that an exponential potential is able to induce two de Sitter phases separated by a power law expansion, which could be an interesting model for the unification of an inflationary phase and a dark energy era.
Non-minimal Higgs inflation and frame dependence in cosmology
NASA Astrophysics Data System (ADS)
Steinwachs, Christian F.; Kamenshchik, Alexander Yu.
2013-02-01
We investigate a very general class of cosmological models with scalar fields non-minimally coupled to gravity. A particular representative in this class is given by the non-minimal Higgs inflation model, in which the Standard Model Higgs boson and the inflaton are described by one and the same scalar particle. While the predictions of the non-minimal Higgs inflation scenario come numerically remarkably close to the recently discovered mass of the Higgs boson, there remains a conceptual problem in this model that is associated with the choice of the cosmological frame. While the classical theory is independent of this choice, we find by an explicit calculation that already the first quantum corrections induce a frame dependence. We give a geometrical explanation of this frame dependence by embedding it into a more general field theoretical context. From this analysis, some conceptual points in the long-lasting cosmological debate "Jordan frame vs. Einstein frame" become more transparent and in principle can be resolved in a natural way.
Brown, Melissa M; Brown, Gary C; Brown, Heidi C; Peet, Jonathan
2008-06-01
To assess the conferred value and average cost-utility (cost-effectiveness) for intravitreal ranibizumab used to treat occult/minimally classic subfoveal choroidal neovascularization associated with age-related macular degeneration (AMD). Value-based medicine cost-utility analysis. MARINA (Minimally Classic/Occult Trial of the Anti-Vascular Endothelial Growth Factor Antibody Ranibizumab in the Treatment of Neovascular AMD) Study patients utilizing published primary data. Reference case, third-party insurer perspective, cost-utility analysis using 2006 United States dollars. Conferred value in the forms of (1) quality-adjusted life-years (QALYs) and (2) percent improvement in health-related quality of life. Cost-utility is expressed in terms of dollars expended per QALY gained. All outcomes are discounted at a 3% annual rate, as recommended by the Panel on Cost-effectiveness in Health and Medicine. Data are presented for the second-eye model, first-eye model, and combined model. Twenty-two intravitreal injections of 0.5 mg of ranibizumab administered over a 2-year period confer 1.039 QALYs, or a 15.8% improvement in quality of life for the 12-year period of the second-eye model reference case of occult/minimally classic age-related subfoveal choroidal neovascularization. The reference case treatment cost is $52652, and the cost-utility for the second-eye model is $50691/QALY. The quality-of-life gain from the first-eye model is 6.4% and the cost-utility is $123887, whereas the most clinically simulating combined model yields a quality-of-life gain of 10.4% and cost-utility of $74169. By conventional standards and the most commonly used second-eye and combined models, intravitreal ranibizumab administered for occult/minimally classic subfoveal choroidal neovascularization is a cost-effective therapy. Ranibizumab treatment confers considerably greater value than other neovascular macular degeneration pharmaceutical therapies that have been studied in randomized clinical trials.
NASA Technical Reports Server (NTRS)
Mcfarland, E.; Tabakoff, W.; Hamed, A.
1977-01-01
An investigation of the effects of coolant injection on the aerodynamic performance of cooled turbine blades is presented. The coolant injection is modeled in the inviscid, irrotational, adiabatic flow analysis through the cascade using the distributed singularities approach. The resulting integral equations are solved using a minimized surface singularity density criterion. The aerodynamic performance was evaluated using this solution in conjunction with an existing mixing theory analysis. The results of the present analysis are compared with experimental measurements in cold flow tests.
SASS wind ambiguity removal by direct minimization. [Seasat-A satellite scatterometer
NASA Technical Reports Server (NTRS)
Hoffman, R. N.
1982-01-01
An objective analysis procedure is presented which combines Seasat-A satellite scatterometer (SASS) data with other available data on wind speeds by minimizing an objective function of gridded wind speed values. The objective function is composed of loss functions for the SASS velocity data, the forecast, the SASS velocity magnitude data, and conventional wind speed data. Only aliases closest to the analysis were included, and a method for improving the first guess while using a minimization technique and slowly changing the parameters of the problem is introduced. The model is employed to predict the wind field for the North Atlantic on Sept. 10, 1978. Dealiased SASS data are compared with available ship readings, showing good agreement between the SASS dealiased winds and the winds measured at the surface. Expansion of the model to take in low-level cloud measurements, pressure data, and convergence and cloud level data correlations is discussed.
Analysis of counting data: Development of the SATLAS Python package
NASA Astrophysics Data System (ADS)
Gins, W.; de Groote, R. P.; Bissell, M. L.; Granados Buitrago, C.; Ferrer, R.; Lynch, K. M.; Neyens, G.; Sels, S.
2018-01-01
For the analysis of low-statistics counting experiments, a traditional nonlinear least squares minimization routine may not always provide correct parameter and uncertainty estimates due to the assumptions inherent in the algorithm(s). In response to this, a user-friendly Python package (SATLAS) was written to provide an easy interface between the data and a variety of minimization algorithms which are suited for analyzing low, as well as high, statistics data. The advantage of this package is that it allows the user to define their own model function and then compare different minimization routines to determine the optimal parameter values and their respective (correlated) errors. Experimental validation of the different approaches in the package is done through analysis of hyperfine structure data of 203Fr gathered by the CRIS experiment at ISOLDE, CERN.
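The contrast the package is built around (least-squares versus likelihood-based fitting of low-count spectra) can be illustrated directly with scipy; the example below is not the SATLAS API, and the peak model and data are invented.

```python
# Minimal illustration (not the SATLAS API): fitting a peak to low-count data with
# ordinary least squares versus a Poisson likelihood, using scipy directly.
import numpy as np
from scipy.optimize import minimize, curve_fit

rng = np.random.default_rng(2)
x = np.linspace(-5, 5, 60)

def model(x, amp, mu, sigma, bkg):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + bkg

counts = rng.poisson(model(x, 8.0, 0.5, 1.0, 0.5))    # low-statistics spectrum

# (1) nonlinear least squares (implicitly assumes Gaussian errors)
p_ls, _ = curve_fit(model, x, counts, p0=[5.0, 0.0, 1.0, 1.0])

# (2) Poisson negative log-likelihood
def nll(p):
    lam = np.clip(model(x, *p), 1e-9, None)
    return np.sum(lam - counts * np.log(lam))

p_ml = minimize(nll, x0=[5.0, 0.0, 1.0, 1.0], method="Nelder-Mead").x
print("least squares:", np.round(p_ls, 3))
print("Poisson MLE:  ", np.round(p_ml, 3))
```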
Bilinear Factor Matrix Norm Minimization for Robust PCA: Algorithms and Applications.
Shang, Fanhua; Cheng, James; Liu, Yuanyuan; Luo, Zhi-Quan; Lin, Zhouchen
2017-09-04
The heavy-tailed distributions of corrupted outliers and singular values of all channels in low-level vision have proven effective priors for many applications such as background modeling, photometric stereo and image alignment. And they can be well modeled by a hyper-Laplacian. However, the use of such distributions generally leads to challenging non-convex, non-smooth and non-Lipschitz problems, and makes existing algorithms very slow for large-scale applications. Together with the analytic solutions to Lp-norm minimization with two specific values of p, i.e., p=1/2 and p=2/3, we propose two novel bilinear factor matrix norm minimization models for robust principal component analysis. We first define the double nuclear norm and Frobenius/nuclear hybrid norm penalties, and then prove that they are in essence the Schatten-1/2 and 2/3 quasi-norms, respectively, which lead to much more tractable and scalable Lipschitz optimization problems. Our experimental analysis shows that both our methods yield more accurate solutions than original Schatten quasi-norm minimization, even when the number of observations is very limited. Finally, we apply our penalties to various low-level vision problems, e.g. moving object detection, image alignment and inpainting, and show that our methods usually outperform the state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Osman, Ayat E.
Energy use in commercial buildings constitutes a major proportion of energy consumption and anthropogenic emissions in the USA. Cogeneration systems offer an opportunity to meet a building's electrical and thermal demands from a single energy source. To answer the question of what is the most beneficial and cost-effective energy source(s) that can be used to meet the energy demands of the building, optimization techniques have been implemented in some studies to find the optimum energy system based on reducing cost and maximizing revenues. Due to the significant environmental impacts that can result from meeting the energy demands in buildings, building design should incorporate environmental criteria in the decision-making criteria. The objective of this research is to develop a framework and model to optimize a building's operation by integrating cogeneration systems and utility systems in order to meet the electrical, heating, and cooling demand by considering the potential life cycle environmental impact that might result from meeting those demands as well as the economical implications. Two LCA optimization models have been developed within a framework that uses hourly building energy data, life cycle assessment (LCA), and mixed-integer linear programming (MILP). The objective functions that are used in the formulation of the problems include: (1) minimizing life cycle primary energy consumption, (2) minimizing global warming potential, (3) minimizing tropospheric ozone precursor potential, (4) minimizing acidification potential, (5) minimizing NOx, SO2 and CO2, and (6) minimizing life cycle costs, considering a study period of ten years and the lifetime of equipment. The two LCA optimization models can be used for: (a) long-term planning and operational analysis in buildings by analyzing the hourly energy use of a building during a day and (b) design and quick analysis of building operation based on periodic analysis of the energy use of a building in a year. A Pareto-optimal frontier is also derived, which defines the minimum cost required to achieve any level of environmental emission or primary energy usage value, or inversely the minimum environmental indicator and primary energy usage value that can be achieved and the cost required to achieve that value.
On the topology of the inflaton field in minimal supergravity models
NASA Astrophysics Data System (ADS)
Ferrara, Sergio; Fré, Pietro; Sorin, Alexander S.
2014-04-01
We consider global issues in minimal supergravity models where a single field inflaton potential emerges. In a particular case we reproduce the Starobinsky model and its description dual to a certain formulation of R + R² supergravity. For definiteness we confine our analysis to spaces at constant curvature, either vanishing or negative. Five distinct models arise: two flat models with, respectively, a quadratic and a quartic potential, and three based on the space where its distinct isometries, elliptic, hyperbolic and parabolic, are gauged. Fayet-Iliopoulos terms are introduced in a geometric way and they turn out to be a crucial ingredient in order to describe the de Sitter inflationary phase of the Starobinsky model.
Stress granule formation via ATP depletion-triggered phase separation
NASA Astrophysics Data System (ADS)
Wurtz, Jean David; Lee, Chiu Fan
2018-04-01
Stress granules (SG) are droplets of proteins and RNA that form in the cell cytoplasm during stress conditions. We consider minimal models of stress granule formation based on the mechanism of phase separation regulated by ATP-driven chemical reactions. Motivated by experimental observations, we identify a minimal model of SG formation triggered by ATP depletion. Our analysis indicates that ATP is continuously hydrolysed to deter SG formation under normal conditions, and we provide specific predictions that can be tested experimentally.
2006-05-18
Minimize environmental impact. One of the chief ways in which the ship can harm the environment is by spilling untreated bilge water or fuel... Fire suppression and fire containment can be performed in ways that minimize the amount of contaminated water that enters the bilges... flood control can be performed to delay the need to return bilge water to the sea.
NASA Astrophysics Data System (ADS)
Yang, Jia Sheng
2018-06-01
In this paper, we investigate an H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platform stabilization. The main objective of this study is to reduce control consumption as well as protect the actuator while satisfying the requirements on system performance. First, we introduce a dynamic model of the offshore platform with low-order main modes based on a mode-reduction method from numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since it is difficult to solve this non-convex optimization model with a standard optimization algorithm, we use a relaxation method with matrix operations to transform it into a convex optimization model. Thus, it can be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.
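The mode-reduction step mentioned above (retaining the low-order main modes of a mass/stiffness model) can be sketched with a generalized eigenproblem; the toy chain model below is illustrative only and does not touch the H∞ synthesis itself.

```python
# Illustration of the mode-reduction step only (not the H-infinity synthesis): keep the
# lowest few modes of a mass/stiffness model via the generalized eigenproblem K*phi = lambda*M*phi.
import numpy as np
from scipy.linalg import eigh

n, n_keep = 8, 2
M = np.eye(n)                                             # toy lumped masses
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)    # toy chain stiffness matrix

lam, phi = eigh(K, M)                                     # eigenvalues in ascending order
Phi_r = phi[:, :n_keep]                                   # retain the lowest n_keep modes
M_r, K_r = Phi_r.T @ M @ Phi_r, Phi_r.T @ K @ Phi_r       # reduced-order matrices
print("retained natural frequencies (rad/s):", np.sqrt(lam[:n_keep]))
```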
SHc realization of minimal model CFT: triality, poset and Burge condition
NASA Astrophysics Data System (ADS)
Fukuda, M.; Nakamura, S.; Matsuo, Y.; Zhu, R.-D.
2015-11-01
Recently an orthogonal basis of the W_N-algebra (AFLT basis) labeled by N-tuple Young diagrams was found in the context of 4D/2D duality. Recursion relations among the basis are summarized in the form of an algebra SHc which is universal for any N. We show that it has an S_3 automorphism which is referred to as triality. We study the level-rank duality between minimal models, which is a special example of the automorphism. It is shown that the nonvanishing states in both systems are described by N or M Young diagrams with the rows of boxes appropriately shuffled. The reshuffling of rows implies there exists a partial ordering of the set which labels them. For the simplest example, one can compute the partition functions for the partially ordered set (poset) explicitly, which reproduces the Rogers-Ramanujan identities. We also study the description of minimal models by SHc. Simple analysis reproduces some known properties of minimal models, the structure of singular vectors and the N-Burge condition in the Hilbert space.
Observational constraints on tachyonic chameleon dark energy model
NASA Astrophysics Data System (ADS)
Banijamali, A.; Bellucci, S.; Fazlpour, B.; Solbi, M.
2018-03-01
It has been recently shown that the tachyonic chameleon model of dark energy, in which a tachyon scalar field is non-minimally coupled to matter, admits a stable scaling attractor solution that could give rise to the late-time accelerated expansion of the universe and hence alleviate the coincidence problem. In the present work, we use data from Type Ia supernovae (SN Ia) and Baryon Acoustic Oscillations to place constraints on the model parameters. In our analysis we consider in general exponential and non-exponential forms for the non-minimal coupling function and the tachyonic potential and show that the scenario is compatible with observations.
Design and architecture of the Mars relay network planning and analysis framework
NASA Technical Reports Server (NTRS)
Cheung, K. M.; Lee, C. H.
2002-01-01
In this paper we describe the design and architecture of the Mars Network planning and analysis framework that supports the generation and validation of efficient planning and scheduling strategies. The goals are to minimize the transmission time, minimize the delay time, and/or maximize the network throughput. The proposed framework would require (1) a client-server architecture to support interactive, batch, Web, and distributed analysis and planning applications for the relay network analysis scheme, (2) a high-fidelity modeling and simulation environment that expresses link capabilities between spacecraft and between spacecraft and Earth stations as time-varying resources, along with spacecraft activities, link priority, Solar System dynamic events, the laws of orbital mechanics, and other limiting factors such as spacecraft power and thermal constraints, and (3) an optimization methodology that casts the resource and constraint models into a standard linear and nonlinear constrained optimization problem that lends itself to commercial off-the-shelf (COTS) planning and scheduling algorithms.
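A toy version of item (3), casting relay scheduling as a standard constrained optimization, might look like the following linear program; the pass durations, link rates, and data volume are invented, and the real framework involves far richer constraints.

```python
# Toy sketch of casting relay scheduling as a linear program (maximize returned data
# subject to per-pass duration and onboard data-volume limits); numbers are invented.
from scipy.optimize import linprog

rates = [2.0, 1.0, 3.0]          # Mb/s link rate of three candidate relay passes (assumed)
max_dur = [600.0, 900.0, 400.0]  # visibility duration of each pass (s)
onboard = 2500.0                 # data volume available to return (Mb)

# decision variables: transmit time on each pass; maximize sum(rate_i * t_i)
c = [-r for r in rates]                       # linprog minimizes, so negate
A_ub = [rates]                                # cannot return more than is onboard
b_ub = [onboard]
bounds = [(0.0, d) for d in max_dur]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, "Mb returned:", -res.fun)
```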
On isometry anomalies in minimal 𝒩 = (0,1) and 𝒩 = (0,2) sigma models
NASA Astrophysics Data System (ADS)
Chen, Jin; Cui, Xiaoyi; Shifman, Mikhail; Vainshtein, Arkady
2016-09-01
The two-dimensional minimal supersymmetric sigma models with homogeneous target spaces G/H and chiral fermions of the same chirality are revisited. In particular, we look into the isometry anomalies in the O(N) and CP(N - 1) models. These anomalies are generated by fermion loop diagrams which we explicitly calculate. In the case of the O(N) sigma models the first Pontryagin class vanishes, so there is no global obstruction for the minimal 𝒩 = (0, 1) supersymmetrization of these models. We show that at the local level isometries in these models can be made anomaly free by specifying the counterterms explicitly. Thus, there are no obstructions to quantizing the minimal 𝒩 = (0, 1) models with the S^{N-1} = SO(N)/SO(N - 1) target space while preserving the isometries. This also includes CP(1) (equivalent to S^2), which is an exceptional case in the CP(N - 1) series. For other CP(N - 1) models, the isometry anomalies cannot be rescued even locally; this leads us to a discussion of the relation between the geometric and gauged formulations of the CP(N - 1) models in order to compare the origin of the different anomalies. A dual formalism of the O(N) model is also given, in order to show the consistency of our isometry anomaly analysis in different formalisms. The concrete counterterms to be added, however, will be formalism dependent.
A Practical Model for Forecasting New Freshman Enrollment during the Application Period.
ERIC Educational Resources Information Center
Paulsen, Michael B.
1989-01-01
A simple and effective model for forecasting freshman enrollment during the application period is presented step by step. The model requires minimal and readily available information, uses a simple linear regression analysis on a personal computer, and provides updated monthly forecasts. (MSE)
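A minimal sketch of the forecasting idea, assuming the model is a simple regression of final enrollment on applications received by a given point in the application period (the numbers below are invented):

```python
# Illustrative sketch: regress final freshman enrollment on applications received by a
# given month, then apply the fit to the current cycle (invented numbers).
import numpy as np

apps_by_march = np.array([1200, 1350, 1100, 1500, 1400], dtype=float)  # past cycles
final_enroll  = np.array([ 410,  455,  380,  505,  470], dtype=float)

slope, intercept = np.polyfit(apps_by_march, final_enroll, deg=1)
current_apps = 1450.0
print(f"forecast enrollment: {slope * current_apps + intercept:.0f}")
```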
Unitary subsector of generalized minimal models
NASA Astrophysics Data System (ADS)
Behan, Connor
2018-05-01
We revisit the line of nonunitary theories that interpolate between the Virasoro minimal models. Numerical bootstrap applications have brought about interest in the four-point function involving the scalar primary of lowest dimension. Using recent progress in harmonic analysis on the conformal group, we prove the conjecture that global conformal blocks in this correlator appear with positive coefficients. We also compute many such coefficients in the simplest mixed correlator system. Finally, we comment on the status of using global conformal blocks to isolate the truly unitary points on this line.
On the formulation of a minimal uncertainty model for robust control with structured uncertainty
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1991-01-01
In the design and analysis of robust control systems for uncertain plants, representing the system transfer matrix in the form of what has come to be termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents a transfer function matrix M(s) of the nominal closed loop system, and the delta represents an uncertainty matrix acting on M(s). The nominal closed loop system M(s) results from closing the feedback control system, K(s), around a nominal plant interconnection structure P(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unstructured uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, but for real parameter variations delta is a diagonal matrix of real elements. Conceptually, the M-delta structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the currently available literature addresses computational methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty, where the term minimal refers to the dimension of the delta matrix. Since having a minimally dimensioned delta matrix would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta would be useful. Hence, a method of obtaining the interconnection system P(s) is required. A generalized procedure for obtaining a minimal P-delta structure for systems with real parameter variations is presented. Using this model, the minimal M-delta model can then be easily obtained by closing the feedback loop. The procedure involves representing the system in a cascade-form state-space realization, determining the minimal uncertainty matrix, delta, and constructing the state-space representation of P(s). Three examples are presented to illustrate the procedure.
Inference regarding multiple structural changes in linear models with endogenous regressors☆
Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia
2012-01-01
This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
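A sketch of the estimation idea, assuming a single unknown break and one endogenous regressor: estimate 2SLS on each candidate split and pick the break date that minimizes the combined 2SLS sum of squared residuals. The data-generating process below is invented.

```python
# Sketch: 2SLS estimation with a single unknown break date, chosen by minimizing the
# 2SLS sum of squared residuals over candidate break dates (toy data, one regressor).
import numpy as np

rng = np.random.default_rng(3)
T = 200
z = rng.standard_normal((T, 2))                            # instruments
x = z @ np.array([1.0, -0.5]) + rng.standard_normal(T)     # endogenous regressor
beta1, beta2 = 1.0, 2.5                                    # true pre/post-break coefficients
y = np.where(np.arange(T) < 120, beta1 * x, beta2 * x) + rng.standard_normal(T)

def tsls(Xs, ys, Zs):
    Pz = Zs @ np.linalg.pinv(Zs.T @ Zs) @ Zs.T             # projection onto instruments
    b = np.linalg.solve(Xs.T @ Pz @ Xs, Xs.T @ Pz @ ys)
    return b, ys - Xs @ b

def ssr_at_break(k):
    (_, e1), (_, e2) = tsls(x[:k, None], y[:k], z[:k]), tsls(x[k:, None], y[k:], z[k:])
    return e1 @ e1 + e2 @ e2

candidates = range(30, T - 30)                             # trim 15% at each end
k_hat = min(candidates, key=ssr_at_break)
print("estimated break date:", k_hat)
```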
Aad, G.; Abbott, B.; Abdallah, J.; ...
2015-10-08
A summary is presented of ATLAS searches for gluinos and first- and second-generation squarks in final states containing jets and missing transverse momentum, with or without leptons or b-jets, in the √s = 8 TeV data set collected at the Large Hadron Collider in 2012. This paper reports the results of new interpretations and statistical combinations of previously published analyses, as well as a new analysis. Since no significant excess of events over the Standard Model expectation is observed, the data are used to set limits in a variety of models. In all the considered simplified models that assume R-parity conservation, the limit on the gluino mass exceeds 1150 GeV at 95% confidence level, for an LSP mass smaller than 100 GeV. Moreover, exclusion limits are set for left-handed squarks in a phenomenological MSSM model, a minimal Supergravity/Constrained MSSM model, R-parity-violation scenarios, a minimal gauge-mediated supersymmetry breaking model, a natural gauge mediation model, a non-universal Higgs mass model with gaugino mediation and a minimal model of universal extra dimensions.
Updating the Finite Element Model of the Aerostructures Test Wing Using Ground Vibration Test Data
NASA Technical Reports Server (NTRS)
Lung, Shun-Fat; Pak, Chan-Gi
2009-01-01
Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing the multidisciplinary design, analysis, and optimization (MDAO) tool in order to optimize the objective function and constraints; the mass properties, the natural frequencies, and the mode shapes can be matched to the target data to retain the mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the aerostructures test wing (ATW), which was designed and tested at the National Aeronautics and Space Administration Dryden Flight Research Center (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.
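The model-tuning step can be illustrated on a toy scale, assuming a two-degree-of-freedom spring-mass model whose stiffnesses are adjusted so its analytical natural frequencies match "measured" targets; this is only a sketch of the optimization idea, not the MDAO tool or the ATW model.

```python
# Toy illustration of model tuning (not the MDAO tool itself): adjust two stiffness
# parameters of a 2-DOF spring-mass model so its natural frequencies match "measured" targets.
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize

M = np.diag([2.0, 1.0])                       # mass matrix (kept fixed)
f_target = np.array([1.1, 3.2])               # "measured" natural frequencies (Hz), assumed

def frequencies(k):
    k1, k2 = k
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])            # stiffness matrix of the 2-DOF chain
    lam = eigh(K, M, eigvals_only=True)
    return np.sqrt(np.abs(lam)) / (2.0 * np.pi)

def objective(k):
    return np.sum((frequencies(k) - f_target) ** 2)

res = minimize(objective, x0=[100.0, 100.0], bounds=[(1.0, 1e4)] * 2)
print("tuned stiffnesses:", np.round(res.x, 2), "frequencies:", np.round(frequencies(res.x), 3))
```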
Updating the Finite Element Model of the Aerostructures Test Wing using Ground Vibration Test Data
NASA Technical Reports Server (NTRS)
Lung, Shun-fat; Pak, Chan-gi
2009-01-01
Improved and/or accelerated decision making is a crucial step during flutter certification processes. Unfortunately, most finite element structural dynamics models have uncertainties associated with model validity. Tuning the finite element model using measured data to minimize the model uncertainties is a challenging task in the area of structural dynamics. The model tuning process requires not only satisfactory correlations between analytical and experimental results, but also the retention of the mass and stiffness properties of the structures. Minimizing the difference between analytical and experimental results is a type of optimization problem. By utilizing the multidisciplinary design, analysis, and optimization (MDAO) tool in order to optimize the objective function and constraints; the mass properties, the natural frequencies, and the mode shapes can be matched to the target data to retain the mass matrix orthogonality. This approach has been applied to minimize the model uncertainties for the structural dynamics model of the Aerostructures Test Wing (ATW), which was designed and tested at the National Aeronautics and Space Administration (NASA) Dryden Flight Research Center (DFRC) (Edwards, California). This study has shown that natural frequencies and corresponding mode shapes from the updated finite element model have excellent agreement with corresponding measured data.
Minimal string theories and integrable hierarchies
NASA Astrophysics Data System (ADS)
Iyer, Ramakrishnan
Well-defined, non-perturbative formulations of the physics of string theories in specific minimal or superminimal model backgrounds can be obtained by solving matrix models in the double scaling limit. They provide us with the first examples of completely solvable string theories. Despite being relatively simple compared to higher dimensional critical string theories, they furnish non-perturbative descriptions of interesting physical phenomena such as geometrical transitions between D-branes and fluxes, tachyon condensation and holography. The physics of these theories in the minimal model backgrounds is succinctly encoded in a non-linear differential equation known as the string equation, along with an associated hierarchy of integrable partial differential equations (PDEs). The bosonic string in (2,2m-1) conformal minimal model backgrounds and the type 0A string in (2,4m) superconformal minimal model backgrounds have the Korteweg-de Vries system, while type 0B in (2,4m) backgrounds has the Zakharov-Shabat system. The integrable PDE hierarchy governs flows between backgrounds with different m. In this thesis, we explore this interesting connection between minimal string theories and integrable hierarchies further. We uncover the remarkable role that an infinite hierarchy of non-linear differential equations plays in organizing and connecting certain minimal string theories non-perturbatively. We are able to embed the type 0A and 0B (A,A) minimal string theories into this single framework. The string theories arise as special limits of a rich system of equations underpinned by an integrable system known as the dispersive water wave hierarchy. We find that there are several other string-like limits of the system, and conjecture that some of them are type IIA and IIB (A,D) minimal string backgrounds. We explain how these and several other string-like special points arise and are connected. In some cases, the framework endows the theories with a non-perturbative definition for the first time. Notably, we discover that the Painlevé IV equation plays a key role in organizing the string theory physics, joining its siblings, Painlevé I and II, whose roles have previously been identified in this minimal string context. We then present evidence that the conjectured type II theories have smooth non-perturbative solutions, connecting two perturbative asymptotic regimes, in a 't Hooft limit. Our technique also demonstrates evidence for new minimal string theories that are not apparent in a perturbative analysis.
NASA Astrophysics Data System (ADS)
Fillion, Anthony; Bocquet, Marc; Gratton, Serge
2018-04-01
The analysis in nonlinear variational data assimilation is the solution of a non-quadratic minimization. Thus, the analysis efficiency relies on its ability to locate a global minimum of the cost function. If this minimization uses a Gauss-Newton (GN) method, it is critical for the starting point to be in the attraction basin of a global minimum. Otherwise the method may converge to a local extremum, which degrades the analysis. With chaotic models, the number of local extrema often increases with the temporal extent of the data assimilation window, making the former condition harder to satisfy. This is unfortunate because the assimilation performance also increases with this temporal extent. However, a quasi-static (QS) minimization may overcome these local extrema. It accomplishes this by gradually injecting the observations in the cost function. This method was introduced by Pires et al. (1996) in a 4D-Var context. We generalize this approach to four-dimensional strong-constraint nonlinear ensemble variational (EnVar) methods, which are based on both a nonlinear variational analysis and the propagation of dynamical error statistics via an ensemble. This forces one to consider the cost function minimizations in the broader context of cycled data assimilation algorithms. We adapt this QS approach to the iterative ensemble Kalman smoother (IEnKS), an exemplar of nonlinear deterministic four-dimensional EnVar methods. Using low-order models, we quantify the positive impact of the QS approach on the IEnKS, especially for long data assimilation windows. We also examine the computational cost of QS implementations and suggest cheaper algorithms.
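The quasi-static idea (gradually injecting observations and warm-starting each minimization from the previous solution) can be sketched on a toy parameter-estimation problem; the logistic-map model, noise level and window lengths below are assumed for illustration and are unrelated to the IEnKS implementation.

```python
# Sketch of the quasi-static idea on a toy problem: estimate the initial condition of a
# chaotic logistic map from noisy observations, injecting observations gradually and
# warm-starting each nonlinear least-squares solve from the previous minimizer.
import numpy as np
from scipy.optimize import least_squares

r, n_obs = 3.9, 12
rng = np.random.default_rng(4)

def trajectory(x0, n):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(x)
    return np.array(out)

x0_true = 0.3
obs = trajectory(x0_true, n_obs) + 0.01 * rng.standard_normal(n_obs)

x0_guess = 0.6                                 # poor first guess
for n in range(1, n_obs + 1):                  # quasi-static: lengthen the window step by step
    res = least_squares(lambda x0: trajectory(x0[0], n) - obs[:n],
                        x0=[x0_guess], bounds=([0.0], [1.0]))
    x0_guess = res.x[0]                        # warm start for the next window length

print("estimated x0:", x0_guess, "true:", x0_true)
```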
NASA Astrophysics Data System (ADS)
Jamaludin, Amril Hadri; Karim, Nurulzatushima Abdul; Noor, Raja Nor Husna Raja Mohd; Othman, Nurulhidayah; Malik, Sulaiman Abdul
2017-08-01
Construction waste management (CWM) is the practice of minimizing and diverting construction waste, demolition debris, and land-clearing debris from disposal and redirecting recyclable resources back into the construction process. A best-practice model means the best choice from a collection of practices built for the purpose of construction waste management. The practice model can help contractors minimize waste before construction activities start. Minimizing wastage has a direct impact on the time, cost and quality of a construction project. This paper focuses on a preliminary study to determine the factors of waste generation on construction sites and to identify the effectiveness of existing construction waste management practice in Malaysia. The paper also includes the preliminary work of the planned research location, data collection method, and analysis to be done using the Analytical Hierarchy Process (AHP) to help develop a suitable waste management best-practice model that can be used in the country.
NASA Astrophysics Data System (ADS)
Bhattacharya, Somnath; Mukherjee, Pradip; Roy, Amit Singha; Saha, Anirban
2018-03-01
We consider a scalar field which is generally non-minimally coupled to gravity and has a characteristic cubic Galileon-like term and a generic self-interaction, as a candidate Dark Energy model. The system is dynamically analyzed and novel fixed points with perturbative stability are demonstrated. The evolution of the system is numerically studied near a novel fixed point which owes its existence to the Galileon character of the model. It turns out that demanding the stability of this novel fixed point puts a strong restriction on the allowed non-minimal coupling and the choice of the self-interaction. The evolution of the equation-of-state parameter is studied, which shows that our model predicts an accelerated universe throughout and that the phantom limit is only approached closely but never crossed. Our result thus extends the findings of Coley (Dynamical Systems and Cosmology, Kluwer Academic Publishers, Boston, 2013) to non-minimal couplings more general than linear and quadratic ones.
An Analysis of Measured Pressure Signatures From Two Theory-Validation Low-Boom Models
NASA Technical Reports Server (NTRS)
Mack, Robert J.
2003-01-01
Two wing/fuselage/nacelle/fin concepts were designed to check the validity and the applicability of sonic-boom minimization theory, sonic-boom analysis methods, and the low-boom design methodology in use at the end of the 1980s. Models of these concepts were built, and the pressure signatures they generated were measured in the wind tunnel. The results of these measurements led to three conclusions: (1) the existing methods could adequately predict the sonic-boom characteristics of wing/fuselage/fin(s) configurations if the equivalent area distributions of each component were smooth and continuous; (2) these methods needed revision so that the engine-nacelle volume and the nacelle-wing interference lift disturbances could be accurately predicted; and (3) current nacelle-configuration integration methods had to be updated. With these changes in place, the existing sonic-boom analysis and minimization methods could be effectively applied to supersonic-cruise concepts for acceptable/tolerable sonic-boom overpressures during cruise.
Pandey, Rupesh Kumar; Panda, Sudhansu Sekhar
2014-11-01
Drilling of bone is a common procedure in orthopedic surgery to produce holes for screw insertion when fixating fracture devices and implants. The increase in temperature during such a procedure increases the chances of thermal invasion of the bone, which can cause thermal osteonecrosis, resulting in longer healing times or reduced stability and strength of the fixation. Therefore, drilling of bone with minimum temperature rise is a major challenge in orthopedic fracture treatment. This investigation discusses the use of fuzzy logic and the Taguchi methodology for predicting and minimizing the temperature produced during bone drilling. The drilling experiments were conducted on bovine bone using Taguchi's L25 experimental design. A fuzzy model is developed for predicting the temperature during orthopedic drilling as a function of the drilling process parameters (point angle, helix angle, feed rate, and cutting speed). Optimum bone drilling process parameters for minimizing the temperature are determined using the Taguchi method. The effect of individual cutting parameters on the temperature produced is evaluated using analysis of variance. The fuzzy model using triangular and trapezoidal membership functions predicts the temperature within a maximum error of ±7%. Taguchi analysis of the obtained results determined the optimal drilling conditions for minimizing the temperature as A3B5C1. The developed system will simplify the tedious task of modeling and determination of the optimal process parameters to minimize the bone drilling temperature. It will reduce the risk of thermal osteonecrosis and can be very effective for online condition monitoring of the process. © IMechE 2014.
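For readers unfamiliar with the Taguchi step, the following minimal sketch computes the smaller-the-better signal-to-noise ratio used when the response (here, drilling temperature) should be minimized, and picks the preferred level of one factor. The temperature values and level labels are illustrative, not data from the experiments.

```python
import numpy as np

def sn_smaller_the_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio for replicate responses y."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Hypothetical temperatures (deg C) measured at three levels of one factor
# (e.g. cutting speed); the numbers are placeholders, not study data.
levels = {
    "C1": [38.2, 39.0, 37.5],
    "C2": [42.1, 41.4, 43.0],
    "C3": [47.8, 46.9, 48.3],
}
sn = {lvl: sn_smaller_the_better(temps) for lvl, temps in levels.items()}
best = max(sn, key=sn.get)   # highest S/N corresponds to the lowest temperature
print(sn, "-> preferred level:", best)
```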
Multiobjective Collaborative Optimization of Systems of Systems
2005-06-01
(Front-matter residue only: appendices for the HSC model and optimization description and the HSC optimization code; a table of FPF data-set system variables showing minimal HSC impact. No abstract text is recoverable.)
A Limitation of the Applicability of Interval Shift Analysis to Program Evaluation
ERIC Educational Resources Information Center
Hardy, Roy
1975-01-01
Interval Shift Analysis (ISA) is an adaptation of the linear programming model used to determine maximum benefits or minimal losses in quantifiable economics problems. ISA is applied to pre- and post-test score distributions for 43 classes of second graders. (RC)
Qualitative properties of the minimal model of carbon circulation in the biosphere
NASA Astrophysics Data System (ADS)
Pestunov, Aleksandr; Fedotov, Anatoliy; Medvedev, Sergey
2014-05-01
Substantial changes in the biosphere during recent decades have caused legitimate concern in the international community. Feedbacks between the atmospheric CO2 concentration, global temperature, permafrost, ocean CO2 concentration, and air humidity increase the risk of catastrophic phenomena on a planetary scale. The precautionary principle allows us to consider the greenhouse effect using mathematical models of the biosphere-climate system. Minimal models do not allow a quantitative description of the biosphere-climate system dynamics, which is determined by the aggregate effect of the set of known climatic and biosphere processes. However, the study of such models makes it possible to understand the qualitative mechanisms of biosphere processes and to evaluate their possible consequences. The global minimal model of the long-term dynamics of carbon in the biosphere is considered based on the assumption that anthropogenic carbon emissions into the atmosphere are absent [1]. Qualitative analysis of the model shows that there exists a set of model parameters (taken from the current estimation ranges) for which the system becomes unstable. It is also shown that external influences on the carbon circulation can lead either to degradation of the biosphere or to global temperature change [2]. This work is aimed at revealing the conditions under which the biosphere model can become unstable, which can result in catastrophic changes in the Earth's biogeocenoses. The minimal model of the biosphere-climate system describes an improbable but nevertheless possible worst-case scenario of biosphere evolution: it takes into consideration only the most dangerous biosphere mechanisms and ignores some climate feedbacks (such as transpiration). This work demonstrates the possibility of a trigger mode in the biosphere, which can lead to dramatic changes in the state of the biosphere even without additional burning of fossil fuels. This mode is possible for biosphere parameter values lying within the ranges of their existing estimates. Hence there is a potential hazard that any drastic change in biosphere conditions may speed up a possible shift of the biosphere to a new stable state. References: 1. Bartsev S.I., Degermendzhi A.G., Fedotov A.M., Medvedev S.B., Pestunov A.I., Pestunov I.A. The Biosphere Trigger Mechanism in the Minimal Model for the Global Carbon Cycle of the Earth // Doklady Earth Sciences, 2012, Vol. 443, Part 2, pp. 489-492. 2. Fedotov A.M., Medvedev S.B., Pestunov A.I., Pestunov I.A., Bartsev S.I., Degermendzhi A.G. Qualitative analysis of the minimal model of carbon dynamics in the biosphere // Computational Technologies, 2012, Vol. 17, No. 3, pp. 91-108 (in Russian).
Atmospheric model development in support of SEASAT. Volume 1: Summary of findings
NASA Technical Reports Server (NTRS)
Kesel, P. G.
1977-01-01
Atmospheric analysis and prediction models of varying (grid) resolution were developed. The models were tested using real observational data for the purpose of assessing the impact of grid resolution on short-range numerical weather prediction. The discretionary model procedures were examined so that the computational viability of SEASAT data could be enhanced in future sensitivity tests. The analysis effort covers: (1) examining the procedures for allowing data to influence the analysis; (2) examining the effects of varying the weights in the analysis procedure; (3) testing and implementing procedures for solving the minimization equation in an optimal way; (4) describing the impact of grid resolution on the analysis; and (5) devising and implementing numerous practical solutions to general analysis problems.
2014-12-01
example of maximizing or minimizing decision variables within a model. Carol Stoker and Stephen Mehay present a comparative analysis of marketing and advertising strategies...strategy development process; documenting various recruiting, marketing, and advertising initiatives in each nation; and examining efforts to
Analysis of a New Variational Model to Restore Point-Like and Curve-Like Singularities in Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aubert, Gilles, E-mail: gaubert@unice.fr; Blanc-Feraud, Laure, E-mail: Laure.Blanc-Feraud@inria.fr; Graziani, Daniele, E-mail: Daniele.Graziani@inria.fr
2013-02-15
The paper is concerned with the analysis of a new variational model to restore point-like and curve-like singularities in biological images. To this aim we investigate the variational properties of a suitable energy which governs these pathologies. Finally, in order to carry out numerical experiments, we minimize, in the discrete setting, a regularized version of this functional by a fast gradient descent scheme.
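As a loose illustration of what a discrete gradient descent on a regularized restoration energy can look like, the sketch below minimizes a simple smoothed total-variation-type functional. The regularizer, step size, and test image are placeholders; this is not the specific energy analyzed in the paper.

```python
import numpy as np

def grad(u):
    # forward differences with replicated boundary values
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return gx, gy

def div(px, py):
    # discrete divergence (approximate negative adjoint of grad)
    dx = np.diff(px, axis=1, prepend=px[:, :1])
    dy = np.diff(py, axis=0, prepend=py[:1, :])
    return dx + dy

def restore(f, lam=0.1, eps=1e-3, tau=0.2, n_iter=300):
    """Gradient descent on E(u) = 0.5*||u - f||^2 + lam * sum sqrt(|grad u|^2 + eps)."""
    u = f.copy()
    for _ in range(n_iter):
        gx, gy = grad(u)
        norm = np.sqrt(gx ** 2 + gy ** 2 + eps)
        # gradient of E: data term (u - f) minus lam * div of the normalized gradient
        u -= tau * ((u - f) - lam * div(gx / norm, gy / norm))
    return u

noisy = np.random.default_rng(0).normal(size=(64, 64))
restored = restore(noisy)
```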
Analysis of a kinetic multi-segment foot model part II: kinetics and clinical implications.
Bruening, Dustin A; Cooney, Kevin M; Buczek, Frank L
2012-04-01
Kinematic multi-segment foot models have seen increased use in clinical and research settings, but the addition of kinetics has been limited and hampered by measurement limitations and modeling assumptions. In this second of two companion papers, we complete the presentation and analysis of a three-segment kinetic foot model by incorporating kinetic parameters and calculating joint moments and powers. The model was tested on 17 pediatric subjects (ages 7-18 years) during normal gait. Ground reaction forces were measured using two adjacent force platforms, requiring targeted walking and the creation of two sub-models to analyze the ankle, midtarsal, and 1st metatarsophalangeal joints. Targeted walking resulted in only minimal kinematic and kinetic differences compared with walking at self-selected speeds. Joint moments and powers were calculated and ensemble averages are presented as a normative database for comparison purposes. Ankle joint powers are shown to be overestimated when using a traditional single-segment foot model, as substantial angular velocities are attributed to the midtarsal joint. Power transfer is apparent between the 1st metatarsophalangeal and midtarsal joints in terminal stance/pre-swing. While the measurement approach presented here is limited to clinical populations with only minimal impairments, some elements of the model can also be incorporated into routine clinical gait analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri
2017-05-01
To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute percent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (P < 0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute percent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement. © 2017, American College of Rheumatology.
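A minimal sketch of how the reported thresholds translate a total improvement score into a response category follows; the per-measure weights that produce the total score come from the conjoint analysis and are not reproduced here.

```python
def classify_improvement(total_improvement_score):
    """Map a 0-100 total improvement score to the adult DM/PM response category
    using the thresholds reported in the abstract (>=20 minimal, >=40 moderate,
    >=60 major)."""
    if total_improvement_score >= 60:
        return "major improvement"
    if total_improvement_score >= 40:
        return "moderate improvement"
    if total_improvement_score >= 20:
        return "minimal improvement"
    return "no response"

# The weighted summation over the 6 core set measures that yields the total
# score is only referenced here; the exact weights are not in the abstract.
print(classify_improvement(47))   # -> "moderate improvement"
```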
Minimizing the total harmonic distortion for a 3 kW, 20 kHz ac to dc converter using SPICE
NASA Technical Reports Server (NTRS)
Lollar, Louis F.; Kapustka, Robert E.
1988-01-01
This paper describes the SPICE model of a transformer-rectifier-filter (TRF) circuit and the Micro-CAP (Microcomputer Circuit Analysis Program) model and their application. The models were used to develop an actual circuit with reduced input current THD. The SPICE analysis consistently predicted the THD improvements in actual circuits as various designs were attempted. In an effort to predict and verify load regulation, the incorporation of saturable inductor models significantly improved the fidelity of the TRF circuit output voltage.
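As a small illustration of the quantity being minimized, the sketch below estimates THD from the FFT of a sampled current waveform. The sampling rate, record length, and harmonic content are hypothetical and unrelated to the actual TRF circuit.

```python
import numpy as np

def thd(signal, fs, f0):
    """Estimate total harmonic distortion of a sampled waveform.

    THD = sqrt(sum of squared harmonic amplitudes) / fundamental amplitude.
    Assumes the record holds an integer number of fundamental periods.
    """
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    fundamental = spectrum[np.argmin(np.abs(freqs - f0))]
    harmonics = [spectrum[np.argmin(np.abs(freqs - k * f0))]
                 for k in range(2, 10) if k * f0 < fs / 2]
    return np.sqrt(np.sum(np.square(harmonics))) / fundamental

# Illustrative 20 kHz fundamental with a 10% third-harmonic component.
fs, f0 = 2_000_000, 20_000
t = np.arange(0, 0.001, 1 / fs)
i_in = np.sin(2 * np.pi * f0 * t) + 0.1 * np.sin(2 * np.pi * 3 * f0 * t)
print(f"THD = {100 * thd(i_in, fs, f0):.1f}%")   # about 10%
```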
Saad, M F; Anderson, R L; Laws, A; Watanabe, R M; Kades, W W; Chen, Y D; Sands, R E; Pei, D; Savage, P J; Bergman, R N
1994-09-01
An insulin-modified frequently sampled intravenous glucose tolerance test (FSIGTT) with minimal model analysis was compared with the glucose clamp in 11 subjects with normal glucose tolerance (NGT), 20 with impaired glucose tolerance (IGT), and 24 with non-insulin-dependent diabetes mellitus (NIDDM). The insulin sensitivity index (SI) was calculated from FSIGTT using 22- and 12-sample protocols (SI(22) and SI(12), respectively). Insulin sensitivity from the clamp was expressed as SI(clamp) and SIP(clamp). Minimal model parameters were similar when calculated with SI(22) and SI(12). SI could not be distinguished from 0 in approximately 50% of diabetic patients with either protocol. SI(22) correlated significantly with SI(clamp) in the whole group (r = 0.62), and in the NGT (r = 0.53), IGT (r = 0.48), and NIDDM (r = 0.41) groups (P < 0.05 for each). SI(12) correlated significantly with SI(clamp) in the whole group (r = 0.55, P < 0.001) and in the NGT (r = 0.53, P = 0.046) and IGT (r = 0.58, P = 0.008) but not NIDDM (r = 0.30, P = 0.085) groups. When SI(22), SI(clamp), and SIP(clamp) were expressed in the same units, SI(22) was 66 +/- 5% (mean +/- SE) and 50 +/- 8% lower than SI(clamp) and SIP(clamp), respectively. Thus, minimal model analysis of the insulin-modified FSIGTT provides estimates of insulin sensitivity that correlate significantly with those from the glucose clamp. The correlation was weaker, however, in NIDDM. The insulin-modified FSIGTT can be used as a simple test for assessment of insulin sensitivity in population studies involving nondiabetic subjects. Additional studies are needed before using this test routinely in patients with NIDDM.
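For orientation, the following sketch integrates the standard two-equation glucose minimal model that underlies this kind of FSIGTT analysis; the parameter values, insulin time course, and initial conditions are illustrative placeholders rather than estimates from the study.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Glucose minimal model: dG/dt = -(p1 + X)G + p1*Gb, dX/dt = -p2*X + p3*(I(t) - Ib),
# with insulin sensitivity SI = p3/p2.  Values below are placeholders.
p1, p2, p3 = 0.03, 0.02, 1.0e-5       # min^-1, min^-1, min^-2 per (uU/ml)
Gb, Ib = 90.0, 10.0                   # basal glucose (mg/dl) and insulin (uU/ml)

def insulin(t):
    # crude mono-exponential decay of plasma insulin after the bolus (assumed)
    return Ib + 100.0 * np.exp(-t / 30.0)

def minimal_model(t, y):
    G, X = y
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (insulin(t) - Ib)
    return [dG, dX]

sol = solve_ivp(minimal_model, (0.0, 180.0), [300.0, 0.0], max_step=1.0)
print("SI =", p3 / p2, "; glucose at 180 min =", round(sol.y[0, -1], 1), "mg/dl")
```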
USDA-ARS's Scientific Manuscript database
Soil moisture datasets (e.g. satellite-, model-, station-based) vary greatly with respect to their signal, noise, and/or combined time-series variability. Minimizing differences in signal variances is particularly important in data assimilation techniques to optimize the accuracy of the analysis obt...
Developing a model for hospital inherent safety assessment: Conceptualization and validation.
Yari, Saeed; Akbari, Hesam; Gholami Fesharaki, Mohammad; Khosravizadeh, Omid; Ghasemi, Mohammad; Barsam, Yalda; Akbari, Hamed
2018-01-01
Paying attention to the safety of hospitals, as the most crucial institutions for providing medical and health services and places where a large concentration of facilities, equipment, and human resources exists, is of significant importance. The present research aims at developing a model for assessing hospitals' safety based on the principles of inherent safety design. Face validity (30 experts), content validity (20 experts), construct validity (268 samples), convergent validity, and divergent validity were employed to validate the prepared questionnaire; item analysis, Cronbach's alpha, the ICC test (to measure the reliability of the test), and the composite reliability coefficient were used to measure primary reliability. The relationships between variables and factors were confirmed at the 0.05 significance level by confirmatory factor analysis (CFA) and structural equation modeling (SEM) using Smart-PLS. R-square and factor loading values, which were higher than 0.67 and 0.300 respectively, indicated a strong fit. Moderation (0.970), simplification (0.959), substitution (0.943), and minimization (0.5008) had the greatest weights in determining the inherent safety of a hospital, in that order. Moderation, simplification, and substitution thus carry more weight for inherent safety, while minimization carries the least, which could be due to its definition as minimizing the risk.
Non-minimally coupled tachyon field in teleparallel gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fazlpour, Behnaz; Banijamali, Ali, E-mail: b.fazlpour@umz.ac.ir, E-mail: a.banijamali@nit.ac.ir
2015-04-01
We perform a full investigation on dynamics of a new dark energy model in which the four-derivative of a non-canonical scalar field (tachyon) is non-minimally coupled to the vector torsion. Our analysis is done in the framework of teleparallel equivalent of general relativity which is based on torsion instead of curvature. We show that in our model there exists a late-time scaling attractor (point P4), corresponding to an accelerating universe with the property that dark energy and dark matter densities are of the same order. Such a point can help to alleviate the cosmological coincidence problem. Existence of this point is the most significant difference between our model and another model in which a canonical scalar field (quintessence) is used instead of tachyon field.
Mathematical Analysis for Non-reciprocal-interaction-based Model of Collective Behavior
NASA Astrophysics Data System (ADS)
Kano, Takeshi; Osuka, Koichi; Kawakatsu, Toshihiro; Ishiguro, Akio
2017-12-01
In many natural and social systems, collective behaviors emerge as a consequence of non-reciprocal interaction between their constituents. As a first step towards understanding the core principle that underlies these phenomena, we previously proposed a minimal model of collective behavior based on non-reciprocal interactions by drawing inspiration from friendship formation in human society, and demonstrated via simulations that various non-trivial patterns emerge as the parameters are changed. In this study, a mathematical analysis of the proposed model is performed for small system sizes. Through this analysis, the mechanism of the transition between several patterns is elucidated.
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
Nazeer, Shaiju S; Sandhyamani, S; Jayasree, Ramapurath S
2015-06-07
Worldwide, liver cancer is the fifth most common cancer in men and seventh most common cancer in women. Intoxicant-induced liver injury is one of the major causes for severe structural damage with fibrosis and functional derangement of the liver leading to cancer in its later stages. This report focuses on the minimally invasive autofluorescence spectroscopic (AFS) studies on intoxicant, carbon tetrachloride (CCl4)-induced liver damage in a rodent model. Different stages of liver damage, including the reversed stage, on stoppage of the intoxicant are examined. Emission from prominent fluorophores, such as collagen, nicotinamide adenine dinucleotide (NADH), and flavin adenine dinucleotide (FAD), and variations in redox ratio have been studied. A direct correlation between the severity of the disease and the levels of collagen and redox ratio was observed. On withdrawal of the intoxicant, a gradual reversal of the disease to normal conditions was observed as indicated by the decrease in collagen levels and redox ratio. Multivariate statistical techniques and principal component analysis followed by linear discriminant analysis (PC-LDA) were used to develop diagnostic algorithms for distinguishing different stages of the liver disease based on spectral features. The PC-LDA modeling on a minimally invasive AFS dataset yielded diagnostic sensitivities of 93%, 87% and 87% and specificities of 90%, 98% and 98% for pairwise classification among normal, fibrosis, cirrhosis and reversal conditions. We conclude that AFS along with PC-LDA algorithm has the potential for rapid and accurate minimally invasive diagnosis and detection of structural changes due to liver injury resulting from various intoxicants.
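A minimal sketch of a PC-LDA pipeline of the kind described above is shown below, using scikit-learn on synthetic stand-in spectra; the data, number of principal components, and class structure are assumptions for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for the autofluorescence dataset: rows are emission spectra,
# labels are disease stages (0 normal, 1 fibrosis, 2 cirrhosis, 3 reversal).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200)) + np.repeat(np.arange(4), 20)[:, None] * 0.5
y = np.repeat(np.arange(4), 20)

# PC-LDA: reduce each spectrum to a few principal component scores, then
# classify the scores with linear discriminant analysis.
pc_lda = make_pipeline(PCA(n_components=10), LinearDiscriminantAnalysis())
print("cross-validated accuracy:", cross_val_score(pc_lda, X, y, cv=5).mean())
```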
Minimal Network Topologies for Signal Processing during Collective Cell Chemotaxis.
Yue, Haicen; Camley, Brian A; Rappel, Wouter-Jan
2018-06-19
Cell-cell communication plays an important role in collective cell migration. However, it remains unclear how cells in a group cooperatively process external signals to determine the group's direction of motion. Although the topology of signaling pathways is vitally important in single-cell chemotaxis, the signaling topology for collective chemotaxis has not been systematically studied. Here, we combine mathematical analysis and simulations to find minimal network topologies for multicellular signal processing in collective chemotaxis. We focus on border cell cluster chemotaxis in the Drosophila egg chamber, in which responses to several experimental perturbations of the signaling network are known. Our minimal signaling network includes only four elements: a chemoattractant, the protein Rac (indicating cell activation), cell protrusion, and a hypothesized global factor responsible for cell-cell interaction. Experimental data on cell protrusion statistics allows us to systematically narrow the number of possible topologies from more than 40,000,000 to only six minimal topologies with six interactions between the four elements. This analysis does not require a specific functional form of the interactions, and only qualitative features are needed; it is thus robust to many modeling choices. Simulations of a stochastic biochemical model of border cell chemotaxis show that the qualitative selection procedure accurately determines which topologies are consistent with the experiment. We fit our model for all six proposed topologies; each produces results that are consistent with all experimentally available data. Finally, we suggest experiments to further discriminate possible pathway topologies. Copyright © 2018 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Use of simulation to compare the performance of minimization with stratified blocked randomization.
Toorawa, Robert; Adena, Michael; Donovan, Mark; Jones, Steve; Conlon, John
2009-01-01
Minimization is an alternative method to stratified permuted block randomization, which may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily due to the difficulty in interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before any balance advantage is no longer retained. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations. Copyright (c) 2008 John Wiley & Sons, Ltd.
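To make the mechanics concrete, here is a minimal sketch of a Pocock-Simon-style minimization assignment with an adjustable assignment probability, of the kind such simulations would repeatedly run. The factors, number of patients, and probability value are arbitrary, and the tie-handling and randomization details are simplified relative to implementations used in practice.

```python
import random

def minimization_assign(strata, counts, treatments=("A", "B"), p=0.8):
    """Assign one patient by marginal-balance minimization.

    strata : tuple with this patient's level for each stratification factor
    counts : counts[factor][level][treatment] accumulated so far (updated in place)
    p      : probability of choosing the balance-minimizing arm (1.0 = deterministic)
    """
    imbalance = {}
    for t in treatments:
        score = 0
        for factor, level in enumerate(strata):
            tallies = counts[factor].setdefault(level, {arm: 0 for arm in treatments})
            hypothetical = {arm: n + (arm == t) for arm, n in tallies.items()}
            score += max(hypothetical.values()) - min(hypothetical.values())
        imbalance[t] = score
    best = min(imbalance, key=imbalance.get)
    choice = best if random.random() < p else random.choice(treatments)
    for factor, level in enumerate(strata):
        counts[factor][level][choice] += 1
    return choice

# Simulate one trial with two factors (site: 5 levels, sex: 2 levels).
counts = [dict(), dict()]
arms = [minimization_assign((random.randrange(5), random.randrange(2)), counts)
        for _ in range(200)]
print("A:", arms.count("A"), "B:", arms.count("B"))
```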
Robust model-based analysis of single-particle tracking experiments with Spot-On.
Hansen, Anders S; Woringer, Maxime; Grimm, Jonathan B; Lavis, Luke D; Tjian, Robert; Darzacq, Xavier
2018-01-04
Single-particle tracking (SPT) has become an important method to bridge biochemistry and cell biology since it allows direct observation of protein binding and diffusion dynamics in live cells. However, accurately inferring information from SPT studies is challenging due to biases in both data analysis and experimental design. To address analysis bias, we introduce 'Spot-On', an intuitive web-interface. Spot-On implements a kinetic modeling framework that accounts for known biases, including molecules moving out-of-focus, and robustly infers diffusion constants and subpopulations from pooled single-molecule trajectories. To minimize inherent experimental biases, we implement and validate stroboscopic photo-activation SPT (spaSPT), which minimizes motion-blur bias and tracking errors. We validate Spot-On using experimentally realistic simulations and show that Spot-On outperforms other methods. We then apply Spot-On to spaSPT data from live mammalian cells spanning a wide range of nuclear dynamics and demonstrate that Spot-On consistently and robustly infers subpopulation fractions and diffusion constants. © 2018, Hansen et al.
Gray, Wayne D; Sims, Chris R; Fu, Wai-Tat; Schoelles, Michael J
2006-07-01
Soft constraints hypothesis (SCH) is a rational analysis approach that holds that the mixture of perceptual-motor and cognitive resources allocated for interactive behavior is adjusted based on temporal cost-benefit tradeoffs. Alternative approaches maintain that cognitive resources are in some sense protected or conserved in that greater amounts of perceptual-motor effort will be expended to conserve lesser amounts of cognitive effort. One alternative, the minimum memory hypothesis (MMH), holds that people favor strategies that minimize the use of memory. SCH is compared with MMH across 3 experiments and with predictions of an Ideal Performer Model that uses ACT-R's memory system in a reinforcement learning approach that maximizes expected utility by minimizing time. Model and data support the SCH view of resource allocation; at the under 1000-ms level of analysis, mixtures of cognitive and perceptual-motor resources are adjusted based on their cost-benefit tradeoffs for interactive behavior. ((c) 2006 APA, all rights reserved).
Graham, Christopher N; Maglinte, Gregory A; Schwartzberg, Lee S; Price, Timothy J; Knox, Hediyyih N; Hechmati, Guy; Hjelmgren, Jonas; Barber, Beth; Fakih, Marwan G
2016-06-01
In this analysis, we compared costs and explored the cost-effectiveness of subsequent-line treatment with cetuximab or panitumumab in patients with wild-type KRAS (exon 2) metastatic colorectal cancer (mCRC) after previous chemotherapy treatment failure. Data were used from ASPECCT (A Study of Panitumumab Efficacy and Safety Compared to Cetuximab in Patients With KRAS Wild-Type Metastatic Colorectal Cancer), a Phase III, head-to-head randomized noninferiority study comparing the efficacy and safety of panitumumab and cetuximab in this population. A decision-analytic model was developed to perform a cost-minimization analysis and a semi-Markov model was created to evaluate the cost-effectiveness of panitumumab monotherapy versus cetuximab monotherapy in chemotherapy-resistant wild-type KRAS (exon 2) mCRC. The cost-minimization model assumed equivalent efficacy (progression-free survival) based on data from ASPECCT. The cost-effectiveness analysis was conducted with the full information (uncertainty) from ASPECCT. Both analyses were conducted from a US third-party payer perspective and calculated average anti-epidermal growth factor receptor doses from ASPECCT. Costs associated with drug acquisition, treatment administration (every 2 weeks for panitumumab, weekly for cetuximab), and incidence of infusion reactions were estimated in both models. The cost-effectiveness model also included physician visits, disease progression monitoring, best supportive care, and end-of-life costs and utility weights estimated from EuroQol 5-Dimension questionnaire responses from ASPECCT. The cost-minimization model results demonstrated lower projected costs for patients who received panitumumab versus cetuximab, with a projected cost savings of $9468 (16.5%) per panitumumab-treated patient. In the cost-effectiveness model, the incremental cost per quality-adjusted life-year gained revealed panitumumab to be less costly, with marginally better outcomes than cetuximab. These economic analyses comparing panitumumab and cetuximab in chemorefractory wild-type KRAS (exon 2) mCRC suggest benefits in favor of panitumumab. ClinicalTrials.gov identifier: NCT01001377. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Multi-level optimization of a beam-like space truss utilizing a continuum model
NASA Technical Reports Server (NTRS)
Yates, K.; Gurdal, Z.; Thangjitham, S.
1992-01-01
A continuous beam model is developed for approximate analysis of a large, slender, beam-like truss. The model is incorporated in a multi-level optimization scheme for the weight minimization of such trusses. This scheme is tested against traditional optimization procedures for savings in computational cost. Results from both optimization methods are presented for comparison.
Regression Is a Univariate General Linear Model Subsuming Other Parametric Methods as Special Cases.
ERIC Educational Resources Information Center
Vidal, Sherry
Although the concept of the general linear model (GLM) has existed since the 1960s, other univariate analyses such as the t-test and the analysis of variance models have remained popular. The GLM produces an equation that minimizes the mean differences of independent variables as they are related to a dependent variable. From a computer printout…
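As a small worked example of the subsumption the abstract describes, the sketch below shows that a two-group t-test and a simple regression on a dummy-coded group variable give the same p-value; the data are simulated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group = np.repeat([0, 1], 30)                   # dummy-coded independent variable
y = 5.0 + 1.2 * group + rng.normal(size=60)     # dependent variable

# Two-sample t-test (equal variances assumed).
t_test = stats.ttest_ind(y[group == 1], y[group == 0])

# The same comparison framed as a regression: y = b0 + b1 * group + error.
slope, intercept, r, p_value, se = stats.linregress(group, y)

print(round(t_test.pvalue, 6), round(p_value, 6))  # identical p-values
```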
Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization
NASA Astrophysics Data System (ADS)
Adhikari, Sam
2007-11-01
Imperfectly expanded jets generate screech noise. The imbalance between the back pressure and the exit pressure of imperfectly expanded jets produces shock cells and expansion or compression waves from the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of the full Navier-Stokes equations in cylindrical coordinates with large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provides a model that relates several parameters to the shock cell patterns, screech frequency, and distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine, so minimization of the quadratic functions over a set of polyhedra provides the optimal result. Various industry-standard methods such as regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second-order cone programming are used for the quadratic optimization.
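The following sketch sets up a toy quadratic program of the stated form (quadratic objective, affine inequality constraints) and solves it with SciPy's SLSQP solver. The matrices are illustrative; they do not encode the screech-noise model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex QP: minimize 0.5 x'Qx + c'x subject to A x <= b.
Q = np.array([[2.0, 0.5], [0.5, 1.0]])   # symmetric positive definite
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0], [-1.0, 2.0]])
b = np.array([1.0, 2.0])

objective = lambda x: 0.5 * x @ Q @ x + c @ x
jacobian = lambda x: Q @ x + c
constraints = {"type": "ineq", "fun": lambda x: b - A @ x}   # A x <= b

result = minimize(objective, x0=np.zeros(2), jac=jacobian,
                  method="SLSQP", constraints=constraints)
print(result.x, result.fun)
```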
Toxic release consequence analysis tool (TORCAT) for inherently safer design plant.
Shariff, Azmi Mohd; Zaini, Dzulkarnain
2010-10-15
Many major toxic release accidents in the past, such as the MIC release tragedy in Bhopal, India (1984), have caused numerous fatalities. One approach is to use the inherently safer design technique, which applies inherent safety principles to eliminate or minimize accidents rather than to control the hazard. This technique is best implemented at the preliminary design stage, where the consequences of a toxic release can be evaluated and the necessary design improvements implemented to reduce accidents to as low as reasonably practicable (ALARP) without resorting to costly protective systems. However, no commercial tool with such capability is currently available. This paper reports preliminary findings on the development of a prototype tool for consequence analysis and design improvement via inherent safety principles, obtained by integrating a process design simulator with a toxic release consequence analysis model. Consequence analyses based on worst-case scenarios during the process flowsheeting stage were conducted as case studies. The preliminary findings show that the toxic release consequence analysis tool (TORCAT) has the capability to eliminate or minimize potential toxic release accidents by adopting inherent safety principles early in the preliminary design stage. 2010 Elsevier B.V. All rights reserved.
Replica analysis for the duality of the portfolio optimization problem
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2016-11-01
In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
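As a concrete instance of the primal problem described above, the sketch below solves the risk minimization under budget and expected-return constraints in closed form from the KKT conditions; the covariance matrix, mean returns, and target return are synthetic placeholders.

```python
import numpy as np

# Primal problem: minimize w'Cw subject to 1'w = 1 (budget) and mu'w = R (return).
# The stationarity condition 2Cw = lam*1 + gam*mu plus the two constraints give a
# closed-form solution; C, mu, and R below are illustrative.
rng = np.random.default_rng(0)
A = rng.normal(size=(250, 5))
C = A.T @ A / 250                          # positive-definite covariance estimate
mu = np.array([0.02, 0.03, 0.01, 0.04, 0.025])
ones = np.ones(5)
R = 0.025

Ci = np.linalg.inv(C)
M = np.array([[ones @ Ci @ ones, ones @ Ci @ mu],
              [mu   @ Ci @ ones, mu   @ Ci @ mu]])
lam, gam = np.linalg.solve(M, np.array([1.0, R]))   # rescaled Lagrange multipliers
w = Ci @ (lam * ones + gam * mu)                     # optimal portfolio weights
print("weights:", w.round(3), "risk:", round(w @ C @ w, 6))
```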
NASA Technical Reports Server (NTRS)
Kamat, M. P.
1980-01-01
The formulation basis for establishing the static or dynamic equilibrium configurations of finite element models of structures which may behave in the nonlinear range are provided. With both geometric and time independent material nonlinearities included, the development is restricted to simple one and two dimensional finite elements which are regarded as being the basic elements for modeling full aircraft-like structures under crash conditions. Representations of a rigid link and an impenetrable contact plane are added to the deformation model so that any number of nodes of the finite element model may be connected by a rigid link or may contact the plane. Equilibrium configurations are derived as the stationary conditions of a potential function of the generalized nodal variables of the model. Minimization of the nonlinear potential function is achieved by using the best current variable metric update formula for use in unconstrained minimization. Powell's conjugate gradient algorithm, which offers very low storage requirements at some slight increase in the total number of calculations, is the other alternative algorithm to be used for extremely large scale problems.
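By way of illustration, the sketch below minimizes a toy two-degree-of-freedom potential with a quasi-Newton (variable metric) method and a nonlinear conjugate-gradient method from SciPy; these stand in for, but are not identical to, the specific update formula and Powell's algorithm referenced above, and the potential is not the structural potential of the finite element model.

```python
import numpy as np
from scipy.optimize import minimize

def potential(q):
    # toy nonlinear potential: quartic softening spring plus a quadratic coupling
    return 0.25 * q[0] ** 4 - q[0] ** 2 + 0.5 * (q[1] - 2.0 * q[0]) ** 2 - 0.3 * q[0]

q0 = np.zeros(2)
bfgs = minimize(potential, q0, method="BFGS")   # variable metric (quasi-Newton)
cg = minimize(potential, q0, method="CG")       # conjugate gradient, low storage
print(bfgs.x, cg.x)                              # both converge to an equilibrium
```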
NASA Astrophysics Data System (ADS)
Xie, Dexuan
2014-10-01
The Poisson-Boltzmann equation (PBE) is one widely-used implicit solvent continuum model in the calculation of electrostatic potential energy for biomolecules in ionic solvent, but its numerical solution remains a challenge due to its strong singularity and nonlinearity caused by its singular distribution source terms and exponential nonlinear terms. To effectively deal with such a challenge, in this paper, new solution decomposition and minimization schemes are proposed, together with a new PBE analysis on solution existence and uniqueness. Moreover, a PBE finite element program package is developed in Python based on the FEniCS program library and GAMer, a molecular surface and volumetric mesh generation program package. Numerical tests on proteins and a nonlinear Born ball model with an analytical solution validate the new solution decomposition and minimization schemes, and demonstrate the effectiveness and efficiency of the new PBE finite element program package.
Development of analysis technique to predict the material behavior of blowing agent
NASA Astrophysics Data System (ADS)
Hwang, Ji Hoon; Lee, Seonggi; Hwang, So Young; Kim, Naksoo
2014-11-01
In order to numerically simulate the foaming behavior of mastic sealer containing the blowing agent, a foaming and driving force model are needed which incorporate the foaming characteristics. Also, the elastic stress model is required to represent the material behavior of co-existing phase of liquid state and the cured polymer. It is important to determine the thermal properties such as thermal conductivity and specific heat because foaming behavior is heavily influenced by temperature change. In this study, three models are proposed to explain the foaming process and material behavior during and after the process. To obtain the material parameters in each model, following experiments and the numerical simulations are performed: thermal test, simple shear test and foaming test. The error functions are defined as differences between the experimental measurements and the numerical simulation results, and then the parameters are determined by minimizing the error functions. To ensure the validity of the obtained parameters, the confirmation simulation for each model is conducted by applying the determined parameters. The cross-verification is performed by measuring the foaming/shrinkage force. The results of cross-verification tended to follow the experimental results. Interestingly, it was possible to estimate the micro-deformation occurring in automobile roof surface by applying the proposed model to oven process analysis. The application of developed analysis technique will contribute to the design with minimized micro-deformation.
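The parameter-identification step described above can be sketched as a least-squares fit that minimizes an error function between measurements and model predictions; in the toy example below, the exponential model and the data are placeholders for the foaming and elastic-stress models and the corresponding tests.

```python
import numpy as np
from scipy.optimize import least_squares

# Placeholder "measurements" generated from a simple rise curve plus noise.
t_meas = np.linspace(0.0, 10.0, 25)
y_meas = 3.0 * (1.0 - np.exp(-0.4 * t_meas)) \
         + np.random.default_rng(0).normal(0.0, 0.05, 25)

def model(params, t):
    amplitude, rate = params
    return amplitude * (1.0 - np.exp(-rate * t))

def residuals(params):
    # the error function: difference between model prediction and measurement
    return model(params, t_meas) - y_meas

fit = least_squares(residuals, x0=[1.0, 1.0])
print("identified parameters:", fit.x.round(3))   # approximately [3.0, 0.4]
```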
Dynamic Modeling, Model-Based Control, and Optimization of Solid Oxide Fuel Cells
NASA Astrophysics Data System (ADS)
Spivey, Benjamin James
2011-07-01
Solid oxide fuel cells are a promising option for distributed stationary power generation that offers efficiencies ranging from 50% in stand-alone applications to greater than 80% in cogeneration. To advance SOFC technology for widespread market penetration, the SOFC should demonstrate improved cell lifetime and load-following capability. This work seeks to improve lifetime through dynamic analysis of critical lifetime variables and advanced control algorithms that permit load-following while remaining in a safe operating zone based on stress analysis. Control algorithms typically have addressed SOFC lifetime operability objectives using unconstrained, single-input-single-output control algorithms that minimize thermal transients. Existing SOFC controls research has not considered maximum radial thermal gradients or limits on absolute temperatures in the SOFC. In particular, as stress analysis demonstrates, the minimum cell temperature is the primary thermal stress driver in tubular SOFCs. This dissertation presents a dynamic, quasi-two-dimensional model for a high-temperature tubular SOFC combined with ejector and prereformer models. The model captures dynamics of critical thermal stress drivers and is used as the physical plant for closed-loop control simulations. A constrained, MIMO model predictive control algorithm is developed and applied to control the SOFC. Closed-loop control simulation results demonstrate effective load-following, constraint satisfaction for critical lifetime variables, and disturbance rejection. Nonlinear programming is applied to find the optimal SOFC size and steady-state operating conditions to minimize total system costs.
Terluin, Berend; Eekhout, Iris; Terwee, Caroline B
2017-03-01
Patients have their individual minimal important changes (iMICs) as their personal benchmarks to determine whether a perceived health-related quality of life (HRQOL) change constitutes a (minimally) important change for them. We denote the mean iMIC in a group of patients as the "genuine MIC" (gMIC). The aims of this paper are (1) to examine the relationship between the gMIC and the anchor-based minimal important change (MIC), determined by receiver operating characteristic (ROC) analysis or by predictive modeling; (2) to examine the impact of the proportion of improved patients on these MICs; and (3) to explore the possibility of adjusting the MIC for the influence of the proportion of improved patients. We ran multiple simulations of patient samples involved in anchor-based MIC studies with different characteristics of HRQOL (change) scores and distributions of iMICs. In addition, a real data set is analyzed for illustration. The ROC-based and predictive modeling MICs equal the gMIC when the proportion of improved patients equals 0.5. The MIC is estimated higher than the gMIC when the proportion improved is greater than 0.5, and lower than the gMIC when the proportion improved is less than 0.5. Using an equation including the predictive modeling MIC, the log-odds of improvement, the standard deviation of the HRQOL change score, and the correlation between the HRQOL change score and the anchor results in an adjusted MIC reflecting the gMIC irrespective of the proportion of improved patients. Adjusting the predictive modeling MIC for the proportion of improved patients assures that the adjusted MIC reflects the gMIC. We assumed normal distributions and global perceived change scores that were independent of the follow-up score. Additionally, floor and ceiling effects were not taken into account. Copyright © 2017 Elsevier Inc. All rights reserved.
NLEAP/GIS approach for identifying and mitigating regional nitrate-nitrogen leaching
Shaffer, M.J.; Hall, M.D.; Wylie, B.K.; Wagner, D.G.; Corwin, D.L.; Loague, K.
1996-01-01
Improved simulation-based methodology is needed to help identify broad geographical areas where potential NO3-N leaching may be occurring from agriculture and suggest management alternatives that minimize the problem. The Nitrate Leaching and Economic Analysis Package (NLEAP) model was applied to estimate regional NO3-N leaching in eastern Colorado. Results show that a combined NLEAP/GIS technology can be used to identify potential NO3-N hot spots in shallow alluvial aquifers under irrigated agriculture. The NLEAP NO3-N Leached (NL) index provided the most promising single index followed by NO3-N Available for Leaching (NAL). The same combined technology also shows promise in identifying Best Management Practice (BMP) methods that help minimize NO3-N leaching in vulnerable areas. Future plans call for linkage of the NLEAP/GIS procedures with groundwater modeling to establish a mechanistic analysis of agriculture-aquifer interactions at a regional scale.
Al-Nawas, B; Groetz, K A; Goetz, H; Duschner, H; Wagner, W
2008-01-01
Test of favourable conditions for osseointegration with respect to optimum bone-implant contact (BIC) in a loaded animal model. The varied parameters were surface roughness and surface topography of commercially available dental implants. Thirty-two implants of six types of macro and microstructure were included in the study (total 196). The different types were: minimally rough control: Branemark machined Mk III; oxidized surface: TiUnite MkIII and MkIV; ZL Ticer; blasted and etched surface: Straumann SLA; rough control: titanium plasma sprayed (TPS). Sixteen beagle dogs were implanted with the whole set of the above implants. After a healing period of 8 weeks, implants were loaded for 3 months. For the evaluation of the BIC areas, adequately sectioned biopsies were visualized by subsurface scans with confocal laser scanning microscopy (CLSM). The primary statistical analysis testing BIC of the moderately rough implants (mean 56.1+/-13.0%) vs. the minimally rough and the rough controls (mean 53.9+/-11.2%) does not reveal a significant difference (P=0.57). Mean values of 50-70% BIC were found for all implant types. Moderately rough oxidized implants show a median BIC, which is 8% higher than their minimally rough turned counterpart. The intraindividual difference between the TPS and the blasted and etched counterparts revealed no significant difference. The turned and the oxidized implants show median values of the resonance frequency [implant stability quotients (ISQ)] over 60; the nonself-tapping blasted and etched and TPS implants show median values below 60. In conclusion, the benefit of rough surfaces relative to minimally rough ones in this loaded animal model was confirmed histologically. The comparison of different surface treatment modalities revealed no significant differences between the modern moderately rough surfaces. Resonance frequency analysis seems to be influenced in a major part by the transducer used, thus prohibiting the comparison of different implant systems.
Silva, M M; Lemos, J M; Coito, A; Costa, B A; Wigren, T; Mendonça, T
2014-01-01
This paper addresses the local identifiability and sensitivity properties of two classes of Wiener models for the neuromuscular blockade and depth of hypnosis, when drug dose profiles like the ones commonly administered in clinical practice are used as model inputs. The local parameter identifiability was assessed based on the singular value decomposition of the normalized sensitivity matrix. For the given input signal excitation, the results show an over-parameterization of the standard pharmacokinetic/pharmacodynamic models. The same identifiability assessment was performed on recently proposed minimally parameterized parsimonious models for both the neuromuscular blockade and the depth of hypnosis. The results show that the majority of the model parameters are identifiable from the available input-output data. This indicates that any identification strategy based on the minimally parameterized parsimonious Wiener models for the neuromuscular blockade and for the depth of hypnosis is likely to be more successful than if standard models are used. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
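A minimal sketch of the identifiability check described above follows: a normalized sensitivity matrix is assembled by finite differences and its singular values are inspected. The toy output model and nominal parameters are placeholders, not the PK/PD Wiener models of the paper.

```python
import numpy as np

# Placeholder output model: a single-compartment-like response with a gain and
# two rate constants, evaluated at the sampling times t.
t = np.linspace(0.5, 60.0, 40)
theta0 = np.array([2.0, 0.1, 0.05])      # nominal parameter values (assumed)

def output(theta, t):
    k1, k2, k3 = theta
    return k1 * (np.exp(-k2 * t) - np.exp(-(k2 + k3) * t))

# Normalized sensitivity matrix S[i, j] = (d y(t_i) / d theta_j) * theta_j,
# built with central finite differences.
S = np.empty((t.size, theta0.size))
for j in range(theta0.size):
    h = 1e-6 * theta0[j]
    tp, tm = theta0.copy(), theta0.copy()
    tp[j] += h
    tm[j] -= h
    S[:, j] = (output(tp, t) - output(tm, t)) / (2 * h) * theta0[j]

singular_values = np.linalg.svd(S, compute_uv=False)
# Very small ratios flag parameter directions that are poorly identifiable.
print(singular_values / singular_values[0])
```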
Ledzewicz, Urszula; Schättler, Heinz
2017-08-10
Metronomic chemotherapy refers to the frequent administration of chemotherapy at relatively low, minimally toxic doses without prolonged treatment interruptions. Unlike conventional or maximum-tolerated-dose chemotherapy, which aims at eradication of all malignant cells, metronomic dosing often aims at long-term management of the disease when eradication proves elusive. Mathematical modeling and subsequent analysis (theoretical as well as numerical) have become an increasingly valuable in silico tool, both for determining conditions under which specific treatment strategies should be preferred and for numerically optimizing treatment regimens. While elaborate, computationally driven, patient-specific schemes that would optimize the timing and drug dose levels are still a part of the future, such procedures may become instrumental in making chemotherapy effective in situations where it currently fails. Ideally, mathematical modeling and analysis will develop into an additional decision-making tool in the complicated process that is the determination of efficient chemotherapy regimens. In this article, we review some of the results that have been obtained about metronomic chemotherapy from mathematical models and what they imply about the structure of optimal treatment regimens. Copyright © 2017 Elsevier B.V. All rights reserved.
Non-minimally coupled scalar field cosmology with torsion
NASA Astrophysics Data System (ADS)
Cid, Antonella; Izaurieta, Fernando; Leon, Genly; Medina, Perla; Narbona, Daniela
2018-04-01
In this work we present a generalized Brans-Dicke Lagrangian including a non-minimally coupled Gauss-Bonnet term without imposing the vanishing torsion condition. In the resulting field equations, the torsion is closely related to the dynamics of the scalar field, i.e., if non-minimally coupled terms are present in the theory, then the torsion must be present. For the studied Lagrangian we analyze the cosmological consequences of an effective torsional fluid and we show that this fluid can be responsible for the current acceleration of the universe. Finally, we perform a detailed dynamical system analysis to describe the qualitative features of the model, and we find that accelerated stages are a generic feature of this scenario.
Search for the minimal standard model Higgs boson in e +e - collisions at LEP
NASA Astrophysics Data System (ADS)
Akrawy, M. Z.; Alexander, G.; Allison, J.; Allport, P. P.; Anderson, K. J.; Armitage, J. C.; Arnison, G. T. J.; Ashton, P.; Azuelos, G.; Baines, J. T. M.; Ball, A. H.; Banks, J.; Barker, G. J.; Barlow, R. J.; Batley, J. R.; Beck, A.; Becker, J.; Behnke, T.; Bell, K. W.; Bella, G.; Bethke, S.; Biebel, O.; Binder, U.; Bloodworth, I. J.; Bock, P.; Breuker, H.; Brown, R. M.; Brun, R.; Buijs, A.; Burckhart, H. J.; Capiluppi, P.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Chrin, J. T. M.; Clarke, P. E. L.; Cohen, I.; Collins, W. J.; Conboy, J. E.; Couch, M.; Coupland, M.; Cuffiani, M.; Dado, S.; Dallavalle, G. M.; Debu, P.; Deninno, M. M.; Dieckman, A.; Dittmar, M.; Dixit, M. S.; Duchovni, E.; Duerdoth, I. P.; Dumas, D. J. P.; Elcombe, P. A.; Estabrooks, P. G.; Etzion, E.; Fabbri, F.; Farthouat, P.; Fischer, H. M.; Fong, D. G.; French, M. T.; Fukunaga, C.; Gaidot, A.; Ganel, O.; Gary, J. W.; Gascon, J.; Geddes, N. I.; Gee, C. N. P.; Geich-Gimbel, C.; Gensler, S. W.; Gentit, F. X.; Giacomelli, G.; Gibson, V.; Gibson, W. R.; Gillies, J. D.; Goldberg, J.; Goodrick, M. J.; Gorn, W.; Granite, D.; Gross, E.; Grunhaus, J.; Hagedorn, H.; Hagemann, J.; Hansroul, M.; Hargrove, C. K.; Harrus, I.; Hart, J.; Hattersley, P. M.; Hauschild, M.; Hawkes, C. M.; Heflin, E.; Hemingway, R. J.; Heuer, R. D.; Hill, J. C.; Hillier, S. J.; Ho, C.; Hobbs, J. D.; Hobson, P. R.; Hochman, D.; Holl, B.; Homer, R. J.; Hou, S. R.; Howarth, C. P.; Hughes-Jones, R. E.; Humbert, R.; Igo-Kemenes, P.; Ihssen, H.; Imrie, D. C.; Janissen, L.; Jawahery, A.; Jeffreys, P. W.; Jeremie, H.; Jimack, M.; Jobes, M.; Jones, R. W. L.; Jovanovic, P.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Kellogg, R. G.; Kennedy, B. W.; Kleinwort, C.; Klem, D. E.; Knop, G.; Kobayashi, T.; Kokott, T. P.; Köpke, L.; Kowalewski, R.; Kreutzmann, H.; Kroll, J.; Kuwano, M.; Kyberd, P.; Lafferty, G. D.; Lamarche, F.; Larson, W. J.; Layter, J. G.; Le Du, P.; Leblanc, P.; Lee, A. M.; Lehto, M. H.; Lellouch, D.; Lennert, P.; Lessard, L.; Levinson, L.; Lloyd, S. L.; Loebinger, F. K.; Lorah, J. M.; Lorazo, B.; Losty, M. J.; Ludwig, J.; Ma, J.; Macbeth, A. A.; Mannelli, M.; Marcellini, S.; Maringer, G.; Martin, A. J.; Martin, J. P.; Mashimo, T.; Mättig, P.; Maur, U.; McMahon, T. J.; McNutt, J. R.; Meijers, F.; Menszner, D.; Merritt, F. S.; Mes, H.; Michelini, A.; Middleton, R. P.; Mikenberg, G.; Mildenberger, J.; Miller, D. J.; Milstene, C.; Minowa, M.; Mohr, W.; Montanari, A.; Mori, T.; Moss, M. W.; Murphy, P. G.; Murray, W. J.; Nellen, B.; Nguyen, H. H.; Nozaki, M.; O'Dowd, A. J. P.; O'Neale, S. W.; O'Neill, B. P.; Oakham, F. G.; Odorici, F.; Ogg, M.; Oh, H.; Oreglia, M. J.; Orito, S.; Pansart, J. P.; Patrick, G. N.; Pawley, S. J.; Pfister, P.; Pilcher, J. E.; Pinfold, J. L.; Plane, D. E.; Poli, B.; Pouladdej, A.; Prebys, E.; Pritchard, T. W.; Quast, G.; Raab, J.; Redmond, M. W.; Rees, D. L.; Regimbald, M.; Riles, K.; Roach, C. M.; Robins, S. A.; Rollnik, A.; Roney, J. M.; Rossberg, S.; Rossi, A. M.; Routenburg, P.; Runge, K.; Runolfsson, O.; Sanghera, S.; Sansum, R. A.; Sasaki, M.; Saunders, B. J.; Schaile, A. D.; Schaile, O.; Schappert, W.; Scharff-Hansen, P.; Schreiber, S.; Schwarz, J.; Shapira, A.; Shen, B. C.; Sherwood, P.; Simon, A.; Singh, P.; Siroli, G. P.; Skuja, A.; Smith, A. M.; Smith, T. J.; Snow, G. A.; Springer, R. W.; Sproston, M.; Stephens, K.; Stier, H. E.; Stroehmer, R.; Strom, D.; Takeda, H.; Takeshita, T.; Taras, P.; Thackray, N. J.; Tsukamoto, T.; Turner, M. F.; Tysarczyk-Niemeyer, G.; Van den plas, D.; VanDalen, G. 
J.; Van Kooten, R.; Vasseur, G.; Virtue, C. J.; von der Schmitt, H.; von Krogh, J.; Wagner, A.; Wahl, C.; Walker, J. P.; Ward, C. P.; Ward, D. R.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Weber, M.; Weisz, S.; Wells, P. S.; Wermes, N.; Weymann, M.; Wilson, G. W.; Wilson, J. A.; Wingerter, I.; Winterer, V.-H.; Wood, N. C.; Wotton, S.; Wuensch, B.; Wyatt, T. R.; Yaari, R.; Yang, Y.; Yekutieli, G.; Yoshida, T.; Zeuner, W.; Zorn, G. T.; OPAL Collaboration
1991-01-01
A search for the minimal standard model Higgs boson (H0) has been performed with data from e+e- collisions in the OPAL detector at LEP. The analysis is based on approximately 8 pb-1 of data taken at centre-of-mass energies between 88.2 and 95.0 GeV. The search concentrated on the reaction e+e- → (e+e-, μ+μ-, νν̄ or τ+τ-) H0, with H0 → (qq̄ or τ+τ-), for Higgs boson masses above 25 GeV/c2. No Higgs boson candidates have been observed. The present study, combined with previous OPAL publications, excludes the existence of a standard model Higgs boson with mass in the range 3 < mH0 < 44 GeV/c2 at the 95% confidence level.
Stability analysis for non-minimally coupled dark energy models in the Palatini formalism
NASA Astrophysics Data System (ADS)
Wang, Zuobin; Wu, Puxun; Yu, Hongwei
2018-06-01
In this paper, we use the method of global analysis to study the stability of de Sitter solutions in a universe dominated by a scalar field dark energy that couples non-minimally with the Ricci scalar defined in the Palatini formalism. Effective potential and phase-space diagrams are introduced to describe qualitatively the de Sitter solutions and their stabilities. We find that for the simple power-law potential V(φ) = V0 φ^n there are no stable de Sitter solutions, while for some more complicated potentials, i.e. V(φ) = V0 φ^n + Λ and V(φ) = V0 (e^{-λφ} + e^{λφ})^2, stable de Sitter solutions can exist.
Li, Bo; Zhao, Yanxiang
2013-01-01
Central in a variational implicit-solvent description of biomolecular solvation is an effective free-energy functional of the solute atomic positions and the solute-solvent interface (i.e., the dielectric boundary). The free-energy functional couples together the solute molecular mechanical interaction energy, the solute-solvent interfacial energy, the solute-solvent van der Waals interaction energy, and the electrostatic energy. In recent years, the sharp-interface version of the variational implicit-solvent model has been developed and used for numerical computations of molecular solvation. In this work, we propose a diffuse-interface version of the variational implicit-solvent model with solute molecular mechanics. We also analyze both the sharp-interface and diffuse-interface models. We prove the existence of free-energy minimizers and obtain their bounds. We also prove the convergence of the diffuse-interface model to the sharp-interface model in the sense of Γ-convergence. We further discuss properties of sharp-interface free-energy minimizers, the boundary conditions and the coupling of the Poisson-Boltzmann equation in the diffuse-interface model, and the convergence of forces from diffuse-interface to sharp-interface descriptions. Our analysis relies on the previous works on the problem of minimizing surface areas and on our observations on the coupling between solute molecular mechanical interactions with the continuum solvent. Our studies justify rigorously the self consistency of the proposed diffuse-interface variational models of implicit solvation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finsterle, Stefan A.
2010-11-01
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. It performs sensitivity analysis, parameter estimation, and uncertainty propagation analysis in geosciences, reservoir engineering, and other application areas. It supports a number of different combinations of fluids and components [equation-of-state (EOS) modules]. In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files. This link is achieved by means of the PEST application programming interface. iTOUGH2 solves the inverse problem by minimizing a nonlinear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analysis. A detailed residual and error analysis is provided. This upgrade includes new EOS modules (specifically EOS7c, ECO2N and TMVOC), hysteretic relative permeability and capillary pressure functions, and the PEST API. More details can be found at http://esd.lbl.gov/iTOUGH2 and the publications cited there. Hardware requirements: multi-platform; related/auxiliary software: PVM (if running in parallel).
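As a rough illustration of the inverse-problem setup described above (minimizing a weighted-residual objective between model output and observations), the sketch below fits a toy forward model with SciPy; the forward model, parameter names, and data are hypothetical placeholders and are not iTOUGH2 or TOUGH2 code.

```python
# Illustrative weighted least-squares inverse problem, loosely following the
# objective structure described above. Forward model and data are hypothetical.
import numpy as np
from scipy.optimize import least_squares

times = np.linspace(0.0, 10.0, 50)                       # observation times (made up)
observed = 2.0 * np.exp(-0.3 * times) + np.random.normal(0, 0.05, times.size)
sigma = 0.05 * np.ones_like(observed)                    # observation standard deviations

def forward_model(params, t):
    """Toy stand-in for a flow/transport simulator: exponential decay."""
    amplitude, rate = params
    return amplitude * np.exp(-rate * t)

def weighted_residuals(params):
    """Weighted differences between model output and observations."""
    return (forward_model(params, times) - observed) / sigma

result = least_squares(weighted_residuals, x0=[1.0, 0.1])
print("estimated parameters:", result.x)
```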
Connolly, Niamh M C; D'Orsi, Beatrice; Monsefi, Naser; Huber, Heinrich J; Prehn, Jochen H M
2016-01-01
Loss of ionic homeostasis during excitotoxic stress depletes ATP levels and activates the AMP-activated protein kinase (AMPK), re-establishing energy production by increased expression of glucose transporters on the plasma membrane. Here, we develop a computational model to test whether this AMPK-mediated glucose import can rapidly restore ATP levels following a transient excitotoxic insult. We demonstrate that a highly compact model, comprising a minimal set of critical reactions, can closely resemble the rapid dynamics and cell-to-cell heterogeneity of ATP levels and AMPK activity, as confirmed by single-cell fluorescence microscopy in rat primary cerebellar neurons exposed to glutamate excitotoxicity. The model further correctly predicted an excitotoxicity-induced elevation of intracellular glucose, and well resembled the delayed recovery and cell-to-cell heterogeneity of experimentally measured glucose dynamics. The model also predicted necrotic bioenergetic collapse and altered calcium dynamics following more severe excitotoxic insults. In conclusion, our data suggest that a minimal set of critical reactions may determine the acute bioenergetic response to transient excitotoxicity and that an AMPK-mediated increase in intracellular glucose may be sufficient to rapidly recover ATP levels following an excitotoxic insult.
Connolly, Niamh M. C.; D’Orsi, Beatrice; Monsefi, Naser; Huber, Heinrich J.; Prehn, Jochen H. M.
2016-01-01
Loss of ionic homeostasis during excitotoxic stress depletes ATP levels and activates the AMP-activated protein kinase (AMPK), re-establishing energy production by increased expression of glucose transporters on the plasma membrane. Here, we develop a computational model to test whether this AMPK-mediated glucose import can rapidly restore ATP levels following a transient excitotoxic insult. We demonstrate that a highly compact model, comprising a minimal set of critical reactions, can closely resemble the rapid dynamics and cell-to-cell heterogeneity of ATP levels and AMPK activity, as confirmed by single-cell fluorescence microscopy in rat primary cerebellar neurons exposed to glutamate excitotoxicity. The model further correctly predicted an excitotoxicity-induced elevation of intracellular glucose, and well resembled the delayed recovery and cell-to-cell heterogeneity of experimentally measured glucose dynamics. The model also predicted necrotic bioenergetic collapse and altered calcium dynamics following more severe excitotoxic insults. In conclusion, our data suggest that a minimal set of critical reactions may determine the acute bioenergetic response to transient excitotoxicity and that an AMPK-mediated increase in intracellular glucose may be sufficient to rapidly recover ATP levels following an excitotoxic insult. PMID:26840769
Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri
2017-05-01
To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute per cent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (p<0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute per cent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
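The scoring logic summarized above (per-measure contributions summed to a 0-100 total improvement score, with thresholds of 20, 40, and 60 points) can be sketched as follows; the per-measure maximum points and the mapping from percent change to points are invented placeholders, not the published conjoint-analysis weights.

```python
# Hypothetical sketch of a total improvement score with 20/40/60 thresholds.
def total_improvement_score(percent_change, max_points):
    """percent_change, max_points: dicts keyed by core set measure.
    Each measure contributes up to max_points[m]; the maxima sum to 100."""
    score = sum(min(percent_change[m], 100.0) / 100.0 * max_points[m]
                for m in max_points)
    if score >= 60:
        level = "major improvement"
    elif score >= 40:
        level = "moderate improvement"
    elif score >= 20:
        level = "minimal improvement"
    else:
        level = "no improvement"
    return score, level

# Example with made-up numbers for the six core set measures
max_points = {"physician_global": 20, "patient_global": 10,
              "extramuscular_global": 10, "muscle_strength": 30,
              "HAQ": 10, "muscle_enzymes": 20}
change = {"physician_global": 50, "patient_global": 40,
          "extramuscular_global": 20, "muscle_strength": 60,
          "HAQ": 30, "muscle_enzymes": 10}
print(total_improvement_score(change, max_points))
```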
Minimal Pleural Effusion in Small Cell Lung Cancer: Proportion, Mechanisms, and Prognostic Effect.
Ryu, Jeong-Seon; Lim, Jun Hyeok; Lee, Jeong Min; Kim, Woo Chul; Lee, Kyung-Hee; Memon, Azra; Lee, Seul-Ki; Yi, Bo-Rim; Kim, Hyun-Jung; Hwang, Seung-Sik
2016-02-01
To determine the frequency and investigate possible mechanisms and prognostic relevance of minimal (<10-mm thickness) pleural effusion in patients with small cell lung cancer (SCLC). The single-center retrospective study was approved by the institutional review board of the hospital, and informed consent was waived by the patients. A cohort of 360 consecutive patients diagnosed with SCLC by using histologic analysis was enrolled in this study. Based on the status of pleural effusion on chest computed tomographic (CT) scans at diagnosis, patients were classified into three groups: no pleural effusion, minimal pleural effusion, and malignant pleural effusion. Eighteen variables related to patient, environment, stage, and treatment were included in the final model as potential confounders. Minimal pleural effusion was present in 74 patients (20.6%) and malignant pleural effusion in 83 patients (23.0%). Median survival was significantly different in patients with no, minimal, or malignant pleural effusion (median survival, 11.2, 5.93, and 4.83 months, respectively; P < .001, log-rank test). In the fully adjusted final model, patients with minimal pleural effusion had a significantly increased risk of death compared with those with no pleural effusion (adjusted hazard ratio, 1.454 [95% confidence interval: 1.012, 2.090]; P = .001). The prognostic effect was significant in patients with stage I-III disease (adjusted hazard ratio, 2.751 [95% confidence interval: 1.586, 4.773]; P < .001), but it disappeared in stage IV disease. An indirect mechanism representing mediastinal lymphadenopathy was responsible for the accumulation in all but one patient with minimal pleural effusion. Minimal pleural effusion is a common clinical finding in staging SCLC. Its presence is associated with worse survival in patients and should be considered when CT scans are interpreted. © RSNA, 2015.
Initial Results from Lunar Electromagnetic Sounding with ARTEMIS
NASA Astrophysics Data System (ADS)
Fuqua, H.; Fatemi, S.; Poppe, A. R.; Delory, G. T.; Grimm, R. E.; De Pater, I.
2016-12-01
Electromagnetic Sounding constrains conducting layers of the lunar interior by observing variations in the Interplanetary Magnetic Field. Here, we focus our analysis on the time domain transfer function method locating transient events observed by two magnetometers near the Moon. We analyze ARTEMIS and Apollo magnetometer data. This analysis assumes the induced field responds undisturbed in a vacuum. In actuality, the dynamic plasma environment interacts with the induced field. Our models indicate distortion but not confinement occurs in the nightside wake cavity. Moreover, within the deep wake, near-vacuum region, distortion of the induced dipole fields due to the interaction with the wake is minimal depending on the magnitude of the induced field, the geometry of the upstream fields, and the upstream plasma parameters such as particle densities, solar wind velocity, and temperatures. Our results indicate the assumption of a vacuum dipolar response is reasonable within this minimally disturbed zone. We then interpret the ARTEMIS magnetic field signal through a geophysical forward model capturing the induced response based on prescribed electrical conductivity models. We demonstrate our forward model passes benchmarking analyses and solves the magnetic induction response for any input signal as well as any 2- or 3-dimensional conductivity profile. We locate data windows according to the following criteria: (1) probe locations such that the wake probe is within 500 km altitude within the wake cavity and minimally disturbed zone, and the second probe is in the free-streaming solar wind; (2) a transient event consisting of an abrupt change in the magnetic field occurs, enabling the observation of induction; (3) cross-correlation analysis reveals the magnetic field signals are well correlated between the two probes and distances observed. Here we present initial ARTEMIS results providing further insight into the lunar interior structure. This method and modeling results are applicable to any airless body with a conducting interior, interacting directly with the solar wind in the absence of a parent body magnetic field, as well as any two-point magnetometer constellation.
Anuradha, C M; Mulakayala, Chaitanya; Babajan, Banaganapalli; Naveen, M; Rajasekhar, Chikati; Kumar, Chitta Suresh
2010-01-01
The multidrug resistance capacity of Mycobacterium tuberculosis (MDR-Mtb) creates a profound need for developing new anti-tuberculosis drugs. The present work is on Mtb-MurC ligase, an enzyme involved in the biosynthesis of peptidoglycan, a component of the Mtb cell wall. In this paper the 3-D structure of Mtb-MurC has been constructed using the templates 1GQQ and 1P31. Structural refinement and energy minimization of the predicted Mtb-MurC ligase model have been carried out by molecular dynamics. The stereochemical check failures in the energy-minimized model have been evaluated with Procheck, WhatIf, ProSA, and Verify 3D. Further, torsion angles for the side chains of amino acid residues of the developed model were determined using Predictor. Docking analysis of the Mtb-MurC model with ligands and natural substrates enabled us to identify specific residues, viz. Gly125, Lys126, Arg331, and Arg332, within the Mtb-MurC binding pocket that play an important role in ligand and substrate binding affinity and selectivity. The availability of the built Mtb-MurC ligase model, together with insights gained from docking analysis, will promote the rational design of potent and selective Mtb-MurC ligase inhibitors as antituberculosis therapeutics.
An improved car-following model considering headway changes with memory
NASA Astrophysics Data System (ADS)
Yu, Shaowei; Shi, Zhongke
2015-03-01
To describe car-following behaviors in complex situations better, increase roadway traffic mobility and minimize cars' fuel consumptions, the linkage between headway changes with memory and car-following behaviors was explored with the field car-following data by using the gray correlation analysis method, and then an improved car-following model considering headway changes with memory on a single lane was proposed based on the full velocity difference model. Some numerical simulations were carried out by employing the improved car-following model to explore how headway changes with memory affected each car's velocity, acceleration, headway and fuel consumptions. The research results show that headway changes with memory have significant effects on car-following behaviors and fuel consumptions and that considering headway changes with memory in designing the adaptive cruise control strategy can improve the traffic flow stability and minimize cars' fuel consumptions.
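For context, the sketch below implements a full-velocity-difference-type car-following update with a simple headway-memory term added; the optimal-velocity function, parameter values, and the form of the memory term are assumptions for illustration, not the authors' calibrated model.

```python
# Sketch of a full-velocity-difference (FVD) car-following update with a
# simple headway-memory term. Parameters and the memory term are illustrative.
import numpy as np

def optimal_velocity(headway, v_max=30.0, h_c=25.0):
    """Standard tanh-form optimal-velocity function (assumed)."""
    return 0.5 * v_max * (np.tanh(headway - h_c) + np.tanh(h_c))

def acceleration(v, headway, dv, headway_past, kappa=0.4, lam=0.5, gamma=0.2):
    """FVD acceleration plus a term driven by the headway change over a memory window."""
    fvd = kappa * (optimal_velocity(headway) - v) + lam * dv
    memory = gamma * (headway - headway_past)     # assumed memory term
    return fvd + memory

# Example: decelerating slightly despite a growing headway trend
print(acceleration(v=20.0, headway=30.0, dv=-1.0, headway_past=28.0))
```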
NASA Astrophysics Data System (ADS)
Kant Garg, Girish; Garg, Suman; Sangwan, K. S.
2018-04-01
The manufacturing sector has a huge energy demand, and the machine tools used in this sector have very low energy efficiency. Selection of optimum machining parameters for machine tools is therefore significant for energy saving and for reduction of environmental emissions. In this work an empirical model is developed to minimize power consumption using response surface methodology. The experiments are performed on a lathe machine tool during the turning of AISI 6061 aluminum with coated tungsten inserts. The relationship between power consumption and machining parameters is adequately modeled. This model is used to formulate a minimum power consumption criterion as a function of the machining parameters using the desirability function approach. The influence of the machining parameters on energy consumption is assessed using analysis of variance. The developed empirical model is validated with confirmation experiments. The results indicate that the model is effective and has the potential to be adopted by industry for minimizing the power consumption of machine tools.
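The response-surface step described above (fit a second-order model of power consumption to the machining parameters, then search it for the minimum) can be sketched as below; the data, parameter ranges, and coefficients are hypothetical.

```python
# Hypothetical second-order response-surface fit of power consumption
# to cutting speed (m/min), feed (mm/rev), and depth of cut (mm).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform([60, 0.05, 0.5], [180, 0.25, 2.0], size=(30, 3))       # design points (fake)
power = 50 + 0.8 * X[:, 0] + 300 * X[:, 1] + 90 * X[:, 2] + rng.normal(0, 5, 30)

poly = PolynomialFeatures(degree=2)
rsm = LinearRegression().fit(poly.fit_transform(X), power)             # second-order RSM

# Search a coarse grid for the parameter combination minimizing predicted power
grid = np.array([[v, f, d]
                 for v in np.linspace(60, 180, 13)
                 for f in np.linspace(0.05, 0.25, 5)
                 for d in np.linspace(0.5, 2.0, 4)])
pred = rsm.predict(poly.transform(grid))
print("best parameters:", grid[pred.argmin()], "predicted power:", pred.min())
```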
Search for high-mass dilepton resonances in pp collisions at √s = 8 TeV with the ATLAS detector
Aad, G.; Abbott, B.; Abdallah, J.; ...
2014-09-19
Here, the ATLAS detector at the Large Hadron Collider is used to search for high-mass resonances decaying to dielectron or dimuon final states. Results are presented from an analysis of proton-proton (pp) collisions at a center-of-mass energy of 8 TeV corresponding to an integrated luminosity of 20.3 fb⁻¹ in the dimuon channel. A narrow resonance with Standard Model Z couplings to fermions is excluded at 95% confidence level for masses less than 2.79 TeV in the dielectron channel, 2.53 TeV in the dimuon channel, and 2.90 TeV in the two channels combined. Limits on other model interpretations are also presented, including a grand-unification model based on the E6 gauge group, Z* bosons, minimal Z' models, a spin-2 graviton excitation from Randall-Sundrum models, quantum black holes, and a minimal walking technicolor model with a composite Higgs boson.
CUTSETS - MINIMAL CUT SET CALCULATION FOR DIGRAPH AND FAULT TREE RELIABILITY MODELS
NASA Technical Reports Server (NTRS)
Iverson, D. L.
1994-01-01
Fault tree and digraph models are frequently used for system failure analysis. Both type of models represent a failure space view of the system using AND and OR nodes in a directed graph structure. Fault trees must have a tree structure and do not allow cycles or loops in the graph. Digraphs allow any pattern of interconnection between loops in the graphs. A common operation performed on digraph and fault tree models is the calculation of minimal cut sets. A cut set is a set of basic failures that could cause a given target failure event to occur. A minimal cut set for a target event node in a fault tree or digraph is any cut set for the node with the property that if any one of the failures in the set is removed, the occurrence of the other failures in the set will not cause the target failure event. CUTSETS will identify all the minimal cut sets for a given node. The CUTSETS package contains programs that solve for minimal cut sets of fault trees and digraphs using object-oriented programming techniques. These cut set codes can be used to solve graph models for reliability analysis and identify potential single point failures in a modeled system. The fault tree minimal cut set code reads in a fault tree model input file with each node listed in a text format. In the input file the user specifies a top node of the fault tree and a maximum cut set size to be calculated. CUTSETS will find minimal sets of basic events which would cause the failure at the output of a given fault tree gate. The program can find all the minimal cut sets of a node, or minimal cut sets up to a specified size. The algorithm performs a recursive top down parse of the fault tree, starting at the specified top node, and combines the cut sets of each child node into sets of basic event failures that would cause the failure event at the output of that gate. Minimal cut set solutions can be found for all nodes in the fault tree or just for the top node. The digraph cut set code uses the same techniques as the fault tree cut set code, except it includes all upstream digraph nodes in the cut sets for a given node and checks for cycles in the digraph during the solution process. CUTSETS solves for specified nodes and will not automatically solve for all upstream digraph nodes. The cut sets will be output as a text file. CUTSETS includes a utility program that will convert the popular COD format digraph model description files into text input files suitable for use with the CUTSETS programs. FEAT (MSC-21873) and FIRM (MSC-21860) available from COSMIC are examples of programs that produce COD format digraph model description files that may be converted for use with the CUTSETS programs. CUTSETS is written in C-language to be machine independent. It has been successfully implemented on a Sun running SunOS, a DECstation running ULTRIX, a Macintosh running System 7, and a DEC VAX running VMS. The RAM requirement varies with the size of the models. CUTSETS is available in UNIX tar format on a .25 inch streaming magnetic tape cartridge (standard distribution) or on a 3.5 inch diskette. It is also available on a 3.5 inch Macintosh format diskette or on a 9-track 1600 BPI magnetic tape in DEC VAX FILES-11 format. Sample input and sample output are provided on the distribution medium. An electronic copy of the documentation in Macintosh Microsoft Word format is included on the distribution medium. Sun and SunOS are trademarks of Sun Microsystems, Inc. 
DEC, DeCstation, ULTRIX, VAX, and VMS are trademarks of Digital Equipment Corporation. UNIX is a registered trademark of AT&T Bell Laboratories. Macintosh is a registered trademark of Apple Computer, Inc.
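The recursive top-down combination of child cut sets described in the CUTSETS abstract can be illustrated on a toy fault tree; the sketch below is not the CUTSETS code, handles only AND/OR gates over basic events, and applies a simple minimality filter.

```python
# Illustrative recursive minimal-cut-set computation for a small fault tree.
# Gates are dicts: {"type": "AND"/"OR", "children": [...]}; basic events are strings.
def cut_sets(node, tree):
    if isinstance(node, str) and node not in tree:        # basic event (leaf)
        return [frozenset([node])]
    gate = tree[node]
    child_sets = [cut_sets(c, tree) for c in gate["children"]]
    if gate["type"] == "OR":
        combined = [s for sets in child_sets for s in sets]       # union of children
    else:                                                         # AND: cross-combine
        combined = [frozenset()]
        for sets in child_sets:
            combined = [a | b for a in combined for b in sets]
    # keep only minimal sets (no proper subset also present)
    return [s for s in combined if not any(t < s for t in combined)]

tree = {"TOP": {"type": "OR", "children": ["G1", "C"]},
        "G1":  {"type": "AND", "children": ["A", "B"]}}
print(cut_sets("TOP", tree))        # minimal cut sets: {A, B} and {C}
```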
Minimizers with Bounded Action for the High-Dimensional Frenkel-Kontorova Model
NASA Astrophysics Data System (ADS)
Miao, Xue-Qing; Wang, Ya-Nan; Qin, Wen-Xin
In Aubry-Mather theory for monotone twist maps or for one-dimensional Frenkel-Kontorova (FK) model with nearest neighbor interactions, each global minimizer (minimal energy configuration) is naturally Birkhoff. However, this is not true for the one-dimensional FK model with non-nearest neighbor interactions or for the high-dimensional FK model. In this paper, we study the Birkhoff property of minimizers with bounded action for the high-dimensional FK model.
Sun, Yanhua; Gong, Bing; Yuan, Xin; Zheng, Zhe; Wang, Guyan; Chen, Guo; Zhou, Chenghui; Wang, Wei; Ji, Bingyang
2015-08-01
The benefits of minimized extracorporeal circulation (MECC) compared with conventional extracorporeal circulation (CECC) are still under debate. PubMed, EMBASE and the Cochrane Library were searched until November 10, 2014. After quality assessment, we chose a fixed-effects model when the trials showed low heterogeneity; otherwise a random-effects model was used. We performed univariate meta-regression and sensitivity analysis to search for the potential sources of heterogeneity. Cumulative meta-analysis was performed to assess the evolution of the outcome over time. 41 RCTs enrolling 3744 patients were included after independent article review by 2 authors. MECC significantly reduced atrial fibrillation (RR, 0.76; 95% CI, 0.66 to 0.89; P < 0.001; I2 = 0%), and myocardial infarction (RR, 0.43; 95% CI, 0.26 to 0.71; P = 0.001; I2 = 0%). In addition, the results regarding chest tube drainage, transfusion rate, blood loss, red blood cell transfusion volume, and platelet count favored MECC as well. MECC diminished the morbidity of postoperative cardiovascular complications, conserved blood cells, and reduced allogeneic blood transfusion.
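The fixed-effects pooling used when heterogeneity is low can be sketched with inverse-variance weighting of log risk ratios; the study estimates below are invented for illustration and are not the trial data from this meta-analysis.

```python
# Inverse-variance fixed-effects pooling of log risk ratios (illustrative data).
import numpy as np

log_rr = np.array([np.log(0.70), np.log(0.85), np.log(0.78)])   # hypothetical study RRs
se     = np.array([0.15, 0.20, 0.25])                           # hypothetical standard errors

w = 1.0 / se**2                                 # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
ci = np.exp([pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se])
print(f"pooled RR = {np.exp(pooled):.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```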
Free-energy analysis of spin models on hyperbolic lattice geometries.
Serina, Marcel; Genzor, Jozef; Lee, Yoju; Gendiar, Andrej
2016-04-01
We investigate relations between spatial properties of the free energy and the radius of Gaussian curvature of the underlying curved lattice geometries. For this purpose we derive recurrence relations for the analysis of the free energy normalized per lattice site of various multistate spin models in the thermal equilibrium on distinct non-Euclidean surface lattices of the infinite sizes. Whereas the free energy is calculated numerically by means of the corner transfer matrix renormalization group algorithm, the radius of curvature has an analytic expression. Two tasks are considered in this work. First, we search for such a lattice geometry, which minimizes the free energy per site. We conjecture that the only Euclidean flat geometry results in the minimal free energy per site regardless of the spin model. Second, the relations among the free energy, the radius of curvature, and the phase transition temperatures are analyzed. We found out that both the free energy and the phase transition temperature inherit the structure of the lattice geometry and asymptotically approach the profile of the Gaussian radius of curvature. This achievement opens new perspectives in the AdS-CFT correspondence theories.
Generic Modeling of a Life Support System for Process Technology Comparison
NASA Technical Reports Server (NTRS)
Ferrall, J. F.; Seshan, P. K.; Rohatgi, N. K.; Ganapathi, G. B.
1993-01-01
This paper describes a simulation model called the Life Support Systems Analysis Simulation Tool (LiSSA-ST), the spreadsheet program called the Life Support Systems Analysis Trade Tool (LiSSA-TT), and the Generic Modular Flow Schematic (GMFS) modeling technique. Results of using the LiSSA-ST and the LiSSA-TT will be presented for comparing life support system and process technology options for a Lunar Base with a crew size of 4 and mission lengths of 90 and 600 days. System configurations to minimize the life support system weight and power are explored.
A study was initiated by the EPA/ORD National Exposure Research Lab (NERL) in FY05 to quantify risk reduction resulting from this national EPA initiative to reduce WMPC disposal. Using the 3MRA modeling system, which was recommended for use by the EPA Science Advisory Board for ...
Uncertainty analysis of signal deconvolution using a measured instrument response function
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hartouni, E. P.; Beeman, B.; Caggiano, J. A.
2016-10-05
A common analysis procedure minimizes the ln-likelihood that a set of experimental observables matches a parameterized model of the observation. The model includes a description of the underlying physical process as well as the instrument response function (IRF). Here, we investigate the National Ignition Facility (NIF) neutron time-of-flight (nTOF) spectrometers, for which the IRF is constructed from measurements and models. IRF measurements have a finite precision that can make significant contributions to the uncertainty estimate of the physical model's parameters. Finally, we apply a Bayesian analysis to properly account for IRF uncertainties in calculating the ln-likelihood function used to find the optimum physical parameters.
Using the CABLES model to assess and minimize risk in research: control group hazards.
Koocher, G P
2002-01-01
CABLES is both an acronym and metaphor for conceptualizing research participation risk by considering 6 distinct domains in which risks of harm to research participants may exist: cognitive, affective, biological, legal, economic, and social/cultural. These domains are described and illustrated, along with suggestions for minimizing or eliminating the potential hazards to human participants in biomedical and behavioral science research. Adoption of a thoughtful ethical analysis addressing all 6 CABLES strands in designing research provides a strong protective step toward safeguarding and promoting the well-being of study participants.
NASA Astrophysics Data System (ADS)
Kamagara, Abel; Wang, Xiangzhao; Li, Sikun
2018-03-01
We propose a method to compensate for the projector intensity nonlinearity induced by gamma effect in three-dimensional (3-D) fringe projection metrology by extending high-order spectra analysis and bispectral norm minimization to digital sinusoidal fringe pattern analysis. The bispectrum estimate allows extraction of vital signal information features such as spectral component correlation relationships in fringe pattern images. Our approach exploits the fact that gamma introduces high-order harmonic correlations in the affected fringe pattern image. Estimation and compensation of projector nonlinearity is realized by detecting and minimizing the normed bispectral coherence of these correlations. The proposed technique does not require calibration information and technical knowledge or specification of fringe projection unit. This is promising for developing a modular and calibration-invariant model for intensity nonlinear gamma compensation in digital fringe pattern projection profilometry. Experimental and numerical simulation results demonstrate this method to be efficient and effective in improving the phase measuring accuracies with phase-shifting fringe pattern projection profilometry.
Energy and time determine scaling in biological and computer designs
Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-01-01
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy–time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue ‘The major synthetic evolutionary transitions’. PMID:27431524
Energy and time determine scaling in biological and computer designs.
Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-08-19
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy-time minimization principle may govern the design of many complex systems that process energy, materials and information.This article is part of the themed issue 'The major synthetic evolutionary transitions'. © 2016 The Author(s).
Automatic classification of minimally invasive instruments based on endoscopic image sequences
NASA Astrophysics Data System (ADS)
Speidel, Stefanie; Benzko, Julia; Krappe, Sebastian; Sudra, Gunther; Azad, Pedram; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger
2009-02-01
Minimally invasive surgery is nowadays a frequently applied technique and can be regarded as a major breakthrough in surgery. The surgeon has to adopt special operation-techniques and deal with difficulties like the complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing a context-aware assistance using augmented reality techniques. To analyze the current situation for context-aware assistance, we need intraoperatively gained sensor data and a model of the intervention. A situation consists of information about the performed activity, the used instruments, the surgical objects, the anatomical structures and defines the state of an intervention for a given moment in time. The endoscopic images provide a rich source of information which can be used for an image-based analysis. Different visual cues are observed in order to perform an image-based analysis with the objective to gain as much information as possible about the current situation. An important visual cue is the automatic recognition of the instruments which appear in the scene. In this paper we present the classification of minimally invasive instruments using the endoscopic images. The instruments are not modified by markers. The system segments the instruments in the current image and recognizes the instrument type based on three-dimensional instrument models.
Zhao, Jinzhe; Zhao, Qi; Jiang, Yingxu; Li, Weitao; Yang, Yamin; Qian, Zhiyu; Liu, Jia
2018-06-01
Liver thermal ablation techniques have been widely used for the treatment of liver cancer. Kinetic models of damage propagation play an important role in ablation prediction and real-time efficacy assessment. However, practical methods for modeling liver thermal damage are rare. A minimally invasive optical method especially suited to in situ liver thermal damage modeling is introduced in this paper. Porcine liver tissue was heated in a water bath at different temperatures. During thermal treatment, the diffuse reflectance spectrum of the liver was measured by optical fiber and used to deduce the reduced scattering coefficient (μs′). Arrhenius parameters were obtained through a non-isothermal heating approach with μs′ as the damage marker. The activation energy (Ea) and frequency factor (A) were deduced from these experiments; the averaged values are 1.200 × 10⁵ J mol⁻¹ and 4.016 × 10¹⁷ s⁻¹, respectively. The results were verified for their reasonableness and practicality. Therefore, it is feasible to model liver thermal damage based on minimally invasive measurement of optical properties and in situ kinetic analysis of damage progress with the Arrhenius model. These parameters and this method are beneficial for preoperative planning and real-time efficacy assessment of liver ablation therapy. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
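Given the reported Arrhenius parameters, accumulated damage for a temperature history follows the standard damage integral Ω(t) = A ∫ exp(−Ea/(RT)) dt; the sketch below evaluates it numerically for a made-up heating profile (only the Ea and A values come from the abstract).

```python
# Arrhenius thermal-damage integral using the averaged parameters reported above
# (Ea = 1.200e5 J/mol, A = 4.016e17 1/s); the temperature history is hypothetical.
import numpy as np

E_A = 1.200e5        # J/mol
A   = 4.016e17       # 1/s
R   = 8.314          # J/(mol K)

t = np.linspace(0.0, 600.0, 6001)                  # 10 minutes, 0.1 s steps
T = 273.15 + np.where(t < 60, 37 + t, 97.0)        # ramp to 97 degC, then hold (assumed)

dt = t[1] - t[0]
omega = np.sum(A * np.exp(-E_A / (R * T))) * dt    # rectangle-rule damage integral
print("damage index Omega =", omega, "-> damaged fraction =", 1 - np.exp(-omega))
```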
Non-minimally coupled scalar field in Kantowski-Sachs model and symmetry analysis
NASA Astrophysics Data System (ADS)
Dutta, Sourav; Lakshmanan, Muthusamy; Chakraborty, Subenoy
2018-06-01
The paper deals with a non-minimally coupled scalar field in the background of homogeneous but anisotropic Kantowski-Sachs space-time model. The form of the coupling function of the scalar field with gravity and the potential function of the scalar field are not assumed phenomenologically, rather they are evaluated by imposing Noether symmetry to the Lagrangian of the present physical system. The physical system gets considerable mathematical simplification by a suitable transformation of the augmented variables (a , b , ϕ) →(u , v , w) and by the use of the conserved quantities due to the geometrical symmetry. Finally, cosmological solutions are evaluated and analyzed from the point of view of the present evolution of the Universe.
Computation and analysis for a constrained entropy optimization problem in finance
NASA Astrophysics Data System (ADS)
He, Changhong; Coleman, Thomas F.; Li, Yuying
2008-12-01
In [T. Coleman, C. He, Y. Li, Calibrating volatility function bounds for an uncertain volatility model, Journal of Computational Finance (2006) (submitted for publication)], an entropy minimization formulation has been proposed to calibrate an uncertain volatility option pricing model (UVM) from market bid and ask prices. To avoid potential infeasibility due to numerical error, a quadratic penalty function approach is applied. In this paper, we show that the solution to the quadratic penalty problem can be obtained by minimizing an objective function which can be evaluated via solving a Hamilton-Jacobi-Bellman (HJB) equation. We prove that the implicit finite difference solution of this HJB equation converges to its viscosity solution. In addition, we provide computational examples illustrating the accuracy of the calibration.
Maximally Symmetric Composite Higgs Models.
Csáki, Csaba; Ma, Teng; Shu, Jing
2017-09-29
Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.
High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis
Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher
2015-01-01
Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m2 exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (rmse) with as few as six control points. Rmse decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.
Narayanan, Sarath Kumar; Cohen, Ralph Clinton; Shun, Albert
2014-06-01
Minimal access techniques have transformed the way pediatric surgery is practiced. Due to various constraints, surgical residency programs have not been able to tutor adequate training skills in the routine setting. The advent of new technology and methods in minimally invasive surgery (MIS), has similarly contributed to the need for systematic skills' training in a safe, simulated environment. To enable the training of the proper technique among pediatric surgery trainees, we have advanced a porcine non-survival model for endoscopic surgery. The technical advancements over the past 3 years and a subjective validation of the porcine model from 114 participating trainees using a standard questionnaire and a 5-point Likert scale have been described here. Mean attitude scores and analysis of variance (ANOVA) were used for statistical analysis of the data. Almost all trainees agreed or strongly agreed that the animal-based model was appropriate (98.35%) and also acknowledged that such workshops provided adequate practical experience before attempting on human subjects (96.6%). Mean attitude score for respondents was 19.08 (SD 3.4, range 4-20). Attitude scores showed no statistical association with years of experience or the level of seniority, indicating a positive attitude among all groups of respondents. Structured porcine-based MIS training should be an integral part of skill acquisition for pediatric surgery trainees and the experience gained can be transferred into clinical practice. We advocate that laparoscopic training should begin in a controlled workshop setting before procedures are attempted on human patients.
AMMOS2: a web server for protein-ligand-water complexes refinement via molecular mechanics.
Labbé, Céline M; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O; Pajeva, Ilza; Miteva, Maria A
2017-07-03
AMMOS2 is an interactive web server for efficient computational refinement of protein-small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein-ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein-ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein-ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein-ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein-ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein-ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
AMMOS2: a web server for protein–ligand–water complexes refinement via molecular mechanics
Labbé, Céline M.; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O.; Pajeva, Ilza
2017-01-01
Abstract AMMOS2 is an interactive web server for efficient computational refinement of protein–small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein–ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein–ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein–ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein–ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein–ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein–ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. PMID:28486703
Application of Harmony Search algorithm to the solution of groundwater management models
NASA Astrophysics Data System (ADS)
Tamer Ayvaz, M.
2009-06-01
This study proposes a groundwater resources management model in which the solution is performed through a combined simulation-optimization model. A modular three-dimensional finite difference groundwater flow model, MODFLOW is used as the simulation model. This model is then combined with a Harmony Search (HS) optimization algorithm which is based on the musical process of searching for a perfect state of harmony. The performance of the proposed HS based management model is tested on three separate groundwater management problems: (i) maximization of total pumping from an aquifer (steady-state); (ii) minimization of the total pumping cost to satisfy the given demand (steady-state); and (iii) minimization of the pumping cost to satisfy the given demand for multiple management periods (transient). The sensitivity of HS algorithm is evaluated by performing a sensitivity analysis which aims to determine the impact of related solution parameters on convergence behavior. The results show that HS yields nearly same or better solutions than the previous solution methods and may be used to solve management problems in groundwater modeling.
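The core harmony-search loop can be sketched independently of MODFLOW; in the sketch below the objective is a toy stand-in for a pumping-cost function, and the HS parameters (harmony memory size, HMCR, PAR, bandwidth) are typical textbook values rather than those used in the study.

```python
# Minimal harmony-search sketch on a toy objective (stand-in for pumping cost).
import numpy as np

rng = np.random.default_rng(1)
dim, hms, hmcr, par, bw, iters = 4, 10, 0.9, 0.3, 0.05, 2000
lo, hi = 0.0, 1.0

def objective(x):                                   # hypothetical cost to minimize
    return np.sum((x - 0.3)**2)

memory = rng.uniform(lo, hi, (hms, dim))            # harmony memory
costs = np.apply_along_axis(objective, 1, memory)

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:                     # memory consideration
            new[j] = memory[rng.integers(hms), j]
            if rng.random() < par:                  # pitch adjustment
                new[j] = np.clip(new[j] + bw * rng.uniform(-1, 1), lo, hi)
        else:                                       # random consideration
            new[j] = rng.uniform(lo, hi)
    worst = costs.argmax()
    if objective(new) < costs[worst]:               # replace the worst harmony
        memory[worst], costs[worst] = new, objective(new)

print("best solution:", memory[costs.argmin()], "cost:", costs.min())
```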
Multiscale geometric modeling of macromolecules I: Cartesian representation
NASA Astrophysics Data System (ADS)
Xia, Kelin; Feng, Xin; Chen, Zhan; Tong, Yiying; Wei, Guo-Wei
2014-01-01
This paper focuses on the geometric modeling and computational algorithm development of biomolecular structures from two data sources: Protein Data Bank (PDB) and Electron Microscopy Data Bank (EMDB) in the Eulerian (or Cartesian) representation. Molecular surface (MS) contains non-smooth geometric singularities, such as cusps, tips and self-intersecting facets, which often lead to computational instabilities in molecular simulations, and violate the physical principle of surface free energy minimization. Variational multiscale surface definitions are proposed based on geometric flows and solvation analysis of biomolecular systems. Our approach leads to geometric and potential driven Laplace-Beltrami flows for biomolecular surface evolution and formation. The resulting surfaces are free of geometric singularities and minimize the total free energy of the biomolecular system. High order partial differential equation (PDE)-based nonlinear filters are employed for EMDB data processing. We show the efficacy of this approach in feature-preserving noise reduction. After the construction of protein multiresolution surfaces, we explore the analysis and characterization of surface morphology by using a variety of curvature definitions. Apart from the classical Gaussian curvature and mean curvature, maximum curvature, minimum curvature, shape index, and curvedness are also applied to macromolecular surface analysis for the first time. Our curvature analysis is uniquely coupled to the analysis of electrostatic surface potential, which is a by-product of our variational multiscale solvation models. As an expository investigation, we particularly emphasize the numerical algorithms and computational protocols for practical applications of the above multiscale geometric models. Such information may otherwise be scattered over the vast literature on this topic. Based on the curvature and electrostatic analysis from our multiresolution surfaces, we introduce a new concept, the polarized curvature, for the prediction of protein binding sites.
Neurophysiological model of tinnitus: dependence of the minimal masking level on treatment outcome.
Jastreboff, P J; Hazell, J W; Graham, R L
1994-11-01
Validity of the neurophysiological model of tinnitus (Jastreboff, 1990), outlined in this paper, was tested on data from multicenter trial of tinnitus masking (Hazell et al., 1985). Minimal masking level, intensity match of tinnitus, and the threshold of hearing have been evaluated on a total of 382 patients before and after 6 months of treatment with maskers, hearing aids, or combination devices. The data has been divided into categories depending on treatment outcome and type of approach used. Results of analysis revealed that: i) the psychoacoustical description of tinnitus does not possess a predictive value for the outcome of the treatment; ii) minimal masking level changed significantly depending on the treatment outcome, decreasing on average by 5.3 dB in patients reporting improvement, and increasing by 4.9 dB in those whose tinnitus remained the same or worsened; iii) 73.9% of patients reporting improvement had their minimal masking level decreased as compared with 50.5% for patients not showing improvement, which is at the level of random change; iv) the type of device used has no significant impact on the treatment outcome and minimal masking level change; v) intensity match and threshold of hearing did not exhibit any significant changes which can be related to treatment outcome. These results are fully consistent with the neurophysiological interpretation of mechanisms involved in the phenomenon of tinnitus and its alleviation.
Kumar, Abhishek; Clement, Shibu; Agrawal, V P
2010-07-15
An attempt is made to address a few ecological and environmental issues by developing different structural models for an effluent treatment system for electroplating. The effluent treatment system is defined with the help of different subsystems contributing to waste minimization. A hierarchical tree and a block diagram showing all possible interactions among subsystems are proposed. These non-mathematical diagrams are converted into mathematical models for design improvement, analysis, comparison, storage and retrieval, and commercial off-the-shelf purchases of different subsystems. This is achieved by developing a graph-theoretic model, matrix models, and a variable permanent function model. Analysis is carried out by permanent function, hierarchical tree, and block diagram methods. Storage and retrieval are done using matrix models. The methodology is illustrated with the help of an example. Benefits to the electroplaters/end users are identified. © 2010 Elsevier B.V. All rights reserved.
Adams, Christopher S; Antoci, Valentin; Harrison, Gerald; Patal, Payal; Freeman, Terry A; Shapiro, Irving M; Parvizi, Javad; Hickok, Noreen J; Radin, Shula; Ducheyne, Paul
2009-06-01
Peri-prosthetic infection remains a serious complication of joint replacement surgery. Herein, we demonstrate that a vancomycin-containing sol-gel film on Ti alloy rods can successfully treat bacterial infections in an animal model. The vancomycin-containing sol-gel films exhibited predictable release kinetics, while significantly inhibiting S. aureus adhesion. When evaluated in a rat osteomyelitis model, microbiological analysis indicated that the vancomycin-containing sol-gel film caused a profound decrease in S. aureus number. Radiologically, while the control side showed extensive bone degradation, including abscesses and an extensive periosteal reaction, rods coated with the vancomycin-containing sol-gel film resulted in minimal signs of infection. MicroCT analysis confirmed the radiological results, while demonstrating that the vancomycin-containing sol-gel film significantly protected dense bone from resorption and minimized remodeling. These results clearly demonstrate that this novel thin sol-gel technology can be used for the targeted delivery of antibiotics for the treatment of periprosthetic as well as other bone infections. Copyright 2008 Orthopaedic Research Society
Graph cuts for curvature based image denoising.
Bae, Egil; Shi, Juan; Tai, Xue-Cheng
2011-05-01
Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations, such as staircasing effects, of the relatively simple TV model, variational models based upon higher order derivatives have been proposed. The Euler's elastica model is one such higher order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher order models are complicated and computationally complex. In this paper, we will present an efficient minimization algorithm based upon graph cuts for minimizing the energy in the Euler's elastica model, by simplifying the problem to that of solving a sequence of easy graph representable problems. This sequence has connections to the gradient flow of the energy function, and converges to a minimum point. The numerical experiments show that our new approach is more effective in maintaining smooth visual results while preserving sharp features better than TV models.
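As background to the energies discussed above, the simpler total-variation (ROF-type) denoising energy can be minimized with a few lines of smoothed-TV gradient descent; the sketch below illustrates that baseline only and is not the paper's graph-cut Euler's elastica algorithm.

```python
# Gradient-descent minimization of a smoothed total-variation denoising energy
# E(u) = sum |grad u|_eps + (lam/2) * sum (u - f)^2. Illustrative baseline only.
import numpy as np

def tv_denoise(f, lam=0.1, eps=1e-3, tau=0.1, iters=200):
    u = f.copy()
    for _ in range(iters):
        ux = np.diff(u, axis=1, append=u[:, -1:])          # forward differences
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps)
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - tau * (lam * (u - f) - div)                 # descent step on E(u)
    return u

noisy = np.random.default_rng(0).normal(0.5, 0.1, (64, 64))
print("std after / before denoising:", tv_denoise(noisy).std(), noisy.std())
```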
Geographic information system/watershed model interface
Fisher, Gary T.
1989-01-01
Geographic information systems allow for the interactive analysis of spatial data related to water-resources investigations. A conceptual design for an interface between a geographic information system and a watershed model includes functions for the estimation of model parameter values. Design criteria include ease of use, minimal equipment requirements, a generic data-base management system, and use of a macro language. An application is demonstrated for a 90.1-square-kilometer subbasin of the Patuxent River near Unity, Maryland, that performs automated derivation of watershed parameters for hydrologic modeling.
Ensemble habitat mapping of invasive plant species
Stohlgren, T.J.; Ma, P.; Kumar, S.; Rocca, M.; Morisette, J.T.; Jarnevich, C.S.; Benson, N.
2010-01-01
Ensemble species distribution models combine the strengths of several species environmental matching models, while minimizing the weakness of any one model. Ensemble models may be particularly useful in risk analysis of recently arrived, harmful invasive species because species may not yet have spread to all suitable habitats, leaving species-environment relationships difficult to determine. We tested five individual models (logistic regression, boosted regression trees, random forest, multivariate adaptive regression splines (MARS), and maximum entropy model or Maxent) and ensemble modeling for selected nonnative plant species in Yellowstone and Grand Teton National Parks, Wyoming; Sequoia and Kings Canyon National Parks, California; and areas of interior Alaska. The models are based on field data provided by the park staffs, combined with topographic, climatic, and vegetation predictors derived from satellite data. For the four invasive plant species tested, ensemble models were the only models that ranked in the top three models for both field validation and test data. Ensemble models may be more robust than individual species-environment matching models for risk analysis. © 2010 Society for Risk Analysis.
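A bare-bones version of the ensemble step (average the habitat-suitability predictions of several fitted models) might look like the sketch below; the classifiers, synthetic data, and unweighted averaging are placeholders, not the configuration used in the study.

```python
# Simple unweighted ensemble of habitat-suitability predictions (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier

# Fake presence/absence data with six environmental predictors
X, y = make_classification(n_samples=300, n_features=6, random_state=0)

models = [LogisticRegression(max_iter=1000),
          RandomForestClassifier(random_state=0),
          GradientBoostingClassifier(random_state=0)]

# Suitability = predicted presence probability; ensemble = mean across models
probs = np.column_stack([m.fit(X, y).predict_proba(X)[:, 1] for m in models])
ensemble = probs.mean(axis=1)
print("per-model mean suitability:", probs.mean(axis=0), "ensemble mean:", ensemble.mean())
```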
Stabilized High-order Galerkin Methods Based on a Parameter-free Dynamic SGS Model for LES
2015-01-01
stresses obtained via Dyn-SGS are residual-based, the effect of the artificial diffusion is minimal in the regions where the solution is smooth. The direct ... used in the analysis of the results rather than in the definition and analysis of the LES equations described from now on. 2.1 LES and the Dyn-SGS model ... definition is sufficient given the scope of the current study; nevertheless, a more proper definition of for LES should be used in future work
Classification of Phase Transitions by Microcanonical Inflection-Point Analysis
NASA Astrophysics Data System (ADS)
Qi, Kai; Bachmann, Michael
2018-05-01
By means of the principle of minimal sensitivity we generalize the microcanonical inflection-point analysis method by probing derivatives of the microcanonical entropy for signals of transitions in complex systems. A strategy of systematically identifying and locating independent and dependent phase transitions of any order is proposed. The power of the generalized method is demonstrated in applications to the ferromagnetic Ising model and a coarse-grained model for polymer adsorption onto a substrate. The results shed new light on the intrinsic phase structure of systems with cooperative behavior.
Minimal agent based model for financial markets I. Origin and self-organization of stylized facts
NASA Astrophysics Data System (ADS)
Alfi, V.; Cristelli, M.; Pietronero, L.; Zaccaria, A.
2009-02-01
We introduce a minimal agent based model for financial markets to understand the nature and self-organization of the stylized facts. The model is minimal in the sense that we try to identify the essential ingredients to reproduce the most important deviations of price time series from a random walk behavior. We focus on four essential ingredients: fundamentalist agents which tend to stabilize the market; chartist agents which induce destabilization; analysis of price behavior for the two strategies; herding behavior which governs the possibility of changing strategy. Bubbles and crashes correspond to situations dominated by chartists, while fundamentalists provide a long time stability (on average). The stylized facts are shown to correspond to an intermittent behavior which occurs only for a finite value of the number of agents N. Therefore they correspond to finite size effects which, however, can occur at different time scales. We propose a new mechanism for the self-organization of this state which is linked to the existence of a threshold for the agents to be active or not active. The feedback between price fluctuations and number of active agents represents a crucial element for this state of self-organized intermittency. The model can be easily generalized to consider more realistic variants.
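A heavily stripped-down simulation in the spirit of such fundamentalist/chartist models is sketched below; the update rules, coefficients, and the crude random drift standing in for herding-driven strategy switching are illustrative assumptions rather than the authors' specification.

    import numpy as np

    rng = np.random.default_rng(1)
    T, N = 5000, 500                    # time steps, number of agents
    p = np.zeros(T)                     # log price; the fundamental value is taken as 0
    frac_chartist = 0.3                 # fraction of agents following the trend

    for t in range(1, T):
        trend = p[t - 1] - p[max(t - 6, 0)]            # short moving trend
        demand_f = -p[t - 1]                           # fundamentalists pull price to its fundamental
        demand_c = trend                               # chartists amplify the trend
        excess = (1 - frac_chartist) * demand_f + frac_chartist * demand_c
        p[t] = p[t - 1] + 0.05 * excess + rng.standard_normal() / np.sqrt(N)
        # crude stand-in for herding-driven switching between the two strategies
        frac_chartist = np.clip(frac_chartist + 0.05 * (rng.random() - 0.5), 0.05, 0.95)

    r = np.diff(p)
    print("excess kurtosis of returns:", ((r - r.mean())**4).mean() / r.var()**2 - 3)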
Inflation in the mixed Higgs-R2 model
NASA Astrophysics Data System (ADS)
He, Minxi; Starobinsky, Alexei A.; Yokoyama, Jun'ichi
2018-05-01
We analyze a two-field inflationary model consisting of the Ricci scalar squared (R2) term and the standard Higgs field non-minimally coupled to gravity in addition to the Einstein R term. Detailed analysis of the power spectrum of this model with mass hierarchy is presented, and we find that one can describe this model as an effective single-field model in the slow-roll regime with a modified sound speed. The scalar spectral index predicted by this model coincides with those given by the R2 inflation and the Higgs inflation implying that there is a close relation between this model and the R2 inflation already in the original (Jordan) frame. For a typical value of the self-coupling of the standard Higgs field at the high energy scale of inflation, the role of the Higgs field in parameter space involved is to modify the scalaron mass, so that the original mass parameter in the R2 inflation can deviate from its standard value when non-minimal coupling between the Ricci scalar and the Higgs field is large enough.
NASA Astrophysics Data System (ADS)
Kirschner, Matthias; Wesarg, Stefan
2011-03-01
Active Shape Models (ASMs) are a popular family of segmentation algorithms which combine local appearance models for boundary detection with a statistical shape model (SSM). They are especially popular in medical imaging due to their ability for fast and accurate segmentation of anatomical structures even in large and noisy 3D images. A well-known limitation of ASMs is that the shape constraints are over-restrictive, because the segmentations are bounded by the Principal Component Analysis (PCA) subspace learned from the training data. To overcome this limitation, we propose a new energy minimization approach which combines an external image energy with an internal shape model energy. Our shape energy uses the Distance From Feature Space (DFFS) concept to allow deviations from the PCA subspace in a theoretically sound and computationally fast way. In contrast to previous approaches, our model does not rely on post-processing with constrained free-form deformation or additional complex local energy models. In addition to the energy minimization approach, we propose a new method for liver detection, a new method for initializing an SSM and an improved k-Nearest Neighbour (kNN)-classifier for boundary detection. Our ASM is evaluated with leave-one-out tests on a data set with 34 tomographic CT scans of the liver and is compared to an ASM with standard shape constraints. The quantitative results of our experiments show that we achieve higher segmentation accuracy with our energy minimization approach than with standard shape constraints.
Characterizing and modeling the dynamics of online popularity.
Ratkiewicz, Jacob; Fortunato, Santo; Flammini, Alessandro; Menczer, Filippo; Vespignani, Alessandro
2010-10-08
Online popularity has an enormous impact on opinions, culture, policy, and profits. We provide a quantitative, large scale, temporal analysis of the dynamics of online content popularity in two massive model systems: the Wikipedia and an entire country's Web space. We find that the dynamics of popularity are characterized by bursts, displaying characteristic features of critical systems such as fat-tailed distributions of magnitude and interevent time. We propose a minimal model combining the classic preferential popularity increase mechanism with the occurrence of random popularity shifts due to exogenous factors. The model recovers the critical features observed in the empirical analysis of the systems analyzed here, highlighting the key factors needed in the description of popularity dynamics.
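A toy simulation of the mechanism described (preferential popularity increase plus occasional exogenous shifts) is sketched below; the rates, burst sizes, and system size are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(2)
    n_items, T = 200, 10000
    pop = np.ones(n_items)                          # popularity counts, all items start at 1

    for _ in range(T):
        # preferential increase: attention goes to an item in proportion to its popularity
        i = rng.choice(n_items, p=pop / pop.sum())
        pop[i] += 1
        # exogenous shift: occasionally a random item receives a burst of external attention
        if rng.random() < 0.01:
            pop[rng.integers(n_items)] += rng.integers(10, 100)

    print("max / median popularity:", pop.max(), np.median(pop))   # fat-tailed spread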
Adopting epidemic model to optimize medication and surgical intervention of excess weight
NASA Astrophysics Data System (ADS)
Sun, Ruoyan
2017-01-01
We combined an epidemic model with an objective function to minimize the weighted sum of people with excess weight and the cost of a medication and surgical intervention in the population. The epidemic model consists of ordinary differential equations describing three subpopulation groups based on weight. We introduced an intervention using medication and surgery to deal with excess weight. An objective function is constructed taking into consideration the cost of the intervention as well as the weight distribution of the population. Using empirical data, we show that a fixed participation rate reduces the size of the obese population but increases the size of the overweight population. An optimal participation rate exists and decreases with respect to time. Both theoretical analysis and empirical example confirm the existence of an optimal participation rate, u*. Under u*, the weighted sum of the overweight (S) and obese (O) populations as well as the cost of the program is minimized. This article highlights the existence of an optimal participation rate that minimizes the number of people with excess weight and the cost of the intervention. The time-varying optimal participation rate could contribute to designing future public health interventions for excess weight.
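To make the structure concrete, here is a minimal numerical sketch: a hypothetical three-compartment weight model with an intervention participation rate u, and a grid search for the u that minimizes a weighted sum of the overweight and obese groups plus a program cost. All compartments, rate constants, weights, and the time horizon are placeholders, not the paper's calibrated model.

    import numpy as np
    from scipy.integrate import solve_ivp

    def weight_model(t, y, u):
        """Toy compartments: normal (N), overweight (S), obese (O); u = participation rate."""
        N, S, O = y
        a, b = 0.08, 0.05            # progression rates N -> S and S -> O (illustrative)
        g1, g2 = 0.03, 0.02          # spontaneous recovery rates
        r = 0.10 * u                 # extra recovery induced by medication/surgery
        dN = -a * N + (g1 + r) * S
        dS = a * N - b * S - (g1 + r) * S + (g2 + r) * O
        dO = b * S - (g2 + r) * O
        return [dN, dS, dO]

    y0 = [0.4, 0.35, 0.25]                          # initial weight distribution

    def total_cost(u, w_S=1.0, w_O=2.0, c_u=0.5):
        sol = solve_ivp(weight_model, (0.0, 50.0), y0, args=(u,))
        N, S, O = sol.y[:, -1]
        return w_S * S + w_O * O + c_u * u          # weighted excess weight plus program cost

    us = np.linspace(0.0, 1.0, 21)
    print("illustrative optimal participation rate:", us[np.argmin([total_cost(u) for u in us])])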
Singularity-free dynamic equations of spacecraft-manipulator systems
NASA Astrophysics Data System (ADS)
From, Pål J.; Ytterstad Pettersen, Kristin; Gravdahl, Jan T.
2011-12-01
In this paper we derive the singularity-free dynamic equations of spacecraft-manipulator systems using a minimal representation. Spacecraft are normally modeled using Euler angles, which lead to singularities, or Euler parameters, which are not a minimal representation and thus not suited for Lagrange's equations. We circumvent these issues by introducing quasi-coordinates, which allow us to derive the dynamics using minimal and globally valid non-Euclidean configuration coordinates. This is a great advantage as the configuration space of a spacecraft is non-Euclidean. We thus obtain a computationally efficient and singularity-free formulation of the dynamic equations with the same complexity as the conventional Lagrangian approach. The closed-form formulation makes the proposed approach well suited for system analysis and model-based control. This paper focuses on the dynamic properties of free-floating and free-flying spacecraft-manipulator systems and we show how to calculate the inertia and Coriolis matrices in such a way that this can be implemented for simulation and control purposes without extensive knowledge of the mathematical background. This paper represents the first detailed study of modeling of spacecraft-manipulator systems with a focus on a singularity-free formulation using the proposed framework.
A User's Guide for the Differential Reduced Ejector/Mixer Analysis "DREA" Program. 1.0
NASA Technical Reports Server (NTRS)
DeChant, Lawrence J.; Nadell, Shari-Beth
1999-01-01
A system of analytical and numerical two-dimensional mixer/ejector nozzle models that require minimal empirical input has been developed and programmed for use in conceptual and preliminary design. This report contains a user's guide describing the operation of the computer code, DREA (Differential Reduced Ejector/mixer Analysis), that contains these mathematical models. This program is currently being adopted by the Propulsion Systems Analysis Office at the NASA Glenn Research Center. A brief summary of the DREA method is provided, followed by detailed descriptions of the program input and output files. Sample cases demonstrating the application of the program are presented.
NASA Technical Reports Server (NTRS)
Lung, Shun-fat; Pak, Chan-gi
2008-01-01
Updating the finite element model using measured data is a challenging problem in the area of structural dynamics. The model updating process requires not only satisfactory correlations between analytical and experimental results, but also the retention of dynamic properties of structures. Accurate rigid body dynamics are important for flight control system design and aeroelastic trim analysis. Minimizing the difference between analytical and experimental results is a type of optimization problem. In this research, a multidisciplinary design, analysis, and optimization (MDAO) tool is introduced to optimize the objective function and constraints such that the mass properties, the natural frequencies, and the mode shapes are matched to the target data as well as the mass matrix being orthogonalized.
Coupling mechanical tension and GTPase signaling to generate cell and tissue dynamics
NASA Astrophysics Data System (ADS)
Zmurchok, Cole; Bhaskar, Dhananjay; Edelstein-Keshet, Leah
2018-07-01
Regulators of the actin cytoskeleton such as Rho GTPases can modulate forces developed in cells by promoting actomyosin contraction. At the same time, through mechanosensing, tension is known to affect the activity of Rho GTPases. What happens when these effects act in concert? Using a minimal model (1 GTPase coupled to a Kelvin–Voigt element), we show that two-way feedback between signaling (‘RhoA’) and mechanical tension (stretching) leads to a spectrum of cell behaviors, including contracted or relaxed cells, and cells that oscillate between these extremes. When such ‘model cells’ are connected to one another in a row or in a 2D sheet (‘epithelium’), we observe waves of contraction/relaxation and GTPase activity sweeping through the tissue. The minimal model lends itself to full bifurcation analysis, and suggests a mechanism that explains behavior observed in the context of development and collective cell behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aartsen, M.G.; Abraham, K.; Ackermann, M.
We present an improved event-level likelihood formalism for including neutrino telescope data in global fits to new physics. We derive limits on spin-dependent dark matter-proton scattering by employing the new formalism in a re-analysis of data from the 79-string IceCube search for dark matter annihilation in the Sun, including explicit energy information for each event. The new analysis excludes a number of models in the weak-scale minimal supersymmetric standard model (MSSM) for the first time. This work is accompanied by the public release of the 79-string IceCube data, as well as an associated computer code for applying the new likelihood to arbitrary dark matter models.
Graham, Christopher N; Hechmati, Guy; Fakih, Marwan G; Knox, Hediyyih N; Maglinte, Gregory A; Hjelmgren, Jonas; Barber, Beth; Schwartzberg, Lee S
2015-01-01
To compare the costs of first-line treatment with panitumumab + FOLFOX in comparison to cetuximab + FOLFIRI among patients with wild-type (WT) RAS metastatic colorectal cancer (mCRC) in the US. A cost-minimization model was developed assuming similar treatment efficacy between both regimens. The model estimated the costs associated with drug acquisition, treatment administration frequency (every 2 weeks for panitumumab, weekly for cetuximab), and incidence of infusion reactions. Average anti-EGFR doses were calculated from the ASPECCT clinical trial, and average doses of chemotherapy regimens were based on product labels. Using the medical component of the consumer price index, adverse event costs were inflated to 2014 US dollars, and all other costs were reported in 2014 US dollars. The time horizon for the model was based on average first-line progression-free survival of a WT RAS patient, estimated from parametric survival analyses of PRIME clinical trial data. Relative to cetuximab + FOLFIRI in the first-line treatment of WT RAS mCRC, the cost-minimization model demonstrated lower projected drug acquisition, administration, and adverse event costs for patients who received panitumumab + FOLFOX. The overall cost per patient for first-line treatment was $179,219 for panitumumab + FOLFOX vs $202,344 for cetuximab + FOLFIRI, resulting in a per-patient saving of $23,125 (11.4%) in favor of panitumumab + FOLFOX. From a value perspective, the cost-minimization model supports panitumumab + FOLFOX instead of cetuximab + FOLFIRI as the preferred first-line treatment of WT RAS mCRC patients requiring systemic therapy.
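The reported per-patient saving follows directly from the two stated totals:

    panitumumab_folfox = 179_219          # first-line cost per patient, 2014 USD
    cetuximab_folfiri = 202_344
    saving = cetuximab_folfiri - panitumumab_folfox
    print(saving, round(100 * saving / cetuximab_folfiri, 1))   # 23125 and 11.4 (percent)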
Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie
2014-01-01
Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that the biomass reaction is surprisingly blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations. PMID:25291352
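For context, flux balance analysis solves a linear programme: maximize a flux of interest subject to steady-state mass balance S·v = 0 and flux bounds; a reaction is blocked when its flux cannot be made non-zero. The sketch below runs this check on an invented three-reaction toy network with floating-point linear programming (SciPy), which is exactly the setting where MONGOOSE's exact arithmetic would be used instead.

    import numpy as np
    from scipy.optimize import linprog

    # Toy network (rows = metabolites A, B; columns = reactions R1..R3):
    # R1: uptake -> A,  R2: A -> B,  R3: B -> biomass (export)
    S = np.array([[1, -1,  0],
                  [0,  1, -1]])
    bounds = [(0, 10), (0, 10), (0, 10)]     # flux bounds for each reaction
    c = np.array([0, 0, -1])                 # maximize v3  ==  minimize -v3

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    print("optimal biomass flux:", res.x[2])
    print("biomass reaction blocked?", np.isclose(res.x[2], 0.0))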
BnmrOffice: A Free Software for β-nmr Data Analysis
NASA Astrophysics Data System (ADS)
Saadaoui, Hassan
A data-analysis framework with a graphical user interface (GUI) is developed to analyze β-nmr spectra in an automated and intuitive way. This program, named BnmrOffice, is written in C++ and employs the QT libraries and tools for designing the GUI, and CERN's Minuit optimization routines for minimization. The program runs under multiple platforms, and is available for free under the terms of the GNU GPL standards. The GUI is structured in tabs to search, plot and analyze data, along with other functionalities. The user can tweak the minimization options and fit multiple data files (or runs) using single or global fitting routines with pre-defined or new models. Currently, BnmrOffice reads TRIUMF's MUD data and ASCII files, and can be extended to other formats.
NASA Technical Reports Server (NTRS)
Oswald, Fred B.; Savage, Michael; Zaretsky, Erwin V.
2015-01-01
The U.S. Space Shuttle fleet was originally intended to have a life of 100 flights for each vehicle, lasting over a 10-year period, with minimal scheduled maintenance or inspection. The first space shuttle flight was that of the Space Shuttle Columbia (OV-102), launched April 12, 1981. The disaster that destroyed Columbia occurred on its 28th flight, February 1, 2003, nearly 22 years after its first launch. In order to minimize risk of losing another Space Shuttle, a probabilistic life and reliability analysis was conducted for the Space Shuttle rudder/speed brake actuators to determine the number of flights the actuators could sustain. A life and reliability assessment of the actuator gears was performed in two stages: a contact stress fatigue model and a gear tooth bending fatigue model. For the contact stress analysis, the Lundberg-Palmgren bearing life theory was expanded to include gear-surface pitting for the actuator as a system. The mission spectrum of the Space Shuttle rudder/speed brake actuator was combined into equivalent effective hinge moment loads including an actuator input preload for the contact stress fatigue and tooth bending fatigue models. Gear system reliabilities are reported for both models and their combination. Reliability of the actuator bearings was analyzed separately, based on data provided by the actuator manufacturer. As a result of the analysis, the reliability of one half of a single actuator was calculated to be 98.6 percent for 12 flights. Accordingly, each actuator was subsequently limited to 12 flights before removal from service in the Space Shuttle.
Electroweak symmetry breaking and collider signatures in the next-to-minimal composite Higgs model
NASA Astrophysics Data System (ADS)
Niehoff, Christoph; Stangl, Peter; Straub, David M.
2017-04-01
We conduct a detailed numerical analysis of the composite pseudo-Nambu-Goldstone Higgs model based on the next-to-minimal coset SO(6)/SO(5) ≅ SU(4)/Sp(4), featuring an additional SM singlet scalar in the spectrum, which we allow to mix with the Higgs boson. We identify regions in parameter space compatible with all current experimental constraints, including radiative electroweak symmetry breaking, flavour physics, and direct searches at colliders. We find the additional scalar, with a mass predicted to be below a TeV, to be virtually unconstrained by current LHC data, but potentially in reach of run 2 searches. Promising indirect searches include rare semi-leptonic B decays, CP violation in B_s mixing, and the electric dipole moment of the neutron.
Biological applications of phase-contrast electron microscopy.
Nagayama, Kuniaki
2014-01-01
Here, I review the principles and applications of phase-contrast electron microscopy using phase plates. First, I develop the principle of phase contrast based on a minimal model of microscopy, introducing a double Fourier-transform process to mathematically formulate the image formation. Next, I explain four phase-contrast (PC) schemes, defocus PC, Zernike PC, Hilbert differential contrast, and schlieren optics, as image-filtering processes in the context of the minimal model, with particular emphases on the Zernike PC and corresponding Zernike phase plates. Finally, I review applications of Zernike PC cryo-electron microscopy to biological systems such as protein molecules, virus particles, and cells, including single-particle analysis to delineate three-dimensional (3D) structures of protein and virus particles and cryo-electron tomography to reconstruct 3D images of complex protein systems and cells.
NASA Astrophysics Data System (ADS)
Foufoula-Georgiou, E.; Czuba, J. A.; Belmont, P.; Wilcock, P. R.; Gran, K. B.; Kumar, P.
2015-12-01
Climatic trends and agricultural intensification in Midwestern U.S. landscapes have contributed to hydrologic regime shifts and a cascade of changes to water quality and river ecosystems. Informing management and policy to mitigate undesired consequences requires a careful scientific analysis that includes data-based inference and conceptual/physical modeling. It also calls for a systems approach that sees beyond a single stream to the whole watershed, favoring the adoption of minimal complexity rather than highly parameterized models for scenario evaluation and comparison. Minimal complexity models can focus on key dynamic processes of the system of interest, reducing problems of model structure bias and equifinality. Here we present a comprehensive analysis of climatic, hydrologic, and ecologic trends in the Minnesota River basin, a 45,000 km2 basin undergoing continuous agricultural intensification and suffering from declining water quality and aquatic biodiversity. We show that: (a) it is easy to arrive at an erroneous view of the system using traditional analyses and modeling tools; (b) even with a well-founded understanding of the key drivers and processes contributing to the problem, there are multiple pathways for minimizing/reversing environmental degradation; and (c) addressing the underlying driver of change (i.e., increased streamflows and reduced water storage due to agricultural drainage practices) by restoring a small amount of water storage in the landscape results in multiple non-linear improvements in downstream water quality. We argue that "optimization" between ecosystem services and economic considerations requires simple modeling frameworks, which include the most essential elements of the whole system and allow for evaluation of alternative management scenarios. Science-based approaches informing management and policy are urgently needed in this region, calling for a new era of watershed management that responds to new and accelerating stressors at the intersection of the food-water-energy-environment nexus.
1987 Oak Ridge model conference: Proceedings: Volume I, Part 3, Waste Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-01-01
A conference sponsored by the United States Department of Energy (DOE), was held on waste management. Topics of discussion were transuranic waste management, chemical and physical treatment technologies, waste minimization, land disposal technology and characterization and analysis. Individual projects are processed separately for the data bases. (CBS)
USDA-ARS?s Scientific Manuscript database
Computer Monte-Carlo (MC) simulations (Geant4) of neutron propagation and acquisition of gamma response from soil samples were applied to evaluate INS system performance characteristics [sensitivity, minimal detectable level (MDL)] for soil carbon measurement. The INS system model with best performanc...
DOT National Transportation Integrated Search
2012-08-01
With the purpose of minimizing or preventing crash-induced fires in road and rail transportation, the current interest in bio-derived and blended transportation fuels is increasing. Based on two years of preliminary testing and analysis, it appears to...
Random Effects Structure for Confirmatory Hypothesis Testing: Keep It Maximal
ERIC Educational Resources Information Center
Barr, Dale J.; Levy, Roger; Scheepers, Christoph; Tily, Harry J.
2013-01-01
Linear mixed-effects models (LMEMs) have become increasingly prominent in psycholinguistics and related areas. However, many researchers do not seem to appreciate how random effects structures affect the generalizability of an analysis. Here, we argue that researchers using LMEMs for confirmatory hypothesis testing should minimally adhere to the…
Decision Modeling Framework to Minimize Arrival Delays from Ground Delay Programs
NASA Astrophysics Data System (ADS)
Mohleji, Nandita
Convective weather and other constraints create uncertainty in air transportation, leading to costly delays. A Ground Delay Program (GDP) is a strategy to mitigate these effects. Systematic decision support can increase GDP efficacy, reduce delays, and minimize direct operating costs. In this study, a decision analysis (DA) model is constructed by combining a decision tree and Bayesian belief network. Through a study of three New York region airports, the DA model demonstrates that larger GDP scopes that include more flights in the program, along with longer lead times that provide stakeholders greater notice of a pending program, trigger the fewest average arrival delays. These findings are demonstrated to result in a savings of up to $1,850 per flight. Furthermore, when convective weather is predicted, forecast weather confidences remain the same level or greater at least 70% of the time, supporting more strategic decision making. The DA model thus enables quantification of uncertainties and insights on causal relationships, providing support for future GDP decisions.
Lever, Melissa; Lim, Hong-Sheng; Kruger, Philipp; Nguyen, John; Trendel, Nicola; Abu-Shah, Enas; Maini, Philip Kumar; van der Merwe, Philip Anton
2016-01-01
T cells must respond differently to antigens of varying affinity presented at different doses. Previous attempts to map peptide MHC (pMHC) affinity onto T-cell responses have produced inconsistent patterns of responses, preventing formulations of canonical models of T-cell signaling. Here, a systematic analysis of T-cell responses to 1 million-fold variations in both pMHC affinity and dose produced bell-shaped dose–response curves and different optimal pMHC affinities at different pMHC doses. Using sequential model rejection/identification algorithms, we identified a unique, minimal model of cellular signaling incorporating kinetic proofreading with limited signaling coupled to an incoherent feed-forward loop (KPL-IFF) that reproduces these observations. We show that the KPL-IFF model correctly predicts the T-cell response to antigen copresentation. Our work offers a general approach for studying cellular signaling that does not require full details of biochemical pathways. PMID:27702900
Minimal model for a hydrodynamic fingering instability in microroller suspensions
NASA Astrophysics Data System (ADS)
Delmotte, Blaise; Donev, Aleksandar; Driscoll, Michelle; Chaikin, Paul
2017-11-01
We derive a minimal continuum model to investigate the hydrodynamic mechanism behind the fingering instability recently discovered in a suspension of microrollers near a floor [M. Driscoll et al., Nat. Phys. 13, 375 (2017), 10.1038/nphys3970]. Our model, consisting of two continuous lines of rotlets, exhibits a linear instability driven only by hydrodynamic interactions and reproduces the length-scale selection observed in large-scale particle simulations and in experiments. By adjusting only one parameter, the distance between the two lines, our dispersion relation exhibits quantitative agreement with the simulations and qualitative agreement with experimental measurements. Our linear stability analysis indicates that this instability is caused by the combination of the advective and transverse flows generated by the microrollers near a no-slip surface. Our simple model offers an interesting formalism to characterize other hydrodynamic instabilities that have not been well understood, such as size scale selection in suspensions of particles sedimenting adjacent to a wall, or the recently observed formations of traveling phonons in systems of confined driven particles.
Lever, Melissa; Lim, Hong-Sheng; Kruger, Philipp; Nguyen, John; Trendel, Nicola; Abu-Shah, Enas; Maini, Philip Kumar; van der Merwe, Philip Anton; Dushek, Omer
2016-10-25
T cells must respond differently to antigens of varying affinity presented at different doses. Previous attempts to map peptide MHC (pMHC) affinity onto T-cell responses have produced inconsistent patterns of responses, preventing formulations of canonical models of T-cell signaling. Here, a systematic analysis of T-cell responses to 1 million-fold variations in both pMHC affinity and dose produced bell-shaped dose-response curves and different optimal pMHC affinities at different pMHC doses. Using sequential model rejection/identification algorithms, we identified a unique, minimal model of cellular signaling incorporating kinetic proofreading with limited signaling coupled to an incoherent feed-forward loop (KPL-IFF) that reproduces these observations. We show that the KPL-IFF model correctly predicts the T-cell response to antigen copresentation. Our work offers a general approach for studying cellular signaling that does not require full details of biochemical pathways.
A method of hidden Markov model optimization for use with geophysical data sets
NASA Technical Reports Server (NTRS)
Granat, R. A.
2003-01-01
Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.
Evaluation of Second-Level Inference in fMRI Analysis
Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs
2016-01-01
We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and on (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of 3 phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not take into account this variability. Second, one proceeds via inference based on parametrical assumptions or via permutation-based inference. Third, we evaluate 3 commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with minimal cluster size. Based on a simulation study and real data we find that the two-step procedure with minimal cluster size results in most stable results, followed by the familywise error rate correction. The FDR results in most variable results, for both permutation-based inference and parametrical inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference. PMID:26819578
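For reference, the FDR step in such pipelines is usually the Benjamini–Hochberg step-up procedure; a compact generic implementation (independent of any particular fMRI toolbox) looks like this:

    import numpy as np

    def benjamini_hochberg(pvals, q=0.05):
        """Return a boolean mask of tests rejected at FDR level q (BH step-up rule)."""
        p = np.asarray(pvals)
        m = p.size
        order = np.argsort(p)
        thresh = q * np.arange(1, m + 1) / m          # BH thresholds for the sorted p-values
        below = p[order] <= thresh
        k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
        reject = np.zeros(m, dtype=bool)
        reject[order[:k]] = True
        return reject

    rng = np.random.default_rng(0)
    pvals = np.concatenate([rng.uniform(0, 0.01, 20), rng.uniform(0, 1, 980)])  # 20 true signals
    print("tests declared significant:", benjamini_hochberg(pvals).sum())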
Froese, Tom; Lenay, Charles; Ikegami, Takashi
2012-01-01
One of the major challenges faced by explanations of imitation is the “correspondence problem”: how is an agent able to match its bodily expression to the observed bodily expression of another agent, especially when there is no possibility of external self-observation? Current theories only consider the possibility of an innate or acquired matching mechanism belonging to an isolated individual. In this paper we evaluate an alternative that situates the explanation of imitation in the inter-individual dynamics of the interaction process itself. We implemented a minimal model of two interacting agents based on a recent psychological study of imitative behavior during minimalist perceptual crossing. The agents cannot sense the configuration of their own body, and do not have access to other's body configuration, either. And yet surprisingly they are still capable of converging on matching bodily configurations. Analysis revealed that the agents solved this version of the correspondence problem in terms of collective properties of the interaction process. Contrary to the assumption that such properties merely serve as external input or scaffolding for individual mechanisms, it was found that the behavioral dynamics were distributed across the model as a whole. PMID:23060768
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
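A rough sketch of the central idea (not the authors' exact formulation or solver): fit a linear-in-parameters model whose objective penalizes the weight norm together with both the mean and the variance of the modeling errors, instead of constraining the error mean to zero as the classical LS-SVM does. The data, kernel-free linear model, and penalty weights below are all illustrative.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(3)
    X = rng.uniform(-2, 2, (200, 1))
    y = 1.5 * X[:, 0] + 0.5 + 0.2 * rng.standard_normal(200)
    y[::25] += 3.0                                   # a few outliers / non-Gaussian noise

    def objective(theta, c_mean=1.0, c_var=10.0, reg=1e-2):
        w, b = theta[:-1], theta[-1]
        e = y - (X @ w + b)                          # modeling errors
        return reg * (w @ w) + c_mean * e.mean()**2 + c_var * e.var()

    res = minimize(objective, x0=np.zeros(X.shape[1] + 1), method="BFGS")
    print("fitted slope and intercept:", res.x)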
Quasi-Optimal Elimination Trees for 2D Grids with Singularities
Paszyńska, A.; Paszyński, M.; Jopek, K.; ...
2015-01-01
We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(N_e log N_e), where N_e is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.
NASA Astrophysics Data System (ADS)
Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.
2018-05-01
Based on the minimal length concept, inspired by Heisenberg algebra, a closed analytical formula is derived for the energy spectrum of the prolate γ-rigid Bohr-Mottelson Hamiltonian of nuclei, within a quantum perturbation method (QPM), by considering a scaled Davidson potential in the β shape variable. In the resulting solution, called X(3)-D-ML, the ground state and the first β-band are all studied as a function of the free parameters. Introducing the minimal length concept together with a QPM makes the model a flexible and powerful approach for describing nuclear collective excitations of a variety of vibrational-like nuclei. The introduction of scaling parameters in the Davidson potential enables us to obtain a physical minimum of the latter in comparison with previous works. The analysis of the corrected wave function, as well as the probability density distribution, shows that the minimal length parameter has a physical upper bound limit.
NASA Astrophysics Data System (ADS)
Miehe, Christian; Mauthe, Steffen; Teichtmeister, Stephan
2015-09-01
This work develops new minimization and saddle point principles for the coupled problem of Darcy-Biot-type fluid transport in porous media at fracture. It shows that the quasi-static problem of elastically deforming, fluid-saturated porous media is related to a minimization principle for the evolution problem. This two-field principle determines the rate of deformation and the fluid mass flux vector. It provides a canonically compact model structure, where the stress equilibrium and the inverse Darcy's law appear as the Euler equations of a variational statement. A Legendre transformation of the dissipation potential relates the minimization principle to a characteristic three field saddle point principle, whose Euler equations determine the evolutions of deformation and fluid content as well as Darcy's law. A further geometric assumption results in modified variational principles for a simplified theory, where the fluid content is linked to the volumetric deformation. The existence of these variational principles underlines inherent symmetries of Darcy-Biot theories of porous media. This can be exploited in the numerical implementation by the construction of time- and space-discrete variational principles, which fully determine the update problems of typical time stepping schemes. Here, the proposed minimization principle for the coupled problem is advantageous with regard to a new unconstrained stable finite element design, while space discretizations of the saddle point principles are constrained by the LBB condition. The variational principles developed provide the most fundamental approach to the discretization of nonlinear fluid-structure interactions, showing symmetric systems in algebraic update procedures. They also provide an excellent starting point for extensions towards more complex problems. This is demonstrated by developing a minimization principle for a phase field description of fracture in fluid-saturated porous media. It is designed for an incorporation of alternative crack driving forces, such as a convenient criterion in terms of the effective stress. The proposed setting provides a modeling framework for the analysis of complex problems such as hydraulic fracture. This is demonstrated by a spectrum of model simulations.
Osuch, Tomasz; Markowski, Konrad; Jędrzejewski, Kazimierz
2015-06-10
A versatile numerical model for spectral transmission/reflection, group delay characteristic analysis, and design of tapered fiber Bragg gratings (TFBGs) is presented. This approach ensures flexibility with defining both distribution of refractive index change of the gratings (including apodization) and shape of the taper profile. Additionally, sensing and tunable dispersion properties of the TFBGs were fully examined, considering strain-induced effects. The presented numerical approach, together with Pareto optimization, were also used to design the best tanh apodization profiles of the TFBG in terms of maximizing its spectral width with simultaneous minimization of the group delay oscillations. Experimental verification of the model confirms its correctness. The combination of model versatility and possibility to define the other objective functions of Pareto optimization creates a universal tool for TFBG analysis and design.
Evolutionary Optimization of a Geometrically Refined Truss
NASA Technical Reports Server (NTRS)
Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.
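As a small, self-contained illustration of one of the evolutionary components mentioned above (differential evolution), the sketch below sizes the cross-sectional areas of a hypothetical two-member bracket to minimize mass under a penalized stress limit. The geometry, member forces, material constants, and penalty weight are invented; the TP's actual problem couples topology, geometric refinement, and finite element analysis.

    import numpy as np
    from scipy.optimize import differential_evolution

    rho, sigma_max = 7850.0, 200e6                  # density (kg/m^3), stress limit (Pa)
    L = np.array([1.2, 1.5])                        # member lengths (m)
    F = np.array([50e3, 30e3])                      # member forces (N), assumed known

    def penalized_mass(A):
        mass = rho * np.sum(L * A)                  # objective: total structural mass
        overstress = np.maximum(F / A - sigma_max, 0.0)
        return mass + 1e4 * np.sum(overstress / sigma_max)   # penalty for violated stress limits

    bounds = [(1e-5, 1e-2), (1e-5, 1e-2)]           # cross-sectional areas (m^2)
    result = differential_evolution(penalized_mass, bounds, seed=0, tol=1e-8)
    print("optimal areas (m^2):", result.x, "mass (kg):", rho * np.sum(L * result.x))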
Maximum Likelihood Estimation with Emphasis on Aircraft Flight Data
NASA Technical Reports Server (NTRS)
Iliff, K. W.; Maine, R. E.
1985-01-01
Accurate modeling of flexible space structures is an important field that is currently under investigation. Parameter estimation, using methods such as maximum likelihood, is one of the ways that the model can be improved. The maximum likelihood estimator has been used to extract stability and control derivatives from flight data for many years. Most of the literature on aircraft estimation concentrates on new developments and applications, assuming familiarity with basic estimation concepts. Some of these basic concepts are presented. The maximum likelihood estimator and the aircraft equations of motion that the estimator uses are briefly discussed. The basic concepts of minimization and estimation are examined for a simple computed aircraft example. The cost functions that are to be minimized during estimation are defined and discussed. Graphic representations of the cost functions are given to help illustrate the minimization process. Finally, the basic concepts are generalized, and estimation from flight data is discussed. Specific examples of estimation of structural dynamics are included. Some of the major conclusions for the computed example are also developed for the analysis of flight data.
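A minimal, generic example of the estimation-by-minimization idea described here (a scalar exponential-decay model with Gaussian measurement noise; nothing aircraft-specific): the cost function is the negative log-likelihood, and the estimate is the parameter vector that minimizes it.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(4)
    t = np.linspace(0, 5, 60)
    true_a, true_tau, sigma = 2.0, 1.5, 0.1
    z = true_a * np.exp(-t / true_tau) + sigma * rng.standard_normal(t.size)   # measurements

    def neg_log_likelihood(theta):
        a, tau = theta
        resid = z - a * np.exp(-t / tau)
        return 0.5 * np.sum(resid**2) / sigma**2      # Gaussian cost function (up to a constant)

    est = minimize(neg_log_likelihood, x0=[1.0, 1.0], method="Nelder-Mead")
    print("estimated amplitude and time constant:", est.x)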
A minimal model of predator–swarm interactions
Chen, Yuxin; Kolokolnikov, Theodore
2014-01-01
We propose a minimal model of predator–swarm interactions which captures many of the essential dynamics observed in nature. Different outcomes are observed depending on the predator strength. For a ‘weak’ predator, the swarm is able to escape the predator completely. As the strength is increased, the predator is able to catch up with the swarm as a whole, but the individual prey is able to escape by ‘confusing’ the predator: the prey forms a ring with the predator at the centre. For higher predator strength, complex chasing dynamics are observed which can become chaotic. For even higher strength, the predator is able to successfully capture the prey. Our model is simple enough to be amenable to a full mathematical analysis, which is used to predict the shape of the swarm as well as the resulting predator–prey dynamics as a function of model parameters. We show that, as the predator strength is increased, there is a transition (owing to a Hopf bifurcation) from confusion state to chasing dynamics, and we compute the threshold analytically. Our analysis indicates that the swarming behaviour is not helpful in avoiding the predator, suggesting that there are other reasons why the species may swarm. The complex shape of the swarm in our model during the chasing dynamics is similar to the shape of a flock of sheep avoiding a shepherd. PMID:24598204
A minimal model of predator-swarm interactions.
Chen, Yuxin; Kolokolnikov, Theodore
2014-05-06
We propose a minimal model of predator-swarm interactions which captures many of the essential dynamics observed in nature. Different outcomes are observed depending on the predator strength. For a 'weak' predator, the swarm is able to escape the predator completely. As the strength is increased, the predator is able to catch up with the swarm as a whole, but the individual prey is able to escape by 'confusing' the predator: the prey forms a ring with the predator at the centre. For higher predator strength, complex chasing dynamics are observed which can become chaotic. For even higher strength, the predator is able to successfully capture the prey. Our model is simple enough to be amenable to a full mathematical analysis, which is used to predict the shape of the swarm as well as the resulting predator-prey dynamics as a function of model parameters. We show that, as the predator strength is increased, there is a transition (owing to a Hopf bifurcation) from confusion state to chasing dynamics, and we compute the threshold analytically. Our analysis indicates that the swarming behaviour is not helpful in avoiding the predator, suggesting that there are other reasons why the species may swarm. The complex shape of the swarm in our model during the chasing dynamics is similar to the shape of a flock of sheep avoiding a shepherd.
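A schematic particle simulation in the spirit of such attraction-repulsion swarms with a pursuing predator is given below; the specific force laws, coefficients, and time stepping are illustrative assumptions, not the paper's exact equations. Varying the predator strength c changes the qualitative regime (escape, ring/confusion, chasing).

    import numpy as np

    rng = np.random.default_rng(5)
    n, steps, dt = 60, 2000, 0.01
    prey = rng.standard_normal((n, 2))              # prey positions in the plane
    pred = np.array([5.0, 0.0])                     # predator position
    c = 2.0                                         # predator strength (try 0.5, 2.0, 6.0)

    for _ in range(steps):
        diff = prey[:, None, :] - prey[None, :, :]            # pairwise displacements
        dist2 = (diff**2).sum(-1) + np.eye(n)                 # avoid 0/0 on the diagonal
        repulsion = (diff / dist2[..., None]).sum(1) / n      # short-range repulsion ~ 1/r
        attraction = -(prey - prey.mean(0))                   # long-range attraction to the centre
        away = prey - pred
        escape = away / (away**2).sum(-1, keepdims=True)      # prey flee the predator ~ 1/r
        prey += dt * (repulsion + attraction + escape)
        to_prey = prey - pred
        pred += dt * c * (to_prey / (to_prey**2).sum(-1, keepdims=True)).mean(0)

    print("mean prey distance from predator:", np.linalg.norm(prey - pred, axis=1).mean())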
Noel, Melanie; Palermo, Tonya M.; Essner, Bonnie; Zhou, Chuan; Levy, Rona L.; Langer, Shelby L.; Sherman, Amanda L.; Walker, Lynn S.
2015-01-01
The widely used Adult Responses to Children’s Symptoms measures parental responses to child symptom complaints among youth aged 7 to 18 years with recurrent/chronic pain. Given developmental differences between children and adolescents and the impact of developmental stage on parenting, the factorial validity of the parent-report version of the Adult Responses to Children’s Symptoms with a pain-specific stem was examined separately in 743 parents of 281 children (7–11 years) and 462 adolescents (12–18 years) with chronic pain or pain-related chronic illness. Factor structures of the Adult Responses to Children’s Symptoms beyond the original 3-factor model were also examined. Exploratory factor analysis with oblique rotation was conducted on a randomly chosen half of the sample of children and adolescents as well as the 2 groups combined to assess underlying factor structure. Confirmatory factor analysis was conducted on the other randomly chosen half of the sample to cross-validate factor structure revealed by exploratory factor analyses and compare it to other model variants. Poor loading and high cross loading items were removed. A 4-factor model (Protect, Minimize, Monitor, and Distract) for children and the combined (child and adolescent) sample and a 5-factor model (Protect, Minimize, Monitor, Distract, and Solicitousness) for adolescents was superior to the 3-factor model proposed in previous literature. Future research should examine the validity of derived subscales and developmental differences in their relationships with parent and child functioning. PMID:25451623
An EOQ model for weibull distribution deterioration with time-dependent cubic demand and backlogging
NASA Astrophysics Data System (ADS)
Santhi, G.; Karthikeyan, K.
2017-11-01
In this article we introduce an economic order quantity model with Weibull deterioration and a time-dependent cubic demand rate, where the holding cost is a linear function of time. Shortages are allowed in the inventory system and are partially or fully backlogged. The objective of this model is to minimize the total inventory cost by choosing the optimal order quantity and cycle length. The proposed model is illustrated by numerical examples and a sensitivity analysis is performed to study the effect of changes in parameters on the optimum solutions.
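To make the cost-minimization step concrete, here is a stripped-down numerical sketch that minimizes the average cost per unit time over the cycle length T for constant demand and a holding cost that grows linearly in time; deterioration, the cubic demand, and backlogging of the paper's model are omitted, and all parameter values are invented.

    import numpy as np
    from scipy.optimize import minimize_scalar

    K, D = 150.0, 400.0            # ordering cost per cycle, constant demand rate (units/year)
    h0, h1 = 2.0, 0.5              # holding cost per unit at time t: h(t) = h0 + h1*t

    def avg_cost(T):
        """Average cost per unit time for cycle length T; inventory level is I(t) = D*(T - t)."""
        holding = D * (h0 * T**2 / 2.0 + h1 * T**3 / 6.0)   # integral of h(t)*I(t) over one cycle
        return (K + holding) / T

    res = minimize_scalar(avg_cost, bounds=(0.01, 5.0), method="bounded")
    print("optimal cycle length:", res.x, "optimal order quantity:", D * res.x)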
Model Predictive Flight Control System with Full State Observer using H∞ Method
NASA Astrophysics Data System (ADS)
Sanwale, Jitu; Singh, Dhan Jeet
2018-03-01
This paper presents the application of the model predictive approach to design a flight control system (FCS) for the longitudinal dynamics of a fixed-wing aircraft. The longitudinal dynamics are derived for a conventional aircraft. An open-loop aircraft response analysis is carried out. Simulation studies illustrate the efficacy of the proposed model predictive controller using an H∞ state observer. The estimation criterion used in the H∞ observer design is to minimize the worst possible effects of the modelling errors and additive noise on the parameter estimation.
Exploring non-holomorphic soft terms in the framework of gauge mediated supersymmetry breaking
NASA Astrophysics Data System (ADS)
Chattopadhyay, Utpal; Das, Debottam; Mukherjee, Samadrita
2018-01-01
It is known that in the absence of a gauge singlet field, a specific class of supersymmetry (SUSY) breaking non-holomorphic (NH) terms can be soft breaking in nature so that they may be considered along with the Minimal Supersymmetric Standard Model (MSSM) and beyond. There have been studies related to these terms in minimal supergravity based models. Consideration of an F-type SUSY breaking scenario in the hidden sector with two chiral superfields however showed Planck scale suppression of such terms. In an unbiased point of view for the sources of SUSY breaking, the NH terms in a phenomenological MSSM (pMSSM) type of analysis showed a possibility of a large SUSY contribution to muon g - 2, a reasonable amount of corrections to the Higgs boson mass and a drastic reduction of the electroweak fine-tuning for a higgsino dominated χ̃₁⁰ in some regions of parameter space. We first investigate here the effects of the NH terms in a low scale SUSY breaking scenario. In our analysis with minimal gauge mediated supersymmetry breaking (mGMSB) we probe how far the results can be compared with the previous pMSSM plus NH terms based study. We particularly analyze the Higgs, stop and the electroweakino sectors focusing on a higgsino dominated χ̃₁⁰ and χ̃₁±, a feature typically different from what appears in mGMSB. The effect of a limited degree of RG evolutions and vanishing of the trilinear coupling terms at the messenger scale can be overcome by choosing a non-minimal GMSB scenario, such as one with a matter-messenger interaction.
Replica Analysis for Portfolio Optimization with Single-Factor Model
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2017-06-01
In this paper, we use replica analysis to investigate the influence of correlation among the return rates of assets on the solution of the portfolio optimization problem. We consider the behavior of an optimal solution for the case where the return rate is described with a single-factor model and compare the findings obtained from our proposed methods with correlated return rates with those obtained with independent return rates. We then analytically assess the increase in the investment risk when correlation is included. Furthermore, we also compare our approach with analytical procedures for minimizing the investment risk from operations research.
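For readers who want the non-replica counterpart of this calculation, the sketch below builds a single-factor covariance matrix Σ = σ_m² ββᵀ + diag(idiosyncratic variances) with made-up loadings and computes the minimum-risk, fully invested portfolio w ∝ Σ⁻¹1 (normalized so the weights sum to one).

    import numpy as np

    rng = np.random.default_rng(6)
    n = 50
    beta = rng.normal(1.0, 0.3, n)            # factor loadings of the assets (illustrative)
    sigma_m2 = 0.04                           # variance of the single common factor
    idio = rng.uniform(0.01, 0.05, n)         # idiosyncratic return variances

    cov = sigma_m2 * np.outer(beta, beta) + np.diag(idio)   # single-factor covariance model
    ones = np.ones(n)
    w = np.linalg.solve(cov, ones)
    w /= ones @ w                             # minimum-variance weights with sum(w) = 1
    print("minimum portfolio variance:", w @ cov @ w)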
NASA Technical Reports Server (NTRS)
Waszak, Martin R.; Fung, Jimmy
1998-01-01
This report describes the development of transfer function models for the trailing-edge and upper and lower spoiler actuators of the Benchmark Active Control Technology (BACT) wind tunnel model for application to control system analysis and design. A simple nonlinear least-squares parameter estimation approach is applied to determine transfer function parameters from frequency response data. Unconstrained quasi-Newton minimization of weighted frequency response error was employed to estimate the transfer function parameters. An analysis of the behavior of the actuators over time to assess the effects of wear and aerodynamic load by using the transfer function models is also presented. The frequency responses indicate consistent actuator behavior throughout the wind tunnel test and only slight degradation in effectiveness due to aerodynamic hinge loading. The resulting actuator models have been used in design, analysis, and simulation of controllers for the BACT to successfully suppress flutter over a wide range of conditions.
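A generic sketch of the estimation step described in the report: fit a second-order transfer function G(s) = ωn² / (s² + 2ζωn s + ωn²) to frequency-response data by nonlinear least squares. Synthetic data and SciPy's least_squares stand in for the BACT actuator measurements and the quasi-Newton minimization used by the authors; the true parameters and noise level are invented.

    import numpy as np
    from scipy.optimize import least_squares

    def G(w, wn, zeta):
        """Second-order transfer function evaluated on the imaginary axis, s = j*w."""
        s = 1j * w
        return wn**2 / (s**2 + 2 * zeta * wn * s + wn**2)

    w = np.logspace(0, 2, 80)                                           # rad/s
    rng = np.random.default_rng(7)
    meas = G(w, 40.0, 0.6) * (1 + 0.02 * rng.standard_normal(w.size))   # "measured" response

    def residuals(p):
        err = G(w, *p) - meas
        return np.concatenate([err.real, err.imag])                     # real-valued residual vector

    fit = least_squares(residuals, x0=[20.0, 0.3], bounds=([1.0, 0.05], [200.0, 2.0]))
    print("estimated natural frequency (rad/s) and damping ratio:", fit.x)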
DOE Office of Scientific and Technical Information (OSTI.GOV)
FINSTERLE, STEFAN; JUNG, YOOJIN; KOWALSKY, MICHAEL
2016-09-15
iTOUGH2 (inverse TOUGH2) provides inverse modeling capabilities for TOUGH2, a simulator for multi-dimensional, multi-phase, multi-component, non-isothermal flow and transport in fractured porous media. iTOUGH2 performs sensitivity analyses, data-worth analyses, parameter estimation, and uncertainty propagation analyses in geosciences and reservoir engineering and other application areas. iTOUGH2 supports a number of different combinations of fluids and components (equation-of-state (EOS) modules). In addition, the optimization routines implemented in iTOUGH2 can also be used for sensitivity analysis, automatic model calibration, and uncertainty quantification of any external code that uses text-based input and output files using the PEST protocol. iTOUGH2 solves the inverse problem by minimizing a non-linear objective function of the weighted differences between model output and the corresponding observations. Multiple minimization algorithms (derivative-free, gradient-based, and second-order; local and global) are available. iTOUGH2 also performs Latin Hypercube Monte Carlo simulations for uncertainty propagation analyses. A detailed residual and error analysis is provided. This upgrade includes (a) global sensitivity analysis methods, (b) dynamic memory allocation, (c) additional input features and output analyses, (d) increased forward simulation capabilities, (e) parallel execution on multicore PCs and Linux clusters, and (f) bug fixes. More details can be found at http://esd.lbl.gov/iTOUGH2.
The impact of joint responses of devices in an airport security system.
Nie, Xiaofeng; Batta, Rajan; Drury, Colin G; Lin, Li
2009-02-01
In this article, we consider a model for an airport security system in which the declaration of a threat is based on the joint responses of inspection devices. This is in contrast to the typical system in which each check station independently declares a passenger as having a threat or not having a threat. In our framework the declaration of threat/no-threat is based upon the passenger scores at the check stations he/she goes through. To do this we use concepts from classification theory in the field of multivariate statistics analysis and focus on the main objective of minimizing the expected cost of misclassification. The corresponding correct classification and misclassification probabilities can be obtained by using a simulation-based method. After computing the overall false alarm and false clear probabilities, we compare our joint response system with two other independently operated systems. A model that groups passengers in a manner that minimizes the false alarm probability while maintaining the false clear probability within specifications set by a security authority is considered. We also analyze the staffing needs at each check station for such an inspection scheme. An illustrative example is provided along with sensitivity analysis on key model parameters. A discussion is provided on some implementation issues, on the various assumptions made in the analysis, and on potential drawbacks of the approach.
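A stylized Monte Carlo sketch of the expected-cost idea: two check stations produce Gaussian scores, a joint (summed-score) rule declares a threat, and a threshold is chosen to minimize the expected misclassification cost. The score distributions, prior threat probability, costs, and threshold grid are invented for illustration and are not the paper's calibrated values.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 200_000
    threat = rng.random(n) < 0.001                       # prior probability of a threat passenger
    mu = np.where(threat[:, None], 2.0, 0.0)             # threat passengers score higher on average
    scores = mu + rng.standard_normal((n, 2))            # joint scores at the two check stations

    c_false_clear, c_false_alarm = 10_000.0, 5.0         # asymmetric misclassification costs

    def expected_cost(threshold):
        declare = scores.sum(axis=1) > threshold         # joint-response decision rule
        false_alarm = np.mean(declare & ~threat)
        false_clear = np.mean(~declare & threat)
        return c_false_alarm * false_alarm + c_false_clear * false_clear

    grid = np.linspace(0.0, 6.0, 61)
    print("threshold minimizing expected cost:", grid[np.argmin([expected_cost(t) for t in grid])])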
Conformal standard model, leptogenesis, and dark matter
NASA Astrophysics Data System (ADS)
Lewandowski, Adrian; Meissner, Krzysztof A.; Nicolai, Hermann
2018-02-01
The conformal standard model is a minimal extension of the Standard Model (SM) of particle physics based on the assumed absence of large intermediate scales between the TeV scale and the Planck scale, which incorporates only right-chiral neutrinos and a new complex scalar in addition to the usual SM degrees of freedom, but no other features such as supersymmetric partners. In this paper, we present a comprehensive quantitative analysis of this model, and show that all outstanding issues of particle physics proper can in principle be solved "in one go" within this framework. This includes in particular the stabilization of the electroweak scale, "minimal" leptogenesis and the explanation of dark matter, with a small mass and very weakly interacting Majoron as the dark matter candidate (for which we propose to use the name "minoron"). The main testable prediction of the model is a new and almost sterile scalar boson that would manifest itself as a narrow resonance in the TeV region. We give a representative range of parameter values consistent with our assumptions and with observation.
Quantum theory of the generalised uncertainty principle
NASA Astrophysics Data System (ADS)
Bruneton, Jean-Philippe; Larena, Julien
2017-04-01
We significantly extend previous works on the Hilbert space representations of the generalized uncertainty principle (GUP) in 3 + 1 dimensions of the form [X_i, P_j] = i F_{ij}, where F_{ij} = f(P²) δ_{ij} + g(P²) P_i P_j for arbitrary functions f and g, although we restrict our study to the case of commuting X's. We focus in particular on the symmetries of the theory and on the minimal length that emerges in some cases. We first show that, at the algebraic level, there exists an unambiguous mapping from the GUP, with its deformed quantum algebra and quadratic Hamiltonian, into a standard Heisenberg algebra of operators with an aquadratic Hamiltonian, provided the boost sector of the symmetries is modified accordingly. The theory can also be mapped to a completely standard Quantum Mechanics with standard symmetries, but with momentum-dependent position operators. Next, we investigate the Hilbert space representations of these algebraically equivalent models, and focus specifically on whether they exhibit a minimal length. We carry out the functional analysis of the various operators involved, and show that the appearance of a minimal length critically depends on the relationship between the generators of translations and the physical momenta. In particular, because this relationship is preserved by the algebraic mapping presented in this paper, when a minimal length is present in the standard GUP, it is also present in the corresponding aquadratic Hamiltonian formulation, despite the perfectly standard algebra of this model. In general, a minimal length requires bounded generators of translations, i.e. a specific kind of quantization of space, and this depends on the precise shape of the function f defined previously. This result provides an elegant and unambiguous classification of which universal quantum gravity corrections lead to the emergence of a minimal length.
Analysis of portfolio optimization with lot of stocks amount constraint: case study index LQ45
NASA Astrophysics Data System (ADS)
Chin, Liem; Chendra, Erwinna; Sukmana, Agus
2018-01-01
To form an optimum portfolio (in the sense of minimizing risk and/or maximizing return), the commonly used model is the mean-variance model of Markowitz. However, that model contains no constraint on the number of lots of stock, and retail investors in Indonesia cannot engage in short selling. In this study we therefore develop the existing model by adding lot-of-stock and short-selling constraints to obtain the minimum-risk portfolio with and without a target return. We analyse the stocks listed in the LQ45 index based on their stock market capitalization. To perform this analysis, we use the Solver add-in available in Microsoft Excel.
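The study uses Excel's Solver; as a rough stand-in for the same optimization, the sketch below enumerates non-negative integer lot allocations (i.e. no short selling) for three hypothetical stocks and picks the minimum-variance feasible portfolio. All prices and covariances are made up.

import itertools
import numpy as np

prices = np.array([100.0, 250.0, 400.0])      # price per lot of each stock (illustrative)
cov = np.array([[0.040, 0.006, 0.004],        # covariance of lot returns (illustrative)
                [0.006, 0.090, 0.010],
                [0.004, 0.010, 0.120]])
budget = 5000.0

best = None
# Enumerate non-negative integer lot counts (no short selling) within the budget
for lots in itertools.product(range(0, 21), repeat=3):
    x = np.array(lots, dtype=float)
    cost = float(prices @ x)
    if cost == 0.0 or cost > budget:
        continue
    w = prices * x / cost                      # portfolio weights implied by the lot counts
    risk = float(w @ cov @ w)                  # portfolio variance
    if best is None or risk < best[0]:
        best = (risk, lots)

print("minimum-variance lot allocation:", best[1], "variance:", best[0])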
Hyperopt: a Python library for model selection and hyperparameter optimization
NASA Astrophysics Data System (ADS)
Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.
2015-01-01
Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
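A minimal usage sketch of the library's core interface; the toy objective and search space below stand in for a real model-selection problem and are not from the paper.

from hyperopt import fmin, tpe, hp

def objective(params):
    # Toy objective standing in for the cross-validated loss of a model
    x, y = params["x"], params["y"]
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

space = {
    "x": hp.uniform("x", -5.0, 5.0),
    "y": hp.uniform("y", -5.0, 5.0),
}

# Tree-structured Parzen Estimator (TPE) search over the space
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100)
print(best)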
León Blanco, José M; González-R, Pedro L; Arroyo García, Carmen Martina; Cózar-Bernal, María José; Calle Suárez, Marcos; Canca Ortiz, David; Rabasco Álvarez, Antonio María; González Rodríguez, María Luisa
2018-01-01
This work was aimed at determining the feasibility of artificial neural networks (ANN), implemented with backpropagation algorithms under default settings, to generate better predictive models than multiple linear regression (MLR) analysis. The study was carried out on timolol-loaded liposomes. The causal factors used as training data for the ANN were fed into the computer program. The number of training cycles was identified in order to optimize the performance of the ANN; the optimization was performed by minimizing the error between the predicted and real response values in the training step. The results showed that training was stopped at 10 000 training cycles with 80% of the pattern values, because at this point the ANN generalizes better. The minimum validation error was achieved with 12 hidden neurons in a single layer. MLR has great prediction ability, with errors between predicted and real values lower than 1% for some of the parameters evaluated. Thus, the performance of the ANN was compared to that of the MLR using a factorial design. Optimal formulations were identified by minimizing the distance between measured and theoretical parameters and by estimating the prediction errors. The results indicate that the ANN shows much better predictive ability than the MLR model. These findings demonstrate the increased efficiency of combining ANN with design of experiments, compared to conventional MLR modeling techniques.
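The original modeling software is not named here; as a hedged illustration of the ANN-versus-MLR comparison, the scikit-learn sketch below uses synthetic data. The factor count and data are assumptions; only the single hidden layer of 12 neurons echoes the abstract.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.uniform(size=(200, 3))                             # causal factors (synthetic)
y = 2.0 * X[:, 0] + np.sin(6.0 * X[:, 1]) - X[:, 2] ** 2   # nonlinear response (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(12,), max_iter=10_000,
                   random_state=0).fit(X_tr, y_tr)          # 12 hidden neurons, single layer

print("MLR R^2:", mlr.score(X_te, y_te))
print("ANN R^2:", ann.score(X_te, y_te))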
Optimal design method to minimize users' thinking mapping load in human-machine interactions.
Huang, Yanqun; Li, Xu; Zhang, Jie
2015-01-01
The discrepancy between human cognition and machine requirements/behaviors usually results in serious mental thinking mapping loads, or even disasters, during product operation. It is important to help people avoid human-machine interaction confusions and difficulties in today's society dominated by mental work. The aim is to improve the usability of a product and minimize the user's thinking mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, based on minimizing the mental load of the thinking mapping process between users' intentions and the affordances of product interface states. By analyzing the users' thinking mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking load is first uniquely determined. Then, creative alternatives, in terms of the way humans obtain operational information, are provided as digital interface state datasets. Finally, using the cluster analysis method, an optimum solution is picked out from the alternatives by calculating the distances between the two datasets. Considering multiple factors to minimize users' thinking mapping loads, a solution nearest to the ideal value is found in a human-car interaction design case. The clustering results show the method's effectiveness in finding an optimum solution to the mental load minimization problem in human-machine interaction design.
Nonlinear field equations for aligning self-propelled rods.
Peshkov, Anton; Aranson, Igor S; Bertin, Eric; Chaté, Hugues; Ginelli, Francesco
2012-12-28
We derive a set of minimal and well-behaved nonlinear field equations describing the collective properties of self-propelled rods from a simple microscopic starting point, the Vicsek model with nematic alignment. Analysis of their linear and nonlinear dynamics shows good agreement with the original microscopic model. In particular, we derive an explicit expression for density-segregated, banded solutions, allowing us to develop a more complete analytic picture of the problem at the nonlinear level.
Effect of the internal optics on the outcome of custom-LASIK in an eye model
NASA Astrophysics Data System (ADS)
Manns, Fabrice; Ho, Arthur; Parel, Jean-Marie
2004-07-01
Purpose. The purpose of this study was to evaluate if changes in the aberration-contribution of the internal optics of the eye have a significant effect on the outcome of wavefront-guided corneal reshaping. Methods. The Navarro-Escudero eye model was simulated using optical analysis software. The eye was rendered myopic by shifting the plane of the retina. Custom-LASIK was simulated by changing the radius of curvature and asphericity of the anterior corneal surface of the eye model. The radius of curvature was adjusted to provide a retinal conjugate at infinity. Three approaches were used to determine the postoperative corneal asphericity: minimizing third-order spherical aberration, minimizing third-order coma, and maximizing the Strehl ratio. The aberration contribution of the anterior corneal surface and internal optics was calculated before and after each simulated customized correction. Results. For a 5.2mm diameter pupil, the contribution of the anterior corneal surface to third-order spherical aberration and coma (in micrometers) was 2.22 and 2.49 preop, -0.36 and 2.83 postop when spherical aberration is minimized, 5.88 and 1.10 postop when coma is minimized, and -0.63 and 2.91 postop when Strehl ratio is maximized. The contribution of the internal optics of the eye to spherical aberration and coma for the same four conditions was: 0.43 and -1.13, 0.37 and -1.10, 0.37 and -1.10 and 0.37 and -1.10, respectively. Conclusion. In the model eye, the contribution of the internal optics of the eye to the change in the ocular aberration state is negligible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friberg, Ari T.; Visser, Taco D.; Wolf, Emil
A reciprocity inequality is derived, involving the effective size of a planar, secondary, Gaussian Schell-model source and the effective angular spread of the beam that the source generates. The analysis is shown to imply that a fully spatially coherent source of that class (which generates the lowest-order Hermite-Gaussian laser mode) has certain minimal properties. (c) 2000 Optical Society of America.
ERIC Educational Resources Information Center
Wahl, Ana-Maria; Perez, Eduardo T.; Deegan, Mary Jo; Sanchez, Thomas W.; Applegate, Cheryl
2000-01-01
Offers a model for a collective strategy that can be used to deal more effectively with problems associated with race relations courses. Presents a multidimensional analysis of the constraints that create problems for race relations instructors and highlights a multidimensional approach to minimizing these problems. Includes references. (CMK)
Homentcovschi, Dorel; Murray, Bruce T; Miles, Ronald N
2013-10-15
There are a number of applications for microstructure devices consisting of a regular pattern of perforations, and many of these utilize fluid damping. For the analysis of viscous damping and for calculating the spring force in some cases, it is possible to take advantage of the regular hole pattern by assuming periodicity. Here a model is developed to determine these quantities based on the solution of the Stokes' equations for the air flow. Viscous damping is directly related to thermal-mechanical noise. As a result, the design of perforated microstructures with minimal viscous damping is of real practical importance. A method is developed to calculate the damping coefficient in microstructures with periodic perforations. The result can be used to minimize squeeze film damping. Since micromachined devices have finite dimensions, the periodic model for the perforated microstructure has to be associated with the calculation of some frame (edge) corrections. Analysis of the edge corrections has also been performed. Results from analytical formulas and numerical simulations match very well with published measured data.
Morettini, Micaela; Faelli, Emanuela; Perasso, Luisa; Fioretti, Sandro; Burattini, Laura; Ruggeri, Piero; Di Nardo, Francesco
2017-01-01
For the assessment of glucose tolerance from IVGTT data in the Zucker rat, minimal model methodology is reliable but time- and money-consuming. This study aimed to validate, for the first time in the Zucker rat, simple surrogate indexes of insulin sensitivity and secretion against the glucose-minimal-model insulin sensitivity index (SI) and against the first- (Φ1) and second-phase (Φ2) β-cell responsiveness indexes provided by the C-peptide minimal model. Validation of the surrogate insulin sensitivity index (ISI) and of two sets of coupled insulin-based indexes for insulin secretion, differing in the cut-off point between phases (FPIR3-SPIR3, t = 3 min; FPIR5-SPIR5, t = 5 min), was carried out in a population of ten Zucker fatty rats (ZFR) and ten Zucker lean rats (ZLR). Considering the whole rat population (ZLR+ZFR), ISI showed a significant strong correlation with SI (Spearman's correlation coefficient, r = 0.88; P<0.001). Both FPIR3 and FPIR5 showed a significant (P<0.001) strong correlation with Φ1 (r = 0.76 and r = 0.75, respectively). Both SPIR3 and SPIR5 showed a significant (P<0.001) strong correlation with Φ2 (r = 0.85 and r = 0.83, respectively). ISI is able to detect (P<0.001) the well-recognized reduction in insulin sensitivity in ZFRs compared to ZLRs. The insulin-based indexes of insulin secretion are able to detect in ZFRs (P<0.001) the compensatory increase of first- and second-phase secretion associated with the insulin-resistant state. The ability of the surrogate indexes to describe glucose tolerance in the ZFRs was confirmed by the Disposition Index analysis. The model-based validation performed in the present study supports the use of low-cost, insulin-based indexes for the assessment of glucose tolerance in the Zucker rat, a reliable animal model of the human metabolic syndrome.
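The correlation step reported above can be reproduced in a few lines; the paired index values below are hypothetical placeholders, not the study's data.

import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired values of the surrogate index (ISI) and the minimal-model index (SI)
isi = np.array([4.1, 3.8, 5.0, 1.2, 0.9, 1.5, 4.4, 0.8, 3.6, 1.1])
si  = np.array([6.0, 5.1, 7.2, 1.0, 0.7, 1.8, 6.5, 0.6, 4.9, 1.2])

rho, p_value = spearmanr(isi, si)
print(f"Spearman r = {rho:.2f}, P = {p_value:.3g}")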
Differential geometry based solvation model II: Lagrangian formulation
Chen, Zhan; Baker, Nathan A.; Wei, G. W.
2010-01-01
Solvation is an elementary process in nature and is of paramount importance to more sophisticated chemical, biological and biomolecular processes. The understanding of solvation is an essential prerequisite for the quantitative description and analysis of biomolecular systems. This work presents a Lagrangian formulation of our differential geometry based solvation model. The Lagrangian representation of biomolecular surfaces has a few utilities/advantages. First, it provides an essential basis for biomolecular visualization, surface electrostatic potential map and visual perception of biomolecules. Additionally, it is consistent with the conventional setting of implicit solvent theories and thus, many existing theoretical algorithms and computational software packages can be directly employed. Finally, the Lagrangian representation does not need to resort to artificially enlarged van der Waals radii as often required by the Eulerian representation in solvation analysis. The main goal of the present work is to analyze the connection, similarity and difference between the Eulerian and Lagrangian formalisms of the solvation model. Such analysis is important to the understanding of the differential geometry based solvation model. The present model extends the scaled particle theory (SPT) of nonpolar solvation model with a solvent-solute interaction potential. The nonpolar solvation model is completed with a Poisson-Boltzmann (PB) theory based polar solvation model. The differential geometry theory of surfaces is employed to provide a natural description of solvent-solute interfaces. The minimization of the total free energy functional, which encompasses the polar and nonpolar contributions, leads to coupled potential driven geometric flow and Poisson-Boltzmann equations. Due to the development of singularities and nonsmooth manifolds in the Lagrangian representation, the resulting potential-driven geometric flow equation is embedded into the Eulerian representation for the purpose of computation, thanks to the equivalence of the Laplace-Beltrami operator in the two representations. The coupled partial differential equations (PDEs) are solved with an iterative procedure to reach a steady state, which delivers desired solvent-solute interface and electrostatic potential for problems of interest. These quantities are utilized to evaluate the solvation free energies and protein-protein binding affinities. A number of computational methods and algorithms are described for the interconversion of Lagrangian and Eulerian representations, and for the solution of the coupled PDE system. The proposed approaches have been extensively validated. We also verify that the mean curvature flow indeed gives rise to the minimal molecular surface (MMS) and the proposed variational procedure indeed offers minimal total free energy. Solvation analysis and applications are considered for a set of 17 small compounds and a set of 23 proteins. The salt effect on protein-protein binding affinity is investigated with two protein complexes by using the present model. Numerical results are compared to the experimental measurements and to those obtained by using other theoretical methods in the literature. PMID:21279359
A novel minimal invasive mouse model of extracorporeal circulation.
Luo, Shuhua; Tang, Menglin; Du, Lei; Gong, Lina; Xu, Jin; Chen, Youwen; Wang, Yabo; Lin, Ke; An, Qi
2015-01-01
Extracorporeal circulation (ECC) is necessary for conventional cardiac surgery and life support, but it often triggers systemic inflammation that can significantly damage tissue. Studies of ECC have been limited to large animals because of the complexity of the surgical procedures involved, which has hampered detailed understanding of ECC-induced injury. Here we describe a minimally invasive mouse model of ECC that may allow more extensive mechanistic studies. The right carotid artery and external jugular vein of anesthetized adult male C57BL/6 mice were cannulated to allow blood flow through a 1/32-inch external tube. All animals (n = 20) survived 30 min ECC and subsequent 60 min observation. Blood analysis after ECC showed significant increases in levels of tumor necrosis factor α, interleukin-6, and neutrophil elastase in plasma, lung, and renal tissues, as well as increases in plasma creatinine and cystatin C and decreases in the oxygenation index. Histopathology showed that ECC induced the expected lung inflammation, which included alveolar congestion, hemorrhage, neutrophil infiltration, and alveolar wall thickening; in renal tissue, ECC induced intracytoplasmic vacuolization, acute tubular necrosis, and epithelial swelling. Our results suggest that this novel, minimally invasive mouse model can recapitulate many of the clinical features of ECC-induced systemic inflammatory response and organ injury.
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.
Basin analysis of South Mozambique graben
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iliffe, J.; Lerche, I.; De Buyl, M.
1987-05-01
Basin analysis of the South Mozambique graben between latitudes 25° and 26° and longitudes 34° and 35° demonstrates how modeling techniques may help to assess the oil potential of a speculative basin with only minimal seismic data. Two-dimensional restoration of the seismic profiles, using a backstripping and decompaction program on pseudowells linked with structural reconstruction, assesses the rift's two-phase extensional history. Since no well or thermal indicator data exist within the basin, the thermal history had to be derived from extensional models. The best fit of observed subsidence curves and those predicted by the models results in values of lithospheric extension (gamma). The disagreement in observed and theoretical basement subsidence curves was minimized by taking a range of gamma for each model for each well. These extension factors were then used in each model's equations for paleoheat flux to derive the heat-flow histories. (It is noted that a systematic basinwide variance of gamma occurs.) The heat-flux histories were then used with a one-dimensional fluid flow/compaction model to calculate TTI values and oil windows. A Tissot generation model was applied to each formation in every well for kerogen Types I, II, and III. The results were contoured across the basin to assess possible oil- and gas-prone formations. The extensional, burial, and thermal histories are integrated into an overall basin development picture and provide an oil and gas provenance model. Thus they estimate the basinwide hydrocarbon potential and also gain insight into the additional data necessary to significantly decrease the uncertainty.
MetaDP: a comprehensive web server for disease prediction of 16S rRNA metagenomic datasets.
Xu, Xilin; Wu, Aiping; Zhang, Xinlei; Su, Mingming; Jiang, Taijiao; Yuan, Zhe-Ming
2016-01-01
High-throughput sequencing-based metagenomics has garnered considerable interest in recent years. Numerous methods and tools have been developed for the analysis of metagenomic data. However, it is still a daunting task to install a large number of tools and complete a complicated analysis, especially for researchers with minimal bioinformatics backgrounds. To address this problem, we constructed an automated software tool named MetaDP for 16S rRNA sequencing data analysis, including data quality control, operational taxonomic unit clustering, diversity analysis, and disease risk prediction modeling. Furthermore, a support vector machine-based prediction model for irritable bowel syndrome (IBS) was built by applying MetaDP to microbial 16S sequencing data from 108 children. The success of the IBS prediction model suggests that the platform may also be applied to other diseases related to gut microbes, such as obesity, metabolic syndrome, or intestinal cancer, among others (http://metadp.cn:7001/).
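A hedged sketch of the disease-prediction component (not MetaDP itself): a support vector machine with cross-validation on a synthetic OTU table of the same sample size as the study; features and labels are randomly generated stand-ins.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.poisson(5.0, size=(108, 50)).astype(float)   # synthetic OTU count table (108 subjects)
y = rng.integers(0, 2, size=108)                     # synthetic IBS / healthy labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())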
State Event Models for the Formal Analysis of Human-Machine Interactions
NASA Technical Reports Server (NTRS)
Combefis, Sebastien; Giannakopoulou, Dimitra; Pecheur, Charles
2014-01-01
The work described in this paper was motivated by our experience with applying a framework for formal analysis of human-machine interactions (HMI) to a realistic model of an autopilot. The framework is built around a formally defined conformance relation called "full-control" between an actual system and the mental model according to which the system is operated. Systems are well-designed if they can be described by relatively simple full-control mental models for their human operators. For this reason, our framework supports automated generation of minimal full-control mental models for HMI systems, where both the system and the mental models are described as labelled transition systems (LTS). The autopilot that we analysed has been developed in the NASA Ames HMI prototyping tool ADEPT. In this paper, we describe how we extended the models that our HMI analysis framework handles to allow adequate representation of ADEPT models. We then provide a property-preserving reduction from these extended models to LTSs, to enable application of our LTS-based formal analysis algorithms. Finally, we briefly discuss the analyses we were able to perform on the autopilot model with our extended framework.
Val-Cid, Cristina; Biarnés, Xevi; Faijes, Magda; Planas, Antoni
2015-01-01
Hexosaminidases are involved in important biological processes catalyzing the hydrolysis of N-acetyl-hexosaminyl residues in glycosaminoglycans and glycoconjugates. The GH20 enzymes present diverse domain organizations for which we propose two minimal model architectures: Model A containing at least a non-catalytic GH20b domain and the catalytic one (GH20) always accompanied with an extra α-helix (GH20b-GH20-α), and Model B with only the catalytic GH20 domain. The large Bifidobacterium bifidum lacto-N-biosidase was used as a model protein to evaluate the minimal functional unit due to its interest and structural complexity. By expressing different truncated forms of this enzyme, we show that Model A architectures cannot be reduced to Model B. In particular, there are two structural requirements general to GH20 enzymes with Model A architecture. First, the non-catalytic domain GH20b at the N-terminus of the catalytic GH20 domain is required for expression and seems to stabilize it. Second, the substrate-binding cavity at the GH20 domain always involves a remote element provided by a long loop from the catalytic domain itself or, when this loop is short, by an element from another domain of the multidomain structure or from the dimeric partner. Particularly, the lacto-N-biosidase requires GH20b and the lectin-like domain at the N- and C-termini of the catalytic GH20 domain to be fully soluble and functional. The lectin domain provides this remote element to the active site. We demonstrate restoration of activity of the inactive GH20b-GH20-α construct (model A architecture) by a complementation assay with the lectin-like domain. The engineering of minimal functional units of multidomain GH20 enzymes must consider these structural requirements.
Optimization of a Thermodynamic Model Using a Dakota Toolbox Interface
NASA Astrophysics Data System (ADS)
Cyrus, J.; Jafarov, E. E.; Schaefer, K. M.; Wang, K.; Clow, G. D.; Piper, M.; Overeem, I.
2016-12-01
Scientific modeling of the Earth's physical processes is an important driver of modern science. The behavior of these scientific models is governed by a set of input parameters. It is crucial to choose accurate input parameters that also preserve the corresponding physics being simulated in the model. In order to effectively simulate real-world processes, the model's output data must be close to the observed measurements. To achieve this optimal simulation, input parameters are tuned until we have minimized the objective function, which is the error between the simulation model outputs and the observed measurements. We developed an auxiliary package, which serves as a Python interface between the user and DAKOTA. The package makes it easy for the user to conduct parameter space explorations, parameter optimizations, and sensitivity analyses while tracking and storing results in a database. The ability to perform these analyses via a Python library also allows users to combine analysis techniques, for example finding an approximate equilibrium with optimization and then immediately exploring the space around it. We used the interface to calibrate input parameters for a heat flow model commonly used in permafrost science. We performed optimization on the first three layers of the permafrost model, each with two thermal conductivity coefficients as input parameters. Results of the parameter space explorations indicate that the objective function does not always have a unique minimum. We found that gradient-based optimization works best for objective functions with one minimum. Otherwise, we employ more advanced Dakota methods, such as genetic optimization and mesh-based convergence, in order to find the optimal input parameters. We were able to recover 6 initially unknown thermal conductivity parameters to within 2% of their known values. Our initial tests indicate that the developed interface for the Dakota toolbox could be used to perform analysis and optimization on a 'black box' scientific model more efficiently than using Dakota alone.
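A generic sketch of the calibration step: the study drives DAKOTA through its Python interface, whereas here a SciPy optimizer and a stand-in "black box" forward model with two conductivity-like parameters are used purely for illustration.

import numpy as np
from scipy.optimize import minimize

def black_box_model(k):
    # Stand-in for a forward heat-flow simulation: predicted temperatures at three depths
    depths = np.array([0.5, 1.0, 2.0])
    return -2.0 + k[0] * depths + k[1] * depths ** 2

k_true = np.array([1.3, -0.2])
observed = black_box_model(k_true) + 0.01 * np.random.default_rng(0).standard_normal(3)

def objective(k):
    # Sum of squared differences between simulated and observed temperatures
    return np.sum((black_box_model(k) - observed) ** 2)

# Derivative-free minimization, useful when the model is a true black box
result = minimize(objective, x0=[1.0, 0.0], method="Nelder-Mead")
print("recovered parameters:", result.x)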
NASA Astrophysics Data System (ADS)
Salvucci, G.; Rigden, A. J.; Gentine, P.; Lintner, B. R.
2013-12-01
A new method was recently proposed for estimating evapotranspiration (ET) from weather station data without requiring measurements of surface limiting factors (e.g. soil moisture, leaf area, canopy conductance) [Salvucci and Gentine, 2013, PNAS, 110(16): 6287-6291]. Required measurements include diurnal air temperature, specific humidity, wind speed, net shortwave radiation, and either measured or estimated incoming longwave radiation and ground heat flux. The approach is built around the idea that the key, rate-limiting, parameter of typical ET models, the land-surface resistance to water vapor transport, can be estimated from an emergent relationship between the diurnal cycle of the relative humidity profile and ET. The emergent relation is that the vertical variance of the relative humidity profile is less than what would occur for increased or decreased evaporation rates, suggesting that land-atmosphere feedback processes minimize this variance. This relation was found to hold over a wide range of climate conditions (arid to humid) and limiting factors (soil moisture, leaf area, energy) at a set of Ameriflux field sites. While the field tests in Salvucci and Gentine (2013) supported the minimum variance hypothesis, the analysis did not reveal the mechanisms responsible for the behavior. Instead the paper suggested, heuristically, that the results were due to an equilibration of the relative humidity between the land surface and the surface layer of the boundary layer. Here we apply this method using surface meteorological fields simulated by a global climate model (GCM), and compare the predicted ET to that simulated by the climate model. Similar to the field tests, the GCM simulated ET is in agreement with that predicted by minimizing the profile relative humidity variance. A reasonable interpretation of these results is that the feedbacks responsible for the minimization of the profile relative humidity variance in nature are represented in the climate model. The climate model components, in particular the land surface model and boundary layer representation, can thus be analyzed in controlled numerical experiments to discern the specific processes leading to the observed behavior. Results of this analysis will be presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Christopher
In this talk, I review recent work on using a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), called the Singlet-extended Minimal Supersymmetric Standard Model (SMSSM), to raise the mass of the Standard Model-like Higgs boson without requiring extremely heavy top squarks or large stop mixing. In so doing, this model solves the little hierarchy problem of the minimal model (MSSM), at the expense of leaving the {mu}-problem of the MSSM unresolved. This talk is based on work published in Refs. [1, 2, 3].
Flattening the inflaton potential beyond minimal gravity
NASA Astrophysics Data System (ADS)
Lee, Hyun Min
2018-01-01
We review the status of the Starobinsky-like models for inflation beyond minimal gravity and discuss the unitarity problem due to the presence of a large non-minimal gravity coupling. We show that the induced gravity models allow for a self-consistent description of inflation and discuss the implications of the inflaton couplings to the Higgs field in the Standard Model.
Make or buy decision model with multi-stage manufacturing process and supplier imperfect quality
NASA Astrophysics Data System (ADS)
Pratama, Mega Aria; Rosyidi, Cucuk Nur
2017-11-01
This research develops a make-or-buy decision model that considers supplier imperfect quality. The model can be used to help companies make the right decision on whether to make or buy a component with the best quality and the least cost in a multistage manufacturing process. Imperfect quality is one of the cost components to be minimized in this model. A component of imperfect quality is not necessarily defective; it can still be reworked and used for assembly. This research also provides a numerical example and a sensitivity analysis to show how the model works. We use simulation, with the help of Crystal Ball software, to solve the numerical problem. The sensitivity analysis shows that the percentage of imperfect components generally does not affect the model significantly, and the model is not sensitive to changes in these parameters. This is because the imperfect-quality costs are small compared to the overall total cost components.
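A minimal numeric sketch of a make-or-buy comparison with imperfect-quality (rework) costs; every figure below is invented for illustration and is not from the model in the paper.

# Illustrative make-or-buy comparison for one component (all figures are made up):
demand = 10_000                 # units per planning period
make_unit_cost = 4.20           # in-house production cost per unit
make_imperfect_rate = 0.02      # fraction of in-house units needing rework
buy_unit_cost = 3.90            # supplier price per unit
buy_imperfect_rate = 0.06       # fraction of supplied units of imperfect quality
rework_cost = 1.50              # cost to rework one imperfect (not scrapped) unit

make_total = demand * (make_unit_cost + make_imperfect_rate * rework_cost)
buy_total = demand * (buy_unit_cost + buy_imperfect_rate * rework_cost)

decision = "make" if make_total < buy_total else "buy"
print(f"make: {make_total:.0f}  buy: {buy_total:.0f}  ->  {decision}")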
Augmented halal food traceability system: analysis and design using UML
NASA Astrophysics Data System (ADS)
Usman, Y. V.; Fauzi, A. M.; Irawadi, T. T.; Djatna, T.
2018-04-01
Augmented halal food traceability expands the range of halal traceability in the food supply chain, which is currently available only for tracing from the source of raw material to the industrial warehouse, i.e. inbound logistics. The halal traceability system must be developed in an integrated form that includes both inbound and outbound logistics. The objective of this study was to develop a reliable initial model of an integrated traceability system for the halal food supply chain. The method was based on unified modeling language (UML) diagrams such as use case, sequence, and business process diagrams. A goal programming model was formulated considering two objective functions: (1) minimization of the risk of halal traceability failures that may occur during outbound logistics activities and (2) maximization of the quality of halal product information. The results indicate that the supply of material is the most important point to consider in minimizing the risk of failure of the halal food traceability system, whereas no risk was observed in manufacturing and distribution.
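A hedged goal-programming sketch in the spirit of the two objectives above, written with the PuLP library; the decision variable, targets, weights, and linear relations are all invented for illustration.

from pulp import LpProblem, LpVariable, LpMinimize, value

prob = LpProblem("halal_traceability_goals", LpMinimize)
x = LpVariable("coverage", lowBound=0.0, upBound=1.0)   # fraction of outbound shipments traced
d_risk = LpVariable("risk_overrun", lowBound=0.0)       # deviation above the risk target
d_info = LpVariable("info_shortfall", lowBound=0.0)     # deviation below the information target

# Objective: weighted sum of goal deviations (weights are assumptions)
prob += 2.0 * d_risk + 1.0 * d_info

risk = 10.0 * (1.0 - x)   # assumed: failure risk decreases with traceability coverage
info = 100.0 * x          # assumed: information quality increases with coverage

prob += risk - d_risk <= 2.0     # goal 1: risk of traceability failure at most 2
prob += info + d_info >= 90.0    # goal 2: information quality at least 90

prob.solve()
print("coverage =", value(x), "risk overrun =", value(d_risk), "info shortfall =", value(d_info))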
A Nonparametric Statistical Approach to the Validation of Computer Simulation Models
1985-11-01
Ballistic Research Laboratory, the Experimental Design and Analysis Branch of the Systems Engineering and Concepts Analysis Division was funded to... 2. Winter, E. M., Wisemiller, D. P., and Ujihara, J. K., "Verification and Validation of Engineering Simulations with Minimal Data," Proceedings of the 1976 Summer... used by numerous authors. Law [6] has augmented their approach with specific suggestions for each of the three stages: 1. develop high face-validity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Dmitriy; Weber, Gunther H.
2014-03-31
Topological techniques provide robust tools for data analysis. They are used, for example, for feature extraction, for data de-noising, and for comparison of data sets. This chapter concerns contour trees, a topological descriptor that records the connectivity of the isosurfaces of scalar functions. These trees are fundamental to analysis and visualization of physical phenomena modeled by real-valued measurements. We study the parallel analysis of contour trees. After describing a particular representation of a contour tree, called local-global representation, we illustrate how different problems that rely on contour trees can be solved in parallel with minimal communication.
Automated thermal mapping techniques using chromatic image analysis
NASA Technical Reports Server (NTRS)
Buck, Gregory M.
1989-01-01
Thermal imaging techniques are introduced using a chromatic image analysis system and temperature sensitive coatings. These techniques are used for thermal mapping and surface heat transfer measurements on aerothermodynamic test models in hypersonic wind tunnels. Measurements are made on complex vehicle configurations in a timely manner and at minimal expense. The image analysis system uses separate wavelength filtered images to analyze surface spectral intensity data. The system was initially developed for quantitative surface temperature mapping using two-color thermographic phosphors but was found useful in interpreting phase change paint and liquid crystal data as well.
An emulator for minimizing finite element analysis implementation resources
NASA Technical Reports Server (NTRS)
Melosh, R. J.; Utku, S.; Salama, M.; Islam, M.
1982-01-01
A finite element analysis emulator providing a basis for efficiently establishing an optimum computer implementation strategy when many calculations are involved is described. The SCOPE emulator determines computer resources required as a function of the structural model, structural load-deflection equation characteristics, the storage allocation plan, and computer hardware capabilities. Thereby, it provides data for trading analysis implementation options to arrive at a best strategy. The models contained in SCOPE lead to micro-operation computer counts of each finite element operation as well as overall computer resource cost estimates. Application of SCOPE to the Memphis-Arkansas bridge analysis provides measures of the accuracy of resource assessments. Data indicate that predictions are within 17.3 percent for calculation times and within 3.2 percent for peripheral storage resources for the ELAS code.
Study of constrained minimal supersymmetry
NASA Astrophysics Data System (ADS)
Kane, G. L.; Kolda, Chris; Roszkowski, Leszek; Wells, James D.
1994-06-01
Taking seriously the phenomenological indications for supersymmetry we have made a detailed study of unified minimal SUSY, including many effects at the few percent level in a consistent fashion. We report here a general analysis of what can be studied without choosing a particular gauge group at the unification scale. Firstly, we find that the encouraging SUSY unification results of recent years do survive the challenge of a more complete and accurate analysis. Taking into account effects at the 5-10 % level leads to several improvements of previous results and allows us to sharpen our predictions for SUSY in the light of unification. We perform a thorough study of the parameter space and look for patterns to indicate SUSY predictions, so that they do not depend on arbitrary choices of some parameters or untested assumptions. Our results can be viewed as a fully constrained minimal SUSY standard model. The resulting model forms a well-defined basis for comparing the physics potential of different facilities. Very little of the acceptable parameter space has been excluded by CERN LEP or Fermilab so far, but a significant fraction can be covered when these accelerators are upgraded. A number of initial applications to the understanding of the values of mh and mt, the SUSY spectrum, detectability of SUSY at LEP II or Fermilab, B(b-->sγ), Γ(Z-->bb¯), dark matter, etc., are included in a separate section that might be of more interest to some readers than the technical aspects of model building. We formulate an approach to extracting SUSY parameters from data when superpartners are detected. For small tanβ or large mt both m1/2 and m0 are entirely bounded from above at ~1 TeV without having to use a fine-tuning constraint.
2015-11-01
... strategies to predict and prevent metastasis. Subject terms: triple-negative breast cancer, metastasis, p53, BTG2, PDX models. ... membrane and into the circulation, survival in the circulation, extravasation into distant organs, tumor dormancy, and finally tumor growth in the ... sequencing analysis are novel targets for metastasis prevention or are more effective at destroying metastatic cells while minimizing the risk of ...
NASA Astrophysics Data System (ADS)
Burger, Martin; Dirks, Hendrik; Frerking, Lena; Hauptmann, Andreas; Helin, Tapio; Siltanen, Samuli
2017-12-01
In this paper we study the reconstruction of moving object densities from undersampled dynamic x-ray tomography in two dimensions. A particular motivation of this study is to use realistic measurement protocols for practical applications, i.e. we do not assume to have a full Radon transform in each time step, but only projections in few angular directions. This restriction enforces a space-time reconstruction, which we perform by incorporating physical motion models and regularization of motion vectors in a variational framework. The methodology of optical flow, which is one of the most common methods to estimate motion between two images, is utilized to formulate a joint variational model for reconstruction and motion estimation. We provide a basic mathematical analysis of the forward model and the variational model for the image reconstruction. Moreover, we discuss the efficient numerical minimization based on alternating minimizations between images and motion vectors. A variety of results are presented for simulated and real measurement data with different sampling strategies. A key observation is that random sampling combined with our model allows reconstructions of quality comparable to a single static reconstruction from a similar amount of measurements.
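The alternating minimization between the two blocks of variables can be illustrated generically; the toy coupled functional below is not the paper's tomography model, only a sketch of the scheme.

import numpy as np
from scipy.optimize import minimize

# Toy joint functional F(u, v) coupling an "image" block u and a "motion" block v
A = np.array([[3.0, 1.0], [1.0, 2.0]])

def F(u, v):
    return u @ A @ u + np.sum((v - 0.5 * u) ** 2) + 0.1 * np.sum(v ** 2)

u = np.zeros(2)
v = np.zeros(2)
for it in range(20):
    # Step 1: minimize over u with v fixed
    u = minimize(lambda uu: F(uu, v), u).x
    # Step 2: minimize over v with u fixed
    v = minimize(lambda vv: F(u, vv), v).x

print("u =", u, "v =", v, "F =", F(u, v))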
Millicharge or decay: a critical take on Minimal Dark Matter
Nobile, Eugenio Del; Nardecchia, Marco; Panci, Paolo
2016-04-26
Minimal Dark Matter (MDM) is a theoretical framework highly appreciated for its minimality and yet its predictivity. Of the two only viable candidates singled out in the original analysis, the scalar eptaplet has been found to decay too quickly to be around today, while the fermionic quintuplet is now being probed by indirect Dark Matter (DM) searches. It is therefore timely to critically review the MDM paradigm, possibly pointing out generalizations of this framework. We propose and explore two distinct directions. One is to abandon the assumption of DM electric neutrality in favor of absolutely stable, millicharged DM candidates which are part of SU(2)L multiplets with integer isospin. Another possibility is to lower the cutoff of the model, which was originally fixed at the Planck scale, to allow for DM decays. We find new viable MDM candidates and study their phenomenology in detail.
Structural tailoring of advanced turboprops
NASA Technical Reports Server (NTRS)
Brown, K. W.; Hopkins, Dale A.
1988-01-01
The Structural Tailoring of Advanced Turboprops (STAT) computer program was developed to perform numerical optimization on highly swept propfan blades. The optimization procedure seeks to minimize an objective function defined as either (1) the direct operating cost of a full-scale blade or (2) the aeroelastic differences between a blade and its scaled model, by tuning internal and external geometry variables that must satisfy realistic blade design constraints. The STAT analysis system includes an aerodynamic efficiency evaluation, a finite element stress and vibration analysis, an acoustic analysis, a flutter analysis, and a once-per-revolution forced response life prediction capability. STAT includes all relevant propfan design constraints.
Global/local methods for probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.
1993-01-01
A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations with a local more refined model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate a significant computer savings with minimal loss in accuracy.
Short cell-penetrating peptides: a model of interactions with gene promoter sites.
Khavinson, V Kh; Tarnovskaya, S I; Linkova, N S; Pronyaeva, V E; Shataeva, L K; Yakutseni, P P
2013-01-01
Analysis of the main parameters of molecular mechanics (number of hydrogen bonds, hydrophobic and electrostatic interactions, DNA-peptide complex minimization energy) provided the data to validate the previously proposed qualitative models of peptide-DNA interactions and to evaluate their quantitative characteristics. Based on these estimations, a three-dimensional model was constructed of Lys-Glu and Ala-Glu-Asp-Gly peptide interactions with DNA sites (GCAG and ATTTC) located in the promoter zones of genes encoding CD5, IL-2, MMP2, and Tram1 signal molecules.
Posttest RELAP4 analysis of LOFT experiment L1-3A
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, J.R.; Holmstrom, H.L.O.
This report presents selected results of posttest RELAP4 modeling of LOFT loss-of-coolant experiment L1-3A, a double-ended isothermal cold leg break with lower plenum emergency core coolant injection. Comparisons are presented between the pretest prediction, the posttest analysis, and the experimental data. It is concluded that pressurizer modeling is important for accurately predicting system behavior during the initial portion of saturated blowdown. Using measured initial conditions rather than nominal specified initial conditions did not influence the system model results significantly. Using finer nodalization in the reactor vessel improved the prediction of the system pressure history by minimizing steam condensation effects. Unequal steam condensation between the downcomer and core volumes appears to cause the manometer oscillations observed in both the pretest and posttest RELAP4 analyses.
Li, Zheng; Qi, Rong; Wang, Bo; Zou, Zhe; Wei, Guohong; Yang, Min
2013-01-01
A full-scale oxidation ditch process for treating sewage was simulated with the ASM2d model and optimized for minimal cost with acceptable performance in terms of ammonium and phosphorus removal. A unified index was introduced by integrating operational costs (aeration energy and sludge production) with effluent violations for performance evaluation. Scenario analysis showed that, in comparison with the baseline (all of the 9 aerators activated), the strategy of activating 5 aerators could save aeration energy significantly with an ammonium violation below 10%. Sludge discharge scenario analysis showed that a sludge discharge flow of 250-300 m3/day (solid retention time (SRT), 13-15 days) was appropriate for the enhancement of phosphorus removal without excessive sludge production. The proposed optimal control strategy was: activating 5 rotating disks operated with a mode of "111100100" ("1" represents activation and "0" represents inactivation) for aeration and sludge discharge flow of 200 m3/day (SRT, 19 days). Compared with the baseline, this strategy could achieve ammonium violation below 10% and TP violation below 30% with substantial reduction of aeration energy cost (46%) and minimal increment of sludge production (< 2%). This study provides a useful approach for the optimization of process operation and control.
Informed spectral analysis: audio signal parameter estimation using side information
NASA Astrophysics Data System (ADS)
Fourer, Dominique; Marchand, Sylvain
2013-12-01
Parametric models are of great interest for representing and manipulating sounds. However, the quality of the resulting signals depends on the precision of the parameters. When the signals are available, these parameters can be estimated, but the presence of noise decreases the resulting precision of the estimation. Furthermore, the Cramér-Rao bound shows the minimal error reachable with the best estimator, which can be insufficient for demanding applications. These limitations can be overcome by using the coding approach which consists in directly transmitting the parameters with the best precision using the minimal bitrate. However, this approach does not take advantage of the information provided by the estimation from the signal and may require a larger bitrate and a loss of compatibility with existing file formats. The purpose of this article is to propose a compromised approach, called the 'informed approach,' which combines analysis with (coded) side information in order to increase the precision of parameter estimation using a lower bitrate than pure coding approaches, the audio signal being known. Thus, the analysis problem is presented in a coder/decoder configuration where the side information is computed and inaudibly embedded into the mixture signal at the coder. At the decoder, the extra information is extracted and is used to assist the analysis process. This study proposes applying this approach to audio spectral analysis using sinusoidal modeling which is a well-known model with practical applications and where theoretical bounds have been calculated. This work aims at uncovering new approaches for audio quality-based applications. It provides a solution for challenging problems like active listening of music, source separation, and realistic sound transformations.
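A bare-bones sketch of the sinusoidal-model estimation step referred to above; the informed coder/decoder machinery is not reproduced, and the signal, noise level, and initial guess are synthetic assumptions.

import numpy as np
from scipy.optimize import curve_fit

fs = 8000.0
t = np.arange(0, 0.05, 1.0 / fs)
# Synthetic observed partial: one sinusoid in noise
clean = 0.8 * np.sin(2 * np.pi * 440.0 * t + 0.3)
observed = clean + 0.1 * np.random.default_rng(0).standard_normal(t.size)

def sinusoid(t, a, f, phi):
    return a * np.sin(2 * np.pi * f * t + phi)

# Least-squares estimation of amplitude, frequency and phase from the noisy signal
params, _ = curve_fit(sinusoid, t, observed, p0=[1.0, 430.0, 0.0])
print("estimated (a, f, phi):", params)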
Shanks, Ryan A; Robertson, Chuck L; Haygood, Christian S; Herdliksa, Anna M; Herdliska, Heather R; Lloyd, Steven A
2017-01-01
Introductory biology courses provide an important opportunity to prepare students for future courses, yet existing cookbook labs, although important in their own way, fail to provide many of the advantages of semester-long research experiences. Engaging, authentic research experiences aid biology students in meeting many learning goals. Therefore, overlaying a research experience onto the existing lab structure allows faculty to overcome barriers involving curricular change. Here we propose a working model for this overlay design in an introductory biology course and detail a means to conduct this lab with minimal increases in student and faculty workloads. Furthermore, we conducted exploratory factor analysis of the Experimental Design Ability Test (EDAT) and uncovered two latent factors which provide valid means to assess this overlay model's ability to increase advanced experimental design abilities. In a pre-test/post-test design, we demonstrate significant increases in both basic and advanced experimental design abilities in an experimental and comparison group. We measured significantly higher gains in advanced experimental design understanding in students in the experimental group. We believe this overlay model and EDAT factor analysis contribute a novel means to conduct and assess the effectiveness of authentic research experiences in an introductory course without major changes to the course curriculum and with minimal increases in faculty and student workloads.
A Bayesian analysis of inflationary primordial spectrum models using Planck data
NASA Astrophysics Data System (ADS)
Santos da Costa, Simony; Benetti, Micol; Alcaniz, Jailson
2018-03-01
The current available Cosmic Microwave Background (CMB) data show an anomalously low value of the CMB temperature fluctuations at large angular scales (l < 40). This lack of power is not explained by the minimal ΛCDM model, and one of the possible mechanisms explored in the literature to address this problem is the presence of features in the primordial power spectrum (PPS) motivated by the early universe physics. In this paper, we analyse a set of cutoff inflationary PPS models using a Bayesian model comparison approach in light of the latest CMB data from the Planck Collaboration. Our results show that the standard power-law parameterisation is preferred over all models considered in the analysis, which motivates the search for alternative explanations for the observed lack of power in the CMB anisotropy spectrum.
Dynamic analysis and optimal control for a model of hepatitis C with treatment
NASA Astrophysics Data System (ADS)
Zhang, Suxia; Xu, Xiaxia
2017-05-01
A model for hepatitis C is formulated to study the effects of treatment and public concern on HCV transmission dynamics. The stability of equilibria and the persistence of the model are analyzed, and an optimal control problem is solved to limit the spread of HCV while minimizing the number of infected individuals and the control cost. The dynamical analysis reveals that the disease-free equilibrium of the model is asymptotically stable if the basic reproductive number R0 is less than unity. On the other hand, if R0 > 1, the disease is uniformly persistent. Numerical simulations are conducted to investigate the influence of different vital parameters on R0. For the corresponding optimality system, the optimal solution is characterized via the Pontryagin Maximum Principle, and model-predicted outcomes with and without control are compared.
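A minimal sketch of the kind of treatment-augmented compartmental transmission model such an analysis works from is shown below. The compartments, rate laws, and parameter values are illustrative assumptions, not the specific equations of the paper, and the R0 computed here is the reproductive number of this toy only.

```python
# Hypothetical treatment-augmented compartmental model (illustrative values only).
from scipy.integrate import solve_ivp

beta, gamma, tau = 0.4, 0.1, 0.15     # transmission, recovery, and treatment rates (assumed)

def hcv(t, y):
    S, I, T = y                        # susceptible, infected, under treatment
    N = S + I + T
    dS = -beta * S * I / N
    dI = beta * S * I / N - (gamma + tau) * I
    dT = tau * I - gamma * T
    return [dS, dI, dT]

R0 = beta / (gamma + tau)              # disease-free equilibrium is stable when R0 < 1
sol = solve_ivp(hcv, (0.0, 300.0), [990.0, 10.0, 0.0])
print(f"toy R0 = {R0:.2f}, infected at t = 300: {sol.y[1, -1]:.1f}")
```

In an optimal control formulation, the constant treatment rate tau would become a time-dependent control u(t) chosen to minimize an objective weighting infections against cost, which is where the Pontryagin Maximum Principle enters.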
Dependence of the firearm-related homicide rate on gun availability: a mathematical analysis.
Wodarz, Dominik; Komarova, Natalia L
2013-01-01
In the USA, the relationship between the legal availability of guns and the firearm-related homicide rate has been debated. It has been argued that unrestricted gun availability promotes the occurrence of firearm-induced homicides. It has also been pointed out that gun possession can protect potential victims when attacked. This paper provides a first mathematical analysis of this tradeoff, with the goal to steer the debate towards arguing about assumptions, statistics, and scientific methods. The model is based on a set of clearly defined assumptions, which are supported by available statistical data, and is formulated axiomatically such that results do not depend on arbitrary mathematical expressions. According to this framework, two alternative scenarios can minimize the gun-related homicide rate: a ban of private firearms possession, or a policy allowing the general population to carry guns. Importantly, the model identifies the crucial parameters that determine which policy minimizes the death rate, and thus serves as a guide for the design of future epidemiological studies. The parameters that need to be measured include the fraction of offenders that illegally possess a gun, the degree of protection provided by gun ownership, and the fraction of the population who take up their right to own a gun and carry it when attacked. Limited data available in the literature were used to demonstrate how the model can be parameterized, and this preliminary analysis suggests that a ban of private firearm possession, or possibly a partial reduction in gun availability, might lower the rate of firearm-induced homicides. This, however, should not be seen as a policy recommendation, due to the limited data available to inform and parameterize the model. However, the model clearly defines what needs to be measured, and provides a basis for a scientific discussion about assumptions and data.
One-loop pseudo-Goldstone masses in the minimal SO(10) Higgs model
NASA Astrophysics Data System (ADS)
Gráf, Lukáš; Malinský, Michal; Mede, Timon; Susič, Vasja
2017-04-01
We calculate the prominent perturbative contributions shaping the one-loop scalar spectrum of the minimal renormalizable nonsupersymmetric SO(10) Higgs model whose unified gauge symmetry is spontaneously broken by an adjoint scalar. Focusing on its potentially realistic 45 ⊕ 126 variant in which the rank is reduced by a vacuum expectation value of the 5-index antisymmetric self-dual tensor, we provide a thorough analysis of the corresponding Coleman-Weinberg one-loop effective potential, paying particular attention to the masses of the potentially tachyonic pseudo-Goldstone bosons transforming as (1, 3, 0) and (8, 1, 0) under the standard model (SM) gauge group. The results confirm the assumed existence of extended regions in the parameter space supporting a locally stable SM-like quantum vacuum inaccessible at the tree level. The effective potential tedium is compared to that encountered in the previously studied 45 ⊕ 16 SO(10) Higgs model where the polynomial corrections to the relevant pseudo-Goldstone masses turn out to be easily calculable within a very simplified purely diagrammatic approach.
NASA Technical Reports Server (NTRS)
Sreekantamurthy, Thammaiah; Turner, Travis L.; Moore, James B.; Su, Ji
2014-01-01
Airframe noise is a significant part of the overall noise of transport aircraft during the approach and landing phases of flight. Airframe noise reduction is currently emphasized under the Environmentally Responsible Aviation (ERA) and Fixed Wing (FW) Project goals of NASA. A promising concept for trailing-edge-flap noise reduction is a flexible structural element or link that connects the side edges of the deployable flap to the adjacent main-wing structure. The proposed solution is distinguished by minimization of the span-wise extent of the structural link, thereby minimizing the aerodynamic load on the link structure at the expense of an increased deformation requirement. Development of such a flexible structural link necessitated application of hyperelastic materials, atypical structural configurations and novel interface hardware. The resulting highly-deformable structural concept was termed the FLEXible Side Edge Link (FLEXSEL) concept. Prediction of atypical elastomeric deformation responses from detailed structural analysis was essential for evaluating feasible concepts that met the design constraints. The focus of this paper is to describe the many challenges encountered with hyperelastic finite element modeling and the nonlinear structural analysis of evolving FLEXSEL concepts. Detailed herein is the nonlinear analysis of FLEXSEL concepts that emerged during the project, which include solid-section, foam-core, hollow, extended-span and pre-stressed concepts. Coupon-level analyses performed on elastomeric interface joints, which form part of the FLEXSEL topology development, are also presented.
Predictive Cache Modeling and Analysis
2011-11-01
metaheuristic/bin-packing algorithm to optimize task placement based on task communication characterization. Our previous work on task allocation showed... Cache Miss Minimization Technology: To efficiently explore combinations and discover nearly-optimal task-assignment algorithms, we extended to our... it was possible to use our algorithmic techniques to decrease network bandwidth consumption by ~25%. In this effort, we adapted these existing
On the Basis of the Basic Variety.
ERIC Educational Resources Information Center
Schwartz, Bonnie D.
1997-01-01
Considers the interplay between source and target language in relation to two points made by Klein and Perdue: (1) the argument that the analysis of the target language should not be used as the model for analyzing interlanguage data; and (2) the theoretical claim that under the technical assumptions of minimalism, the Basic Variety is a "perfect"…
Thermal Model Development for Ares I-X
NASA Technical Reports Server (NTRS)
Amundsen, Ruth M.; DelCorso, Joe
2008-01-01
Thermal analysis for the Ares I-X vehicle has involved extensive thermal model integration, since thermal models of vehicle elements came from several different NASA and industry organizations. Many valuable lessons were learned in terms of model integration and validation. Modeling practices such as submodel, analysis group and symbol naming were standardized to facilitate the later model integration. Upfront coordination of coordinate systems, timelines, units, symbols and case scenarios was very helpful in minimizing integration rework. A process for model integration was developed that included pre-integration runs and basic checks of both models, and a step-by-step process to efficiently integrate one model into another. Extensive use of model logic was used to create scenarios and timelines for avionics and air flow activation. Efficient methods of model restart between case scenarios were developed. Standardization of software version and even compiler version between organizations was found to be essential. An automated method for applying aeroheating to the full integrated vehicle model, including submodels developed by other organizations, was developed.
Mechanical behavior of cells in microinjection: a minimum potential energy study.
Liu, Fei; Wu, Dan; Chen, Ken
2013-08-01
Microinjection is a widely used technique to deliver foreign materials into biological cells. We propose a mathematical model to study the mechanical behavior of a cell in microinjection. Firstly, a cell is modeled by a hyperelastic membrane and interior cytoplasm. Then, based on the fact that the equilibrium configuration of a cell would minimize the potential energy, the energy function during microinjection is analyzed. With the Lagrange multiplier and Rayleigh-Ritz techniques, we successfully minimize the potential energy and obtain the equilibrium configuration. Based on this model, the injection force, the injection distance, the radius of the microinjector and the membrane stress are studied. The analysis demonstrates that the microinjector radius has a significant influence on the cell mechanical behavior: (1) a larger radius generates a larger injection force and a larger interior pressure at the same injection distance; (2) the radius determines the place where the membrane is most likely to rupture by governing the membrane stress distribution. For a fine microinjector with a radius less than 20% of the cell radius, the most likely rupture point is located at the edge of the contact area between the microinjector and the membrane; however, it may move to the middle of the equilibrium configuration as the radius increases. To verify our model, some experiments were conducted on zebrafish egg cells. The results show that the computational analysis agrees with the experimental data, which supports the findings from the theoretical model. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Chen, Hui; Tan, Chao; Lin, Zan; Wu, Tong
2018-01-01
Milk is among the most popular nutrient sources worldwide and is of great interest due to its beneficial medicinal properties. The feasibility of classifying milk powder samples with respect to their brands and of determining protein concentration is investigated by NIR spectroscopy along with chemometrics. Two datasets were prepared for the experiment. One contains 179 samples of four brands for classification and the other contains 30 samples for quantitative analysis. Principal component analysis (PCA) was used for exploratory analysis. Based on an effective model-independent variable selection method, i.e., minimal-redundancy maximal-relevance (MRMR), only 18 variables were selected to construct a partial least-squares discriminant analysis (PLS-DA) model. On the test set, the PLS-DA model based on the selected variable set was compared with the full-spectrum PLS-DA model, both of which achieved 100% accuracy. In quantitative analysis, the partial least-squares regression (PLSR) model constructed from the selected subset of 260 variables significantly outperforms the full-spectrum model. It seems that the combination of NIR spectroscopy, MRMR and PLS-DA or PLSR is a powerful tool for classifying different brands of milk and determining the protein content.
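The selection-then-classification pipeline can be sketched as follows, using a simplified greedy MRMR stand-in (relevance from mutual information, redundancy from mean absolute correlation) and PLS-DA implemented as PLS regression on one-hot class labels. The data here are synthetic and every shape and setting is an assumption for illustration, not the study's dataset or tuning.

```python
# Simplified MRMR + PLS-DA sketch on synthetic "spectra" (illustrative only).
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.cross_decomposition import PLSRegression

def mrmr_select(X, y, k=18):
    """Greedy minimal-redundancy maximal-relevance selection (simplified)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X.T))                  # feature-feature redundancy
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        score = relevance - corr[:, selected].mean(axis=1)
        score[selected] = -np.inf                    # never reselect a variable
        selected.append(int(np.argmax(score)))
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(179, 700))                      # stand-in NIR spectra
y = rng.integers(0, 4, size=179)                     # stand-in brand labels
idx = mrmr_select(X, y, k=18)
plsda = PLSRegression(n_components=2).fit(X[:, idx], np.eye(4)[y])   # one-hot labels -> PLS-DA
pred = np.argmax(plsda.predict(X[:, idx]), axis=1)
print("training accuracy on synthetic data:", np.mean(pred == y))
```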
Atalağ, Koray; Bilgen, Semih; Gür, Gürden; Boyacioğlu, Sedat
2007-09-01
There are very few evaluation studies for the Minimal Standard Terminology for Digestive Endoscopy. This study aims to evaluate the usage of the Turkish translation of the Minimal Standard Terminology by developing an endoscopic information system. After elicitation of requirements, database modeling and software development were performed. Minimal Standard Terminology-driven forms were designed for rapid data entry. The endoscopic report was rapidly created by applying basic Turkish syntax and grammar rules. Entering free text and editing the final report were also possible. After three years of live usage, data analysis was performed and the results were evaluated. The system has been used for reporting of all endoscopic examinations. A total of 15,638 valid records were analyzed, including 11,381 esophagogastroduodenoscopies, 2,616 colonoscopies, 1,079 rectoscopies and 562 endoscopic retrograde cholangiopancreatographies. In accordance with previous validation studies, the overall usage of Minimal Standard Terminology terms was very high: 85% for examination characteristics, 94% for endoscopic findings and 94% for endoscopic diagnoses. Some new terms, attributes and allowed values were also added for better clinical coverage. Minimal Standard Terminology has been shown to cover a high proportion of routine endoscopy reports. Good user acceptance proves that both the terms and the structure of Minimal Standard Terminology were consistent with usual clinical thinking. However, future work on Minimal Standard Terminology is mandatory for better coverage of endoscopic retrograde cholangiopancreatography examinations. Technically, new software development methodologies have to be sought to lower the cost of development and of the maintenance phase. They should also address integration and interoperability of disparate information systems.
Health economic evaluation: important principles and methodology.
Rudmik, Luke; Drummond, Michael
2013-06-01
To discuss health economic evaluation and improve the understanding of common methodology. This article discusses the methodology for the following types of economic evaluations: cost-minimization, cost-effectiveness, cost-utility, cost-benefit, and economic modeling. Topics include health-state utility measures, the quality-adjusted life year (QALY), uncertainty analysis, discounting, decision tree analysis, and Markov modeling. Economic evaluation is the comparative analysis of alternative courses of action in terms of both their costs and consequences. With increasing health care expenditure and limited resources, it is important for physicians to consider the economic impact of their interventions. Understanding common methodology involved in health economic evaluation will improve critical appraisal of the literature and optimize future economic evaluations. Copyright © 2012 The American Laryngological, Rhinological and Otological Society, Inc.
NASA Astrophysics Data System (ADS)
Aartsen, M. G.; Abraham, K.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Ahrens, M.; Altmann, D.; Anderson, T.; Ansseau, I.; Anton, G.; Archinger, M.; Arguelles, C.; Arlen, T. C.; Auffenberg, J.; Bai, X.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; Beiser, E.; BenZvi, S.; Berghaus, P.; Berley, D.; Bernardini, E.; Bernhard, A.; Besson, D. Z.; Binder, G.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Börner, M.; Bos, F.; Bose, D.; Böser, S.; Botner, O.; Braun, J.; Brayeur, L.; Bretz, H.-P.; Buzinsky, N.; Casey, J.; Casier, M.; Cheung, E.; Chirkin, D.; Christov, A.; Clark, K.; Classen, L.; Coenders, S.; Collin, G. H.; Conrad, J. M.; Cowen, D. F.; Cruz Silva, A. H.; Danninger, M.; Daughhetee, J.; Davis, J. C.; Day, M.; de André, J. P. A. M.; De Clercq, C.; del Pino Rosendo, E.; Dembinski, H.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de Wasseige, G.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; di Lorenzo, V.; Dumm, J. P.; Dunkman, M.; Eberhardt, B.; Edsjö, J.; Ehrhardt, T.; Eichmann, B.; Euler, S.; Evenson, P. A.; Fahey, S.; Fazely, A. R.; Feintzeig, J.; Felde, J.; Filimonov, K.; Finley, C.; Flis, S.; Fösig, C.-C.; Fuchs, T.; Gaisser, T. K.; Gaior, R.; Gallagher, J.; Gerhardt, L.; Ghorbani, K.; Gier, D.; Gladstone, L.; Glagla, M.; Glüsenkamp, T.; Goldschmidt, A.; Golup, G.; Gonzalez, J. G.; Góra, D.; Grant, D.; Griffith, Z.; Groß, A.; Ha, C.; Haack, C.; Haj Ismail, A.; Hallgren, A.; Halzen, F.; Hansen, E.; Hansmann, B.; Hanson, K.; Hebecker, D.; Heereman, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hignight, J.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Holzapfel, K.; Homeier, A.; Hoshina, K.; Huang, F.; Huber, M.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; In, S.; Ishihara, A.; Jacobi, E.; Japaridze, G. S.; Jeong, M.; Jero, K.; Jones, B. J. P.; Jurkovic, M.; Kappes, A.; Karg, T.; Karle, A.; Katz, U.; Kauer, M.; Keivani, A.; Kelley, J. L.; Kemp, J.; Kheirandish, A.; Kiryluk, J.; Klein, S. R.; Kohnen, G.; Koirala, R.; Kolanoski, H.; Konietz, R.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. J.; Kowalski, M.; Krings, K.; Kroll, G.; Kroll, M.; Krückl, G.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Lanfranchi, J. L.; Larson, M. J.; Lesiak-Bzdak, M.; Leuermann, M.; Leuner, J.; Lu, L.; Lünemann, J.; Madsen, J.; Maggi, G.; Mahn, K. B. M.; Mandelartz, M.; Maruyama, R.; Mase, K.; Matis, H. S.; Maunu, R.; McNally, F.; Meagher, K.; Medici, M.; Meier, M.; Meli, A.; Menne, T.; Merino, G.; Meures, T.; Miarecki, S.; Middell, E.; Mohrmann, L.; Montaruli, T.; Morse, R.; Nahnhauer, R.; Naumann, U.; Neer, G.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke Pollmann, A.; Olivas, A.; Omairat, A.; O'Murchadha, A.; Palczewski, T.; Pandya, H.; Pankova, D. V.; Paul, L.; Pepper, J. A.; Pérez de los Heros, C.; Pfendner, C.; Pieloth, D.; Pinat, E.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Quinnan, M.; Raab, C.; Rädel, L.; Rameez, M.; Rawlins, K.; Reimann, R.; Relich, M.; Resconi, E.; Rhode, W.; Richman, M.; Richter, S.; Riedel, B.; Robertson, S.; Rongen, M.; Rott, C.; Ruhe, T.; Ryckbosch, D.; Sabbatini, L.; Sander, H.-G.; Sandrock, A.; Sandroos, J.; Sarkar, S.; Savage, C.; Schatto, K.; Schimp, M.; Schlunder, P.; Schmidt, T.; Schoenen, S.; Schöneberg, S.; Schönwald, A.; Schulte, L.; Schumacher, L.; Scott, P.; Seckel, D.; Seunarine, S.; Silverwood, H.; Soldin, D.; Song, M.; Spiczak, G. M.; Spiering, C.; Stahlberg, M.; Stamatikos, M.; Stanev, T.; Stasik, A.; Steuer, A.; Stezelberger, T.; Stokstad, R. 
G.; Stößl, A.; Ström, R.; Strotjohann, N. L.; Sullivan, G. W.; Sutherland, M.; Taavola, H.; Taboada, I.; Tatar, J.; Ter-Antonyan, S.; Terliuk, A.; Te{š}ić, G.; Tilav, S.; Toale, P. A.; Tobin, M. N.; Toscano, S.; Tosi, D.; Tselengidou, M.; Turcati, A.; Unger, E.; Usner, M.; Vallecorsa, S.; Vandenbroucke, J.; van Eijndhoven, N.; Vanheule, S.; van Santen, J.; Veenkamp, J.; Vehring, M.; Voge, M.; Vraeghe, M.; Walck, C.; Wallace, A.; Wallraff, M.; Wandkowsky, N.; Weaver, Ch.; Wendt, C.; Westerhoff, S.; Whelan, B. J.; Wiebe, K.; Wiebusch, C. H.; Wille, L.; Williams, D. R.; Wills, L.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Xu, Y.; Yanez, J. P.; Yodh, G.; Yoshida, S.; Zoll, M.
2016-04-01
We present an improved event-level likelihood formalism for including neutrino telescope data in global fits to new physics. We derive limits on spin-dependent dark matter-proton scattering by employing the new formalism in a re-analysis of data from the 79-string IceCube search for dark matter annihilation in the Sun, including explicit energy information for each event. The new analysis excludes a number of models in the weak-scale minimal supersymmetric standard model (MSSM) for the first time. This work is accompanied by the public release of the 79-string IceCube data, as well as an associated computer code for applying the new likelihood to arbitrary dark matter models.
Non-minimally coupled f(R) cosmology
NASA Astrophysics Data System (ADS)
Thakur, Shruti; Sen, Anjan A.; Seshadri, T. R.
2011-02-01
We investigate the consequences of non-minimal gravitational coupling to matter and study how it differs from the case of minimal coupling by choosing certain simple forms for the nature of the coupling. The values of the parameters are specified at z=0 (the present epoch) and the equations are evolved backwards to calculate the evolution of the cosmological parameters. We find that the Hubble parameter evolves more slowly in the non-minimal coupling case than in the minimal coupling case. In both cases, the universe accelerates around the present time and enters the decelerating regime in the past. Using the latest Union2 dataset for supernova Type Ia observations as well as the baryon acoustic oscillation (BAO) data from SDSS observations, we constrain the parameters of the Linder exponential model in the two different approaches. We find that there is an upper bound on the model parameter in the minimal coupling case, whereas in the non-minimal coupling case there is a range of allowed values for the model parameter.
Finite-element modeling of soft tissue rolling indentation.
Sangpradit, Kiattisak; Liu, Hongbin; Dasgupta, Prokar; Althoefer, Kaspar; Seneviratne, Lakmal D
2011-12-01
We describe a finite-element (FE) model for simulating wheel-rolling tissue deformations using a rolling FE model (RFEM). A wheeled probe performing rolling tissue indentation has proven to be a promising approach for compensating for the loss of haptic and tactile feedback experienced during robotic-assisted minimally invasive surgery (H. Liu, D. P. Noonan, B. J. Challacombe, P. Dasgupta, L. D. Seneviratne, and K. Althoefer, "Rolling mechanical imaging for tissue abnormality localization during minimally invasive surgery, " IEEE Trans. Biomed. Eng., vol. 57, no. 2, pp. 404-414, Feb. 2010; K. Sangpradit, H. Liu, L. Seneviratne, and K. Althoefer, "Tissue identification using inverse finite element analysis of rolling indentation," in Proc. IEEE Int. Conf. Robot. Autom. , Kobe, Japan, 2009, pp. 1250-1255; H. Liu, D. Noonan, K. Althoefer, and L. Seneviratne, "The rolling approach for soft tissue modeling and mechanical imaging during robot-assisted minimally invasive surgery," in Proc. IEEE Int. Conf. Robot. Autom., May 2008, pp. 845-850; H. Liu, P. Puangmali, D. Zbyszewski, O. Elhage, P. Dasgupta, J. S. Dai, L. Seneviratne, and K. Althoefer, "An indentation depth-force sensing wheeled probe for abnormality identification during minimally invasive surgery," Proc. Inst. Mech. Eng., H, vol. 224, no. 6, pp. 751-63, 2010; D. Noonan, H. Liu, Y. Zweiri, K. Althoefer, and L. Seneviratne, "A dual-function wheeled probe for tissue viscoelastic property identification during minimally invasive surgery," in Proc. IEEE Int. Conf. Robot. Autom. , 2008, pp. 2629-2634; H. Liu, J. Li, Q. I. Poon, L. D. Seneviratne, and K. Althoefer, "Miniaturized force indentation-depth sensor for tissue abnormality identification," IEEE Int. Conf. Robot. Autom., May 2010, pp. 3654-3659). A sound understanding of wheel-tissue rolling interaction dynamics will facilitate the evaluation of signals from rolling indentation. In this paper, we model the dynamic interactions between a wheeled probe and a soft tissue sample using the ABAQUS FE software package. The aim of this work is to more precisely locate abnormalities within soft tissue organs using RFEM and hence aid surgeons to improve diagnostic ability. The soft tissue is modeled as a nonlinear hyperelastic material with geometrical nonlinearity. The proposed RFEM was validated on a silicone phantom and a porcine kidney sample. The results show that the proposed method can predict the wheel-tissue interaction forces of rolling indentation with good accuracy and can also accurately identify the location and depth of simulated tumors.
Limbic hyperconnectivity in the vegetative state.
Di Perri, Carol; Bastianello, Stefano; Bartsch, Andreas J; Pistarini, Caterina; Maggioni, Giorgio; Magrassi, Lorenzo; Imberti, Roberto; Pichiecchio, Anna; Vitali, Paolo; Laureys, Steven; Di Salle, Francesco
2013-10-15
To investigate functional connectivity between the default mode network (DMN) and other networks in disorders of consciousness. We analyzed MRI data from 11 patients in a vegetative state and 7 patients in a minimally conscious state along with age- and sex-matched healthy control subjects. MRI data analysis included nonlinear spatial normalization to compensate for disease-related anatomical distortions. We studied brain connectivity data from resting-state MRI temporal series, combining noninferential (independent component analysis) and inferential (seed-based general linear model) methods. In DMN hypoconnectivity conditions, a patient's DMN functional connectivity shifts and paradoxically increases in limbic structures, including the orbitofrontal cortex, insula, hypothalamus, and the ventral tegmental area. Concurrently with DMN hypoconnectivity, we report limbic hyperconnectivity in patients in vegetative and minimally conscious states. This hyperconnectivity may reflect the persistent engagement of residual neural activity in self-reinforcing neural loops, which, in turn, could disrupt normal patterns of connectivity.
Assessment Practices of Child Clinicians.
Cook, Jonathan R; Hausman, Estee M; Jensen-Doss, Amanda; Hawley, Kristin M
2017-03-01
Assessment is an integral component of treatment. However, prior surveys indicate clinicians may not use standardized assessment strategies. We surveyed 1,510 clinicians and used multivariate analysis of variance to explore group differences in specific measure use. Clinicians used unstandardized measures more frequently than standardized measures, although psychologists used standardized measures more frequently than nonpsychologists. We also used latent profile analysis to classify clinicians based on their overall approach to assessment and examined associations between clinician-level variables and assessment class or profile membership. A four-profile model best fit the data. The largest profile consisted of clinicians who primarily used unstandardized assessments (76.7%), followed by broad-spectrum assessors who regularly use both standardized and unstandardized assessment (11.9%), and two smaller profiles of minimal (6.0%) and selective assessors (5.5%). Compared with broad-spectrum assessors, unstandardized and minimal assessors were less likely to report having adequate standardized measures training. Implications for clinical practice and training are discussed.
On the geodetic applications of simultaneous range-differencing to LAGEOS
NASA Technical Reports Server (NTRS)
Pablis, E. C.
1982-01-01
The possibility of improving the accuracy of geodetic results by use of simultaneously observed ranges to Lageos, in a differencing mode, from pairs of stations was studied. Simulation tests show that model errors can be effectively minimized by simultaneous range differencing (SRD) for a rather broad class of network satellite pass configurations. Least squares approximation methods using monomials and Chebyshev polynomials are compared with cubic spline interpolation. Analysis of three types of orbital biases (radial, along-track, and across-track) shows that radial biases are the ones most efficiently minimized in the SRD mode. The degree to which the other two can be minimized depends on the type of parameters under estimation and the geometry of the problem. Sensitivity analyses of the SRD observation show that for baseline length estimations the most useful data are those collected in a direction parallel to the baseline and at a low elevation. Estimating individual baseline lengths with respect to an assumed but fixed orbit not only decreases the cost, but also further reduces the effects of model biases on the results as opposed to a network solution. Analogous results and conclusions are obtained for the estimates of the coordinates of the pole.
NASA Astrophysics Data System (ADS)
Zhao, Mi; Li, ZhiWu; Hu, HeSuan
2010-09-01
This article develops a deadlock prevention policy for a class of generalised Petri nets, which can model a large class of flexible manufacturing systems well. The analysis of such a system leads us to characterise the deadlock situations in terms of the insufficiently marked siphons in its generalised Petri-net model. The proposed policy is carried out in an iterative way. At each step, a minimal siphon is derived from a maximal deadly marked siphon that is found by solving a mixed integer programming (MIP) problem. An algorithm is formalised that can efficiently compute such a minimal siphon from a maximal one. A monitor is added for a derived minimal siphon such that it is max-controlled if it is elementary with respect to the siphons that have been derived. The liveness of the controlled system is established by the fact that no further siphon can be derived from the MIP solution. After a liveness-enforcing net supervisor is computed without complete siphon enumeration, the output arcs of the additional monitors are rearranged such that the monitors still act while restricting the system less. Examples are presented to demonstrate the proposed method.
Continued investigation of potential application of Omega navigation to civil aviation
NASA Technical Reports Server (NTRS)
Baxa, E. G., Jr.
1978-01-01
Major attention is given to an analysis of receiver repeatability in measuring OMEGA phase data. Repeatability is defined as the ability of two like receivers which are co-located to achieve the same LOP phase readings. Specific data analysis is presented. A propagation model is described which has been used in the analysis of propagation anomalies. Composite OMEGA analysis is presented in terms of carrier phase correlation analysis and the determination of carrier phase weighting coefficients for minimizing composite phase variation. Differential OMEGA error analysis is presented for receiver separations. Three-frequency analysis includes LOP error and position error based on three and four OMEGA transmissions. Results of phase-amplitude correlation studies are presented.
de la Llave-Rincón, Ana Isabel; Fernández-de-Las-Peñas, César; Pérez-de-Heredia-Torres, Marta; Martínez-Perez, Almudena; Valenza, Marie Carmen; Pareja, Juan A
2011-06-01
The aim of this study was to analyze the differences in deficits in fine motor control and pinch grip force between patients with minimal, moderate/mild, or severe carpal tunnel syndrome (CTS) and healthy age- and hand dominance-matched controls. A case-control study was conducted. The subtests of the Purdue Pegboard Test (one-hand and bilateral pin placements and assemblies) and pinch grip force between the thumb and the remaining four fingers of the hand were bilaterally evaluated in 66 women with minimal (n = 16), moderate (n = 16), or severe (n = 34) CTS and in 20 age- and hand-matched healthy women. The differences among the groups were analyzed using different mixed models of analysis of variance. A two-way mixed analysis of variance revealed significant differences between groups, not depending on the presence of unilateral or bilateral symptoms (side), for the one-hand pin placement subtest: patients showed bilateral lower scores compared with controls (P < 0.001), without differences among those with minimal, moderate, or severe CTS (P = 0.946). The patients also exhibited lower scores in bilateral pin placement (P < 0.001) and assembly (P < 0.001) subtests, without differences among them. The three-way analysis of variance revealed significant differences among groups (P < 0.001) and fingers (P < 0.001), not depending on the presence of unilateral/bilateral symptoms (P = 0.684), for pinch grip force: patients showed bilateral lower pinch grip force in all fingers compared with healthy controls, without differences among those with minimal, moderate, or severe CTS. The current study revealed similar bilateral deficits in fine motor control and pinch grip force in patients with minimal, moderate, or severe CTS, supporting that fine motor control deficits are a common feature of CTS not associated with electrodiagnostic findings.
Translational PK/PD of Anti-Infective Therapeutics
Rathi, Chetan; Lee, Richard E.; Meibohm, Bernd
2016-01-01
Translational PK/PD modeling has emerged as a critical technique for quantitative analysis of the relationship between dose, exposure and response of antibiotics. By combining model components for pharmacokinetics, bacterial growth kinetics and concentration-dependent drug effects, these models are able to quantitatively capture and simulate the complex interplay between antibiotic, bacterium and host organism. Fine-tuning of these basic model structures makes it possible to account for complicating factors such as resistance development, combination therapy, or host responses. With this tool set at hand, mechanism-based PK/PD modeling and simulation allows the development of optimal dosing regimens for novel and established antibiotics with maximum efficacy and minimal resistance development. PMID:27978987
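A minimal, hypothetical sketch of the model structure described (pharmacokinetics coupled to bacterial growth kinetics and a concentration-dependent drug effect) is shown below: one-compartment drug kinetics drive an Emax-type kill term on a logistically growing bacterial population. Every parameter name and value is an illustrative assumption, not a fitted constant.

```python
# Toy PK/PD coupling: drug concentration C(t) and bacterial load B(t) over repeated doses.
from scipy.integrate import solve_ivp

ke, V = 0.3, 20.0               # elimination rate constant (1/h), volume of distribution (L)
kg, Bmax = 0.8, 1e9             # net growth rate (1/h), carrying capacity (CFU/mL)
Emax, EC50 = 2.0, 1.0           # maximal kill rate (1/h), concentration at half-maximal kill (mg/L)
dose, tau = 500.0, 12.0         # bolus dose (mg) given every 12 h

def pkpd(t, y):
    C, B = y                                     # drug concentration, bacterial load
    kill = Emax * C / (EC50 + C)                 # concentration-dependent kill rate
    return [-ke * C,
            kg * B * (1.0 - B / Bmax) - kill * B]

state = [dose / V, 1e6]                          # start of therapy: first dose, 1e6 CFU/mL
for interval in range(4):                        # simulate four dosing intervals (48 h)
    sol = solve_ivp(pkpd, (0.0, tau), state)
    state = [sol.y[0, -1] + dose / V, sol.y[1, -1]]
print(f"bacterial load after 48 h: {state[1]:.2e} CFU/mL")
```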
The Strategic WAste Minimization Initiative (SWAMI) Software, Version 2.0 is a tool for using process analysis for identifying waste minimization opportunities within an industrial setting. The software requires user-supplied information for process definition, as well as materia...
NASA Astrophysics Data System (ADS)
Quiros, Israel; Gonzalez, Tame; Nucamendi, Ulises; García-Salcedo, Ricardo; Horta-Rangel, Francisco Antonio; Saavedra, Joel
2018-04-01
In this paper we investigate the so-called ‘phantom barrier crossing’ issue in a cosmological model based on the scalar–tensor theory with non-minimal derivative coupling to the Einstein tensor. Special attention will be paid to the physical bounds on the squared sound speed. The numeric results are geometrically illustrated by means of a qualitative procedure of analysis that is based on the mapping of the orbits in the phase plane onto the surfaces that represent physical quantities in the extended phase space, that is: the phase plane complemented with an additional dimension relative to the given physical parameter. We find that the cosmological model based on the non-minimal derivative coupling theory—this includes both the quintessence and the pure derivative coupling cases—has serious causality problems related to superluminal propagation of the scalar and tensor perturbations. Even more disturbing is the finding that, despite the fact that the underlying theory is free of the Ostrogradsky instability, the corresponding cosmological model is plagued by the Laplacian (classical) instability related with negative squared sound speed. This instability leads to an uncontrollable growth of the energy density of the perturbations that is inversely proportional to their wavelength. We show that, independent of the self-interaction potential, for positive coupling the tensor perturbations propagate superluminally, while for negative coupling a Laplacian instability arises. This latter instability invalidates the possibility for the model to describe the primordial inflation.
NASA Astrophysics Data System (ADS)
Mashayekhi, Mohammad Jalali; Behdinan, Kamran
2017-10-01
The increasing demand to minimize undesired vibration and noise levels in several high-tech industries has generated a renewed interest in vibration transfer path analysis. Analyzing vibration transfer paths within a system is of crucial importance in designing an effective vibration isolation strategy. Most existing vibration transfer path analysis techniques are empirical and are suitable for diagnosis and troubleshooting purposes. The lack of an analytical transfer path analysis that can be used in the design stage is the main motivation behind this research. In this paper, an analytical transfer path analysis based on four-pole theory is proposed for multi-energy-domain systems. The bond graph modeling technique, which is an effective approach to modeling multi-energy-domain systems, is used to develop the system model. An electro-mechanical system is used as a benchmark example to elucidate the effectiveness of the proposed technique. An algorithm to obtain the equivalent four-pole representation of a dynamical system based on the corresponding bond graph model is also presented.
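As a reminder of what a four-pole (transmission-matrix) description looks like, the sketch below relates force and velocity at an element's input to those at its output with a 2x2 matrix, and cascades elements by matrix multiplication. The mass and spring formulas are the textbook four-poles; the numerical values and the three-element chain are assumptions, not the paper's benchmark system.

```python
# Four-pole sketch: [F_in, v_in]^T = T @ [F_out, v_out]^T; cascaded elements multiply.
import numpy as np

def four_pole_mass(m, omega):
    """Rigid mass m: F_in = F_out + j*omega*m*v_out, v_in = v_out."""
    return np.array([[1.0, 1j * omega * m],
                     [0.0, 1.0]])

def four_pole_spring(k, omega):
    """Massless spring of stiffness k: v_in = v_out + (j*omega/k)*F_out."""
    return np.array([[1.0, 0.0],
                     [1j * omega / k, 1.0]])

omega = 2.0 * np.pi * 50.0                     # analysis frequency in rad/s (assumed)
T = four_pole_mass(2.0, omega) @ four_pole_spring(1e5, omega) @ four_pole_mass(0.5, omega)
print("equivalent four-pole matrix of the three-element chain:\n", T)
```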
Robot-assisted versus open sacrocolpopexy: a cost-minimization analysis.
Elliott, Christopher S; Hsieh, Michael H; Sokol, Eric R; Comiter, Craig V; Payne, Christopher K; Chen, Bertha
2012-02-01
Abdominal sacrocolpopexy is considered a standard of care operation for apical vaginal vault prolapse repair. Using outcomes at our center we evaluated whether the robotic approach to sacrocolpopexy is as cost-effective as the open approach. After obtaining institutional review board approval we performed cost-minimization analysis in a retrospective cohort of patients who underwent sacrocolpopexy at our institution between 2006 and 2010. Threshold values, that is model variable values at which the most cost effective approach crosses over to an alternative approach, were determined by testing model variables over realistic ranges using sensitivity analysis. Hospital billing data were also evaluated to confirm our findings. Operative time was similar for robotic and open surgery (226 vs 221 minutes) but postoperative length of stay differed significantly (1.0 vs 3.3 days, p <0.001). Base case analysis revealed an overall 10% cost savings for robot-assisted vs open sacrocolpopexy ($10,178 vs $11,307). Tornado analysis suggested that the number of institutional robotic cases done annually, length of stay and cost per hospitalization day in the postoperative period were the largest drivers of cost. Analysis of our hospital billing data showed a similar trend with robotic surgery costing 4.2% less than open surgery. A robot-assisted approach to sacrocolpopexy can be equally or less costly than an open approach. This depends on a sufficient institutional robotic case volume and a shorter postoperative stay for patients who undergo the robot-assisted procedure. Copyright © 2012 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
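A back-of-the-envelope sketch of the cost-minimization logic is shown below: total cost is decomposed into operative cost, per-case robot overhead, and postoperative stay, and the comparison is swept over the per-day hospitalization cost. The lengths of stay are the values quoted in the abstract, while the operative cost and robot overhead are assumed placeholders rather than the study's billing data.

```python
# Toy comparison of robot-assisted vs open sacrocolpopexy costs (assumed inputs
# except the lengths of stay quoted in the abstract).
def total_cost(or_cost, robot_overhead, los_days, cost_per_day):
    return or_cost + robot_overhead + los_days * cost_per_day

for cost_per_day in (1000.0, 2000.0, 3000.0):
    robotic = total_cost(or_cost=8000.0, robot_overhead=1200.0, los_days=1.0, cost_per_day=cost_per_day)
    open_sc = total_cost(or_cost=8000.0, robot_overhead=0.0, los_days=3.3, cost_per_day=cost_per_day)
    cheaper = "robot-assisted" if robotic < open_sc else "open"
    print(f"${cost_per_day:,.0f}/day: robot ${robotic:,.0f} vs open ${open_sc:,.0f} -> {cheaper}")
```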
Moar, Peter N; Love, John D; Ladouceur, François; Cahill, Laurence W
2006-09-01
We analyze two basic aspects of a scanning near-field optical microscope (SNOM) probe's operation: (i) spot-size evolution of the electric field along the probe with and without a metal layer, and (ii) a modal analysis of the SNOM probe, particularly in close proximity to the aperture. A slab waveguide model is utilized to minimize the analytical complexity, yet provides useful quantitative results--including losses associated with the metal coating--which can then be used as design rules.
Bifurcation analysis and dimension reduction of a predator-prey model for the L-H transition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dam, Magnus; Brøns, Morten; Juul Rasmussen, Jens
2013-10-15
The L-H transition denotes a shift to an improved confinement state of a toroidal plasma in a fusion reactor. A model of the L-H transition is required to simulate the time dependence of tokamak discharges that include the L-H transition. A 3-ODE predator-prey type model of the L-H transition is investigated with bifurcation theory of dynamical systems. The analysis shows that the model contains three types of transitions: an oscillating transition, a sharp transition with hysteresis, and a smooth transition. The model is recognized as a slow-fast system. A reduced 2-ODE model consisting of the full model restricted to the flow on the critical manifold is found to contain all the same dynamics as the full model. This means that all the dynamics in the system is essentially 2-dimensional, and a minimal model of the L-H transition could be a 2-ODE model.
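The slow-fast structure the authors exploit can be illustrated with a generic toy of the same flavor: two fast variables (a turbulence-like "prey" and a flow-like "predator") coupled to one slowly driven variable. This is not the specific 3-ODE model of the paper; every equation and value below is an assumption chosen only to show the time-scale separation.

```python
# Generic slow-fast predator-prey toy (illustrative only, not the paper's model).
from scipy.integrate import solve_ivp

eps = 0.01                                     # small parameter: slow drive vs fast dynamics

def lh_toy(t, y):
    E, Z, P = y                                # "prey" (turbulence), "predator" (flow), slow drive
    dE = (P - Z - E) * E                       # prey grows with the drive, damped by the predator
    dZ = (E - 0.5) * Z                         # predator feeds on the prey
    dP = eps * (1.0 - E * P)                   # slow variable: input vs prey-induced losses
    return [dE, dZ, dP]

sol = solve_ivp(lh_toy, (0.0, 400.0), [0.1, 0.05, 0.1], max_step=0.1)
print("final state (E, Z, P):", [round(v, 3) for v in sol.y[:, -1]])
```

Restricting such a system to its critical manifold, where the fast variables are slaved to the slow one, is the kind of reduction the abstract describes for obtaining the 2-ODE model.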
Consequences of an Abelian family symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramond, P.
1996-01-01
The addition of an Abelian family symmetry to the Minimal Supersymmetric Standard Model reproduces the observed hierarchies of quark and lepton masses and quark mixing angles, only if it is anomalous. Green-Schwarz compensation of its anomalies requires the electroweak mixing angle to be sin²θ_W = 3/8 at the string scale, without any assumed GUT structure, suggesting a superstring origin for the standard model. The analysis is extended to neutrino masses and the lepton mixing matrix.
1998-04-28
be discussed. 2.1 ECONOMIC REPLACEMENT THEORY Decisions about heavy equipment should be made based on sound economic principles, not emotions... Life) will be less than L*. The converse is also true. 2.1.3 The Repair Limit Theory A different way of looking at the economic replacement decision... Summary Three different economic models have been reviewed in this section. The output of each is distinct. One seeks to minimize costs, one seeks to
Belaid, D; Vendeuvre, T; Bouchoucha, A; Brémand, F; Brèque, C; Rigoard, P; Germaneau, A
2018-05-08
Treatment for fractures of the tibial plateau is in most cases carried out by stable fixation in order to allow early mobilization. Minimally invasive technologies such as tibioplasty or stabilization by locking plate, bone augmentation and cement filling (CF) have recently been used to treat this type of fracture. The aim of this paper was to determine the mechanical behavior of the tibial plateau by numerical modeling and to quantify the effects on the mechanical properties of the tibia from injury through healing. A personalized Finite Element (FE) model of the tibial plateau from a clinical case has been developed to analyze stress distribution in the tibial plateau stabilized by balloon osteoplasty and to determine the influence of the injected cement. Stress analysis was performed for different stages after surgery. Just after surgery, the maximum von Mises stresses obtained for the fractured tibia treated with and without CF were 134.9 MPa and 289.9 MPa, respectively, on the plate. Stress distribution showed increased values in the trabecular bone in the model treated with a locking plate and CF and reduced stress in the cortical bone in the model treated with a locking plate only. The computed stresses and displacements of the fractured models show that cement filling of the tibial depression fracture may increase implant stability and decrease the loss of depression reduction, while the presence of the cement in the healed model renders the load distribution uniform. Copyright © 2018 Elsevier Ltd. All rights reserved.
Kim, S-J; Kwon, Y-H; Hwang, C-J
2016-05-01
The objective of this study was to compare the biomechanical characteristics between two types of self-ligating brackets and conventional metal brackets using finite element analysis of a vertically displaced canine model focusing on the desired force on the canine and undesirable forces on adjacent teeth. Three-dimensional finite element models of the maxillary dentition with 1-mm, 2-mm, and 3-mm vertically displaced canines were constructed. Two different self-ligating brackets (In-Ovation C and Smart clip) and a conventional metal bracket (Micro-arch) were modeled. After a 0.016-inch NiTi (0.40 mm, round) wire was engaged, the displacement of each tooth was calculated using x-, y-, and z-coordinates, and the tensile and compressive stresses were calculated. The extrusion and maximal tensile stress of the canine differed little between the three brackets, but the intrusion and minimal compressive stress values of the adjacent teeth differed considerably and were highest in the Smart clip and least in the In-Ovation C. The extrusion and maximal tensile stress of the canine in the 3-mm displacement model was less than that in the 2-mm displacement model, and the intrusion and minimal compressive stress of the adjacent teeth increased with the degree of displacement. Self-ligating brackets were not superior to conventional brackets in leveling a vertically displaced canine. A continuous arch wire may not be recommended for leveling of severely displaced canines whether using self-ligating or conventional brackets. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Zhang, Lin-lin; Xu, Zhi-fang; Tan, Yan-hong; Chen, Xiu-hua; Xu, Ai-ning; Ren, Fang-gang; Wang, Hong-wei
2013-01-01
To screen for potential protein biomarkers of minimal residual disease (MRD) in acute promyelocytic leukemia (APL) by comparing differentially expressed serum proteins between APL patients at diagnosis, APL patients after complete remission (CR), and healthy controls, and to establish and verify a diagnostic model. Serum proteins from 36 cases of primary APL, 29 cases of APL during complete remission and 32 healthy controls were purified by magnetic beads and then analyzed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). The spectra were analyzed statistically using FlexAnalysis(TM) and ClinProt(TM) software. Two prediction models, primary APL versus healthy control and primary APL versus APL CR, were developed. Thirty-four statistically significant peptide peaks were obtained with m/z values ranging from 1000 to 10 000 (P < 0.001) in the primary APL/healthy control model. Seven statistically significant peptide peaks were obtained in the primary APL/APL CR model (P < 0.001). By comparing the protein profiles between the two models, three peptides with m/z 4642, 7764 and 9289 were considered candidate protein biomarkers of APL MRD. A diagnostic pattern for APL CR using m/z 4642 and 9289 was established. Blind validation yielded correct classification of 6 out of 8 cases. MALDI-TOF MS analysis of APL patient serum proteins can be used as a promising dynamic method for MRD detection, and the two peptides with m/z 4642 and 9289 may be better biomarkers.
Li, Cheng-Wei; Chen, Bor-Sen
2016-01-01
Epigenetic and microRNA (miRNA) regulation are associated with carcinogenesis and the development of cancer. By using the available omics data, including those from next-generation sequencing (NGS), genome-wide methylation profiling, candidate integrated genetic and epigenetic network (IGEN) analysis, and drug response genome-wide microarray analysis, we constructed an IGEN system based on three coupling regression models that characterize protein-protein interaction networks (PPINs), gene regulatory networks (GRNs), miRNA regulatory networks (MRNs), and epigenetic regulatory networks (ERNs). By applying a system identification method and principal genome-wide network projection (PGNP) to the IGEN analysis, we identified core network biomarkers to investigate bladder carcinogenic mechanisms and to design multiple drug combinations for treating bladder cancer with minimal side-effects. The progression of DNA repair and cell proliferation in stage 1 bladder cancer ultimately results not only in the derepression of miR-200a and miR-200b but also in the regulation of the TNF pathway to metastasis-related genes or proteins, cell proliferation, and DNA repair in stage 4 bladder cancer. We designed a multiple drug combination comprising gefitinib, estradiol, yohimbine, and fulvestrant for treating stage 1 bladder cancer with minimal side-effects, and another multiple drug combination comprising gefitinib, estradiol, chlorpromazine, and LY294002 for treating stage 4 bladder cancer with minimal side-effects.
Liu, Hongtao; Johnson, Jeffrey L.; Koval, Greg; Malnassy, Greg; Sher, Dorie; Damon, Lloyd E.; Hsi, Eric D.; Bucci, Donna Marie; Linker, Charles A.; Cheson, Bruce D.; Stock, Wendy
2012-01-01
Background In the present study, the prognostic impact of minimal residual disease during treatment on time to progression and overall survival was analyzed prospectively in patients with mantle cell lymphoma treated on the Cancer and Leukemia Group B 59909 clinical trial. Design and Methods Peripheral blood and bone marrow samples were collected during different phases of the Cancer and Leukemia Group B 59909 study for minimal residual disease analysis. Minimal residual disease status was determined by quantitative polymerase chain reaction of IgH and/or BCL-1/JH gene rearrangement. Correlation of minimal residual disease status with time to progression and overall survival was determined. In multivariable analysis, minimal residual disease, and other risk factors were correlated with time to progression. Results Thirty-nine patients had evaluable, sequential peripheral blood and bone marrow samples for minimal residual disease analysis. Using peripheral blood monitoring, 18 of 39 (46%) achieved molecular remission following induction therapy. The molecular remission rate increased from 46 to 74% after one course of intensification therapy. Twelve of 21 minimal residual disease positive patients (57%) progressed within three years of follow up compared to 4 of 18 (22%) molecular remission patients (P=0.049). Detection of minimal residual disease following induction therapy predicted disease progression with a hazard ratio of 3.7 (P=0.016). The 3-year probability of time to progression among those who were in molecular remission after induction chemotherapy was 82% compared to 48% in patients with detectable minimal residual disease. The prediction of time to progression by post-induction minimal residual disease was independent of other prognostic factors in multivariable analysis. Conclusions Detection of minimal residual disease following induction immunochemotherapy was an independent predictor of time to progression following immunochemotherapy and autologous stem cell transplantation for mantle cell lymphoma. The clinical trial was registered at ClinicalTrials.gov: NCT00020943. PMID:22102709
Hauer, Grant; Vic Adamowicz, W L; Boutin, Stan
2018-07-15
Tradeoffs between cost and recovery targets for boreal caribou herds, threatened species in Alberta, Canada, are examined using a dynamic cost minimization model. Unlike most approaches used for minimizing costs of achieving threatened species targets, we incorporate opportunity costs of surface (forests) and subsurface resources (energy) as well as direct costs of conservation (habitat restoration and direct predator control), into a forward looking model of species protection. Opportunity costs of conservation over time are minimized with an explicit target date for meeting species recovery targets; defined as the number of self-sustaining caribou herds, which requires that both habitat and population targets are met by a set date. The model was run under various scenarios including three species recovery criteria, two oil and gas price regimes, and targets for the number of herds to recover from 1 to 12. The derived cost curve follows a typical pattern as costs of recovery per herd increase as the number of herds targeted for recovery increases. The results also show that the opportunity costs for direct predator control are small compared to habitat restoration and protection costs. However, direct predator control is essential for meeting caribou population targets and reducing the risk of extirpation while habitat is recovered over time. Copyright © 2018 Elsevier Ltd. All rights reserved.
Decoding Problem Gamblers' Signals: A Decision Model for Casino Enterprises.
Ifrim, Sandra
2015-12-01
The aim of the present study is to offer a validated decision model for casino enterprises. The model enables its users to perform early detection of problem gamblers and to fulfill their ethical duty of social cost minimization. To this end, the interpretation of casino customers' nonverbal communication is understood as a signal-processing problem. Indicators of problem gambling recommended by Delfabbro et al. (Identifying problem gamblers in gambling venues: final report, 2007) are combined with the Viterbi algorithm into an interdisciplinary model that helps decode signals emitted by casino customers. Model output consists of a historical path of mental states and cumulated social costs associated with a particular client. Groups of problem and non-problem gamblers were simulated to investigate the model's diagnostic capability and its cost minimization ability. Each group consisted of 26 subjects and was subsequently enlarged to 100 subjects. In approximately 95% of the cases, mental states were correctly decoded for problem gamblers. Statistical analysis using planned contrasts revealed that the model is relatively robust to the suppression of signals by casino clientele facing gambling problems as well as to misjudgments made by staff regarding the clients' mental states. Only if the last-mentioned source of error occurs in a very pronounced manner, i.e., if judgment is extremely faulty, might cumulated social costs be distorted.
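To make the signal-processing framing concrete, here is a sketch of Viterbi decoding of a hidden "mental state" sequence from observed behavioral indicators. The states, observation symbols, and all probabilities are invented for the sketch; they are neither the indicators of Delfabbro et al. nor the study's validated parameters.

```python
# Toy Viterbi decoding of hidden mental states from observed indicators (illustrative values).
import numpy as np

states = ["non_problem", "problem"]
symbols = {"calm": 0, "agitated": 1, "chasing_losses": 2}
start = np.array([0.9, 0.1])                     # prior over the initial state
trans = np.array([[0.95, 0.05],                  # state transition probabilities
                  [0.10, 0.90]])
emit = np.array([[0.7, 0.2, 0.1],                # P(observation | non_problem)
                 [0.2, 0.3, 0.5]])               # P(observation | problem)

def viterbi(obs):
    logd = np.log(start) + np.log(emit[:, obs[0]])
    backpointers = []
    for o in obs[1:]:
        scores = logd[:, None] + np.log(trans)   # scores[i, j]: best path ending in i, then i -> j
        backpointers.append(np.argmax(scores, axis=0))
        logd = scores.max(axis=0) + np.log(emit[:, o])
    path = [int(np.argmax(logd))]
    for bp in reversed(backpointers):            # trace the most likely state sequence backwards
        path.append(int(bp[path[-1]]))
    return [states[s] for s in reversed(path)]

observed = [symbols[o] for o in ("calm", "agitated", "chasing_losses", "chasing_losses")]
print(viterbi(observed))
```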
A minimal titration model of the mammalian dynamical heat shock response
NASA Astrophysics Data System (ADS)
Sivéry, Aude; Courtade, Emmanuel; Thommen, Quentin
2016-12-01
Environmental stress, such as oxidative or heat stress, induces the activation of the heat shock response (HSR) and leads to an increase in the level of heat shock proteins (HSPs). These HSPs act as molecular chaperones to maintain cellular proteostasis. Controlled by highly intricate regulatory mechanisms, with stress-induced activation and feedback regulation involving multiple partners, the HSR is still incompletely understood. In this context, we propose a minimal molecular model for the gene regulatory network of the HSR that quantitatively reproduces different heat shock experiments on both heat shock factor 1 (HSF1) and HSP activities. This model, which is based on chemical kinetics laws, is kept at a low dimensionality without altering the biological interpretation of the model dynamics. This simplistic model highlights the titration of HSF1 by chaperones as the guiding line of the network. Moreover, a steady-state analysis of the network reveals three different temperature stress regimes: normal, acute, and chronic, where normal stress corresponds to pseudo thermal adaptation. The protein triage that governs the fate of damaged proteins and the different stress regimes are consequences of the titration mechanism. The simplicity of the present model makes it of interest for the detailed modelling of cross-regulation between the HSR and other major genetic networks like the cell cycle or the circadian clock.
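A toy version of the titration idea can be written in a few kinetic equations: free HSF1 is sequestered by chaperones, and stress-produced damaged proteins compete for the chaperone pool, releasing HSF1 to drive HSP synthesis. The species, rate laws, and constants below are illustrative assumptions, not the published model.

```python
# Toy titration sketch of the heat shock response (all rates and values assumed).
import numpy as np
from scipy.integrate import solve_ivp

kon, koff = 1.0, 0.1            # HSF1:HSP association / dissociation
ksyn, kdeg = 0.5, 0.05          # free-HSF1-driven HSP synthesis, HSP decay
krep = 0.2                      # HSP-mediated clearance of damaged proteins

def stress(t):
    return 1.0 if 20.0 < t < 60.0 else 0.0      # pulse of damaged-protein production

def hsr(t, y):
    F, H, C, D = y              # free HSF1, free HSP, HSF1:HSP complex, damaged proteins
    bind = kon * F * H - koff * C
    dF = -bind
    dC = bind
    dH = -bind + ksyn * F - kdeg * H - krep * H * D
    dD = stress(t) - krep * H * D
    return [dF, dH, dC, dD]

sol = solve_ivp(hsr, (0.0, 120.0), [0.1, 1.0, 0.9, 0.0], max_step=0.5)
i_mid = int(np.searchsorted(sol.t, 40.0))
print("free HSF1 before, during, and after the stress pulse:",
      round(sol.y[0, 0], 3), round(sol.y[0, i_mid], 3), round(sol.y[0, -1], 3))
```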
NASA Astrophysics Data System (ADS)
Kollat, J. B.; Reed, P. M.
2009-12-01
This study contributes the ASSIST (Adaptive Strategies for Sampling in Space and Time) framework for improving long-term groundwater monitoring decisions across space and time while accounting for the influences of systematic model errors (or predictive bias). The ASSIST framework combines contaminant flow-and-transport modeling, bias-aware ensemble Kalman filtering (EnKF) and many-objective evolutionary optimization. Our goal in this work is to provide decision makers with a fuller understanding of the information tradeoffs they must confront when performing long-term groundwater monitoring network design. Our many-objective analysis considers up to 6 design objectives simultaneously and consequently synthesizes prior monitoring network design methodologies into a single, flexible framework. This study demonstrates the ASSIST framework using a tracer study conducted within a physical aquifer transport experimental tank located at the University of Vermont. The tank tracer experiment was extensively sampled to provide high resolution estimates of tracer plume behavior. The simulation component of the ASSIST framework consists of stochastic ensemble flow-and-transport predictions using ParFlow coupled with the Lagrangian SLIM transport model. The ParFlow and SLIM ensemble predictions are conditioned with tracer observations using a bias-aware EnKF. The EnKF allows decision makers to enhance plume transport predictions in space and time in the presence of uncertain and biased model predictions by conditioning them on uncertain measurement data. In this initial demonstration, the position and frequency of sampling were optimized to: (i) minimize monitoring cost, (ii) maximize information provided to the EnKF, (iii) minimize failure to detect the tracer, (iv) maximize the detection of tracer flux, (v) minimize error in quantifying tracer mass, and (vi) minimize error in quantifying the moment of the tracer plume. The results demonstrate that the many-objective problem formulation provides a tremendous amount of information for decision makers. Specifically our many-objective analysis highlights the limitations and potentially negative design consequences of traditional single and two-objective problem formulations. These consequences become apparent through visual exploration of high-dimensional tradeoffs and the identification of regions with interesting compromise solutions. The prediction characteristics of these compromise designs are explored in detail, as well as their implications for subsequent design decisions in both space and time.
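For orientation, the core of the filtering component can be sketched as a standard stochastic ensemble Kalman filter analysis update; the bias-aware extension and the flow-and-transport models are omitted, and all matrix sizes and values here are assumptions rather than the tank experiment's configuration.

```python
# One stochastic EnKF analysis step (illustrative shapes and values; no bias term).
import numpy as np

rng = np.random.default_rng(1)
n_state, n_obs, n_ens = 50, 5, 30
X = rng.normal(size=(n_state, n_ens))                   # forecast ensemble (e.g., concentrations)
H = np.zeros((n_obs, n_state))
H[np.arange(n_obs), np.arange(0, n_state, 10)] = 1.0    # observe every 10th model cell
R = 0.1 * np.eye(n_obs)                                 # observation error covariance
y = rng.normal(size=n_obs)                              # synthetic tracer observations

A = X - X.mean(axis=1, keepdims=True)                   # ensemble anomalies
Pf = A @ A.T / (n_ens - 1)                              # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)          # Kalman gain
Yp = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T  # perturbed observations
Xa = X + K @ (Yp - H @ X)                               # analysis (updated) ensemble
print("analysis-mean values at the observed cells:", np.round(H @ Xa.mean(axis=1), 2))
```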
Cobelli, Claudio; Dalla Man, Chiara; Toffolo, Gianna; Basu, Rita; Vella, Adrian; Rizza, Robert
2014-01-01
The simultaneous assessment of insulin action, secretion, and hepatic extraction is key to understanding postprandial glucose metabolism in nondiabetic and diabetic humans. We review the oral minimal method (i.e., models that allow the estimation of insulin sensitivity, β-cell responsivity, and hepatic insulin extraction from a mixed-meal or an oral glucose tolerance test). Both of these oral tests are more physiologic and simpler to administer than those based on an intravenous test (e.g., a glucose clamp or an intravenous glucose tolerance test). The focus of this review is on indices provided by physiological-based models and their validation against the glucose clamp technique. We discuss first the oral minimal model method rationale, data, and protocols. Then we present the three minimal models and the indices they provide. The disposition index paradigm, a widely used β-cell function metric, is revisited in the context of individual versus population modeling. Adding a glucose tracer to the oral dose significantly enhances the assessment of insulin action by segregating insulin sensitivity into its glucose disposal and hepatic components. The oral minimal model method, by quantitatively portraying the complex relationships between the major players of glucose metabolism, is able to provide novel insights regarding the regulation of postprandial metabolism. PMID:24651807
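For reference, one commonly quoted form of the oral glucose minimal model (notation and details may differ slightly from the models reviewed above) is:

```latex
\begin{aligned}
\dot{G}(t) &= -\bigl[S_G + X(t)\bigr]\,G(t) + S_G\,G_b + \frac{Ra(t)}{V}, & G(0) &= G_b,\\
\dot{X}(t) &= -p_2\,\bigl[X(t) - S_I\,(I(t) - I_b)\bigr], & X(0) &= 0,
\end{aligned}
```

where G is plasma glucose, X insulin action, I plasma insulin, Ra(t) the rate of appearance of ingested glucose, V the distribution volume, S_G glucose effectiveness, S_I insulin sensitivity, and G_b, I_b the basal values.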
Babela, Robert; Jarcuska, Pavol; Uraz, Vladimir; Krčméry, Vladimír; Jadud, Branislav; Stevlik, Jan; Gould, Ian M
2017-11-01
No previous analyses have attempted to determine optimal therapy for upper respiratory tract infections (URTIs) on the basis of cost-minimization models and the prevalence of antimicrobial resistance among respiratory pathogens in Slovakia. This investigation compares macrolides and cephalosporins for empirical therapy and considers this new tool as a potential input to the antibiotic policy decision-making process. We employed a decision tree model to determine the threshold level of macrolide and cephalosporin resistance among community respiratory pathogens at which cephalosporins or macrolides become cost-minimizing. To obtain information on clinical outcomes and the cost of URTIs, a systematic review of the literature was performed. The cost-minimization model of URTI treatment was derived from the literature review and published models. We found that the mean cost of empirical treatment of URTIs with macrolides was €93.27 when the percentage of resistant Streptococcus pneumoniae in the community was 0%; at 5%, the mean cost was €96.45; at 10%, €99.63; at 20%, €105.99; and at 30%, €112.36. Our model demonstrated that when the percentage of macrolide-resistant Streptococcus pneumoniae exceeds 13.8%, use of empirical cephalosporins rather than macrolides minimizes the treatment cost of URTIs. Empirical macrolide therapy is less expensive than cephalosporin therapy for URTIs unless macrolide resistance exceeds 13.8% in the community. These results have important antibiotic policy implications, since the presented model can be used as an additional decision-making tool for new guidelines and reimbursement processes by local authorities in an era of continually increasing antibiotic resistance.
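The break-even logic can be reproduced with simple arithmetic on the figures quoted above. The cephalosporin arm cost used below is a hypothetical value inferred from the reported 13.8% threshold, not a number given in the abstract.

```python
# Hedged sketch: linear interpolation of the macrolide arm costs reported above
# and the break-even resistance prevalence against a hypothetical,
# resistance-independent cephalosporin arm cost.
macrolide_cost_at_0 = 93.27       # EUR, 0% resistant S. pneumoniae
macrolide_cost_at_30 = 112.36     # EUR, 30% resistant
slope = (macrolide_cost_at_30 - macrolide_cost_at_0) / 30.0  # EUR per % resistance

ceph_cost = 102.0                 # hypothetical cephalosporin arm cost
threshold = (ceph_cost - macrolide_cost_at_0) / slope
print(f"macrolides stop being cost-minimizing above ~{threshold:.1f}% resistance")
# A cephalosporin cost near 102 EUR reproduces a threshold close to the reported 13.8%.
```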
NASA Astrophysics Data System (ADS)
Teeples, Ronald; Glyer, David
1987-05-01
Both policy and technical analysis of water delivery systems have been based on cost functions that are inconsistent with or are incomplete representations of the neoclassical production functions of economics. We present a full-featured production function model of water delivery which can be estimated from a multiproduct, dual cost function. The model features implicit prices for own-water inputs and is implemented as a jointly estimated system of input share equations and a translog cost function. Likelihood ratio tests are performed showing that a minimally constrained, full-featured production function is a necessary specification of the water delivery operations in our sample. This, plus the model's highly efficient and economically correct parameter estimates, confirms the usefulness of a production function approach to modeling the economic activities of water delivery systems.
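A generic multiproduct translog cost function with cost-share equations obtained by Shephard's lemma has the textbook form below; this is a standard specification and not necessarily the exact one estimated in the study.

```latex
\ln C = \alpha_0 + \sum_i \alpha_i \ln p_i + \sum_m \beta_m \ln y_m
      + \tfrac{1}{2}\sum_i \sum_j \gamma_{ij} \ln p_i \ln p_j
      + \tfrac{1}{2}\sum_m \sum_n \delta_{mn} \ln y_m \ln y_n
      + \sum_i \sum_m \rho_{im} \ln p_i \ln y_m ,
\qquad
S_i = \frac{\partial \ln C}{\partial \ln p_i}
    = \alpha_i + \sum_j \gamma_{ij} \ln p_j + \sum_m \rho_{im} \ln y_m ,
```

with linear homogeneity in input prices imposed through the restrictions that the α_i sum to one and that the γ_ij and ρ_im sum to zero over i. Jointly estimating the cost function with the input share equations is what yields the efficiency gain in the parameter estimates mentioned above.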
NASA Technical Reports Server (NTRS)
Jordan, F. L., Jr.
1980-01-01
As part of basic research to improve aerial applications technology, methods were developed at the Langley Vortex Research Facility to simulate and measure deposition patterns of aerially-applied sprays and granular materials by means of tests with small-scale models of agricultural aircraft and dynamically-scaled test particles. Interactions between the aircraft wake and the dispersed particles are being studied with the objective of modifying wake characteristics and dispersal techniques to increase swath width, improve deposition pattern uniformity, and minimize drift. The particle scaling analysis, test methods for particle dispersal from the model aircraft, visualization of particle trajectories, and measurement and computer analysis of test deposition patterns are described. An experimental validation of the scaling analysis and test results that indicate improved control of chemical drift by use of winglets are presented to demonstrate test methods.
Targeted tuberculosis contact investigation saves money without sacrificing health.
Pisu, Maria; Gerald, Joe; Shamiyeh, James E; Bailey, William C; Gerald, Lynn B
2009-01-01
Health departments require an efficient strategy to investigate individuals exposed to Mycobacterium tuberculosis. The contact priority model (CPM) uses a decision rule to minimize testing of low-risk contacts; however, its impact on costs and disease control is unknown. A cost-effectiveness analysis compared the CPM with the traditional concentric circle approach (CCA) in a simulated population of 1000 healthy, community-dwelling adults with a 10% background rate of latent tuberculosis (TB) infection. The analysis was conducted from the perspective of the Alabama Department of Public Health. Model inputs were derived from the literature and the Alabama Department of Public Health. Lifetime costs (2004 dollars) and outcomes were discounted 3 percent annually. Incremental cost-effectiveness ratios were used to compare the strategies. Over the lifetime of 1000 simulated contacts, the CPM saved $45,000 but led to 0.5 additional TB cases and 0.24 fewer years of life. The CCA is more effective than the CPM, but it costs $92,934 to prevent one additional TB case and $185,920 to gain one additional life year. The CPM reduces costs with minimal loss of disease control and is a viable alternative to the CCA under most conditions.
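The incremental cost-effectiveness ratios can be illustrated directly from the rounded deltas quoted above; the published ratios use unrounded model outputs, so these back-of-envelope values only approximate them.

```python
# ICER = incremental cost / incremental effect, CCA relative to CPM
delta_cost = 45_000        # USD saved by CPM, i.e. extra cost of CCA
delta_tb_cases = 0.5       # extra TB cases under CPM vs CCA
delta_life_years = 0.24    # life years lost under CPM vs CCA

icer_per_case = delta_cost / delta_tb_cases         # ~90,000 (reported: 92,934)
icer_per_life_year = delta_cost / delta_life_years  # ~187,500 (reported: 185,920)
print(icer_per_case, icer_per_life_year)
```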
Panel Flutter Emulation Using a Few Concentrated Forces
NASA Astrophysics Data System (ADS)
Dhital, Kailash; Han, Jae-Hung
2018-04-01
The objective of this paper is to study the feasibility of panel flutter emulation using a few concentrated forces. The concentrated forces are taken to be equivalent to the aerodynamic forces, with the equivalence established using the surface spline method and the principle of virtual work. The structural modeling of the plate is based on classical plate theory and the aerodynamic modeling on piston theory. The present approach differs from linear panel flutter analysis in how the modal aerodynamic forces are formed, while the structural properties remain unchanged. Solutions to the flutter problem are obtained numerically using the standard eigenvalue procedure. A small number of concentrated forces was considered, with an optimization procedure used to determine their locations. The optimization minimizes the error between the flutter bounds obtained from the emulated and the linear flutter analyses. Emulated flutter results for a square plate under four different boundary conditions, using six concentrated forces, are obtained with minimal error relative to the reference values. The results demonstrate the workability and viability of using concentrated forces to emulate real panel flutter. In addition, the paper includes parametric studies of linear panel flutter for which adequate published results are not available.
NASA Astrophysics Data System (ADS)
Ibrahim, MH Wan; Hadi, MN Abdul; Hooi Min, Yee
2018-04-01
Tensioned fabric structures (TFS) can be realized with a variety of surface forms, and different minimal surfaces have been studied as possible forms for such structures. The form considered here is the Handkerchief Surface, chosen because it is a minimal surface that has not been studied by other researchers; no previous work has used the Handkerchief Surface as the basis for a tensioned fabric structure. The aim of the study is to propose converged shapes of the Handkerchief Surface with parameters u=v=0.4 and u=v=1.0. Form-finding is carried out using a nonlinear analysis method. The results show that, for the Handkerchief TFS models with u=v=0.4 and u=v=1.0, the total warp and fill stress deviation is less than 0.01. The initial equilibrium shapes of the Handkerchief tensioned fabric structure models with u=v=0.4 and u=v=1.0 therefore correspond to an equal-tension surface. A tensioned fabric structure in the form of the Handkerchief Surface is a structurally viable surface form for engineers to consider.
Thangarajah, Tanujan; Shahbazi, Shirin; Pendegrass, Catherine J; Lambert, Simon; Alexander, Susan; Blunn, Gordon W
2016-01-01
Tendon-bone healing following rotator cuff repairs is mainly impaired by poor tissue quality. Demineralised bone matrix promotes healing of the tendon-bone interface but its role in the treatment of tendon tears with retraction has not been investigated. We hypothesized that cortical demineralised bone matrix used with minimally manipulated mesenchymal stem cells will result in improved function and restoration of the tendon-bone interface with no difference between xenogenic and allogenic scaffolds. In an ovine model, the patellar tendon was detached from the tibial tuberosity and a complete distal tendon transverse defect measuring 1 cm was created. Suture anchors were used to reattach the tendon and xenogenic demineralised bone matrix + minimally manipulated mesenchymal stem cells (n = 5), or allogenic demineralised bone matrix + minimally manipulated mesenchymal stem cells (n = 5) were used to bridge the defect. Graft incorporation into the tendon and its effect on regeneration of the enthesis was assessed using histomorphometry. Force plate analysis was used to assess functional recovery. Compared to the xenograft, the allograft was associated with significantly higher functional weight bearing at 6 (P = 0.047), 9 (P = 0.028), and 12 weeks (P = 0.009). In the allogenic group this was accompanied by greater remodeling of the demineralised bone matrix into tendon-like tissue in the region of the defect (p = 0.015), and a more direct type of enthesis characterized by significantly more fibrocartilage (p = 0.039). No failures of tendon-bone healing were noted in either group. Demineralised bone matrix used with minimally manipulated mesenchymal stem cells promotes healing of the tendon-bone interface in an ovine model of acute tendon retraction, with superior mechanical and histological results associated with use of an allograft.
Corrêa, Elizabeth Nappi; Retondario, Anabelle; Alves, Mariane de Almeida; Bricarello, Liliana Paula; Rockenbach, Gabriele; Hinnig, Patrícia de Fragas; Neves, Janaina das; Vasconcelos, Francisco de Assis Guedes de
2018-03-29
Access to food retailers is an environmental determinant that influences what people consume. This study aimed to test the association between the use of food outlets and schoolchildren's intake of minimally processed and ultra-processed foods. This was a cross-sectional study conducted in public and private schools in Florianópolis, state of Santa Catarina, southern Brazil, from September 2012 to June 2013. The sample consisted of randomly selected clusters of schoolchildren aged 7 to 14 years, who were attending 30 schools. Parents or guardians provided socioeconomic and demographic data and answered questions about use of food outlets. Dietary intake was surveyed using a dietary recall questionnaire based on the previous day's intake. The foods or food groups were classified according to the level of processing. Negative binomial regression was used for data analysis. We included 2,195 schoolchildren in the study. We found that buying foods from snack bars or fast-food outlets was associated with the intake frequency of ultra-processed foods among 11-14 years old in an adjusted model (incidence rate ratio, IRR: 1.11; 95% confidence interval, CI: 1.01;1.23). Use of butchers was associated with the intake frequency of unprocessed/minimally processed foods among children 11-14 years old in the crude model (IRR: 1.11; 95% CI: 1.01;1.22) and in the adjusted model (IRR: 1.11; 95% CI: 1.06;1.17). Use of butchers was associated with higher intake of unprocessed/minimally processed foods while use of snack bars or fast-food outlets may have a negative impact on schoolchildren's dietary habits.
Optimal trajectories for an aerospace plane. Part 1: Formulation, results, and analysis
NASA Technical Reports Server (NTRS)
Miele, Angelo; Lee, W. Y.; Wu, G. D.
1990-01-01
The optimization of the trajectories of an aerospace plane is discussed. This is a hypervelocity vehicle capable of achieving orbital speed, while taking off horizontally. The vehicle is propelled by four types of engines: turbojet engines for flight at subsonic speeds/low supersonic speeds; ramjet engines for flight at moderate supersonic speeds/low hypersonic speeds; scramjet engines for flight at hypersonic speeds; and rocket engines for flight at near-orbital speeds. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied under the following assumptions: the turbojet portion of the trajectory has been completed; the aerospace plane is controlled via the angle of attack and the power setting; the aerodynamic model is the generic hypersonic aerodynamics model example (GHAME). Concerning the engine model, three options are considered: (EM1), a ramjet/scramjet combination in which the scramjet specific impulse tends to a nearly-constant value at large Mach numbers; (EM2), a ramjet/scramjet combination in which the scramjet specific impulse decreases monotonically at large Mach numbers; and (EM3), a ramjet/scramjet/rocket combination in which, owing to stagnation temperature limitations, the scramjet operates only at M approx. less than 15; at higher Mach numbers, the scramjet is shut off and the aerospace plane is driven only by the rocket engines. Under the above assumptions, four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (P1) minimization of the weight of fuel consumed; (P2) minimization of the peak dynamic pressure; (P3) minimization of the peak heating rate; and (P4) minimization of the peak tangential acceleration.
NASA Astrophysics Data System (ADS)
Filizola, Marta; Carteni-Farina, Maria; Perez, Juan J.
1999-07-01
3D models of the opioid receptors μ, δ and κ were constructed using BUNDLE, an in-house program to build de novo models of G-protein coupled receptors at the atomic level. Once the three opioid receptors were constructed, and before any energy refinement, the models were assessed for their compatibility with the results available from point mutation studies carried out on these receptors. In a subsequent step, three selective antagonists of the three receptors (naltrindole, naltrexone and nor-binaltorphimine) were docked onto each of the three receptors and subsequently energy minimized. The nine resulting complexes were checked for their ability to explain known results of structure-activity studies. Once the models were validated, the distances between different residues of the receptors and the ligands were computed. This analysis allowed us to identify key residues tentatively involved in direct interaction with the ligand.
Rider, Lisa G.; Aggarwal, Rohit; Pistorio, Angela; Bayat, Nastaran; Erman, Brian; Feldman, Brian M.; Huber, Adam M.; Cimaz, Rolando; Cuttica, Rubén J.; de Oliveira, Sheila Knupp; Lindsley, Carol B.; Pilkington, Clarissa A.; Punaro, Marilyn; Ravelli, Angelo; Reed, Ann M.; Rouster-Stevens, Kelly; van Royen, Annet; Dressler, Frank; Magalhaes, Claudia Saad; Constantin, Tamás; Davidson, Joyce E.; Magnusson, Bo; Russo, Ricardo; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A.; Miller, Frederick W.; Vencovsky, Jiri; Ruperto, Nicolino
2017-01-01
Objective Develop response criteria for juvenile dermatomyositis (JDM). Methods We analyzed the performance of 312 definitions that used core set measures (CSM) from either the International Myositis Assessment and Clinical Studies Group (IMACS) or the Pediatric Rheumatology International Trials Organization (PRINTO) and were derived from natural history data and a conjoint-analysis survey. They were further validated in the PRINTO trial of prednisone alone compared to prednisone with methotrexate or cyclosporine and the Rituximab in Myositis trial. Experts considered 14 top-performing candidate criteria based on their performance characteristics and clinical face validity using nominal group technique at a consensus conference. Results Consensus was reached for a conjoint analysis–based continuous model with a Total Improvement Score of 0-100, using absolute percent change in CSM with thresholds for minimal (≥30 points), moderate (≥45), and major improvement (≥70). The same criteria were chosen for adult dermatomyositis/polymyositis with differing thresholds for improvement. The sensitivity and specificity were 89% and 91-98% for minimal, 92-94% and 94-99% for moderate, and 91-98% and 85-85% for major improvement, respectively, in JDM patient cohorts using the IMACS and PRINTO CSM. These criteria were validated in the PRINTO trial for differentiating between treatment arms for minimal and moderate improvement (P=0.009–0.057) and in the Rituximab trial for significantly differentiating the physician rating of improvement (P<0.006). Conclusion The response criteria for JDM was a conjoint analysis–based model using a continuous improvement score based on absolute percent change in CSM, with thresholds for minimal, moderate, and major improvement. PMID:28382787
Çakιr, Tunahan; Alsan, Selma; Saybaşιlι, Hale; Akιn, Ata; Ülgen, Kutlu Ö
2007-01-01
Background It is a daunting task to identify all the metabolic pathways of brain energy metabolism and develop a dynamic simulation environment that will cover a time scale ranging from seconds to hours. To simplify this task and make it more practicable, we undertook stoichiometric modeling of brain energy metabolism with the major aim of including the main interacting pathways in and between astrocytes and neurons. Model The constructed model includes central metabolism (glycolysis, pentose phosphate pathway, TCA cycle), lipid metabolism, reactive oxygen species (ROS) detoxification, amino acid metabolism (synthesis and catabolism), the well-known glutamate-glutamine cycle, other coupling reactions between astrocytes and neurons, and neurotransmitter metabolism. This is, to our knowledge, the most comprehensive attempt at stoichiometric modeling of brain metabolism to date in terms of its coverage of a wide range of metabolic pathways. We then attempted to model the basal physiological behaviour and hypoxic behaviour of the brain cells where astrocytes and neurons are tightly coupled. Results The reconstructed stoichiometric reaction model included 217 reactions (184 internal, 33 exchange) and 216 metabolites (183 internal, 33 external) distributed in and between astrocytes and neurons. Flux balance analysis (FBA) techniques were applied to the reconstructed model to elucidate the underlying cellular principles of neuron-astrocyte coupling. Simulation of resting conditions under the constraints of maximization of glutamate/glutamine/GABA cycle fluxes between the two cell types with subsequent minimization of Euclidean norm of fluxes resulted in a flux distribution in accordance with literature-based findings. As a further validation of our model, the effect of oxygen deprivation (hypoxia) on fluxes was simulated using an FBA-derivative approach, known as minimization of metabolic adjustment (MOMA). The results show the power of the constructed model to simulate disease behaviour on the flux level, and its potential to analyze cellular metabolic behaviour in silico. Conclusion The predictive power of the constructed model for the key flux distributions, especially central carbon metabolism and glutamate-glutamine cycle fluxes, and its application to hypoxia is promising. The resultant acceptable predictions strengthen the power of such stoichiometric models in the analysis of mammalian cell metabolism. PMID:18070347
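As a toy counterpart of the flux balance calculations described above, FBA reduces to a linear programme: maximise an objective flux subject to steady-state mass balance S v = 0 and flux bounds. The network below is a made-up 3-metabolite example, not the reconstructed brain model; MOMA would instead minimise the Euclidean distance to a reference flux vector under the same constraints (a quadratic programme).

```python
import numpy as np
from scipy.optimize import linprog

S = np.array([            # stoichiometric matrix, rows = metabolites A, B, C
    [ 1, -1,  0,  0],
    [ 0,  1, -1, -1],
    [ 0,  0,  1, -1],
])
lb = [0, 0, 0, 0]         # lower flux bounds
ub = [10, 10, 10, 10]     # upper flux bounds
c = [0, 0, 0, -1]         # maximise flux v4 (linprog minimises, hence the sign)

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=list(zip(lb, ub)))
print("optimal steady-state flux distribution:", res.x)
```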
Sampling strategies based on singular vectors for assimilated models in ocean forecasting systems
NASA Astrophysics Data System (ADS)
Fattorini, Maria; Brandini, Carlo; Ortolani, Alberto
2016-04-01
Meteorological and oceanographic models need observations, not only as ground truth to verify model quality, but also to keep the model forecast error acceptable: through data assimilation techniques, which merge measured and modelled data, the natural divergence of numerical solutions from reality can be reduced or controlled, and a more reliable solution, called the analysis, is computed. Although this concept is valid in general, its application, especially in oceanography, raises many problems for three main reasons: the difficulty ocean models have in reaching an acceptable state of equilibrium, the high cost of measurements, and the difficulty of carrying them out. The performance of data assimilation procedures depends on the particular observation network in use, well beyond the background quality and the assimilation method used. In this study we present results on the strong impact of the dataset configuration, in particular the positions of the measurements, on the overall forecasting reliability of an ocean model. The aim is to identify operational criteria to support the design of marine observation networks at regional scale. In order to identify the observation network that minimizes the forecast error, a methodology based on the singular value decomposition of the tangent linear model is proposed. Such a method can give strong indications about the local error dynamics. In addition, to avoid redundancy in the information contained in the data, a minimal distance among data positions has been chosen on the basis of a spatial correlation analysis of the hydrodynamic fields under investigation. This methodology has been applied to the choice of data positions starting from simplified models, such as an idealized double-gyre model and a quasi-geostrophic one. Model configurations and data assimilation are based on available ROMS routines, in which a variational assimilation algorithm (4D-Var) is included as part of the code. These first applications have provided encouraging results in terms of increased predictability time and reduced forecast error, also improving the quality of the analysis used to recover the real circulation patterns from a first guess quite far from the real state.
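The singular-vector idea can be illustrated with a toy calculation: the leading right singular vectors of a linearised forecast propagator identify the state-space directions of fastest error growth, and grid points with the largest loadings in those vectors are natural candidates for observation sites. The propagator below is a random matrix, purely for illustration, not a tangent linear ROMS operator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20                                     # number of grid points in the toy state
M = rng.normal(size=(n, n)) / np.sqrt(n)   # stand-in tangent linear propagator

U, s, Vt = np.linalg.svd(M)
leading = np.abs(Vt[0])                    # initial-time structure of fastest growth
candidate_sites = np.argsort(leading)[::-1][:3]
print("suggested observation locations:", candidate_sites)
```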
Constraints on B and Higgs physics in minimal low energy supersymmetric models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carena, Marcela; /Fermilab; Menon, A.
2006-03-01
We study the implications of minimal flavor violating low energy supersymmetry scenarios for the search of new physics in the B and Higgs sectors at the Tevatron collider and the LHC. We show that the already stringent Tevatron bound on the decay rate B_s → μ⁺μ⁻ sets strong constraints on the possibility of generating large corrections to the mass difference ΔM_s of the B_s eigenstates. We also show that the B_s → μ⁺μ⁻ bound together with the constraint on the branching ratio of the rare decay b → sγ has strong implications for the search of light, non-standard Higgs bosons at hadron colliders. In doing this, we demonstrate that the expressions formerly derived for the analysis of the double penguin contributions in the kaon sector need to be corrected by additional terms for a realistic analysis of these effects. We also study a specific non-minimal flavor violating scenario, where there are flavor changing gluino-squark-quark interactions, governed by the CKM matrix elements, and show that the B and Higgs physics constraints are similar to the ones in the minimal flavor violating case. Finally we show that, in scenarios like electroweak baryogenesis which have light stops and charginos, there may be enhanced effects on the B and K mixing parameters, without any significant effect on the rate of B_s → μ⁺μ⁻.
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2017-02-01
In the present paper, the minimal investment risk for a portfolio optimization problem with imposed budget and investment concentration constraints is considered using replica analysis. Since the minimal investment risk is influenced by the investment concentration constraint (as well as the budget constraint), it is intuitive that the minimal investment risk for the problem with an investment concentration constraint can be larger than that without the constraint (that is, with only the budget constraint). Moreover, a numerical experiment shows the effectiveness of our proposed analysis. In contrast, the standard operations research approach failed to identify accurately the minimal investment risk of the portfolio optimization problem.
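A numerical counterpart of this setting can be set up directly: minimise an empirical investment risk under a budget constraint (the weights sum to the number of assets) and an investment concentration constraint (the squared weights sum to a fixed multiple of the number of assets). The return data, the value of the concentration parameter, and the exact risk normalisation below are synthetic placeholders rather than the replica-analysis setup of the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, p = 20, 100                      # assets, return scenarios
X = rng.normal(size=(p, N))         # synthetic scenario returns
tau = 1.5                           # hypothetical concentration level

risk = lambda w: 0.5 * np.mean((X @ w) ** 2)
cons = (
    {"type": "eq", "fun": lambda w: w.sum() - N},            # budget constraint
    {"type": "eq", "fun": lambda w: (w ** 2).sum() - tau * N},  # concentration constraint
)
res = minimize(risk, np.ones(N), constraints=cons, method="SLSQP")
print("minimal risk per asset:", res.fun / N)
```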
Method for improving accuracy in full evaporation headspace analysis.
Xie, Wei-Qi; Chai, Xin-Sheng
2017-05-01
We report a new headspace analytical method in which multiple headspace extraction is incorporated into the full evaporation technique. The pressure uncertainty caused by changes in the solid content of the samples has a great impact on measurement accuracy in conventional full evaporation headspace analysis. The results (using an ethanol solution as the model sample) showed that the present technique is effective in minimizing this problem. The proposed full evaporation multiple headspace extraction analysis technique is also automated and practical, and could greatly broaden the applications of full-evaporation-based headspace analysis. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
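The quantitation idea behind multiple headspace extraction (MHE) can be sketched as follows: successive extraction peak areas decay approximately geometrically, so the total analyte signal can be recovered by extrapolating the series even when a single extraction is not exhaustive. The peak areas below are made-up numbers for an ethanol-like model sample, and the calibration details of the paper may differ.

```python
import numpy as np

areas = np.array([1000.0, 620.0, 384.0, 238.0])    # consecutive extraction peak areas
i = np.arange(len(areas))
slope, intercept = np.polyfit(i, np.log(areas), 1)  # ln A_i = ln A_1 + i * ln q
q = np.exp(slope)                                   # common ratio of the series
total_signal = areas[0] / (1.0 - q)                 # sum of the geometric series
print(f"q = {q:.3f}, extrapolated total area = {total_signal:.0f}")
```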
Cost-effectiveness of minimally invasive sacroiliac joint fusion.
Cher, Daniel J; Frasco, Melissa A; Arnold, Renée Jg; Polly, David W
2016-01-01
Sacroiliac joint (SIJ) disorders are common in patients with chronic lower back pain. Minimally invasive surgical options have been shown to be effective for the treatment of chronic SIJ dysfunction. To determine the cost-effectiveness of minimally invasive SIJ fusion. Data from two prospective, multicenter, clinical trials were used to inform a Markov process cost-utility model to evaluate cumulative 5-year health quality and costs after minimally invasive SIJ fusion using triangular titanium implants or non-surgical treatment. The analysis was performed from a third-party perspective. The model specifically incorporated variation in resource utilization observed in the randomized trial. Multiple one-way and probabilistic sensitivity analyses were performed. SIJ fusion was associated with a gain of approximately 0.74 quality-adjusted life years (QALYs) at a cost of US$13,313 per QALY gained. In multiple one-way sensitivity analyses all scenarios resulted in an incremental cost-effectiveness ratio (ICER) <$26,000/QALY. Probabilistic analyses showed a high degree of certainty that the maximum ICER for SIJ fusion was less than commonly selected thresholds for acceptability (mean ICER =$13,687, 95% confidence interval $5,162-$28,085). SIJ fusion provided potential cost savings per QALY gained compared to non-surgical treatment after a treatment horizon of greater than 13 years. Compared to traditional non-surgical treatments, SIJ fusion is a cost-effective, and, in the long term, cost-saving strategy for the treatment of SIJ dysfunction due to degenerative sacroiliitis or SIJ disruption.
Cost-effectiveness of minimally invasive sacroiliac joint fusion
Cher, Daniel J; Frasco, Melissa A; Arnold, Renée JG; Polly, David W
2016-01-01
Background Sacroiliac joint (SIJ) disorders are common in patients with chronic lower back pain. Minimally invasive surgical options have been shown to be effective for the treatment of chronic SIJ dysfunction. Objective To determine the cost-effectiveness of minimally invasive SIJ fusion. Methods Data from two prospective, multicenter, clinical trials were used to inform a Markov process cost-utility model to evaluate cumulative 5-year health quality and costs after minimally invasive SIJ fusion using triangular titanium implants or non-surgical treatment. The analysis was performed from a third-party perspective. The model specifically incorporated variation in resource utilization observed in the randomized trial. Multiple one-way and probabilistic sensitivity analyses were performed. Results SIJ fusion was associated with a gain of approximately 0.74 quality-adjusted life years (QALYs) at a cost of US$13,313 per QALY gained. In multiple one-way sensitivity analyses all scenarios resulted in an incremental cost-effectiveness ratio (ICER) <$26,000/QALY. Probabilistic analyses showed a high degree of certainty that the maximum ICER for SIJ fusion was less than commonly selected thresholds for acceptability (mean ICER =$13,687, 95% confidence interval $5,162–$28,085). SIJ fusion provided potential cost savings per QALY gained compared to non-surgical treatment after a treatment horizon of greater than 13 years. Conclusion Compared to traditional non-surgical treatments, SIJ fusion is a cost-effective, and, in the long term, cost-saving strategy for the treatment of SIJ dysfunction due to degenerative sacroiliitis or SIJ disruption. PMID:26719717
Lin, Chia-Cheng; Lin, Hao-Jan; Lin, Yun-Ho; Sugiatno, Erwan; Ruslin, Muhammad; Su, Chen-Yao; Ou, Keng-Liang; Cheng, Han-Yi
2017-05-01
The purpose of the present study was to examine thermal damage and tissue sticking after the use of a minimally invasive electrosurgical device with a nanostructured surface treatment produced by a femtosecond laser pulse (FLP) technique. To use an electrosurgical device safely in clinical surgery, it is important to decrease thermal damage to surrounding tissues. The surface characteristics and morphology of the FLP layer were evaluated using optical microscopy, scanning electron microscopy, and transmission electron microscopy; elemental analysis was performed using energy-dispersive X-ray spectroscopy, grazing incidence X-ray diffraction, and X-ray photoelectron spectroscopy. In the animal model, monopolar electrosurgical devices were used to create lesions in the legs of 30 adult rats. Animals were sacrificed for investigation at 0, 3, 7, 14, and 28 days postoperatively. Results indicated that thermal damage and sticking were significantly reduced when a minimally invasive electrosurgical instrument with an FLP layer was used. Temperatures decreased while film thickness increased. Thermographic data revealed that surgical temperatures in the animal model were significantly lower with the FLP electrosurgical device than with the untreated one. Furthermore, the FLP device created a relatively small area of thermal damage. In summary, the biomedical nanostructured layer reduced thermal damage and promoted antisticking properties when used on a minimally invasive electrosurgical device. © 2016 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 105B: 865-873, 2017.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wenzel, Tom P.
This report presents a new approach to analyze the relationship between vehicle mass and risk: tracking fatality risk by vehicle model year and mass, for individual vehicle models. This approach is appealing as it greatly minimizes the influence of driver characteristics and behavior, and crash circumstances, on fatality risk. However, only the most popular vehicle models, with the largest number of fatalities, can be analyzed in this manner. While the analysis of all vehicle models of a given type suggests that there is a relationship between increased mass and fatality risk, analysis of the ten most popular four-door car models separately suggests that this relationship is weak: in many cases when the mass of a specific vehicle model is increased societal fatality risk is unchanged or even increases. These results suggest that increasing the mass of an individual vehicle model does not necessarily lead to decreased societal fatality risk.
Computer analysis of railcar vibrations
NASA Technical Reports Server (NTRS)
Vlaminck, R. R.
1975-01-01
Computer models and techniques for calculating railcar vibrations are discussed along with criteria for vehicle ride optimization. The effects on vibration of car body structural dynamics, suspension system parameters, vehicle geometry, and wheel and rail excitation are presented. Ride quality vibration data collected on the state-of-the-art car and the standard light rail vehicle are compared to computer predictions. The results show that computer analysis of the vehicle can be performed at relatively low cost in a short period of time. The analysis permits optimization of the design as it progresses and minimizes the possibility of excessive vibration on production vehicles.
Punctuated equilibrium dynamics in human communications
NASA Astrophysics Data System (ADS)
Peng, Dan; Han, Xiao-Pu; Wei, Zong-Wen; Wang, Bing-Hong
2015-10-01
A minimal network-based model incorporating individual interactions is proposed to study the non-Poisson statistical properties of human behavior: individuals in the system interact with their neighbors, the probability of an individual acting correlates with its activity, and all individuals involved in an action change their activities randomly. The model reproduces a variety of spatio-temporal patterns observed in empirical studies of daily human communications, providing insight into various human activities and covering a range of realistic social interacting systems, in particular an intriguing bimodal phenomenon. This model bridges priority queueing theory and punctuated equilibrium dynamics, and our modeling and analysis are likely to shed light on non-Poisson phenomena in many complex systems.
Improving the performance of minimizers and winnowing schemes
Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl
2017-01-01
Abstract Motivation: The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. Results: We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of its worse behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. Availability and Implementation: The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git. Contact: gmarcais@cs.cmu.edu or carlk@cs.cmu.edu PMID:28881970
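The scheme itself is short enough to sketch: within every window of w consecutive k-mers, keep the smallest k-mer under a chosen ordering; swapping the lexicographic order for a randomized (hash-based) order is the kind of change the analysis above recommends. The sequence, k, and w below are arbitrary examples, and this sketch is not the authors' implementation.

```python
import hashlib

def minimizers(seq, k, w, order=lambda kmer: kmer):
    """Return the positions of the selected k-mers (one minimizer per window)."""
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    selected = set()
    for start in range(len(kmers) - w + 1):
        window = kmers[start:start + w]
        best = min(range(w), key=lambda j: order(window[j]))
        selected.add(start + best)
    return sorted(selected)

# randomized ordering via a hash, instead of the lexicographic default
random_order = lambda kmer: hashlib.sha1(kmer.encode()).digest()

seq = "ACGTACGTTTTTTTACGGA"
print(minimizers(seq, k=5, w=4))                       # lexicographic ordering
print(minimizers(seq, k=5, w=4, order=random_order))   # randomized ordering
```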
System dynamic modeling on construction waste management in Shenzhen, China.
Tam, Vivian W Y; Li, Jingru; Cai, Hong
2014-05-01
This article examines the complexity of construction waste management in Shenzhen, Mainland China. An in-depth analysis of the inherent management phases of waste generation, transportation, recycling, landfilling and illegal dumping is presented. A system dynamics model is developed using the Stella software. The effects of landfill charges and of penalties for illegal dumping are simulated. The results show that the implementation of a comprehensive policy covering both landfill charges and illegal dumping penalties can effectively control illegal dumping behavior and achieve comprehensive construction waste minimization. This article provides important recommendations for effective policy implementation and explores new perspectives for Shenzhen policy makers.
Russell, Heidi; Swint, J. Michael; Lal, Lincy; Meza, Jane; Walterhouse, David; Hawkins, Douglas S.; Okcu, M. Fatih
2015-01-01
Background Recent Children’s Oncology Group trials for low-risk rhabdomyosarcoma attempted to reduce therapy while maintaining excellent outcomes. D9602 delivered 45 weeks of outpatient vincristine and dactinomycin (VA) for patients in Subgroup A. ARST0331 reduced the duration of therapy to 22 weeks but added four doses of cyclophosphamide to VA for patients in Subset 1. Failure-free survival was similar. We undertook a cost minimization comparison to help guide future decision-making. Procedure Addressing the costs of treatment from the healthcare perspective we modeled a simple decision-analytic model from aggregate clinical trial data. Medical care inputs and probabilities were estimated from trial reports and focused chart review. Costs of radiation, surgery and off-therapy surveillance were excluded. Unit costs were obtained from literature and national reimbursement and inpatient utilization databases and converted to 2012 US dollars. Model uncertainty was assessed with first-order sensitivity analysis. Results Direct medical costs were $46,393 for D9602 and $43,261 for ARST0331 respectively, making ARST0331 the less costly strategy. Dactinomycin contributed the most to D9602 total costs but varied with age (42–69%). Chemotherapy administration costs accounted for the largest proportion of ARST0331 total costs (39–57%). ARST0331 incurred fewer costs than D9602 under most alternative distributive models and alternative clinical practice assumptions. Conclusions Cost analysis suggests that ARST0331 may incur fewer costs than D9602 from the healthcare system’s perspective. Attention to the services driving the costs provides directions for future efficiency improvements. Future studies should prospectively consider the patient and family’s perspective. PMID:24453105
Russell, Heidi; Swint, J Michael; Lal, Lincy; Meza, Jane; Walterhouse, David; Hawkins, Douglas S; Okcu, M Fatih
2014-06-01
Recent Children's Oncology Group trials for low-risk rhabdomyosarcoma attempted to reduce therapy while maintaining excellent outcomes. D9602 delivered 45 weeks of outpatient vincristine and dactinomycin (VA) for patients in Subgroup A. ARST0331 reduced the duration of therapy to 22 weeks but added four doses of cyclophosphamide to VA for patients in Subset 1. Failure-free survival was similar. We undertook a cost minimization comparison to help guide future decision-making. Addressing the costs of treatment from the healthcare perspective we modeled a simple decision-analytic model from aggregate clinical trial data. Medical care inputs and probabilities were estimated from trial reports and focused chart review. Costs of radiation, surgery and off-therapy surveillance were excluded. Unit costs were obtained from literature and national reimbursement and inpatient utilization databases and converted to 2012 US dollars. Model uncertainty was assessed with first-order sensitivity analysis. Direct medical costs were $46,393 for D9602 and $43,261 for ARST0331 respectively, making ARST0331 the less costly strategy. Dactinomycin contributed the most to D9602 total costs but varied with age (42-69%). Chemotherapy administration costs accounted for the largest proportion of ARST0331 total costs (39-57%). ARST0331 incurred fewer costs than D9602 under most alternative distributive models and alternative clinical practice assumptions. Cost analysis suggests that ARST0331 may incur fewer costs than D9602 from the healthcare system's perspective. Attention to the services driving the costs provides directions for future efficiency improvements. Future studies should prospectively consider the patient and family's perspective. © 2014 Wiley Periodicals, Inc.
Suh, Hae Sun; Song, Hyun Jin; Jang, Eun Jin; Kim, Jung-Sun; Choi, Donghoon; Lee, Sang Moo
2013-07-01
The goal of this study was to perform an economic analysis of a primary stenting with drug-eluting stents (DES) compared with bare-metal stents (BMS) in patients with acute myocardial infarction (AMI) admitted through an emergency room (ER) visit in Korea using population-based data. We employed a cost-minimization method using a decision analytic model with a two-year time period. Model probabilities and costs were obtained from a published systematic review and population-based data from which a retrospective database analysis of the national reimbursement database of Health Insurance Review and Assessment covering 2006 through 2010 was performed. Uncertainty was evaluated using one-way sensitivity analyses and probabilistic sensitivity analyses. Among 513 979 cases with AMI during 2007 and 2008, 24 742 cases underwent stenting procedures and 20 320 patients admitted through an ER visit with primary stenting were identified in the base model. The transition probabilities of DES-to-DES, DES-to-BMS, DES-to-coronary artery bypass graft, and DES-to-balloon were 59.7%, 0.6%, 4.3%, and 35.3%, respectively, among these patients. The average two-year costs of DES and BMS in 2011 Korean won were 11 065 528 won/person and 9 647 647 won/person, respectively. DES resulted in higher costs than BMS by 1 417 882 won/person. The model was highly sensitive to the probability and costs of having no revascularization. Primary stenting with BMS for AMI with an ER visit was shown to be a cost-saving procedure compared with DES in Korea. Caution is needed when applying this finding to patients with a higher level of severity in health status.
A fully-implicit high-order system thermal-hydraulics model for advanced non-LWR safety analyses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui
An advanced system analysis tool is being developed for advanced reactor safety analysis. This paper describes the underlying physics and numerical models used in the code, including the governing equations, the stabilization schemes, the high-order spatial and temporal discretization schemes, and the Jacobian Free Newton Krylov solution method. The effects of the spatial and temporal discretization schemes are investigated. Additionally, a series of verification test problems are presented to confirm the high-order schemes. Furthermore, it is demonstrated that the developed system thermal-hydraulics model can be strictly verified with the theoretical convergence rates, and that it performs very well for a wide range of flow problems with high accuracy, efficiency, and minimal numerical diffusion.
Risk of Small Bowel Obstruction After Robot-Assisted vs Open Radical Prostatectomy.
Loeb, Stacy; Meyer, Christian P; Krasnova, Anna; Curnyn, Caitlin; Reznor, Gally; Kibel, Adam S; Lepor, Herbert; Trinh, Quoc-Dien
2016-12-01
Whereas open radical prostatectomy is performed extraperitoneally, minimally invasive radical prostatectomy is typically performed within the peritoneal cavity. Our objective was to determine whether minimally invasive radical prostatectomy is associated with an increased risk of small bowel obstruction compared with open radical prostatectomy. In the U.S. Surveillance, Epidemiology and End Results (SEER)-Medicare database, we identified 14,147 men found to have prostate cancer from 2000 to 2008 treated by open (n = 10,954) or minimally invasive (n = 3193) radical prostatectomy. Multivariable Cox proportional hazard models were used to examine the impact of surgical approach on the diagnosis of small bowel obstruction, as well as the need for lysis of adhesions and exploratory laparotomy. During a median follow-up of 45 and 76 months, respectively, the cumulative incidence of small bowel obstruction was 3.7% for minimally invasive and 5.3% for open radical prostatectomy (p = 0.0005). Lysis of adhesions occurred in 1.1% of minimally invasive and 2.0% of open prostatectomy patients (p = 0.0003). On multivariable analysis, there was no significant difference between minimally invasive and open prostatectomy with respect to small bowel obstruction (HR 1.17, 95% CI 0.90, 1.52, p = 0.25) or lysis of adhesions (HR 0.87, 95% CI 0.50, 1.40, p = 0.57). Limitations of the study include the retrospective design and use of administrative claims data. Relative to open radical prostatectomy, minimally invasive radical prostatectomy is not associated with an increased risk of postoperative small bowel obstruction and lysis of adhesions.
Smalley, Hannah K; Keskinocak, Pinar; Swann, Julie; Hinman, Alan
2015-11-17
In addition to improved sanitation, hygiene, and better access to safe water, oral cholera vaccines can help to control the spread of cholera in the short term. However, there is currently no systematic method for determining the best allocation of oral cholera vaccines to minimize disease incidence in a population where the disease is endemic and resources are limited. We present a mathematical model for optimally allocating vaccines in a region under varying levels of demographic and incidence data availability. The model addresses the questions of where, when, and how many doses of vaccines to send. Considering vaccine efficacies (which may vary based on age and the number of years since vaccination), we analyze distribution strategies which allocate vaccines over multiple years. Results indicate that, given appropriate surveillance data, targeting age groups and regions with the highest disease incidence should be the first priority, followed by other groups primarily in order of disease incidence, as this approach is the most life-saving and cost-effective. A lack of detailed incidence data results in distribution strategies which are not cost-effective and can lead to thousands more deaths from the disease. The mathematical model allows for what-if analysis for various vaccine distribution strategies by providing the ability to easily vary parameters such as numbers and sizes of regions and age groups, risk levels, vaccine price, vaccine efficacy, production capacity and budget. Copyright © 2015 Elsevier Ltd. All rights reserved.
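A minimal numerical illustration of the "highest incidence first" finding is to allocate a limited stock of doses to region-age strata in decreasing order of incidence. The strata, populations, incidences, and dose budget below are invented numbers; the model in the paper additionally accounts for age-dependent efficacy, waning, and multi-year scheduling.

```python
# Hedged toy allocation: strata sorted by incidence, doses assigned greedily
# until the budget runs out. One dose per person, for simplicity.
strata = [
    # (name, population, annual incidence per 1000)
    ("region A, <5y",   40_000, 9.0),
    ("region A, 5y+",  160_000, 3.0),
    ("region B, <5y",   25_000, 6.0),
    ("region B, 5y+",  100_000, 1.5),
]
budget = 120_000  # doses available

allocation = {}
for name, pop, inc in sorted(strata, key=lambda s: s[2], reverse=True):
    doses = min(pop, budget)
    allocation[name] = doses
    budget -= doses
print(allocation)
```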
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models
Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform lack-of-fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this paper, extensions of the D-optimal minimal designs are developed for a general mixture model to allow additional interior points in the design space and to enable prediction of the entire response surface. Also, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the lack-of-fit test by simulation. PMID:29081574
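The D-criterion comparison behind such designs can be computed directly: for a three-component Scheffé quadratic model, the model matrix has columns x1, x2, x3, x1x2, x1x3, x2x3, and a design is scored by det(X'X). The minimal six-point design below (vertices plus edge midpoints) and its centroid-augmented extension are standard textbook points used only for illustration, not the specific designs proposed in the paper.

```python
import numpy as np

def model_matrix(points):
    # Scheffé quadratic model in three mixture components
    return np.array([[x1, x2, x3, x1 * x2, x1 * x3, x2 * x3]
                     for x1, x2, x3 in points])

minimal = [(1, 0, 0), (0, 1, 0), (0, 0, 1),
           (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]       # saturated design
augmented = minimal + [(1 / 3, 1 / 3, 1 / 3)]                  # extra interior point

for name, pts in [("minimal", minimal), ("augmented", augmented)]:
    X = model_matrix(pts)
    print(name, "det(X'X) =", np.linalg.det(X.T @ X))
# The augmented design gains an interior point usable for lack-of-fit checks and
# for prediction inside the simplex.
```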
Ondo, William G.; Grieger, Frank; Moran, Kimberly; Kohnen, Ralf; Roth, Thomas
2016-01-01
Study Objectives: Determine the minimal clinically important change (MCIC), a measure determining the minimum change in scale score perceived as clinically beneficial, for the international restless legs syndrome (IRLS) and restless legs syndrome 6-item questionnaire (RLS-6) in patients with moderate to severe restless legs syndrome (RLS/Willis-Ekbom disease) treated with the rotigotine transdermal system. Methods: This post hoc analysis analyzed data from two 6-mo randomized, double-blind, placebo-controlled studies (SP790 [NCT00136045]; SP792 [NCT00135993]) individually and as a pooled analysis in rotigotine-treated patients, with baseline and end of maintenance IRLS and Clinical Global Impressions of change (CGI Item 2) scores available for analysis. An anchor-based approach and receiver operating characteristic (ROC) curves were used to determine the MCIC for the IRLS and RLS-6. We specifically compared “much improved vs minimally improved,” “much improved/very much improved vs minimally improved or worse,” and “minimally improved or better vs no change or worse” on the CGI-2 using the full analysis set (data as observed). Results: The MCIC IRLS cut-off scores for SP790 and SP792 were similar. Using the pooled SP790+SP792 analysis, the MCIC total IRLS cut-off score (sensitivity, specificity) for “much improved vs minimally improved” was −9 (0.69, 0.66), for “much improved/very much improved vs minimally improved or worse” was −11 (0.81, 0.84), and for “minimally improved or better vs no change or worse” was −9 (0.79, 0.88). MCIC ROC cut-offs were also calculated for each RLS-6 item. Conclusions: In patients with RLS, the MCIC values derived in the current analysis provide a basis for defining meaningful clinical improvement based on changes in the IRLS and RLS-6 following treatment with rotigotine. Citation: Ondo WG, Grieger F, Moran K, Kohnen R, Roth T. Post hoc analysis of data from two clinical trials evaluating the minimal clinically important change in international restless legs syndrome sum score in patients with restless legs syndrome (Willis-Ekbom Disease). J Clin Sleep Med 2016;12(1):63–70. PMID:26446245
IPMP Global Fit - A one-step direct data analysis tool for predictive microbiology.
Huang, Lihan
2017-12-04
The objective of this work is to develop and validate a unified optimization algorithm for performing one-step global regression analysis of isothermal growth and survival curves for determination of kinetic parameters in predictive microbiology. The algorithm is incorporated with user-friendly graphical interfaces (GUIs) to develop a data analysis tool, the USDA IPMP-Global Fit. The GUIs are designed to guide the users to easily navigate through the data analysis process and properly select the initial parameters for different combinations of mathematical models. The software is developed for one-step kinetic analysis to directly construct tertiary models by minimizing the global error between the experimental observations and mathematical models. The current version of the software is specifically designed for constructing tertiary models with time and temperature as the independent model parameters in the package. The software is tested with a total of 9 different combinations of primary and secondary models for growth and survival of various microorganisms. The results of data analysis show that this software provides accurate estimates of kinetic parameters. In addition, it can be used to improve the experimental design and data collection for more accurate estimation of kinetic parameters. IPMP-Global Fit can be used in combination with the regular USDA-IPMP for solving the inverse problems and developing tertiary models in predictive microbiology. Published by Elsevier B.V.
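The one-step global fitting idea can be sketched with one simple model combination: a log-linear primary survival model with a Bigelow-type secondary model (log10 D linear in temperature), fitted to all isothermal curves at once by minimising a single global residual vector. The data are synthetic, and the actual IPMP-Global Fit supports many other primary/secondary model combinations.

```python
import numpy as np
from scipy.optimize import least_squares

temps = np.array([55.0, 60.0, 65.0])          # degC, one isothermal curve each
times = [np.linspace(0, 10, 6)] * 3           # sampling times (min)
true = dict(logN0=7.0, logDref=1.2, Tref=60.0, z=6.0)   # values used to simulate data

def predict(p, T, t):
    logN0, logDref, z = p
    D = 10 ** (logDref - (T - true["Tref"]) / z)   # secondary (Bigelow) model
    return logN0 - t / D                           # primary (log-linear) model

rng = np.random.default_rng(0)
data = [predict([true["logN0"], true["logDref"], true["z"]], T, t)
        + rng.normal(0, 0.1, t.size) for T, t in zip(temps, times)]

def residuals(p):
    # one global residual vector across all curves -> one-step fit
    return np.concatenate([predict(p, T, t) - y
                           for T, t, y in zip(temps, times, data)])

fit = least_squares(residuals, x0=[6.0, 1.0, 5.0])
print("logN0, logDref, z =", fit.x)
```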
Grimsrud, K N; Ait-Oudhia, S; Durbin-Johnson, B P; Rocke, D M; Mama, K R; Rezende, M L; Stanley, S D; Jusko, W J
2015-02-01
The present study characterizes the pharmacokinetic (PK) and pharmacodynamic (PD) relationships of the α2-adrenergic receptor agonists detomidine (DET), medetomidine (MED) and dexmedetomidine (DEX) in parallel groups of horses from in vivo data after single bolus doses. Head height (HH), heart rate (HR), and blood glucose concentrations were measured over 6 h. Compartmental PK and minimal physiologically based PK (mPBPK) models were applied and incorporated into basic and extended indirect response models (IRM). Population PK/PD analysis was conducted using the Monolix software implementing the stochastic approximation expectation maximization algorithm. Marked reductions in HH and HR were found. The drug concentrations required to obtain inhibition at half-maximal effect (IC50 ) were approximately four times larger for DET than MED and DEX for both HH and HR. These effects were not gender dependent. Medetomidine had a greater influence on the increase in glucose concentration than DEX. The developed models demonstrate the use of mechanistic and mPBPK/PD models for the analysis of clinically obtainable in vivo data. © 2014 John Wiley & Sons Ltd.
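A basic inhibitory indirect response model of the kind referred to above, driven by a one-compartment IV bolus concentration profile, can be sketched as follows. The parameter values are arbitrary placeholders, not the estimated values for DET, MED, or DEX.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters (illustrative only)
dose, V, k_el = 10.0, 50.0, 0.5      # bolus dose, volume, elimination rate (1/h)
k_in, k_out = 10.0, 0.1              # turnover of the response variable (e.g. HR)
Imax, IC50 = 0.8, 0.05               # maximal inhibition and potency

conc = lambda t: dose / V * np.exp(-k_el * t)   # one-compartment IV bolus PK

def irm(t, y):
    R = y[0]
    C = conc(t)
    inhibition = Imax * C / (IC50 + C)
    return [k_in * (1.0 - inhibition) - k_out * R]

R0 = k_in / k_out                    # baseline response
sol = solve_ivp(irm, (0.0, 24.0), [R0], max_step=0.1)
print("baseline:", R0, "nadir:", sol.y[0].min())
```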
Grimsrud, K. N.; Ait-Oudhia, S.; Durbin-Johnson, B. P.; Rocke, D. M.; Mama, K. R.; Rezende, M. L.; Stanley, S. D.; Jusko, W. J.
2014-01-01
The present study characterizes the pharmacokinetic (PK) and pharmacodynamic (PD) relationships of the α2-adrenergic receptor agonists detomidine (DET), medetomidine (MED) and dexmedetomidine (DEX) in parallel groups of horses from in vivo data after single bolus doses. Head height (HH), heart rate (HR), and blood glucose concentrations were measured over 6 h. Compartmental PK and minimal physiologically based PK (mPBPK) models were applied and incorporated into basic and extended indirect response models (IRM). Population PK/PD analysis was conducted using the Monolix software implementing the stochastic approximation expectation maximization algorithm. Marked reductions in HH and HR were found. The drug concentrations required to obtain inhibition at half-maximal effect (IC50) were approximately four times larger for DET than MED and DEX for both HH and HR. These effects were not gender dependent. Medetomidine had a greater influence on the increase in glucose concentration than DEX. The developed models demonstrate the use of mechanistic and mPBPK/PD models for the analysis of clinically obtainable in vivo data. PMID:25073816
Evaluating the risks of clinical research: direct comparative analysis.
Rid, Annette; Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S; Wendler, David
2014-09-01
Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed "risks of daily life" standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. This study employed a conceptual and normative analysis, and use of an illustrative example. Different risks are composed of the same basic elements: Type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the "risks of daily life" standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Direct comparative analysis is a systematic method for applying the "risks of daily life" standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about research risks.
Evaluating the Risks of Clinical Research: Direct Comparative Analysis
Abdoler, Emily; Roberson-Nay, Roxann; Pine, Daniel S.; Wendler, David
2014-01-01
Abstract Objectives: Many guidelines and regulations allow children and adolescents to be enrolled in research without the prospect of clinical benefit when it poses minimal risk. However, few systematic methods exist to determine when research risks are minimal. This situation has led to significant variation in minimal risk judgments, raising concern that some children are not being adequately protected. To address this concern, we describe a new method for implementing the widely endorsed “risks of daily life” standard for minimal risk. This standard defines research risks as minimal when they do not exceed the risks posed by daily life activities or routine examinations. Methods: This study employed a conceptual and normative analysis, and use of an illustrative example. Results: Different risks are composed of the same basic elements: Type, likelihood, and magnitude of harm. Hence, one can compare the risks of research and the risks of daily life by comparing the respective basic elements with each other. We use this insight to develop a systematic method, direct comparative analysis, for implementing the “risks of daily life” standard for minimal risk. The method offers a way of evaluating research procedures that pose the same types of risk as daily life activities, such as the risk of experiencing anxiety, stress, or other psychological harm. We thus illustrate how direct comparative analysis can be applied in practice by using it to evaluate whether the anxiety induced by a respiratory CO2 challenge poses minimal or greater than minimal risks in children and adolescents. Conclusions: Direct comparative analysis is a systematic method for applying the “risks of daily life” standard for minimal risk to research procedures that pose the same types of risk as daily life activities. It thereby offers a method to protect children and adolescents in research, while ensuring that important studies are not blocked because of unwarranted concerns about research risks. PMID:25210944
A novel integrated framework and improved methodology of computer-aided drug design.
Chen, Calvin Yu-Chian
2013-01-01
Computer-aided drug design (CADD) is a critical initiating step of drug development, but a single model capable of covering all design aspects remains to be elucidated. Hence, we developed a drug design modeling framework that integrates multiple approaches, including machine learning based quantitative structure-activity relationship (QSAR) analysis, 3D-QSAR, Bayesian networks, pharmacophore modeling, and a structure-based docking algorithm. Restrictions for each model were defined for improved individual and overall accuracy. An integration method was applied to join the results from each model to minimize bias and errors. In addition, the integrated model adopts both static and dynamic analysis to validate the intermolecular stabilities of the receptor-ligand conformation. The proposed protocol was applied to the identification of HER2 inhibitors from traditional Chinese medicine (TCM) as an example to validate the new protocol. Eight potent leads were identified from six TCM sources. A joint validation system comprising comparative molecular field analysis, comparative molecular similarity indices analysis, and molecular dynamics simulation further characterized the candidates into three potential binding conformations and validated the binding stability of each protein-ligand complex. Ligand pathway analysis was also performed to predict ligand entry into and exit from the binding site. In summary, we propose a novel systematic CADD methodology for the identification, analysis, and characterization of drug-like candidates.
Immortal time bias in observational studies of time-to-event outcomes.
Jones, Mark; Fowler, Robert
2016-12-01
The purpose of the study is to show, through simulation and example, the magnitude and direction of immortal time bias when an inappropriate analysis is used. We compare 4 methods of analysis for observational studies of time-to-event outcomes: logistic regression, standard Cox model, landmark analysis, and time-dependent Cox model using an example data set of patients critically ill with influenza and a simulation study. For the example data set, logistic regression, standard Cox model, and landmark analysis all showed some evidence that treatment with oseltamivir provides protection from mortality in patients critically ill with influenza. However, when the time-dependent nature of treatment exposure is taken account of using a time-dependent Cox model, there is no longer evidence of a protective effect of treatment. The simulation study showed that, under various scenarios, the time-dependent Cox model consistently provides unbiased treatment effect estimates, whereas standard Cox model leads to bias in favor of treatment. Logistic regression and landmark analysis may also lead to bias. To minimize the risk of immortal time bias in observational studies of survival outcomes, we strongly suggest time-dependent exposures be included as time-dependent variables in hazard-based analyses. Copyright © 2016 Elsevier Inc. All rights reserved.
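A sketch of the recommended analysis, with treatment entered as a time-dependent covariate, is shown below using the Python lifelines package. The long-format layout, column names and toy values are assumptions for illustration only.

    # Time-dependent Cox model sketch: person-time before treatment initiation is not
    # misclassified as "treated", which is the source of immortal time bias.
    import pandas as pd
    from lifelines import CoxTimeVaryingFitter

    # One row per (id, interval); 'treated' switches from 0 to 1 when treatment starts.
    df = pd.DataFrame({
        "id":      [1, 1, 2, 3, 3],
        "start":   [0, 2, 0, 0, 4],
        "stop":    [2, 9, 5, 4, 7],
        "treated": [0, 1, 0, 0, 1],
        "event":   [0, 1, 1, 0, 1],
    })

    ctv = CoxTimeVaryingFitter()
    ctv.fit(df, id_col="id", start_col="start", stop_col="stop", event_col="event")
    ctv.print_summary()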
NASA Astrophysics Data System (ADS)
di Stefano, Marco; Paulsen, Jonas; Lien, Tonje G.; Hovig, Eivind; Micheletti, Cristian
2016-10-01
Combining genome-wide structural models with phenomenological data is at the forefront of efforts to understand the organizational principles regulating the human genome. Here, we use chromosome-chromosome contact data as knowledge-based constraints for large-scale three-dimensional models of the human diploid genome. The resulting models remain minimally entangled and acquire several functional features that are observed in vivo and that were never used as input for the model. We find, for instance, that gene-rich, active regions are drawn towards the nuclear center, while gene poor and lamina associated domains are pushed to the periphery. These and other properties persist upon adding local contact constraints, suggesting their compatibility with non-local constraints for the genome organization. The results show that suitable combinations of data analysis and physical modelling can expose the unexpectedly rich functionally-related properties implicit in chromosome-chromosome contact data. Specific directions are suggested for further developments based on combining experimental data analysis and genomic structural modelling.
Di Stefano, Marco; Paulsen, Jonas; Lien, Tonje G; Hovig, Eivind; Micheletti, Cristian
2016-10-27
Combining genome-wide structural models with phenomenological data is at the forefront of efforts to understand the organizational principles regulating the human genome. Here, we use chromosome-chromosome contact data as knowledge-based constraints for large-scale three-dimensional models of the human diploid genome. The resulting models remain minimally entangled and acquire several functional features that are observed in vivo and that were never used as input for the model. We find, for instance, that gene-rich, active regions are drawn towards the nuclear center, while gene poor and lamina associated domains are pushed to the periphery. These and other properties persist upon adding local contact constraints, suggesting their compatibility with non-local constraints for the genome organization. The results show that suitable combinations of data analysis and physical modelling can expose the unexpectedly rich functionally-related properties implicit in chromosome-chromosome contact data. Specific directions are suggested for further developments based on combining experimental data analysis and genomic structural modelling.
NASA Astrophysics Data System (ADS)
Maslova, I.; Ticlavilca, A. M.; McKee, M.
2012-12-01
There has been an increased interest in wavelet-based streamflow forecasting models in recent years. Often overlooked in this approach are the circularity assumptions of the wavelet transform. We propose a novel technique for minimizing the wavelet decomposition boundary condition effect to produce long-term, up to 12 months ahead, forecasts of streamflow. A simulation study is performed to evaluate the effects of different wavelet boundary rules using synthetic and real streamflow data. A hybrid wavelet-multivariate relevance vector machine model is developed for forecasting the streamflow in real-time for Yellowstone River, Uinta Basin, Utah, USA. The inputs of the model utilize only the past monthly streamflow records. They are decomposed into components formulated in terms of wavelet multiresolution analysis. It is shown that the model accuracy can be increased by using the wavelet boundary rule introduced in this study. This long-term streamflow modeling and forecasting methodology would enable better decision-making and management of water availability risk.
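The boundary-condition issue can be made concrete with the PyWavelets package: the sketch below extracts the smooth (approximation) component of the same record under two different signal-extension rules and compares them near the end of the series, where a real-time forecast would be made. The wavelet, level and file name are arbitrary placeholders.

    # Effect of the wavelet boundary rule on the approximation component near the record end.
    import numpy as np
    import pywt

    def approximation(x, mode):
        coeffs = pywt.wavedec(x, "db4", mode=mode, level=3)
        coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]   # keep approximation only
        return pywt.waverec(coeffs, "db4", mode=mode)[: len(x)]

    flow = np.loadtxt("monthly_flow.txt")                # placeholder streamflow series
    a_sym = approximation(flow, "symmetric")
    a_per = approximation(flow, "periodization")
    # The two versions agree in the interior of the record but can diverge over the last
    # few months -- exactly the values a forecasting model conditions on.
    print(np.abs(a_sym - a_per)[-12:])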
NASA Astrophysics Data System (ADS)
Bahamonde, Sebastian; Marciu, Mihai; Rudra, Prabir
2018-04-01
Within this work, we propose a new generalised quintom dark energy model in the teleparallel alternative of general relativity theory, by considering a non-minimal coupling of the scalar fields of a quintom model to the scalar torsion component T and the boundary term B. In the teleparallel alternative of general relativity theory, the boundary term represents the divergence of the torsion vector, B = 2∇_μ T^μ, and is related to the Ricci scalar R and the torsion scalar T by the fundamental relation R = −T + B. We have investigated the dynamical properties of the present quintom scenario in the teleparallel alternative of general relativity theory by performing a dynamical system analysis in the case of decomposable exponential potentials. The study analysed the structure of the phase space, revealing the fundamental dynamical effects of the scalar torsion and boundary couplings in the case of a more general quintom scenario. Additionally, a numerical approach to the model is presented to analyse the cosmological evolution of the system.
A concept for adaptive performance optimization on commercial transport aircraft
NASA Technical Reports Server (NTRS)
Jackson, Michael R.; Enns, Dale F.
1995-01-01
An adaptive control method is presented for the minimization of drag during flight for transport aircraft. The minimization of drag is achieved by taking advantage of the redundant control capability available in the pitch axis, with the horizontal tail used as the primary surface and symmetric deflection of the ailerons and cruise flaps used as additional controls. The additional control surfaces are excited with sinusoidal signals, while the altitude and velocity loops are closed with guidance and control laws. A model of the throttle response as a function of the additional control surfaces is formulated and the parameters in the model are estimated from the sensor measurements using a least squares estimation method. The estimated model is used to determine the minimum drag positions of the control surfaces. The method is presented for the optimization of one and two additional control surfaces. The adaptive control method is extended to optimize rate of climb with the throttle fixed. Simulations that include realistic disturbances are presented, as well as the results of a Monte Carlo simulation analysis that shows the effects of changing the disturbance environment and the excitation signal parameters.
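The core of the scheme (excite the redundant surface, fit a throttle-versus-deflection model by least squares, move to the fitted minimum) can be sketched as below. The quadratic model form and the variable names are assumptions for illustration, not the paper's exact formulation.

    # Estimate the minimum-drag deflection of a redundant pitch surface from measured
    # throttle response while the surface is excited sinusoidally about a nominal setting.
    import numpy as np

    def min_drag_deflection(deflection, throttle):
        # Least-squares quadratic fit of throttle (a drag surrogate at constant
        # speed and altitude) versus surface deflection.
        a, b, c = np.polyfit(deflection, throttle, 2)
        if a <= 0:
            raise ValueError("fitted model is not convex; excitation may be too small")
        return -b / (2.0 * a)                     # vertex of the fitted parabola

    # delta = delta0 + amp * np.sin(2 * np.pi * f * t)        # excitation signal
    # best_delta = min_drag_deflection(measured_delta, measured_throttle)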
NASA Astrophysics Data System (ADS)
Dijkgraaf, Robbert; Verlinde, Herman; Verlinde, Erik
1991-03-01
We calculate correlation functions in minimal topological field theories. These twisted versions of N = 2 minimal models have recently been proposed to describe d < 1 matrix models, once coupled to topological gravity. In our calculation we make use of the Landau-Ginzburg formulation of the N = 2 models, and we find a direct relation between the Landau-Ginzburg superpotential and the KdV differential operator. Using this correspondence we show that the minimal topological models are in perfect agreement with the matrix models as solved in terms of the KdV hierarchy. This proves the equivalence at tree-level of topological and ordinary string theory in d < 1.
Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness
NASA Astrophysics Data System (ADS)
Kusuma, K. K.; Maruf, A.
2016-02-01
Scheduling problems with non-identical machines, low utilization, and fixed delivery times are frequent in the manufacturing industry. This paper proposes a mathematical model to minimize total tardiness for non-identical machines in a job shop environment. The model is formulated as an integer linear programming model and solved with a branch-and-bound algorithm. Fixed delivery times are used as the main constraint, with job-dependent processing times. The results of the proposed model show that the utilization of production machines can be increased while keeping tardiness minimal, using fixed delivery times as a constraint.
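A heavily reduced sketch of this type of formulation is given below: a single machine, disjunctive big-M sequencing and total tardiness against fixed delivery (due) times, written with the PuLP modeller. The paper's actual model covers several non-identical machines and additional constraints; the data here are placeholders.

    # Minimal single-machine total-tardiness MILP with big-M disjunctive sequencing.
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

    p = {1: 4, 2: 3, 3: 6}            # processing times
    d = {1: 8, 2: 5, 3: 16}           # fixed delivery (due) times
    jobs = list(p)
    M = sum(p.values())               # big-M constant

    prob = LpProblem("min_total_tardiness", LpMinimize)
    S = {j: LpVariable(f"S_{j}", lowBound=0) for j in jobs}       # start times
    T = {j: LpVariable(f"T_{j}", lowBound=0) for j in jobs}       # tardiness
    y = {(i, j): LpVariable(f"y_{i}_{j}", cat=LpBinary)
         for i in jobs for j in jobs if i < j}                    # 1 if job i precedes job j

    prob += lpSum(T[j] for j in jobs)                             # minimize total tardiness
    for j in jobs:
        prob += T[j] >= S[j] + p[j] - d[j]
    for i in jobs:
        for j in jobs:
            if i < j:                                             # jobs cannot overlap
                prob += S[j] >= S[i] + p[i] - M * (1 - y[(i, j)])
                prob += S[i] >= S[j] + p[j] - M * y[(i, j)]
    prob.solve()
    print({j: (S[j].value(), T[j].value()) for j in jobs})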
NASA Astrophysics Data System (ADS)
Hengl, Tomislav
2015-04-01
Efficiency of spatial sampling largely determines the success of model building. This is especially important for geostatistical mapping where an initial sampling plan should provide a good representation or coverage of both geographical (defined by the study area mask map) and feature space (defined by the multi-dimensional covariates). Otherwise the model will need to extrapolate and, hence, the overall uncertainty of the predictions will be high. In many cases, geostatisticians use point data sets which are produced using unknown or inconsistent sampling algorithms. Many point data sets in environmental sciences suffer from spatial clustering and systematic omission of feature space. But how to quantify these 'representation' problems and how to incorporate this knowledge into model building? The author has developed a generic function called 'spsample.prob' (Global Soil Information Facilities package for R) which simultaneously determines (effective) inclusion probabilities as an average between the kernel density estimation (geographical spreading of points; analysed using the spatstat package in R) and MaxEnt analysis (feature space spreading of points; analysed using the MaxEnt software used primarily for species distribution modelling). The output 'iprob' map indicates whether the sampling plan has systematically missed some important locations and/or features, and can also be used as an input for geostatistical modelling e.g. as a weight map for geostatistical model fitting. The spsample.prob function can also be used in combination with the accessibility analysis (costs of field survey are usually a function of distance from the road network, slope and land cover) to allow for simultaneous maximization of average inclusion probabilities and minimization of total survey costs. The author postulates that, by estimating effective inclusion probabilities using combined geographical and feature space analysis, and by comparing survey costs to representation efficiency, an optimal initial sampling plan can be produced which satisfies both criteria: (a) good representation (i.e. within a tolerance threshold), and (b) minimized survey costs. This sampling analysis framework could become especially interesting for generating sampling plans in new areas e.g. for which no previous spatial prediction model exists. The presentation includes data processing demos with standard soil sampling data sets Ebergotzen (Germany) and Edgeroi (Australia), also available via the GSIF package.
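The published spsample.prob routine is an R/GSIF function built on spatstat and MaxEnt; the Python sketch below only mimics the underlying idea, replacing MaxEnt with a simple feature-space kernel density, so it is an approximation of the concept and every name in it is a placeholder.

    # Sketch of a combined geographic + feature-space "representation" (inclusion probability) map.
    import numpy as np
    from sklearn.neighbors import KernelDensity
    from sklearn.preprocessing import StandardScaler

    def inclusion_probability(xy_samples, xy_grid, cov_samples, cov_grid,
                              bw_geo=500.0, bw_feat=0.5):
        # Geographic spreading: kernel density of the sample locations on the prediction grid.
        geo = np.exp(KernelDensity(bandwidth=bw_geo).fit(xy_samples).score_samples(xy_grid))
        # Feature-space spreading: kernel density of the sample covariates (standardized).
        sc = StandardScaler().fit(cov_grid)
        kde_feat = KernelDensity(bandwidth=bw_feat).fit(sc.transform(cov_samples))
        feat = np.exp(kde_feat.score_samples(sc.transform(cov_grid)))
        # Normalize both components and average them, in the spirit of the 'iprob' map.
        return 0.5 * (geo / geo.max() + feat / feat.max())

    # Low values of the returned map flag locations or covariate combinations the plan has missed.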
Sexing California gulls using morphometrics and discriminant function analysis
Herring, Garth; Ackerman, Joshua T.; Eagles-Smith, Collin A.; Takekawa, John Y.
2010-01-01
A discriminant function analysis (DFA) model was developed with DNA sex verification so that external morphology could be used to sex 203 adult California Gulls (Larus californicus) in San Francisco Bay (SFB). The best model was 97% accurate and included head-to-bill length, culmen depth at the gonys, and wing length. Using an iterative process, the model was simplified to a single measurement (head-to-bill length) that still assigned sex correctly 94% of the time. A previous California Gull sex determination model developed for a population in Wyoming was then assessed by fitting SFB California Gull measurement data to the Wyoming model; this new model failed to converge on the same measurements as those originally used by the Wyoming model. Results from the SFB discriminant function model were compared to the Wyoming model results (by using SFB data with the Wyoming model); the SFB model was 7% more accurate for SFB California gulls. The simplified DFA model (head-to-bill length only) provided highly accurate results (94%) and minimized the measurements and time required to accurately sex California Gulls.
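A sketch of such a discriminant analysis with a single predictor (head-to-bill length) is shown below using scikit-learn; the data layout and the cross-validation scheme are illustrative assumptions, not the authors' original procedure.

    # Discriminant function analysis sketch: sexing gulls from head-to-bill length alone.
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    def sexing_accuracy(X, y, folds=10):
        # X: (n, 1) array of head-to-bill lengths (mm); y: DNA-verified sex (0 = female, 1 = male).
        lda = LinearDiscriminantAnalysis()
        return cross_val_score(lda, X, y, cv=folds).mean()

    # Adding culmen depth and wing length as further columns of X gives the full
    # three-measurement model (97% reported accuracy versus 94% for head-to-bill alone).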
Optimal synthesis and design of the number of cycles in the leaching process for surimi production.
Reinheimer, M Agustina; Scenna, Nicolás J; Mussati, Sergio F
2016-12-01
Water consumption required during the leaching stage in the surimi manufacturing process strongly depends on the design and the number and size of stages connected in series for the soluble protein extraction target, and it is considered the main contributor to the operating costs. Therefore, the optimal synthesis and design of the leaching stage is essential to minimize the total annual cost. In this study, a mathematical optimization model for the optimal design of the leaching operation is presented. Specifically, a detailed Mixed Integer Nonlinear Programming (MINLP) model including operating and geometric constraints was developed based on our previous optimization model (NLP model). Aspects of quality, water consumption, and the main operating parameters were considered. The minimization of total annual costs, which considered a trade-off between investment and operating costs, led to an optimal solution with fewer stages (2 instead of 3) and larger leaching tank volumes compared with previous results. An analysis was performed to investigate how the optimal solution was influenced by variations in the unit costs of fresh water, waste treatment, and capital investment.
Kostuj, T; Schulze-Raestrup, U; Noack, M; Buckup, K; Smektala, R
2011-05-01
A minimal provider volume for total knee replacement (TKR) was introduced in 2006. Does this lead to an improvement in quality or not? The records of treatment in the compulsory external quality assurance program of the Land of North Rhine-Westphalia (QS-NRW) were evaluated. A total of 125,324 comparable records from the QS-NRW program were available to determine the appearance of general and surgical complications. In a logistic regression model the risk factors age, gender, ASA classification, comorbidity and duration were taken into account. A significant reduction could only be shown for pneumonia, thrombotic events and lung embolisms as well as vascular injury. In 2006 and 2007 malpositioning of implants was significantly higher and from 2005 to 2008 the number of fractures rose compared to 2004. Deep infections and reoperations did not change significantly during the whole study period. This evaluation could not show an improvement in quality due to the minimal provider volume. Thus the minimal provider volume should not be taken into account as a main criterion to improve quality. Further outcome studies and the creation of an arthroplasty register in Germany would be more useful.
NASA Technical Reports Server (NTRS)
Katow, S. M.
1979-01-01
The computer analysis of the 34-m HA-DEC antenna by the IDEAS program provided the rms distortions of the surface panel support points for full gravity loadings in the three directions of the basic coordinate system of the computer model. The rms distortions for the gravity vector not in line with any of the three basic directions were solved and contour plotted starting from three surface-panel setting declination angles. By inspection of the plots, it was concluded that the setting or rigging angle of -15 degrees declination minimized the rms distortions for sky coverage of plus or minus 22 declination angles to 10 degrees of ground mask.
Feldman, Daniel; Liu, Zuowei; Nath, Pran
2007-12-21
The minimal supersymmetric standard model with soft breaking has a large landscape of supersymmetric particle mass hierarchies. This number is reduced significantly in well-motivated scenarios such as minimal supergravity and alternatives. We carry out an analysis of the landscape for the first four lightest particles and identify at least 16 mass patterns, and provide benchmarks for each. We study the signature space for the patterns at the CERN Large Hadron Collider by analyzing the lepton + (≥2 jets) + missing P_T signals with 0, 1, 2, and 3 leptons. Correlations in missing P_T are also analyzed. It is found that even with 10 fb^-1 of data a significant discrimination among patterns emerges.
Evaluation of semiempirical atmospheric density models for orbit determination applications
NASA Technical Reports Server (NTRS)
Cox, C. M.; Feiertag, R. J.; Oza, D. H.; Doll, C. E.
1994-01-01
This paper presents the results of an investigation of the orbit determination performance of the Jacchia-Roberts (JR), mass spectrometer incoherent scatter 1986 (MSIS-86), and drag temperature model (DTM) atmospheric density models. Evaluation of the models was performed to assess the modeling of the total atmospheric density. This study was made generic by using six spacecraft and selecting time periods of study representative of all portions of the 11-year cycle. Performance of the models was measured for multiple spacecraft, representing a selection of orbit geometries from near-equatorial to polar inclinations and altitudes from 400 kilometers to 900 kilometers. The orbit geometries represent typical low earth-orbiting spacecraft supported by the Goddard Space Flight Center (GSFC) Flight Dynamics Division (FDD). The best available modeling and orbit determination techniques using the Goddard Trajectory Determination System (GTDS) were employed to minimize the effects of modeling errors. The latest geopotential model available during the analysis, the Goddard earth model-T3 (GEM-T3), was employed to minimize geopotential model error effects on the drag estimation. Improved-accuracy techniques identified for TOPEX/Poseidon orbit determination analysis were used to improve the Tracking and Data Relay Satellite System (TDRSS)-based orbit determination used for most of the spacecraft chosen for this analysis. This paper shows that during periods of relatively quiet solar flux and geomagnetic activity near the solar minimum, the choice of atmospheric density model used for orbit determination is relatively inconsequential. During typical solar flux conditions near the solar maximum, the differences between the JR, DTM, and MSIS-86 models begin to become apparent. Time periods of extreme solar activity, those in which the daily and 81-day mean solar flux are high and change rapidly, result in significant differences between the models. During periods of high geomagnetic activity, the standard JR model was outperformed by DTM. Modification of the JR model to use a geomagnetic heating delay of 3 hours, as used in DTM, instead of the 6.7-hour delay produced results comparable to or better than the DTM performance, reducing definitive orbit solution ephemeris overlap differences by 30 to 50 percent. The reduction in the overlap differences would be useful for mitigating the impact of geomagnetic storms on orbit prediction.
An analysis of wildfire prevention
NASA Technical Reports Server (NTRS)
Heineke, J. M.; Weissenberger, S.
1974-01-01
A model of the production of wildfire ignitions and damages is developed and used to determine wildland activity-regulation decisions, which minimize total expected cost-plus-loss due to wildfires. In this context, the implications of various policy decisions are considered. The resulting decision rules take a form that makes it possible for existing wildfire management agencies to readily adopt them upon collection of the required data.
Childers, W Lee; Kogler, Géza F
2014-01-01
People with amputation move asymmetrically with regard to kinematics (joint angles) and kinetics (joint forces and moments). Clinicians have traditionally sought to minimize kinematic asymmetries, assuming kinetic asymmetries would also be minimized. A cycling model evaluated locomotor asymmetries. Eight individuals with unilateral transtibial amputation pedaled with 172 mm-length crank arms on both sides (control condition) and with the crank arm length shortened to 162 mm on the amputated side (CRANK condition). Pedaling kinetics and limb kinematics were recorded. Joint kinetics, joint angles (mean and range of motion [ROM]), and pedaling asymmetries were calculated from force pedals and with a motion capture system. A one-way analysis of variance with Tukey post hoc compared kinetics and kinematics across limbs. Statistical significance was set to p = 0.05. The CRANK condition reduced hip and knee ROM in the amputated limb compared with the control condition. There were no differences in joint kinematics between the contralateral and amputated limbs during the CRANK condition. Pedaling asymmetries did not differ and were 23.0% ± 9.8% and 23.2% ± 12% for the control and CRANK conditions, respectively. Our results suggest that minimizing kinematic asymmetries does not relate to kinetic asymmetries as clinically assumed. We propose that future research should concentrate on defining acceptable asymmetry.
NASA Technical Reports Server (NTRS)
Hou, Gene
2004-01-01
The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by p-version, time discontinuous finite element approximation. The resulting matrix equation of the state equation is simply in the form of A(x)x = c, representing a single-step, time-marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
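The direct differentiation idea can be shown on a deliberately tiny example: for a scalar nonlinear "state equation" a(x; p) x = c with a(x; p) = k0 + p x, a Newton solve gives x, and differentiating the residual gives dx/dp directly. The form of a(x; p) and the numbers are assumptions purely for illustration.

    # Direct differentiation sensitivity for a scalar nonlinear equation a(x; p) * x = c.
    def solve_and_sensitivity(k0, p, c, x0=1.0, tol=1e-12):
        x = x0
        for _ in range(50):                          # Newton-Raphson on r(x) = (k0 + p*x)*x - c
            r = (k0 + p * x) * x - c
            drdx = k0 + 2.0 * p * x
            x -= r / drdx
            if abs(r) < tol:
                break
        # Implicit differentiation of r(x(p), p) = 0 gives  dx/dp = -(dr/dp) / (dr/dx).
        dxdp = -(x * x) / (k0 + 2.0 * p * x)
        return x, dxdp

    # x, dxdp = solve_and_sensitivity(k0=2.0, p=0.3, c=5.0)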
Improving the performance of minimizers and winnowing schemes.
Marçais, Guillaume; Pellow, David; Bork, Daniel; Orenstein, Yaron; Shamir, Ron; Kingsford, Carl
2017-07-15
The minimizers scheme is a method for selecting k-mers from sequences. It is used in many bioinformatics software tools to bin comparable sequences or to sample a sequence in a deterministic fashion at approximately regular intervals, in order to reduce memory consumption and processing time. Although very useful, the minimizers selection procedure has undesirable behaviors (e.g. too many k-mers are selected when processing certain sequences). Some of these problems were already known to the authors of the minimizers technique, and the natural lexicographic ordering of k-mers used by minimizers was recognized as their origin. Many software tools using minimizers employ ad hoc variations of the lexicographic order to alleviate those issues. We provide an in-depth analysis of the effect of k-mer ordering on the performance of the minimizers technique. By using small universal hitting sets (a recently defined concept), we show how to significantly improve the performance of minimizers and avoid some of their worst behaviors. Based on these results, we encourage bioinformatics software developers to use an ordering based on a universal hitting set or, if not possible, a randomized ordering, rather than the lexicographic order. This analysis also settles negatively a conjecture (by Schleimer et al.) on the expected density of minimizers in a random sequence. The software used for this analysis is available on GitHub: https://github.com/gmarcais/minimizers.git . gmarcais@cs.cmu.edu or carlk@cs.cmu.edu. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
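The effect of the k-mer ordering can be demonstrated in a few lines: the sketch below selects minimizers under the lexicographic order and under a hash-based (randomized) order and compares the resulting densities. This is an illustration of the concept only, not the benchmarking code behind the paper.

    # Minimizer density under lexicographic versus randomized k-mer ordering.
    import hashlib
    import random

    def minimizer_density(seq, k, w, key):
        chosen = set()
        n_kmers = len(seq) - k + 1
        for i in range(n_kmers - w + 1):                       # every window of w consecutive k-mers
            window = [(key(seq[j:j + k]), j) for j in range(i, i + w)]
            chosen.add(min(window)[1])                         # leftmost minimal k-mer in the window
        return len(chosen) / n_kmers

    lex_key = lambda kmer: kmer
    rand_key = lambda kmer: hashlib.md5(kmer.encode()).hexdigest()

    random.seed(0)
    seq = "".join(random.choice("ACGT") for _ in range(100000))
    print("lexicographic density:", minimizer_density(seq, 15, 10, key=lex_key))
    print("randomized density:   ", minimizer_density(seq, 15, 10, key=rand_key))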
SPS pilot signal design and power transponder analysis, volume 2, phase 3
NASA Technical Reports Server (NTRS)
Lindsey, W. C.; Scholtz, R. A.; Chie, C. M.
1980-01-01
The problem of pilot signal parameter optimization and the related problem of power transponder performance analysis for the Solar Power Satellite reference phase control system are addressed. Signal and interference models were established to enable specifications of the front end filters including both the notch filter and the antenna frequency response. A simulation program package was developed to be included in SOLARSIM to perform tradeoffs of system parameters based on minimizing the phase error for the pilot phase extraction. An analytical model that characterizes the overall power transponder operation was developed. From this model, the effects of different phase noise disturbance sources that contribute to phase variations at the output of the power transponders were studied and quantified. Results indicate that it is feasible to hold the antenna array phase error to less than one degree per power module for the type of disturbances modeled.
MUSiC - A Generic Search for Deviations from Monte Carlo Predictions in CMS
NASA Astrophysics Data System (ADS)
Hof, Carsten
2009-05-01
We present a model independent analysis approach, systematically scanning the data for deviations from the Standard Model Monte Carlo expectation. Such an analysis can contribute to the understanding of the CMS detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. We outline the importance of systematic uncertainties, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving supersymmetry and new heavy gauge bosons have been used as an input to the search algorithm.
A kinematic model to assess spinal motion during walking.
Konz, Regina J; Fatone, Stefania; Stine, Rebecca L; Ganju, Aruna; Gard, Steven A; Ondra, Stephen L
2006-11-15
A 3-dimensional multi-segment kinematic spine model was developed for noninvasive analysis of spinal motion during walking. Preliminary data from able-bodied ambulators were collected and analyzed using the model. Neither the spine's role during walking nor the effect of surgical spinal stabilization on gait is fully understood. Typically, gait analysis models disregard the spine entirely or regard it as a single rigid structure. Data on regional spinal movements, in conjunction with lower limb data, associated with walking are scarce. KinTrak software (Motion Analysis Corp., Santa Rosa, CA) was used to create a biomechanical model for analysis of 3-dimensional regional spinal movements. Measuring known angles from a mechanical model and comparing them to the calculated angles validated the kinematic model. Spine motion data were collected from 10 able-bodied adults walking at 5 self-selected speeds. These results were compared to data reported in the literature. The uniaxial angles measured on the mechanical model were within 5 degrees of the calculated kinematic model angles, and the coupled angles were within 2 degrees. Regional spine kinematics from able-bodied subjects calculated with this model compared well to data reported by other authors. A multi-segment kinematic spine model has been developed and validated for analysis of spinal motion during walking. By understanding the spine's role during ambulation and the cause-and-effect relationship between spine motion and lower limb motion, preoperative planning may be augmented to restore normal alignment and balance with minimal negative effects on walking.
Mallak, Shadi Kafi; Bakri Ishak, Mohd; Mohamed, Ahmad Fariz
2016-09-13
Malaysia is facing an increasing trend in industrial solid waste generation due to industrial development. Thus there is a paramount need to take serious action to move toward sustainable industrial waste management. The main aim of this study is to assess the practice of solid waste minimization by manufacturing firms in the Shah Alam industrial area, Malaysia. This paper presents a series of descriptive and inferential statistical analyses of the level and effects of practicing waste minimization methods, and of the seriousness of barriers preventing industries from practicing them. For this purpose the survey was designed so that both quantitative (questionnaire) and qualitative (semi-structured interview) data were collected concurrently. Analysis showed that the majority of firms (92%) dispose of their wastes rather than practice other, more sustainable waste management options. Waste minimization methods such as segregation of wastes, on-site recycling and reuse, improved housekeeping, and equipment modification were found to make a significant contribution to waste reduction (p<0.05). Lack of expertise (M=3.50), lack of information (M=3.54), lack of equipment modification (M=3.16), and lack of specific waste minimization guidelines (M=3.49) had higher mean scores than other barriers in their categories. These data were used to elaborate SWOT and TOWS matrices to highlight strengths, weaknesses, threats and opportunities. Accordingly, ten policies were recommended for improving the practice of waste minimization by manufacturing firms, as the main aim of this research. Implications: This manuscript critically analyses waste minimization practices by manufacturing firms in Malaysia. Both qualitative and quantitative data collection and analysis were conducted to formulate the SWOT and TOWS matrices and to recommend policies and strategies for improving solid waste minimization by manufacturing industries. The results contribute to the knowledge, and the findings of this study provide useful baseline information and data on industrial solid waste generation and waste minimization practice.
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.
Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform Lack of Fit tests. Moreover, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In this work, a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell's (1986) two ten-point designs for the Lack of Fit test by simulation.
Non-minimal gravitational reheating during kination
NASA Astrophysics Data System (ADS)
Dimopoulos, Konstantinos; Markkanen, Tommi
2018-06-01
A new mechanism is presented which can reheat the Universe in non-oscillatory models of inflation, where the inflation period is followed by a period dominated by the kinetic density for the inflaton field (kination). The mechanism considers an auxiliary field non-minimally coupled to gravity. The auxiliary field is a spectator during inflation, rendered heavy by the non-minimal coupling to gravity. During kination however, the non-minimal coupling generates a tachyonic mass, which displaces the field, until its bare mass becomes important, leading to coherent oscillations. Then, the field decays into the radiation bath of the hot big bang. The model is generic and predictive, in that the resulting reheating temperature is a function only of the model parameters (masses and couplings) and not of initial conditions. It is shown that reheating can be very efficient also when considering only the Standard Model.
Chakravorty, Arghya; Jia, Zhe; Li, Lin; Zhao, Shan; Alexov, Emil
2018-02-13
Typically, the ensemble average polar component of solvation energy (ΔG_solv^polar) of a macromolecule is computed using molecular dynamics (MD) or Monte Carlo (MC) simulations to generate a conformational ensemble, and then a single/rigid conformation solvation energy calculation is performed on each snapshot. The primary objective of this work is to demonstrate that a Poisson-Boltzmann (PB)-based approach using a Gaussian-based smooth dielectric function for macromolecular modeling previously developed by us (Li et al. J. Chem. Theory Comput. 2013, 9 (4), 2126-2136) can reproduce the ensemble average (ΔG_solv^polar) of a protein from a single structure. We show that the Gaussian-based dielectric model reproduces the ensemble average ΔG_solv^polar (⟨ΔG_solv^polar⟩) from an energy-minimized structure of a protein regardless of the minimization environment (structure minimized in vacuo, implicit or explicit waters, or crystal structure); the best case, however, is when it is paired with an in vacuo-minimized structure. In other minimization environments (implicit or explicit waters or crystal structure), the traditional two-dielectric model can still be selected, with which the model produces correct solvation energies. Our observations from this work reflect how the ability to appropriately mimic the motion of residues, especially the salt bridge residues, influences a dielectric model's ability to reproduce the ensemble average value of the polar solvation free energy from a single in vacuo-minimized structure.
Reuseable Objects Software Environment (ROSE): Introduction to Air Force Software Reuse Workshop
NASA Technical Reports Server (NTRS)
Cottrell, William L.
1994-01-01
The Reusable Objects Software Environment (ROSE) is a common, consistent, consolidated implementation of software functionality using modern object oriented software engineering including designed-in reuse and adaptable requirements. ROSE is designed to minimize abstraction and reduce complexity. A planning model for the reverse engineering of selected objects through object oriented analysis is depicted. Dynamic and functional modeling are used to develop a system design, the object design, the language, and a database management system. The return on investment for a ROSE pilot program and timelines are charted.
Analysis of intrapulse chirp in CO2 oscillators
NASA Technical Reports Server (NTRS)
Moody, Stephen E.; Berger, Russell G.; Thayer, William J., III
1987-01-01
Pulsed single-frequency CO2 laser oscillators are often used as transmitters for coherent lidar applications. These oscillators suffer from intrapulse chirp, or dynamic frequency shifting. If excessive, such chirp can limit the signal-to-noise ratio of the lidar (by generating excess bandwidth), or limit the velocity resolution if the lidar is of the Doppler type. This paper describes a detailed numerical model that considers all known sources of intrapulse chirp. Some typical predictions of the model are shown, and simple design rules to minimize chirp are proposed.
Reaction Wheel Disturbance Model Extraction Software - RWDMES
NASA Technical Reports Server (NTRS)
Blaurock, Carl
2009-01-01
The RWDMES is a tool for modeling the disturbances imparted on spacecraft by spinning reaction wheels. Reaction wheels are usually the largest disturbance source on a precision pointing spacecraft, and can be the dominating source of pointing error. Accurate knowledge of the disturbance environment is critical to accurate prediction of the pointing performance. In the past, it has been difficult to extract an accurate wheel disturbance model since the forcing mechanisms are difficult to model physically, and the forcing amplitudes are filtered by the dynamics of the reaction wheel. RWDMES captures the wheel-induced disturbances using a hybrid physical/empirical model that is extracted directly from measured forcing data. The empirical models capture the tonal forces that occur at harmonics of the spin rate, and the broadband forces that arise from random effects. The empirical forcing functions are filtered by a physical model of the wheel structure that includes spin-rate-dependent moments (gyroscopic terms). The resulting hybrid model creates a highly accurate prediction of wheel-induced forces. It accounts for variation in disturbance frequency, as well as the shifts in structural amplification by the whirl modes, as the spin rate changes. This software provides a point-and-click environment for producing accurate models with minimal user effort. Where conventional approaches may take weeks to produce a model of variable quality, RWDMES can create a demonstrably high accuracy model in two hours. The software consists of a graphical user interface (GUI) that enables the user to specify all analysis parameters, to evaluate analysis results and to iteratively refine the model. Underlying algorithms automatically extract disturbance harmonics, initialize and tune harmonic models, and initialize and tune broadband noise models. The component steps are described in the RWDMES user's guide and include: converting time domain data to waterfall PSDs (power spectral densities); converting PSDs to order analysis data; extracting harmonics; initializing and simultaneously tuning a harmonic model and a wheel structural model; initializing and tuning a broadband model; and verifying the harmonic/broadband/structural model against the measurement data. Functional operation is through a MATLAB GUI that loads test data, performs the various analyses, plots evaluation data for assessment and refinement of analysis parameters, and exports the data to documentation or downstream analysis code. The harmonic models are defined as specified functions of frequency, typically speed-squared. The reaction wheel structural model is realized as mass, damping, and stiffness matrices (typically from a finite element analysis package) with the addition of a gyroscopic forcing matrix. The broadband noise model is realized as a set of speed-dependent filters. The tuning of the combined model is performed using nonlinear least squares techniques. RWDMES is implemented as a MATLAB toolbox comprising the Fit Manager for performing the model extraction, Data Manager for managing input data and output models, the Gyro Manager for modifying wheel structural models, and the Harmonic Editor for evaluating and tuning harmonic models. This software was validated using data from Goodrich E wheels, and from GSFC Lunar Reconnaissance Orbiter (LRO) wheels. The validation testing proved that RWDMES has the capability to extract accurate disturbance models from flight reaction wheels with minimal user effort.
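The first two steps of that pipeline (time-domain force data to PSDs, then picking out amplitudes at harmonics of the spin rate) can be sketched with SciPy as below. The harmonic list, resolution and variable names are placeholders, and the tuned structural and broadband models of RWDMES are not reproduced here.

    # Sketch: PSD of a wheel force channel and rough amplitudes at spin-rate harmonics.
    import numpy as np
    from scipy.signal import welch

    def harmonic_amplitudes(force, fs, spin_hz, harmonics=(1, 2, 3, 4)):
        f, pxx = welch(force, fs=fs, nperseg=4096)             # power spectral density
        amps = {}
        for h in harmonics:
            idx = int(np.argmin(np.abs(f - h * spin_hz)))      # nearest PSD bin to h * spin rate
            amps[h] = np.sqrt(pxx[idx] * (f[1] - f[0]))        # rough RMS amplitude in that bin
        return amps

    # Repeating this for each wheel speed builds the waterfall/order-analysis data from
    # which harmonic coefficients (e.g. amplitude proportional to speed squared) are fitted.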
Model based systems engineering for astronomical projects
NASA Astrophysics Data System (ADS)
Karban, R.; Andolfato, L.; Bristow, P.; Chiozzi, G.; Esselborn, M.; Schilling, M.; Schmid, C.; Sommer, H.; Zamparelli, M.
2014-08-01
Model Based Systems Engineering (MBSE) is an emerging field of systems engineering for which the System Modeling Language (SysML) is a key enabler for descriptive, prescriptive and predictive models. This paper surveys some of the capabilities, expectations and peculiarities of tools-assisted MBSE experienced in real-life astronomical projects. The examples range in depth and scope across a wide spectrum of applications (for example documentation, requirements, analysis, trade studies) and purposes (addressing a particular development need, or accompanying a project throughout many - if not all - its lifecycle phases, fostering reuse and minimizing ambiguity). From the beginnings of the Active Phasing Experiment, through VLT instrumentation, VLTI infrastructure, Telescope Control System for the E-ELT, until Wavefront Control for the E-ELT, we show how stepwise refinements of tools, processes and methods have provided tangible benefits to customary system engineering activities like requirement flow-down, design trade studies, interfaces definition, and validation, by means of a variety of approaches (like Model Checking, Simulation, Model Transformation) and methodologies (like OOSEM, State Analysis)
Chai, Rifai; Naik, Ganesh R; Nguyen, Tuan Nghia; Ling, Sai Ho; Tran, Yvonne; Craig, Ashley; Nguyen, Hung T
2017-05-01
This paper presents a two-class electroencephalography-based classification of driver fatigue (fatigue state versus alert state) in 43 healthy participants. The system uses independent component analysis by entropy rate bound minimization (ERBM-ICA) for source separation, autoregressive (AR) modeling for feature extraction, and a Bayesian neural network as the classification algorithm. The classification results demonstrate a sensitivity of 89.7%, a specificity of 86.8%, and an accuracy of 88.2%. The combination of ERBM-ICA (source separator), AR (feature extractor), and Bayesian neural network (classifier) provides the best outcome (p-value < 0.05), with the highest area under the receiver operating characteristic curve (AUC-ROC = 0.93), compared with other methods such as power spectral density as the feature extractor (AUC-ROC = 0.81). The results of this study suggest the method could be utilized effectively in a countermeasure device for driver fatigue identification and other adverse event applications.
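As an illustration of the feature-extraction stage, the sketch below computes autoregressive coefficients per EEG channel (via Yule-Walker) and stacks them as a feature vector for a simple classifier; a scikit-learn MLP stands in for the Bayesian neural network and the ERBM-ICA step is omitted, so this is only a schematic of the pipeline.

    # AR-coefficient features per EEG channel, then a neural-network classifier.
    import numpy as np
    from statsmodels.regression.linear_model import yule_walker
    from sklearn.neural_network import MLPClassifier

    def ar_features(epoch, order=8):
        # epoch: array of shape (n_channels, n_samples) for one EEG segment
        feats = []
        for ch in epoch:
            rho, _sigma = yule_walker(ch, order=order)     # AR coefficients for this channel
            feats.extend(rho)
        return np.asarray(feats)

    # X = np.array([ar_features(e) for e in epochs]); y = labels (0 = alert, 1 = fatigued)
    # clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000).fit(X_train, y_train)
    # Sensitivity and specificity are then read off the confusion matrix on held-out data.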
Soley, Micheline B; Markmann, Andreas; Batista, Victor S
2018-06-12
We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m(t;β) and dipole μ(r,t;β) is evolved classically on the potential energy surface V(r) coupled to an electric field E(t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m(t;β), μ(r,t;β), and E(t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.
Bonanno, George A.; Diminich, Erica D.
2012-01-01
Background Research on resilience in the aftermath of potentially traumatic life events is still evolving. For decades researchers have documented resilience in children exposed to corrosive early environments, such as poverty or chronic maltreatment. Relatively more recently the study of resilience has migrated to the investigation of isolated and potentially traumatic life events (PTE) in adults. Methods In this article we first consider some of the key differences in the conceptualization of resilience following chronic adversity versus resilience following single-incident traumas, and then describe some of the misunderstandings that have developed about these constructs. To organize our discussion we introduce the terms emergent resilience and minimal-impact resilience to represent trajectories of positive adjustment in these two domains, respectively. Results We focused in particular on minimal-impact resilience, and reviewed recent advances in statistical modeling of latent trajectories that have informed the most recent research on minimal-impact resilience in both children and adults and the variables that predict it, including demographic variables, exposure, past and current stressors, resources, personality, positive emotion, coping and appraisal, and flexibility in coping and emotion regulation. Conclusions The research on minimal-impact resilience is nascent. Further research is warranted with implications for a multiple levels of analysis approach to elucidate the processes that may mitigate or modify the impact of a PTE at different developmental stages. PMID:23215790
Reilly, John; Glisic, Branko
2018-01-01
Temperature changes play a large role in the day to day structural behavior of structures, but a smaller direct role in most contemporary Structural Health Monitoring (SHM) analyses. Temperature-Driven SHM will consider temperature as the principal driving force in SHM, relating a measurable input temperature to measurable output generalized strain (strain, curvature, etc.) and generalized displacement (deflection, rotation, etc.) to create three-dimensional signatures descriptive of the structural behavior. Identifying time periods of minimal thermal gradient provides the foundation for the formulation of the temperature–deformation–displacement model. Thermal gradients in a structure can cause curvature in multiple directions, as well as non-linear strain and stress distributions within the cross-sections, which significantly complicates data analysis and interpretation, distorts the signatures, and may lead to unreliable conclusions regarding structural behavior and condition. These adverse effects can be minimized if the signatures are evaluated at times when thermal gradients in the structure are minimal. This paper proposes two classes of methods based on the following two metrics: (i) the range of raw temperatures on the structure, and (ii) the distribution of the local thermal gradients, for identifying time periods of minimal thermal gradient on a structure with the ability to vary the tolerance of acceptable thermal gradients. The methods are tested and validated with data collected from the Streicker Bridge on campus at Princeton University. PMID:29494496
Reilly, John; Glisic, Branko
2018-03-01
Temperature changes play a large role in the day to day structural behavior of structures, but a smaller direct role in most contemporary Structural Health Monitoring (SHM) analyses. Temperature-Driven SHM will consider temperature as the principal driving force in SHM, relating a measurable input temperature to measurable output generalized strain (strain, curvature, etc.) and generalized displacement (deflection, rotation, etc.) to create three-dimensional signatures descriptive of the structural behavior. Identifying time periods of minimal thermal gradient provides the foundation for the formulation of the temperature-deformation-displacement model. Thermal gradients in a structure can cause curvature in multiple directions, as well as non-linear strain and stress distributions within the cross-sections, which significantly complicates data analysis and interpretation, distorts the signatures, and may lead to unreliable conclusions regarding structural behavior and condition. These adverse effects can be minimized if the signatures are evaluated at times when thermal gradients in the structure are minimal. This paper proposes two classes of methods based on the following two metrics: (i) the range of raw temperatures on the structure, and (ii) the distribution of the local thermal gradients, for identifying time periods of minimal thermal gradient on a structure with the ability to vary the tolerance of acceptable thermal gradients. The methods are tested and validated with data collected from the Streicker Bridge on campus at Princeton University.
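The first of the two proposed metrics (the range of raw temperatures across the structure) lends itself to a very small sketch: flag the time stamps at which the spread of simultaneous sensor readings falls below a chosen tolerance. The column layout, file name and tolerance value are assumptions.

    # Identify time stamps with minimal thermal gradient from a table of sensor temperatures.
    import pandas as pd

    def minimal_gradient_times(temps, tol_degC=1.0):
        # temps: DataFrame indexed by time stamp, one column per temperature sensor.
        spread = temps.max(axis=1) - temps.min(axis=1)      # range of raw temperatures
        return temps.index[spread <= tol_degC]

    # temps = pd.read_csv("bridge_temps.csv", index_col=0, parse_dates=True)
    # good_times = minimal_gradient_times(temps, tol_degC=0.5)
    # Signatures are then evaluated only at good_times; the second metric would instead
    # threshold the local gradients between neighbouring sensor pairs.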
Systems Biology Perspectives on Minimal and Simpler Cells
Xavier, Joana C.; Patil, Kiran Raosaheb
2014-01-01
SUMMARY The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. PMID:25184563
Rapid Modeling and Analysis Tools: Evolution, Status, Needs and Directions
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Stone, Thomas J.; Ransom, Jonathan B. (Technical Monitor)
2002-01-01
Advanced aerospace systems are becoming increasingly more complex, and customers are demanding lower cost, higher performance, and high reliability. Increased demands are placed on the design engineers to collaborate and integrate design needs and objectives early in the design process to minimize risks that may occur later in the design development stage. High performance systems require better understanding of system sensitivities much earlier in the design process to meet these goals. The knowledge, skills, intuition, and experience of an individual design engineer will need to be extended significantly for the next generation of aerospace system designs. Then a collaborative effort involving the designer, rapid and reliable analysis tools and virtual experts will result in advanced aerospace systems that are safe, reliable, and efficient. This paper discusses the evolution, status, needs and directions for rapid modeling and analysis tools for structural analysis. First, the evolution of computerized design and analysis tools is briefly described. Next, the status of representative design and analysis tools is described along with a brief statement on their functionality. Then technology advancements to achieve rapid modeling and analysis are identified. Finally, potential future directions including possible prototype configurations are proposed.
Arsenyev, P A; Trezvov, V V; Saratovskaya, N V
1997-01-01
This work presents a method for determining the phase composition of calcium hydroxylapatite from its infrared spectrum. The method uses factor analysis of the spectral data from a calibration set of samples to determine the minimal number of factors required to reproduce the spectra within experimental error. Multiple linear regression is applied to establish a correlation between the factor scores of the calibration standards and their properties. The regression equations can then be used to predict the property value of an unknown sample. The regression model was built to determine the beta-tricalcium phosphate content in hydroxylapatite. A statistical assessment of the quality of the model was carried out. Applying factor analysis to the spectral data increases the accuracy of the beta-tricalcium phosphate determination and extends the range of determination toward lower concentrations. Reproducibility of the results is retained.
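As a rough illustration of the factor-scores-plus-regression workflow described above (not the authors' implementation), the sketch below uses PCA as a stand-in for factor analysis and ordinary linear regression on the scores. All data, dimensions, and the spectral shape are synthetic placeholders.

# Minimal sketch, assuming synthetic spectra: factor scores from a calibration
# set followed by multiple linear regression on those scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 20, 400                       # hypothetical calibration set
beta_tcp_content = rng.uniform(0, 10, n_samples)         # known property (wt.%)
base = np.sin(np.linspace(0, 6, n_wavenumbers))          # toy spectral signature
spectra = (np.outer(beta_tcp_content, base)
           + rng.normal(scale=0.05, size=(n_samples, n_wavenumbers)))

# Keep the minimal number of factors that reproduces the spectra within "noise"
pca = PCA(n_components=0.99)                             # retain 99% of spectral variance
scores = pca.fit_transform(spectra)

# Regress the property of interest on the factor scores
model = LinearRegression().fit(scores, beta_tcp_content)

# Predict the property of an unknown sample from its spectrum
unknown = 4.2 * base + rng.normal(scale=0.05, size=n_wavenumbers)
prediction = model.predict(pca.transform(unknown.reshape(1, -1)))
print(f"predicted beta-TCP content: {prediction[0]:.2f}")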
Hydromagnetic couple-stress nanofluid flow over a moving convective wall: OHAM analysis
NASA Astrophysics Data System (ADS)
Awais, M.; Saleem, S.; Hayat, T.; Irum, S.
2016-12-01
This communication presents the magnetohydrodynamics (MHD) flow of a couple-stress nanofluid over a convective moving wall. The flow dynamics are analyzed in the boundary layer region. The convective cooling phenomenon combined with thermophoresis and Brownian motion effects is discussed. Similarity transforms are utilized to convert the system of partial differential equations into coupled non-linear ordinary differential equations. The optimal homotopy analysis method (OHAM) is utilized, and the concept of minimization is employed by defining the average squared residual errors. Effects of the couple-stress parameter, the convective cooling parameter, and the energy enhancement parameters are displayed via graphs and discussed in detail. Various tables are also constructed to present the error analysis and a comparison of the obtained results with already published data. Streamlines are plotted showing the difference between the Newtonian fluid model and the couple-stress fluid model.
Cooley, Richard L.
1982-01-01
Prior information on the parameters of a groundwater flow model can be used to improve parameter estimates obtained from nonlinear regression solution of a modeling problem. Two scales of prior information can be available: (1) prior information having known reliability (that is, bias and random error structure) and (2) prior information consisting of best available estimates of unknown reliability. A regression method that incorporates the second scale of prior information assumes the prior information to be fixed for any particular analysis to produce improved, although biased, parameter estimates. Approximate optimization of two auxiliary parameters of the formulation is used to help minimize the bias, which is almost always much smaller than that resulting from standard ridge regression. It is shown that if both scales of prior information are available, then a combined regression analysis may be made.
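A hedged sketch of the general idea only, not Cooley's formulation: nonlinear least squares in which prior "best available" parameter estimates enter as extra penalized residuals, with a weight playing the role of an auxiliary parameter. The exponential model, parameter names, and all numbers are invented for illustration.

# Illustration: data residuals plus prior-information residuals in one fit.
import numpy as np
from scipy.optimize import least_squares

def model(params, x):
    k, s = params                        # hypothetical flow-model parameters
    return k * np.exp(-s * x)

x_obs = np.linspace(0.0, 5.0, 30)
true = np.array([2.0, 0.7])
rng = np.random.default_rng(1)
y_obs = model(true, x_obs) + rng.normal(scale=0.05, size=x_obs.size)

prior = np.array([1.5, 1.0])             # prior estimates of unknown reliability
weight = 0.3                             # auxiliary parameter controlling prior influence

def residuals(params):
    fit = (y_obs - model(params, x_obs)) / 0.05      # weighted data residuals
    reg = np.sqrt(weight) * (params - prior)         # pull toward the prior estimates
    return np.concatenate([fit, reg])

result = least_squares(residuals, x0=prior)
print("estimated parameters:", result.x)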
NASA Astrophysics Data System (ADS)
Dahms, Rainer N.
2016-04-01
A generalized framework for multi-component liquid injections is presented to understand and predict the breakdown of classic two-phase theory and spray atomization at engine-relevant conditions. The analysis focuses on the thermodynamic structure and the immiscibility state of representative gas-liquid interfaces. The most modern form of Helmholtz energy mixture state equation is utilized which exhibits a unique and physically consistent behavior over the entire two-phase regime of fluid densities. It is combined with generalized models for non-linear gradient theory and for liquid injections to quantify multi-component two-phase interface structures in global thermal equilibrium. Then, the Helmholtz free energy is minimized which determines the interfacial species distribution as a consequence. This minimal free energy state is demonstrated to validate the underlying assumptions of classic two-phase theory and spray atomization. However, under certain engine-relevant conditions for which corroborating experimental data are presented, this requirement for interfacial thermal equilibrium becomes unsustainable. A rigorously derived probability density function quantifies the ability of the interface to develop internal spatial temperature gradients in the presence of significant temperature differences between injected liquid and ambient gas. Then, the interface can no longer be viewed as an isolated system at minimal free energy. Instead, the interfacial dynamics become intimately connected to those of the separated homogeneous phases. Hence, the interface transitions toward a state in local equilibrium whereupon it becomes a dense-fluid mixing layer. A new conceptual view of a transitional liquid injection process emerges from a transition time scale analysis. Close to the nozzle exit, the two-phase interface still remains largely intact and more classic two-phase processes prevail as a consequence. Further downstream, however, the transition to dense-fluid mixing generally occurs before the liquid length is reached. The significance of the presented modeling expressions is established by a direct comparison to a reduced model, which utilizes widely applied approximations but fundamentally fails to capture the physical complexity discussed in this paper.
Dependence of the Firearm-Related Homicide Rate on Gun Availability: A Mathematical Analysis
Wodarz, Dominik; Komarova, Natalia L.
2013-01-01
In the USA, the relationship between the legal availability of guns and the firearm-related homicide rate has been debated. It has been argued that unrestricted gun availability promotes the occurrence of firearm-induced homicides. It has also been pointed out that gun possession can protect potential victims when attacked. This paper provides a first mathematical analysis of this tradeoff, with the goal to steer the debate towards arguing about assumptions, statistics, and scientific methods. The model is based on a set of clearly defined assumptions, which are supported by available statistical data, and is formulated axiomatically such that results do not depend on arbitrary mathematical expressions. According to this framework, two alternative scenarios can minimize the gun-related homicide rate: a ban of private firearms possession, or a policy allowing the general population to carry guns. Importantly, the model identifies the crucial parameters that determine which policy minimizes the death rate, and thus serves as a guide for the design of future epidemiological studies. The parameters that need to be measured include the fraction of offenders that illegally possess a gun, the degree of protection provided by gun ownership, and the fraction of the population who take up their right to own a gun and carry it when attacked. Limited data available in the literature were used to demonstrate how the model can be parameterized, and this preliminary analysis suggests that a ban of private firearm possession, or possibly a partial reduction in gun availability, might lower the rate of firearm-induced homicides. This, however, should not be seen as a policy recommendation, due to the limited data available to inform and parameterize the model. However, the model clearly defines what needs to be measured, and provides a basis for a scientific discussion about assumptions and data. PMID:23923062
NASA Astrophysics Data System (ADS)
Luo, Keqin
1999-11-01
The electroplating industry, with over 10,000 plating plants nationwide, is one of the major waste generators in industry. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants, which costs the industry tremendously for waste treatment and disposal and hinders its further development. There is, therefore, an urgent need for the industry to identify the technically most effective and economically most attractive methodologies and technologies to minimize waste while maintaining production competitiveness. This dissertation aims at developing a novel waste minimization (WM) methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, together with an intelligent decision support tool. The WM methodology consists of two parts: a heuristic knowledge-based qualitative WM decision analysis and support methodology, and a fundamental knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules. This becomes the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows a thorough process analysis of bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation, which are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that it is the first time for the electroplating industry to (i) use systematically the available WM strategies, (ii) know quantitatively and accurately what is going on in each tank, and (iii) identify all WM opportunities through process improvement. This work has formed a solid foundation for the further development of powerful WM technologies for comprehensive WM in the following decade.
NATbox: a network analysis toolbox in R.
Chavan, Shweta S; Bauer, Michael A; Scutari, Marco; Nagarajan, Radhakrishnan
2009-10-08
There has been recent interest in capturing the functional relationships (FRs) from high-throughput assays using suitable computational techniques. FRs elucidate the working of genes in concert as a system, as opposed to independent entities, and hence may provide preliminary insights into biological pathways and signalling mechanisms. Bayesian structure learning (BSL) techniques and their extensions have been used successfully for modelling FRs from expression profiles. Such techniques are especially useful in discovering undocumented FRs, investigating non-canonical signalling mechanisms and cross-talk between pathways. The objective of the present study is to develop a graphical user interface (GUI), NATbox: Network Analysis Toolbox in the language R, that houses a battery of BSL algorithms in conjunction with suitable statistical tools for modelling FRs in the form of acyclic networks from gene expression profiles and their subsequent analysis. NATbox is a menu-driven open-source GUI implemented in the R statistical language for modelling and analysis of FRs from gene expression profiles. It provides options to (i) impute missing observations in the given data, (ii) model FRs and network structure from gene expression profiles using a battery of BSL algorithms and identify robust dependencies using a bootstrap procedure, (iii) present the FRs in the form of acyclic graphs for visualization and investigate their topological properties using network analysis metrics, (iv) retrieve FRs of interest from published literature and subsequently use these FRs as structural priors in BSL, and (v) enhance scalability of BSL across high-dimensional data by parallelizing the bootstrap routines. NATbox provides a menu-driven GUI for modelling and analysis of FRs from gene expression profiles. By incorporating readily available functions from existing R-packages, it minimizes redundancy and improves reproducibility, transparency and sustainability, characteristic of open-source environments. NATbox is especially suited for interdisciplinary researchers and biologists with minimal programming experience who would like to use systems biology approaches without delving into the algorithmic aspects. The GUI provides appropriate parameter recommendations for the various menu options including default parameter choices for the user. NATbox can also prove to be a useful demonstration and teaching tool in graduate and undergraduate courses in systems biology. It has been tested successfully under Windows and Linux operating systems. The source code along with installation instructions and accompanying tutorial can be found at http://bioinformatics.ualr.edu/natboxWiki/index.php/Main_Page.
NASA Technical Reports Server (NTRS)
Findlay, J. T.; Kelly, G. M.; Troutman, P. A.
1984-01-01
A perturbation model to the Marshall Space Flight Center (MSFC) Global Reference Atmosphere Model (GRAM) was developed for use in Aeroassist Orbital Transfer Vehicle (AOTV) trajectory analysis. The model reflects NASA Space Shuttle experience over the first twelve entry flights. The GRAM was selected over the Air Force 1978 Reference Model because of its more general formulation and wider use throughout NASA. The add-on model, a simple scaling with altitude to reflect the density structure encountered by the Shuttle Orbiter, was selected principally to simplify implementation. Perturbations, by season, can be utilized to minimize the number of required simulations; however, the exact Shuttle flight history can be exercised using the same model if desired. Such a perturbation model, though not meteorologically motivated, enables inclusion of High Resolution Accelerometer Package (HiRAP) results in the thermosphere. Provision is made to incorporate differing perturbations during the AOTV entry and exit phases of the aeroassist maneuver to account for trajectory displacement (geographic) along the ground track.
Minimal realization of right-handed gauge symmetry
NASA Astrophysics Data System (ADS)
Nomura, Takaaki; Okada, Hiroshi
2018-01-01
We propose a minimally extended gauge symmetry model with U(1)_R, where only the right-handed fermions have nonzero charges in the fermion sector. To achieve both anomaly cancellation and minimality, three right-handed neutrinos are naturally required, and the standard model Higgs has to have nonzero charge under this symmetry. We then find that its breaking scale (Λ) is restricted by precise measurements of the neutral gauge boson in the standard model; therefore, O(10) TeV ≲ Λ. We also discuss the testability of the new gauge boson and the discrimination of the U(1)_R model from the U(1)_B-L one at colliders such as the LHC and ILC.
Analysis of an optimization-based atomistic-to-continuum coupling method for point defects
Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...
2015-11-16
Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.
NASA Astrophysics Data System (ADS)
Singh, Shiwangi; Bard, Deborah
2017-01-01
Weak gravitational lensing is an effective tool to map the structure of matter in the universe, and has been used for more than ten years as a probe of the nature of dark energy. Beyond the well-established two-point summary statistics, attention is now turning to methods that use the full statistical information available in the lensing observables, through analysis of the reconstructed shear field. This offers an opportunity to take advantage of powerful deep learning methods for image analysis. We present two early studies that demonstrate that deep learning can be used to characterise features in weak lensing convergence maps, and to identify the underlying cosmological model that produced them. We developed an unsupervised Denoising Convolutional Autoencoder model in order to learn an abstract representation directly from our data. This model uses a convolution-deconvolution architecture, which is fed with input data (corrupted with binomial noise to prevent over-fitting). Our model effectively trains itself to minimize the mean-squared error between the input and the output using gradient descent, resulting in a model which, theoretically, is broad enough to tackle other similarly structured problems. Using this model we were able to successfully reconstruct simulated convergence maps and identify the structures in them. We also determined which structures had the highest “importance”, i.e., which structures were most typical of the data. We note that the structures that had the highest importance in our reconstruction were around high mass concentrations, but were highly non-Gaussian. We also developed a supervised Convolutional Neural Network (CNN) for classification of weak lensing convergence maps from two different simulated theoretical models. The CNN uses a softmax classifier which minimizes a binary cross-entropy loss between the estimated distribution and true distribution. In other words, given an unseen convergence map the trained CNN determines probabilistically which theoretical model fits the data best. This preliminary work demonstrates that we can classify the cosmological model that produced the convergence maps with 80% accuracy.
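The following PyTorch sketch illustrates only the second ingredient described above, a small two-class CNN trained with a softmax/cross-entropy loss; the layer sizes, map resolution, and random stand-in data are assumptions, not the authors' architecture.

# Illustrative sketch: a small CNN that classifies simulated convergence maps
# into one of two cosmological models.
import torch
import torch.nn as nn

class MapClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)    # assumes 64x64 input maps

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))            # logits for two classes

model = MapClassifier()
loss_fn = nn.CrossEntropyLoss()                         # softmax + cross-entropy over 2 classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

maps = torch.randn(32, 1, 64, 64)                       # stand-in for convergence maps
labels = torch.randint(0, 2, (32,))                     # 0/1 = two simulated cosmologies
for _ in range(5):                                      # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(model(maps), labels)
    loss.backward()
    optimizer.step()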
Subsite mapping of enzymes. Depolymerase computer modelling.
Allen, J D; Thoma, J A
1976-01-01
We have developed a depolymerase computer model that uses a minimization routine. The model is designed so that, given experimental bond-cleavage frequencies for oligomeric substrates and experimental Michaelis parameters as a function of substrate chain length, the optimum subsite map is generated. The minimized sum of the weighted-squared residuals of the experimental and calculated data is used as a criterion of the goodness-of-fit for the optimized subsite map. The application of the minimization procedure to subsite mapping is explored through the use of simulated data. A procedure is developed whereby the minimization model can be used to determine the number of subsites in the enzymic binding region and to locate the position of the catalytic amino acids among these subsites. The degree of propagation of experimental variance into the subsite-binding energies is estimated. The question of whether hydrolytic rate coefficients are constant or a function of the number of filled subsites is examined. PMID:999629
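A generic sketch of the minimization idea only (not the original depolymerase model): subsite parameters are fitted by minimizing the weighted sum of squared residuals between observed and calculated quantities. The toy "calculated" function, the data, and the weights are all hypothetical.

# Weighted least-squares fit of a toy subsite map.
import numpy as np
from scipy.optimize import minimize

observed = np.array([0.10, 0.35, 0.40, 0.15])            # e.g., bond-cleavage frequencies
weights = 1.0 / np.array([0.02, 0.03, 0.03, 0.02]) ** 2   # inverse-variance weights

def calculated(subsite_energies):
    # Toy placeholder: a Boltzmann-like weighting of binding modes
    w = np.exp(-subsite_energies)
    return w / w.sum()

def objective(subsite_energies):
    r = observed - calculated(subsite_energies)
    return np.sum(weights * r ** 2)                       # weighted squared residuals

best = minimize(objective, x0=np.zeros(4), method="Nelder-Mead")
print("optimized subsite map:", best.x, "residual:", best.fun)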
Bandwidth Allocation to Interactive Users in DBS-Based Hybrid Internet
1998-01-01
3.1 Framework for queuing analysis: ON/OFF source traffic model; 3.2 Service quality. [...] minimizing the queuing delay. In consequence, we were interested in obtaining improvements in the service quality, as perceived by the users. [...] the service quality as perceived by users. The merit of this approach, first introduced in [8], is the ability to capture the characteristics of the [...]
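PLACEHOLDER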
Blading Design for Axial Turbomachines
1989-05-01
Three-dimensional, viscous computation systems appear to have a long development period ahead, in which fluid shear stress modeling and computation time [...] and n directions and T is the shear stress. As a consequence, the solution time is longer than for integral methods, dependent largely on the accuracy of [...] distributions over airfoils is an adaptation of thin-plate deflection theory from stress analysis. At the same time, it minimizes designer effort.
Semi-Markov Models for Degradation-Based Reliability
2010-01-01
Standard analysis techniques for Markov processes can be employed (cf. Whitt (1984), Altiok (1985), Perros (1994), and Osogami and Harchol-Balter [...]). We want to approximate X by a PH random variable, say Y, with c.d.f. Ĥ. Marie (1980), Altiok (1985), Johnson (1993), Perros (1994), and Osogami and [...] provides a minimal representation when matching only two moments. By considering the guidance provided by Marie (1980), Whitt (1984), Altiok (1985), Perros [...]
Alcohol use trajectories after high school graduation among emerging adults with type 1 diabetes.
Hanna, Kathleen M; Stupiansky, Nathan W; Weaver, Michael T; Slaven, James E; Stump, Timothy E
2014-08-01
To explore alcohol involvement trajectories and associated factors during the year post-high school (HS) graduation among emerging adults with type 1 diabetes. Youth (N = 181) self-reported alcohol use at baseline and every 3 months for 1 year post-HS graduation. Data were also collected on parent-youth conflict, diabetes self-efficacy, major life events, living and educational situations, diabetes management, marijuana use, cigarette smoking, and glycemic control. Trajectories of alcohol use were modeled using latent class growth analysis. Associations between trajectory class and specific salient variables were examined using analysis of variance, chi square, or generalized linear mixed model, as appropriate. Identified alcohol involvement trajectory classes were labeled as (1) consistent involvement group (n = 25, 13.8%) with stable, high use relative to other groups over the 12 months; (2) growing involvement group (n = 55, 30.4%) with increasing use throughout the 12 months; and (3) minimal involvement group (n = 101, 55.8%) with essentially no involvement until the ninth month. Those with minimal involvement had the best diabetes management and better diabetes self-efficacy than those with consistent involvement. In comparison with those minimally involved, those with growing involvement were more likely to live independently of parents; those consistently involved had more major life events; and both the growing and consistent involvement groups were more likely to have tried marijuana and cigarettes. This sample of emerging adults with type 1 diabetes has three unique patterns of alcohol use during the first year after HS. Copyright © 2014 Society for Adolescent Health and Medicine. Published by Elsevier Inc. All rights reserved.
Targeting the minimal supersymmetric standard model with the compact muon solenoid experiment
NASA Astrophysics Data System (ADS)
Bein, Samuel Louis
An interpretation of CMS searches for evidence of supersymmetry in the context of the minimal supersymmetric Standard Model (MSSM) is given. It is found that supersymmetric particles with color charge are excluded in the mass range below about 400 GeV, but neutral and weakly-charged sparticles remain non-excluded in all mass ranges. Discussion of the non-excluded regions of the model parameter space is given, including details on the strengths and weaknesses of existing searches, and recommendations for future analysis strategies. Advancements in the modeling of events arising from quantum chromodynamics and electroweak boson production, which are major backgrounds in searches for new physics at the LHC, are also presented. These methods have been implemented as components of CMS searches for supersymmetry in proton-proton collisions resulting in purely hadronic events (i.e., events with no identified leptons) at a center of momentum energy of 13 TeV. These searches, interpreted in the context of simplified models, exclude supersymmetric gluons (gluinos) up to masses of 1400 to 1600 GeV, depending on the model considered, and exclude scalar top quarks with masses up to about 800 GeV, assuming a massless lightest supersymmetric particle. A search for non-excluded supersymmetry models is also presented, which uses multivariate discriminants to isolate potential signal candidate events. The search achieves sensitivity to new physics models in background-dominated kinematic regions not typically considered by analyses, and rules out supersymmetry models that survived 7 and 8 TeV searches performed by CMS.
A contact stress model for multifingered grasps of rough objects
NASA Technical Reports Server (NTRS)
Sinha, Pramath Raj; Abel, Jacob M.
1990-01-01
The model developed utilizes a contact-stress analysis of an arbitrarily shaped object in a multifingered grasp. The fingers and the object are all treated as elastic bodies, and the region of contact is modeled as a deformable surface patch. The relationship between the friction and normal forces is nonlocal and nonlinear in nature and departs from the Coulomb approximation. The nature of the constraints arising out of conditions for compatibility and static equilibrium motivated the formulation of the model as a nonlinear constrained minimization problem. The model is able to predict the magnitude of the inwardly directed normal forces and both the magnitude and direction of the tangential (friction) forces at each finger-object interface for grasped objects in static equilibrium.
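A hedged sketch of the general formulation only (not the paper's contact-stress model): contact forces are chosen by constrained minimization subject to static equilibrium and friction-like bounds. The 2D geometry, friction coefficient, and load below are invented for illustration, and the friction limits are written as simple Coulomb-style linear bounds rather than the paper's nonlocal, nonlinear law.

import numpy as np
from scipy.optimize import minimize

normals = np.array([[1.0, 0.0], [-1.0, 0.0]])     # two opposing fingertip normals (2D)
tangents = np.array([[0.0, 1.0], [0.0, 1.0]])
weight = np.array([0.0, -9.81])                   # external load on the object (N)
mu = 0.4                                          # friction coefficient (assumed)

def objective(x):                                  # x = [n1, n2, t1, t2]
    return x[0] ** 2 + x[1] ** 2                   # keep normal (grip) forces small

def equilibrium(x):
    n1, n2, t1, t2 = x
    return (n1 * normals[0] + n2 * normals[1]
            + t1 * tangents[0] + t2 * tangents[1] + weight)   # must vanish

cons = [{"type": "eq", "fun": equilibrium},
        {"type": "ineq", "fun": lambda x: mu * x[0] - x[2]},   # |t1| <= mu*n1
        {"type": "ineq", "fun": lambda x: mu * x[0] + x[2]},
        {"type": "ineq", "fun": lambda x: mu * x[1] - x[3]},   # |t2| <= mu*n2
        {"type": "ineq", "fun": lambda x: mu * x[1] + x[3]}]
bounds = [(0, None), (0, None), (None, None), (None, None)]

sol = minimize(objective, x0=[5, 5, 0, 0], bounds=bounds, constraints=cons)
print("normal forces:", sol.x[:2], "friction forces:", sol.x[2:])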
Townsend, Molly T; Sarigul-Klijn, Nesrin
2016-01-01
Simplified material models are commonly used in computational simulation of biological soft tissue as an approximation of the complicated material response and to minimize computational resources. However, the simulation of complex loadings, such as long-duration tissue swelling, necessitates complex models that are not easy to formulate. This paper strives to offer a comprehensive updated Lagrangian formulation procedure for various non-linear material models for application in finite element analysis of biological soft tissues, including a definition of the Cauchy stress and the spatial tangential stiffness. The relationships between water content, osmotic pressure, ionic concentration and the pore pressure stress of the tissue are discussed, along with the merits of these models and their applications.
A review of failure models for unidirectional ceramic matrix composites under monotonic loads
NASA Technical Reports Server (NTRS)
Tripp, David E.; Hemann, John H.; Gyekenyesi, John P.
1989-01-01
Ceramic matrix composites offer significant potential for improving the performance of turbine engines. In order to achieve their potential, however, improvements in design methodology are needed. In the past, most components using structural ceramic matrix composites were designed by trial and error, since the emphasis on feasibility demonstration minimized the development of mathematical models. To understand the key parameters controlling response and the mechanics of failure, the development of structural failure models is required. A review of short term failure models with potential for ceramic matrix composite laminates under monotonic loads is presented. Phenomenological, semi-empirical, shear-lag, fracture mechanics, damage mechanics, and statistical models for the fast fracture analysis of continuous fiber unidirectional ceramic matrix composites under monotonic loads are surveyed.
Klamt, Steffen; Regensburger, Georg; Gerstl, Matthias P; Jungreuthmayer, Christian; Schuster, Stefan; Mahadevan, Radhakrishnan; Zanghellini, Jürgen; Müller, Stefan
2017-04-01
Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks.
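To make the geometric point concrete, the toy linear program below shows the kind of inhomogeneous constraints (minimum/maximum reaction rates) that, added to steady state and irreversibility, turn the flux cone into a general flux polyhedron on which FBA-style optimization operates. The three-reaction network and the bound of 10 are invented for illustration; this is not code from the paper.

# FBA-style sketch on a toy network: uptake -> A, A -> B, B -> biomass.
import numpy as np
from scipy.optimize import linprog

S = np.array([[ 1, -1,  0],     # metabolite A balance
              [ 0,  1, -1]])    # metabolite B balance
c = np.array([0, 0, -1])        # linprog minimizes, so maximize v3 via -v3
bounds = [(0, 10),              # uptake capped at 10 (inhomogeneous constraint)
          (0, None),            # irreversible internal reaction
          (0, None)]

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal flux vector:", res.x)    # expected: [10, 10, 10]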
Klamt, Steffen; Gerstl, Matthias P.; Jungreuthmayer, Christian; Mahadevan, Radhakrishnan; Müller, Stefan
2017-01-01
Elementary flux modes (EFMs) emerged as a formal concept to describe metabolic pathways and have become an established tool for constraint-based modeling and metabolic network analysis. EFMs are characteristic (support-minimal) vectors of the flux cone that contains all feasible steady-state flux vectors of a given metabolic network. EFMs account for (homogeneous) linear constraints arising from reaction irreversibilities and the assumption of steady state; however, other (inhomogeneous) linear constraints, such as minimal and maximal reaction rates frequently used by other constraint-based techniques (such as flux balance analysis [FBA]), cannot be directly integrated. These additional constraints further restrict the space of feasible flux vectors and turn the flux cone into a general flux polyhedron in which the concept of EFMs is not directly applicable anymore. For this reason, there has been a conceptual gap between EFM-based (pathway) analysis methods and linear optimization (FBA) techniques, as they operate on different geometric objects. One approach to overcome these limitations was proposed ten years ago and is based on the concept of elementary flux vectors (EFVs). Only recently has the community started to recognize the potential of EFVs for metabolic network analysis. In fact, EFVs exactly represent the conceptual development required to generalize the idea of EFMs from flux cones to flux polyhedra. This work aims to present a concise theoretical and practical introduction to EFVs that is accessible to a broad audience. We highlight the close relationship between EFMs and EFVs and demonstrate that almost all applications of EFMs (in flux cones) are possible for EFVs (in flux polyhedra) as well. In fact, certain properties can only be studied with EFVs. Thus, we conclude that EFVs provide a powerful and unifying framework for constraint-based modeling of metabolic networks. PMID:28406903
Minimal T-wave representation and its use in the assessment of drug arrhythmogenicity.
Shakibfar, Saeed; Graff, Claus; Kanters, Jørgen K; Nielsen, Jimmi; Schmidt, Samuel; Struijk, Johannes J
2017-05-01
Recently, numerous models and techniques have been developed for analyzing and extracting features from the T wave which could be used as biomarkers for drug-induced abnormalities. The majority of these techniques and algorithms use features that determine readily apparent characteristics of the T wave, such as duration, area, amplitude, and slopes. In the present work the T wave was down-sampled to a minimal rate, such that a good reconstruction was still possible. The entire T wave was then used as a feature vector to assess drug-induced repolarization effects. The ability of the samples or combinations of samples obtained from the minimal T-wave representation to correctly classify a group of subjects before and after receiving d,l-sotalol 160 mg and 320 mg was evaluated using a linear discriminant analysis (LDA). The results showed that a combination of eight samples from the minimal T-wave representation can be used to identify normal from abnormal repolarization significantly better compared to the heart rate-corrected QT interval (QTc). It was further indicated that the interval from the peak of the T wave to the end of the T wave (Tpe) becomes relatively shorter after I_Kr inhibition by d,l-sotalol and that the most pronounced repolarization changes were present in the ascending segment of the minimal T-wave representation. The minimal T-wave representation can potentially be used as a new tool to identify normal from abnormal repolarization in drug safety studies. © 2016 Wiley Periodicals, Inc.
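The sketch below illustrates only the classification step, an LDA on eight-sample feature vectors, using synthetic data in place of the study's ECG recordings; the subject counts, separation, and cross-validation setup are assumptions for demonstration.

# LDA on a down-sampled "T-wave" feature vector (synthetic stand-in data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_subjects, n_samples = 40, 8                      # 8 samples from the minimal T wave
baseline = rng.normal(0.0, 1.0, (n_subjects, n_samples))
post_drug = rng.normal(0.4, 1.0, (n_subjects, n_samples))   # shifted repolarization

X = np.vstack([baseline, post_drug])
y = np.array([0] * n_subjects + [1] * n_subjects)

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=5)          # classification accuracy
print("cross-validated accuracy: %.2f" % scores.mean())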
Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing
NASA Technical Reports Server (NTRS)
Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric
2016-01-01
This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
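A minimal sketch of the recursive least-squares (RLS) identification step mentioned above, applied to a made-up quadratic drag model; the regressor, coefficients, and noise level are assumptions and this is not the GTM/VCCTEF implementation.

# Standard RLS update identifying CD = CD0 + k1*delta + k2*delta^2 from streaming data.
import numpy as np

rng = np.random.default_rng(0)
true_theta = np.array([0.02, 0.05, 0.8])       # CD0, k1, k2 (hypothetical)

theta = np.zeros(3)                            # parameter estimate
P = np.eye(3) * 1e3                            # estimate covariance
lam = 0.995                                    # forgetting factor

for _ in range(500):
    delta = rng.uniform(-0.1, 0.1)             # flap deflection measurement
    phi = np.array([1.0, delta, delta ** 2])   # regressor vector
    cd = phi @ true_theta + rng.normal(scale=1e-4)

    K = P @ phi / (lam + phi @ P @ phi)        # gain
    theta = theta + K * (cd - phi @ theta)     # parameter update
    P = (P - np.outer(K, phi @ P)) / lam       # covariance update

print("identified coefficients:", theta)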
Amaral, Camilla F; Gomes, Rafael S; Rodrigues Garcia, Renata C M; Del Bel Cury, Altair A
2018-05-01
Studies have demonstrated the effectiveness of a single-implant-retained mandibular overdenture for elderly patients with edentulism. However, due to the high concentration of stress around the housing portion of the single implant, this prosthesis tends to fracture at the anterior region more than the 2-implant-retained mandibular overdenture. The purpose of this finite-element analysis study was to evaluate the stress distribution in a single-implant-retained mandibular overdenture reinforced with a cobalt-chromium framework, to minimize the incidence of denture base fracture. Two 3-dimensional finite element models of mandibular overdentures supported by a single implant with a stud attachment were designed in SolidWorks 2013 software. The only difference between the models was the presence or absence of a cobalt-chromium framework at the denture base between canines. Subsequently, the models were imported into the mathematical analysis software ANSYS Workbench v15.0. A mesh was generated with an element size of 0.7 mm and submitted to convergence analysis before mechanical simulation. All materials were considered to be homogeneous, isotropic, and linearly elastic. A 100-N load was applied to the incisal edge of the central mandibular incisors at a 30-degree angle. Maximum principal stress was calculated for the overdenture, von Mises stress was calculated for the attachment and implant, and minimum principal stress was calculated for cortical and cancellous bone. In both models, peak stress on the overdenture was localized at the anterior intaglio surface region around the implant. However, the presence of the framework reduced the stress by almost 62% compared with the overdenture without a framework (8.7 MPa and 22.8 MPa, respectively). Both models exhibited similar stress values in the attachment, implant, and bone. A metal framework reinforcement for a single-implant-retained mandibular overdenture concentrates less stress through the anterior area of the prosthesis and could minimize the incidence of fracture. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Karabatsos, George
2017-02-01
Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
Placebo non-response measure in sequential parallel comparison design studies.
Rybin, Denis; Doros, Gheorghe; Pencina, Michael J; Fava, Maurizio
2015-07-10
The Sequential Parallel Comparison Design (SPCD) is one of the novel approaches addressing placebo response. The analysis of SPCD data typically classifies subjects as 'placebo responders' or 'placebo non-responders'. Most current methods employed for analysis of SPCD data utilize only a part of the data collected during the trial. A repeated measures model was proposed for analysis of continuous outcomes that permitted the inclusion of information from all subjects into the treatment effect estimation. We describe here a new approach using a weighted repeated measures model that further improves the utilization of data collected during the trial, allowing the incorporation of information that is relevant to the placebo response, and dealing with the problem of possible misclassification of subjects. Our simulations show that when compared to the unweighted repeated measures model method, our approach performs as well or, under certain conditions, better, in preserving the type I error, achieving adequate power and minimizing the mean squared error. Copyright © 2015 John Wiley & Sons, Ltd.
A high throughput MATLAB program for automated force-curve processing using the AdG polymer model.
O'Connor, Samantha; Gaddis, Rebecca; Anderson, Evan; Camesano, Terri A; Burnham, Nancy A
2015-02-01
Research in understanding biofilm formation is dependent on accurate and representative measurements of the steric forces related to the polymer brush on bacterial surfaces. A MATLAB program to analyze force curves from an AFM efficiently, accurately, and with minimal user bias has been developed. The analysis is based on a modified version of the Alexander and de Gennes (AdG) polymer model, which is a function of equilibrium polymer brush length, probe radius, temperature, separation distance, and a density variable. Automating the analysis reduces the amount of time required to process 100 force curves from several days to less than 2 min. The use of this program to crop and fit force curves to the AdG model will allow researchers to ensure proper processing of large amounts of experimental data and reduce the time required for analysis and comparison of data, thereby enabling higher quality results in a shorter period of time. Copyright © 2014 Elsevier B.V. All rights reserved.
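A sketch of the curve-fitting idea only. The exponential steric-force approximation used below, F ≈ 50 kB T R L0 Γ^(3/2) exp(-2πD/L0), is one commonly quoted form of the AdG model and is taken here as an assumption; the program described in the abstract uses a modified expression, and the probe radius, units, and synthetic data are illustrative.

import numpy as np
from scipy.optimize import curve_fit

kBT_nN_nm = 4.11e-3      # thermal energy near 298 K in nN*nm (assumed)
R_nm = 20.0              # probe radius in nm (assumed)

def adg_force(D, L0, gamma):
    # D, L0 in nm; gamma = grafting density in chains/nm^2; force returned in nN
    return 50.0 * kBT_nN_nm * R_nm * L0 * gamma ** 1.5 * np.exp(-2.0 * np.pi * D / L0)

rng = np.random.default_rng(3)
D = np.linspace(5.0, 150.0, 120)                                     # separation (nm)
F = adg_force(D, 60.0, 0.1) + rng.normal(scale=0.05, size=D.size)    # synthetic force curve

popt, _ = curve_fit(adg_force, D, F, p0=(40.0, 0.2))
print("fitted brush length (nm): %.1f, grafting density (nm^-2): %.3f" % tuple(popt))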
Linear model for fast background subtraction in oligonucleotide microarrays.
Kroll, K Myriam; Barkema, Gerard T; Carlon, Enrico
2009-11-16
One important preprocessing step in the analysis of microarray data is background subtraction. In high-density oligonucleotide arrays this is recognized as a crucial step for the global performance of the data analysis from raw intensities to expression values. We propose here an algorithm for background estimation based on a model in which the cost function is quadratic in a set of fitting parameters such that minimization can be performed through linear algebra. The model incorporates two effects: 1) Correlated intensities between neighboring features in the chip and 2) sequence-dependent affinities for non-specific hybridization fitted by an extended nearest-neighbor model. The algorithm has been tested on 360 GeneChips from publicly available data of recent expression experiments. The algorithm is fast and accurate. Strong correlations between the fitted values for different experiments as well as between the free-energy parameters and their counterparts in aqueous solution indicate that the model captures a significant part of the underlying physical chemistry.
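A minimal illustration of the "quadratic cost reduces to linear algebra" idea, not the paper's actual background model: the background is written as a linear combination of basis terms (here two made-up probe features) and fitted by ordinary least squares in a single solve.

import numpy as np

rng = np.random.default_rng(0)
n_probes = 500
gc_content = rng.uniform(0.3, 0.7, n_probes)          # stand-in sequence feature
neighbor_mean = rng.normal(6.0, 0.5, n_probes)        # stand-in spatial (neighbor) feature

true_bg = 2.0 + 3.0 * gc_content + 0.5 * neighbor_mean
log_intensity = true_bg + rng.normal(scale=0.2, size=n_probes)

# Quadratic cost ||A x - y||^2 => closed-form least-squares solution
A = np.column_stack([np.ones(n_probes), gc_content, neighbor_mean])
coeffs, *_ = np.linalg.lstsq(A, log_intensity, rcond=None)
background = A @ coeffs
print("fitted coefficients:", coeffs)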
NASA Technical Reports Server (NTRS)
Dash, S. M.; York, B. J.; Sinha, N.; Dvorak, F. A.
1987-01-01
An overview of parabolic and PNS (Parabolized Navier-Stokes) methodology developed to treat highly curved sub and supersonic wall jets is presented. The fundamental data base to which these models were applied is discussed in detail. The analysis of strong curvature effects was found to require a semi-elliptic extension of the parabolic modeling to account for turbulent contributions to the normal pressure variations, as well as an extension to the turbulence models utilized, to account for the highly enhanced mixing rates observed in situations with large convex curvature. A noniterative, pressure split procedure is shown to extend parabolic models to account for such normal pressure variations in an efficient manner, requiring minimal additional run time over a standard parabolic approach. A new PNS methodology is presented to solve this problem which extends parabolic methodology via the addition of a characteristic base wave solver. Applications of this approach to analyze the interaction of wave and turbulence processes in wall jets is presented.
NASA Astrophysics Data System (ADS)
Widhiarso, Wahyu; Rosyidi, Cucuk Nur
2018-02-01
Minimizing production cost in a manufacturing company will increase the company's profit. The cutting parameters affect the total processing time, which in turn affects the production cost of the machining process. Besides affecting the production cost and processing time, the cutting parameters also affect the environment. An optimization model is needed to determine the optimum cutting parameters. In this paper, we develop an optimization model to minimize the production cost and the environmental impact of a CNC turning process. The model is formulated as a multi-objective optimization. Cutting speed and feed rate serve as the decision variables. The constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden using eco-indicator 99. A numerical example is given to show the implementation of the model, which is solved using the OptQuest tool of the Oracle Crystal Ball software. The optimization results indicate that the model can be used to optimize the cutting parameters to minimize the production cost and the environmental impact.
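A hedged sketch of a weighted-sum, two-objective optimization over cutting speed and feed rate. The cost, eco-impact, and roughness expressions, the coefficients, and the bounds are invented placeholders; the paper's model uses machining-specific relations and eco-indicator 99 values, and is solved with OptQuest rather than the solver used here.

import numpy as np
from scipy.optimize import minimize

def production_cost(x):
    v, f = x                                   # cutting speed (m/min), feed (mm/rev)
    machining_time = 1000.0 / (v * f)          # toy time model
    tool_wear = 1e-4 * v ** 1.5 * f            # toy wear model
    return 0.5 * machining_time + 20.0 * tool_wear

def eco_impact(x):
    v, f = x
    return 0.02 * v / f                        # toy "eco-indicator"-like score

w = 0.7                                        # weight between the two objectives
objective = lambda x: w * production_cost(x) + (1 - w) * eco_impact(x)

cons = [{"type": "ineq",                       # toy surface-roughness limit
         "fun": lambda x: 3.2 - 0.02 * x[1] ** 0.8 * x[0] ** 0.1}]
bounds = [(50, 300), (0.05, 0.5)]              # speed and feed limits (assumed)

sol = minimize(objective, x0=[100, 0.2], bounds=bounds, constraints=cons)
print("optimal cutting speed and feed rate:", sol.x)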
Systems biology perspectives on minimal and simpler cells.
Xavier, Joana C; Patil, Kiran Raosaheb; Rocha, Isabel
2014-09-01
The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Offshore wind farm layout optimization
NASA Astrophysics Data System (ADS)
Elkinton, Christopher Neil
Offshore wind energy technology is maturing in Europe and is poised to make a significant contribution to the U.S. energy production portfolio. Building on the knowledge the wind industry has gained to date, this dissertation investigates the influences of different site conditions on offshore wind farm micrositing---the layout of individual turbines within the boundaries of a wind farm. For offshore wind farms, these conditions include, among others, the wind and wave climates, water depths, and soil conditions at the site. An analysis tool has been developed that is capable of estimating the cost of energy (COE) from offshore wind farms. For this analysis, the COE has been divided into several modeled components: major costs (e.g. turbines, electrical interconnection, maintenance, etc.), energy production, and energy losses. By treating these component models as functions of site-dependent parameters, the analysis tool can investigate the influence of these parameters on the COE. Some parameters result in simultaneous increases of both energy and cost. In these cases, the analysis tool was used to determine the value of the parameter that yielded the lowest COE and, thus, the best balance of cost and energy. The models have been validated and generally compare favorably with existing offshore wind farm data. The analysis technique was then paired with optimization algorithms to form a tool with which to design offshore wind farm layouts for which the COE was minimized. Greedy heuristic and genetic optimization algorithms have been tuned and implemented. The use of these two algorithms in series has been shown to produce the best, most consistent solutions. The influences of site conditions on the COE have been studied further by applying the analysis and optimization tools to the initial design of a small offshore wind farm near the town of Hull, Massachusetts. The results of an initial full-site analysis and optimization were used to constrain the boundaries of the farm. A more thorough optimization highlighted the features of the area that would result in a minimized COE. The results showed reasonable layout designs and COE estimates that are consistent with existing offshore wind farms.
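As a toy illustration of the greedy stage of such a layout search (not the dissertation's COE models), the sketch below places turbines one at a time on a grid of candidate positions, each time choosing the position that minimizes a crude cost-of-energy proxy balancing cable-length cost against a wake-loss penalty. All distances, coefficients, and models are invented.

import itertools
import numpy as np

candidates = list(itertools.product(np.linspace(0, 2000, 9), repeat=2))  # 9x9 grid (m)
substation = np.array([0.0, 0.0])

def coe(layout):
    pts = np.asarray(layout)
    cable_cost = sum(np.linalg.norm(p - substation) for p in pts)          # proxy capital cost
    energy = 0.0
    for i, p in enumerate(pts):
        wake_loss = sum(np.exp(-np.linalg.norm(p - q) / 500.0)
                        for j, q in enumerate(pts) if j != i)              # proxy wake penalty
        energy += max(1.0 - 0.3 * wake_loss, 0.1)                          # relative yield
    return cable_cost / (energy * 1e3)                                     # "cost of energy"

layout = []
for _ in range(5):                                    # place 5 turbines greedily
    remaining = [c for c in candidates if c not in layout]
    best = min(remaining, key=lambda c: coe(layout + [c]))
    layout.append(best)

print("greedy layout:", layout)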
Balancing on tightropes and slacklines
Paoletti, P.; Mahadevan, L.
2012-01-01
Balancing on a tightrope or a slackline is an example of a neuromechanical task where the whole body both drives and responds to the dynamics of the external environment, often on multiple timescales. Motivated by a range of neurophysiological observations, here we formulate a minimal model for this system and use optimal control theory to design a strategy for maintaining an upright position. Our analysis of the open and closed-loop dynamics shows the existence of an optimal rope sag where balancing requires minimal effort, consistent with qualitative observations and suggestive of strategies for optimizing balancing performance while standing and walking. Our consideration of the effects of nonlinearities, potential parameter coupling and delays on the overall performance shows that although these factors change the results quantitatively, the existence of an optimal strategy persists. PMID:22513724
Robust dynamics in minimal hybrid models of genetic networks
Perkins, Theodore J.; Wilds, Roy; Glass, Leon
2010-01-01
Many gene-regulatory networks necessarily display robust dynamics that are insensitive to noise and stable under evolution. We propose that a class of hybrid systems can be used to relate the structure of these networks to their dynamics and provide insight into the origin of robustness. In these systems, the genes are represented by logical functions, and the controlling transcription factor protein molecules are real variables, which are produced and destroyed. As the transcription factor concentrations cross thresholds, they control the production of other transcription factors. We discuss mathematical analysis of these systems and show how the concepts of robustness and minimality can be used to generate putative logical organizations based on observed symbolic sequences. We apply the methods to control of the cell cycle in yeast. PMID:20921006
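A minimal sketch of a Glass-type hybrid model with two genes: each protein concentration decays continuously while its production rate is set by a Boolean function of which concentrations currently exceed their thresholds. The two-gene wiring (X activates Y, Y represses X), thresholds, and time step are toy choices, not the yeast cell-cycle network analyzed in the paper.

import numpy as np

theta = np.array([0.5, 0.5])                   # switching thresholds

def production(state):                         # Boolean logic on thresholded states
    x_high, y_high = state
    return np.array([1.0 if not y_high else 0.0,    # gene X repressed by Y
                     1.0 if x_high else 0.0])       # gene Y activated by X

x = np.array([0.2, 0.8])
dt, traj = 0.01, []
for _ in range(3000):
    b = production(x > theta)                  # piecewise-constant production rates
    x = x + dt * (b - x)                       # linear decay toward the focal point b
    traj.append(x.copy())

print("final concentrations:", traj[-1])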
Robust dynamics in minimal hybrid models of genetic networks.
Perkins, Theodore J; Wilds, Roy; Glass, Leon
2010-11-13
Many gene-regulatory networks necessarily display robust dynamics that are insensitive to noise and stable under evolution. We propose that a class of hybrid systems can be used to relate the structure of these networks to their dynamics and provide insight into the origin of robustness. In these systems, the genes are represented by logical functions, and the controlling transcription factor protein molecules are real variables, which are produced and destroyed. As the transcription factor concentrations cross thresholds, they control the production of other transcription factors. We discuss mathematical analysis of these systems and show how the concepts of robustness and minimality can be used to generate putative logical organizations based on observed symbolic sequences. We apply the methods to control of the cell cycle in yeast.
[Three dimensional mathematical model of tooth for finite element analysis].
Puskar, Tatjana; Vasiljević, Darko; Marković, Dubravka; Jevremović, Danimir; Pantelić, Dejan; Savić-Sević, Svetlana; Murić, Branka
2010-01-01
The mathematical model of the abutment tooth is the starting point of the finite element analysis of stress and deformation of dental structures. The simplest and easiest way is to form a model according to literature data on the dimensions and morphological characteristics of teeth. Our method is based on forming 3D models using standard geometrical forms (objects) in programs for solid modeling. The aim was to form the mathematical model of the abutment of the second upper premolar for finite element analysis of stress and deformation of dental structures. The abutment tooth has the form of a complex geometric object. It is suitable for modeling in the solid modeling program SolidWorks. After analysing the literature data about the morphological characteristics of teeth, we started the modeling by dividing the tooth (a complex geometric body) into simple geometric bodies (cylinder, cone, pyramid, ...). By connecting simple geometric bodies together or subtracting bodies from the basic body, we formed the complex geometric body, the tooth. The model was then transferred into Abaqus, a computational program for finite element analysis. The data were transferred using ACIS SAT, a standard file format for transferring 3D models. Using the solid modeling program SolidWorks, we developed three models of the abutment of the second maxillary premolar: the model of the intact abutment, the model of the endodontically treated tooth with two remaining cavity walls, and the model of the endodontically treated tooth with two remaining walls and an inserted post. Mathematical models of the abutment made according to the literature data are very similar to the real abutment, and the simplifications are minimal. These models enable calculations of stress and deformation of the dental structures. The finite element analysis provides useful information for understanding biomechanical problems and gives guidance for clinical research.
Evidence for surprise minimization over value maximization in choice behavior
Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl
2015-01-01
Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong
2018-04-12
Total variation (TV) minimization for sparse-view x-ray computed tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise constant assumption of the TV model, the reconstructed images often suffer from over-smoothing of image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for sparse-view CT reconstruction. For simplicity, the proposed algorithm is named 'PWLS-MSTV.' To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted by using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over the existing algorithms in terms of noise reduction, contrast-to-noise ratio, and edge preservation.
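For orientation only, the sketch below runs plain TV minimization on a toy denoising problem by gradient descent on a smoothed TV term; it is not the PWLS-MSTV reconstruction algorithm of the abstract, and the phantom, smoothing parameter, and step size are arbitrary choices.

# Gradient descent on 0.5*||u - noisy||^2 + lam*TV_eps(u) for a piecewise-constant image.
import numpy as np

def smoothed_tv_grad(u, eps=1e-3):
    gx = np.diff(u, axis=1, append=u[:, -1:])        # forward differences
    gy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps ** 2)
    px, py = gx / mag, gy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return -div                                       # gradient of the smoothed TV term

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0       # piecewise-constant phantom
noisy = clean + rng.normal(scale=0.3, size=clean.shape)

u, lam, step = noisy.copy(), 0.15, 0.2
for _ in range(200):
    u -= step * ((u - noisy) + lam * smoothed_tv_grad(u))

print("RMSE before: %.3f  after: %.3f"
      % (np.sqrt(((noisy - clean) ** 2).mean()), np.sqrt(((u - clean) ** 2).mean())))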
Analysis-Driven Design Optimization of a SMA-Based Slat-Cove Filler for Aeroacoustic Noise Reduction
NASA Technical Reports Server (NTRS)
Scholten, William; Hartl, Darren; Turner, Travis
2013-01-01
Airframe noise is a significant component of environmental noise in the vicinity of airports. The noise associated with the leading-edge slat of typical transport aircraft is a prominent source of airframe noise. Previous work suggests that a slat-cove filler (SCF) may be an effective noise treatment. Hence, development and optimization of a practical slat-cove-filler structure is a priority. The objectives of this work are to optimize the design of a functioning SCF which incorporates superelastic shape memory alloy (SMA) materials as flexures that permit the deformations involved in the configuration change. The goal of the optimization is to minimize the actuation force needed to retract the slat-SCF assembly while satisfying constraints on the maximum SMA stress and on the SCF deflection under static aerodynamic pressure loads, and while satisfying the condition that the SCF self-deploy during slat extension. A finite element analysis model based on a physical bench-top model is created in Abaqus so that automated iterative analysis of the design can be performed. In order to achieve an optimized design, several design variables associated with the current SCF configuration are considered, such as the thicknesses of the SMA flexures and the dimensions of various components, SMA and conventional. Design-of-experiments (DOE) studies are performed to investigate the structural response to an aerodynamic pressure load and to slat retraction and deployment. The DOE results are then used to inform the optimization process, which determines a design minimizing actuation force while satisfying the required constraints.
Minimal clinically important difference in the fibromyalgia impact questionnaire.
Bennett, Robert M; Bushmakin, Andrew G; Cappelleri, Joseph C; Zlateva, Gergana; Sadosky, Alesia B
2009-06-01
The Fibromyalgia Impact Questionnaire (FIQ) is a disease-specific composite instrument that measures the effect of problems experienced by patients with fibromyalgia (FM). Utilization of the FIQ in measuring changes due to interventions in FM requires derivation of a clinically meaningful change for that instrument. Analyses were conducted to estimate the minimal clinically important difference (MCID), and to propose FIQ severity categories. Data from 3 similarly designed, 3-month placebo-controlled, clinical treatment trials of pregabalin 300, 450, and 600 mg/day in patients with FM were modeled to estimate the change in the mean FIQ total and stiffness items corresponding to each category on the Patient Global Impression of Change. FIQ severity categories were modeled and determined using established pain severity cutpoints as an anchor. A total of 2228 patients, mean age 49 years, 93% women, with a mean baseline FIQ total score of 62 were treated in the 3 studies. Estimated MCIDs on a given measure were similar across the studies. In a pooled analysis the estimated MCID (95% confidence interval) for the FIQ total score was 14% (13; 15) and for FIQ stiffness it was 13% (12; 14). In the severity analysis a FIQ total score from 0 to <39 was found to represent a mild effect, ≥39 to <59 a moderate effect, and ≥59 to 100 a severe effect. The analysis indicates that a 14% change in the FIQ total score is clinically relevant, and results of these analyses should enhance the clinical utility of the FIQ in research and practice.
Importance of biometrics to addressing vulnerabilities of the U.S. infrastructure
NASA Astrophysics Data System (ADS)
Arndt, Craig M.; Hall, Nathaniel A.
2004-08-01
Human identification technologies are important threat countermeasures in minimizing select infrastructure vulnerabilities. Properly targeted countermeasures should be selected and integrated into an overall security solution based on disciplined analysis and modeling. Available data on infrastructure value, threat intelligence, and system vulnerabilities are carefully organized, analyzed and modeled. Prior to design and deployment of an effective countermeasure, the proper role and appropriateness of technology in addressing the overall set of vulnerabilities is established. Deployment of biometrics systems, as with other countermeasures, introduces potentially heightened vulnerabilities into the system. Heightened vulnerabilities may arise from both the newly introduced system complexities and an unfocused understanding of the set of vulnerabilities impacted by the new countermeasure. The countermeasure's own inherent vulnerabilities and those introduced by its integration with the existing system are analyzed and modeled to determine the overall vulnerability impact. The United States infrastructure is composed of government and private assets. Infrastructure assets are valued by their potential impact on several components: human physical safety, physical/information replacement/repair cost, potential contribution to future loss (criticality in weapons production), direct productivity output, national macro-economic output/productivity, and information integrity. These components must be considered in determining the overall impact of an infrastructure security breach. Cost/benefit analysis is then incorporated in the security technology deployment decision process. Overall security risks based on system vulnerabilities and threat intelligence determine areas of potential benefit. Biometric countermeasures are often considered when additional security at intended points of entry would minimize vulnerabilities.
Primary urethral reconstruction: the cost minimized approach to the bulbous urethral stricture.
Rourke, Keith F; Jordan, Gerald H
2005-04-01
Treatment for urethral stricture disease often requires a choice between readily available direct vision internal urethrotomy (DVIU) and highly efficacious but more technically complex open urethral reconstruction. Using the short segment bulbous urethral stricture as a model, we determined which strategy is less costly. The costs of DVIU and open urethral reconstruction with stricture excision and primary anastomosis for a 2 cm bulbous urethral stricture were compared using a cost minimization decision analysis model. Clinical probability estimates for the DVIU treatment arm were the risk of bleeding, urinary tract infection and the risk of stricture recurrence. Estimates for the primary urethral reconstruction strategy were the risk of wound complications, complications of exaggerated lithotomy and the risk of treatment failure. Direct third party payer costs were determined in 2002 United States dollars. The model predicted that treatment with DVIU was more costly ($17,747 per patient) than immediate open urethral reconstruction ($16,444 per patient). This yielded an incremental cost savings of $1,304 per patient, favoring urethral reconstruction. Sensitivity analysis revealed that primary treatment with urethroplasty was economically advantageous within the range of clinically relevant events. Treatment with DVIU became more favorable when the long-term risk of stricture recurrence after DVIU was less than 60%. Treatment for short segment bulbous urethral strictures with primary reconstruction is less costly than treatment with DVIU. From a fiscal standpoint, urethral reconstruction should be considered over DVIU in the majority of clinical circumstances.
Main steam line break accident simulation of APR1400 using the model of ATLAS facility
NASA Astrophysics Data System (ADS)
Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.
2018-02-01
A main steam line break simulation for APR1400 as an advanced PWR design has been performed using the RELAP5 code. The simulation was conducted in a model of the thermal-hydraulic test facility called ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, whose initial conditions and assumptions for the analysis were utilized in performing the simulation and analysis of the selected parameters. The objective of this work was to conduct a benchmark activity by comparing the simulation results of the CESEC-III code, a conservative-approach code, with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis by comparison with another best-estimate code. Uncertainties arising from the ATLAS model should be minimized by taking into account much more specific data in developing the APR1400 model.
Leadership: validation of a self-report scale.
Dussault, Marc; Frenette, Eric; Fernet, Claude
2013-04-01
The aim of this paper was to propose and test the factor structure of a new self-report questionnaire on leadership. A sample of 373 school principals in the Province of Quebec, Canada, completed the initial 46-item version of the questionnaire. In order to obtain a questionnaire of minimal length, a four-step procedure was retained. First, item analysis was performed using Classical Test Theory. Second, Rasch analysis was used to identify non-fitting or overlapping items. Third, a confirmatory factor analysis (CFA) using structural equation modelling was performed on the 21 remaining items to verify the factor structure of the scale. Results show that the model with a single third-order dimension (leadership), two second-order dimensions (transactional and transformational leadership), and one first-order dimension (laissez-faire leadership) provides a good fit to the data. Finally, invariance of the factor structure was assessed with a second sample of 222 vice-principals in the Province of Quebec, Canada. This model is in agreement with the theoretical model developed by Bass (1985), upon which the questionnaire is based.
Vectorlike fermions and Higgs effective field theory revisited
Chen, Chien-Yi; Dawson, S.; Furlan, Elisabetta
2017-07-10
Heavy vectorlike quarks (VLQs) appear in many models of beyond the Standard Model physics. Direct experimental searches require these new quarks to be heavy, ≳ 800-1000 GeV. Here, we perform a global fit of the parameters of simple VLQ models in minimal representations of SU(2)_L to precision data and Higgs rates. One interesting connection between anomalous Zb$\bar{b}$ interactions and Higgs physics in VLQ models is discussed. Finally, we present our analysis in an effective field theory (EFT) framework and show that the parameters of VLQ models are already highly constrained. Exact and approximate analytical formulas for the S and T parameters in the VLQ models we consider are available in the Supplemental Material as Mathematica files.
Minimal but non-minimal inflation and electroweak symmetry breaking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marzola, Luca; Institute of Physics, University of Tartu, Ravila 14c, 50411 Tartu; Racioppi, Antonio
2016-10-07
We consider the most minimal scale invariant extension of the standard model that allows for successful radiative electroweak symmetry breaking and inflation. The framework involves an extra scalar singlet, which plays the role of the inflaton, and is compatible with current experimental bounds owing to the non-minimal coupling of the latter to gravity. This inflationary scenario predicts a very low tensor-to-scalar ratio r ≈ 10^{-3}, typical of Higgs-inflation models, but in contrast yields a scalar spectral index n_s ≃ 0.97 which departs from the Starobinsky limit. We briefly discuss the collider phenomenology of the framework.
NASA Astrophysics Data System (ADS)
Zhang, Jingjing; Xie, Bin; Yu, Xingjian; Luo, Xiaobing; Zhang, Tao; Liu, Shishen; Yu, Zhihua; Liu, Li; Jin, Xing
2017-07-01
In this study, the blue light hazard performances of phosphor-converted light-emitting diodes (pc-LEDs) with red phosphor and red quantum dots (QDs) were compared and analyzed by spectral optimization, which determines the minimum attainable blue light hazard efficiency of radiation (BLHER) at high values of color rendering index (CRI) and luminous efficacy of radiation (LER) when the correlated color temperature (CCT) value changes from 1800 to 7800 K. It is found that the minimal BLHER value increases with the increase in the CCT value, and the minimal BLHER values of the two spectral models are nearly the same. Note that the QD model has advantages in CCT coverage under the same constraints of CRI and LER. Then, the relationships between minimal BLHER, CRI, CCT, and LER of pc-LEDs with the QD model were analyzed. It is found that the minimal BLHER values are nearly the same when the CRI value changes from 50 to 90. Therefore, the influence of CRI on minimal BLHER is insignificant. Minimal BLHER increases with the increase in the LER value from 240 to 360 lm/W.
Model verification of mixed dynamic systems. [POGO problem in liquid propellant rockets
NASA Technical Reports Server (NTRS)
Chrostowski, J. D.; Evensen, D. A.; Hasselman, T. K.
1978-01-01
A parameter-estimation method is described for verifying the mathematical model of mixed (combined interactive components from various engineering fields) dynamic systems against pertinent experimental data. The model verification problem is divided into two separate parts: defining a proper model and evaluating the parameters of that model. The main idea is to use differences between measured and predicted behavior (response) to adjust automatically the key parameters of a model so as to minimize response differences. To achieve the goal of modeling flexibility, the method combines the convenience of automated matrix generation with the generality of direct matrix input. The equations of motion are treated in first-order form, allowing for nonsymmetric matrices, modeling of general networks, and complex-mode analysis. The effectiveness of the method is demonstrated for an example problem involving a complex hydraulic-mechanical system.
Czaplewski, Cezary; Karczynska, Agnieszka; Sieradzan, Adam K; Liwo, Adam
2018-04-30
A server implementation of the UNRES package (http://www.unres.pl) for coarse-grained simulations of protein structures with the physics-based UNRES model, named the UNRES server, is presented. In contrast to most protein coarse-grained models, owing to its physics-based origin, the UNRES force field can be used in simulations, including those aimed at protein-structure prediction, without ancillary information from structural databases; however, the implementation includes the possibility of using restraints. Local energy minimization, canonical molecular dynamics simulations, replica exchange and multiplexed replica exchange molecular dynamics simulations can be run with the current UNRES server; the latter are suitable for protein-structure prediction. The user-supplied input includes the protein sequence and, optionally, restraints from secondary-structure prediction or small-angle X-ray scattering data, and the simulation type and parameters, which are selected or typed in. Oligomeric proteins, as well as those containing D-amino-acid residues and disulfide links, can be treated. The output is displayed graphically (minimized structures, trajectories, final models, analysis of trajectory/ensembles); however, all output files can be downloaded by the user. The UNRES server can be freely accessed at http://unres-server.chem.ug.edu.pl.
Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.
Chang, Joshua; Paydarfar, David
2014-12-01
Inducing a switch in neuronal state using energy optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy optimal waveforms for therapeutic neural stimulation that minimizes power usage and diminishes off-target effects and damage to neighboring tissue.
Dimensioning Principles in Potash and Salt: Stability and Integrity
NASA Astrophysics Data System (ADS)
Minkley, W.; Mühlbauer, J.; Lüdeling, C.
2016-11-01
The paper describes the principal geomechanical approaches to mine dimensioning in salt and potash mining, focusing on stability of the mining system and integrity of the hydraulic barrier. Several common dimensioning approaches are subjected to a comparative analysis. We identify geomechanical discontinuum models as essential physical ingredients for examining the collapse of working fields in potash mining. The basic mechanisms rely on the softening behaviour of the salt rocks and the interfaces. A visco-elasto-plastic material model with strain softening, dilatancy and creep describes the time-dependent softening behaviour of the salt pillars, while a shear model with velocity-dependent adhesive friction and shear-displacement-dependent softening is used for bedding planes and discontinuities. Pillar stability critically depends on the shear conditions of the bedding planes to the overlying and underlying beds, which provide the necessary confining pressure for the pillar core, but can fail dynamically, leading to large-scale field collapses. We further discuss the integrity conditions for the hydraulic barrier, most notably the minimal stress criterion, the violation of which leads to pressure-driven percolation as the mechanism of fluid transport and hence barrier failure. We present a number of examples where violation of the minimal stress criterion has led to mine flooding.
Solution of Ambrosio-Tortorelli model for image segmentation by generalized relaxation method
NASA Astrophysics Data System (ADS)
D'Ambra, Pasqua; Tartaglione, Gaetano
2015-03-01
Image segmentation addresses the problem of partitioning a given image into its constituent objects and then identifying the boundaries of the objects. This problem can be formulated in terms of a variational model aimed at finding optimal approximations of a bounded function by piecewise-smooth functions, minimizing a given functional. The corresponding Euler-Lagrange equations are a set of two coupled elliptic partial differential equations with varying coefficients. Numerical solution of the above system often relies on alternating minimization techniques involving descent methods coupled with explicit or semi-implicit finite-difference discretization schemes, which are slowly convergent and poorly scalable with respect to image size. In this work we focus on generalized relaxation methods also coupled with multigrid linear solvers, when a finite-difference discretization is applied to the Euler-Lagrange equations of the Ambrosio-Tortorelli model. We show that non-linear Gauss-Seidel, accelerated by inner linear iterations, is an effective method for large-scale image analysis such as that arising from high-throughput screening platforms for stem cells targeted differentiation, where one of the main goals is the segmentation of thousands of images to analyze cell colony morphology.
NPAC-Nozzle Performance Analysis Code
NASA Technical Reports Server (NTRS)
Barnhart, Paul J.
1997-01-01
A simple and accurate nozzle performance analysis methodology has been developed. The geometry modeling requirements are minimal and very flexible, thus allowing rapid design evaluations. The solution techniques accurately couple the continuity, momentum, energy, state, and other relations, which permits fast and accurate calculation of nozzle gross thrust. The control volume and internal flow analyses are capable of accounting for the effects of over/under expansion, flow divergence, wall friction, heat transfer, and mass addition/loss across surfaces. The results from the nozzle performance methodology are shown to be in excellent agreement with experimental data for a variety of nozzle designs over a range of operating conditions.
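For orientation only, the sketch below performs the kind of coupled continuity/momentum/energy/state bookkeeping the abstract describes, for an ideal (isentropic, one-dimensional) converging-diverging nozzle; it is not the NPAC code, and the chamber conditions, throat area and area ratio are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

g, R = 1.4, 287.0  # ratio of specific heats and gas constant [J/(kg K)] (assumed air)

def area_mach(M):
    """A/A* from the isentropic area-Mach relation."""
    return (1.0 / M) * ((2.0 / (g + 1.0)) * (1.0 + 0.5 * (g - 1.0) * M**2)) ** ((g + 1.0) / (2.0 * (g - 1.0)))

def gross_thrust(p0, T0, pa, A_throat, area_ratio):
    """Ideal gross thrust of a choked C-D nozzle: momentum plus pressure-area terms."""
    Me = brentq(lambda M: area_mach(M) - area_ratio, 1.0 + 1e-6, 10.0)  # supersonic branch
    Te = T0 / (1.0 + 0.5 * (g - 1.0) * Me**2)
    pe = p0 * (1.0 + 0.5 * (g - 1.0) * Me**2) ** (-g / (g - 1.0))
    Ve = Me * np.sqrt(g * R * Te)
    mdot = p0 * A_throat * np.sqrt(g / (R * T0)) * (2.0 / (g + 1.0)) ** ((g + 1.0) / (2.0 * (g - 1.0)))
    Ae = area_ratio * A_throat
    return mdot * Ve + (pe - pa) * Ae

# Hypothetical conditions: 400 kPa, 600 K chamber; sea-level ambient; 0.01 m^2 throat; Ae/A* = 1.6
print(f"Fg = {gross_thrust(4.0e5, 600.0, 101325.0, 0.01, 1.6):.0f} N")
```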
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batiy, V.G.; Stojanov, A.I.; Schmieman, E.
2007-07-01
A methodological approach to optimizing the schemes of solid radwaste management at the Object Shelter (Shelter) and the ChNPP industrial site during their transformation to an ecologically safe system was developed. On the basis of the model studies conducted, an ALARA analysis was carried out to choose the optimum variant of the schemes and technologies of solid radwaste management. The criteria for choosing the optimum schemes, which are directed at optimization of doses and financial expenses, minimization of the amount of radwaste formed, etc., were developed for this ALARA analysis. (authors)
Web-based Visual Analytics for Extreme Scale Climate Science
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steed, Chad A; Evans, Katherine J; Harney, John F
In this paper, we introduce a Web-based visual analytics framework for democratizing advanced visualization and analysis capabilities pertinent to large-scale earth system simulations. We address significant limitations of present climate data analysis tools such as tightly coupled dependencies, inefficient data movements, complex user interfaces, and static visualizations. Our Web-based visual analytics framework removes critical barriers to the widespread accessibility and adoption of advanced scientific techniques. Using distributed connections to back-end diagnostics, we minimize data movements and leverage HPC platforms. We also mitigate system dependency issues by employing a RESTful interface. Our framework embraces the visual analytics paradigm via new visual navigation techniques for hierarchical parameter spaces, multi-scale representations, and interactive spatio-temporal data mining methods that retain details. Although generalizable to other science domains, the current work focuses on improving exploratory analysis of large-scale Community Land Model (CLM) and Community Atmosphere Model (CAM) simulations.
Efficient use of single molecule time traces to resolve kinetic rates, models and uncertainties
NASA Astrophysics Data System (ADS)
Schmid, Sonja; Hugel, Thorsten
2018-03-01
Single molecule time traces reveal the time evolution of unsynchronized kinetic systems. Especially single molecule Förster resonance energy transfer (smFRET) provides access to enzymatically important time scales, combined with molecular distance resolution and minimal interference with the sample. Yet the kinetic analysis of smFRET time traces is complicated by experimental shortcomings—such as photo-bleaching and noise. Here we recapitulate the fundamental limits of single molecule fluorescence that render the classic, dwell-time based kinetic analysis unsuitable. In contrast, our Single Molecule Analysis of Complex Kinetic Sequences (SMACKS) considers every data point and combines the information of many short traces in one global kinetic rate model. We demonstrate the potential of SMACKS by resolving the small kinetic effects caused by different ionic strengths in the chaperone protein Hsp90. These results show an unexpected interrelation between conformational dynamics and ATPase activity in Hsp90.
Measurement and analysis of thrust force in drilling sisal-glass fiber reinforced polymer composites
NASA Astrophysics Data System (ADS)
Ramesh, M.; Gopinath, A.
2017-05-01
Drilling of composite materials is difficult compared to conventional materials because of their inhomogeneous nature. The forces developed during drilling play a major role in the surface quality of the hole and in minimizing the damage around it. This paper focuses on the effect of drilling parameters on thrust force in drilling of sisal-glass fiber reinforced polymer composite laminates. Quadratic response models are developed using response surface methodology (RSM) to predict the influence of cutting parameters on thrust force. The adequacy of the models is checked by using analysis of variance (ANOVA). A scanning electron microscope (SEM) analysis is carried out to analyze the quality of the drilled surface. From the results, it is found that the feed rate is the most influential parameter on the thrust force, followed by spindle speed, while the drill diameter is the least influential parameter.
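A minimal sketch of the quadratic response-surface fit described above follows; the factor ranges, the synthetic "true" thrust response and the noise level are all assumptions, and the ANOVA screening step is only indicated in a comment.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
speed = rng.uniform(1000, 3000, n)   # spindle speed [rpm] (hypothetical range)
feed  = rng.uniform(0.05, 0.25, n)   # feed [mm/rev]
diam  = rng.uniform(4, 12, n)        # drill diameter [mm]

# Hypothetical "true" thrust response used only to synthesize example data
thrust = 20 + 300*feed + 4*diam + 0.002*speed + 15*feed*diam + rng.normal(0, 2, n)

def quad_design(s, f, d):
    """Full quadratic RSM design: intercept, linear, interaction and square terms."""
    return np.column_stack([np.ones_like(s), s, f, d, s*f, s*d, f*d, s**2, f**2, d**2])

A = quad_design(speed, feed, diam)
coef, *_ = np.linalg.lstsq(A, thrust, rcond=None)   # least-squares response-surface fit
resid = thrust - A @ coef
print("R^2 =", 1 - resid.var() / thrust.var())
# An ANOVA-style screening would then compare each term's contribution to the
# residual sum of squares to judge which factors dominate the thrust force.
```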
NASA Astrophysics Data System (ADS)
Horesh, L.; Haber, E.
2009-09-01
The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application for inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
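For context, the sketch below solves only the standard ℓ1 sparse-coding subproblem (via ISTA) for a fixed, randomly generated placeholder dictionary; it does not implement the paper's dictionary-design procedure, and all sizes and parameters are assumptions.

```python
import numpy as np

def ista(D, y, lam=0.1, iters=500):
    """Iterative shrinkage-thresholding for min_x 0.5*||D x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data-fit gradient
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        z = x - (D.T @ (D @ x - y)) / L    # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft thresholding
    return x

rng = np.random.default_rng(0)
D = rng.standard_normal((50, 120))         # placeholder dictionary (not a learned one)
x_true = np.zeros(120); x_true[[3, 40, 77]] = [1.0, -2.0, 0.5]
y = D @ x_true + 0.01 * rng.standard_normal(50)
x_hat = ista(D, y, lam=0.05)
print("large coefficients recovered at:", np.flatnonzero(np.abs(x_hat) > 0.1))
```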
A computer-guided minimally-invasive technique for orthodontic forced eruption of impacted canines.
BERTELè, Matteo; Minniti, Paola P; Dalessandri, Domenico; Bonetti, Stefano; Visconti, Luca; Paganelli, Corrado
2016-06-01
The aim of this study was to develop a computer-guided minimally-invasive protocol for the surgical application of an orthodontic traction during the forced eruption of an impacted canine. 3Diagnosys® software was used to evaluate the position of the impacted canines and to plan the surgical access, taking into account soft and hard tissue thickness, the orthodontic traction path and the presence of possible obstacles. Geomagic® software was used for reverse engineering and Rhinoceros™ software was employed as a three-dimensional modeller in preparing individualized surgical guides. Surgical access was gained flapless through the use of a mucosal punch for soft tissues, followed by a trephine bur with a pre-adjusted stop for bone path creation. A diamond bur mounted on a SONICflex® 2003/L handpiece was used to prepare a 2-mm-deep calibrated hole in the canine enamel, into which a titanium screw connected with a stainless steel ligature was screwed. In vitro pull-out tests and radiological and SEM analyses were performed in order to investigate screw stability and position. In two out of ten samples the screw was removed after the application of a 1-kg pull-out force. Radiological and SEM analyses demonstrated that all the screws were inserted into the enamel without affecting dentine integrity. This computer-guided minimally-invasive technique allowed a precise and reliable positioning of the screws utilized during the orthodontic traction of impacted canines.
NASA Astrophysics Data System (ADS)
Shinnaka, Shinji
This paper presents a new unified analysis of estimation errors in model-matching extended-back-EMF estimation methods for sensorless drive of permanent-magnet synchronous motors. Analytical solutions for the estimation errors, whose validity is confirmed by numerical experiments, are rich in universality and applicability. As an example of this universality and applicability, a new trajectory-oriented vector control method is proposed, which can directly realize a quasi-optimal strategy minimizing total losses with no additional computational load by simply orienting one of the vector-control coordinates to the associated quasi-optimal trajectory. The coordinate orientation rule, which is analytically derived, is surprisingly simple. Consequently the trajectory-oriented vector control method can be applied to a number of conventional vector control systems using model-matching extended-back-EMF estimation methods.
NASA Astrophysics Data System (ADS)
Likhachev, Dmitriy V.
2017-06-01
Johs and Hale developed the Kramers-Kronig consistent B-spline formulation for dielectric function modeling in spectroscopic ellipsometry data analysis. In this article we use the popular Akaike, corrected Akaike and Bayesian Information Criteria (AIC, AICc and BIC, respectively) to determine an optimal number of knots for the B-spline model. These criteria allow finding a compromise between under- and overfitting of experimental data since they penalize an increasing number of knots and select the representation which achieves the best fit with a minimal number of knots. The proposed approach provides objective and practical guidance, as opposed to empirically driven or "gut feeling" decisions, for selecting the right number of knots for B-spline models in spectroscopic ellipsometry. The AIC, AICc and BIC selection criteria work remarkably well, as we demonstrate in several real-data applications. This approach formalizes the selection of the optimal knot number and may be useful from a practical perspective in spectroscopic ellipsometry data analysis.
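A minimal sketch of this knot-selection idea is shown below, assuming a Gaussian-likelihood form of AIC/AICc/BIC and counting the number of fitted parameters as the number of B-spline coefficients; the "spectrum" is synthetic and the knot placement uniform, both assumptions made for illustration.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

def criteria(n, rss, p):
    """Gaussian-likelihood AIC, AICc and BIC for p fitted parameters and n data points."""
    aic = n * np.log(rss / n) + 2 * p
    aicc = aic + 2 * p * (p + 1) / (n - p - 1)
    bic = n * np.log(rss / n) + p * np.log(n)
    return aic, aicc, bic

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(6 * x) + 0.3 * x**2 + 0.02 * rng.standard_normal(x.size)   # stand-in "spectrum"

k = 3  # cubic B-splines
for n_knots in (2, 5, 10, 20, 40):
    t = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]        # interior knots only
    spl = LSQUnivariateSpline(x, y, t, k=k)
    rss = float(np.sum((spl(x) - y) ** 2))
    p = len(t) + k + 1                                   # number of B-spline coefficients
    print(n_knots, ["%.1f" % v for v in criteria(x.size, rss, p)])
# The knot count minimizing AICc/BIC balances fit quality against model complexity.
```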
Nonlinear analysis of an improved continuum model considering headway change with memory
NASA Astrophysics Data System (ADS)
Cheng, Rongjun; Wang, Jufeng; Ge, Hongxia; Li, Zhipeng
2018-01-01
Considering the effect of headway changes with memory, an improved continuum model of traffic flow is proposed in this paper. By means of linear stability theory, the linear stability condition of the new model, including the effect of headway changes with memory, is obtained. Through nonlinear analysis, the KdV-Burgers equation is derived to describe the propagating behavior of the traffic density wave near the neutral stability line. Numerical simulation is carried out to study the improved traffic flow model, exploring how headway changes with memory affect each car's velocity, density and energy consumption. Numerical results show that when the effects of headway changes with memory are considered, traffic jams can be suppressed efficiently. Furthermore, the results demonstrate that the effect of headway changes with memory can avoid the disadvantage of historical information, which improves the stability of traffic flow and minimizes car energy consumption.
Embedded CLIPS for SDI BM/C3 simulation and analysis
NASA Technical Reports Server (NTRS)
Gossage, Brett; Nanney, Van
1990-01-01
Nichols Research Corporation is developing the BM/C3 Requirements Analysis Tool (BRAT) for the U.S. Army Strategic Defense Command. BRAT uses embedded CLIPS/Ada to model the decision making processes used by the human commander of a defense system. Embedding CLIPS/Ada in BRAT allows the user to explore the role of the human in Command and Control (C2) and the use of expert systems for automated C2. BRAT models assert facts about the current state of the system, the simulated scenario, and threat information into CLIPS/Ada. A user-defined rule set describes the decision criteria for the commander. We have extended CLIPS/Ada with user-defined functions that allow the firing of a rule to invoke a system action such as weapons release or a change in strategy. The use of embedded CLIPS/Ada will provide a powerful modeling tool for our customer at minimal cost.
MUSiC - A general search for deviations from monte carlo predictions in CMS
NASA Astrophysics Data System (ADS)
Biallass, Philipp A.; CMS Collaboration
2009-06-01
A model independent analysis approach in CMS is presented, systematically scanning the data for deviations from the Monte Carlo expectation. Such an analysis can contribute to the understanding of the detector and the tuning of the event generators. Furthermore, due to the minimal theoretical bias this approach is sensitive to a variety of models of new physics, including those not yet thought of. Events are classified into event classes according to their particle content (muons, electrons, photons, jets and missing transverse energy). A broad scan of various distributions is performed, identifying significant deviations from the Monte Carlo simulation. The importance of systematic uncertainties is outlined, which are taken into account rigorously within the algorithm. Possible detector effects and generator issues, as well as models involving Supersymmetry and new heavy gauge bosons are used as an input to the search algorithm.
Multiple-length-scale deformation analysis in a thermoplastic polyurethane
Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.
2015-01-01
Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales. PMID:25758945
Jeannotte, Guillaume; Lubell, William D
2004-11-10
For the first time, the influence of a fused Δ3-arylproline on peptide conformation has been studied by the synthesis and comparison of the conformations of peptides containing proline and pyrrolo-proline, 3 (PyPro). Pyrrolo-proline was demonstrated to be a conservative replacement for Pro in model beta-turns, 4 and 5, as shown by their similar DMSO titration curves, cis/trans-isomer populations, and NOESY spectral data. Pyrrolo-proline may thus be used for studying the structure-activity relationships of Pro-containing peptides with minimal modification of secondary structures.
Formulation, analysis and computation of an optimization-based local-to-nonlocal coupling method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Elia, Marta; Bochev, Pavel Blagoveston
2017-01-01
In this paper, we present an optimization-based coupling method for local and nonlocal continuum models. Our approach recasts the coupling of the models as a control problem where the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the local and nonlocal problem domains, and the virtual controls are the nonlocal volume constraint and the local boundary condition. We present the method in the context of Local-to-Nonlocal diffusion coupling. Numerical examples illustrate the theoretical properties of the approach.
Ultrasonic measurements of the reflection coefficient at a water/polyurethane foam interface.
Sagers, Jason D; Haberman, Michael R; Wilson, Preston S
2013-09-01
Measured ultrasonic reflection coefficients as a function of normal incidence angle are reported for several samples of polyurethane foam submerged in a water bath. Three reflection coefficient models are employed as needed in this analysis to approximate the measured data: (1) an infinite plane wave impinging on an elastic halfspace, (2) an infinite plane wave impinging on a single fluid layer overlying a fluid halfspace, and (3) a finite acoustic beam impinging on an elastic halfspace. The compressional wave speed in each sample is calculated by minimizing the sum of squared error (SSE) between the measured and modeled data.
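As a sketch of the simplest of the three models (a normal-incidence impedance contrast between water and a halfspace) and of the SSE fit for the sample's compressional speed, consider the following; the sample density and the "measured" reflection coefficients are hypothetical values chosen for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rho_w, c_w = 1000.0, 1480.0        # water density [kg/m^3] and sound speed [m/s]
rho_s = 1100.0                     # assumed foam-sample density [kg/m^3]

def refl_normal(c_s):
    """Normal-incidence reflection coefficient from the impedance contrast Z = rho*c."""
    Z_w, Z_s = rho_w * c_w, rho_s * c_s
    return (Z_s - Z_w) / (Z_s + Z_w)

# Hypothetical repeated measurements of the normal-incidence reflection coefficient
R_meas = np.array([0.115, 0.122, 0.108, 0.119])

sse = lambda c_s: np.sum((R_meas - refl_normal(c_s)) ** 2)   # sum of squared error
fit = minimize_scalar(sse, bounds=(800.0, 3000.0), method="bounded")
print(f"estimated compressional speed ~ {fit.x:.0f} m/s")
```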
The high-energy-density counterpropagating shear experiment and turbulent self-heating
Doss, F. W.; Fincke, J. R.; Loomis, E. N.; ...
2013-12-06
The counterpropagating shear experiment has previously demonstrated the ability to create regions of shock-driven shear, balanced symmetrically in pressure and experiencing minimal net drift. This allows for the creation of a high-Mach-number high-energy-density shear environment. New data from the counterpropagating shear campaign are presented, and both hydrocode modeling and theoretical analysis in the context of a Reynolds-averaged Navier-Stokes model suggest turbulent dissipation of energy from the supersonic flow bounding the layer is a significant driver in its expansion. A theoretical minimum shear flow Mach number threshold is suggested for substantial thermal-turbulence coupling.
Effects of tools inserted through snake-like surgical manipulators.
Murphy, Ryan J; Otake, Yoshito; Wolfe, Kevin C; Taylor, Russell H; Armand, Mehran
2014-01-01
Snake-like manipulators with a large, open lumen can offer improved treatment alternatives for minimally- and less-invasive surgeries. In these procedures, surgeons use the manipulator to introduce and control flexible tools in the surgical environment. This paper describes a predictive algorithm for estimating manipulator configuration given tip position for nonconstant-curvature, cable-driven manipulators using energy minimization. During experimental bending of the manipulator with and without a tool inserted in its lumen, images were recorded from an overhead camera in conjunction with actuation cable tension and length. To investigate the accuracy, the estimated manipulator configuration from the model and the ground-truth configuration measured from the image were compared. Additional analysis focused on the response differences for the manipulator with and without a tool inserted through the lumen. Results indicate that the energy minimization model predicts manipulator configuration with an error of 0.24 ± 0.22 mm without tools in the lumen and 0.24 ± 0.19 mm with tools in the lumen (no significant difference, p = 0.81). Moreover, tools did not introduce noticeable perturbations in the manipulator trajectory; however, there was an increase in the force required to reach a configuration. These results support the use of the proposed estimation method for calculating the shape of the manipulator with a tool inserted in its lumen when an accuracy of at least 1 mm is required.
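The following is a generic sketch of the energy-minimization idea, not the authors' manipulator model: a planar chain of rigid segments whose joint angles are found by minimizing a bending-energy surrogate plus a penalty on the tip-position error; the segment length, stiffness values and target tip position are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

L_SEG, N = 5.0, 10               # segment length [mm] and number of segments (assumed)
K_BEND = np.ones(N)              # per-joint bending stiffness (placeholder values)

def tip(theta):
    """Forward kinematics: tip position of a planar chain with cumulative joint angles."""
    phi = np.cumsum(theta)
    return np.array([np.sum(L_SEG * np.cos(phi)), np.sum(L_SEG * np.sin(phi))])

def objective(theta, tip_target, w=1e4):
    bending = 0.5 * np.sum(K_BEND * theta**2)                   # elastic-energy surrogate
    return bending + w * np.sum((tip(theta) - tip_target)**2)   # soft tip-position constraint

tip_target = np.array([40.0, 20.0])                             # measured tip position (hypothetical)
res = minimize(objective, x0=np.zeros(N), args=(tip_target,), method="BFGS")
print("predicted joint angles [rad]:", np.round(res.x, 3))
print("tip error [mm]:", np.linalg.norm(tip(res.x) - tip_target))
```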
Geographic Distribution and Ecology of Potential Malaria Vectors in the Republic of Korea
2009-05-01
species. Figure 4 shows a minimal spanning tree of the non-metric multidimensional scaling analysis of the means of the first 15 principal components ... to develop ecological niche models (ENMs) of the potential geographic distribution for eight anopheline species known to occur there. The areas ... predicted suitable for the Hyrcanus Group species were the most extensive for Anopheles sinensis Wiedemann, An. kleini Rueda, An. belenrae Rueda, and An
Simbol-X Background Minimization: Mirror Spacecraft Passive Shielding Trade-off Study
NASA Astrophysics Data System (ADS)
Fioretti, V.; Malaguti, G.; Bulgarelli, A.; Palumbo, G. G. C.; Ferri, A.; Attinà, P.
2009-05-01
The present work shows a quantitative trade-off analysis of the Simbol-X Mirror Spacecraft (MSC) passive shielding, in the phase space of the various parameters: mass budget, dimension, geometry and composition. A simplified physical (and geometrical) model of the sky screen, implemented by means of a GEANT4 simulation, has been developed to perform a performance-driven mass optimization and evaluate the residual background level on Simbol-X focal plane.
Strongly Correlated Electron Systems: An Operatorial Perspective
NASA Astrophysics Data System (ADS)
Di Ciolo, Andrea; Avella, Adolfo
2018-05-01
We discuss the operatorial approach to the study of strongly correlated electron systems and show how the exact solution of target models on small clusters chosen ad hoc (minimal models) can suggest very efficient bulk approximations. We use the Hubbard model as a case study (target model) and we analyze and discuss the crucial role of spin fluctuations in its 2-site realization (minimal model). Accordingly, we devise a novel three-pole approximation for the 2D case, including in the basic field an operator describing the dressing of the electronic one by the nearest-neighbor spin fluctuations. Such a solution is in very good agreement with the exact one in the minimal model (2-site case) and performs very well once compared to advanced (semi-)numerical methods in the 2D case, while being far less demanding of computational resources.
NASA Technical Reports Server (NTRS)
Prive, Nikki C.; Errico, Ronald M.
2013-01-01
A series of experiments that explore the roles of model and initial condition error in numerical weather prediction are performed using an observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO). The use of an OSSE allows the analysis and forecast errors to be explicitly calculated, and different hypothetical observing networks can be tested with ease. In these experiments, both a full global OSSE framework and an 'identical twin' OSSE setup are utilized to compare the behavior of the data assimilation system and evolution of forecast skill with and without model error. The initial condition error is manipulated by varying the distribution and quality of the observing network and the magnitude of observation errors. The results show that model error has a strong impact on both the quality of the analysis field and the evolution of forecast skill, including both systematic and unsystematic model error components. With a realistic observing network, the analysis state retains a significant quantity of error due to systematic model error. If errors of the analysis state are minimized, model error acts to rapidly degrade forecast skill during the first 24-48 hours of forward integration. In the presence of model error, the impact of observation errors on forecast skill is small, but in the absence of model error, observation errors cause a substantial degradation of the skill of medium range forecasts.
Improved bounds on the energy-minimizing strains in martensitic polycrystals
NASA Astrophysics Data System (ADS)
Peigney, Michaël
2016-07-01
This paper is concerned with the theoretical prediction of the energy-minimizing (or recoverable) strains in martensitic polycrystals, considering a nonlinear elasticity model of phase transformation at finite strains. The main results are some rigorous upper bounds on the set of energy-minimizing strains. Those bounds depend on the polycrystalline texture through the volume fractions of the different orientations. The simplest form of the bounds presented is obtained by combining recent results for single crystals with a homogenization approach proposed previously for martensitic polycrystals. However, the polycrystalline bound delivered by that procedure may fail to recover the monocrystalline bound in the homogeneous limit, as is demonstrated in this paper by considering an example related to tetragonal martensite. This motivates the development of a more detailed analysis, leading to improved polycrystalline bounds that are notably consistent with results for single crystals in the homogeneous limit. A two-orientation polycrystal of tetragonal martensite is studied as an illustration. In that case, analytical expressions of the upper bounds are derived and the results are compared with lower bounds obtained by considering laminate textures.
OptFlux: an open-source software platform for in silico metabolic engineering.
Rocha, Isabel; Maia, Paulo; Evangelista, Pedro; Vilaça, Paulo; Soares, Simão; Pinto, José P; Nielsen, Jens; Patil, Kiran R; Ferreira, Eugénio C; Rocha, Miguel
2010-04-19
Over the last few years a number of methods have been proposed for the phenotype simulation of microorganisms under different environmental and genetic conditions. These have been used as the basis to support the discovery of successful genetic modifications of the microbial metabolism to address industrial goals. However, the use of these methods has been restricted to bioinformaticians or other expert researchers. The main aim of this work is, therefore, to provide a user-friendly computational tool for Metabolic Engineering applications. OptFlux is an open-source and modular software aimed at being the reference computational application in the field. It is the first tool to incorporate strain optimization tasks, i.e., the identification of Metabolic Engineering targets, using Evolutionary Algorithms/Simulated Annealing metaheuristics or the previously proposed OptKnock algorithm. It also allows the use of stoichiometric metabolic models for (i) phenotype simulation of both wild-type and mutant organisms, using the methods of Flux Balance Analysis, Minimization of Metabolic Adjustment or Regulatory on/off Minimization of Metabolic flux changes, (ii) Metabolic Flux Analysis, computing the admissible flux space given a set of measured fluxes, and (iii) pathway analysis through the calculation of Elementary Flux Modes. OptFlux also contemplates several methods for model simplification and other pre-processing operations aimed at reducing the search space for optimization algorithms. The software supports importing/exporting to several flat file formats and it is compatible with the SBML standard. OptFlux has a visualization module that allows the analysis of the model structure; this module is compatible with the layout information of Cell Designer, allowing the superimposition of simulation results with the model graph. The OptFlux software is freely available, together with documentation and other resources, thus bridging the gap between research in strain optimization algorithms and the final users. It is a valuable platform for researchers in the field, making a number of useful tools available to them. Its open-source nature invites contributions by all those interested in making their methods available for the community. Given its plug-in based architecture it can be extended with new functionalities. Currently, several plug-ins are being developed, including network topology analysis tools and the integration with Boolean network based regulatory models.
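As a minimal illustration of the flux balance analysis phenotype simulation that tools of this kind wrap, the sketch below maximizes a "biomass" flux over a three-reaction toy network using scipy's linear-programming routine; the network, bounds and objective are entirely hypothetical and unrelated to any OptFlux model.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (hypothetical): R1 uptake -> A, R2: A -> B, R3: B -> biomass
S = np.array([[ 1, -1,  0],    # metabolite A balance
              [ 0,  1, -1]])   # metabolite B balance
bounds = [(0, 10), (0, 1000), (0, 1000)]   # uptake capped at 10 (arbitrary units)

# FBA: maximize the biomass flux v3 subject to steady state S v = 0 and flux bounds
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal fluxes v1..v3:", res.x)     # expected (10, 10, 10) for this toy model
```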
Evaluation of Two Crew Module Boilerplate Tests Using Newly Developed Calibration Metrics
NASA Technical Reports Server (NTRS)
Horta, Lucas G.; Reaves, Mercedes C.
2012-01-01
The paper discusses an application of multi-dimensional calibration metrics to evaluate pressure data from water drop tests of the Max Launch Abort System (MLAS) crew module boilerplate. Specifically, three metrics are discussed: 1) a metric to assess the probability of enveloping the measured data with the model, 2) a multi-dimensional orthogonality metric to assess model adequacy between test and analysis, and 3) a prediction error metric to conduct sensor placement to minimize pressure prediction errors. Data from similar (nearly repeated) capsule drop tests show significant variability in the measured pressure responses. When compared to the expected variability using model predictions, it is demonstrated that the measured variability cannot be explained by the model under the current uncertainty assumptions.
NASA Astrophysics Data System (ADS)
Fugett, James H.; Bennett, Haydon E.; Shrout, Joshua L.; Coad, James E.
2017-02-01
Expansions in minimally invasive medical devices and technologies with thermal mechanisms of action are continuing to advance the practice of medicine. These expansions have led to an increasing need for appropriate animal models to validate and quantify device performance. The planning of these studies should take into consideration a variety of parameters, including the appropriate animal model (test system - ex vivo or in vivo; species; tissue type), treatment conditions (test conditions), predicate device selection (as appropriate, control article), study timing (Day 0 acute to more than Day 90 chronic survival studies), and methods of tissue analysis (tissue dissection - staining methods). These considerations are discussed and illustrated using the fresh extirpated porcine longissimus muscle model for endometrial ablation.
Toward a preoperative planning tool for brain tumor resection therapies.
Coffey, Aaron M; Miga, Michael I; Chen, Ishita; Thompson, Reid C
2013-01-01
Neurosurgical procedures involving tumor resection require surgical planning such that the surgical path to the tumor is determined to minimize the impact on healthy tissue and brain function. This work demonstrates a predictive tool to aid neurosurgeons in planning tumor resection therapies by finding an optimal model-selected patient orientation that minimizes lateral brain shift in the field of view. Such orientations may facilitate tumor access and removal, possibly reduce the need for retraction, and could minimize the impact of brain shift on image-guided procedures. In this study, preoperative magnetic resonance images were utilized in conjunction with pre- and post-resection laser range scans of the craniotomy and cortical surface to produce patient-specific finite element models of intraoperative shift for 6 cases. These cases were used to calibrate a model (i.e., provide general rules for the application of patient positioning parameters) as well as determine the current model-based framework predictive capabilities. Finally, an objective function is proposed that minimizes shift subject to patient position parameters. Patient positioning parameters were then optimized and compared to our neurosurgeon as a preliminary study. The proposed model-driven brain shift minimization objective function suggests an overall reduction of brain shift by 23 % over experiential methods. This work recasts surgical simulation from a trial-and-error process to one where options are presented to the surgeon arising from an optimization of surgical goals. To our knowledge, this is the first realization of an evaluative tool for surgical planning that attempts to optimize surgical approach by means of shift minimization in this manner.
A linear goal programming model for human resource allocation in a health-care organization.
Kwak, N K; Lee, C
1997-06-01
This paper presents the development of a goal programming (GP) model as an aid to strategic planning and the allocation of limited human resources in a health-care organization. The purpose of this study is to assign personnel to the proper shift hours, enabling management to meet the objective of minimizing total payroll costs while keeping patients satisfied. A GP model is illustrated using data provided by a health-care organization in the Midwest. The goals are identified and prioritized. The model result is examined and a sensitivity analysis is performed to improve the model's applicability. The GP model application adds insight to the planning functions of resource allocation in health-care organizations. The proposed model is easily applicable to other human resource planning processes.
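A minimal weighted goal-programming sketch in the spirit of such a model is shown below: three shifts, a coverage goal per shift and a payroll-budget goal, with deviation variables penalized in the objective; all demands, costs, budget figures and priority weights are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

demand = np.array([12, 8, 5])          # nurses needed on day/evening/night (hypothetical)
cost   = np.array([300., 330., 380.])  # payroll cost per nurse per shift (hypothetical)
budget = 8000.0                        # payroll goal
w_cov, w_over = 1000.0, 1.0            # priority weights: under-coverage >> budget overrun

# Variables: x (3 staffing levels), u (3 under-coverage), o (3 over-coverage), bu, bo
c = np.concatenate([np.zeros(3), w_cov * np.ones(3), np.zeros(3), [0.0, w_over]])
A_eq = np.zeros((4, 11)); b_eq = np.zeros(4)
for s in range(3):                     # coverage goals: x_s + u_s - o_s = demand_s
    A_eq[s, s], A_eq[s, 3 + s], A_eq[s, 6 + s] = 1, 1, -1
    b_eq[s] = demand[s]
A_eq[3, :3] = cost                     # budget goal: cost . x + bu - bo = budget
A_eq[3, 9], A_eq[3, 10] = 1, -1
b_eq[3] = budget

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 11, method="highs")
x, over = res.x[:3], res.x[10]
# With coverage weighted heavily, demand is met exactly and a small budget overrun is accepted.
print("staffing per shift:", np.round(x, 1), " budget overrun:", round(over, 1))
```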
NASA Astrophysics Data System (ADS)
Greenwald, Jared
Any good physical theory must resolve current experimental data as well as offer predictions for potential searches in the future. The Standard Model of particle physics, Grand Unified Theories, Minimal Supersymmetric Models and Supergravity are all attempts to provide such a framework. However, they all lack the ability to predict many of the parameters that each of the theories utilizes. String theory may yield a solution to this naturalness (or self-predictiveness) problem as well as offer a unified theory of gravity. Studies in particle physics phenomenology based on perturbative low energy analysis of various string theories can help determine the candidacy of such models. After a review of principles and problems leading up to our current understanding of the universe, we will discuss some of the best particle physics model building techniques that have been developed using string theory. This will culminate in the introduction of a novel approach to a computational, systematic analysis of the various physical phenomena that arise from these string models. We focus on the necessary assumptions, complexity and open questions that arise while making a fully-automated flat direction analysis program.
Construction schedules slack time minimizing
NASA Astrophysics Data System (ADS)
Krzemiński, Michał
2017-07-01
The article presents two original models for minimizing the downtime of work brigades. The models have been developed for construction schedules executed using the uniform work method. Application of flow shop models is possible and useful for the implementation of large objects, which can be divided into plots. The article also presents a condition describing which model should be used, as well as a brief example of schedule optimization. The optimization results confirm the legitimacy of the work on the newly-developed models.
Optimization of power systems with voltage security constraints
NASA Astrophysics Data System (ADS)
Rosehart, William Daniel
As open access market principles are applied to power systems, significant changes in their operation and control are occurring. In the new marketplace, power systems are operating under higher loading conditions as market influences demand greater attention to operating cost versus stability margins. Since stability continues to be a basic requirement in the operation of any power system, new tools are being considered to analyze the effect of stability on the operating cost of the system, so that system stability can be incorporated into the costs of operating the system. In this thesis, new optimal power flow (OPF) formulations are proposed based on multi-objective methodologies to optimize active and reactive power dispatch while maximizing voltage security in power systems. The effects of minimizing operating costs, minimizing reactive power generation and/or maximizing voltage stability margins are analyzed. Results obtained using the proposed Voltage Stability Constrained OPF formulations are compared and analyzed to suggest possible ways of costing voltage security in power systems. When considering voltage stability margins the importance of system modeling becomes critical, since it has been demonstrated, based on bifurcation analysis, that modeling can have a significant effect on the behavior of power systems, especially at high loading levels. Therefore, this thesis also examines the effects of detailed generator models and several exponential load models. Furthermore, because of its influence on voltage stability, a Static Var Compensator model is also incorporated into the optimization problems.
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
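The alternating scheme described above can be sketched with two toy quadratic component misfits standing in for data-subset objectives; the consensus update and multiplier steps follow the general augmented-Lagrangian pattern, not the authors' specific implementation.

```python
# Hedged sketch: consensus-style decomposition. Two component objectives
# (stand-ins for data-subset misfits) are minimized separately, and Lagrange
# multiplier updates steer both copies of the model toward a common solution.
import numpy as np

def solve_component(A, b, z, lam, rho):
    """Minimize ||A m - b||^2 + lam.(m - z) + (rho/2)||m - z||^2 in closed form."""
    n = A.shape[1]
    lhs = 2 * A.T @ A + rho * np.eye(n)
    rhs = 2 * A.T @ b - lam + rho * z
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(0)
A1, b1 = rng.normal(size=(20, 3)), rng.normal(size=20)   # toy data subset 1
A2, b2 = rng.normal(size=(30, 3)), rng.normal(size=30)   # toy data subset 2
z = np.zeros(3)                       # common (consensus) model
lam1 = lam2 = np.zeros(3)             # Lagrange multipliers
rho = 1.0                             # augmented-Lagrangian penalty (assumed)

for _ in range(100):
    m1 = solve_component(A1, b1, z, lam1, rho)   # separate component solves
    m2 = solve_component(A2, b2, z, lam2, rho)
    z = (m1 + m2 + (lam1 + lam2) / rho) / 2      # merge into a common model
    lam1 = lam1 + rho * (m1 - z)                 # multiplier updates
    lam2 = lam2 + rho * (m2 - z)

print("consensus model:", z)
```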
Kim, Choong-Ki; Toft, Jodie E; Papenfus, Michael; Verutes, Gregory; Guerry, Anne D; Ruckelshaus, Marry H; Arkema, Katie K; Guannel, Gregory; Wood, Spencer A; Bernhardt, Joanna R; Tallis, Heather; Plummer, Mark L; Halpern, Benjamin S; Pinsky, Malin L; Beck, Michael W; Chan, Francis; Chan, Kai M A; Levin, Phil S; Polasky, Stephen
2012-01-01
Many hope that ocean waves will be a source for clean, safe, reliable and affordable energy, yet wave energy conversion facilities may affect marine ecosystems through a variety of mechanisms, including competition with other human uses. We developed a decision-support tool to assist siting wave energy facilities, which allows the user to balance the need for profitability of the facilities with the need to minimize conflicts with other ocean uses. Our wave energy model quantifies harvestable wave energy and evaluates the net present value (NPV) of a wave energy facility based on a capital investment analysis. The model has a flexible framework and can be easily applied to wave energy projects at local, regional, and global scales. We applied the model and compatibility analysis on the west coast of Vancouver Island, British Columbia, Canada to provide information for ongoing marine spatial planning, including potential wave energy projects. In particular, we conducted a spatial overlap analysis with a variety of existing uses and ecological characteristics, and a quantitative compatibility analysis with commercial fisheries data. We found that wave power and harvestable wave energy gradually increase offshore as wave conditions intensify. However, areas with high economic potential for wave energy facilities were closer to cable landing points because of the cost of bringing energy ashore and thus in nearshore areas that support a number of different human uses. We show that the maximum combined economic benefit from wave energy and other uses is likely to be realized if wave energy facilities are sited in areas that maximize wave energy NPV and minimize conflict with existing ocean uses. Our tools will help decision-makers explore alternative locations for wave energy facilities by mapping expected wave energy NPV and helping to identify sites that provide maximal returns yet avoid spatial competition with existing ocean uses.
Boughey, Judy C; Keeney, Gary L; Radensky, Paul; Song, Christine P; Habermann, Elizabeth B
2016-04-01
In the current health care environment, cost effectiveness is critically important in policy setting and care of patients. This study performed a health economic analysis to assess the implications to providers and payers of expanding the use of frozen section margin analysis to minimize reoperations for patients undergoing breast cancer lumpectomy. A health care economic impact model was built to assess annual costs associated with breast lumpectomy procedures with and without frozen section margin analysis to avoid reoperation. If frozen section margin analysis is used in 20% of breast lumpectomies and under a baseline assumption that 35% of initial lumpectomies without frozen section analysis result in reoperations, the potential annual cost savings are $18.2 million to payers and $0.4 million to providers. Under the same baseline assumption, if 100% of all health care facilities adopted the use of frozen section margin analysis for breast lumpectomy procedures, the potential annual cost savings are $90.9 million to payers and $1.8 million to providers. On the basis of 10,000 simulations, use of intraoperative frozen section margin analysis yields cost saving for payers and is cost neutral to slightly cost saving for providers. This economic analysis indicates that widespread use of frozen section margin evaluation intraoperatively to guide surgical resection in breast lumpectomy cases and minimize reoperations would be beneficial to cost savings not only for the patient but also for payers and, in most cases, for providers. Copyright © 2016 by American Society of Clinical Oncology.
Einstein’s gravity from a polynomial affine model
NASA Astrophysics Data System (ADS)
Castillo-Felisola, Oscar; Skirzewski, Aureliano
2018-03-01
We show that the effective field equations for a recently formulated polynomial affine model of gravity, in the sector of a torsion-free connection, accept general Einstein manifolds—with or without cosmological constant—as solutions. Moreover, the effective field equations are partially those obtained from a gravitational Yang–Mills theory known as Stephenson–Kilmister–Yang theory. Additionally, we find a generalization of a minimally coupled massless scalar field in General Relativity within a ‘minimally’ coupled scalar field in this affine model. Finally, we present a brief (perturbative) analysis of the propagators of the gravitational theory, and count the degrees of freedom. For completeness, we prove that a Birkhoff-like theorem is valid for the analyzed sector.
Stop-catalyzed baryogenesis beyond the MSSM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katz, Andrey; Perelstein, Maxim; Ramsey-Musolf, Michael J.
2015-11-19
Nonminimal supersymmetric models that predict a tree-level Higgs mass above the minimal supersymmetric standard model (MSSM) bound are well motivated by naturalness considerations. Indirect constraints on the stop sector parameters of such models are significantly relaxed compared to the MSSM; in particular, both stops can have weak-scale masses. We revisit the stop-catalyzed electroweak baryogenesis (EWB) scenario in this context. We find that the LHC measurements of the Higgs boson production and decay rates already rule out the possibility of stop-catalyzed EWB. Here, we also introduce a gauge-invariant analysis framework that may generalize to other scenarios in which interactions outside the gauge sector drive the electroweak phase transition.
Hierarchy of models: From qualitative to quantitative analysis of circadian rhythms in cyanobacteria
NASA Astrophysics Data System (ADS)
Chaves, M.; Preto, M.
2013-06-01
A hierarchy of models, ranging from high to lower levels of abstraction, is proposed to construct "minimal" but predictive and explanatory models of biological systems. Three hierarchical levels will be considered: Boolean networks, piecewise affine differential (PWA) equations, and a class of continuous ordinary differential equation models derived from the PWA model. This hierarchy provides different levels of approximation of the biological system and, crucially, allows the use of theoretical tools to more exactly analyze and understand the mechanisms of the system. The KaiABC oscillator, which is at the core of the cyanobacterial circadian rhythm, is analyzed as a case study, showing how several fundamental properties—order of oscillations, synchronization when mixing oscillating samples, structural robustness, and entrainment by external cues—can be obtained from basic mechanisms.
Properties of Blazar Jets Defined by an Economy of Power
NASA Astrophysics Data System (ADS)
Petropoulou, Maria; Dermer, Charles D.
2016-07-01
The absolute power of a relativistic black hole jet includes the power in the magnetic field, the leptons, the hadrons, and the radiated photons. A power analysis of a relativistic radio/γ-ray blazar jet leads to bifurcated leptonic synchrotron-Compton (LSC) and leptohadronic synchrotron (LHS) solutions that minimize the total jet power. Higher Doppler factors with increasing peak synchrotron frequency are implied in the LSC model. Strong magnetic fields B′ ≳ 100 G are found for the LHS model with variability times ≲ 10³ s, in accord with highly magnetized, reconnection-driven jet models. Proton synchrotron models of ≳ 100 GeV blazar radiation can have sub-Eddington absolute jet powers, but models of dominant GeV radiation in flat spectrum radio quasars require excessive power.
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures including the following: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates was minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which the model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
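A rough sketch of the bootstrap-imputation idea, assuming synthetic, roughly calibrated model probabilities in place of the validated clinical model.

```python
# Hedged sketch: disease status is repeatedly imputed as a Bernoulli draw from
# model-derived probabilities over bootstrap resamples, and prevalence is
# averaged over the draws. Probabilities here are simulated, not clinical.
import numpy as np

rng = np.random.default_rng(42)
n_patients = 50_000
true_status = rng.random(n_patients) < 0.05            # unknown in practice
# model-derived probability of severe renal failure (noisy but roughly calibrated)
prob = np.clip(0.05 + 0.8 * (true_status.astype(float) - 0.05)
               + rng.normal(0, 0.05, n_patients), 0, 1)

n_boot = 200
prevalences = []
for _ in range(n_boot):
    idx = rng.integers(0, n_patients, n_patients)       # bootstrap resample
    imputed = rng.random(n_patients) < prob[idx]        # impute status from probabilities
    prevalences.append(imputed.mean())

print(f"imputed prevalence = {np.mean(prevalences):.4f} "
      f"(true = {true_status.mean():.4f})")
```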
Shingrani, Rahul; Krenz, Gary; Molthen, Robert
2010-01-01
With advances in medical imaging scanners, it has become commonplace to generate large multidimensional datasets. These datasets require tools for a rapid, thorough analysis. To address this need, we have developed an automated algorithm for morphometric analysis incorporating A Visualization Workshop computational and image processing libraries for three-dimensional segmentation, vascular tree generation and structural hierarchical ordering with a two-stage numeric optimization procedure for estimating vessel diameters. We combine this new technique with our mathematical models of pulmonary vascular morphology to quantify structural and functional attributes of lung arterial trees. Our physiological studies require repeated measurements of vascular structure to determine differences in vessel biomechanical properties between animal models of pulmonary disease. Automation provides many advantages including significantly improved speed and minimized operator interaction and biasing. The results are validated by comparison with previously published rat pulmonary arterial micro-CT data analysis techniques, in which vessels were manually mapped and measured using intense operator intervention. Published by Elsevier Ireland Ltd.
Design oriented structural analysis
NASA Technical Reports Server (NTRS)
Giles, Gary L.
1994-01-01
Desirable characteristics and benefits of design oriented analysis methods are described and illustrated by presenting a synoptic description of the development and uses of the Equivalent Laminated Plate Solution (ELAPS) computer code. ELAPS is a design oriented structural analysis method which is intended for use in the early design of aircraft wing structures. Model preparation is minimized by using a few large plate segments to model the wing box structure. Computational efficiency is achieved by using a limited number of global displacement functions that encompass all segments over the wing planform. Coupling with other codes is facilitated since the output quantities such as deflections and stresses are calculated as continuous functions over the plate segments. Various aspects of the ELAPS development are discussed including the analytical formulation, verification of results by comparison with finite element analysis results, coupling with other codes, and calculation of sensitivity derivatives. The effectiveness of ELAPS for multidisciplinary design application is illustrated by describing its use in design studies of high speed civil transport wing structures.
Dynamics of a distributed drill string system: Characteristic parameters and stability maps
NASA Astrophysics Data System (ADS)
Aarsnes, Ulf Jakob F.; van de Wouw, Nathan
2018-03-01
This paper involves the dynamic (stability) analysis of distributed drill-string systems. A minimal set of parameters characterizing the linearized, axial-torsional dynamics of a distributed drill string coupled through the bit-rock interaction is derived. This is found to correspond to five parameters for a simple drill string and eight parameters for a two-sectioned drill-string (e.g., corresponding to the pipe and collar sections of a drilling system). These dynamic characterizations are used to plot the inverse gain margin of the system, parametrized in the non-dimensional parameters, effectively creating a stability map covering the full range of realistic physical parameters. This analysis reveals a complex spectrum of dynamics not evident in stability analysis with lumped models, thus indicating the importance of analysis using distributed models. Moreover, it reveals trends concerning stability properties depending on key system parameters useful in the context of system and control design aiming at the mitigation of vibrations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Charles W. Solbrig; Chad Pope; Jason Andrus
The fuel cycle facility (FCF) at the Idaho National Laboratory is a nuclear facility which must be licensed in order to operate. A safety analysis is required for a license. This paper describes the analysis of the Design Basis Accident for this facility. This analysis involves a model of the transient behavior of the FCF inert atmosphere hot cell following an earthquake initiated breach of pipes passing through the cell boundary. The hot cell is used to process spent metallic nuclear fuel. Such breaches allow the introduction of air and subsequent burning of pyrophoric metals. The model predicts the pressure, temperature, volumetric releases, cell heat transfer, metal fuel combustion, heat generation rates, radiological releases and other quantities. The results show that releases from the cell are minimal and satisfactory for safety. This analysis method should be useful in other facilities that have potential for damage from an earthquake and could eliminate the need to back fit facilities with earthquake proof boundaries or lessen the cost of new facilities.
Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo
2017-09-01
Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially in computing WLP models with a hard-limiting weighting function. A sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted based on its past as well as future samples, thereby utilizing the available number of samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach as well as natural speech utterances show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
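A minimal sketch of temporally weighted linear prediction, solving the weighted normal equations for a synthetic frame; the crude on/off weighting below is only an assumption standing in for the QCP weighting function.

```python
# Hedged sketch: weighted linear prediction (WLP). Coefficients minimize the
# weighted squared forward-prediction error, via weighted normal equations.
import numpy as np

def wlp(x, order, w):
    """Solve the weighted normal equations of forward linear prediction."""
    R = np.zeros((order, order))
    r = np.zeros(order)
    for t in range(order, len(x)):
        past = x[t - order:t][::-1]          # x[t-1], ..., x[t-order]
        R += w[t] * np.outer(past, past)
        r += w[t] * x[t] * past
    return np.linalg.solve(R, r)

fs = 8000
t = np.arange(400) / fs
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 1500 * t)
     + 0.01 * rng.normal(size=t.size))       # synthetic two-formant-like frame
w_conventional = np.ones_like(x)             # conventional LP: uniform weighting
w_hard = (np.arange(len(x)) % 80 > 20).astype(float)   # crude on/off weighting (assumption)

print("LP coefficients: ", np.round(wlp(x, 10, w_conventional), 3))
print("WLP coefficients:", np.round(wlp(x, 10, w_hard), 3))
```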
NASA Astrophysics Data System (ADS)
Castaldo, R.; Tizzani, P.; Lollino, P.; Calò, F.; Ardizzone, F.; Lanari, R.; Guzzetti, F.; Manunta, M.
2015-11-01
The aim of this paper is to propose a methodology to perform inverse numerical modelling of slow landslides that combines the potentialities of both numerical approaches and well-known remote-sensing satellite techniques. In particular, through an optimization procedure based on a genetic algorithm, we minimize, with respect to a proper penalty function, the difference between the modelled displacement field and differential synthetic aperture radar interferometry (DInSAR) deformation time series. The proposed methodology allows us to automatically search for the physical parameters that characterize the landslide behaviour. To validate the presented approach, we focus our analysis on the slow Ivancich landslide (Assisi, central Italy). The kinematical evolution of the unstable slope is investigated via long-term DInSAR analysis, by exploiting about 20 years of ERS-1/2 and ENVISAT satellite acquisitions. The landslide is driven by the presence of a shear band, whose behaviour is simulated through a two-dimensional time-dependent finite element model, in two different physical scenarios, i.e. Newtonian viscous flow and a deviatoric creep model. Comparison between the model results and DInSAR measurements reveals that the deviatoric creep model is more suitable to describe the kinematical evolution of the landslide. This finding is also confirmed by comparing the model results with the available independent inclinometer measurements. Our analysis emphasizes that integration of different data, within inverse numerical models, allows deep investigation of the kinematical behaviour of slow active landslides and discrimination of the driving forces that govern their deformation processes.
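A rough sketch of the inverse-modelling step, with a toy creep-like forward model and synthetic observations in place of the finite element model and DInSAR time series; scipy's differential evolution is used here as an evolutionary stand-in for the genetic algorithm.

```python
# Hedged sketch: an evolutionary optimizer searches for physical parameters
# that minimize the misfit between modelled displacements and observed
# deformation time series. The forward model and data are toy placeholders.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0, 20, 60)                               # observation epochs (years)

def forward_model(params, t):
    """Toy creep-like displacement: rate (cm/yr) and exponent are the unknowns."""
    rate, exponent = params
    return rate * t ** exponent

true_params = (1.5, 0.8)
observed = forward_model(true_params, t) + np.random.default_rng(1).normal(0, 0.3, t.size)

def penalty(params):
    return np.sum((forward_model(params, t) - observed) ** 2)

result = differential_evolution(penalty, bounds=[(0.1, 5.0), (0.1, 2.0)], seed=1)
print("estimated parameters:", np.round(result.x, 3))
```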
NASA Astrophysics Data System (ADS)
Mishra, Vinod Kumar
2017-09-01
In this paper we develop an inventory model to determine the optimal ordering quantities for a set of two substitutable deteriorating items. In this inventory model the inventory level of both items is depleted due to demand and deterioration, and when an item is out of stock its demand is partially fulfilled by the other item, with all unsatisfied demand lost. Each substituted item incurs a cost of substitution, and demand and deterioration are considered to be deterministic and constant. Items are ordered jointly in each ordering cycle to take advantage of joint replenishment. The problem is formulated and a solution procedure is developed to determine the optimal ordering quantities that minimize the total inventory cost. We provide an extensive numerical and sensitivity analysis to illustrate the effect of different parameters on the model. The key observation from the numerical analysis is that there is a substantial improvement in the optimal total cost of the inventory model with substitution over the model without substitution.
Pairwise velocities in the "Running FLRW" cosmological model
NASA Astrophysics Data System (ADS)
Bibiano, Antonio; Croton, Darren J.
2017-05-01
We present an analysis of the pairwise velocity statistics from a suite of cosmological N-body simulations describing the 'Running Friedmann-Lemaître-Robertson-Walker' (R-FLRW) cosmological model. This model is based on quantum field theory in a curved space-time and extends Λ cold dark matter (CDM) with a time-evolving vacuum energy density, ρ _Λ. To enforce local conservation of matter, a time-evolving gravitational coupling is also included. Our results constitute the first study of velocities in the R-FLRW cosmology, and we also compare with other dark energy simulations suites, repeating the same analysis. We find a strong degeneracy between the pairwise velocity and σ8 at z = 0 for almost all scenarios considered, which remains even when we look back to epochs as early as z = 2. We also investigate various coupled dark energy models, some of which show minimal degeneracy, and reveal interesting deviations from ΛCDM that could be readily exploited by future cosmological observations to test and further constrain our understanding of dark energy.
Marie, James R.
1976-01-01
The computer models were developed to investigate possible hydrologic effects within the Indiana Dunes National Lakeshore caused by planned dewatering at the adjacent Bailly Nuclear Generator construction site. The model analysis indicated that the planned dewatering would cause a drawdown of about 4 ft under the westernmost pond of the Lakeshore and that this drawdown would cause the pond to go almost dry--less than 0.5 ft of water remaining in about 1 percent of the pond--under average conditions during the 18-month dewatering period. When water levels are below average, as during late July and early August 1974, the pond would go dry in about 5.5 months. However, the pond may not have to go completely dry to damage the ecosystem. If the National Park Service 's independent study determines the minimum pond level at which ecosystem damage would be minimized, the models developed in this study could be used to predict the hydrologic conditions necessary to maintain that level.
TG study of the Li0.4Fe2.4Zn0.2O4 ferrite synthesis
NASA Astrophysics Data System (ADS)
Lysenko, E. N.; Nikolaev, E. V.; Surzhikov, A. P.
2016-02-01
In this paper, a kinetic analysis of Li-Zn ferrite synthesis was carried out using the thermogravimetry (TG) method through the simultaneous application of non-linear regression to several measurements run at different heating rates (multivariate non-linear regression). Using TG curves obtained for four heating rates and the Netzsch Thermokinetics software package, kinetic models with a minimal number of adjustable parameters were selected to quantitatively describe the reaction of Li-Zn ferrite synthesis. It was shown that the experimental TG curves clearly suggest a two-step process for the ferrite synthesis, and therefore a model-fitting kinetic analysis based on multivariate non-linear regression was conducted. The complex reaction was described by a two-step reaction scheme consisting of sequential reaction steps. It was established that the best results were obtained using the Jander three-dimensional diffusion model for the first step and the Ginstling-Bronstein model for the second step. The kinetic parameters for the lithium-zinc ferrite synthesis reaction were found and discussed.
Yiu, Sean; Farewell, Vernon T; Tom, Brian D M
2018-02-01
In psoriatic arthritis, it is important to understand the joint activity (represented by swelling and pain) and damage processes because both are related to severe physical disability. The paper aims to provide a comprehensive investigation into both processes occurring over time, in particular their relationship, by specifying a joint multistate model at the individual hand joint level, which also accounts for many of their important features. As there are multiple hand joints, such an analysis will be based on the use of clustered multistate models. Here we consider an observation level random-effects structure with dynamic covariates and allow for the possibility that a subpopulation of patients is at minimal risk of damage. Such an analysis is found to provide further understanding of the activity-damage relationship beyond that provided by previous analyses. Consideration is also given to the modelling of mean sojourn times and jump probabilities. In particular, a novel model parameterization which allows easily interpretable covariate effects to act on these quantities is proposed.
Teixeira, E R; Sato, Y; Akagawa, Y; Shindoi, N
1998-04-01
Further validity of finite element analysis (FEA) in implant biomechanics requires an increase of modelled range and mesh refinement, and a consequent increase in element number and calculation time. To develop a new method that allows a decrease of the modelled range and element number (along with less calculation time and less computer memory), 10 FEA models of the mandible with different mesio-distal lengths and elements were constructed based on three-dimensional graphic data of the bone structure around an osseointegrated implant. Analysis of stress distribution following 100 N loading, with fixation of the most external planes of the models, indicated that a minimal bone length of 4.2 mm on the mesial and distal sides was acceptable for FEA representation. Moreover, unification of elements located far away from the implant surface did not affect stress distribution. These results suggest that it may be possible to develop a replica FEA implant model of the mandible with less range and fewer elements without altering stress distribution.
Minimally invasive surgical video analysis: a powerful tool for surgical training and navigation.
Sánchez-González, P; Oropesa, I; Gómez, E J
2013-01-01
Analysis of minimally invasive surgical videos is a powerful tool to drive new solutions for achieving reproducible training programs, objective and transparent assessment systems and navigation tools to assist surgeons and improve patient safety. This paper presents how video analysis contributes to the development of new cognitive and motor training and assessment programs as well as new paradigms for image-guided surgery.
Parsons, T.; Blakely, R.J.; Brocher, T.M.
2001-01-01
The geologic structure of the Earth's upper crust can be revealed by modeling variation in seismic arrival times and in potential field measurements. We demonstrate a simple method for sequentially satisfying seismic traveltime and observed gravity residuals in an iterative 3-D inversion. The algorithm is portable to any seismic analysis method that uses a gridded representation of velocity structure. Our technique calculates the gravity anomaly resulting from a velocity model by converting to density with Gardner's rule. The residual between calculated and observed gravity is minimized by weighted adjustments to the model velocity-depth gradient where the gradient is steepest and where seismic coverage is least. The adjustments are scaled by the sign and magnitude of the gravity residuals, and a smoothing step is performed to minimize vertical streaking. The adjusted model is then used as a starting model in the next seismic traveltime iteration. The process is repeated until one velocity model can simultaneously satisfy both the gravity anomaly and seismic traveltime observations within acceptable misfits. We test our algorithm with data gathered in the Puget Lowland of Washington state, USA (Seismic Hazards Investigation in Puget Sound [SHIPS] experiment). We perform resolution tests with synthetic traveltime and gravity observations calculated with a checkerboard velocity model using the SHIPS experiment geometry, and show that the addition of gravity significantly enhances resolution. We calculate a new velocity model for the region using SHIPS traveltimes and observed gravity, and show examples where correlation between surface geology and modeled subsurface velocity structure is enhanced.
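A minimal sketch of the velocity-to-density conversion and residual-scaled adjustment, assuming Gardner's rule in its common form rho = 0.31 V^0.25 (V in m/s); the 1-D profile, residual value, and step size are toy assumptions, and the gravity forward calculation and smoothing step are omitted.

```python
# Hedged sketch: convert a gridded velocity model to density with Gardner's
# rule, then nudge velocities in proportion to a gravity residual, weighting
# the adjustment toward the steepest part of the velocity-depth gradient.
import numpy as np

def gardner_density(v):
    """Gardner's rule: bulk density (g/cm^3) from P-wave velocity (m/s)."""
    return 0.31 * v ** 0.25

z = np.arange(0.0, 20.0)                         # depth index of a toy 1-D profile
v = 1500.0 + 80.0 * z ** 1.3                     # assumed velocity-depth profile (m/s)
rho = gardner_density(v)

gravity_residual = -2.5                          # observed minus calculated gravity (mGal), toy value
dv_dz = np.gradient(v)                           # vertical velocity gradient
weight = np.abs(dv_dz) / np.abs(dv_dz).max()     # adjust most where the gradient is steepest
alpha = 10.0                                     # assumed scaling (m/s per mGal)
v_adjusted = v + alpha * gravity_residual * weight

print("density at top/bottom (g/cm^3):", rho[0].round(2), rho[-1].round(2))
print("max velocity adjustment (m/s):", np.abs(v_adjusted - v).max().round(1))
```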
Morrato, Elaine H; Smith, Meredith Y
2015-01-01
Pharmaceutical risk minimization programs are now an established requirement in the regulatory landscape. However, pharmaceutical companies have been slow to recognize and embrace the significant potential these programs offer in terms of enhancing trust with health care professionals and patients, and for providing a mechanism for bringing products to the market that might not otherwise have been approved. Pitfalls of the current drug development process include risk minimization programs that are not data driven; missed opportunities to incorporate pragmatic methods and market-based insights, outmoded tools and data sources, lack of rapid evaluative learning to support timely adaption, lack of systematic approaches for patient engagement, and questions on staffing and organizational infrastructure. We propose better integration of risk minimization with clinical drug development and commercialization work streams throughout the product lifecycle. We articulate a vision and propose broad adoption of organizational models for incorporating risk minimization expertise into the drug development process. Three organizational models are discussed and compared: outsource/external vendor, embedded risk management specialist model, and Center of Excellence. PMID:25750537
Boer, Annemarie; Dutmer, Alisa L; Schiphorst Preuper, Henrica R; van der Woude, Lucas H V; Stewart, Roy E; Deyo, Richard A; Reneman, Michiel F; Soer, Remko
2017-10-01
Validation study with cross-sectional and longitudinal measurements. To translate the US National Institutes of Health (NIH)-minimal dataset for clinical research on chronic low back pain into the Dutch language and to test its validity and reliability among people with chronic low back pain. The NIH developed a minimal dataset to encourage more complete and consistent reporting of clinical research and to be able to compare studies across countries in patients with low back pain. In the Netherlands, the NIH-minimal dataset has not been translated before and its measurement properties are unknown. Cross-cultural validity was tested by a formal forward-backward translation. Structural validity was tested with exploratory factor analyses (comparative fit index, Tucker-Lewis index, and root mean square error of approximation). Hypothesis testing was performed to compare subscales of the NIH dataset with the Pain Disability Index and the EuroQol-5D (Pearson correlation coefficients). Internal consistency was tested with Cronbach α and test-retest reliability at 2 weeks was calculated in a subsample of patients with Intraclass Correlation Coefficients and weighted Kappa (κω). In total, 452 patients were included, of whom 52 were included in the test-retest study. Factor analysis for structural validity pointed in the direction of a seven-factor model (Cronbach α = 0.78). Factors and total score of the NIH-minimal dataset showed fair to good correlations with the Pain Disability Index (r = 0.43-0.70) and EuroQol-5D (r = -0.41 to -0.64). Reliability: test-retest reliability per item showed substantial agreement (κω = 0.65). Test-retest reliability per factor was moderate to good (Intraclass Correlation Coefficient = 0.71). The measurement properties of the Dutch language version of the NIH-minimal dataset were satisfactory.
NASA Astrophysics Data System (ADS)
Hamada, Aulia; Rosyidi, Cucuk Nur; Jauhari, Wakhid Ahmad
2017-11-01
Minimizing processing time in a production system can increase the efficiency of a manufacturing company. Processing time is influenced by the application of modern technology and by the machining parameters. Modern technology can be applied through CNC machining; one of the machining processes that can be performed on a CNC machine is turning. However, the machining parameters affect not only the processing time but also the environmental impact. Hence, an optimization model is needed to select machining parameters that minimize both the processing time and the environmental impact. This research developed a multi-objective optimization model to minimize the processing time and environmental impact of a CNC turning process, yielding optimal decision variables of cutting speed and feed rate. Environmental impact is converted from environmental burden through the use of Eco-indicator 99. The model was solved using the OptQuest optimization software from Oracle Crystal Ball.
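A hedged sketch of a weighted-sum version of the two objectives, using the standard turning-time formula; the eco-indicator-style impact term, coefficients, bounds, and weights are illustrative assumptions, and a generic bounded optimizer replaces OptQuest.

```python
# Hedged sketch: weighted-sum multi-objective optimization over cutting speed
# and feed rate. Machining time falls as speed and feed rise, while a toy
# impact term rises with them; all coefficients are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

L, D = 100.0, 40.0                       # workpiece length and diameter (mm), assumed

def machining_time(x):
    v, f = x                             # cutting speed (m/min), feed rate (mm/rev)
    return (np.pi * D * L) / (1000.0 * v * f)   # classic turning time formula (min)

def environmental_impact(x):
    v, f = x
    return 0.002 * v ** 1.2 + 0.5 * f    # toy eco-indicator-style points (assumption)

w_time, w_env = 0.7, 0.3                 # assumed preference weights
objective = lambda x: w_time * machining_time(x) + w_env * environmental_impact(x)
res = minimize(objective, x0=[150.0, 0.2],
               bounds=[(50.0, 300.0), (0.05, 0.5)], method="L-BFGS-B")
print("cutting speed (m/min), feed rate (mm/rev):", np.round(res.x, 3))
```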
Competing quantum orderings in cuprate superconductors: A minimal model
NASA Astrophysics Data System (ADS)
Martin, I.; Ortiz, G.; Balatsky, A. V.; Bishop, A. R.
2001-02-01
We present a minimal model for cuprate superconductors. At the unrestricted mean-field level, the model produces homogeneous superconductivity at large doping, striped superconductivity in the underdoped regime and various antiferromagnetic phases at low doping and for high temperatures. On the underdoped side, the superconductor is intrinsically inhomogeneous and global phase coherence is achieved through Josephson-like coupling of the superconducting stripes. The model is applied to calculate experimentally measurable ARPES spectra.
Industrial wastewater minimization using water pinch analysis: a case study on an old textile plant.
Ujang, Z; Wong, C L; Manan, Z A
2002-01-01
Industrial wastewater minimization can be conducted using four main strategies: (i) reuse; (ii) regeneration-reuse; (iii) regeneration-recycling; and (iv) process changes. This study is concerned with (i) and (ii), investigating the most suitable approach to wastewater minimization for an old textile industry plant. A systematic water network design using water pinch analysis (WPA) was developed to minimize the water usage and wastewater generation of the textile plant. COD was chosen as the main parameter. An integrated design method was applied, which brings engineering insight through WPA by determining the minimum flowrate of water usage and thereby minimizing both water consumption and wastewater generation. The overall result of this study shows that WPA was effectively applied using both reuse and regeneration-reuse strategies for the old textile industry plant, reducing the operating cost by 16% and 50%, respectively.
NASA Astrophysics Data System (ADS)
Gu, Wen; Zhu, Zhiwei; Zhu, Wu-Le; Lu, Leyao; To, Suet; Xiao, Gaobo
2018-05-01
An automatic identification method for obtaining the critical depth-of-cut (DoC) of brittle materials with nanometric accuracy and sub-nanometric uncertainty is proposed in this paper. With this method, a two-dimensional (2D) microscopic image of the taper cutting region is captured and further processed by image analysis to extract the margin of generated micro-cracks in the imaging plane. Meanwhile, an analytical model is formulated to describe the theoretical curve of the projected cutting points on the imaging plane with respect to a specified DoC during the whole cutting process. By adopting differential evolution algorithm-based minimization, the critical DoC can be identified by minimizing the deviation between the extracted margin and the theoretical curve. The proposed method is demonstrated through both numerical simulation and experimental analysis. Compared with conventional 2D- and 3D-microscopic-image-based methods, determination of the critical DoC in this study uses the envelope profile rather than the onset point of the generated cracks, providing a more objective approach with smaller uncertainty.
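A rough sketch of the identification step, assuming a toy linear-taper margin model and a synthetic "measured" margin in place of the paper's 2D projection model; differential evolution then recovers the critical depth of cut by least-squares matching.

```python
# Hedged sketch: differential evolution adjusts the critical depth of cut so
# that a theoretical margin curve best matches the crack margin extracted from
# an image. The curve model and the measured margin are synthetic placeholders.
import numpy as np
from scipy.optimize import differential_evolution

slope = 1e-3                                         # taper slope: DoC grows linearly along the cut (assumed)
x = np.linspace(0, 2000, 200)                        # position along the cut (um)

def margin_curve(x, d_crit):
    """Toy margin: cracks appear once the local DoC exceeds d_crit."""
    doc = slope * x
    return np.where(doc > d_crit, doc - d_crit, 0.0)

true_d_crit = 0.65                                   # um, value to be recovered
measured = margin_curve(x, true_d_crit) + np.random.default_rng(3).normal(0, 0.02, x.size)

res = differential_evolution(lambda p: np.sum((margin_curve(x, p[0]) - measured) ** 2),
                             bounds=[(0.1, 2.0)], seed=3)
print(f"identified critical DoC = {res.x[0]:.3f} um (true {true_d_crit} um)")
```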
A minimal spatial cell lineage model of epithelium: tissue stratification and multi-stability
NASA Astrophysics Data System (ADS)
Yeh, Wei-Ting; Chen, Hsuan-Yi
2018-05-01
A minimal model which includes spatial and cell lineage dynamics for stratified epithelia is presented. The dependence of the tissue steady state on cell differentiation models, cell proliferation rate, cell differentiation rate, and other parameters is studied numerically and analytically. Our minimal model shows some important features. First, we find that morphogen- or mechanical-stress-mediated interaction is necessary to maintain a healthy stratified epithelium. Furthermore, compared with tissues in which cell differentiation can take place only during cell division, tissues in which cell division and cell differentiation are decoupled can achieve a relatively higher degree of stratification. Finally, our model also shows that in the presence of short-range interactions, it is possible for a tissue to have multiple steady states. The relation between our results and tissue morphogenesis or lesion is discussed.
Mellin transforming the minimal model CFTs: AdS/CFT at strong curvature
Lowe, David A.
2016-07-14
Mack has conjectured that all conformal field theories are equivalent to string theories. Here, we explore the example of the two-dimensional minimal model CFTs and confirm that the Mellin transformed amplitudes have the desired properties of string theory in three-dimensional anti-de Sitter spacetime.
Towards Minimizing Social, Cultural, and Intellectual Disruptions Embedded in Literacy Instruction.
ERIC Educational Resources Information Center
Peat, David W.
1994-01-01
To explain the concept of literacy, the Integrative Systems Model of Literacy is developed, illustrating how understanding literacy has direct applications to both instruction and research. The model's utility in reconciling opposing concepts of literacy is shown, presenting practical suggestions for literacy instruction which minimize social,…
Meso-scale turbulence in living fluids
NASA Astrophysics Data System (ADS)
Dunkel, Jorn; Wensink, Rik; Heidenreich, Sebastian; Drescher, Knut; Goldstein, Ray; Loewen, Hartmut; Yeomans, Julia
2012-11-01
The mathematical characterization of turbulence phenomena in active non-equilibrium fluids proves even more difficult than for conventional liquids or gases. It is not known which features of turbulent phases in living matter are universal or system-specific, or which generalizations of the Navier-Stokes equations are able to describe them adequately. We combine experiments, particle simulations, and continuum theory to identify the statistical properties of self-sustained meso-scale turbulence in active systems. To study how dimensionality and boundary conditions affect collective bacterial dynamics, we measured energy spectra and structure functions in dense Bacillus subtilis suspensions in quasi-2D and 3D geometries. Our experimental results for the bacterial flow statistics agree well with predictions from a minimal model for self-propelled rods, suggesting that at high concentrations the collective motion of the bacteria is dominated by short-range interactions. To provide a basis for future theoretical studies, we propose a minimal continuum model for incompressible bacterial flow. A detailed numerical analysis of the 2D case shows that this theory can reproduce many of the experimentally observed features of self-sustained active turbulence. Supported by the ERC, EPSRC and DFG.
Modeling and Uncertainty Quantification of Vapor Sorption and Diffusion in Heterogeneous Polymers
Sun, Yunwei; Harley, Stephen J.; Glascoe, Elizabeth A.
2015-08-13
A high-fidelity model of kinetic and equilibrium sorption and diffusion is developed and exercised. The gas-diffusion model is coupled with a triple-sorption mechanism: Henry’s law absorption, Langmuir adsorption, and pooling or clustering of molecules at higher partial pressures. Sorption experiments are conducted and span a range of relative humidities (0-95%) and temperatures (30-60°C). Kinetic and equilibrium sorption properties and effective diffusivity are determined by minimizing the absolute difference between measured and modeled uptakes. Uncertainty quantification and sensitivity analysis methods are described and exercised herein to demonstrate the capability of this modeling approach. Water uptake in silica-filled and unfilled poly(dimethylsiloxane) networks is investigated; however, the model is versatile enough to be used with a wide range of materials and vapors.
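A minimal sketch of the parameter-estimation step, assuming a single-exponential uptake model and synthetic data in place of the triple-sorption/diffusion model; the absolute-difference (L1) misfit mirrors the criterion named in the abstract.

```python
# Hedged sketch: fit sorption parameters by minimizing the absolute difference
# between measured and modelled uptake curves. The uptake model and the data
# are toy placeholders, not the paper's triple-sorption/diffusion model.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 48, 50)                               # time (hours)

def uptake_model(params, t):
    m_eq, k = params                                     # equilibrium uptake, rate constant
    return m_eq * (1.0 - np.exp(-k * t))

true = (2.3, 0.15)                                       # wt%, 1/h (toy values)
measured = uptake_model(true, t) + np.random.default_rng(7).normal(0, 0.05, t.size)

objective = lambda p: np.sum(np.abs(uptake_model(p, t) - measured))   # L1 misfit
res = minimize(objective, x0=[1.0, 0.05], method="Nelder-Mead")
print("fitted (m_eq, k):", np.round(res.x, 3))
```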
A risk-based multi-objective model for optimal placement of sensors in water distribution system
NASA Astrophysics Data System (ADS)
Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein
2018-02-01
In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in water distribution system (WDS). This model determines minimization of risk which is caused by simultaneous multi-point contamination injection in WDS using CVaR approach. The CVaR considers uncertainties of contamination injection in the form of probability distribution function and calculates low-probability extreme events. In this approach, extreme losses occur at tail of the losses distribution function. Four-objective optimization model based on NSGA-II algorithm is developed to minimize losses of contamination injection (through CVaR of affected population and detection time) and also minimize the two other main criteria of optimal placement of sensors including probability of undetected events and cost. Finally, to determine the best solution, Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), as a subgroup of Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among objective functions. Also, sensitivity analysis is done to investigate the importance of each criterion on PROMETHEE results considering three relative weighting scenarios. The effectiveness of the proposed methodology is examined through applying it to Lamerd WDS in the southwestern part of Iran. The PROMETHEE suggests 6 sensors with suitable distribution that approximately cover all regions of WDS. Optimal values related to CVaR of affected population and detection time as well as probability of undetected events for the best optimal solution are equal to 17,055 persons, 31 mins and 0.045%, respectively. The obtained results of the proposed methodology in Lamerd WDS show applicability of CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme value of losses in WDS.
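A small sketch of the CVaR evaluation for a candidate layout, with a synthetic loss sampler standing in for the contamination-scenario simulation.

```python
# Hedged sketch: for a candidate sensor layout, sample losses (e.g. affected
# population) over random contamination scenarios; CVaR at level alpha is the
# mean loss in the worst (1 - alpha) tail. The loss distribution is synthetic.
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional Value at Risk: expected loss in the worst (1 - alpha) tail."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(11)
losses = rng.lognormal(mean=8.0, sigma=1.0, size=10_000)   # affected population per scenario (toy)
print(f"VaR(95%)  = {np.quantile(losses, 0.95):,.0f}")
print(f"CVaR(95%) = {cvar(losses, 0.95):,.0f}")
```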
CANDID: Companion Analysis and Non-Detection in Interferometric Data
NASA Astrophysics Data System (ADS)
Gallenne, A.; Mérand, A.; Kervella, P.; Monnier, J. D.; Schaefer, G. H.; Baron, F.; Breitfelder, J.; Le Bouquin, J. B.; Roettenbacher, R. M.; Gieren, W.; Pietrzynski, G.; McAlister, H.; ten Brummelaar, T.; Sturmann, J.; Sturmann, L.; Turner, N.; Ridgway, S.; Kraus, S.
2015-05-01
CANDID finds faint companions around stars in interferometric data in the OIFITS format. It allows systematic searching for faint companions in OIFITS data and, if none is found, estimates the detection limit. The tool is based on model fitting and Chi2 minimization, with a grid for the starting points of the companion position. It ensures all positions are explored by estimating a posteriori whether the grid is dense enough, and provides an estimate of the optimum grid density.
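A rough sketch of the grid-plus-fit strategy, assuming a toy two-component visibility model and synthetic squared visibilities; this is not CANDID's OIFITS handling or its binary model.

```python
# Hedged sketch: a coarse grid of companion positions seeds local chi-squared
# fits of a toy binary squared-visibility model; the best of the local fits is kept.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
u, v = rng.uniform(-80, 80, (2, 60))                     # baselines (mega-lambda), synthetic

def v2_model(params, u, v):
    f, dx, dy = params                                   # flux ratio, offset (mas)
    mas_to_rad = np.pi / (180 * 3600 * 1000)
    phase = 2 * np.pi * (u * dx + v * dy) * 1e6 * mas_to_rad
    vis = (1 + f * np.exp(-1j * phase)) / (1 + f)
    return np.abs(vis) ** 2

true = (0.03, 4.0, -2.5)
obs = v2_model(true, u, v) + rng.normal(0, 0.002, u.size)
err = 0.002

def chi2(params):
    return np.sum(((v2_model(params, u, v) - obs) / err) ** 2)

best = None
for dx0 in np.linspace(-10, 10, 5):                      # grid of starting positions
    for dy0 in np.linspace(-10, 10, 5):
        res = minimize(chi2, x0=[0.01, dx0, dy0], method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
print("best-fit (flux ratio, dx, dy):", np.round(best.x, 3), "chi2 =", round(best.fun, 1))
```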
Validation of insulin sensitivity and secretion indices derived from the liquid meal tolerance test.
Maki, Kevin C; Kelley, Kathleen M; Lawless, Andrea L; Hubacher, Rachel L; Schild, Arianne L; Dicklin, Mary R; Rains, Tia M
2011-06-01
A liquid meal tolerance test (LMTT) has been proposed as a useful alternative to more labor-intensive methods of assessing insulin sensitivity and secretion. This substudy, conducted at the conclusion of a randomized, double-blind crossover trial, compared insulin sensitivity indices from a LMTT (Matsuda insulin sensitivity index [MISI] and LMTT disposition index [LMTT-DI]) with indices derived from minimal model analysis of results from the insulin-modified intravenous glucose tolerance test (IVGTT) (insulin sensitivity index [S(I)] and disposition index [DI]). Participants included men (n = 16) and women (n = 8) without diabetes but with increased abdominal adiposity (waist circumference ≥102 cm and ≥89 cm, respectively) and mean age of 48.9 years. The correlation between S(I) and the MISI was 0.776 (P < 0.0001). The respective associations between S(I) and MISI with waist circumference (r = -0.445 and -0.554, both P < 0.05) and body mass index were similar (r = -0.500 and -0.539, P < 0.05). The correlation between DI and LMTT-DI was 0.604 (P = 0.002). These results indicate that indices of insulin sensitivity and secretion derived from the LMTT correlate well with those from the insulin-modified IVGTT with minimal model analysis, suggesting that they may be useful for application in clinical and population studies of glucose homeostasis.
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines the B-spline's appearance; the B-spline's complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method which is based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in their target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
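A minimal sketch of information-criterion selection of B-spline complexity, assuming a synthetic profile and cubic least-squares splines; the number of spline coefficients (control points) is taken as the parameter count entering AIC and BIC.

```python
# Hedged sketch: fit cubic least-squares splines with increasing numbers of
# interior knots to noisy profile data and compare AIC/BIC; the data are
# synthetic, not laser-scanner point clouds.
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 400)
y = np.sin(6 * x) + 0.3 * np.sin(20 * x) + rng.normal(0, 0.05, x.size)

def information_criteria(n_interior_knots, k=3):
    knots = np.linspace(0, 1, n_interior_knots + 2)[1:-1]     # interior knots only
    spline = LSQUnivariateSpline(x, y, knots, k=k)
    rss = np.sum((spline(x) - y) ** 2)
    n_params = n_interior_knots + k + 1                        # number of spline coefficients
    n = x.size
    aic = n * np.log(rss / n) + 2 * n_params                   # Gaussian-error AIC
    bic = n * np.log(rss / n) + n_params * np.log(n)           # Gaussian-error BIC
    return aic, bic

for m in range(2, 21, 2):
    aic, bic = information_criteria(m)
    print(f"{m:2d} interior knots: AIC = {aic:8.1f}, BIC = {bic:8.1f}")
```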
Panić, Sanja; Rakić, Dušan; Guzsvány, Valéria; Kiss, Erne; Boskovic, Goran; Kónya, Zoltán; Kukovecz, Ákos
2015-12-01
The aim of this work was to evaluate the significant factors affecting the thiamethoxam adsorption efficiency using oxidized multi-walled carbon nanotubes (MWCNTs) as adsorbents. Five factors (initial solution concentration of thiamethoxam in water, temperature, solution pH, MWCNT weight and contact time) were investigated using a resolution V 2^(5-1) fractional factorial design. The obtained linear model was statistically tested using analysis of variance (ANOVA), and analysis of residuals was used to investigate the model validity. It was observed that the factors and their second-order interactions affecting thiamethoxam removal can be divided into three groups: very important, moderately important and insignificant ones. The initial solution concentration was found to be the most influential parameter for thiamethoxam adsorption from water. Optimization of the factor levels was carried out by minimizing those parameters which are usually critical in real life: the temperature (energy), contact time (money) and weight of MWCNTs (potential health hazard), in order to maximize the adsorbed amount of the pollutant. The results of the maximal adsorbed thiamethoxam amount in both real and optimized experiments indicate that, among the minimized parameters, the adsorption time is the one that makes the largest difference. The results of this study indicate that fractional factorial design is a very useful tool for screening a larger number of parameters and reducing the number of adsorption experiments. Copyright © 2015 Elsevier Ltd. All rights reserved.
Arba-Mosquera, Samuel; Aslanides, Ioannis M.
2012-01-01
Purpose To analyze the effects of Eye-Tracker performance on the pulse positioning errors during refractive surgery. Methods A comprehensive model, which directly considers eye movements, including saccades, vestibular, optokinetic, vergence, and miniature, as well as, eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay have been developed. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than eye-tracker acquisition rates basically duplicate pulse-positioning errors. Laser trigger delays to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than observed decentrations in the clinical settings. There is no single parameter that ‘alone’ minimizes the positioning error. It is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important to understand the limitations of correcting very irregular ablation patterns.
Shanks, Ryan A.; Robertson, Chuck L.; Haygood, Christian S.; Herdliksa, Anna M.; Herdliska, Heather R.; Lloyd, Steven A.
2017-01-01
Introductory biology courses provide an important opportunity to prepare students for future courses, yet existing cookbook labs, although important in their own way, fail to provide many of the advantages of semester-long research experiences. Engaging, authentic research experiences aid biology students in meeting many learning goals. Therefore, overlaying a research experience onto the existing lab structure allows faculty to overcome barriers involving curricular change. Here we propose a working model for this overlay design in an introductory biology course and detail a means to conduct this lab with minimal increases in student and faculty workloads. Furthermore, we conducted exploratory factor analysis of the Experimental Design Ability Test (EDAT) and uncovered two latent factors which provide valid means to assess this overlay model’s ability to increase advanced experimental design abilities. In a pre-test/post-test design, we demonstrate significant increases in both basic and advanced experimental design abilities in an experimental and comparison group. We measured significantly higher gains in advanced experimental design understanding in students in the experimental group. We believe this overlay model and EDAT factor analysis contribute a novel means to conduct and assess the effectiveness of authentic research experiences in an introductory course without major changes to the course curriculum and with minimal increases in faculty and student workloads. PMID:28904647
Dark matter, constrained minimal supersymmetric standard model, and lattice QCD.
Giedt, Joel; Thomas, Anthony W; Young, Ross D
2009-11-13
Recent lattice measurements have given accurate estimates of the quark condensates in the proton. We use these results to significantly improve the dark matter predictions in benchmark models within the constrained minimal supersymmetric standard model. The predicted spin-independent cross sections are at least an order of magnitude smaller than previously suggested and our results have significant consequences for dark matter searches.
NASA Astrophysics Data System (ADS)
Belkhode, Pramod Namdeorao
2017-06-01
A field-data-based model is proposed to reduce the overhauling time and human energy consumed in the liner piston maintenance activity so as to increase its productivity. The independent variables affecting the phenomenon, such as anthropometric parameters of workers (Eastman Kodak Co. Ltd in Section VIA Appendix-A: Anthropometric Data. Ergonomic Design for People at Work, Van Nostrand Reinhold, New York, 1), workers' parameters, specification of the liner piston data, specification of the tools used in the liner piston maintenance activity, specification of solvents, axial clearance of the big end bearing and bolt elongation, workstation data (Eastman Kodak Co. Ltd in Work Place Ergonomic Design for People at Work, Van Nostrand Reinhold, New York, 2) and extraneous variables, namely temperature, humidity, illumination and noise at the workplace (Eastman Kodak Co. Ltd in Chapter V Environment Ergonomic Design for People at Work, Van Nostrand Reinhold, New York, 3), are taken into account. The model is formulated for the dependent variables of the liner piston maintenance activity to minimize the overhauling time and human energy consumption so as to improve the productivity of the activity. The developed model can predict the performance of the liner piston maintenance activity, which involves a man-machine system (Schenck in Theories of Engineering Experimentation, McGraw-Hill, New York, 4). The model is then optimized using an optimization technique, and a sensitivity analysis of the model has also been carried out.
Reddy, M V; Eachempati, Krishnakiran; Gurava Reddy, A V; Mugalur, Aakash
2018-01-01
Rapid prototyping (RP) is used widely in dental and faciomaxillary surgery, with anecdotal uses in orthopedics. The purview of RP in orthopedics is vast. However, there is no error analysis reported in the literature on bone models generated using office-based RP. This study evaluates the accuracy of fused deposition modeling (FDM) using standard tessellation language (STL) files and the errors generated during the fabrication of bone models. Nine dry bones were selected and computed tomography (CT) scanned. STL files were procured from the CT scans and three-dimensional (3D) models of the bones were printed using our in-house FDM-based 3D printer with acrylonitrile butadiene styrene (ABS) filament. Measurements were made on the bones and the 3D models according to data collection procedures for forensic skeletal material. Statistical analysis was performed using SPSS version 13.0 software, and inter-observer reliability for the measurements on the dry bones and the 3D bone models was established using the intra-class correlation coefficient. The mean absolute difference is 0.4, which is very small, and the 3D models are comparable to the dry bones. STL-file-dependent FDM using ABS material produces near-anatomical 3D models. The high 3D accuracy holds promise in the clinical setting for preoperative planning, mock surgery, and the choice of implants and prostheses, especially in complicated acetabular trauma and complex hip surgeries.
NASA Astrophysics Data System (ADS)
Polverino, Pierpaolo; Frisk, Erik; Jung, Daniel; Krysander, Mattias; Pianese, Cesare
2017-07-01
The present paper proposes an advanced approach to fault detection and isolation in Polymer Electrolyte Membrane Fuel Cell (PEMFC) systems through a model-based diagnostic algorithm. The algorithm is developed upon a lumped-parameter model simulating a whole PEMFC system oriented towards automotive applications. This model is inspired by other models available in the literature, with particular attention to stack thermal dynamics and water management. The developed model is analysed by means of Structural Analysis to identify the correlations among the involved physical variables, the defined equations and a set of faults which may occur in the system (related to both auxiliary-component malfunctions and stack degradation phenomena). Residual generators are designed by means of Causal Computation analysis, and the maximum theoretical fault isolability achievable with a minimal number of installed sensors is investigated. The achieved results prove the capability of the algorithm to theoretically detect and isolate almost all faults using only stack voltage and temperature sensors, with significant advantages from an industrial point of view. The effective fault isolability is proved through fault simulations at a specific fault magnitude with an advanced residual evaluation technique that considers quantitative residual deviations from normal conditions and achieves unambiguous fault isolation.
NASA Astrophysics Data System (ADS)
Iida, T.; Sakurai, Y.; Matsumura, T.; Sugai, H.; Imada, H.; Kataza, H.; Ohsaki, H.; Hazumi, M.; Katayama, N.; Yamamoto, R.; Utsunomiya, S.; Terao, Y.
2017-12-01
We report a thermal analysis of a polarization modulator unit (PMU) for use in a space-borne cosmic microwave background (CMB) project. A measurement of the CMB polarization allows us to probe the physics of the early universe and is the best method to test cosmic inflation experimentally. One of the key instruments for this science is a half-wave plate (HWP) based polarization modulator. The HWP is required to rotate continuously at about 1 Hz below 10 K to minimize its own thermal emission towards the detector system. A rotating HWP system in a cryogenic environment can be realized by using a superconducting magnetic bearing (SMB) without significant heat dissipation from mechanical friction. While the SMB achieves smooth rotation thanks to the contactless bearing, estimating the temperature of the levitating HWP becomes a challenge. We manufactured a one-eighth-scale prototype model of the PMU and built a thermal model. We verified our thermal model against the experimental data and forecast the projected thermal performance of the PMU for a full-scale model. From this analysis, we discuss the design requirements toward constructing the full-scale model for use in a space environment such as a future CMB satellite mission, LiteBIRD.
NASA Astrophysics Data System (ADS)
Curcó, David; Casanovas, Jordi; Roca, Marc; Alemán, Carlos
2005-07-01
A method for generating atomistic models of dense amorphous polymers is presented. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, a relaxation algorithm is applied to minimize the non-bonding interactions. Two alternative relaxation methods, based on simple minimization and on Concerted Rotation techniques, have been implemented. The performance of the method has been checked by simulating polyethylene, polypropylene, nylon 6, poly(L,D-lactic acid) and polyglycolic acid.
Minimization In Digital Design As A Meta-Planning Problem
NASA Astrophysics Data System (ADS)
Ho, William P. C.; Wu, Jung-Gen
1987-05-01
In our model-based expert system for automatic digital system design, we formalize the design process into three sub-processes - compiling high-level behavioral specifications into primitive behavioral operations, grouping primitive operations into behavioral functions, and grouping functions into modules. Consideration of design minimization explicitly controls decision-making in the last two sub-processes. Design minimization, a key task in the automatic design of digital systems, is complicated by the high degree of interaction among the time sequence and content of design decisions. In this paper, we present an AI approach which directly addresses these interactions and their consequences by modeling the minimization problem as a planning problem, and the management of design decision-making as a meta-planning problem.
Study on Optimum Design of Multi-Pole Interior Permanent Magnet Motor with Concentrated Windings
NASA Astrophysics Data System (ADS)
Kano, Yoshiaki; Kosaka, Takashi; Matsui, Nobuyuki
Interior Permanent Magnet Synchronous Motors (IPMSM) are found in many applications because of their high power density and high efficiency. The existence of a complex magnetic circuit, however, makes the design of this machine quite complicated. Although FEM is commonly used in IPMSM design, one of its disadvantages is long CPU times. This paper presents a simple non-linear magnetic analysis for a multi-pole IPMSM as a preliminary design tool ahead of FEM. The proposed analysis consists of a geometric-flux-tube-based equivalent-magnetic-circuit model. The model includes saturable permeances that take into account the local magnetic saturation in the core. As a result, the proposed analysis is capable of calculating the flux distribution and the torque characteristics in the presence of magnetic saturation. The effectiveness of the proposed analysis is verified by comparison with FEM in terms of accuracy and computation time for two IPMSMs with different specifications. After verification, the proposed analysis-based optimum design is examined, by which the motor volume is minimized while satisfying the necessary maximum torque for the target applications.
Brand, Richard A; Stanford, Clark M; Swan, Colby C
2003-01-01
Joint implant design clearly affects long-term outcome. While many implant designs have been empirically based, finite element analysis has the potential to identify beneficial and deleterious features prior to clinical trials. Finite element analysis is a powerful analytic tool allowing computation of the stress and strain distribution throughout an implant construct. Whether it is useful depends upon many assumptions and details of the model. Chief among these is whether the stresses or strains under a limited set of loading conditions relate to outcome, since ultimate failure is related to biological factors in addition to mechanical ones, and since the mechanical causes of failure are related to load history rather than to a few loading conditions. Newer approaches can minimize this and the many other model limitations. If the surgeon is to critically and properly interpret the results in scientific articles and sales literature, he or she must have a fundamental understanding of finite element analysis. We outline here the major capabilities of finite element analysis, as well as its assumptions and limitations. PMID:14575244
Spontaneous emergence of milling (vortex state) in a Vicsek-like model
NASA Astrophysics Data System (ADS)
Costanzo, A.; Hemelrijk, C. K.
2018-04-01
Collective motion is of interest to laymen and scientists in different fields. In groups of animals, many patterns of collective motion arise, such as polarized schools and mills (i.e. circular motion). Collective motion can be generated in computational models of different degrees of complexity. In these models, moving individuals coordinate with others nearby. In the more complex models, individuals attract each other, align their headings, and avoid collisions. Simpler models may include only one or two of these types of interactions. The collective pattern that interests us here is milling, which is observed in many animal species. It has been reproduced in the more complex models, but not in simpler models based only on alignment, such as the well-known Vicsek model. Our aim is to provide insight into the minimal conditions required for milling by making minimal modifications to the Vicsek model. Our results show that milling occurs when both the field of view and the maximal angular velocity are decreased. Remarkably, apart from milling, our minimal model also exhibits many of the other patterns of collective motion observed in animal groups.
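A minimal sketch of the kind of modification described, alignment-only agents with a restricted field of view and a capped turning rate, is given below; all parameter values are illustrative assumptions rather than the published settings.

```python
import numpy as np

rng = np.random.default_rng(1)

N, L = 200, 10.0            # number of agents, periodic box size
r = 1.0                     # interaction radius
v0, dt = 0.05, 1.0          # speed and time step
fov = np.deg2rad(90)        # half-angle of the restricted field of view (assumption)
max_turn = np.deg2rad(10)   # maximal angular velocity per step (assumption)
noise = 0.05                # heading noise amplitude

pos = rng.uniform(0, L, (N, 2))
theta = rng.uniform(-np.pi, np.pi, N)

def wrap(a):
    """Wrap angles to (-pi, pi]."""
    return (a + np.pi) % (2 * np.pi) - np.pi

for step in range(1000):
    # Pairwise displacement with periodic boundaries
    d = pos[None, :, :] - pos[:, None, :]
    d -= L * np.round(d / L)
    dist = np.hypot(d[..., 0], d[..., 1])
    bearing = np.arctan2(d[..., 1], d[..., 0])
    # Neighbours: within radius r AND inside the forward field of view
    in_fov = np.abs(wrap(bearing - theta[:, None])) <= fov
    neigh = (dist < r) & in_fov
    np.fill_diagonal(neigh, True)          # always include self
    # Average heading of neighbours (Vicsek alignment rule)
    mean_sin = (neigh * np.sin(theta)[None, :]).sum(axis=1)
    mean_cos = (neigh * np.cos(theta)[None, :]).sum(axis=1)
    desired = np.arctan2(mean_sin, mean_cos) + noise * rng.uniform(-np.pi, np.pi, N)
    # Cap the turning rate (maximal angular velocity)
    turn = np.clip(wrap(desired - theta), -max_turn, max_turn)
    theta = wrap(theta + turn)
    pos = (pos + v0 * dt * np.column_stack([np.cos(theta), np.sin(theta)])) % L

print("final polarization:", np.hypot(np.cos(theta).mean(), np.sin(theta).mean()))
```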
Choosing colors for map display icons using models of visual search.
Shive, Joshua; Francis, Gregory
2013-04-01
We show how to choose colors for icons on maps to minimize search time using predictions of a model of visual search. The model analyzes digital images of a search target (an icon on a map) and a search display (the map containing the icon) and predicts search time as a function of target-distractor color distinctiveness and target eccentricity. We parameterized the model using data from a visual search task and performed a series of optimization tasks to test the model's ability to choose colors for icons to minimize search time across icons. Map display designs made by this procedure were tested experimentally. In a follow-up experiment, we examined the model's flexibility to assign colors in novel search situations. The model fits human performance, performs well on the optimization tasks, and can choose colors for icons on maps with novel stimuli to minimize search time without requiring additional model parameter fitting. Models of visual search can suggest color choices that produce search time reductions for display icons. Designers should consider constructing visual search models as a low-cost method of evaluating color assignments.
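A minimal sketch of the kind of color-assignment optimization such a model supports is given below; the predicted-search-time formula, the palette and the icon list are hypothetical stand-ins for the fitted model, not the authors' parameterization.

```python
import itertools
import math

# Hypothetical palette (RGB) and icons with eccentricity (deg) and nearby distractor colors
palette = {"red": (230, 40, 40), "green": (40, 180, 70), "blue": (40, 90, 220),
           "orange": (240, 150, 30), "purple": (140, 60, 180)}
icons = [
    {"name": "hospital", "eccentricity": 2.0, "distractors": [(60, 180, 75), (45, 95, 210)]},
    {"name": "fuel",     "eccentricity": 6.0, "distractors": [(235, 45, 50)]},
    {"name": "hotel",    "eccentricity": 4.0, "distractors": [(235, 140, 40), (220, 50, 60)]},
]

def predicted_search_time(color, icon, a=0.4, b=250.0, c=0.05):
    """Hypothetical model: time grows with eccentricity and with color similarity to distractors."""
    d_min = min(math.dist(color, d) for d in icon["distractors"])
    return a + b / (d_min + 1.0) + c * icon["eccentricity"]

# Exhaustive search over assignments of distinct palette colors to icons
best = None
for combo in itertools.permutations(palette, len(icons)):
    total = sum(predicted_search_time(palette[c], icon) for c, icon in zip(combo, icons))
    if best is None or total < best[0]:
        best = (total, combo)

print("best assignment:", dict(zip([i["name"] for i in icons], best[1])))
print(f"total predicted search time: {best[0]:.2f} s")
```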
An optimal control strategies using vaccination and fogging in dengue fever transmission model
NASA Astrophysics Data System (ADS)
Fitria, Irma; Winarni, Pancahayani, Sigit; Subchan
2017-08-01
This paper discusses a model and an optimal control problem for dengue fever transmission. The model distinguishes human and vector (mosquito) population classes. The human population has three subclasses: susceptible, infected, and resistant. The vector population is divided into wiggler, susceptible, and infected classes. Thus, the model consists of six dynamic equations. To minimize the number of dengue fever cases, we design two control variables in the model, fogging and vaccination. The objective of the optimal control problem is to minimize the number of infected humans, the number of vectors, and the cost of the control efforts. By applying fogging optimally, the number of vectors can be minimized. Vaccination is considered as a control variable because it is one of the efforts being developed to reduce the spread of dengue fever. We use the Pontryagin Minimum Principle to solve the optimal control problem. Furthermore, numerical simulation results are given to show the effect of the optimal control strategies in reducing the dengue fever epidemic.
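The sketch below integrates a hypothetical six-compartment system of the kind described (three human classes, three vector classes including a wiggler stage) under constant fogging and vaccination rates. The equations, parameters and controls are illustrative assumptions, and the full Pontryagin-based optimization is not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions)
mu_h, mu_v, mu_w = 1/(70*365), 1/14, 1/10   # human, adult vector, wiggler mortality
beta_h, beta_v = 0.30, 0.25                 # transmission rates
gamma, sigma, phi = 1/7, 1/8, 5.0           # recovery, maturation, oviposition
K = 20000.0                                 # wiggler carrying capacity
u1, u2 = 0.01, 0.05                         # vaccination and fogging rates (constant controls)

def rhs(t, y):
    Sh, Ih, Rh, W, Sv, Iv = y
    Nh = Sh + Ih + Rh
    dSh = mu_h * Nh - beta_h * Sh * Iv / Nh - u1 * Sh - mu_h * Sh
    dIh = beta_h * Sh * Iv / Nh - gamma * Ih - mu_h * Ih
    dRh = gamma * Ih + u1 * Sh - mu_h * Rh
    dW  = phi * (Sv + Iv) * (1 - W / K) - (sigma + mu_w) * W
    dSv = sigma * W - beta_v * Sv * Ih / Nh - (mu_v + u2) * Sv
    dIv = beta_v * Sv * Ih / Nh - (mu_v + u2) * Iv
    return [dSh, dIh, dRh, dW, dSv, dIv]

y0 = [9990, 10, 0, 2000, 5000, 50]
sol = solve_ivp(rhs, (0, 180), y0, dense_output=True, max_step=1.0)

# A crude stand-in for the objective: infected humans, vectors and quadratic control effort
t = np.linspace(0, 180, 181)
Sh, Ih, Rh, W, Sv, Iv = sol.sol(t)
cost = np.sum(Ih + 0.1 * (Sv + Iv) + 50 * u1**2 + 50 * u2**2) * (t[1] - t[0])
print(f"peak infected humans: {Ih.max():.1f}, objective value: {cost:.1f}")
```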
CMB constraints on β-exponential inflationary models
NASA Astrophysics Data System (ADS)
Santos, M. A.; Benetti, M.; Alcaniz, J. S.; Brito, F. A.; Silva, R.
2018-03-01
We analyze a class of generalized inflationary models proposed in ref. [1], known as β-exponential inflation. We show that this kind of potential can arise in the context of brane cosmology, where the field describing the size of the extra dimension is interpreted as the inflaton. We discuss the observational viability of this class of models in light of the latest Cosmic Microwave Background (CMB) data from the Planck Collaboration through a Bayesian analysis, and impose tight constraints on the model parameters. We find that the CMB data alone weakly prefer the minimal standard model (ΛCDM) over β-exponential inflation. However, when current local measurements of the Hubble parameter, H0, are considered, the β-inflation model is moderately preferred over the ΛCDM cosmology, making the study of this class of inflationary models interesting in the context of the current H0 tension.
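For orientation, the β-exponential function underlying this class of potentials can be written schematically as below; the precise normalization and the form of the potential follow ref. [1], so the expressions here should be read as an assumed illustration rather than a quotation.

```latex
\exp_{1-\beta}(x) \;\equiv\; \left[\,1+\beta x\,\right]^{1/\beta},
\qquad \lim_{\beta\to 0}\exp_{1-\beta}(x)=e^{x},
\qquad
V(\phi)\;\propto\;\left[\,1-\lambda\,\beta\,\frac{\phi}{M_{\rm Pl}}\,\right]^{1/\beta}.
```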
Adly, Amr A.; Abd-El-Hafiz, Salwa K.
2012-01-01
Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, the vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. The advantages of this approach stem from the non-rectangular nature of these operators, which substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and its experimental testing are presented in the paper. PMID:25685446
A practical model for economic evaluation of tissue-engineered therapies.
Tan, Tien-En; Peh, Gary S L; Finkelstein, Eric A; Mehta, Jodhbir S
2015-01-01
Tissue-engineered therapies are being developed across virtually all fields of medicine. Some of these therapies are already in clinical use, while others are still in clinical trials or the experimental phase. Most initial studies in the evaluation of new therapies focus on demonstration of clinical efficacy. However, cost considerations or economic viability are just as important. Many tissue-engineered therapies have failed to be impactful because of shortcomings in economic competitiveness, rather than clinical efficacy. Furthermore, such economic viability studies should be performed early in the process of development, before significant investment has been made. Cost-minimization analysis combined with sensitivity analysis is a useful model for the economic evaluation of new tissue-engineered therapies. The analysis can be performed early in the development process, and can provide valuable information to guide further investment and research. The utility of the model is illustrated with the practical real-world example of tissue-engineered constructs for corneal endothelial transplantation. The authors have declared no conflicts of interest for this article. © 2015 Wiley Periodicals, Inc.
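A minimal sketch of a cost-minimization comparison with one-way sensitivity analysis is shown below; the cost categories and values are hypothetical placeholders, not data from the corneal-endothelium example.

```python
# Hypothetical per-patient cost components (currency units) for two strategies
tissue_engineered = {"graft_production": 3500, "surgery": 2500, "follow_up": 800}
conventional = {"donor_tissue": 4500, "surgery": 3000, "follow_up": 1000}

def total(costs):
    return sum(costs.values())

def cost_difference(te=tissue_engineered, conv=conventional):
    """Negative values favour the tissue-engineered strategy (cost-minimization)."""
    return total(te) - total(conv)

print("base-case difference:", cost_difference())

# One-way sensitivity analysis: vary each parameter by +/-30% and record the range
for strategy_name, strategy in (("TE", tissue_engineered), ("CONV", conventional)):
    for item in strategy:
        results = []
        for factor in (0.7, 1.3):
            varied = dict(strategy, **{item: strategy[item] * factor})
            if strategy_name == "TE":
                results.append(cost_difference(te=varied))
            else:
                results.append(cost_difference(conv=varied))
        print(f"{strategy_name}.{item}: difference ranges {min(results):.0f} to {max(results):.0f}")
```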
NASA Astrophysics Data System (ADS)
Guerin, Marianne
2001-10-01
An analysis of tritium and 36Cl data collected at Yucca Mountain, Nevada suggests that fracture flow may occur at high velocities through the thick unsaturated zone. The mechanisms and extent of this "fast flow" in fractures at Yucca Mountain are investigated with data analysis, mixing models and several one-dimensional modeling scenarios. The model results and data analysis provide evidence substantiating the weeps model [Gauthier, J.H., Wilson, M.L., Lauffer, F.C., 1992. Proceedings of the Third Annual International High-level Radioactive Waste Management Conference, vol. 1, Las Vegas, NV. American Nuclear Society, La Grange Park, IL, pp. 891-989] and suggest that fast flow in fractures with minimal fracture-matrix interaction may comprise a substantial proportion of the total infiltration through Yucca Mountain. Mixing calculations suggest that bomb-pulse tritium measurements, in general, represent the tail end of travel times for thermonuclear-test-era (bomb-pulse) infiltration. The data analysis shows that bomb-pulse tritium and 36Cl measurements are correlated with discrete features such as horizontal fractures and areas where lateral flow may occur. The results presented here imply that fast flow in fractures may be ubiquitous at Yucca Mountain, occurring when transient infiltration (storms) generates flow in the connected fracture network.
NASA Technical Reports Server (NTRS)
Yam, Y.; Briggs, C.
1988-01-01
One important aspect of the LDR control problem is the possible excitation of structural modes due to random disturbances, mirror chopping, and slewing maneuvers. An analysis was performed to yield a first-order estimate of the effects of such dynamic excitations. The analysis involved a study of slewing jitter, chopping jitter, disturbance responses, and pointing errors, making use of a simplified planar LDR model which describes the LDR dynamics in a plane perpendicular to the primary reflector. Briefly, the results indicate that the commanded slewing profile plays an important role in minimizing the resultant jitter, even to a level acceptable without any control action. An optimal profile should therefore be studied.
Programs for analysis and resizing of complex structures. [computerized minimum weight design]
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Prasad, B.
1978-01-01
The paper describes the PARS (Programs for Analysis and Resizing of Structures) system. PARS is a user-oriented system of programs for the minimum-weight design of structures modeled by finite elements and subject to stress, displacement, flutter and thermal constraints. The system is built around SPAR, an efficient and modular general-purpose finite element program, and consists of a series of processors that communicate through the use of a data base. An efficient optimizer based on the Sequential Unconstrained Minimization Technique (SUMT) with an extended interior penalty function and Newton's method is used. Several problems are presented to demonstrate the system capabilities.
BMP analysis system for watershed-based stormwater management.
Zhen, Jenny; Shoemaker, Leslie; Riverson, John; Alvi, Khalid; Cheng, Mow-Soung
2006-01-01
Best Management Practices (BMPs) are measures for mitigating nonpoint source (NPS) pollution caused mainly by stormwater runoff. Established urban and newly developing areas must develop cost-effective means of restoring or minimizing impacts and of planning future growth. Prince George's County in Maryland, USA, a fast-growing region in the Washington, DC metropolitan area, has developed a number of tools to support analysis and decision making for stormwater management planning and design at the watershed level. These tools support watershed analysis, innovative BMPs, and optimization. Application of these tools can help achieve environmental goals and lead to significant cost savings. This project includes software development that utilizes GIS information and technology, integrates BMP process simulation models, and applies system optimization techniques for BMP planning and selection. The system employs ESRI ArcGIS as the platform, and provides GIS-based visualization and support for developing networks including sequences of land uses, BMPs, and stream reaches. The system also provides interfaces for BMP placement, BMP attribute data input, and decision optimization management. The system includes a stand-alone BMP simulation and evaluation module, which complements both research and regulatory nonpoint source control assessment efforts, and allows flexibility in examining various BMP design alternatives. Process-based simulation of BMPs provides a technique that is sensitive to local climate and rainfall patterns. The system incorporates a meta-heuristic optimization technique to find the most cost-effective BMP placement and implementation plan given a control target or a fixed cost. A case study is presented to demonstrate the application of the Prince George's County system. The case study involves a highly urbanized area in the Anacostia River (a tributary of the Potomac River) watershed southeast of Washington, DC. An innovative system of management practices is proposed to minimize runoff, improve water quality, and provide water reuse opportunities. Proposed management techniques include bioretention, green roofs, and rooftop runoff collection (rain barrel) systems. The modeling system was used to identify the most cost-effective combinations of management practices to help minimize the frequency and size of runoff events and the resulting combined sewer overflows to the Anacostia River.
Long-term Preservation of Data Analysis Capabilities
NASA Astrophysics Data System (ADS)
Gabriel, C.; Arviset, C.; Ibarra, A.; Pollock, A.
2015-09-01
While the long-term preservation of scientific data obtained by large astrophysics missions is ensured through science archives, the issue of data analysis software preservation has hardly been addressed. Efforts by large data centres have so far contributed to maintaining some instrument- or mission-specific data reduction packages on top of high-level general-purpose data analysis software. However, it is always difficult to keep software alive without support and maintenance once the active phase of a mission is over. This is especially difficult in the budgetary model followed by space agencies. We discuss the importance of extending the lifetime of dedicated data analysis packages and review diverse strategies under development at ESA using new paradigms such as Virtual Machines, Cloud Computing, and Software as a Service to make data analysis and calibration software fully available for decades at minimal cost.
The returns and risks of investment portfolio in a financial market
NASA Astrophysics Data System (ADS)
Li, Jiang-Cheng; Mei, Dong-Cheng
2014-07-01
The returns and risks of an investment portfolio in a financial system were investigated by constructing a theoretical model based on the Heston model. After the theoretical model was calculated and the portfolio analyzed, we find the following: (i) the statistical properties (i.e., the probability distribution, the variance and the loss rate of the equity portfolio return) obtained from simulations of the theoretical model and from the real financial data of the Dow Jones Industrial Average are in good agreement; (ii) the maximum dispersion of the investment portfolio is associated with the maximum stability of the equity portfolio return and with minimal investment risk; (iii) an increase of the investment period and a worst investment period are associated with a decrease in the stability of the equity portfolio return and with a maximum investment risk, respectively.
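A minimal Monte Carlo sketch of Heston dynamics of the kind used as the backbone of such an analysis is given below (Euler scheme with full truncation); the parameter values, the independence of the assets and the equal-weight portfolio are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative Heston parameters (assumptions)
mu, kappa, theta, xi, rho = 0.05, 2.0, 0.04, 0.3, -0.7
S0, v0 = 100.0, 0.04
T, steps, paths, n_assets = 1.0, 252, 5000, 4   # one-year horizon, daily steps
dt = T / steps

def heston_terminal_returns():
    """Simulate log-returns of independent Heston assets over the horizon."""
    S = np.full((paths, n_assets), S0)
    v = np.full((paths, n_assets), v0)
    for _ in range(steps):
        z1 = rng.standard_normal((paths, n_assets))
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal((paths, n_assets))
        v_pos = np.maximum(v, 0.0)                      # full truncation
        S *= np.exp((mu - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + xi * np.sqrt(v_pos * dt) * z2
    return np.log(S / S0)

returns = heston_terminal_returns()
portfolio = returns.mean(axis=1)   # equal-weight average of asset log-returns (proxy for a dispersed portfolio)
single = returns[:, 0]             # single-asset portfolio for comparison

for name, r in (("diversified", portfolio), ("single asset", single)):
    loss_rate = np.mean(r < 0.0)
    print(f"{name:12s} mean={r.mean():+.4f} var={r.var():.5f} loss rate={loss_rate:.3f}")
```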
He, Wei; Yurkevich, Igor V; Canham, Leigh T; Loni, Armando; Kaplan, Andrey
2014-11-03
We develop an analytical model based on the WKB approach to evaluate the experimental results of femtosecond pump-probe measurements of the transmittance and reflectance obtained on thin membranes of porous silicon. The model allows us to retrieve a pump-induced nonuniform complex dielectric function change along the membrane depth. We show that fitting the model to the experimental data requires a minimal number of fitting parameters while still complying with the restriction imposed by the Kramers-Kronig relation. The developed model has a broad range of applications in experimental data analysis and practical implementation in the design of devices involving a spatially nonuniform dielectric function, such as in biosensing, wave-guiding, solar energy harvesting, photonics and electro-optical devices.
Slicing AADL Specifications for Model Checking
NASA Technical Reports Server (NTRS)
Odenbrett, Maximilian; Nguyen, Viet Yen; Noll, Thomas
2010-01-01
To combat the state-space explosion problem in model checking larger systems, abstraction techniques can be employed. Here, methods that operate on the system specification before constructing its state space are preferable to those that try to minimize the resulting transition system as they generally reduce peak memory requirements. We sketch a slicing algorithm for system specifications written in (a variant of) the Architecture Analysis and Design Language (AADL). Given a specification and a property to be verified, it automatically removes those parts of the specification that are irrelevant for model checking the property, thus reducing the size of the corresponding transition system. The applicability and effectiveness of our approach is demonstrated by analyzing the state-space reduction for an example, employing a translator from AADL to Promela, the input language of the SPIN model checker.
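A minimal sketch of the underlying idea, backward reachability over a dependency graph starting from the elements mentioned in the property, is given below; the toy specification graph is invented for illustration and is neither AADL nor the authors' algorithm.

```python
from collections import deque

# Toy "specification": each element maps to the elements it depends on (data/event flows).
# This structure is invented for illustration only.
dependencies = {
    "prop_monitor": ["sensor_a", "voter"],
    "voter":        ["sensor_a", "sensor_b"],
    "sensor_a":     ["bus"],
    "sensor_b":     ["bus"],
    "bus":          [],
    "logger":       ["sensor_b"],   # irrelevant to the property below
    "display":      ["logger"],
}

def slice_spec(spec, property_elements):
    """Keep only elements that the property (transitively) depends on."""
    keep, queue = set(property_elements), deque(property_elements)
    while queue:
        elem = queue.popleft()
        for dep in spec.get(elem, []):
            if dep not in keep:
                keep.add(dep)
                queue.append(dep)
    return {e: [d for d in deps if d in keep] for e, deps in spec.items() if e in keep}

sliced = slice_spec(dependencies, ["prop_monitor"])
print("kept elements:", sorted(sliced))   # logger and display are removed
```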
Human behaviours in evacuation crowd dynamics: From modelling to "big data" toward crisis management
NASA Astrophysics Data System (ADS)
Bellomo, N.; Clarke, D.; Gibelli, L.; Townsend, P.; Vreugdenhil, B. J.
2016-09-01
This paper presents an essay on the understanding of human behaviours and on the crisis management of crowds in extreme situations, such as evacuation through complex venues. The first part focuses on the main features of the crowd viewed as a living, hence complex, system. In the second part, these concepts are used in a critical analysis of mathematical models suitable to capture them, as far as possible. The third part then focuses on the use, for safety problems, of a model derived from the methods of mathematical kinetic theory and the theoretical tools of evolutionary game theory. It is shown how this model can depict critical situations and how these can be managed with the aim of minimizing the risk of catastrophic events.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dahms, Rainer N.
2016-04-26
A generalized framework for multi-component liquid injections is presented to understand and predict the breakdown of classic two-phase theory and spray atomization at engine-relevant conditions. The analysis focuses on the thermodynamic structure and the immiscibility state of representative gas-liquid interfaces. The most modern form of Helmholtz energy mixture state equation is utilized which exhibits a unique and physically consistent behavior over the entire two-phase regime of fluid densities. It is combined with generalized models for non-linear gradient theory and for liquid injections to quantify multi-component two-phase interface structures in global thermal equilibrium. Then, the Helmholtz free energy is minimized, which determines the interfacial species distribution as a consequence. This minimal free energy state is demonstrated to validate the underlying assumptions of classic two-phase theory and spray atomization. However, under certain engine-relevant conditions for which corroborating experimental data are presented, this requirement for interfacial thermal equilibrium becomes unsustainable. A rigorously derived probability density function quantifies the ability of the interface to develop internal spatial temperature gradients in the presence of significant temperature differences between injected liquid and ambient gas. Then, the interface can no longer be viewed as an isolated system at minimal free energy. Instead, the interfacial dynamics become intimately connected to those of the separated homogeneous phases. Hence, the interface transitions toward a state in local equilibrium whereupon it becomes a dense-fluid mixing layer. A new conceptual view of a transitional liquid injection process emerges from a transition time scale analysis. Close to the nozzle exit, the two-phase interface still remains largely intact and more classic two-phase processes prevail as a consequence. Further downstream, however, the transition to dense-fluid mixing generally occurs before the liquid length is reached. As a result, the significance of the presented modeling expressions is established by a direct comparison to a reduced model, which utilizes widely applied approximations but fundamentally fails to capture the physical complexity discussed in this paper.
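Schematically, the non-linear gradient theory referred to above determines the interface profile by minimizing a free-energy functional of the following general form (a textbook sketch, with κ_ij the influence parameters; the paper's full multi-component formulation may differ in detail):

```latex
\Omega[\{\rho_i\}] \;=\; \int \Big[\, f_0(\{\rho_i\},T) \;-\; \sum_i \mu_i \rho_i
\;+\; \tfrac{1}{2}\sum_{i,j}\kappa_{ij}\,\nabla\rho_i\cdot\nabla\rho_j \Big]\, dV
\;\longrightarrow\; \min .
```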
High-order sliding-mode control for blood glucose regulation in the presence of uncertain dynamics.
Hernández, Ana Gabriela Gallardo; Fridman, Leonid; Leder, Ron; Andrade, Sergio Islas; Monsalve, Cristina Revilla; Shtessel, Yuri; Levant, Arie
2011-01-01
The success of automatic blood glucose regulation depends on the robustness of the control algorithm used. It is a difficult task to perform due to the complexity of the glucose-insulin regulation system. The variety of existing models reflects the great number of phenomena involved in the process, and the inter-patient variability of the parameters represents another challenge. In this research a High-Order Sliding-Mode Controller is proposed. It is applied to two well-known models, the Bergman Minimal Model and the Sorensen Model, to test its robustness with respect to uncertain dynamics and patient parameter variability. The controller designed on the basis of the simulations is tested with the specific Bergman Minimal Model of a diabetic patient whose parameters were identified from an in vivo assay. To minimize the insulin infusion rate and avoid the risk of hypoglycemia, the glucose target is a dynamic profile.
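For concreteness, the sketch below integrates the Bergman minimal model with a deliberately simple proportional insulin-infusion law standing in for the high-order sliding-mode controller of the paper; the parameter values are illustrative assumptions, not the identified patient parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative Bergman minimal-model parameters (assumptions)
p1, p2, p3 = 0.028, 0.025, 1.3e-5    # 1/min, 1/min, 1/(min^2 per mU/L)
n, VI = 0.09, 12.0                   # insulin clearance (1/min), distribution volume (L)
Gb, Ib = 90.0, 10.0                  # basal glucose (mg/dL) and insulin (mU/L)
G_target = 100.0                     # mg/dL
Kp = 0.05                            # proportional gain of the stand-in controller

def infusion(G):
    """Very simple stand-in control law (NOT the sliding-mode controller)."""
    return max(0.0, Kp * (G - G_target))   # mU/min, non-negative

def rhs(t, y):
    G, X, I = y
    u = infusion(G)
    dG = -(p1 + X) * G + p1 * Gb
    dX = -p2 * X + p3 * (I - Ib)
    dI = -n * (I - Ib) + u / VI
    return [dG, dX, dI]

y0 = [250.0, 0.0, Ib]                # hyperglycaemic initial condition
sol = solve_ivp(rhs, (0, 400), y0, max_step=1.0)

print(f"glucose after 400 min: {sol.y[0, -1]:.1f} mg/dL (started at {y0[0]:.0f})")
```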
Kelley, George A; Kelley, Kristi S; Callahan, Leigh F
2018-07-01
A recent meta-analysis reported statistically significant improvements in anxiety as a result of exercise in adults with arthritis and other rheumatic diseases (AORD) using the traditional standardized mean difference (SMD) effect size (ES). The objective of this study was to use the more recently developed and clinically relevant minimal important difference (MID) approach to examine this association. Data from a previous meta-analysis of 14 randomized controlled trials representing 926 initially enrolled adults ≥ 18 years of age (539 exercise, 387 control) were used to calculate the ES using the MID approach. Minimal important difference data were derived from previously reported anchor-based values that represented the different instruments used to assess anxiety. Effect sizes were pooled using the inverse heterogeneity (IVhet) model. Overall, exercise resulted in a mean ES reduction in anxiety of -0.80 (95% CI, -1.60 to 0.001, p = 0.05; Q = 92.1, p < 0.001, I² = 83.7%, 95% CI, 74.9% to 89.5%), suggesting that, overall, exercise may benefit an appreciable number of patients. Nonetheless, this effect spanned the range from many patients gaining important benefits to no patients improving. The clinically relevant effects of exercise on anxiety in adults with AORD are varied. However, these results should be interpreted with caution given the absence of anchor-based MID data specific to the instruments and questions used to assess anxiety in adults with AORD. A need exists for future research to establish instrument-specific, anchor-based MID values for questions assessing anxiety in adults with AORD.
Impact of TRMM and SSM/I Rainfall Assimilation on Global Analysis and QPF
NASA Technical Reports Server (NTRS)
Hou, Arthur; Zhang, Sara; Reale, Oreste
2002-01-01
Evaluation of QPF skill requires quantitatively accurate precipitation analyses. We show that assimilation of surface rain rates derived from the Tropical Rainfall Measuring Mission (TRMM) Microwave Imager and the Special Sensor Microwave/Imager (SSM/I) improves quantitative precipitation estimates (QPE) and many aspects of global analyses. Short-range forecasts initialized with analyses that include satellite rainfall data generally yield significantly higher QPF threat scores and better storm track predictions. These results were obtained using a variational procedure that minimizes the difference between the observed and model rain rates by correcting the moist physics tendency of the forecast model over a 6-h assimilation window. In two case studies of Hurricanes Bonnie and Floyd, synoptic analysis shows that this procedure produces initial conditions with better-defined tropical storm features and stronger precipitation intensity associated with the storm.
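In schematic form, the variational step described minimizes a cost function of the following type over each 6-h window, with the control variable being a correction to the model's moist-physics tendency (the notation here is generic variational bookkeeping, not the operational implementation):

```latex
J(\delta q) \;=\; \tfrac{1}{2}\,\delta q^{\mathsf T} \mathbf{B}^{-1} \delta q
\;+\; \tfrac{1}{2}\,\big[r_{\mathrm{model}}(\delta q)-r_{\mathrm{obs}}\big]^{\mathsf T}
\mathbf{R}^{-1}\big[r_{\mathrm{model}}(\delta q)-r_{\mathrm{obs}}\big]
\;\longrightarrow\; \min_{\delta q}.
```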
Impact and Penetration Simulations for Composite Wing-like Structures
NASA Technical Reports Server (NTRS)
Knight, Norman F.
1998-01-01
The goal of this research project was to develop methodologies for the analysis of wing-like structures subjected to impact loadings. Low-speed impact causing either no damage or only minimal damage and high-speed impact causing severe laminate damage and possible penetration of the structure were to be considered during this research effort. To address this goal, an assessment of current analytical tools for impact analysis was performed. The analytical tools for impact and penetration simulations were assessed with regard to accuracy, modeling capability, and damage modeling, as well as robustness, efficiency, and usability in a wing design environment. Following the qualitative assessment, selected quantitative evaluations were to be performed using the leading simulation tools. Based on this assessment, future research thrusts for impact and penetration simulation of composite wing-like structures were identified.
Programmatic management of multidrug-resistant tuberculosis: models from three countries.
Furin, J; Bayona, J; Becerra, M; Farmer, P; Golubkov, A; Hurtado, R; Joseph, J K; Keshavjee, S; Ponomarenko, O; Rich, M; Shin, S
2011-10-01
Although multidrug-resistant tuberculosis (MDR-TB) is a major global health problem, there is a gap in programmatic treatment implementation. This study describes MDR-TB treatment models in three countries (Peru, Russia and Lesotho) using qualitative data collected over a 13-year period. A program analysis is presented for each country focusing on baseline medical care, initial implementation and program evolution. A pattern analysis revealed six overarching themes common to all three programs: 1) importance of baseline assessments, 2) early identification of key collaborators, 3) identification of initial locus of care, 4) minimization of patient-incurred costs, 5) targeted interventions for vulnerable populations and 6) importance of technical assistance and funding. Site commonalities and differences in each of these areas were analyzed. It is recommended that all programs providing MDR-TB treatment address these six areas during program development and implementation.
Fundamentals of Free-Space Optical Communications
NASA Technical Reports Server (NTRS)
Dolinar, Sam; Moision, Bruce; Erkmen, Baris
2012-01-01
Free-space optical communication systems potentially gain many dBs over RF systems. There is no upper limit on the theoretically achievable photon efficiency when the system is quantum-noise-limited: a) Intensity modulations plus photon counting can achieve arbitrarily high photon efficiency, but with sub-optimal spectral efficiency. b) Quantum-ideal number states can achieve the ultimate capacity in the limit of perfect transmissivity. Appropriate error correction codes are needed to communicate reliably near the capacity limits. Poisson-modeled noises, detector losses, and atmospheric effects must all be accounted for: a) Theoretical models are used to analyze performance degradations. b) Mitigation strategies derived from this analysis are applied to minimize these degradations.
Radiation protection for manned space activities
NASA Technical Reports Server (NTRS)
Jordan, T. M.
1983-01-01
The Earth's natural radiation environment poses a hazard to manned space activities, directly through biological effects and indirectly through effects on materials and electronics. The following standard practices are indicated, addressing: (1) environment models for all radiation species, including uncertainties and temporal variations; (2) upper-bound and nominal quality factors for biological radiation effects that include dose, dose rate, critical organ, and linear energy transfer variations; (3) particle transport and shielding methodology, including system and man modeling and uncertainty analysis; (4) mission planning that includes active dosimetry, minimizes exposure during extravehicular activities, subjects every mission to a radiation review, and specifies operational procedures for forecasting, recognizing, and dealing with large solar flares.
System identification of an unmanned quadcopter system using MRAN neural network
NASA Astrophysics Data System (ADS)
Pairan, M. F.; Shamsudin, S. S.
2017-12-01
This project presents a performance analysis of a radial basis function (RBF) neural network trained with the Minimal Resource Allocating Network (MRAN) algorithm for real-time identification of a quadcopter. MRAN's performance is compared with that of an RBF network trained with the Constant Trace algorithm on 2500 input-output data pairs. MRAN utilizes a hidden-neuron adding and pruning strategy to obtain an optimum RBF structure, increase prediction accuracy and reduce training time. The results indicate that the MRAN algorithm produces faster training and more accurate predictions than the standard RBF network. The model proposed in this paper is capable of identifying and modelling a nonlinear representation of the quadcopter flight dynamics.
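A much-simplified sketch of the MRAN growing/pruning idea is given below; it replaces MRAN's extended-Kalman-filter parameter update with a plain gradient step and uses invented thresholds and a toy plant, so it illustrates the structure of the algorithm rather than reproducing the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

class TinyMRAN:
    """Simplified MRAN-style growing/pruning RBF network (1-D output)."""
    def __init__(self, e_min=0.05, d_min=0.3, kappa=0.8, prune_tol=0.01, prune_window=50, lr=0.05):
        self.centers, self.widths, self.weights, self.low_count = [], [], [], []
        self.e_min, self.d_min, self.kappa = e_min, d_min, kappa
        self.prune_tol, self.prune_window, self.lr = prune_tol, prune_window, lr

    def _phi(self, x):
        return np.array([np.exp(-np.sum((x - c)**2) / (2 * w**2))
                         for c, w in zip(self.centers, self.widths)])

    def predict(self, x):
        return float(self._phi(x) @ np.array(self.weights)) if self.centers else 0.0

    def update(self, x, y):
        err = y - self.predict(x)
        dist = min((np.linalg.norm(x - c) for c in self.centers), default=np.inf)
        if abs(err) > self.e_min and dist > self.d_min:
            # Growth criterion: large error AND input far from existing centers
            self.centers.append(x.copy()); self.widths.append(self.kappa * min(dist, 1.0))
            self.weights.append(err); self.low_count.append(0)
        elif self.centers:
            # Parameter update (gradient step stands in for MRAN's EKF update)
            phi = self._phi(x)
            self.weights = list(np.array(self.weights) + self.lr * err * phi)
            # Pruning: drop neurons whose normalized contribution stays small
            contrib = np.abs(np.array(self.weights) * phi)
            rel = contrib / (contrib.max() + 1e-12)
            for i in range(len(self.low_count)):
                self.low_count[i] = self.low_count[i] + 1 if rel[i] < self.prune_tol else 0
            keep = [i for i, c in enumerate(self.low_count) if c < self.prune_window]
            self.centers = [self.centers[i] for i in keep]
            self.widths = [self.widths[i] for i in keep]
            self.weights = [self.weights[i] for i in keep]
            self.low_count = [self.low_count[i] for i in keep]

# Identify a toy nonlinear map from 2500 input-output pairs
net, errs = TinyMRAN(), []
for _ in range(2500):
    x = rng.uniform(-1, 1, size=2)
    y = np.sin(2 * x[0]) * np.cos(x[1])      # toy "plant" standing in for the quadcopter
    errs.append(abs(y - net.predict(x)))
    net.update(x, y)

print(f"hidden neurons: {len(net.centers)}, mean |error| over last 500 samples: {np.mean(errs[-500:]):.3f}")
```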
MIS Score: Prediction Model for Minimally Invasive Surgery.
Hu, Yuanyuan; Cao, Jingwei; Hou, Xianzeng; Liu, Guangcun
2017-03-01
Reports suggest that patients with spontaneous intracerebral hemorrhage (ICH) can benefit from minimally invasive surgery, but the inclusion criteria for operation are controversial. This article analyzes factors affecting the 30-day prognosis of patients who have received minimally invasive surgery and proposes a simple grading scale that reflects clinical operation effectiveness. The records of 101 patients with spontaneous ICH presenting to Qianfoshan Hospital were reviewed. Factors affecting their 30-day prognosis were identified by logistic regression. A clinical grading scale, the MIS score, was developed by weighting the independent predictors based on these factors. Univariate analysis revealed that the factors affecting 30-day prognosis include Glasgow coma scale score (P < 0.01), age ≥80 years (P < 0.05), blood glucose (P < 0.01), ICH volume (P < 0.01), operation time (P < 0.05), and presence of intraventricular hemorrhage (P < 0.001). Logistic regression revealed that the factors affecting 30-day prognosis include Glasgow coma scale score (P < 0.05), age (P < 0.05), ICH volume (P < 0.01), and presence of intraventricular hemorrhage (P < 0.05). The MIS score was developed accordingly; 39 patients with MIS scores of 0-1 had favorable prognoses, whereas only 9 patients with MIS scores of 2-5 had poor prognoses. The MIS score is a simple grading scale that can be used to select patients who are suited for minimally invasive drainage surgery. When the MIS score is 0-1, minimally invasive surgery is strongly recommended for patients with spontaneous cerebral hemorrhage. The scale merits further prospective studies to fully determine its efficacy. Copyright © 2016 Elsevier Inc. All rights reserved.