Graph cuts for curvature based image denoising.
Bae, Egil; Shi, Juan; Tai, Xue-Cheng
2011-05-01
Minimization of total variation (TV) is a well-known method for image denoising. Recently, the relationship between TV minimization problems and binary MRF models has been much explored. This has resulted in some very efficient combinatorial optimization algorithms for the TV minimization problem in the discrete setting via graph cuts. To overcome limitations, such as staircasing effects, of the relatively simple TV model, variational models based upon higher order derivatives have been proposed. The Euler's elastica model is one such higher order model of central importance, which minimizes the curvature of all level lines in the image. Traditional numerical methods for minimizing the energy in such higher order models are complicated and computationally complex. In this paper, we will present an efficient minimization algorithm based upon graph cuts for minimizing the energy in the Euler's elastica model, by simplifying the problem to that of solving a sequence of easy graph representable problems. This sequence has connections to the gradient flow of the energy function, and converges to a minimum point. The numerical experiments show that our new approach is more effective in maintaining smooth visual results while preserving sharp features better than TV models.
Chakravorty, Arghya; Jia, Zhe; Li, Lin; Zhao, Shan; Alexov, Emil
2018-02-13
Typically, the ensemble average polar component of solvation energy (ΔG polar solv ) of a macromolecule is computed using molecular dynamics (MD) or Monte Carlo (MC) simulations to generate conformational ensemble and then single/rigid conformation solvation energy calculation is performed on each snapshot. The primary objective of this work is to demonstrate that Poisson-Boltzmann (PB)-based approach using a Gaussian-based smooth dielectric function for macromolecular modeling previously developed by us (Li et al. J. Chem. Theory Comput. 2013, 9 (4), 2126-2136) can reproduce that ensemble average (ΔG polar solv ) of a protein from a single structure. We show that the Gaussian-based dielectric model reproduces the ensemble average ΔG polar solv (⟨ΔG polar solv ⟩) from an energy-minimized structure of a protein regardless of the minimization environment (structure minimized in vacuo, implicit or explicit waters, or crystal structure); the best case, however, is when it is paired with an in vacuo-minimized structure. In other minimization environments (implicit or explicit waters or crystal structure), the traditional two-dielectric model can still be selected with which the model produces correct solvation energies. Our observations from this work reflect how the ability to appropriately mimic the motion of residues, especially the salt bridge residues, influences a dielectric model's ability to reproduce the ensemble average value of polar solvation free energy from a single in vacuo-minimized structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kolda, Christopher
In this talk, I review recent work on using a generalization of the Next-to-Minimal Supersymmetric Standard Model (NMSSM), called the Singlet-extended Minimal Supersymmetric Standard Model (SMSSM), to raise the mass of the Standard Model-like Higgs boson without requiring extremely heavy top squarks or large stop mixing. In so doing, this model solves the little hierarchy problem of the minimal model (MSSM), at the expense of leaving the {mu}-problem of the MSSM unresolved. This talk is based on work published in Refs. [1, 2, 3].
Cobelli, Claudio; Dalla Man, Chiara; Toffolo, Gianna; Basu, Rita; Vella, Adrian; Rizza, Robert
2014-01-01
The simultaneous assessment of insulin action, secretion, and hepatic extraction is key to understanding postprandial glucose metabolism in nondiabetic and diabetic humans. We review the oral minimal method (i.e., models that allow the estimation of insulin sensitivity, β-cell responsivity, and hepatic insulin extraction from a mixed-meal or an oral glucose tolerance test). Both of these oral tests are more physiologic and simpler to administer than those based on an intravenous test (e.g., a glucose clamp or an intravenous glucose tolerance test). The focus of this review is on indices provided by physiological-based models and their validation against the glucose clamp technique. We discuss first the oral minimal model method rationale, data, and protocols. Then we present the three minimal models and the indices they provide. The disposition index paradigm, a widely used β-cell function metric, is revisited in the context of individual versus population modeling. Adding a glucose tracer to the oral dose significantly enhances the assessment of insulin action by segregating insulin sensitivity into its glucose disposal and hepatic components. The oral minimal model method, by quantitatively portraying the complex relationships between the major players of glucose metabolism, is able to provide novel insights regarding the regulation of postprandial metabolism. PMID:24651807
NASA Astrophysics Data System (ADS)
Curcó, David; Casanovas, Jordi; Roca, Marc; Alemán, Carlos
2005-07-01
A method for generating atomistic models of dense amorphous polymers is presented. The method is organized in a two-steps procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, a relaxation algorithm is applied to minimize the non-bonding interactions. Two alternative relaxation methods, which are based simple minimization and Concerted Rotation techniques, have been implemented. The performance of the method has been checked by simulating polyethylene, polypropylene, nylon 6, poly(L,D-lactic acid) and polyglycolic acid.
Applications of minimal physiologically-based pharmacokinetic models
Cao, Yanguang
2012-01-01
Conventional mammillary models are frequently used for pharmacokinetic (PK) analysis when only blood or plasma data are available. Such models depend on the quality of the drug disposition data and have vague biological features. An alternative minimal-physiologically-based PK (minimal-PBPK) modeling approach is proposed which inherits and lumps major physiologic attributes from whole-body PBPK models. The body and model are represented as actual blood and tissue usually total body weight) volumes, fractions (fd) of cardiac output with Fick’s Law of Perfusion, tissue/blood partitioning (Kp), and systemic or intrinsic clearance. Analyzing only blood or plasma concentrations versus time, the minimal-PBPK models parsimoniously generate physiologically-relevant PK parameters which are more easily interpreted than those from mam-millary models. The minimal-PBPK models were applied to four types of therapeutic agents and conditions. The models well captured the human PK profiles of 22 selected beta-lactam antibiotics allowing comparison of fitted and calculated Kp values. Adding a classical hepatic compartment with hepatic blood flow allowed joint fitting of oral and intravenous (IV) data for four hepatic elimination drugs (dihydrocodeine, verapamil, repaglinide, midazolam) providing separate estimates of hepatic intrinsic clearance, non-hepatic clearance, and pre-hepatic bioavailability. The basic model was integrated with allometric scaling principles to simultaneously describe moxifloxacin PK in five species with common Kp and fd values. A basic model assigning clearance to the tissue compartment well characterized plasma concentrations of six monoclonal antibodies in human subjects, providing good concordance of predictions with expected tissue kinetics. The proposed minimal-PBPK modeling approach offers an alternative and more rational basis for assessing PK than compartmental models. PMID:23179857
General squark flavour mixing: constraints, phenomenology and benchmarks
De Causmaecker, Karen; Fuks, Benjamin; Herrmann, Bjorn; ...
2015-11-19
Here, we present an extensive study of non-minimal flavour violation in the squark sector in the framework of the Minimal Supersymmetric Standard Model. We investigate the effects of multiple non-vanishing flavour-violating elements in the squark mass matrices by means of a Markov Chain Monte Carlo scanning technique and identify parameter combinations that are favoured by both current data and theoretical constraints. We then detail the resulting distributions of the flavour-conserving and flavour-violating model parameters. Based on this analysis, we propose a set of benchmark scenarios relevant for future studies of non-minimal flavour violation in the Minimal Supersymmetric Standard Model.
Modeling specific action potentials in the human atria based on a minimal single-cell model.
Richter, Yvonne; Lind, Pedro G; Maass, Philipp
2018-01-01
We present an effective method to model empirical action potentials of specific patients in the human atria based on the minimal model of Bueno-Orovio, Cherry and Fenton adapted to atrial electrophysiology. In this model, three ionic are currents introduced, where each of it is governed by a characteristic time scale. By applying a nonlinear optimization procedure, a best combination of the respective time scales is determined, which allows one to reproduce specific action potentials with a given amplitude, width and shape. Possible applications for supporting clinical diagnosis are pointed out.
Lin, Zeming; He, Bingwei; Chen, Jiang; D u, Zhibin; Zheng, Jingyi; Li, Yanqin
2012-08-01
To guide doctors in precisely positioning surgical operation, a new production method of minimally invasive implant guide template was presented. The mandible of patient was scanned by CT scanner, and three-dimensional jaw bone model was constructed based on CT images data The professional dental implant software Simplant was used to simulate the plant based on the three-dimensional CT model to determine the location and depth of implants. In the same time, the dental plaster models were scanned by stereo vision system to build the oral mucosa model. Next, curvature registration technology was used to fuse the oral mucosa model and the CT model, then the designed position of implant in the oral mucosa could be determined. The minimally invasive implant guide template was designed in 3-Matic software according to the design position of implant and the oral mucosa model. Finally, the template was produced by rapid prototyping. The three-dimensional registration technology was useful to fuse the CT data and the dental plaster data, and the template was accurate that could provide the doctors a guidance in the actual planting without cut-off mucosa. The guide template which fabricated by comprehensive utilization of three-dimensional registration, Simplant simulation and rapid prototyping positioning are accurate and can achieve the minimally invasive and accuracy implant surgery, this technique is worthy of clinical use.
van Walraven, Carl
2017-04-01
Diagnostic codes used in administrative databases cause bias due to misclassification of patient disease status. It is unclear which methods minimize this bias. Serum creatinine measures were used to determine severe renal failure status in 50,074 hospitalized patients. The true prevalence of severe renal failure and its association with covariates were measured. These were compared to results for which renal failure status was determined using surrogate measures including the following: (1) diagnostic codes; (2) categorization of probability estimates of renal failure determined from a previously validated model; or (3) bootstrap methods imputation of disease status using model-derived probability estimates. Bias in estimates of severe renal failure prevalence and its association with covariates were minimal when bootstrap methods were used to impute renal failure status from model-based probability estimates. In contrast, biases were extensive when renal failure status was determined using codes or methods in which model-based condition probability was categorized. Bias due to misclassification from inaccurate diagnostic codes can be minimized using bootstrap methods to impute condition status using multivariable model-derived probability estimates. Copyright © 2017 Elsevier Inc. All rights reserved.
Multi-objective group scheduling optimization integrated with preventive maintenance
NASA Astrophysics Data System (ADS)
Liao, Wenzhu; Zhang, Xiufang; Jiang, Min
2017-11-01
This article proposes a single-machine-based integration model to meet the requirements of production scheduling and preventive maintenance in group production. To describe the production for identical/similar and different jobs, this integrated model considers the learning and forgetting effects. Based on machine degradation, the deterioration effect is also considered. Moreover, perfect maintenance and minimal repair are adopted in this integrated model. The multi-objective of minimizing total completion time and maintenance cost is taken to meet the dual requirements of delivery date and cost. Finally, a genetic algorithm is developed to solve this optimization model, and the computation results demonstrate that this integrated model is effective and reliable.
NASA Astrophysics Data System (ADS)
Yang, Jia Sheng
2018-06-01
In this paper, we investigate a H∞ memory controller with input limitation minimization (HMCIM) for offshore jacket platforms stabilization. The main objective of this study is to reduce the control consumption as well as protect the actuator when satisfying the requirement of the system performance. First, we introduce a dynamic model of offshore platform with low order main modes based on mode reduction method in numerical analysis. Then, based on H∞ control theory and matrix inequality techniques, we develop a novel H∞ memory controller with input limitation. Furthermore, a non-convex optimization model to minimize input energy consumption is proposed. Since it is difficult to solve this non-convex optimization model by optimization algorithm, we use a relaxation method with matrix operations to transform this non-convex optimization model to be a convex optimization model. Thus, it could be solved by a standard convex optimization solver in MATLAB or CPLEX. Finally, several numerical examples are given to validate the proposed models and methods.
Image denoising by a direct variational minimization
NASA Astrophysics Data System (ADS)
Janev, Marko; Atanacković, Teodor; Pilipović, Stevan; Obradović, Radovan
2011-12-01
In this article we introduce a novel method for the image de-noising which combines a mathematically well-posdenes of the variational modeling with the efficiency of a patch-based approach in the field of image processing. It based on a direct minimization of an energy functional containing a minimal surface regularizer that uses fractional gradient. The minimization is obtained on every predefined patch of the image, independently. By doing so, we avoid the use of an artificial time PDE model with its inherent problems of finding optimal stopping time, as well as the optimal time step. Moreover, we control the level of image smoothing on each patch (and thus on the whole image) by adapting the Lagrange multiplier using the information on the level of discontinuities on a particular patch, which we obtain by pre-processing. In order to reduce the average number of vectors in the approximation generator and still to obtain the minimal degradation, we combine a Ritz variational method for the actual minimization on a patch, and a complementary fractional variational principle. Thus, the proposed method becomes computationally feasible and applicable for practical purposes. We confirm our claims with experimental results, by comparing the proposed method with a couple of PDE-based methods, where we get significantly better denoising results specially on the oscillatory regions.
Mixed-order phase transition in a minimal, diffusion-based spin model.
Fronczak, Agata; Fronczak, Piotr
2016-07-01
In this paper we exactly solve, within the grand canonical ensemble, a minimal spin model with the hybrid phase transition. We call the model diffusion based because its Hamiltonian can be recovered from a simple dynamic procedure, which can be seen as an equilibrium statistical mechanics representation of a biased random walk. We outline the derivation of the phase diagram of the model, in which the triple point has the hallmarks of the hybrid transition: discontinuity in the average magnetization and algebraically diverging susceptibilities. At this point, two second-order transition curves meet in equilibrium with the first-order curve, resulting in a prototypical mixed-order behavior.
Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing
NASA Technical Reports Server (NTRS)
Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric
2016-01-01
This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
Minimal time spiking in various ChR2-controlled neuron models.
Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel
2018-02-01
We use conductance based neuron models, and the mathematical modeling of optogenetics to define controlled neuron models and we address the minimal time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large to theoretically investigate the existence of singular optimal controls, we observe numerically the optimal bang-bang controls.
Topology of correlation-based minimal spanning trees in real and model markets
NASA Astrophysics Data System (ADS)
Bonanno, Giovanni; Caldarelli, Guido; Lillo, Fabrizio; Mantegna, Rosario N.
2003-10-01
We compare the topological properties of the minimal spanning tree obtained from a large group of stocks traded at the New York Stock Exchange during a 12-year trading period with the one obtained from surrogated data simulated by using simple market models. We find that the empirical tree has features of a complex network that cannot be reproduced, even as a first approximation, by a random market model and by the widespread one-factor model.
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung.
Guo, Shengwen; Fei, Baowei
2009-03-27
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
A minimal path searching approach for active shape model (ASM)-based segmentation of the lung
NASA Astrophysics Data System (ADS)
Guo, Shengwen; Fei, Baowei
2009-02-01
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 +/- 0.33 pixels, while the error is 1.99 +/- 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs.
A Minimal Path Searching Approach for Active Shape Model (ASM)-based Segmentation of the Lung
Guo, Shengwen; Fei, Baowei
2013-01-01
We are developing a minimal path searching method for active shape model (ASM)-based segmentation for detection of lung boundaries on digital radiographs. With the conventional ASM method, the position and shape parameters of the model points are iteratively refined and the target points are updated by the least Mahalanobis distance criterion. We propose an improved searching strategy that extends the searching points in a fan-shape region instead of along the normal direction. A minimal path (MP) deformable model is applied to drive the searching procedure. A statistical shape prior model is incorporated into the segmentation. In order to keep the smoothness of the shape, a smooth constraint is employed to the deformable model. To quantitatively assess the ASM-MP segmentation, we compare the automatic segmentation with manual segmentation for 72 lung digitized radiographs. The distance error between the ASM-MP and manual segmentation is 1.75 ± 0.33 pixels, while the error is 1.99 ± 0.45 pixels for the ASM. Our results demonstrate that our ASM-MP method can accurately segment the lung on digital radiographs. PMID:24386531
Perturbed Yukawa textures in the minimal seesaw model
NASA Astrophysics Data System (ADS)
Rink, Thomas; Schmitz, Kai
2017-03-01
We revisit the minimal seesaw model, i.e., the type-I seesaw mechanism involving only two right-handed neutrinos. This model represents an important minimal benchmark scenario for future experimental updates on neutrino oscillations. It features four real parameters that cannot be fixed by the current data: two CP -violating phases, δ and σ, as well as one complex parameter, z, that is experimentally inaccessible at low energies. The parameter z controls the structure of the neutrino Yukawa matrix at high energies, which is why it may be regarded as a label or index for all UV completions of the minimal seesaw model. The fact that z encompasses only two real degrees of freedom allows us to systematically scan the minimal seesaw model over all of its possible UV completions. In doing so, we address the following question: suppose δ and σ should be measured at particular values in the future — to what extent is one then still able to realize approximate textures in the neutrino Yukawa matrix? Our analysis, thus, generalizes previous studies of the minimal seesaw model based on the assumption of exact texture zeros. In particular, our study allows us to assess the theoretical uncertainty inherent to the common texture ansatz. One of our main results is that a normal light-neutrino mass hierarchy is, in fact, still consistent with a two-zero Yukawa texture, provided that the two texture zeros receive corrections at the level of O (10%). While our numerical results pertain to the minimal seesaw model only, our general procedure appears to be applicable to other neutrino mass models as well.
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
Model-based color halftoning using direct binary search.
Agar, A Ufuk; Allebach, Jan P
2005-12-01
In this paper, we develop a model-based color halftoning method using the direct binary search (DBS) algorithm. Our method strives to minimize the perceived error between the continuous tone original color image and the color halftone image. We exploit the differences in how the human viewers respond to luminance and chrominance information and use the total squared error in a luminance/chrominance based space as our metric. Starting with an initial halftone, we minimize this error metric using the DBS algorithm. Our method also incorporates a measurement based color printer dot interaction model to prevent the artifacts due to dot overlap and to improve color texture quality. We calibrate our halftoning algorithm to ensure accurate colorant distributions in resulting halftones. We present the color halftones which demonstrate the efficacy of our method.
System identification using Nuclear Norm & Tabu Search optimization
NASA Astrophysics Data System (ADS)
Ahmed, Asif A.; Schoen, Marco P.; Bosworth, Ken W.
2018-01-01
In recent years, subspace System Identification (SI) algorithms have seen increased research, stemming from advanced minimization methods being applied to the Nuclear Norm (NN) approach in system identification. These minimization algorithms are based on hard computing methodologies. To the authors’ knowledge, as of now, there has been no work reported that utilizes soft computing algorithms to address the minimization problem within the nuclear norm SI framework. A linear, time-invariant, discrete time system is used in this work as the basic model for characterizing a dynamical system to be identified. The main objective is to extract a mathematical model from collected experimental input-output data. Hankel matrices are constructed from experimental data, and the extended observability matrix is employed to define an estimated output of the system. This estimated output and the actual - measured - output are utilized to construct a minimization problem. An embedded rank measure assures minimum state realization outcomes. Current NN-SI algorithms employ hard computing algorithms for minimization. In this work, we propose a simple Tabu Search (TS) algorithm for minimization. TS algorithm based SI is compared with the iterative Alternating Direction Method of Multipliers (ADMM) line search optimization based NN-SI. For comparison, several different benchmark system identification problems are solved by both approaches. Results show improved performance of the proposed SI-TS algorithm compared to the NN-SI ADMM algorithm.
Shape Sensing Techniques for Continuum Robots in Minimally Invasive Surgery: A Survey.
Shi, Chaoyang; Luo, Xiongbiao; Qi, Peng; Li, Tianliang; Song, Shuang; Najdovski, Zoran; Fukuda, Toshio; Ren, Hongliang
2017-08-01
Continuum robots provide inherent structural compliance with high dexterity to access the surgical target sites along tortuous anatomical paths under constrained environments and enable to perform complex and delicate operations through small incisions in minimally invasive surgery. These advantages enable their broad applications with minimal trauma and make challenging clinical procedures possible with miniaturized instrumentation and high curvilinear access capabilities. However, their inherent deformable designs make it difficult to realize 3-D intraoperative real-time shape sensing to accurately model their shape. Solutions to this limitation can lead themselves to further develop closely associated techniques of closed-loop control, path planning, human-robot interaction, and surgical manipulation safety concerns in minimally invasive surgery. Although extensive model-based research that relies on kinematics and mechanics has been performed, accurate shape sensing of continuum robots remains challenging, particularly in cases of unknown and dynamic payloads. This survey investigates the recent advances in alternative emerging techniques for 3-D shape sensing in this field and focuses on the following categories: fiber-optic-sensor-based, electromagnetic-tracking-based, and intraoperative imaging modality-based shape-reconstruction methods. The limitations of existing technologies and prospects of new technologies are also discussed.
Minimizing Concentration Effects in Water-Based, Laminar-Flow Condensation Particle Counters
Lewis, Gregory S.; Hering, Susanne V.
2013-01-01
Concentration effects in water condensation systems, such as used in the water-based condensation particle counter, are explored through numeric modeling and direct measurements. Modeling shows that the condensation heat release and vapor depletion associated with particle activation and growth lowers the peak supersaturation. At higher number concentrations, the diameter of the droplets formed is smaller, and the threshold particle size for activation is higher. This occurs in both cylindrical and parallel plate geometries. For water-based systems we find that condensational heat release is more important than is vapor depletion. We also find that concentration effects can be minimized through use of smaller tube diameters, or more closely spaced parallel plates. Experimental measurements of droplet diameter confirm modeling results. PMID:24436507
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs l1-based form, which is not the most direct method for maximizing sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency of the simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
Evidence for surprise minimization over value maximization in choice behavior
Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl
2015-01-01
Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents’ to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686
Chen, Bo; Bian, Zhaoying; Zhou, Xiaohui; Chen, Wensheng; Ma, Jianhua; Liang, Zhengrong
2018-04-12
Total variation (TV) minimization for the sparse-view x-ray computer tomography (CT) reconstruction has been widely explored to reduce radiation dose. However, due to the piecewise constant assumption for the TV model, the reconstructed images often suffer from over-smoothness on the image edges. To mitigate this drawback of TV minimization, we present a Mumford-Shah total variation (MSTV) minimization algorithm in this paper. The presented MSTV model is derived by integrating TV minimization and Mumford-Shah segmentation. Subsequently, a penalized weighted least-squares (PWLS) scheme with MSTV is developed for the sparse-view CT reconstruction. For simplicity, the proposed algorithm is named as 'PWLS-MSTV.' To evaluate the performance of the present PWLS-MSTV algorithm, both qualitative and quantitative studies were conducted by using a digital XCAT phantom and a physical phantom. Experimental results show that the present PWLS-MSTV algorithm has noticeable gains over the existing algorithms in terms of noise reduction, contrast-to-ratio measure and edge-preservation.
Morettini, Micaela; Faelli, Emanuela; Perasso, Luisa; Fioretti, Sandro; Burattini, Laura; Ruggeri, Piero; Di Nardo, Francesco
2017-01-01
For the assessment of glucose tolerance from IVGTT data in Zucker rat, minimal model methodology is reliable but time- and money-consuming. This study aimed to validate for the first time in Zucker rat, simple surrogate indexes of insulin sensitivity and secretion against the glucose-minimal-model insulin sensitivity index (SI) and against first- (Φ1) and second-phase (Φ2) β-cell responsiveness indexes provided by C-peptide minimal model. Validation of the surrogate insulin sensitivity index (ISI) and of two sets of coupled insulin-based indexes for insulin secretion, differing from the cut-off point between phases (FPIR3-SPIR3, t = 3 min and FPIR5-SPIR5, t = 5 min), was carried out in a population of ten Zucker fatty rats (ZFR) and ten Zucker lean rats (ZLR). Considering the whole rat population (ZLR+ZFR), ISI showed a significant strong correlation with SI (Spearman's correlation coefficient, r = 0.88; P<0.001). Both FPIR3 and FPIR5 showed a significant (P<0.001) strong correlation with Φ1 (r = 0.76 and r = 0.75, respectively). Both SPIR3 and SPIR5 showed a significant (P<0.001) strong correlation with Φ2 (r = 0.85 and r = 0.83, respectively). ISI is able to detect (P<0.001) the well-recognized reduction in insulin sensitivity in ZFRs, compared to ZLRs. The insulin-based indexes of insulin secretion are able to detect in ZFRs (P<0.001) the compensatory increase of first- and second-phase secretion, associated to the insulin-resistant state. The ability of the surrogate indexes in describing glucose tolerance in the ZFRs was confirmed by the Disposition Index analysis. The model-based validation performed in the present study supports the utilization of low-cost, insulin-based indexes for the assessment of glucose tolerance in Zucker rat, reliable animal model of human metabolic syndrome.
Review of Reactive Power Dispatch Strategies for Loss Minimization in a DFIG-based Wind Farm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Baohua; Hu, Weihao; Hou, Peng
This study reviews and compares the performance of reactive power dispatch strategies for the loss minimization of Doubly Fed Induction Generator (DFIG)-based Wind Farms (WFs). Twelve possible combinations of three WF level reactive power dispatch strategies and four Wind Turbine (WT) level reactive power control strategies are investigated. All of the combined strategies are formulated based on the comprehensive loss models of WFs, including the loss models of DFIGs, converters, filters, transformers, and cables of the collection system. Optimization problems are solved by a Modified Particle Swarm Optimization (MPSO) algorithm. The effectiveness of these strategies is evaluated by simulations onmore » a carefully designed WF under a series of cases with different wind speeds and reactive power requirements of the WF. The wind speed at each WT inside the WF is calculated using the Jensen wake model. The results show that the best reactive power dispatch strategy for loss minimization comes when the WF level strategy and WT level control are coordinated and the losses from each device in the WF are considered in the objective.« less
Review of Reactive Power Dispatch Strategies for Loss Minimization in a DFIG-based Wind Farm
Zhang, Baohua; Hu, Weihao; Hou, Peng; ...
2017-06-27
This study reviews and compares the performance of reactive power dispatch strategies for the loss minimization of Doubly Fed Induction Generator (DFIG)-based Wind Farms (WFs). Twelve possible combinations of three WF level reactive power dispatch strategies and four Wind Turbine (WT) level reactive power control strategies are investigated. All of the combined strategies are formulated based on the comprehensive loss models of WFs, including the loss models of DFIGs, converters, filters, transformers, and cables of the collection system. Optimization problems are solved by a Modified Particle Swarm Optimization (MPSO) algorithm. The effectiveness of these strategies is evaluated by simulations onmore » a carefully designed WF under a series of cases with different wind speeds and reactive power requirements of the WF. The wind speed at each WT inside the WF is calculated using the Jensen wake model. The results show that the best reactive power dispatch strategy for loss minimization comes when the WF level strategy and WT level control are coordinated and the losses from each device in the WF are considered in the objective.« less
Minimization In Digital Design As A Meta-Planning Problem
NASA Astrophysics Data System (ADS)
Ho, William P. C.; Wu, Jung-Gen
1987-05-01
In our model-based expert system for automatic digital system design, we formalize the design process into three sub-processes - compiling high-level behavioral specifications into primitive behavioral operations, grouping primitive operations into behavioral functions, and grouping functions into modules. Consideration of design minimization explicitly controls decision-making in the last two subprocesses. Design minimization, a key task in the automatic design of digital systems, is complicated by the high degree of interaction among the time sequence and content of design decisions. In this paper, we present an AI approach which directly addresses these interactions and their consequences by modeling the minimization prob-lem as a planning problem, and the management of design decision-making as a meta-planning problem.
The Preventive Control of a Dengue Disease Using Pontryagin Minimum Principal
NASA Astrophysics Data System (ADS)
Ratna Sari, Eminugroho; Insani, Nur; Lestari, Dwi
2017-06-01
Behaviour analysis for host-vector model without control of dengue disease is based on the value of basic reproduction number obtained using next generation matrices. Furthermore, the model is further developed involving a preventive control to minimize the contact between host and vector. The purpose is to obtain an optimal preventive strategy with minimal cost. The Pontryagin Minimum Principal is used to find the optimal control analytically. The derived optimality model is then solved numerically to investigate control effort to reduce infected class.
A physiologically based model for tramadol pharmacokinetics in horses.
Abbiati, Roberto Andrea; Cagnardi, Petra; Ravasio, Giuliano; Villa, Roberto; Manca, Davide
2017-09-21
This work proposes an application of a minimal complexity physiologically based pharmacokinetic model to predict tramadol concentration vs time profiles in horses. Tramadol is an opioid analgesic also used for veterinary treatments. Researchers and medical doctors can profit from the application of mathematical models as supporting tools to optimize the pharmacological treatment of animal species. The proposed model is based on physiology but adopts the minimal compartmental architecture necessary to describe the experimental data. The model features a system of ordinary differential equations, where most of the model parameters are either assigned or individualized for a given horse, using literature data and correlations. Conversely, residual parameters, whose value is unknown, are regressed exploiting experimental data. The model proved capable of simulating pharmacokinetic profiles with accuracy. In addition, it provides further insights on un-observable tramadol data, as for instance tramadol concentration in the liver or hepatic metabolism and renal excretion extent. Copyright © 2017 Elsevier Ltd. All rights reserved.
On the convergence of nonconvex minimization methods for image recovery.
Xiao, Jin; Ng, Michael Kwok-Po; Yang, Yu-Fei
2015-05-01
Nonconvex nonsmooth regularization method has been shown to be effective for restoring images with neat edges. Fast alternating minimization schemes have also been proposed and developed to solve the nonconvex nonsmooth minimization problem. The main contribution of this paper is to show the convergence of these alternating minimization schemes, based on the Kurdyka-Łojasiewicz property. In particular, we show that the iterates generated by the alternating minimization scheme, converges to a critical point of this nonconvex nonsmooth objective function. We also extend the analysis to nonconvex nonsmooth regularization model with box constraints, and obtain similar convergence results of the related minimization algorithm. Numerical examples are given to illustrate our convergence analysis.
NASA Astrophysics Data System (ADS)
Ullah, Asmat; Chen, Wen; Khan, Mushtaq Ahmad
2017-07-01
This paper introduces a fractional order total variation (FOTV) based model with three different weights in the fractional order derivative definition for multiplicative noise removal purpose. The fractional-order Euler Lagrange equation which is a highly non-linear partial differential equation (PDE) is obtained by the minimization of the energy functional for image restoration. Two numerical schemes namely an iterative scheme based on the dual theory and majorization- minimization algorithm (MMA) are used. To improve the restoration results, we opt for an adaptive parameter selection procedure for the proposed model by applying the trial and error method. We report numerical simulations which show the validity and state of the art performance of the fractional-order model in visual improvement as well as an increase in the peak signal to noise ratio comparing to corresponding methods. Numerical experiments also demonstrate that MMAbased methodology is slightly better than that of an iterative scheme.
Terluin, Berend; Eekhout, Iris; Terwee, Caroline B
2017-03-01
Patients have their individual minimal important changes (iMICs) as their personal benchmarks to determine whether a perceived health-related quality of life (HRQOL) change constitutes a (minimally) important change for them. We denote the mean iMIC in a group of patients as the "genuine MIC" (gMIC). The aims of this paper are (1) to examine the relationship between the gMIC and the anchor-based minimal important change (MIC), determined by receiver operating characteristic analysis or by predictive modeling; (2) to examine the impact of the proportion of improved patients on these MICs; and (3) to explore the possibility to adjust the MIC for the influence of the proportion of improved patients. Multiple simulations of patient samples involved in anchor-based MIC studies with different characteristics of HRQOL (change) scores and distributions of iMICs. In addition, a real data set is analyzed for illustration. The receiver operating characteristic-based and predictive modeling MICs equal the gMIC when the proportion of improved patients equals 0.5. The MIC is estimated higher than the gMIC when the proportion improved is greater than 0.5, and the MIC is estimated lower than the gMIC when the proportion improved is less than 0.5. Using an equation including the predictive modeling MIC, the log-odds of improvement, the standard deviation of the HRQOL change score, and the correlation between the HRQOL change score and the anchor results in an adjusted MIC reflecting the gMIC irrespective of the proportion of improved patients. Adjusting the predictive modeling MIC for the proportion of improved patients assures that the adjusted MIC reflects the gMIC. We assumed normal distributions and global perceived change scores that were independent on the follow-up score. Additionally, floor and ceiling effects were not taken into account. Copyright © 2017 Elsevier Inc. All rights reserved.
Storage Optimization of Educational System Data
ERIC Educational Resources Information Center
Boja, Catalin
2006-01-01
There are described methods used to minimize data files dimension. There are defined indicators for measuring size of files and databases. The storage optimization process is based on selecting from a multitude of data storage models the one that satisfies the propose problem objective, maximization or minimization of the optimum criterion that is…
Zhang, Hanming; Wang, Linyuan; Yan, Bin; Li, Lei; Cai, Ailong; Hu, Guoen
2016-01-01
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in terms of the preservation of edge information and the suppression of unfavorable staircase effects. However, conventional TGV regularization employs l1-based form, which is not the most direct method for maximizing sparsity prior. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex optimization problem of the TGpV minimization model, we then present an efficient iterative algorithm based on the alternating minimization of augmented Lagrangian function. All of the resulting subproblems decoupled by variable splitting admit explicit solutions by applying alternating minimization method and generalized p-shrinkage mapping. In addition, approximate solutions that can be easily performed and quickly calculated through fast Fourier transform are derived using the proximal point method to reduce the cost of inner subproblems. The accuracy and efficiency of the simulated and real data are qualitatively and quantitatively evaluated to validate the efficiency and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems. PMID:26901410
DEVELOPMENT OF A PORTABLE SOFTWARE LANGUAGE FOR PHYSIOLOGICALLY-BASED PHARMACOKINETIC (PBPK) MODELS
The PBPK modeling community has had a long-standing problem with modeling software compatibility. The numerous software packages used for PBPK models are, at best, minimally compatible. This creates problems ranging from model obsolescence due to software support discontinuation...
High-order sliding-mode control for blood glucose regulation in the presence of uncertain dynamics.
Hernández, Ana Gabriela Gallardo; Fridman, Leonid; Leder, Ron; Andrade, Sergio Islas; Monsalve, Cristina Revilla; Shtessel, Yuri; Levant, Arie
2011-01-01
The success of blood glucose automatic regulation depends on the robustness of the control algorithm used. It is a difficult task to perform due to the complexity of the glucose-insulin regulation system. The variety of model existing reflects the great amount of phenomena involved in the process, and the inter-patient variability of the parameters represent another challenge. In this research a High-Order Sliding-Mode Control is proposed. It is applied to two well known models, Bergman Minimal Model, and Sorensen Model, to test its robustness with respect to uncertain dynamics, and patients' parameter variability. The controller designed based on the simulations is tested with the specific Bergman Minimal Model of a diabetic patient whose parameters were identified from an in vivo assay. To minimize the insulin infusion rate, and avoid the hypoglycemia risk, the glucose target is a dynamical profile.
Morrato, Elaine H; Smith, Meredith Y
2015-01-01
Pharmaceutical risk minimization programs are now an established requirement in the regulatory landscape. However, pharmaceutical companies have been slow to recognize and embrace the significant potential these programs offer in terms of enhancing trust with health care professionals and patients, and for providing a mechanism for bringing products to the market that might not otherwise have been approved. Pitfalls of the current drug development process include risk minimization programs that are not data driven; missed opportunities to incorporate pragmatic methods and market-based insights, outmoded tools and data sources, lack of rapid evaluative learning to support timely adaption, lack of systematic approaches for patient engagement, and questions on staffing and organizational infrastructure. We propose better integration of risk minimization with clinical drug development and commercialization work streams throughout the product lifecycle. We articulate a vision and propose broad adoption of organizational models for incorporating risk minimization expertise into the drug development process. Three organizational models are discussed and compared: outsource/external vendor, embedded risk management specialist model, and Center of Excellence. PMID:25750537
A network-based approach for resistance transmission in bacterial populations.
Gehring, Ronette; Schumm, Phillip; Youssef, Mina; Scoglio, Caterina
2010-01-07
Horizontal transfer of mobile genetic elements (conjugation) is an important mechanism whereby resistance is spread through bacterial populations. The aim of our work is to develop a mathematical model that quantitatively describes this process, and to use this model to optimize antimicrobial dosage regimens to minimize resistance development. The bacterial population is conceptualized as a compartmental mathematical model to describe changes in susceptible, resistant, and transconjugant bacteria over time. This model is combined with a compartmental pharmacokinetic model to explore the effect of different plasma drug concentration profiles. An agent-based simulation tool is used to account for resistance transfer occurring when two bacteria are adjacent or in close proximity. In addition, a non-linear programming optimal control problem is introduced to minimize bacterial populations as well as the drug dose. Simulation and optimization results suggest that the rapid death of susceptible individuals in the population is pivotal in minimizing the number of transconjugants in a population. This supports the use of potent antimicrobials that rapidly kill susceptible individuals and development of dosage regimens that maintain effective antimicrobial drug concentrations for as long as needed to kill off the susceptible population. Suggestions are made for experiments to test the hypotheses generated by these simulations.
An MBO Scheme for Minimizing the Graph Ohta-Kawasaki Functional
NASA Astrophysics Data System (ADS)
van Gennip, Yves
2018-06-01
We study a graph-based version of the Ohta-Kawasaki functional, which was originally introduced in a continuum setting to model pattern formation in diblock copolymer melts and has been studied extensively as a paradigmatic example of a variational model for pattern formation. Graph-based problems inspired by partial differential equations (PDEs) and variational methods have been the subject of many recent papers in the mathematical literature, because of their applications in areas such as image processing and data classification. This paper extends the area of PDE inspired graph-based problems to pattern-forming models, while continuing in the tradition of recent papers in the field. We introduce a mass conserving Merriman-Bence-Osher (MBO) scheme for minimizing the graph Ohta-Kawasaki functional with a mass constraint. We present three main results: (1) the Lyapunov functionals associated with this MBO scheme Γ -converge to the Ohta-Kawasaki functional (which includes the standard graph-based MBO scheme and total variation as a special case); (2) there is a class of graphs on which the Ohta-Kawasaki MBO scheme corresponds to a standard MBO scheme on a transformed graph and for which generalized comparison principles hold; (3) this MBO scheme allows for the numerical computation of (approximate) minimizers of the graph Ohta-Kawasaki functional with a mass constraint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl
For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and themore » Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. As a result, the reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.« less
Balajewicz, Maciej; Tezaur, Irina; Dowell, Earl
2016-05-25
For a projection-based reduced order model (ROM) of a fluid flow to be stable and accurate, the dynamics of the truncated subspace must be taken into account. This paper proposes an approach for stabilizing and enhancing projection-based fluid ROMs in which truncated modes are accounted for a priori via a minimal rotation of the projection subspace. Attention is focused on the full non-linear compressible Navier–Stokes equations in specific volume form as a step toward a more general formulation for problems with generic non-linearities. Unlike traditional approaches, no empirical turbulence modeling terms are required, and consistency between the ROM and themore » Navier–Stokes equation from which the ROM is derived is maintained. Mathematically, the approach is formulated as a trace minimization problem on the Stiefel manifold. As a result, the reproductive as well as predictive capabilities of the method are evaluated on several compressible flow problems, including a problem involving laminar flow over an airfoil with a high angle of attack, and a channel-driven cavity flow problem.« less
Chen, Liang-Hsuan; Hsueh, Chan-Ching
2007-06-01
Fuzzy regression models are useful to investigate the relationship between explanatory and response variables with fuzzy observations. Different from previous studies, this correspondence proposes a mathematical programming method to construct a fuzzy regression model based on a distance criterion. The objective of the mathematical programming is to minimize the sum of distances between the estimated and observed responses on the X axis, such that the fuzzy regression model constructed has the minimal total estimation error in distance. Only several alpha-cuts of fuzzy observations are needed as inputs to the mathematical programming model; therefore, the applications are not restricted to triangular fuzzy numbers. Three examples, adopted in the previous studies, and a larger example, modified from the crisp case, are used to illustrate the performance of the proposed approach. The results indicate that the proposed model has better performance than those in the previous studies based on either distance criterion or Kim and Bishu's criterion. In addition, the efficiency and effectiveness for solving the larger example by the proposed model are also satisfactory.
Optimized Temporal Monitors for SystemC
NASA Technical Reports Server (NTRS)
Tabakov, Deian; Rozier, Kristin Y.; Vardi, Moshe Y.
2012-01-01
SystemC is a modeling language built as an extension of C++. Its growing popularity and the increasing complexity of designs have motivated research efforts aimed at the verification of SystemC models using assertion-based verification (ABV), where the designer asserts properties that capture the design intent in a formal language such as PSL or SVA. The model then can be verified against the properties using runtime or formal verification techniques. In this paper we focus on automated generation of runtime monitors from temporal properties. Our focus is on minimizing runtime overhead, rather than monitor size or monitor-generation time. We identify four issues in monitor generation: state minimization, alphabet representation, alphabet minimization, and monitor encoding. We conduct extensive experimentation and identify a combination of settings that offers the best performance in terms of runtime overhead.
Spontaneous emergence of milling (vortex state) in a Vicsek-like model
NASA Astrophysics Data System (ADS)
Costanzo, A.; Hemelrijk, C. K.
2018-04-01
Collective motion is of interest to laymen and scientists in different fields. In groups of animals, many patterns of collective motion arise such as polarized schools and mills (i.e. circular motion). Collective motion can be generated in computational models of different degrees of complexity. In these models, moving individuals coordinate with others nearby. In the more complex models, individuals attract each other, aligning their headings, and avoiding collisions. Simpler models may include only one or two of these types of interactions. The collective pattern that interests us here is milling, which is observed in many animal species. It has been reproduced in the more complex models, but not in simpler models that are based only on alignment, such as the well-known Vicsek model. Our aim is to provide insight in the minimal conditions required for milling by making minimal modifications to the Vicsek model. Our results show that milling occurs when both the field of view and the maximal angular velocity are decreased. Remarkably, apart from milling, our minimal model also exhibits many of the other patterns of collective motion observed in animal groups.
Electrical Wave Propagation in a Minimally Realistic Fiber Architecture Model of the Left Ventricle
NASA Astrophysics Data System (ADS)
Song, Xianfeng; Setayeshgar, Sima
2006-03-01
Experimental results indicate a nested, layered geometry for the fiber surfaces of the left ventricle, where fiber directions are approximately aligned in each surface and gradually rotate through the thickness of the ventricle. Numerical and analytical results have highlighted the importance of this rotating anisotropy and its possible destabilizing role on the dynamics of scroll waves in excitable media with application to the heart. Based on the work of Peskin[1] and Peskin and McQueen[2], we present a minimally realistic model of the left ventricle that adequately captures the geometry and anisotropic properties of the heart as a conducting medium while being easily parallelizable, and computationally more tractable than fully realistic anatomical models. Complementary to fully realistic and anatomically-based computational approaches, studies using such a minimal model with the addition of successively realistic features, such as excitation-contraction coupling, should provide unique insight into the basic mechanisms of formation and obliteration of electrical wave instabilities. We describe our construction, implementation and validation of this model. [1] C. S. Peskin, Communications on Pure and Applied Mathematics 42, 79 (1989). [2] C. S. Peskin and D. M. McQueen, in Case Studies in Mathematical Modeling: Ecology, Physiology, and Cell Biology, 309(1996)
On the topology of the inflaton field in minimal supergravity models
NASA Astrophysics Data System (ADS)
Ferrara, Sergio; Fré, Pietro; Sorin, Alexander S.
2014-04-01
We consider global issues in minimal supergravity models where a single field inflaton potential emerges. In a particular case we reproduce the Starobinsky model and its description dual to a certain formulation of R + R 2 supergravity. For definiteness we confine our analysis to spaces at constant curvature, either vanishing or negative. Five distinct models arise, two flat models with respectively a quadratic and a quartic potential and three based on the space where its distinct isometries, elliptic, hyperbolic and parabolic are gauged. Fayet-Iliopoulos terms are introduced in a geometric way and they turn out to be a crucial ingredient in order to describe the de Sitter inflationary phase of the Starobinsky model.
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error based on simple human factors and end user input. The graphical user interface (GUI) design is presented by construction based on a series of simple, short design criteria based on fundamental human factors engineering and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nurse staff to select measurement intervals and thus self-manage workload. The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. © 2012 Diabetes Technology Society.
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Introduction Tight glycemic control (TGC) has shown benefits but has been difficult to implement. Model-based methods and computerized protocols offer the opportunity to improve TGC quality and compliance. This research presents an interface design to maximize compliance, minimize real and perceived clinical effort, and minimize error based on simple human factors and end user input. Method The graphical user interface (GUI) design is presented by construction based on a series of simple, short design criteria based on fundamental human factors engineering and includes the use of user feedback and focus groups comprising nursing staff at Christchurch Hospital. The overall design maximizes ease of use and minimizes (unnecessary) interaction and use. It is coupled to a protocol that allows nurse staff to select measurement intervals and thus self-manage workload. Results The overall GUI design is presented and requires only one data entry point per intervention cycle. The design and main interface are heavily focused on the nurse end users who are the predominant users, while additional detailed and longitudinal data, which are of interest to doctors guiding overall patient care, are available via tabs. This dichotomy of needs and interests based on the end user's immediate focus and goals shows how interfaces must adapt to offer different information to multiple types of users. Conclusions The interface is designed to minimize real and perceived clinical effort, and ongoing pilot trials have reported high levels of acceptance. The overall design principles, approach, and testing methods are based on fundamental human factors principles designed to reduce user effort and error and are readily generalizable. PMID:22401330
Silva, M M; Lemos, J M; Coito, A; Costa, B A; Wigren, T; Mendonça, T
2014-01-01
This paper addresses the local identifiability and sensitivity properties of two classes of Wiener models for the neuromuscular blockade and depth of hypnosis, when drug dose profiles like the ones commonly administered in the clinical practice are used as model inputs. The local parameter identifiability was assessed based on the singular value decomposition of the normalized sensitivity matrix. For the given input signal excitation, the results show an over-parameterization of the standard pharmacokinetic/pharmacodynamic models. The same identifiability assessment was performed on recently proposed minimally parameterized parsimonious models for both the neuromuscular blockade and the depth of hypnosis. The results show that the majority of the model parameters are identifiable from the available input-output data. This indicates that any identification strategy based on the minimally parameterized parsimonious Wiener models for the neuromuscular blockade and for the depth of hypnosis is likely to be more successful than if standard models are used. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Žáček, K.
Summary- The only way to make an excessively complex velocity model suitable for application of ray-based methods, such as the Gaussian beam or Gaussian packet methods, is to smooth it. We have smoothed the Marmousi model by choosing a coarser grid and by minimizing the second spatial derivatives of the slowness. This was done by minimizing the relevant Sobolev norm of slowness. We show that minimizing the relevant Sobolev norm of slowness is a suitable technique for preparing the optimum models for asymptotic ray theory methods. However, the price we pay for a model suitable for ray tracing is an increase of the difference between the smoothed and original model. Similarly, the estimated error in the travel time also increases due to the difference between the models. In smoothing the Marmousi model, we have found the estimated error of travel times at the verge of acceptability. Due to the low frequencies in the wavefield of the original Marmousi data set, we have found the Gaussian beams and Gaussian packets at the verge of applicability even in models sufficiently smoothed for ray tracing.
Stress granule formation via ATP depletion-triggered phase separation
NASA Astrophysics Data System (ADS)
Wurtz, Jean David; Lee, Chiu Fan
2018-04-01
Stress granules (SG) are droplets of proteins and RNA that form in the cell cytoplasm during stress conditions. We consider minimal models of stress granule formation based on the mechanism of phase separation regulated by ATP-driven chemical reactions. Motivated by experimental observations, we identify a minimal model of SG formation triggered by ATP depletion. Our analysis indicates that ATP is continuously hydrolysed to deter SG formation under normal conditions, and we provide specific predictions that can be tested experimentally.
Neutrino CP violation and sign of baryon asymmetry in the minimal seesaw model
NASA Astrophysics Data System (ADS)
Shimizu, Yusuke; Takagi, Kenta; Tanimoto, Morimitsu
2018-03-01
We discuss the correlation between the CP violating Dirac phase of the lepton mixing matrix and the cosmological baryon asymmetry based on the leptogenesis in the minimal seesaw model with two right-handed Majorana neutrinos and the trimaximal mixing for neutrino flavors. The sign of the CP violating Dirac phase at low energy is fixed by the observed cosmological baryon asymmetry since there is only one phase parameter in the model. According to the recent T2K and NOνA data of the CP violation, the Dirac neutrino mass matrix of our model is fixed only for the normal hierarchy of neutrino masses.
Robust model-based 3d/3D fusion using sparse matching for minimally invasive surgery.
Neumann, Dominik; Grbic, Sasa; John, Matthias; Navab, Nassir; Hornegger, Joachim; Ionasec, Razvan
2013-01-01
Classical surgery is being disrupted by minimally invasive and transcatheter procedures. As there is no direct view or access to the affected anatomy, advanced imaging techniques such as 3D C-arm CT and C-arm fluoroscopy are routinely used for intra-operative guidance. However, intra-operative modalities have limited image quality of the soft tissue and a reliable assessment of the cardiac anatomy can only be made by injecting contrast agent, which is harmful to the patient and requires complex acquisition protocols. We propose a novel sparse matching approach for fusing high quality pre-operative CT and non-contrasted, non-gated intra-operative C-arm CT by utilizing robust machine learning and numerical optimization techniques. Thus, high-quality patient-specific models can be extracted from the pre-operative CT and mapped to the intra-operative imaging environment to guide minimally invasive procedures. Extensive quantitative experiments demonstrate that our model-based fusion approach has an average execution time of 2.9 s, while the accuracy lies within expert user confidence intervals.
Chronic Motivational State Interacts with Task Reward Structure in Dynamic Decision-Making
Cooper, Jessica A.; Worthy, Darrell A.; Maddox, W. Todd
2015-01-01
Research distinguishes between a habitual, model-free system motivated toward immediately rewarding actions, and a goal-directed, model-based system motivated toward actions that improve future state. We examined the balance of processing in these two systems during state-based decision-making. We tested a regulatory fit hypothesis (Maddox & Markman, 2010) that predicts that global trait motivation affects the balance of habitual- vs. goal-directed processing but only through its interaction with the task framing as gain-maximization or loss-minimization. We found support for the hypothesis that a match between an individual’s chronic motivational state and the task framing enhances goal-directed processing, and thus state-based decision-making. Specifically, chronic promotion-focused individuals under gain-maximization and chronic prevention-focused individuals under loss-minimization both showed enhanced state-based decision-making. Computational modeling indicates that individuals in a match between global chronic motivational state and local task reward structure engaged more goal-directed processing, whereas those in a mismatch engaged more habitual processing. PMID:26520256
How models can support ecosystem-based management of coral reefs
NASA Astrophysics Data System (ADS)
Weijerman, Mariska; Fulton, Elizabeth A.; Janssen, Annette B. G.; Kuiper, Jan J.; Leemans, Rik; Robson, Barbara J.; van de Leemput, Ingrid A.; Mooij, Wolf M.
2015-11-01
Despite the importance of coral reef ecosystems to the social and economic welfare of coastal communities, the condition of these marine ecosystems have generally degraded over the past decades. With an increased knowledge of coral reef ecosystem processes and a rise in computer power, dynamic models are useful tools in assessing the synergistic effects of local and global stressors on ecosystem functions. We review representative approaches for dynamically modeling coral reef ecosystems and categorize them as minimal, intermediate and complex models. The categorization was based on the leading principle for model development and their level of realism and process detail. This review aims to improve the knowledge of concurrent approaches in coral reef ecosystem modeling and highlights the importance of choosing an appropriate approach based on the type of question(s) to be answered. We contend that minimal and intermediate models are generally valuable tools to assess the response of key states to main stressors and, hence, contribute to understanding ecological surprises. As has been shown in freshwater resources management, insight into these conceptual relations profoundly influences how natural resource managers perceive their systems and how they manage ecosystem recovery. We argue that adaptive resource management requires integrated thinking and decision support, which demands a diversity of modeling approaches. Integration can be achieved through complimentary use of models or through integrated models that systemically combine all relevant aspects in one model. Such whole-of-system models can be useful tools for quantitatively evaluating scenarios. These models allow an assessment of the interactive effects of multiple stressors on various, potentially conflicting, management objectives. All models simplify reality and, as such, have their weaknesses. While minimal models lack multidimensionality, system models are likely difficult to interpret as they require many efforts to decipher the numerous interactions and feedback loops. Given the breadth of questions to be tackled when dealing with coral reefs, the best practice approach uses multiple model types and thus benefits from the strength of different models types.
Toward a preoperative planning tool for brain tumor resection therapies.
Coffey, Aaron M; Miga, Michael I; Chen, Ishita; Thompson, Reid C
2013-01-01
Neurosurgical procedures involving tumor resection require surgical planning such that the surgical path to the tumor is determined to minimize the impact on healthy tissue and brain function. This work demonstrates a predictive tool to aid neurosurgeons in planning tumor resection therapies by finding an optimal model-selected patient orientation that minimizes lateral brain shift in the field of view. Such orientations may facilitate tumor access and removal, possibly reduce the need for retraction, and could minimize the impact of brain shift on image-guided procedures. In this study, preoperative magnetic resonance images were utilized in conjunction with pre- and post-resection laser range scans of the craniotomy and cortical surface to produce patient-specific finite element models of intraoperative shift for 6 cases. These cases were used to calibrate a model (i.e., provide general rules for the application of patient positioning parameters) as well as determine the current model-based framework predictive capabilities. Finally, an objective function is proposed that minimizes shift subject to patient position parameters. Patient positioning parameters were then optimized and compared to our neurosurgeon as a preliminary study. The proposed model-driven brain shift minimization objective function suggests an overall reduction of brain shift by 23 % over experiential methods. This work recasts surgical simulation from a trial-and-error process to one where options are presented to the surgeon arising from an optimization of surgical goals. To our knowledge, this is the first realization of an evaluative tool for surgical planning that attempts to optimize surgical approach by means of shift minimization in this manner.
Rider, Lisa G.; Aggarwal, Rohit; Pistorio, Angela; Bayat, Nastaran; Erman, Brian; Feldman, Brian M.; Huber, Adam M.; Cimaz, Rolando; Cuttica, Rubén J.; de Oliveira, Sheila Knupp; Lindsley, Carol B.; Pilkington, Clarissa A.; Punaro, Marilyn; Ravelli, Angelo; Reed, Ann M.; Rouster-Stevens, Kelly; van Royen, Annet; Dressler, Frank; Magalhaes, Claudia Saad; Constantin, Tamás; Davidson, Joyce E.; Magnusson, Bo; Russo, Ricardo; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A.; Miller, Frederick W.; Vencovsky, Jiri; Ruperto, Nicolino
2017-01-01
Objective Develop response criteria for juvenile dermatomyositis (JDM). Methods We analyzed the performance of 312 definitions that used core set measures (CSM) from either the International Myositis Assessment and Clinical Studies Group (IMACS) or the Pediatric Rheumatology International Trials Organization (PRINTO) and were derived from natural history data and a conjoint-analysis survey. They were further validated in the PRINTO trial of prednisone alone compared to prednisone with methotrexate or cyclosporine and the Rituximab in Myositis trial. Experts considered 14 top-performing candidate criteria based on their performance characteristics and clinical face validity using nominal group technique at a consensus conference. Results Consensus was reached for a conjoint analysis–based continuous model with a Total Improvement Score of 0-100, using absolute percent change in CSM with thresholds for minimal (≥30 points), moderate (≥45), and major improvement (≥70). The same criteria were chosen for adult dermatomyositis/polymyositis with differing thresholds for improvement. The sensitivity and specificity were 89% and 91-98% for minimal, 92-94% and 94-99% for moderate, and 91-98% and 85-85% for major improvement, respectively, in JDM patient cohorts using the IMACS and PRINTO CSM. These criteria were validated in the PRINTO trial for differentiating between treatment arms for minimal and moderate improvement (P=0.009–0.057) and in the Rituximab trial for significantly differentiating the physician rating of improvement (P<0.006). Conclusion The response criteria for JDM was a conjoint analysis–based model using a continuous improvement score based on absolute percent change in CSM, with thresholds for minimal, moderate, and major improvement. PMID:28382787
Rider, Lisa G.; Aggarwal, Rohit; Pistorio, Angela; Bayat, Nastaran; Erman, Brian; Feldman, Brian M.; Huber, Adam M.; Cimaz, Rolando; Cuttica, Rubén J.; de Oliveira, Sheila Knupp; Lindsley, Carol B.; Pilkington, Clarissa A.; Punaro, Marilyn; Ravelli, Angelo; Reed, Ann M.; Rouster-Stevens, Kelly; van Royen, Annet; Dressler, Frank; Magalhaes, Claudia Saad; Constantin, Tamás; Davidson, Joyce E.; Magnusson, Bo; Russo, Ricardo; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A.; Miller, Frederick W.; Vencovsky, Jiri; Ruperto, Nicolino
2017-01-01
Objective Develop response criteria for juvenile dermatomyositis (JDM). Methods We analyzed the performance of 312 definitions that used core set measures (CSM) from either the International Myositis Assessment and Clinical Studies Group (IMACS) or the Pediatric Rheumatology International Trials Organization (PRINTO) and were derived from natural history data and a conjoint-analysis survey. They were further validated in the PRINTO trial of prednisone alone compared to prednisone with methotrexate or cyclosporine and the Rituximab in Myositis trial. Experts considered 14 top-performing candidate criteria based on their performance characteristics and clinical face validity using nominal group technique at a consensus conference. Results Consensus was reached for a conjoint analysis–based continuous model with a Total Improvement Score of 0-100, using absolute percent change in CSM with thresholds for minimal (≥30 points), moderate (≥45), and major improvement (≥70). The same criteria were chosen for adult dermatomyositis/polymyositis with differing thresholds for improvement. The sensitivity and specificity were 89% and 91-98% for minimal, 92-94% and 94-99% for moderate, and 91-98% and 85-85% for major improvement, respectively, in JDM patient cohorts using the IMACS and PRINTO CSM. These criteria were validated in the PRINTO trial for differentiating between treatment arms for minimal and moderate improvement (P=0.009–0.057) and in the Rituximab trial for significantly differentiating the physician rating of improvement (P<0.006). Conclusion The response criteria for JDM was a conjoint analysis–based model using a continuous improvement score based on absolute percent change in CSM, with thresholds for minimal, moderate, and major improvement. PMID:28382778
Nucleic acid duplexes incorporating a dissociable covalent base pair
NASA Technical Reports Server (NTRS)
Gao, K.; Orgel, L. E.; Bada, J. L. (Principal Investigator)
1999-01-01
We have used molecular modeling techniques to design a dissociable covalently bonded base pair that can replace a Watson-Crick base pair in a nucleic acid with minimal distortion of the structure of the double helix. We introduced this base pair into a potential precursor of a nucleic acid double helix by chemical synthesis and have demonstrated efficient nonenzymatic template-directed ligation of the free hydroxyl groups of the base pair with appropriate short oligonucleotides. The nonenzymatic ligation reactions, which are characteristic of base paired nucleic acid structures, are abolished when the covalent base pair is reduced and becomes noncoplanar. This suggests that the covalent base pair linking the two strands in the duplex is compatible with a minimally distorted nucleic acid double-helical structure.
NASA Astrophysics Data System (ADS)
Barberis, Lucas; Peruani, Fernando
2016-12-01
We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit—due to the VC that breaks Newton's third law—various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving—locally polar—files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
Barberis, Lucas; Peruani, Fernando
2016-12-09
We study a minimal cognitive flocking model, which assumes that the moving entities navigate using the available instantaneous visual information exclusively. The model consists of active particles, with no memory, that interact by a short-ranged, position-based, attractive force, which acts inside a vision cone (VC), and lack velocity-velocity alignment. We show that this active system can exhibit-due to the VC that breaks Newton's third law-various complex, large-scale, self-organized patterns. Depending on parameter values, we observe the emergence of aggregates or millinglike patterns, the formation of moving-locally polar-files with particles at the front of these structures acting as effective leaders, and the self-organization of particles into macroscopic nematic structures leading to long-ranged nematic order. Combining simulations and nonlinear field equations, we show that position-based active models, as the one analyzed here, represent a new class of active systems fundamentally different from other active systems, including velocity-alignment-based flocking systems. The reported results are of prime importance in the study, interpretation, and modeling of collective motion patterns in living and nonliving active systems.
NASA Astrophysics Data System (ADS)
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the·crust and upper-mantle seismic velocity structure across Eurasia.· Data for the inversion comprise a large set of P and S body-wave travel times·and fundamental and first-higher mode Rayleigh-wave group velocities.
Design Specifications for the Advanced Instructional Design Advisor (AIDA). Volume 1
1992-01-01
research; (3) Describe the knowledge base sufficient to support the varieties of knowledge to be represented in the AIDA model ; (4) Document the...feasibility of continuing the development of the AIDA model . 2.3 Background In Phase I of the AIDA project (Task 0006), (1) the AIDA concept was defined...the AIDA Model A paper-based demonstration of the AIDA instructional design model was performed by using the model to develop a minimal application
A model-based 'varimax' sampling strategy for a heterogeneous population.
Akram, Nuzhat A; Farooqi, Shakeel R
2014-01-01
Sampling strategies are planned to enhance the homogeneity of a sample, hence to minimize confounding errors. A sampling strategy was developed to minimize the variation within population groups. Karachi, the largest urban agglomeration in Pakistan, was used as a model population. Blood groups ABO and Rh factor were determined for 3000 unrelated individuals selected through simple random sampling. Among them five population groups, namely Balochi, Muhajir, Pathan, Punjabi and Sindhi, based on paternal ethnicity were identified. An index was designed to measure the proportion of admixture at parental and grandparental levels. Population models based on index score were proposed. For validation, 175 individuals selected through stratified random sampling were genotyped for the three STR loci CSF1PO, TPOX and TH01. ANOVA showed significant differences across the population groups for blood groups and STR loci distribution. Gene diversity was higher across the sub-population model than in the agglomerated population. At parental level gene diversities are significantly higher across No admixture models than Admixture models. At grandparental level the difference was not significant. A sub-population model with no admixture at parental level was justified for sampling the heterogeneous population of Karachi.
Gazijahani, Farhad Samadi; Ravadanegh, Sajad Najafi; Salehi, Javad
2018-02-01
The inherent volatility and unpredictable nature of renewable generations and load demand pose considerable challenges for energy exchange optimization of microgrids (MG). To address these challenges, this paper proposes a new risk-based multi-objective energy exchange optimization for networked MGs from economic and reliability standpoints under load consumption and renewable power generation uncertainties. In so doing, three various risk-based strategies are distinguished by using conditional value at risk (CVaR) approach. The proposed model is specified as a two-distinct objective function. The first function minimizes the operation and maintenance costs, cost of power transaction between upstream network and MGs as well as power loss cost, whereas the second function minimizes the energy not supplied (ENS) value. Furthermore, the stochastic scenario-based approach is incorporated into the approach in order to handle the uncertainty. Also, Kantorovich distance scenario reduction method has been implemented to reduce the computational burden. Finally, non-dominated sorting genetic algorithm (NSGAII) is applied to minimize the objective functions simultaneously and the best solution is extracted by fuzzy satisfying method with respect to risk-based strategies. To indicate the performance of the proposed model, it is performed on the modified IEEE 33-bus distribution system and the obtained results show that the presented approach can be considered as an efficient tool for optimal energy exchange optimization of MGs. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Energy minimization for self-organized structure formation and actuation
NASA Astrophysics Data System (ADS)
Kofod, Guggi; Wirges, Werner; Paajanen, Mika; Bauer, Siegfried
2007-02-01
An approach for creating complex structures with embedded actuation in planar manufacturing steps is presented. Self-organization and energy minimization are central to this approach, illustrated with a model based on minimization of the hyperelastic free energy strain function of a stretched elastomer and the bending elastic energy of a plastic frame. A tulip-shaped gripper structure illustrates the technological potential of the approach. Advantages are simplicity of manufacture, complexity of final structures, and the ease with which any electroactive material can be exploited as means of actuation.
Xu, Yu; Wang, Hong; Nussinov, Ruth; Ma, Buyong
2013-01-01
We constructed and simulated a ‘minimal proteome’ model using Langevin dynamics. It contains 206 essential protein types which were compiled from the literature. For comparison, we generated six proteomes with randomized concentrations. We found that the net charges and molecular weights of the proteins in the minimal genome are not random. The net charge of a protein decreases linearly with molecular weight, with small proteins being mostly positively charged and large proteins negatively charged. The protein copy numbers in the minimal genome have the tendency to maximize the number of protein-protein interactions in the network. Negatively charged proteins which tend to have larger sizes can provide large collision cross-section allowing them to interact with other proteins; on the other hand, the smaller positively charged proteins could have higher diffusion speed and are more likely to collide with other proteins. Proteomes with random charge/mass populations form less stable clusters than those with experimental protein copy numbers. Our study suggests that ‘proper’ populations of negatively and positively charged proteins are important for maintaining a protein-protein interaction network in a proteome. It is interesting to note that the minimal genome model based on the charge and mass of E. Coli may have a larger protein-protein interaction network than that based on the lower organism M. pneumoniae. PMID:23420643
NASA Astrophysics Data System (ADS)
Lee, Jeong-Eun; Gen, Mitsuo; Rhee, Kyong-Gu; Lee, Hee-Hyol
This paper deals with the building of the reusable reverse logistics model considering the decision of the backorder or the next arrival of goods. The optimization method to minimize the transportation cost and to minimize the volume of the backorder or the next arrival of goods occurred by the Just in Time delivery of the final delivery stage between the manufacturer and the processing center is proposed. Through the optimization algorithms using the priority-based genetic algorithm and the hybrid genetic algorithm, the sub-optimal delivery routes are determined. Based on the case study of a distilling and sale company in Busan in Korea, the new model of the reusable reverse logistics of empty bottles is built and the effectiveness of the proposed method is verified.
Application of Harmony Search algorithm to the solution of groundwater management models
NASA Astrophysics Data System (ADS)
Tamer Ayvaz, M.
2009-06-01
This study proposes a groundwater resources management model in which the solution is performed through a combined simulation-optimization model. A modular three-dimensional finite difference groundwater flow model, MODFLOW is used as the simulation model. This model is then combined with a Harmony Search (HS) optimization algorithm which is based on the musical process of searching for a perfect state of harmony. The performance of the proposed HS based management model is tested on three separate groundwater management problems: (i) maximization of total pumping from an aquifer (steady-state); (ii) minimization of the total pumping cost to satisfy the given demand (steady-state); and (iii) minimization of the pumping cost to satisfy the given demand for multiple management periods (transient). The sensitivity of HS algorithm is evaluated by performing a sensitivity analysis which aims to determine the impact of related solution parameters on convergence behavior. The results show that HS yields nearly same or better solutions than the previous solution methods and may be used to solve management problems in groundwater modeling.
Inference regarding multiple structural changes in linear models with endogenous regressors☆
Hall, Alastair R.; Han, Sanggohn; Boldea, Otilia
2012-01-01
This paper considers the linear model with endogenous regressors and multiple changes in the parameters at unknown times. It is shown that minimization of a Generalized Method of Moments criterion yields inconsistent estimators of the break fractions, but minimization of the Two Stage Least Squares (2SLS) criterion yields consistent estimators of these parameters. We develop a methodology for estimation and inference of the parameters of the model based on 2SLS. The analysis covers the cases where the reduced form is either stable or unstable. The methodology is illustrated via an application to the New Keynesian Phillips Curve for the US. PMID:23805021
Optimal blood glucose level control using dynamic programming based on minimal Bergman model
NASA Astrophysics Data System (ADS)
Rettian Anggita Sari, Maria; Hartono
2018-03-01
The purpose of this article is to simulate the glucose dynamic and the insulin kinetic of diabetic patient. The model used in this research is a non-linear Minimal Bergman model. Optimal control theory is then applied to formulate the problem in order to determine the optimal dose of insulin in the treatment of diabetes mellitus such that the glucose level is in the normal range for some specific time range. The optimization problem is solved using dynamic programming. The result shows that dynamic programming is quite reliable to represent the interaction between glucose and insulin levels in diabetes mellitus patient.
NASA Astrophysics Data System (ADS)
Mahalakshmi; Murugesan, R.
2018-04-01
This paper regards with the minimization of total cost of Greenhouse Gas (GHG) efficiency in Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on tax cost, penalty cost and discount cost of GHG emission of AS/RS. A two stage algorithm namely positive selection based clonal selection principle (PSBCSP) is used to find the optimal solution of the constructed model. In the first stage positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the later stage clonal selection principle is used to generate best solutions. The obtained results are compared with other existing algorithms in the literature, which shows that the proposed algorithm yields a better result compared to others.
A minimal model for kinetochore-microtubule dynamics
NASA Astrophysics Data System (ADS)
Liu, Andrea
2014-03-01
During mitosis, chromosome pairs align at the center of a bipolar microtubule (MT) spindle and oscillate as MTs attaching them to the cell poles polymerize and depolymerize. The cell fixes misaligned pairs by a tension-sensing mechanism. Pairs later separate as shrinking MTs pull each chromosome toward its respective cell pole. We present a minimal model for these processes based on properties of MT kinetics. We apply the measured tension-dependence of single MT kinetics to a stochastic many MT model, which we solve numerically and with master equations. We find that the force-velocity curve for the single chromosome system is bistable and hysteretic. Above some threshold load, tension fluctuations induce MTs to spontaneously switch from a pulling state into a growing, pushing state. To recover pulling from the pushing state, the load must be reduced far below the threshold. This leads to oscillations in the two-chromosome system. Our minimal model quantitatively captures several aspects of kinetochore dynamics observed experimentally. This work was supported by NSF-DMR-1104637.
Multiple ionization of neon by soft x-rays at ultrahigh intensity
NASA Astrophysics Data System (ADS)
Guichard, R.; Richter, M.; Rost, J.-M.; Saalmann, U.; Sorokin, A. A.; Tiedtke, K.
2013-08-01
At the free-electron laser FLASH, multiple ionization of neon atoms was quantitatively investigated at photon energies of 93.0 and 90.5 eV. For ion charge states up to 6+, we compare the respective absolute photoionization yields with results from a minimal model and an elaborate description including standard sequential and direct photoionization channels. Both approaches are based on rate equations and take into account a Gaussian spatial intensity distribution of the laser beam. From the comparison we conclude that photoionization up to a charge of 5+ can be described by the minimal model which we interpret as sequential photoionization assisted by electron shake-up processes. For higher charges, the experimental ionization yields systematically exceed the elaborate rate-based prediction.
Chitale, Rohan; Ghobrial, George M; Lobel, Darlene; Harrop, James
2013-10-01
The learning and development of technical skills are paramount for neurosurgical trainees. External influences and a need for maximizing efficiency and proficiency have encouraged advancements in simulator-based learning models. To confirm the importance of establishing an educational curriculum for teaching minimally invasive techniques of pedicle screw placement using a computer-enhanced physical model of percutaneous pedicle screw placement with simultaneous didactic and technical components. A 2-hour educational curriculum was created to educate neurosurgical residents on anatomy, pathophysiology, and technical aspects associated with image-guided pedicle screw placement. Predidactic and postdidactic practical and written scores were analyzed and compared. Scores were calculated for each participant on the basis of the optimal pedicle screw starting point and trajectory for both fluoroscopy and computed tomographic navigation. Eight trainees participated in this module. Average mean scores on the written didactic test improved from 78% to 100%. The technical component scores for fluoroscopic guidance improved from 58.8 to 52.9. Technical score for computed tomography-navigated guidance also improved from 28.3 to 26.6. Didactic and technical quantitative scores with a simulator-based educational curriculum improved objectively measured resident performance. A minimally invasive spine simulation model and curriculum may serve a valuable function in the education of neurosurgical residents and outcomes for patients.
Minimal agent based model for financial markets I. Origin and self-organization of stylized facts
NASA Astrophysics Data System (ADS)
Alfi, V.; Cristelli, M.; Pietronero, L.; Zaccaria, A.
2009-02-01
We introduce a minimal agent based model for financial markets to understand the nature and self-organization of the stylized facts. The model is minimal in the sense that we try to identify the essential ingredients to reproduce the most important deviations of price time series from a random walk behavior. We focus on four essential ingredients: fundamentalist agents which tend to stabilize the market; chartist agents which induce destabilization; analysis of price behavior for the two strategies; herding behavior which governs the possibility of changing strategy. Bubbles and crashes correspond to situations dominated by chartists, while fundamentalists provide a long time stability (on average). The stylized facts are shown to correspond to an intermittent behavior which occurs only for a finite value of the number of agents N. Therefore they correspond to finite size effects which, however, can occur at different time scales. We propose a new mechanism for the self-organization of this state which is linked to the existence of a threshold for the agents to be active or not active. The feedback between price fluctuations and number of active agents represents a crucial element for this state of self-organized intermittency. The model can be easily generalized to consider more realistic variants.
Brown, Melissa M; Brown, Gary C; Brown, Heidi C; Peet, Jonathan
2008-06-01
To assess the conferred value and average cost-utility (cost-effectiveness) for intravitreal ranibizumab used to treat occult/minimally classic subfoveal choroidal neovascularization associated with age-related macular degeneration (AMD). Value-based medicine cost-utility analysis. MARINA (Minimally Classic/Occult Trial of the Anti-Vascular Endothelial Growth Factor Antibody Ranibizumab in the Treatment of Neovascular AMD) Study patients utilizing published primary data. Reference case, third-party insurer perspective, cost-utility analysis using 2006 United States dollars. Conferred value in the forms of (1) quality-adjusted life-years (QALYs) and (2) percent improvement in health-related quality of life. Cost-utility is expressed in terms of dollars expended per QALY gained. All outcomes are discounted at a 3% annual rate, as recommended by the Panel on Cost-effectiveness in Health and Medicine. Data are presented for the second-eye model, first-eye model, and combined model. Twenty-two intravitreal injections of 0.5 mg of ranibizumab administered over a 2-year period confer 1.039 QALYs, or a 15.8% improvement in quality of life for the 12-year period of the second-eye model reference case of occult/minimally classic age-related subfoveal choroidal neovascularization. The reference case treatment cost is $52652, and the cost-utility for the second-eye model is $50691/QALY. The quality-of-life gain from the first-eye model is 6.4% and the cost-utility is $123887, whereas the most clinically simulating combined model yields a quality-of-life gain of 10.4% and cost-utility of $74169. By conventional standards and the most commonly used second-eye and combined models, intravitreal ranibizumab administered for occult/minimally classic subfoveal choroidal neovascularization is a cost-effective therapy. Ranibizumab treatment confers considerably greater value than other neovascular macular degeneration pharmaceutical therapies that have been studied in randomized clinical trials.
Nucleic acid duplexes incorporating a dissociable covalent base pair
Gao, Kui; Orgel, Leslie E.
1999-01-01
We have used molecular modeling techniques to design a dissociable covalently bonded base pair that can replace a Watson-Crick base pair in a nucleic acid with minimal distortion of the structure of the double helix. We introduced this base pair into a potential precursor of a nucleic acid double helix by chemical synthesis and have demonstrated efficient nonenzymatic template-directed ligation of the free hydroxyl groups of the base pair with appropriate short oligonucleotides. The nonenzymatic ligation reactions, which are characteristic of base paired nucleic acid structures, are abolished when the covalent base pair is reduced and becomes noncoplanar. This suggests that the covalent base pair linking the two strands in the duplex is compatible with a minimally distorted nucleic acid double-helical structure. PMID:10611299
Graham, Christopher N; Hechmati, Guy; Fakih, Marwan G; Knox, Hediyyih N; Maglinte, Gregory A; Hjelmgren, Jonas; Barber, Beth; Schwartzberg, Lee S
2015-01-01
To compare the costs of first-line treatment with panitumumab + FOLFOX in comparison to cetuximab + FOLFIRI among patients with wild-type (WT) RAS metastatic colorectal cancer (mCRC) in the US. A cost-minimization model was developed assuming similar treatment efficacy between both regimens. The model estimated the costs associated with drug acquisition, treatment administration frequency (every 2 weeks for panitumumab, weekly for cetuximab), and incidence of infusion reactions. Average anti-EGFR doses were calculated from the ASPECCT clinical trial, and average doses of chemotherapy regimens were based on product labels. Using the medical component of the consumer price index, adverse event costs were inflated to 2014 US dollars, and all other costs were reported in 2014 US dollars. The time horizon for the model was based on average first-line progression-free survival of a WT RAS patient, estimated from parametric survival analyses of PRIME clinical trial data. Relative to cetuximab + FOLFIRI in the first-line treatment of WT RAS mCRC, the cost-minimization model demonstrated lower projected drug acquisition, administration, and adverse event costs for patients who received panitumumab + FOLFOX. The overall cost per patient for first-line treatment was $179,219 for panitumumab + FOLFOX vs $202,344 for cetuximab + FOLFIRI, resulting in a per-patient saving of $23,125 (11.4%) in favor of panitumumab + FOLFOX. From a value perspective, the cost-minimization model supports panitumumab + FOLFOX instead of cetuximab + FOLFIRI as the preferred first-line treatment of WT RAS mCRC patients requiring systemic therapy.
A Minimal Model Describing Hexapedal Interlimb Coordination: The Tegotae-Based Approach
Owaki, Dai; Goda, Masashi; Miyazawa, Sakiko; Ishiguro, Akio
2017-01-01
Insects exhibit adaptive and versatile locomotion despite their minimal neural computing. Such locomotor patterns are generated via coordination between leg movements, i.e., an interlimb coordination, which is largely controlled in a distributed manner by neural circuits located in thoracic ganglia. However, the mechanism responsible for the interlimb coordination still remains elusive. Understanding this mechanism will help us to elucidate the fundamental control principle of animals' agile locomotion and to realize robots with legs that are truly adaptive and could not be developed solely by conventional control theories. This study aims at providing a “minimal" model of the interlimb coordination mechanism underlying hexapedal locomotion, in the hope that a single control principle could satisfactorily reproduce various aspects of insect locomotion. To this end, we introduce a novel concept we named “Tegotae,” a Japanese concept describing the extent to which a perceived reaction matches an expectation. By using the Tegotae-based approach, we show that a surprisingly systematic design of local sensory feedback mechanisms essential for the interlimb coordination can be realized. We also use a hexapod robot we developed to show that our mathematical model of the interlimb coordination mechanism satisfactorily reproduces various insects' gait patterns. PMID:28649197
NASA Astrophysics Data System (ADS)
Noguchi, Yuki; Yamamoto, Takashi; Yamada, Takayuki; Izui, Kazuhiro; Nishiwaki, Shinji
2017-09-01
This papers proposes a level set-based topology optimization method for the simultaneous design of acoustic and structural material distributions. In this study, we develop a two-phase material model that is a mixture of an elastic material and acoustic medium, to represent an elastic structure and an acoustic cavity by controlling a volume fraction parameter. In the proposed model, boundary conditions at the two-phase material boundaries are satisfied naturally, avoiding the need to express these boundaries explicitly. We formulate a topology optimization problem to minimize the sound pressure level using this two-phase material model and a level set-based method that obtains topologies free from grayscales. The topological derivative of the objective functional is approximately derived using a variational approach and the adjoint variable method and is utilized to update the level set function via a time evolutionary reaction-diffusion equation. Several numerical examples present optimal acoustic and structural topologies that minimize the sound pressure generated from a vibrating elastic structure.
NASA Astrophysics Data System (ADS)
Quiros, Israel; Gonzalez, Tame; Nucamendi, Ulises; García-Salcedo, Ricardo; Horta-Rangel, Francisco Antonio; Saavedra, Joel
2018-04-01
In this paper we investigate the so-called ‘phantom barrier crossing’ issue in a cosmological model based on the scalar–tensor theory with non-minimal derivative coupling to the Einstein tensor. Special attention will be paid to the physical bounds on the squared sound speed. The numeric results are geometrically illustrated by means of a qualitative procedure of analysis that is based on the mapping of the orbits in the phase plane onto the surfaces that represent physical quantities in the extended phase space, that is: the phase plane complemented with an additional dimension relative to the given physical parameter. We find that the cosmological model based on the non-minimal derivative coupling theory—this includes both the quintessence and the pure derivative coupling cases—has serious causality problems related to superluminal propagation of the scalar and tensor perturbations. Even more disturbing is the finding that, despite the fact that the underlying theory is free of the Ostrogradsky instability, the corresponding cosmological model is plagued by the Laplacian (classical) instability related with negative squared sound speed. This instability leads to an uncontrollable growth of the energy density of the perturbations that is inversely proportional to their wavelength. We show that, independent of the self-interaction potential, for positive coupling the tensor perturbations propagate superluminally, while for negative coupling a Laplacian instability arises. This latter instability invalidates the possibility for the model to describe the primordial inflation.
NASA Technical Reports Server (NTRS)
Teren, F.
1977-01-01
Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.
Geometric modeling of space-optimal unit-cell-based tissue engineering scaffolds
NASA Astrophysics Data System (ADS)
Rajagopalan, Srinivasan; Lu, Lichun; Yaszemski, Michael J.; Robb, Richard A.
2005-04-01
Tissue engineering involves regenerating damaged or malfunctioning organs using cells, biomolecules, and synthetic or natural scaffolds. Based on their intended roles, scaffolds can be injected as space-fillers or be preformed and implanted to provide mechanical support. Preformed scaffolds are biomimetic "trellis-like" structures which, on implantation and integration, act as tissue/organ surrogates. Customized, computer controlled, and reproducible preformed scaffolds can be fabricated using Computer Aided Design (CAD) techniques and rapid prototyping devices. A curved, monolithic construct with minimal surface area constitutes an efficient substrate geometry that promotes cell attachment, migration and proliferation. However, current CAD approaches do not provide such a biomorphic construct. We address this critical issue by presenting one of the very first physical realizations of minimal surfaces towards the construction of efficient unit-cell based tissue engineering scaffolds. Mask programmability, and optimal packing density of triply periodic minimal surfaces are used to construct the optimal pore geometry. Budgeted polygonization, and progressive minimal surface refinement facilitate the machinability of these surfaces. The efficient stress distributions, as deduced from the Finite Element simulations, favor the use of these scaffolds for orthopedic applications.
Wei Liao; Rohr, Karl; Chang-Ki Kang; Zang-Hee Cho; Worz, Stefan
2016-01-01
We propose a novel hybrid approach for automatic 3D segmentation and quantification of high-resolution 7 Tesla magnetic resonance angiography (MRA) images of the human cerebral vasculature. Our approach consists of two main steps. First, a 3D model-based approach is used to segment and quantify thick vessels and most parts of thin vessels. Second, remaining vessel gaps of the first step in low-contrast and noisy regions are completed using a 3D minimal path approach, which exploits directional information. We present two novel minimal path approaches. The first is an explicit approach based on energy minimization using probabilistic sampling, and the second is an implicit approach based on fast marching with anisotropic directional prior. We conducted an extensive evaluation with over 2300 3D synthetic images and 40 real 3D 7 Tesla MRA images. Quantitative and qualitative evaluation shows that our approach achieves superior results compared with a previous minimal path approach. Furthermore, our approach was successfully used in two clinical studies on stroke and vascular dementia.
Inherent Structure versus Geometric Metric for State Space Discretization
Liu, Hanzhong; Li, Minghai; Fan, Jue; Huo, Shuanghong
2016-01-01
Inherent structure (IS) and geometry-based clustering methods are commonly used for analyzing molecular dynamics trajectories. ISs are obtained by minimizing the sampled conformations into local minima on potential/effective energy surface. The conformations that are minimized into the same energy basin belong to one cluster. We investigate the influence of the applications of these two methods of trajectory decomposition on our understanding of the thermodynamics and kinetics of alanine tetrapeptide. We find that at the micro cluster level, the IS approach and root-mean-square deviation (RMSD) based clustering method give totally different results. Depending on the local features of energy landscape, the conformations with close RMSDs can be minimized into different minima, while the conformations with large RMSDs could be minimized into the same basin. However, the relaxation timescales calculated based on the transition matrices built from the micro clusters are similar. The discrepancy at the micro cluster level leads to different macro clusters. Although the dynamic models established through both clustering methods are validated approximately Markovian, the IS approach seems to give a meaningful state space discretization at the macro cluster level. PMID:26915811
Zhang, Jun; Gu, Zhenghui; Yu, Zhu Liang; Li, Yuanqing
2015-03-01
Low energy consumption is crucial for body area networks (BANs). In BAN-enabled ECG monitoring, the continuous monitoring entails the need of the sensor nodes to transmit a huge data to the sink node, which leads to excessive energy consumption. To reduce airtime over energy-hungry wireless links, this paper presents an energy-efficient compressed sensing (CS)-based approach for on-node ECG compression. At first, an algorithm called minimal mutual coherence pursuit is proposed to construct sparse binary measurement matrices, which can be used to encode the ECG signals with superior performance and extremely low complexity. Second, in order to minimize the data rate required for faithful reconstruction, a weighted ℓ1 minimization model is derived by exploring the multisource prior knowledge in wavelet domain. Experimental results on MIT-BIH arrhythmia database reveals that the proposed approach can obtain higher compression ratio than the state-of-the-art CS-based methods. Together with its low encoding complexity, our approach can achieve significant energy saving in both encoding process and wireless transmission.
Automatic classification of minimally invasive instruments based on endoscopic image sequences
NASA Astrophysics Data System (ADS)
Speidel, Stefanie; Benzko, Julia; Krappe, Sebastian; Sudra, Gunther; Azad, Pedram; Müller-Stich, Beat Peter; Gutt, Carsten; Dillmann, Rüdiger
2009-02-01
Minimally invasive surgery is nowadays a frequently applied technique and can be regarded as a major breakthrough in surgery. The surgeon has to adopt special operation-techniques and deal with difficulties like the complex hand-eye coordination and restricted mobility. To alleviate these constraints we propose to enhance the surgeon's capabilities by providing a context-aware assistance using augmented reality techniques. To analyze the current situation for context-aware assistance, we need intraoperatively gained sensor data and a model of the intervention. A situation consists of information about the performed activity, the used instruments, the surgical objects, the anatomical structures and defines the state of an intervention for a given moment in time. The endoscopic images provide a rich source of information which can be used for an image-based analysis. Different visual cues are observed in order to perform an image-based analysis with the objective to gain as much information as possible about the current situation. An important visual cue is the automatic recognition of the instruments which appear in the scene. In this paper we present the classification of minimally invasive instruments using the endoscopic images. The instruments are not modified by markers. The system segments the instruments in the current image and recognizes the instrument type based on three-dimensional instrument models.
Chronic motivational state interacts with task reward structure in dynamic decision-making.
Cooper, Jessica A; Worthy, Darrell A; Maddox, W Todd
2015-12-01
Research distinguishes between a habitual, model-free system motivated toward immediately rewarding actions, and a goal-directed, model-based system motivated toward actions that improve future state. We examined the balance of processing in these two systems during state-based decision-making. We tested a regulatory fit hypothesis (Maddox & Markman, 2010) that predicts that global trait motivation affects the balance of habitual- vs. goal-directed processing but only through its interaction with the task framing as gain-maximization or loss-minimization. We found support for the hypothesis that a match between an individual's chronic motivational state and the task framing enhances goal-directed processing, and thus state-based decision-making. Specifically, chronic promotion-focused individuals under gain-maximization and chronic prevention-focused individuals under loss-minimization both showed enhanced state-based decision-making. Computational modeling indicates that individuals in a match between global chronic motivational state and local task reward structure engaged more goal-directed processing, whereas those in a mismatch engaged more habitual processing. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Arya, L. D.; Koshti, Atul
2018-05-01
This paper investigates the Distributed Generation (DG) capacity optimization at location based on the incremental voltage sensitivity criteria for sub-transmission network. The Modified Shuffled Frog Leaping optimization Algorithm (MSFLA) has been used to optimize the DG capacity. Induction generator model of DG (wind based generating units) has been considered for study. Standard test system IEEE-30 bus has been considered for the above study. The obtained results are also validated by shuffled frog leaping algorithm and modified version of bare bones particle swarm optimization (BBExp). The performance of MSFLA has been found more efficient than the other two algorithms for real power loss minimization problem.
Implications of random variation in the Stand Prognosis Model
David A. Hamilton
1991-01-01
Although the Stand Prognosis Model has several stochastic components, features have been included in the model in an attempt to minimize run-to-run variation attributable to these stochastic components. This has led many users to assume that comparisons of management alternatives could be made based on a single run of the model for each alternative. Recent analyses...
A Conceptual View of the Officer Procurement Model (TOPOPS). Technical Report No. 73-73.
ERIC Educational Resources Information Center
Akman, Allan; Nordhauser, Fred
This report presents the conceptual design of a computer-based linear programing model of the Air Force officer procurement system called TOPOPS. The TOPOPS model is an aggregate model which simulates officer accession and training and is directed at optimizing officer procurement in terms of either minimizing cost or maximizing accession quality…
Mathematical model for dynamic cell formation in fast fashion apparel manufacturing stage
NASA Astrophysics Data System (ADS)
Perera, Gayathri; Ratnayake, Vijitha
2018-05-01
This paper presents a mathematical programming model for dynamic cell formation to minimize changeover-related costs (i.e., machine relocation costs and machine setup cost) and inter-cell material handling cost to cope with the volatile production environments in apparel manufacturing industry. The model is formulated through findings of a comprehensive literature review. Developed model is validated based on data collected from three different factories in apparel industry, manufacturing fast fashion products. A program code is developed using Lingo 16.0 software package to generate optimal cells for developed model and to determine the possible cost-saving percentage when the existing layouts used in three factories are replaced by generated optimal cells. The optimal cells generated by developed mathematical model result in significant cost saving when compared with existing product layouts used in production/assembly department of selected factories in apparel industry. The developed model can be considered as effective in minimizing the considered cost terms in dynamic production environment of fast fashion apparel manufacturing industry. Findings of this paper can be used for further researches on minimizing the changeover-related costs in fast fashion apparel production stage.
A model-based reasoning approach to sensor placement for monitorability
NASA Technical Reports Server (NTRS)
Chien, Steve; Doyle, Richard; Homemdemello, Luiz
1992-01-01
An approach is presented to evaluating sensor placements to maximize monitorability of the target system while minimizing the number of sensors. The approach uses a model of the monitored system to score potential sensor placements on the basis of four monitorability criteria. The scores can then be analyzed to produce a recommended sensor set. An example from our NASA application domain is used to illustrate our model-based approach to sensor placement.
NASA Astrophysics Data System (ADS)
Osman, Ayat E.
Energy use in commercial buildings constitutes a major proportion of the energy consumption and anthropogenic emissions in the USA. Cogeneration systems offer an opportunity to meet a building's electrical and thermal demands from a single energy source. To answer the question of what is the most beneficial and cost effective energy source(s) that can be used to meet the energy demands of the building, optimizations techniques have been implemented in some studies to find the optimum energy system based on reducing cost and maximizing revenues. Due to the significant environmental impacts that can result from meeting the energy demands in buildings, building design should incorporate environmental criteria in the decision making criteria. The objective of this research is to develop a framework and model to optimize a building's operation by integrating congregation systems and utility systems in order to meet the electrical, heating, and cooling demand by considering the potential life cycle environmental impact that might result from meeting those demands as well as the economical implications. Two LCA Optimization models have been developed within a framework that uses hourly building energy data, life cycle assessment (LCA), and mixed-integer linear programming (MILP). The objective functions that are used in the formulation of the problems include: (1) Minimizing life cycle primary energy consumption, (2) Minimizing global warming potential, (3) Minimizing tropospheric ozone precursor potential, (4) Minimizing acidification potential, (5) Minimizing NOx, SO 2 and CO2, and (6) Minimizing life cycle costs, considering a study period of ten years and the lifetime of equipment. The two LCA optimization models can be used for: (a) long term planning and operational analysis in buildings by analyzing the hourly energy use of a building during a day and (b) design and quick analysis of building operation based on periodic analysis of energy use of a building in a year. A Pareto-optimal frontier is also derived, which defines the minimum cost required to achieve any level of environmental emission or primary energy usage value or inversely the minimum environmental indicator and primary energy usage value that can be achieved and the cost required to achieve that value.
Simulation of minimally invasive vascular interventions for training purposes.
Alderliesten, Tanja; Konings, Maurits K; Niessen, Wiro J
2004-01-01
To master the skills required to perform minimally invasive vascular interventions, proper training is essential. A computer simulation environment has been developed to provide such training. The simulation is based on an algorithm specifically developed to simulate the motion of a guide wire--the main instrument used during these interventions--in the human vasculature. In this paper, the design and model of the computer simulation environment is described and first results obtained with phantom and patient data are presented. To simulate minimally invasive vascular interventions, a discrete representation of a guide wire is used which allows modeling of guide wires with different physical properties. An algorithm for simulating the propagation of a guide wire within a vascular system, on the basis of the principle of minimization of energy, has been developed. Both longitudinal translation and rotation are incorporated as possibilities for manipulating the guide wire. The simulation is based on quasi-static mechanics. Two types of energy are introduced: internal energy related to the bending of the guide wire, and external energy resulting from the elastic deformation of the vessel wall. A series of experiments were performed on phantom and patient data. Simulation results are qualitatively compared with 3D rotational angiography data. The results indicate plausible behavior of the simulation.
NASA Astrophysics Data System (ADS)
Kang, Donghun; Lee, Jungeon; Jung, Jongpil; Lee, Chul-Hee; Kyung, Chong-Min
2014-09-01
In mobile video systems powered by battery, reducing the encoder's compression energy consumption is critical to prolong its lifetime. Previous Energy-rate-distortion (E-R-D) optimization methods based on a software codec is not suitable for practical mobile camera systems because the energy consumption is too large and encoding rate is too low. In this paper, we propose an E-R-D model for the hardware codec based on the gate-level simulation framework to measure the switching activity and the energy consumption. From the proposed E-R-D model, an energy minimizing algorithm for mobile video camera sensor have been developed with the GOP (Group of Pictures) size and QP(Quantization Parameter) as run-time control variables. Our experimental results show that the proposed algorithm provides up to 31.76% of energy consumption saving while satisfying the rate and distortion constraints.
Sustainable, Reliable Mission-Systems Architecture
NASA Technical Reports Server (NTRS)
O'Neil, Graham; Orr, James K.; Watson, Steve
2005-01-01
A mission-systems architecture, based on a highly modular infrastructure utilizing open-standards hardware and software interfaces as the enabling technology is essential for affordable md sustainable space exploration programs. This mission-systems architecture requires (8) robust communication between heterogeneous systems, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, end verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered systems are applied to define the model. Technology projections reaching out 5 years are made to refine model details.
Update on SU(2) gauge theory with NF = 2 fundamental flavours.
NASA Astrophysics Data System (ADS)
Drach, Vincent; Janowski, Tadeusz; Pica, Claudio
2018-03-01
We present a non perturbative study of SU(2) gauge theory with two fundamental Dirac flavours. This theory provides a minimal template which is ideal for a wide class of Standard Model extensions featuring novel strong dynamics, such as a minimal realization of composite Higgs models. We present an update on the status of the meson spectrum and decay constants based on increased statistics on our existing ensembles and the inclusion of new ensembles with lighter pion masses, resulting in a more reliable chiral extrapolation. Preprint: CP3-Origins-2017-048 DNRF90
Maximize, minimize or target - optimization for a fitted response from a designed experiment
Anderson-Cook, Christine Michaela; Cao, Yongtao; Lu, Lu
2016-04-01
One of the common goals of running and analyzing a designed experiment is to find a location in the design space that optimizes the response of interest. Depending on the goal of the experiment, we may seek to maximize or minimize the response, or set the process to hit a particular target value. After the designed experiment, a response model is fitted and the optimal settings of the input factors are obtained based on the estimated response model. Furthermore, the suggested optimal settings of the input factors are then used in the production environment.
Sustainable, Reliable Mission-Systems Architecture
NASA Technical Reports Server (NTRS)
O'Neil, Graham; Orr, James K.; Watson, Steve
2007-01-01
A mission-systems architecture, based on a highly modular infrastructure utilizing: open-standards hardware and software interfaces as the enabling technology is essential for affordable and sustainable space exploration programs. This mission-systems architecture requires (a) robust communication between heterogeneous system, (b) high reliability, (c) minimal mission-to-mission reconfiguration, (d) affordable development, system integration, and verification of systems, and (e) minimal sustaining engineering. This paper proposes such an architecture. Lessons learned from the Space Shuttle program and Earthbound complex engineered system are applied to define the model. Technology projections reaching out 5 years are mde to refine model details.
Evaluation of Second-Level Inference in fMRI Analysis
Roels, Sanne P.; Loeys, Tom; Moerkerke, Beatrijs
2016-01-01
We investigate the impact of decisions in the second-level (i.e., over subjects) inferential process in functional magnetic resonance imaging on (1) the balance between false positives and false negatives and on (2) the data-analytical stability, both proxies for the reproducibility of results. Second-level analysis based on a mass univariate approach typically consists of 3 phases. First, one proceeds via a general linear model for a test image that consists of pooled information from different subjects. We evaluate models that take into account first-level (within-subjects) variability and models that do not take into account this variability. Second, one proceeds via inference based on parametrical assumptions or via permutation-based inference. Third, we evaluate 3 commonly used procedures to address the multiple testing problem: familywise error rate correction, False Discovery Rate (FDR) correction, and a two-step procedure with minimal cluster size. Based on a simulation study and real data we find that the two-step procedure with minimal cluster size results in most stable results, followed by the familywise error rate correction. The FDR results in most variable results, for both permutation-based inference and parametrical inference. Modeling the subject-specific variability yields a better balance between false positives and false negatives when using parametric inference. PMID:26819578
NASA Astrophysics Data System (ADS)
Genberg, Victor L.; Michels, Gregory J.
2017-08-01
The ultimate design goal of an optical system subjected to dynamic loads is to minimize system level wavefront error (WFE). In random response analysis, system WFE is difficult to predict from finite element results due to the loss of phase information. In the past, the use of ystem WFE was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for determining system level WFE using a linear optics model is presented. An error estimate is included in the analysis output based on fitting errors of mode shapes. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
Optimization Methods in Sherpa
NASA Astrophysics Data System (ADS)
Siemiginowska, Aneta; Nguyen, Dan T.; Doe, Stephen M.; Refsdal, Brian L.
2009-09-01
Forward fitting is a standard technique used to model X-ray data. A statistic, usually assumed weighted chi^2 or Poisson likelihood (e.g. Cash), is minimized in the fitting process to obtain a set of the best model parameters. Astronomical models often have complex forms with many parameters that can be correlated (e.g. an absorbed power law). Minimization is not trivial in such setting, as the statistical parameter space becomes multimodal and finding the global minimum is hard. Standard minimization algorithms can be found in many libraries of scientific functions, but they are usually focused on specific functions. However, Sherpa designed as general fitting and modeling application requires very robust optimization methods that can be applied to variety of astronomical data (X-ray spectra, images, timing, optical data etc.). We developed several optimization algorithms in Sherpa targeting a wide range of minimization problems. Two local minimization methods were built: Levenberg-Marquardt algorithm was obtained from MINPACK subroutine LMDIF and modified to achieve the required robustness; and Nelder-Mead simplex method has been implemented in-house based on variations of the algorithm described in the literature. A global search Monte-Carlo method has been implemented following a differential evolution algorithm presented by Storn and Price (1997). We will present the methods in Sherpa and discuss their usage cases. We will focus on the application to Chandra data showing both 1D and 2D examples. This work is supported by NASA contract NAS8-03060 (CXC).
Search strategies for top partners in composite Higgs models
NASA Astrophysics Data System (ADS)
Gripaios, Ben; Müller, Thibaut; Parker, M. A.; Sutherland, Dave
2014-08-01
We consider how best to search for top partners in generic composite Higgs models. We begin by classifying the possible group representations carried by top partners in models with and without a custodial SU(2) × SU(2) ⋊ 2 symmetry protecting the rate for Z → decays. We identify a number of minimal models whose top partners only have electric charges of , , or and thus decay to top or bottom quarks via a single Higgs or electroweak gauge boson. We develop an inclusive search for these based on a top veto, which we find to be more effective than existing searches. Less minimal models feature light states that can be sought in final states with like-sign leptons and so we find that 2 straightforward LHC searches give a reasonable coverage of the gamut of composite Higgs models.
Feedback loops and temporal misalignment in component-based hydrologic modeling
NASA Astrophysics Data System (ADS)
Elag, Mostafa M.; Goodall, Jonathan L.; Castronova, Anthony M.
2011-12-01
In component-based modeling, a complex system is represented as a series of loosely integrated components with defined interfaces and data exchanges that allow the components to be coupled together through shared boundary conditions. Although the component-based paradigm is commonly used in software engineering, it has only recently been applied for modeling hydrologic and earth systems. As a result, research is needed to test and verify the applicability of the approach for modeling hydrologic systems. The objective of this work was therefore to investigate two aspects of using component-based software architecture for hydrologic modeling: (1) simulation of feedback loops between components that share a boundary condition and (2) data transfers between temporally misaligned model components. We investigated these topics using a simple case study where diffusion of mass is modeled across a water-sediment interface. We simulated the multimedia system using two model components, one for the water and one for the sediment, coupled using the Open Modeling Interface (OpenMI) standard. The results were compared with a more conventional numerical approach for solving the system where the domain is represented by a single multidimensional array. Results showed that the component-based approach was able to produce the same results obtained with the more conventional numerical approach. When the two components were temporally misaligned, we explored the use of different interpolation schemes to minimize mass balance error within the coupled system. The outcome of this work provides evidence that component-based modeling can be used to simulate complicated feedback loops between systems and guidance as to how different interpolation schemes minimize mass balance error introduced when components are temporally misaligned.
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
NASA Astrophysics Data System (ADS)
Wagner, Martin G.; Laeseke, Paul F.; Schubert, Tilman; Slagowski, Jordan M.; Speidel, Michael A.; Mistretta, Charles A.
2017-03-01
Fluoroscopic image guidance for minimally invasive procedures in the thorax and abdomen suffers from respiratory and cardiac motion, which can cause severe subtraction artifacts and inaccurate image guidance. This work proposes novel techniques for respiratory motion tracking in native fluoroscopic images as well as a model based estimation of vessel deformation. This would allow compensation for respiratory motion during the procedure and therefore simplify the workflow for minimally invasive procedures such as liver embolization. The method first establishes dynamic motion models for both the contrast-enhanced vasculature and curvilinear background features based on a native (non-contrast) and a contrast-enhanced image sequence acquired prior to device manipulation, under free breathing conditions. The model of vascular motion is generated by applying the diffeomorphic demons algorithm to an automatic segmentation of the subtraction sequence. The model of curvilinear background features is based on feature tracking in the native sequence. The two models establish the relationship between the respiratory state, which is inferred from curvilinear background features, and the vascular morphology during that same respiratory state. During subsequent fluoroscopy, curvilinear feature detection is applied to determine the appropriate vessel mask to display. The result is a dynamic motioncompensated vessel mask superimposed on the fluoroscopic image. Quantitative evaluation of the proposed methods was performed using a digital 4D CT-phantom (XCAT), which provides realistic human anatomy including sophisticated respiratory and cardiac motion models. Four groups of datasets were generated, where different parameters (cycle length, maximum diaphragm motion and maximum chest expansion) were modified within each image sequence. Each group contains 4 datasets consisting of the initial native and contrast enhanced sequences as well as a sequence, where the respiratory motion is tracked. The respiratory motion tracking error was between 1.00 % and 1.09 %. The estimated dynamic vessel masks yielded a Sørensen-Dice coefficient between 0.94 and 0.96. Finally, the accuracy of the vessel contours was measured in terms of the 99th percentile of the error, which ranged between 0.64 and 0.96 mm. The presented results show that the approach is feasible for respiratory motion tracking and compensation and could therefore considerably improve the workflow of minimally invasive procedures in the thorax and abdomen
Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan
2018-02-01
In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix becomes prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called the adaptive margin slack minimization to iteratively improve the classification accuracy by an adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large scale data. We propose two such frameworks: the memory efficient sequential processing for sequential data processing and the parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compared with the state-of-the-art techniques to verify the validity of the proposed techniques.
Malaria transmission rates estimated from serological data.
Burattini, M. N.; Massad, E.; Coutinho, F. A.
1993-01-01
A mathematical model was used to estimate malaria transmission rates based on serological data. The model is minimally stochastic and assumes an age-dependent force of infection for malaria. The transmission rates estimated were applied to a simple compartmental model in order to mimic the malaria transmission. The model has shown a good retrieving capacity for serological and parasite prevalence data. PMID:8270011
Model-based setting of inspiratory pressure and respiratory rate in pressure-controlled ventilation.
Schranz, C; Becher, T; Schädler, D; Weiler, N; Möller, K
2014-03-01
Mechanical ventilation carries the risk of ventilator-induced-lung-injury (VILI). To minimize the risk of VILI, ventilator settings should be adapted to the individual patient properties. Mathematical models of respiratory mechanics are able to capture the individual physiological condition and can be used to derive personalized ventilator settings. This paper presents model-based calculations of inspiration pressure (pI), inspiration and expiration time (tI, tE) in pressure-controlled ventilation (PCV) and a retrospective evaluation of its results in a group of mechanically ventilated patients. Incorporating the identified first order model of respiratory mechanics in the basic equation of alveolar ventilation yielded a nonlinear relation between ventilation parameters during PCV. Given this patient-specific relation, optimized settings in terms of minimal pI and adequate tE can be obtained. We then retrospectively analyzed data from 16 ICU patients with mixed pathologies, whose ventilation had been previously optimized by ICU physicians with the goal of minimization of inspiration pressure, and compared the algorithm's 'optimized' settings to the settings that had been chosen by the physicians. The presented algorithm visualizes the patient-specific relations between inspiration pressure and inspiration time. The algorithm's calculated results highly correlate to the physician's ventilation settings with r = 0.975 for the inspiration pressure, and r = 0.902 for the inspiration time. The nonlinear patient-specific relations of ventilation parameters become transparent and support the determination of individualized ventilator settings according to therapeutic goals. Thus, the algorithm is feasible for a variety of ventilated ICU patients and has the potential of improving lung-protective ventilation by minimizing inspiratory pressures and by helping to avoid the build-up of clinically significant intrinsic positive end-expiratory pressure.
United States Air Force Summer Faculty Research Program (1983). Technical Report. Volume 2
1983-12-01
filters are given below: (1) Inverse filter - Based on the model given in Eq. (2) and the criterion of minimizing the norm (i.e., power ) of the...and compared based on their performances In machine classification under a variety of blur and noise conditions. These filters are analyzed to...criteria based on various assumptions of the Image models* In practice filter performance varies with the type of image, the blur and the noise conditions
Yu, Chanki; Lee, Sang Wook
2016-05-20
We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
ERIC Educational Resources Information Center
Zan, Xinxing Anna; Yoon, Sang Won; Khasawneh, Mohammad; Srihari, Krishnaswami
2013-01-01
In an effort to develop a low-cost and user-friendly forecasting model to minimize forecasting error, we have applied average and exponentially weighted return ratios to project undergraduate student enrollment. We tested the proposed forecasting models with different sets of historical enrollment data, such as university-, school-, and…
Non-minimally coupled tachyon field in teleparallel gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fazlpour, Behnaz; Banijamali, Ali, E-mail: b.fazlpour@umz.ac.ir, E-mail: a.banijamali@nit.ac.ir
2015-04-01
We perform a full investigation on dynamics of a new dark energy model in which the four-derivative of a non-canonical scalar field (tachyon) is non-minimally coupled to the vector torsion. Our analysis is done in the framework of teleparallel equivalent of general relativity which is based on torsion instead of curvature. We show that in our model there exists a late-time scaling attractor (point P{sub 4}), corresponding to an accelerating universe with the property that dark energy and dark matter densities are of the same order. Such a point can help to alleviate the cosmological coincidence problem. Existence of thismore » point is the most significant difference between our model and another model in which a canonical scalar field (quintessence) is used instead of tachyon field.« less
Non-minimally coupled condensate cosmologies: a phase space analysis
NASA Astrophysics Data System (ADS)
Carloni, Sante; Vignolo, Stefano; Cianci, Roberto
2014-09-01
We present an analysis of the phase space of cosmological models based on a non-minimal coupling between the geometry and a fermionic condensate. We observe that the strong constraint coming from the Dirac equations allows a detailed design of the cosmology of these models, and at the same time guarantees an evolution towards a state indistinguishable from general relativistic cosmological models. In this light, we show in detail how the use of some specific potentials can naturally reproduce a phase of accelerated expansion. In particular, we find for the first time that an exponential potential is able to induce two de Sitter phases separated by a power law expansion, which could be an interesting model for the unification of an inflationary phase and a dark energy era.
A minimal model for the structural energetics of VO2
NASA Astrophysics Data System (ADS)
Kim, Chanul; Marianetti, Chris; The Marianetti Group Team
Resolving the structural, magnetic, and electronic structure of VO2 from the first-principles of quantum mechanics is still a forefront problem despite decades of attention. Hybrid functionals have been shown to qualitatively ruin the structural energetics. While density functional theory (DFT) combined with cluster extensions of dynamical mean-field theory (DMFT) have demonstrated promising results in terms of the electronic properties, structural phase stability has not yet been addressed. In order to capture the basic physics of the structural transition, we propose a minimal model of VO2 based on the one dimensional Peierls-Hubbard model and parameterize this based on DFT calculations of VO2. The total energy versus dimerization in the minimal mode is then solved numerically exactly using density matrix renormalization group (DMRG) and compared to the Hartree-Fock solution. We demonstrate that the Hartree-Fock solution exhibits the same pathologies as DFT+U, and spin density functional theory for that matter, while the DMRG solution is consistent with experimental observation. Our results demonstrate the critical role of non-locality in the total energy, and this will need to be accounted for to obtain a complete description of VO2 from first-principles. The authors acknowledge support from FAME, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA.
Chase, J Geoffrey; Lambermont, Bernard; Starfinger, Christina; Hann, Christopher E; Shaw, Geoffrey M; Ghuysen, Alexandre; Kolh, Philippe; Dauby, Pierre C; Desaive, Thomas
2011-01-01
A cardiovascular system (CVS) model and parameter identification method have previously been validated for identifying different cardiac and circulatory dysfunctions in simulation and using porcine models of pulmonary embolism, hypovolemia with PEEP titrations and induced endotoxic shock. However, these studies required both left and right heart catheters to collect the data required for subject-specific monitoring and diagnosis-a maximally invasive data set in a critical care setting although it does occur in practice. Hence, use of this model-based diagnostic would require significant additional invasive sensors for some subjects, which is unacceptable in some, if not all, cases. The main goal of this study is to prove the concept of using only measurements from one side of the heart (right) in a 'minimal' data set to identify an effective patient-specific model that can capture key clinical trends in endotoxic shock. This research extends existing methods to a reduced and minimal data set requiring only a single catheter and reducing the risk of infection and other complications-a very common, typical situation in critical care patients, particularly after cardiac surgery. The extended methods and assumptions that found it are developed and presented in a case study for the patient-specific parameter identification of pig-specific parameters in an animal model of induced endotoxic shock. This case study is used to define the impact of this minimal data set on the quality and accuracy of the model application for monitoring, detecting and diagnosing septic shock. Six anesthetized healthy pigs weighing 20-30 kg received a 0.5 mg kg(-1) endotoxin infusion over a period of 30 min from T0 to T30. For this research, only right heart measurements were obtained. Errors for the identified model are within 8% when the model is identified from data, re-simulated and then compared to the experimentally measured data, including measurements not used in the identification process for validation. Importantly, all identified parameter trends match physiologically and clinically and experimentally expected changes, indicating that no diagnostic power is lost. This work represents a further with human subjects validation for this model-based approach to cardiovascular diagnosis and therapy guidance in monitoring endotoxic disease states. The results and methods obtained can be readily extended from this case study to the other animal model results presented previously. Overall, these results provide further support for prospective, proof of concept clinical testing with humans.
USDA-ARS?s Scientific Manuscript database
Soil moisture datasets (e.g. satellite-, model-, station-based) vary greatly with respect to their signal, noise, and/or combined time-series variability. Minimizing differences in signal variances is particularly important in data assimilation techniques to optimize the accuracy of the analysis obt...
Use of simulation to compare the performance of minimization with stratified blocked randomization.
Toorawa, Robert; Adena, Michael; Donovan, Mark; Jones, Steve; Conlon, John
2009-01-01
Minimization is an alternative method to stratified permuted block randomization, which may be more effective at balancing treatments when there are many strata. However, its use in the regulatory setting for industry trials remains controversial, primarily due to the difficulty in interpreting conventional asymptotic statistical tests under restricted methods of treatment allocation. We argue that the use of minimization should be critically evaluated when designing the study for which it is proposed. We demonstrate by example how simulation can be used to investigate whether minimization improves treatment balance compared with stratified randomization, and how much randomness can be incorporated into the minimization before any balance advantage is no longer retained. We also illustrate by example how the performance of the traditional model-based analysis can be assessed, by comparing the nominal test size with the observed test size over a large number of simulations. We recommend that the assignment probability for the minimization be selected using such simulations. Copyright (c) 2008 John Wiley & Sons, Ltd.
Markov random field model-based edge-directed image interpolation.
Li, Min; Nguyen, Truong Q
2008-07-01
This paper presents an edge-directed image interpolation algorithm. In the proposed algorithm, the edge directions are implicitly estimated with a statistical-based approach. In opposite to explicit edge directions, the local edge directions are indicated by length-16 weighting vectors. Implicitly, the weighting vectors are used to formulate geometric regularity (GR) constraint (smoothness along edges and sharpness across edges) and the GR constraint is imposed on the interpolated image through the Markov random field (MRF) model. Furthermore, under the maximum a posteriori-MRF framework, the desired interpolated image corresponds to the minimal energy state of a 2-D random field given the low-resolution image. Simulated annealing methods are used to search for the minimal energy state from the state space. To lower the computational complexity of MRF, a single-pass implementation is designed, which performs nearly as well as the iterative optimization. Simulation results show that the proposed MRF model-based edge-directed interpolation method produces edges with strong geometric regularity. Compared to traditional methods and other edge-directed interpolation methods, the proposed method improves the subjective quality of the interpolated edges while maintaining a high PSNR level.
Making the Case for a Model-Based Definition of Engineering Materials (Postprint)
2017-09-12
MBE relies on digi- tal representations, or a model-based definition (MBD), to define a product throughout design , manufacturing and sus- tainment...discovery through development, scale-up, product design and qualification, manufacture and sustainment have changed little over the past decades. This...testing data provided a certifiable material definition, so as to minimize risk and simplify procurement of materials during the design , manufacture , and
Predicting forest dieback in Maine, USA: a simple model based on soil frost and drought
Allan N.D. Auclair; Warren E. Heilman; Blondel Brinkman
2010-01-01
Tree roots of northern hardwoods are shallow rooted, winter active, and minimally frost hardened; dieback is a winter freezing injury to roots incited by frost penetration in the absence of adequate snow cover and exacerbated by drought in summer. High soil water content greatly increases conductivity of frost. We develop a model based on the sum of z-scores of soil...
Inherent structure versus geometric metric for state space discretization.
Liu, Hanzhong; Li, Minghai; Fan, Jue; Huo, Shuanghong
2016-05-30
Inherent structure (IS) and geometry-based clustering methods are commonly used for analyzing molecular dynamics trajectories. ISs are obtained by minimizing the sampled conformations into local minima on potential/effective energy surface. The conformations that are minimized into the same energy basin belong to one cluster. We investigate the influence of the applications of these two methods of trajectory decomposition on our understanding of the thermodynamics and kinetics of alanine tetrapeptide. We find that at the microcluster level, the IS approach and root-mean-square deviation (RMSD)-based clustering method give totally different results. Depending on the local features of energy landscape, the conformations with close RMSDs can be minimized into different minima, while the conformations with large RMSDs could be minimized into the same basin. However, the relaxation timescales calculated based on the transition matrices built from the microclusters are similar. The discrepancy at the microcluster level leads to different macroclusters. Although the dynamic models established through both clustering methods are validated approximately Markovian, the IS approach seems to give a meaningful state space discretization at the macrocluster level in terms of conformational features and kinetics. © 2016 Wiley Periodicals, Inc.
Quality assurance of multiport image-guided minimally invasive surgery at the lateral skull base.
Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg
2014-01-01
For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes.
Quality Assurance of Multiport Image-Guided Minimally Invasive Surgery at the Lateral Skull Base
Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg
2014-01-01
For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes. PMID:25105146
Tuned and non-Higgsable U(1)s in F-theory
Wang, Yi-Nan
2017-03-01
We study the tuning of U(1) gauge fields in F-theory models on a base of general dimension. We construct a formula that computes the change in Weierstrass moduli when such a U(1) is tuned, based on the Morrison-Park form of a Weierstrass model with an additional rational section. Using this formula, we propose the form of “minimal tuning” on any base, which corresponds to the case where the decrease in the number of Weierstrass moduli is minimal. Applying this result, we discover some universal features of bases with non-Higgsable U(1)s. Mathematically, a generic elliptic fibration over such a base hasmore » additional rational sections. Physically, this condition implies the existence of U(1) gauge group in the low-energy supergravity theory after compactification that cannot be Higgsed away. In particular, we show that the elliptic Calabi-Yau manifold over such a base has a small number of complex structure moduli. We also suggest that non-Higgsable U(1)s can never appear on any toric bases. Finally, we construct the first example of a threefold base with non-Higgsable U(1)s.« less
The minimal GUT with inflaton and dark matter unification
NASA Astrophysics Data System (ADS)
Chen, Heng-Yu; Gogoladze, Ilia; Hu, Shan; Li, Tianjun; Wu, Lina
2018-01-01
Giving up the solutions to the fine-tuning problems, we propose the non-supersymmetric flipped SU(5)× U(1)_X model based on the minimal particle content principle, which can be constructed from the four-dimensional SO(10) models, five-dimensional orbifold SO(10) models, and local F-theory SO(10) models. To achieve gauge coupling unification, we introduce one pair of vector-like fermions, which form a complete SU(5)× U(1)_X representation. The proton lifetime is around 5× 10^{35} years, neutrino masses and mixing can be explained via the seesaw mechanism, baryon asymmetry can be generated via leptogenesis, and the vacuum stability problem can be solved as well. In particular, we propose that inflaton and dark matter particles can be unified to a real scalar field with Z_2 symmetry, which is not an axion and does not have the non-minimal coupling to gravity. Such a kind of scenarios can be applied to the generic scalar dark matter models. Also, we find that the vector-like particle corrections to the B_s^0 masses might be about 6.6%, while their corrections to the K^0 and B_d^0 masses are negligible.
Stroh, Mark; Addy, Carol; Wu, Yunhui; Stoch, S Aubrey; Pourkavoos, Nazaneen; Groff, Michelle; Xu, Yang; Wagner, John; Gottesdiener, Keith; Shadle, Craig; Wang, Hong; Manser, Kimberly; Winchell, Gregory A; Stone, Julie A
2009-03-01
We describe how modeling and simulation guided program decisions following a randomized placebo-controlled single-rising oral dose first-in-man trial of compound A where an undesired transient blood pressure (BP) elevation occurred in fasted healthy young adult males. We proposed a lumped-parameter pharmacokinetic-pharmacodynamic (PK/PD) model that captured important aspects of the BP homeostasis mechanism. Four conceptual units characterized the feedback PD model: a sinusoidal BP set point, an effect compartment, a linear effect model, and a system response. To explore approaches for minimizing the BP increase, we coupled the PD model to a modified PK model to guide oral controlled-release (CR) development. The proposed PK/PD model captured the central tendency of the observed data. The simulated BP response obtained with theoretical release rate profiles suggested some amelioration of the peak BP response with CR. This triggered subsequent CR formulation development; we used actual dissolution data from these candidate CR formulations in the PK/PD model to confirm a potential benefit in the peak BP response. Though this paradigm has yet to be tested in the clinic, our model-based approach provided a common rational framework to more fully utilize the limited available information for advancing the program.
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
Rickard, Timothy C; Bajic, Daniel
2006-07-01
The applicability of the identical elements (IE) model of arithmetic fact retrieval (T. C. Rickard, A. F. Healy, & L. E. Bourne, 1994) to cued recall from episodic (image and sentence) memory was explored in 3 transfer experiments. In agreement with results from arithmetic, speedup following even minimal practice recalling a missing word from an episodically bound word triplet did not transfer positively to other cued recall items involving the same triplet. The shape of the learning curve further supported a shift from episode-based to IE-based recall, extending some models of skill learning to cued recall practice. In contrast with previous findings, these results indicate that a form of representation that is independent of the original episodic memory underlies cued-recall performance following minimal practice. Copyright 2006 APA, all rights reserved.
Narayanan, Sarath Kumar; Cohen, Ralph Clinton; Shun, Albert
2014-06-01
Minimal access techniques have transformed the way pediatric surgery is practiced. Due to various constraints, surgical residency programs have not been able to tutor adequate training skills in the routine setting. The advent of new technology and methods in minimally invasive surgery (MIS), has similarly contributed to the need for systematic skills' training in a safe, simulated environment. To enable the training of the proper technique among pediatric surgery trainees, we have advanced a porcine non-survival model for endoscopic surgery. The technical advancements over the past 3 years and a subjective validation of the porcine model from 114 participating trainees using a standard questionnaire and a 5-point Likert scale have been described here. Mean attitude scores and analysis of variance (ANOVA) were used for statistical analysis of the data. Almost all trainees agreed or strongly agreed that the animal-based model was appropriate (98.35%) and also acknowledged that such workshops provided adequate practical experience before attempting on human subjects (96.6%). Mean attitude score for respondents was 19.08 (SD 3.4, range 4-20). Attitude scores showed no statistical association with years of experience or the level of seniority, indicating a positive attitude among all groups of respondents. Structured porcine-based MIS training should be an integral part of skill acquisition for pediatric surgery trainees and the experience gained can be transferred into clinical practice. We advocate that laparoscopic training should begin in a controlled workshop setting before procedures are attempted on human patients.
Probabilistic sparse matching for robust 3D/3D fusion in minimally invasive surgery.
Neumann, Dominik; Grbic, Sasa; John, Matthias; Navab, Nassir; Hornegger, Joachim; Ionasec, Razvan
2015-01-01
Classical surgery is being overtaken by minimally invasive and transcatheter procedures. As there is no direct view or access to the affected anatomy, advanced imaging techniques such as 3D C-arm computed tomography (CT) and C-arm fluoroscopy are routinely used in clinical practice for intraoperative guidance. However, due to constraints regarding acquisition time and device configuration, intraoperative modalities have limited soft tissue image quality and reliable assessment of the cardiac anatomy typically requires contrast agent, which is harmful to the patient and requires complex acquisition protocols. We propose a probabilistic sparse matching approach to fuse high-quality preoperative CT images and nongated, noncontrast intraoperative C-arm CT images by utilizing robust machine learning and numerical optimization techniques. Thus, high-quality patient-specific models can be extracted from the preoperative CT and mapped to the intraoperative imaging environment to guide minimally invasive procedures. Extensive quantitative experiments on 95 clinical datasets demonstrate that our model-based fusion approach has an average execution time of 1.56 s, while the accuracy of 5.48 mm between the anchor anatomy in both images lies within expert user confidence intervals. In direct comparison with image-to-image registration based on an open-source state-of-the-art medical imaging library and a recently proposed quasi-global, knowledge-driven multi-modal fusion approach for thoracic-abdominal images, our model-based method exhibits superior performance in terms of registration accuracy and robustness with respect to both target anatomy and anchor anatomy alignment errors.
Minimizers with Bounded Action for the High-Dimensional Frenkel-Kontorova Model
NASA Astrophysics Data System (ADS)
Miao, Xue-Qing; Wang, Ya-Nan; Qin, Wen-Xin
In Aubry-Mather theory for monotone twist maps or for one-dimensional Frenkel-Kontorova (FK) model with nearest neighbor interactions, each global minimizer (minimal energy configuration) is naturally Birkhoff. However, this is not true for the one-dimensional FK model with non-nearest neighbor interactions or for the high-dimensional FK model. In this paper, we study the Birkhoff property of minimizers with bounded action for the high-dimensional FK model.
NASA Astrophysics Data System (ADS)
Zeng, Wenhui; Yi, Jin; Rao, Xiao; Zheng, Yun
2017-11-01
In this article, collision-avoidance path planning for multiple car-like robots with variable motion is formulated as a two-stage objective optimization problem minimizing both the total length of all paths and the task's completion time. Accordingly, a new approach based on Pythagorean Hodograph (PH) curves and Modified Harmony Search algorithm is proposed to solve the two-stage path-planning problem subject to kinematic constraints such as velocity, acceleration, and minimum turning radius. First, a method of path planning based on PH curves for a single robot is proposed. Second, a mathematical model of the two-stage path-planning problem for multiple car-like robots with variable motion subject to kinematic constraints is constructed that the first-stage minimizes the total length of all paths and the second-stage minimizes the task's completion time. Finally, a modified harmony search algorithm is applied to solve the two-stage optimization problem. A set of experiments demonstrate the effectiveness of the proposed approach.
Entanglement of purification: from spin chains to holography
NASA Astrophysics Data System (ADS)
Nguyen, Phuc; Devakul, Trithep; Halbasch, Matthew G.; Zaletel, Michael P.; Swingle, Brian
2018-01-01
Purification is a powerful technique in quantum physics whereby a mixed quantum state is extended to a pure state on a larger system. This process is not unique, and in systems composed of many degrees of freedom, one natural purification is the one with minimal entanglement. Here we study the entropy of the minimally entangled purification, called the entanglement of purification, in three model systems: an Ising spin chain, conformal field theories holographically dual to Einstein gravity, and random stabilizer tensor networks. We conjecture values for the entanglement of purification in all these models, and we support our conjectures with a variety of numerical and analytical results. We find that such minimally entangled purifications have a number of applications, from enhancing entanglement-based tensor network methods for describing mixed states to elucidating novel aspects of the emergence of geometry from entanglement in the AdS/CFT correspondence.
NASA Astrophysics Data System (ADS)
Xie, Dexuan
2014-10-01
The Poisson-Boltzmann equation (PBE) is one widely-used implicit solvent continuum model in the calculation of electrostatic potential energy for biomolecules in ionic solvent, but its numerical solution remains a challenge due to its strong singularity and nonlinearity caused by its singular distribution source terms and exponential nonlinear terms. To effectively deal with such a challenge, in this paper, new solution decomposition and minimization schemes are proposed, together with a new PBE analysis on solution existence and uniqueness. Moreover, a PBE finite element program package is developed in Python based on the FEniCS program library and GAMer, a molecular surface and volumetric mesh generation program package. Numerical tests on proteins and a nonlinear Born ball model with an analytical solution validate the new solution decomposition and minimization schemes, and demonstrate the effectiveness and efficiency of the new PBE finite element program package.
Rigorous force field optimization principles based on statistical distance minimization
Vlcek, Lukas; Chialvo, Ariel A.
2015-10-12
We use the concept of statistical distance to define a measure of distinguishability between a pair of statistical mechanical systems, i.e., a model and its target, and show that its minimization leads to general convergence of the model’s static measurable properties to those of the target. Here we exploit this feature to define a rigorous basis for the development of accurate and robust effective molecular force fields that are inherently compatible with coarse-grained experimental data. The new model optimization principles and their efficient implementation are illustrated through selected examples, whose outcome demonstrates the higher robustness and predictive accuracy of themore » approach compared to other currently used methods, such as force matching and relative entropy minimization. We also discuss relations between the newly developed principles and established thermodynamic concepts, which include the Gibbs-Bogoliubov inequality and the thermodynamic length.« less
A risk-based multi-objective model for optimal placement of sensors in water distribution system
NASA Astrophysics Data System (ADS)
Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein
2018-02-01
In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in water distribution system (WDS). This model determines minimization of risk which is caused by simultaneous multi-point contamination injection in WDS using CVaR approach. The CVaR considers uncertainties of contamination injection in the form of probability distribution function and calculates low-probability extreme events. In this approach, extreme losses occur at tail of the losses distribution function. Four-objective optimization model based on NSGA-II algorithm is developed to minimize losses of contamination injection (through CVaR of affected population and detection time) and also minimize the two other main criteria of optimal placement of sensors including probability of undetected events and cost. Finally, to determine the best solution, Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), as a subgroup of Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among objective functions. Also, sensitivity analysis is done to investigate the importance of each criterion on PROMETHEE results considering three relative weighting scenarios. The effectiveness of the proposed methodology is examined through applying it to Lamerd WDS in the southwestern part of Iran. The PROMETHEE suggests 6 sensors with suitable distribution that approximately cover all regions of WDS. Optimal values related to CVaR of affected population and detection time as well as probability of undetected events for the best optimal solution are equal to 17,055 persons, 31 mins and 0.045%, respectively. The obtained results of the proposed methodology in Lamerd WDS show applicability of CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme value of losses in WDS.
Perico, Angelo; Manning, Gerald S
2014-11-01
We formulate and analyze a minimal model, based on condensation theory, of the lamellar cationic lipid (CL)-DNA complex of alternately charged lipid bilayers and DNA monolayers in a salt solution. Each lipid bilayer, composed by a random mixture of cationic and neutral lipids, is assumed to be a rigid uniformly charged plane. Each DNA monolayer, located between two lipid bilayers, is formed by the same number of parallel DNAs with a uniform separation distance. For the electrostatic calculation, the model lipoplex is collapsed to a single plane with charge density equal to the net lipid and DNA charge. The free energy difference between the lamellar lipoplex and a reference state of the same number of free lipid bilayers and free DNAs, is calculated as a function of the fraction of CLs, of the ratio of the number of CL charges to the number of negative charges of the DNA phosphates, and of the total number of planes. At the isoelectric point the free energy difference is minimal. The complex formation, already favoured by the decrease of the electrostatic charging free energy, is driven further by the free energy gain due to the release of counterions from the DNAs and from the lipid bilayers, if strongly charged. This minimal model compares well with experiment for lipids having a strong preference for planar geometry and with major features of more detailed models of the lipoplex. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Burger, Martin; Dirks, Hendrik; Frerking, Lena; Hauptmann, Andreas; Helin, Tapio; Siltanen, Samuli
2017-12-01
In this paper we study the reconstruction of moving object densities from undersampled dynamic x-ray tomography in two dimensions. A particular motivation of this study is to use realistic measurement protocols for practical applications, i.e. we do not assume to have a full Radon transform in each time step, but only projections in few angular directions. This restriction enforces a space-time reconstruction, which we perform by incorporating physical motion models and regularization of motion vectors in a variational framework. The methodology of optical flow, which is one of the most common methods to estimate motion between two images, is utilized to formulate a joint variational model for reconstruction and motion estimation. We provide a basic mathematical analysis of the forward model and the variational model for the image reconstruction. Moreover, we discuss the efficient numerical minimization based on alternating minimizations between images and motion vectors. A variety of results are presented for simulated and real measurement data with different sampling strategy. A key observation is that random sampling combined with our model allows reconstructions of similar amount of measurements and quality as a single static reconstruction.
Riggs, M M; Bennetts, M; van der Graaf, P H; Martin, S W
2012-01-01
Endometriosis is a gynecological condition resulting from proliferation of endometrial-like tissue outside the endometrial cavity. Estrogen suppression therapies, mediated through gonadotropin-releasing hormone (GnRH) modulation, decrease endometriotic implants and diminish associated pain albeit at the expense of bone mineral density (BMD) loss. Our goal was to provide model-based guidance for GnRH-modulating clinical programs intended for endometriosis management. This included developing an estrogen suppression target expected to provide symptomatic relief with minimal BMD loss and to evaluate end points and study durations supportive of efficient development decisions. An existing multiscale model of calcium and bone was adapted to include systematic estrogen pharmacologic effects to describe estrogen concentration-related effects on BMD. A logistic regression fit to patient-level data from three clinical GnRH agonist (nafarelin) studies described the relationship of estrogen with endometrial-related pain. Targeting estradiol between 20 and 40 pg/ml was predicted to provide efficacious endometrial pain response while minimizing BMD effects. PMID:23887363
Implementation of Mamdani Fuzzy Method in Employee Promotion System
NASA Astrophysics Data System (ADS)
Zulfikar, W. B.; Jumadi; Prasetyo, P. K.; Ramdhani, M. A.
2018-01-01
Nowadays, employees are big assets to an institution. Every employee has a different educational background, degree, work skill, attitude and ethic that affect the performance. An institution including government institution implements a promotion system in order to improve the performance of the employees. Pangandaran Tourism, Industry, Trade, and SME Department is one of government agency that implements a promotion system to discover employees who deserve to get promotion. However, there are some practical deficiencies in the promotion system, one of which is the subjectivity issue. This work proposed a classification model that could minimize the subjectivity issue in employee promotion system. This paper reported a classification employee based on their eligibility for promotion. The degree of membership was decided using Mamdani Fuzzy based on determinant factors of the performance of employees. In the evaluation phase, this model had an accuracy of 91.4%. It goes to show that this model may minimize the subjectivity issue in the promotion system, especially at Pangandaran Tourism, Industry, Trade, and SME Department.
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Image restoration by minimizing zero norm of wavelet frame coefficients
NASA Astrophysics Data System (ADS)
Bao, Chenglong; Dong, Bin; Hou, Likun; Shen, Zuowei; Zhang, Xiaoqun; Zhang, Xue
2016-11-01
In this paper, we propose two algorithms, namely the extrapolated proximal iterative hard thresholding (EPIHT) algorithm and the EPIHT algorithm with line-search, for solving the {{\\ell }}0-norm regularized wavelet frame balanced approach for image restoration. Under the theoretical framework of Kurdyka-Łojasiewicz property, we show that the sequences generated by the two algorithms converge to a local minimizer with linear convergence rate. Moreover, extensive numerical experiments on sparse signal reconstruction and wavelet frame based image restoration problems including CT reconstruction, image deblur, demonstrate the improvement of {{\\ell }}0-norm based regularization models over some prevailing ones, as well as the computational efficiency of the proposed algorithms.
Radiative neutrino masses from order-4 CP symmetry
NASA Astrophysics Data System (ADS)
Ivanov, Igor P.
2018-02-01
Generalized CP symmetry of order 4 (CP4) is surprisingly powerful in shaping scalar and quark sectors of multi-Higgs models. Here, we extend this framework to the neutrino sector. We build two simple Majorana neutrino mass models with unbroken CP4, which are analogous to Ma's scotogenic model. Both models use three Higgs doublets and two or three right-handed (RH) neutrinos. The minimal CP4 symmetric scotogenic model uses only two RH neutrinos, leads to three non-zero light neutrino masses, and contains a built-in mechanism to further suppress them via phase alignment. With three RH neutrinos, one generates a type I seesaw mass matrix of rank 1, which is then corrected by the same scotogenic mechanism, naturally leading to two neutrino mass scales with mild hierarchy. These minimal CP4-based constructions emerge as a primer for introducing additional symmetry structures and exploring their phenomenological consequences.
Routing and Scheduling Optimization Model of Sea Transportation
NASA Astrophysics Data System (ADS)
barus, Mika debora br; asyrafy, Habib; nababan, Esther; mawengkang, Herman
2018-01-01
This paper examines the routing and scheduling optimization model of sea transportation. One of the issues discussed is about the transportation of ships carrying crude oil (tankers) which is distributed to many islands. The consideration is the cost of transportation which consists of travel costs and the cost of layover at the port. Crude oil to be distributed consists of several types. This paper develops routing and scheduling model taking into consideration some objective functions and constraints. The formulation of the mathematical model analyzed is to minimize costs based on the total distance visited by the tanker and minimize the cost of the ports. In order for the model of the problem to be more realistic and the cost calculated to be more appropriate then added a parameter that states the multiplier factor of cost increases as the charge of crude oil is filled.
Estimation of cardiac conductivities in ventricular tissue by a variational approach
NASA Astrophysics Data System (ADS)
Yang, Huanhuan; Veneziani, Alessandro
2015-11-01
The bidomain model is the current standard model to simulate cardiac potential propagation. The numerical solution of this system of partial differential equations strongly depends on the model parameters and in particular on the cardiac conductivities. Unfortunately, it is quite problematic to measure these parameters in vivo and even more so in clinical practice, resulting in no common agreement in the literature. In this paper we consider a variational data assimilation approach to estimating those parameters. We consider the parameters as control variables to minimize the mismatch between the computed and the measured potentials under the constraint of the bidomain system. The existence of a minimizer of the misfit function is proved with the phenomenological Rogers-McCulloch ionic model, that completes the bidomain system. We significantly improve the numerical approaches in the literature by resorting to a derivative-based optimization method with settlement of some challenges due to discontinuity. The improvement in computational efficiency is confirmed by a 2D test as a direct comparison with approaches in the literature. The core of our numerical results is in 3D, on both idealized and real geometries, with the minimal ionic model. We demonstrate the reliability and the stability of the conductivity estimation approach in the presence of noise and with an imperfect knowledge of other model parameters.
Amber Plug-In for Protein Shop
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliva, Ricardo
2004-05-10
The Amber Plug-in for ProteinShop has two main components: an AmberEngine library to compute the protein energy models, and a module to solve the energy minimization problem using an optimization algorithm in the OPTI-+ library. Together, these components allow the visualization of the protein folding process in ProteinShop. AmberEngine is a object-oriented library to compute molecular energies based on the Amber model. The main class is called ProteinEnergy. Its main interface methods are (1) "init" to initialize internal variables needed to compute the energy. (2) "eval" to evaluate the total energy given a vector of coordinates. Additional methods allow themore » user to evaluate the individual components of the energy model (bond, angle, dihedral, non-bonded-1-4, and non-bonded energies) and to obtain the energy of each individual atom. The Amber Engine library source code includes examples and test routines that illustrate the use of the library in stand alone programs. The energy minimization module uses the AmberEngine library and the nonlinear optimization library OPT++. OPT++ is open source software available under the GNU Lesser General Public License. The minimization module currently makes use of the LBFGS optimization algorithm in OPT++ to perform the energy minimization. Future releases may give the user a choice of other algorithms available in OPT++.« less
As ecological risk assessments (ERA) move beyond organism-based determinations towards probabilistic population-level assessments, model complexity must be evaluated against the goals of the assessment, the information available to parameterize components with minimal dependence ...
Adopting a plant-based diet minimally increased food costs in WHEL Study.
Hyder, Joseph A; Thomson, Cynthia A; Natarajan, Loki; Madlensky, Lisa; Pu, Minya; Emond, Jennifer; Kealey, Sheila; Rock, Cheryl L; Flatt, Shirley W; Pierce, John P
2009-01-01
To assess the cost of adopting a plant-based diet. Breast cancer survivors randomized to dietary intervention (n=1109) or comparison (n=1145) group; baseline and 12-month data on diet and grocery costs. At baseline, both groups reported similar food costs and dietary intake. At 12 months, only the intervention group changed their diet (vegetable-fruit: 6.3 to 8.9 serv/d.; fiber: 21.6 to 29.8 g/d; fat: 28.2 to 22.3% of E). The intervention change was associated with a significant increase of $1.22/ person/week (multivariate model, P=0.027). A major change to a plant-based diet was associated with a minimal increase in grocery costs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard
Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimizationmore » problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO 2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.« less
Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard; ...
2017-06-06
Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimizationmore » problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO 2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.« less
A constrained registration problem based on Ciarlet-Geymonat stored energy
NASA Astrophysics Data System (ADS)
Derfoul, Ratiba; Le Guyader, Carole
2014-03-01
In this paper, we address the issue of designing a theoretically well-motivated registration model capable of handling large deformations and including geometrical constraints, namely landmark points to be matched, in a variational framework. The theory of linear elasticity being unsuitable in this case, since assuming small strains and the validity of Hooke's law, the introduced functional is based on nonlinear elasticity principles. More precisely, the shapes to be matched are viewed as Ciarlet-Geymonat materials. We demonstrate the existence of minimizers of the related functional minimization problem and prove a convergence result when the number of geometric constraints increases. We then describe and analyze a numerical method of resolution based on the introduction of an associated decoupled problem under inequality constraint in which an auxiliary variable simulates the Jacobian matrix of the deformation field. A theoretical result of -convergence is established. We then provide preliminary 2D results of the proposed matching model for the registration of mouse brain gene expression data to a neuroanatomical mouse atlas.
Modeling of leishmaniasis infection dynamics: novel application to the design of effective therapies
2012-01-01
Background The WHO considers leishmaniasis as one of the six most important tropical diseases worldwide. It is caused by parasites of the genus Leishmania that are passed on to humans and animals by the phlebotomine sandfly. Despite all of the research, there is still a lack of understanding on the metabolism of the parasite and the progression of the disease. In this study, a mathematical model of disease progression was developed based on experimental data of clinical symptoms, immunological responses, and parasite load for Leishmania amazonensis in BALB/c mice. Results Four biologically significant variables were chosen to develop a differential equation model based on the GMA power-law formalism. Parameters were determined to minimize error in the model dynamics and time series experimental data. Subsequently, the model robustness was tested and the model predictions were verified by comparing them with experimental observations made in different experimental conditions. The model obtained helps to quantify relationships between the selected variables, leads to a better understanding of disease progression, and aids in the identification of crucial points for introducing therapeutic methods. Conclusions Our model can be used to identify the biological factors that must be changed to minimize parasite load in the host body, and contributes to the design of effective therapies. PMID:22222070
Model-based Roentgen stereophotogrammetry of orthopaedic implants.
Valstar, E R; de Jong, F W; Vrooman, H A; Rozing, P M; Reiber, J H
2001-06-01
Attaching tantalum markers to prostheses for Roentgen stereophotogrammetry (RSA) may be difficult and is sometimes even impossible. In this study, a model-based RSA method that avoids the attachment of markers to prostheses is presented and validated. This model-based RSA method uses a triangulated surface model of the implant. A projected contour of this model is calculated and this calculated model contour is matched onto the detected contour of the actual implant in the RSA radiograph. The difference between the two contours is minimized by variation of the position and orientation of the model. When a minimal difference between the contours is found, an optimal position and orientation of the model has been obtained. The method was validated by means of a phantom experiment. Three prosthesis components were used in this experiment: the femoral and tibial component of an Interax total knee prosthesis (Stryker Howmedica Osteonics Corp., Rutherfort, USA) and the femoral component of a Profix total knee prosthesis (Smith & Nephew, Memphis, USA). For the prosthesis components used in this study, the accuracy of the model-based method is lower than the accuracy of traditional RSA. For the Interax femoral and tibial components, significant dimensional tolerances were found that were probably caused by the casting process and manual polishing of the components surfaces. The largest standard deviation for any translation was 0.19mm and for any rotation it was 0.52 degrees. For the Profix femoral component that had no large dimensional tolerances, the largest standard deviation for any translation was 0.22mm and for any rotation it was 0.22 degrees. From this study we may conclude that the accuracy of the current model-based RSA method is sensitive to dimensional tolerances of the implant. Research is now being conducted to make model-based RSA less sensitive to dimensional tolerances and thereby improving its accuracy.
Kamesh, Reddi; Rani, Kalipatnapu Yamuna
2017-12-01
In this paper, a novel formulation for nonlinear model predictive control (MPC) has been proposed incorporating the extended Kalman filter (EKF) control concept using a purely data-driven artificial neural network (ANN) model based on measurements for supervisory control. The proposed scheme consists of two modules focusing on online parameter estimation based on past measurements and control estimation over control horizon based on minimizing the deviation of model output predictions from set points along the prediction horizon. An industrial case study for temperature control of a multiproduct semibatch polymerization reactor posed as a challenge problem has been considered as a test bed to apply the proposed ANN-EKFMPC strategy at supervisory level as a cascade control configuration along with proportional integral controller [ANN-EKFMPC with PI (ANN-EKFMPC-PI)]. The proposed approach is formulated incorporating all aspects of MPC including move suppression factor for control effort minimization and constraint-handling capability including terminal constraints. The nominal stability analysis and offset-free tracking capabilities of the proposed controller are proved. Its performance is evaluated by comparison with a standard MPC-based cascade control approach using the same adaptive ANN model. The ANN-EKFMPC-PI control configuration has shown better controller performance in terms of temperature tracking, smoother input profiles, as well as constraint-handling ability compared with the ANN-MPC with PI approach for two products in summer and winter. The proposed scheme is found to be versatile although it is based on a purely data-driven model with online parameter estimation.
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy
Robotics-based synthesis of human motion.
Khatib, O; Demircan, E; De Sapio, V; Sentis, L; Besier, T; Delp, S
2009-01-01
The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences, modeling of musculoskeletal kinematics, dynamics and actuation, and characterization of reliable performance criteria. Many of these processes have much in common with the problems found in robotics research. Task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion and (iii) new human performance metrics for dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved through the control of a simulated human model to follow the captured marker trajectories in real-time. The operational space control and real-time simulation provide human dynamics at any configuration of the performance. A new criteria of muscular effort minimization has been introduced to analyze human static postures. Extensive motion capture experiments were conducted to validate the new minimization criterion. Finally, new human performance metrics were introduced to study in details an athletic skill. These metrics include the effort expenditure and the feasible set of operational space accelerations during the performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle routing kinematics and force generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.
Heuts, Samuel; Maessen, Jos G; Sardari Nia, Peyman
2016-05-01
With the emergence of a new concept aimed at individualization of patient care, the focus will shift from whether a minimally invasive procedure is better than conventional treatment, to the question of which patients will benefit most from which technique? The superiority of minimally invasive valve surgery (MIVS) has not yet been proved. We believe that through better patient selection advantages of this technique can become more pronounced. In our current study, we evaluate the feasibility of 3D computed tomography (CT) imaging reconstruction in the preoperative planning of patients referred for MIVS. We retrospectively analysed all consecutive patients who were referred for minimally invasive mitral valve surgery (MIMVS) and minimally invasive aortic valve replacement (MIAVR) to a single surgeon in a tertiary referral centre for MIVS between March 2014 and 2015. Prospective preoperative planning was done for all patients and was based on evaluations by a multidisciplinary heart-team, an echocardiography, conventional CT images and 3D CT reconstruction models. A total of 39 patients were included in our study; 16 for mitral valve surgery (MVS) and 23 patients for aortic valve replacement (AVR). Eleven patients (69%) within the MVS group underwent MIMVS. Five patients (31%) underwent conventional MVS. Findings leading to exclusion for MIMVS were a tortuous or slender femoro-iliac tract, calcification of the aortic bifurcation, aortic elongation and pericardial calcifications. Furthermore, 2 patients had a change of operative strategy based on preoperative planning. Seventeen (74%) patients in the AVR group underwent MIAVR. Six patients (26%) underwent conventional AVR. Indications for conventional AVR instead of MIAVR were an elongated ascending aorta, ascending aortic calcification and ascending aortic dilatation. One patient (6%) in the MIAVR group was converted to a sternotomy due to excessive intraoperative bleeding. Two mortalities were reported during conventional MVS. There were no mortalities reported in the MIMVS, MIAVR or conventional AVR group. Preoperative planning of minimally invasive left-sided valve surgery with 3D CT reconstruction models is a useful and feasible method to determine operative strategy and exclude patients ineligible for a minimally invasive approach, thus potentially preventing complications. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Joint Geophysical Inversion With Multi-Objective Global Optimization Methods
NASA Astrophysics Data System (ADS)
Lelievre, P. G.; Bijani, R.; Farquharson, C. G.
2015-12-01
Pareto multi-objective global optimization (PMOGO) methods generate a suite of solutions that minimize multiple objectives (e.g. data misfits and regularization terms) in a Pareto-optimal sense. Providing a suite of models, as opposed to a single model that minimizes a weighted sum of objectives, allows a more complete assessment of the possibilities and avoids the often difficult choice of how to weight each objective. We are applying PMOGO methods to three classes of inverse problems. The first class are standard mesh-based problems where the physical property values in each cell are treated as continuous variables. The second class of problems are also mesh-based but cells can only take discrete physical property values corresponding to known or assumed rock units. In the third class we consider a fundamentally different type of inversion in which a model comprises wireframe surfaces representing contacts between rock units; the physical properties of each rock unit remain fixed while the inversion controls the position of the contact surfaces via control nodes. This third class of problem is essentially a geometry inversion, which can be used to recover the unknown geometry of a target body or to investigate the viability of a proposed Earth model. Joint inversion is greatly simplified for the latter two problem classes because no additional mathematical coupling measure is required in the objective function. PMOGO methods can solve numerically complicated problems that could not be solved with standard descent-based local minimization methods. This includes the latter two classes of problems mentioned above. There are significant increases in the computational requirements when PMOGO methods are used but these can be ameliorated using parallelization and problem dimension reduction strategies.
Construction of a pulse-coupled dipole network capable of fear-like and relief-like responses
NASA Astrophysics Data System (ADS)
Lungsi Sharma, B.
2016-07-01
The challenge for neuroscience as an interdisciplinary programme is the integration of ideas among the disciplines to achieve a common goal. This paper deals with the problem of deriving a pulse-coupled neural network that is capable of demonstrating behavioural responses (fear-like and relief-like). Current pulse-coupled neural networks are designed mostly for engineering applications, particularly image processing. The discovered neural network was constructed using the method of minimal anatomies approach. The behavioural response of a level-coded activity-based model was used as a reference. Although the spiking-based model and the activity-based model are of different scales, the use of model-reference principle means that the characteristics that is referenced is its functional properties. It is demonstrated that this strategy of dissection and systematic construction is effective in the functional design of pulse-coupled neural network system with nonlinear signalling. The differential equations for the elastic weights in the reference model are replicated in the pulse-coupled network geometrically. The network reflects a possible solution to the problem of punishment and avoidance. The network developed in this work is a new network topology for pulse-coupled neural networks. Therefore, the model-reference principle is a powerful tool in connecting neuroscience disciplines. The continuity of concepts and phenomena is further maintained by systematic construction using methods like the method of minimal anatomies.
Comparative evaluation of urban storm water quality models
NASA Astrophysics Data System (ADS)
Vaze, J.; Chiew, Francis H. S.
2003-10-01
The estimation of urban storm water pollutant loads is required for the development of mitigation and management strategies to minimize impacts to receiving environments. Event pollutant loads are typically estimated using either regression equations or "process-based" water quality models. The relative merit of using regression models compared to process-based models is not clear. A modeling study is carried out here to evaluate the comparative ability of the regression equations and process-based water quality models to estimate event diffuse pollutant loads from impervious surfaces. The results indicate that, once calibrated, both the regression equations and the process-based model can estimate event pollutant loads satisfactorily. In fact, the loads estimated using the regression equation as a function of rainfall intensity and runoff rate are better than the loads estimated using the process-based model. Therefore, if only estimates of event loads are required, regression models should be used because they are simpler and require less data compared to process-based models.
The maximum intelligible range of the human voice
NASA Astrophysics Data System (ADS)
Boren, Braxton
This dissertation examines the acoustics of the spoken voice at high levels and the maximum number of people that could hear such a voice unamplified in the open air. In particular, it examines an early auditory experiment by Benjamin Franklin which sought to determine the maximum intelligible crowd for the Anglican preacher George Whitefield in the eighteenth century. Using Franklin's description of the experiment and a noise source on Front Street, the geometry and diffraction effects of such a noise source are examined to more precisely pinpoint Franklin's position when Whitefield's voice ceased to be intelligible. Based on historical maps, drawings, and prints, the geometry and material of Market Street is constructed as a computer model which is then used to construct an acoustic cone tracing model. Based on minimal values of the Speech Transmission Index (STI) at Franklin's position, Whitefield's on-axis Sound Pressure Level (SPL) at 1 m is determined, leading to estimates centering around 90 dBA. Recordings are carried out on trained actors and singers to determine their maximum time-averaged SPL at 1 m. This suggests that the greatest average SPL achievable by the human voice is 90-91 dBA, similar to the median estimates for Whitefield's voice. The sites of Whitefield's largest crowds are acoustically modeled based on historical evidence and maps. Based on Whitefield's SPL, the minimal STI value, and the crowd's background noise, this allows a prediction of the minimally intelligible area for each site. These yield maximum crowd estimates of 50,000 under ideal conditions, while crowds of 20,000 to 30,000 seem more reasonable when the crowd was reasonably quiet and Whitefield's voice was near 90 dBA.
Search for the minimal standard model Higgs boson in e +e - collisions at LEP
NASA Astrophysics Data System (ADS)
Akrawy, M. Z.; Alexander, G.; Allison, J.; Allport, P. P.; Anderson, K. J.; Armitage, J. C.; Arnison, G. T. J.; Ashton, P.; Azuelos, G.; Baines, J. T. M.; Ball, A. H.; Banks, J.; Barker, G. J.; Barlow, R. J.; Batley, J. R.; Beck, A.; Becker, J.; Behnke, T.; Bell, K. W.; Bella, G.; Bethke, S.; Biebel, O.; Binder, U.; Bloodworth, I. J.; Bock, P.; Breuker, H.; Brown, R. M.; Brun, R.; Buijs, A.; Burckhart, H. J.; Capiluppi, P.; Carnegie, R. K.; Carter, A. A.; Carter, J. R.; Chang, C. Y.; Charlton, D. G.; Chrin, J. T. M.; Clarke, P. E. L.; Cohen, I.; Collins, W. J.; Conboy, J. E.; Couch, M.; Coupland, M.; Cuffiani, M.; Dado, S.; Dallavalle, G. M.; Debu, P.; Deninno, M. M.; Dieckman, A.; Dittmar, M.; Dixit, M. S.; Duchovni, E.; Duerdoth, I. P.; Dumas, D. J. P.; Elcombe, P. A.; Estabrooks, P. G.; Etzion, E.; Fabbri, F.; Farthouat, P.; Fischer, H. M.; Fong, D. G.; French, M. T.; Fukunaga, C.; Gaidot, A.; Ganel, O.; Gary, J. W.; Gascon, J.; Geddes, N. I.; Gee, C. N. P.; Geich-Gimbel, C.; Gensler, S. W.; Gentit, F. X.; Giacomelli, G.; Gibson, V.; Gibson, W. R.; Gillies, J. D.; Goldberg, J.; Goodrick, M. J.; Gorn, W.; Granite, D.; Gross, E.; Grunhaus, J.; Hagedorn, H.; Hagemann, J.; Hansroul, M.; Hargrove, C. K.; Harrus, I.; Hart, J.; Hattersley, P. M.; Hauschild, M.; Hawkes, C. M.; Heflin, E.; Hemingway, R. J.; Heuer, R. D.; Hill, J. C.; Hillier, S. J.; Ho, C.; Hobbs, J. D.; Hobson, P. R.; Hochman, D.; Holl, B.; Homer, R. J.; Hou, S. R.; Howarth, C. P.; Hughes-Jones, R. E.; Humbert, R.; Igo-Kemenes, P.; Ihssen, H.; Imrie, D. C.; Janissen, L.; Jawahery, A.; Jeffreys, P. W.; Jeremie, H.; Jimack, M.; Jobes, M.; Jones, R. W. L.; Jovanovic, P.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Kellogg, R. G.; Kennedy, B. W.; Kleinwort, C.; Klem, D. E.; Knop, G.; Kobayashi, T.; Kokott, T. P.; Köpke, L.; Kowalewski, R.; Kreutzmann, H.; Kroll, J.; Kuwano, M.; Kyberd, P.; Lafferty, G. D.; Lamarche, F.; Larson, W. J.; Layter, J. G.; Le Du, P.; Leblanc, P.; Lee, A. M.; Lehto, M. H.; Lellouch, D.; Lennert, P.; Lessard, L.; Levinson, L.; Lloyd, S. L.; Loebinger, F. K.; Lorah, J. M.; Lorazo, B.; Losty, M. J.; Ludwig, J.; Ma, J.; Macbeth, A. A.; Mannelli, M.; Marcellini, S.; Maringer, G.; Martin, A. J.; Martin, J. P.; Mashimo, T.; Mättig, P.; Maur, U.; McMahon, T. J.; McNutt, J. R.; Meijers, F.; Menszner, D.; Merritt, F. S.; Mes, H.; Michelini, A.; Middleton, R. P.; Mikenberg, G.; Mildenberger, J.; Miller, D. J.; Milstene, C.; Minowa, M.; Mohr, W.; Montanari, A.; Mori, T.; Moss, M. W.; Murphy, P. G.; Murray, W. J.; Nellen, B.; Nguyen, H. H.; Nozaki, M.; O'Dowd, A. J. P.; O'Neale, S. W.; O'Neill, B. P.; Oakham, F. G.; Odorici, F.; Ogg, M.; Oh, H.; Oreglia, M. J.; Orito, S.; Pansart, J. P.; Patrick, G. N.; Pawley, S. J.; Pfister, P.; Pilcher, J. E.; Pinfold, J. L.; Plane, D. E.; Poli, B.; Pouladdej, A.; Prebys, E.; Pritchard, T. W.; Quast, G.; Raab, J.; Redmond, M. W.; Rees, D. L.; Regimbald, M.; Riles, K.; Roach, C. M.; Robins, S. A.; Rollnik, A.; Roney, J. M.; Rossberg, S.; Rossi, A. M.; Routenburg, P.; Runge, K.; Runolfsson, O.; Sanghera, S.; Sansum, R. A.; Sasaki, M.; Saunders, B. J.; Schaile, A. D.; Schaile, O.; Schappert, W.; Scharff-Hansen, P.; Schreiber, S.; Schwarz, J.; Shapira, A.; Shen, B. C.; Sherwood, P.; Simon, A.; Singh, P.; Siroli, G. P.; Skuja, A.; Smith, A. M.; Smith, T. J.; Snow, G. A.; Springer, R. W.; Sproston, M.; Stephens, K.; Stier, H. E.; Stroehmer, R.; Strom, D.; Takeda, H.; Takeshita, T.; Taras, P.; Thackray, N. J.; Tsukamoto, T.; Turner, M. F.; Tysarczyk-Niemeyer, G.; Van den plas, D.; VanDalen, G. J.; Van Kooten, R.; Vasseur, G.; Virtue, C. J.; von der Schmitt, H.; von Krogh, J.; Wagner, A.; Wahl, C.; Walker, J. P.; Ward, C. P.; Ward, D. R.; Watkins, P. M.; Watson, A. T.; Watson, N. K.; Weber, M.; Weisz, S.; Wells, P. S.; Wermes, N.; Weymann, M.; Wilson, G. W.; Wilson, J. A.; Wingerter, I.; Winterer, V.-H.; Wood, N. C.; Wotton, S.; Wuensch, B.; Wyatt, T. R.; Yaari, R.; Yang, Y.; Yekutieli, G.; Yoshida, T.; Zeuner, W.; Zorn, G. T.; OPAL Collaboration
1991-01-01
A search for the minimal standard model Higgs boson (H 0) has been performed with data from e +e - collisions in the OPAL detector at LEP. The analysis is based on approximately 8 pb -1 of data taken at centre-of-mass energies between 88.2 and 95.0 GeV. The search concentrated on the reaction e+e-→( e+e-, μ +μ -, voverlinevor τ +τ -) H0, H0→( qoverlineqor τ +τ -) for Higgs boson masses above 25 GeV/ c2. No Higgs boson candidates have been observed. The present study, combined with previous OPAL publications, excludes the existence of a standard model Higgs boson with mass in the range 3< mH 0<44GeV/ c2 at the 95% confidence level.
NASA Astrophysics Data System (ADS)
Wang, Dongyang; Ba, Dechun; Hao, Ming; Duan, Qihui; Liu, Kun; Mei, Qi
2018-05-01
Pneumatic NC (normally closed) valves are widely used in high density microfluidics systems. To improve actuation reliability, the actuation pressure needs to be reduced. In this work, we utilize 3D FEM (finite element method) modelling to get an insight into the valve actuation process numerically. Specifically, the progressive debonding process at the elastomer interface is simulated with CZM (cohesive zone model) method. To minimize the actuation pressure, the V-shape design has been investigated and compared with a normal straight design. The geometrical effects of valve shape has been elaborated, in terms of valve actuation pressure. Based on our simulated results, we formulate the main concerns for micro valve design and fabrication, which is significant for minimizing actuation pressures and ensuring reliable operation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Y; Liu, B; Kalra, M
Purpose: X-rays from CT scans can increase cancer risk to patients. Lifetime Attributable Risk of Cancer Incidence for adult patients has been investigated and shown to decrease as patient age. However, a new risk model shows an increasing risk trend for several radiosensitive organs for middle age patients. This study investigates the feasibility of a general method for optimizing tube current modulation (TCM) functions to minimize risk by reducing radiation dose to radiosensitive organs of patients. Methods: Organ-based TCM has been investigated in literature for eye lens dose and breast dose. Adopting the concept in organ-based TCM, this study seeksmore » to find an optimized tube current for minimal total risk to breasts and lungs by reducing dose to these organs. The contributions of each CT view to organ dose are determined through simulations of CT scan view-by-view using a GPU-based fast Monte Carlo code, ARCHER. A Linear Programming problem is established for tube current optimization, with Monte Carlo results as weighting factors at each view. A pre-determined dose is used as upper dose boundary, and tube current of each view is optimized to minimize the total risk. Results: An optimized tube current is found to minimize the total risk of lungs and breasts: compared to fixed current, the risk is reduced by 13%, with breast dose reduced by 38% and lung dose reduced by 7%. The average tube current is maintained during optimization to maintain image quality. In addition, dose to other organs in chest region is slightly affected, with relative change in dose smaller than 10%. Conclusion: Optimized tube current plans can be generated to minimize cancer risk to lungs and breasts while maintaining image quality. In the future, various risk models and greater number of projections per rotation will be simulated on phantoms of different gender and age. National Institutes of Health R01EB015478.« less
Minimal Pleural Effusion in Small Cell Lung Cancer: Proportion, Mechanisms, and Prognostic Effect.
Ryu, Jeong-Seon; Lim, Jun Hyeok; Lee, Jeong Min; Kim, Woo Chul; Lee, Kyung-Hee; Memon, Azra; Lee, Seul-Ki; Yi, Bo-Rim; Kim, Hyun-Jung; Hwang, Seung-Sik
2016-02-01
To determine the frequency and investigate possible mechanisms and prognostic relevance of minimal (<10-mm thickness) pleural effusion in patients with small cell lung cancer (SCLC). The single-center retrospective study was approved by the institutional review board of the hospital, and informed consent was waived by the patients. A cohort of 360 consecutive patients diagnosed with SCLC by using histologic analysis was enrolled in this study. Based on the status of pleural effusion on chest computed tomographic (CT) scans at diagnosis, patients were classified into three groups: no pleural effusion, minimal pleural effusion, and malignant pleural effusion. Eighteen variables related to patient, environment, stage, and treatment were included in the final model as potential confounders. Minimal pleural effusion was present in 74 patients (20.6%) and malignant pleural effusion in 83 patients (23.0%). Median survival was significantly different in patients with no, minimal, or malignant pleural effusion (median survival, 11.2, 5.93, and 4.83 months, respectively; P < .001, log-rank test). In the fully adjusted final model, patients with minimal pleural effusion had a significantly increased risk of death compared with those with no pleural effusion (adjusted hazard ratio, 1.454 [95% confidence interval: 1.012, 2.090]; P = .001). The prognostic effect was significant in patients with stage I-III disease (adjusted hazard ratio, 2.751 [95% confidence interval: 1.586, 4.773]; P < .001), but it disappeared in stage IV disease. An indirect mechanism representing mediastinal lymphadenopathy was responsible for the accumulation in all but one patient with minimal pleural effusion. Minimal pleural effusion is a common clinical finding in staging SCLC. Its presence is associated with worse survival in patients and should be considered when CT scans are interpreted. © RSNA, 2015.
Leppin, Aaron L.; Montori, Victor M.; Gionfriddo, Michael R.
2015-01-01
An increasing proportion of healthcare resources in the United States are directed toward an expanding group of complex and multimorbid patients. Federal stakeholders have called for new models of care to meet the needs of these patients. Minimally Disruptive Medicine (MDM) is a theory-based, patient-centered, and context-sensitive approach to care that focuses on achieving patient goals for life and health while imposing the smallest possible treatment burden on patients’ lives. The MDM Care Model is designed to be pragmatically comprehensive, meaning that it aims to address any and all factors that impact the implementation and effectiveness of care for patients with multiple chronic conditions. It comprises core activities that map to an underlying and testable theoretical framework. This encourages refinement and future study. Here, we present the conceptual rationale for and a practical approach to minimally disruptive care for patients with multiple chronic conditions. We introduce some of the specific tools and strategies that can be used to identify the right care for these patients and to put it into practice. PMID:27417747
A Parameter Tuning Scheme of Sea-ice Model Based on Automatic Differentiation Technique
NASA Astrophysics Data System (ADS)
Kim, J. G.; Hovland, P. D.
2001-05-01
Automatic diferentiation (AD) technique was used to illustrate a new approach for parameter tuning scheme of an uncoupled sea-ice model. Atmospheric forcing field of 1992 obtained from NCEP data was used as enforcing variables in the study. The simulation results were compared with the observed ice movement provided by the International Arctic Buoy Programme (IABP). All of the numerical experiments were based on a widely used dynamic and thermodynamic model for simulating the seasonal sea-ice chnage of the main Arctic ocean. We selected five dynamic and thermodynamic parameters for the tuning process in which the cost function defined by the norm of the difference between observed and simulated ice drift locations was minimized. The selected parameters are the air and ocean drag coefficients, the ice strength constant, the turning angle at ice-air/ocean interface, and the bulk sensible heat transfer coefficient. The drag coefficients were the major parameters to control sea-ice movement and extent. The result of the study shows that more realistic simulations of ice thickness distribution was produced by tuning the simulated ice drift trajectories. In the tuning process, the L-BFCGS-B minimization algorithm of a quasi-Newton method was used. The derivative information required in the minimization iterations was provided by the AD processed Fortran code. Compared with a conventional approach, AD generated derivative code provided fast and robust computations of derivative information.
Sensorimotor Model of Obstacle Avoidance in Echolocating Bats
Vanderelst, Dieter; Holderied, Marc W.; Peremans, Herbert
2015-01-01
Bat echolocation is an ability consisting of many subtasks such as navigation, prey detection and object recognition. Understanding the echolocation capabilities of bats comes down to isolating the minimal set of acoustic cues needed to complete each task. For some tasks, the minimal cues have already been identified. However, while a number of possible cues have been suggested, little is known about the minimal cues supporting obstacle avoidance in echolocating bats. In this paper, we propose that the Interaural Intensity Difference (IID) and travel time of the first millisecond of the echo train are sufficient cues for obstacle avoidance. We describe a simple control algorithm based on the use of these cues in combination with alternating ear positions modeled after the constant frequency bat Rhinolophus rouxii. Using spatial simulations (2D and 3D), we show that simple phonotaxis can steer a bat clear from obstacles without performing a reconstruction of the 3D layout of the scene. As such, this paper presents the first computationally explicit explanation for obstacle avoidance validated in complex simulated environments. Based on additional simulations modelling the FM bat Phyllostomus discolor, we conjecture that the proposed cues can be exploited by constant frequency (CF) bats and frequency modulated (FM) bats alike. We hypothesize that using a low level yet robust cue for obstacle avoidance allows bats to comply with the hard real-time constraints of this basic behaviour. PMID:26502063
An Efficient Interactive Model for On-Demand Sensing-As-A-Servicesof Sensor-Cloud
Dinh, Thanh; Kim, Younghan
2016-01-01
This paper proposes an efficient interactive model for the sensor-cloud to enable the sensor-cloud to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economical benefits and how the proposed system enables a win-win model in the sensor-cloud. PMID:27367689
NASA Astrophysics Data System (ADS)
Ibrahim, Ireen Munira; Liong, Choong-Yeun; Bakar, Sakhinah Abu; Ahmad, Norazura; Najmuddin, Ahmad Farid
2017-04-01
Emergency department (ED) is the main unit of a hospital that provides emergency treatment. Operating 24 hours a day with limited number of resources invites more problems to the current chaotic situation in some hospitals in Malaysia. Delays in getting treatments that caused patients to wait for a long period of time are among the frequent complaints against government hospitals. Therefore, the ED management needs a model that can be used to examine and understand resource capacity which can assist the hospital managers to reduce patients waiting time. Simulation model was developed based on 24 hours data collection. The model developed using Arena simulation replicates the actual ED's operations of a public hospital in Selangor, Malaysia. The OptQuest optimization in Arena is used to find the possible combinations of a number of resources that can minimize patients waiting time while increasing the number of patients served. The simulation model was modified for improvement based on results from OptQuest. The improvement model significantly improves ED's efficiency with an average of 32% reduction in average patients waiting times and 25% increase in the total number of patients served.
He, Wensi; Yan, Fangyou; Jia, Qingzhu; Xia, Shuqian; Wang, Qiang
2018-03-01
The hazardous potential of ionic liquids (ILs) is becoming an issue of great concern due to their important role in many industrial fields as green agents. The mathematical model for the toxicological effects of ILs is useful for the risk assessment and design of environmentally benign ILs. The objective of this work is to develop QSAR models to describe the minimal inhibitory concentration (MIC) and minimal bactericidal concentration (MBC) of ILs against Staphylococcus aureus (S. aureus). A total of 169 and 101 ILs with MICs and MBCs, respectively, are used to obtain multiple linear regression models based on matrix norm indexes. The norm indexes used in this work are proposed by our research group and they are first applied to estimate the antibacterial toxicity of these ILs against S. aureus. These two models precisely and reliably calculated the IL toxicities with a square of correlation coefficient (R 2 ) of 0.919 and a standard error of estimate (SE) of 0.341 (in log unit of mM) for pMIC, and an R 2 of 0.913 and SE of 0.282 for pMBC. Copyright © 2017 Elsevier Ltd. All rights reserved.
An Efficient Interactive Model for On-Demand Sensing-As-A-Servicesof Sensor-Cloud.
Dinh, Thanh; Kim, Younghan
2016-06-28
This paper proposes an efficient interactive model for the sensor-cloud to enable the sensor-cloud to efficiently provide on-demand sensing services for multiple applications with different requirements at the same time. The interactive model is designed for both the cloud and sensor nodes to optimize the resource consumption of physical sensors, as well as the bandwidth consumption of sensing traffic. In the model, the sensor-cloud plays a key role in aggregating application requests to minimize the workloads required for constrained physical nodes while guaranteeing that the requirements of all applications are satisfied. Physical sensor nodes perform their sensing under the guidance of the sensor-cloud. Based on the interactions with the sensor-cloud, physical sensor nodes adapt their scheduling accordingly to minimize their energy consumption. Comprehensive experimental results show that our proposed system achieves a significant improvement in terms of the energy consumption of physical sensors, the bandwidth consumption from the sink node to the sensor-cloud, the packet delivery latency, reliability and scalability, compared to current approaches. Based on the obtained results, we discuss the economical benefits and how the proposed system enables a win-win model in the sensor-cloud.
On the design of script languages for neural simulation.
Brette, Romain
2012-01-01
In neural network simulators, models are specified according to a language, either specific or based on a general programming language (e.g. Python). There are also ongoing efforts to develop standardized languages, for example NeuroML. When designing these languages, efforts are often focused on expressivity, that is, on maximizing the number of model types than can be described and simulated. I argue that a complementary goal should be to minimize the cognitive effort required on the part of the user to use the language. I try to formalize this notion with the concept of "language entropy", and I propose a few practical guidelines to minimize the entropy of languages for neural simulation.
Petkevičiūtė, D; Pasi, M; Gonzalez, O; Maddocks, J H
2014-11-10
cgDNA is a package for the prediction of sequence-dependent configuration-space free energies for B-form DNA at the coarse-grain level of rigid bases. For a fragment of any given length and sequence, cgDNA calculates the configuration of the associated free energy minimizer, i.e. the relative positions and orientations of each base, along with a stiffness matrix, which together govern differences in free energies. The model predicts non-local (i.e. beyond base-pair step) sequence dependence of the free energy minimizer. Configurations can be input or output in either the Curves+ definition of the usual helical DNA structural variables, or as a PDB file of coordinates of base atoms. We illustrate the cgDNA package by comparing predictions of free energy minimizers from (a) the cgDNA model, (b) time-averaged atomistic molecular dynamics (or MD) simulations, and (c) NMR or X-ray experimental observation, for (i) the Dickerson-Drew dodecamer and (ii) three oligomers containing A-tracts. The cgDNA predictions are rather close to those of the MD simulations, but many orders of magnitude faster to compute. Both the cgDNA and MD predictions are in reasonable agreement with the available experimental data. Our conclusion is that cgDNA can serve as a highly efficient tool for studying structural variations in B-form DNA over a wide range of sequences. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
A minimal model of neutrino flavor
NASA Astrophysics Data System (ADS)
Luhn, Christoph; Parattu, Krishna Mohan; Wingerter, Akın
2012-12-01
Models of neutrino mass which attempt to describe the observed lepton mixing pattern are typically based on discrete family symmetries with a non-Abelian and one or more Abelian factors. The latter so-called shaping symmetries are imposed in order to yield a realistic phenomenology by forbidding unwanted operators. Here we propose a supersymmetric model of neutrino flavor which is based on the group T 7 and does not require extra {Z} N or U(1) factors in the Yukawa sector, which makes it the smallest realistic family symmetry that has been considered so far. At leading order, the model predicts tribimaximal mixing which arises completely accidentally from a combination of the T 7 Clebsch-Gordan coefficients and suitable flavon alignments. Next-to-leading order (NLO) operators break the simple tribimaximal structure and render the model compatible with the recent results of the Daya Bay and Reno collaborations which have measured a reactor angle of around 9°. Problematic NLO deviations of the other two mixing angles can be controlled in an ultraviolet completion of the model. The vacuum alignment mechanism that we use necessitates the introduction of a hidden flavon sector that transforms under a {Z} 6 symmetry, thereby spoiling the minimality of our model whose flavor symmetry is then T 7 × {Z} 6.
A toxicity cost function approach to optimal CPA equilibration in tissues.
Benson, James D; Higgins, Adam Z; Desai, Kunjan; Eroglu, Ali
2018-02-01
There is growing need for cryopreserved tissue samples that can be used in transplantation and regenerative medicine. While a number of specific tissue types have been successfully cryopreserved, this success is not general, and there is not a uniform approach to cryopreservation of arbitrary tissues. Additionally, while there are a number of long-established approaches towards optimizing cryoprotocols in single cell suspensions, and even plated cell monolayers, computational approaches in tissue cryopreservation have classically been limited to explanatory models. Here we develop a numerical approach to adapt cell-based CPA equilibration damage models for use in a classical tissue mass transport model. To implement this with real-world parameters, we measured CPA diffusivity in three human-sourced tissue types, skin, fibroid and myometrium, yielding propylene glycol diffusivities of 0.6 × 10 -6 cm 2 /s, 1.2 × 10 -6 cm 2 /s and 1.3 × 10 -6 cm 2 /s, respectively. Based on these results, we numerically predict and compare optimal multistep equilibration protocols that minimize the cell-based cumulative toxicity cost function and the damage due to excessive osmotic gradients at the tissue boundary. Our numerical results show that there are fundamental differences between protocols designed to minimize total CPA exposure time in tissues and protocols designed to minimize accumulated CPA toxicity, and that "one size fits all" stepwise approaches are predicted to be more toxic and take considerably longer than needed. Copyright © 2017 Elsevier Inc. All rights reserved.
Niazi, Muaz A
2014-01-01
The body structure of snakes is composed of numerous natural components thereby making it resilient, flexible, adaptive, and dynamic. In contrast, current computer animations as well as physical implementations of snake-like autonomous structures are typically designed to use either a single or a relatively smaller number of components. As a result, not only these artificial structures are constrained by the dimensions of the constituent components but often also require relatively more computationally intensive algorithms to model and animate. Still, these animations often lack life-like resilience and adaptation. This paper presents a solution to the problem of modeling snake-like structures by proposing an agent-based, self-organizing algorithm resulting in an emergent and surprisingly resilient dynamic structure involving a minimal of interagent communication. Extensive simulation experiments demonstrate the effectiveness as well as resilience of the proposed approach. The ideas originating from the proposed algorithm can not only be used for developing self-organizing animations but can also have practical applications such as in the form of complex, autonomous, evolvable robots with self-organizing, mobile components with minimal individual computational capabilities. The work also demonstrates the utility of exploratory agent-based modeling (EABM) in the engineering of artificial life-like complex adaptive systems.
Niazi, Muaz A.
2014-01-01
The body structure of snakes is composed of numerous natural components thereby making it resilient, flexible, adaptive, and dynamic. In contrast, current computer animations as well as physical implementations of snake-like autonomous structures are typically designed to use either a single or a relatively smaller number of components. As a result, not only these artificial structures are constrained by the dimensions of the constituent components but often also require relatively more computationally intensive algorithms to model and animate. Still, these animations often lack life-like resilience and adaptation. This paper presents a solution to the problem of modeling snake-like structures by proposing an agent-based, self-organizing algorithm resulting in an emergent and surprisingly resilient dynamic structure involving a minimal of interagent communication. Extensive simulation experiments demonstrate the effectiveness as well as resilience of the proposed approach. The ideas originating from the proposed algorithm can not only be used for developing self-organizing animations but can also have practical applications such as in the form of complex, autonomous, evolvable robots with self-organizing, mobile components with minimal individual computational capabilities. The work also demonstrates the utility of exploratory agent-based modeling (EABM) in the engineering of artificial life-like complex adaptive systems. PMID:24701135
Concentric Tube Robot Design and Optimization Based on Task and Anatomical Constraints
Bergeles, Christos; Gosline, Andrew H.; Vasilyev, Nikolay V.; Codd, Patrick J.; del Nido, Pedro J.; Dupont, Pierre E.
2015-01-01
Concentric tube robots are catheter-sized continuum robots that are well suited for minimally invasive surgery inside confined body cavities. These robots are constructed from sets of pre-curved superelastic tubes and are capable of assuming complex 3D curves. The family of 3D curves that the robot can assume depends on the number, curvatures, lengths and stiffnesses of the tubes in its tube set. The robot design problem involves solving for a tube set that will produce the family of curves necessary to perform a surgical procedure. At a minimum, these curves must enable the robot to smoothly extend into the body and to manipulate tools over the desired surgical workspace while respecting anatomical constraints. This paper introduces an optimization framework that utilizes procedureor patient-specific image-based anatomical models along with surgical workspace requirements to generate robot tube set designs. The algorithm searches for designs that minimize robot length and curvature and for which all paths required for the procedure consist of stable robot configurations. Two mechanics-based kinematic models are used. Initial designs are sought using a model assuming torsional rigidity. These designs are then refined using a torsionally-compliant model. The approach is illustrated with clinically relevant examples from neurosurgery and intracardiac surgery. PMID:26380575
Ding, Jinliang; Chai, Tianyou; Wang, Hong
2011-03-01
This paper presents a novel offline modeling for product quality prediction of mineral processing which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role and the establishment of its predictive model is a key issue for the plantwide optimization. For this purpose, a hybrid modeling approach of the mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, the model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both the PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea is used to deal with system modeling, where the key idea is to turn model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. The experimental results using the real plant data and the comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.
Yang, Laurence; Tan, Justin; O'Brien, Edward J; Monk, Jonathan M; Kim, Donghyuk; Li, Howard J; Charusanti, Pep; Ebrahim, Ali; Lloyd, Colton J; Yurkovich, James T; Du, Bin; Dräger, Andreas; Thomas, Alex; Sun, Yuekai; Saunders, Michael A; Palsson, Bernhard O
2015-08-25
Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed by using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood at the systems-level, and provides a basis for computing essential cell functions is lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma genitalium). Based on transcriptomics data across environmental and genetic backgrounds, the systems biology core proteome is significantly enriched in nondifferentially expressed genes and depleted in differentially expressed genes. Compared with the noncore, core gene expression levels are also similar across genetic backgrounds (two times higher Spearman rank correlation) and exhibit significantly more complex transcriptional and posttranscriptional regulatory features (40% more transcription start sites per gene, 22% longer 5'UTR). Thus, genome-scale systems biology approaches rigorously identify a functional core proteome needed to support growth. This framework, validated by using high-throughput datasets, facilitates a mechanistic understanding of systems-level core proteome function through in silico models; it de facto defines a paleome.
Optimal Partitioning of a Data Set Based on the "p"-Median Model
ERIC Educational Resources Information Center
Brusco, Michael J.; Kohn, Hans-Friedrich
2008-01-01
Although the "K"-means algorithm for minimizing the within-cluster sums of squared deviations from cluster centroids is perhaps the most common method for applied cluster analyses, a variety of other criteria are available. The "p"-median model is an especially well-studied clustering problem that requires the selection of "p" objects to serve as…
A Technical Description of the Officer Procurement Model (TOPOPS). Final Report.
ERIC Educational Resources Information Center
Akman, Allan; And Others
The Total Objective Plan for the Officer Procurement System (TOPOPS) is an aggregate-level, computer-based model of the Air Force Officer procurement system developed to operate on the UNIVAC 1108 system. It is designed to simulate officer accession and training and achieve optimal solutions in terms of either cost minimization or accession…
Trabelsi, Heykel; Koch, Mathilde; Faulon, Jean-Loup
2018-05-07
Progress in synthetic biology tools has transformed the way we engineer living cells. Applications of circuit design have reached a new level, offering solutions for metabolic engineering challenges that include developing screening approaches for libraries of pathway variants. The use of transcription-factor-based biosensors for screening has shown promising results, but the quantitative relationship between the sensors and the sensed molecules still needs more rational understanding. Herein, we have successfully developed a novel biosensor to detect pinocembrin based on a transcriptional regulator. The FdeR transcription factor (TF), known to respond to naringenin, was combined with a fluorescent reporter protein. By varying the copy number of its plasmid and the concentration of the biosensor TF through a combinatorial library, different responses have been recorded and modeled. The fitted model provides a tool to understand the impact of these parameters on the biosensor behavior in terms of dose-response and time curves and offers guidelines to build constructs oriented to increased sensitivity and or ability of linear detection at higher titers. Our model, the first to explicitly take into account the impact of plasmid copy number on biosensor sensitivity using Hill-based formalism, is able to explain uncharacterized systems without extensive knowledge of the properties of the TF. Moreover, it can be used to model the response of the biosensor to different compounds (here naringenin and pinocembrin) with minimal parameter refitting. © 2018 Wiley Periodicals, Inc.
Czaplewski, Cezary; Karczynska, Agnieszka; Sieradzan, Adam K; Liwo, Adam
2018-04-30
A server implementation of the UNRES package (http://www.unres.pl) for coarse-grained simulations of protein structures with the physics-based UNRES model, coined a name UNRES server, is presented. In contrast to most of the protein coarse-grained models, owing to its physics-based origin, the UNRES force field can be used in simulations, including those aimed at protein-structure prediction, without ancillary information from structural databases; however, the implementation includes the possibility of using restraints. Local energy minimization, canonical molecular dynamics simulations, replica exchange and multiplexed replica exchange molecular dynamics simulations can be run with the current UNRES server; the latter are suitable for protein-structure prediction. The user-supplied input includes protein sequence and, optionally, restraints from secondary-structure prediction or small x-ray scattering data, and simulation type and parameters which are selected or typed in. Oligomeric proteins, as well as those containing D-amino-acid residues and disulfide links can be treated. The output is displayed graphically (minimized structures, trajectories, final models, analysis of trajectory/ensembles); however, all output files can be downloaded by the user. The UNRES server can be freely accessed at http://unres-server.chem.ug.edu.pl.
An improved car-following model considering headway changes with memory
NASA Astrophysics Data System (ADS)
Yu, Shaowei; Shi, Zhongke
2015-03-01
To describe car-following behaviors in complex situations better, increase roadway traffic mobility and minimize cars' fuel consumptions, the linkage between headway changes with memory and car-following behaviors was explored with the field car-following data by using the gray correlation analysis method, and then an improved car-following model considering headway changes with memory on a single lane was proposed based on the full velocity difference model. Some numerical simulations were carried out by employing the improved car-following model to explore how headway changes with memory affected each car's velocity, acceleration, headway and fuel consumptions. The research results show that headway changes with memory have significant effects on car-following behaviors and fuel consumptions and that considering headway changes with memory in designing the adaptive cruise control strategy can improve the traffic flow stability and minimize cars' fuel consumptions.
Wang, Jing; Li, Tianfang; Lu, Hongbing; Liang, Zhengrong
2006-01-01
Reconstructing low-dose X-ray CT (computed tomography) images is a noise problem. This work investigated a penalized weighted least-squares (PWLS) approach to address this problem in two dimensions, where the WLS considers first- and second-order noise moments and the penalty models signal spatial correlations. Three different implementations were studied for the PWLS minimization. One utilizes a MRF (Markov random field) Gibbs functional to consider spatial correlations among nearby detector bins and projection views in sinogram space and minimizes the PWLS cost function by iterative Gauss-Seidel algorithm. Another employs Karhunen-Loève (KL) transform to de-correlate data signals among nearby views and minimizes the PWLS adaptively to each KL component by analytical calculation, where the spatial correlation among nearby bins is modeled by the same Gibbs functional. The third one models the spatial correlations among image pixels in image domain also by a MRF Gibbs functional and minimizes the PWLS by iterative successive over-relaxation algorithm. In these three implementations, a quadratic functional regularization was chosen for the MRF model. Phantom experiments showed a comparable performance of these three PWLS-based methods in terms of suppressing noise-induced streak artifacts and preserving resolution in the reconstructed images. Computer simulations concurred with the phantom experiments in terms of noise-resolution tradeoff and detectability in low contrast environment. The KL-PWLS implementation may have the advantage in terms of computation for high-resolution dynamic low-dose CT imaging. PMID:17024831
Search for high-mass dilepton resonances in p p collisions at s = 8 TeV with the ATLAS detector
Aad, G.; Abbott, B.; Abdallah, J.; ...
2014-09-19
Here, the ATLAS detector at the Large Hadron Collider is used to search for high-mass resonances decaying to dielectron or dimuon final states. Results are presented from an analysis of proton-proton (pp) collisions at a center-of-mass energy of 8 TeV corresponding to an integrated luminosity of 20.3 fb –1 in the dimuon channel. A narrow resonance with Standard Model Z couplings to fermions is excluded at 95% confidence level for masses less than 2.79 TeV in the dielectron channel, 2.53 TeV in the dimuon channel, and 2.90 TeV in the two channels combined. Limits on other model interpretations are alsomore » presented, including a grand-unification model based on the E 6 gauge group, Z* bosons, minimal Z' models, a spin-2 graviton excitation from Randall-Sundrum models, quantum black holes, and a minimal walking technicolor model with a composite Higgs boson.« less
Analysis of an optimization-based atomistic-to-continuum coupling method for point defects
Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...
2015-11-16
Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.
Nallasivam, Ulaganathan; Shah, Vishesh H.; Shenvi, Anirudh A.; ...
2016-02-10
We present a general Global Minimization Algorithm (GMA) to identify basic or thermally coupled distillation configurations that require the least vapor duty under minimum reflux conditions for separating any ideal or near-ideal multicomponent mixture into a desired number of product streams. In this algorithm, global optimality is guaranteed by modeling the system using Underwood equations and reformulating the resulting constraints to bilinear inequalities. The speed of convergence to the globally optimal solution is increased by using appropriate feasibility and optimality based variable-range reduction techniques and by developing valid inequalities. As a result, the GMA can be coupled with already developedmore » techniques that enumerate basic and thermally coupled distillation configurations, to provide for the first time, a global optimization based rank-list of distillation configurations.« less
NASA Astrophysics Data System (ADS)
Liu, Hongcheng; Dong, Peng; Xing, Lei
2017-08-01
{{\\ell }2,1} -minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate/reduce the nontrivial inconsistencies between the {{\\ell }2,1} -based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution to the {{\\ell }2,1} -minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and the computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the {{\\ell }2,1} -minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using the {{\\ell }2,1} -minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.
Liu, Hongcheng; Dong, Peng; Xing, Lei
2017-07-20
[Formula: see text]-minimization-based sparse optimization was employed to solve the beam angle optimization (BAO) in intensity-modulated radiation therapy (IMRT) planning. The technique approximates the exact BAO formulation with efficiently computable convex surrogates, leading to plans that are inferior to those attainable with recently proposed gradient-based greedy schemes. In this paper, we alleviate/reduce the nontrivial inconsistencies between the [Formula: see text]-based formulations and the exact BAO model by proposing a new sparse optimization framework based on the most recent developments in group variable selection. We propose the incorporation of the group-folded concave penalty (gFCP) as a substitution to the [Formula: see text]-minimization framework. The new formulation is then solved by a variation of an existing gradient method. The performance of the proposed scheme is evaluated by both plan quality and the computational efficiency using three IMRT cases: a coplanar prostate case, a coplanar head-and-neck case, and a noncoplanar liver case. Involved in the evaluation are two alternative schemes: the [Formula: see text]-minimization approach and the gradient norm method (GNM). The gFCP-based scheme outperforms both counterpart approaches. In particular, gFCP generates better plans than those obtained using the [Formula: see text]-minimization for all three cases with a comparable computation time. As compared to the GNM, the gFCP improves both the plan quality and computational efficiency. The proposed gFCP-based scheme provides a promising framework for BAO and promises to improve both planning time and plan quality.
NASA Astrophysics Data System (ADS)
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg
2016-02-01
We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
Boutagy, Nabil E; Rogers, George W; Pyne, Emily S; Ali, Mostafa M; Hulver, Matthew W; Frisard, Madlyn I
2015-10-30
Skeletal muscle mitochondria play a specific role in many disease pathologies. As such, the measurement of oxygen consumption as an indicator of mitochondrial function in this tissue has become more prevalent. Although many technologies and assays exist that measure mitochondrial respiratory pathways in a variety of cells, tissue and species, there is currently a void in the literature in regards to the compilation of these assays using isolated mitochondria from mouse skeletal muscle for use in microplate based technologies. Importantly, the use of microplate based respirometric assays is growing among mitochondrial biologists as it allows for high throughput measurements using minimal quantities of isolated mitochondria. Therefore, a collection of microplate based respirometric assays were developed that are able to assess mechanistic changes/adaptations in oxygen consumption in a commonly used animal model. The methods presented herein provide step-by-step instructions to perform these assays with an optimal amount of mitochondrial protein and reagents, and high precision as evidenced by the minimal variance across the dynamic range of each assay.
3D High Resolution Mesh Deformation Based on Multi Library Wavelet Neural Network Architecture
NASA Astrophysics Data System (ADS)
Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Amar, Chokri Ben
2016-12-01
This paper deals with the features of a novel technique for large Laplacian boundary deformations using estimated rotations. The proposed method is based on a Multi Library Wavelet Neural Network structure founded on several mother wavelet families (MLWNN). The objective is to align features of mesh and minimize distortion with a fixed feature that minimizes the sum of the distances between all corresponding vertices. New mesh deformation method worked in the domain of Region of Interest (ROI). Our approach computes deformed ROI, updates and optimizes it to align features of mesh based on MLWNN and spherical parameterization configuration. This structure has the advantage of constructing the network by several mother wavelets to solve high dimensions problem using the best wavelet mother that models the signal better. The simulation test achieved the robustness and speed considerations when developing deformation methodologies. The Mean-Square Error and the ratio of deformation are low compared to other works from the state of the art. Our approach minimizes distortions with fixed features to have a well reconstructed object.
Attaining minimally disruptive medicine: context, challenges and a roadmap for implementation.
Shippee, N D; Allen, S V; Leppin, A L; May, C R; Montori, V M
2015-01-01
In this second of two papers on minimally disruptive medicine, we use the language of patient workload and patient capacity from the Cumulative Complexity Model to accomplish three tasks. First, we outline the current context in healthcare, comprised of contrasting problems: some people lack access to care and others receive too much care in an overmedicalised system, both of which reflect imbalances between patients' workloads and their capacity. Second, we identify and address five tensions and challenges between minimally disruptive medicine, the existing context, and other approaches to accessible and patient-centred care such as evidence-based medicine and greater patient engagement. Third, we outline a roadmap of three strategies toward implementing minimally disruptive medicine in practice, including large-scale paradigm shifts, mid-level add-ons to existing reform efforts, and a modular strategy using an existing 'toolkit' that is more limited in scope, but can fit into existing healthcare systems.
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10 &sup-11; compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different than theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10-11 and invasion time error to <5%.
Chipman, Jonathan J; Sanda, Martin G; Dunn, Rodney L; Wei, John T; Litwin, Mark S; Crociani, Catrina M; Regan, Meredith M; Chang, Peter
2014-03-01
We expanded the clinical usefulness of EPIC-CP (Expanded Prostate Cancer Index Composite for Clinical Practice) by evaluating its responsiveness to health related quality of life changes, defining the minimally important differences for an individual patient change in each domain and applying it to a sexual outcome prediction model. In 1,201 subjects from a previously described multicenter longitudinal cohort we modeled the EPIC-CP domain scores of each treatment group before treatment, and at short-term and long-term followup. We considered a posttreatment domain score change from pretreatment of 0.5 SD or greater clinically significant and p ≤ 0.01 statistically significant. We determined the domain minimally important differences using the pooled 0.5 SD of the 2, 6, 12 and 24-month posttreatment changes from pretreatment values. We then recalibrated an EPIC-CP based nomogram model predicting 2-year post-prostatectomy functional erection from that developed using EPIC-26. For each health related quality of life domain EPIC-CP was sensitive to similar posttreatment health related quality of life changes with time, as was observed using EPIC-26. The EPIC-CP minimally important differences in changes in the urinary incontinence, urinary irritation/obstruction, bowel, sexual and vitality/hormonal domains were 1.0, 1.3, 1.2, 1.6 and 1.0, respectively. The EPIC-CP based sexual prediction model performed well (AUC 0.76). It showed robust agreement with its EPIC-26 based counterpart with 10% or less predicted probability differences between models in 95% of individuals and a mean ± SD difference of 0.0 ± 0.05 across all individuals. EPIC-CP is responsive to health related quality of life changes during convalescence and it can be used to predict 2-year post-prostatectomy sexual outcomes. It can facilitate shared medical decision making and patient centered care. Copyright © 2014 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
Zhao, Jinzhe; Zhao, Qi; Jiang, Yingxu; Li, Weitao; Yang, Yamin; Qian, Zhiyu; Liu, Jia
2018-06-01
Liver thermal ablation techniques have been widely used for the treatment of liver cancer. Kinetic model of damage propagation play an important role for ablation prediction and real-time efficacy assessment. However, practical methods for modeling liver thermal damage are rare. A minimally invasive optical method especially adequate for in situ liver thermal damage modeling is introduced in this paper. Porcine liver tissue was heated by water bath under different temperatures. During thermal treatment, diffuse reflectance spectrum of liver was measured by optical fiber and used to deduce reduced scattering coefficient (μ ' s ). Arrhenius parameters were obtained through non-isothermal heating approach with damage marker of μ ' s . Activation energy (E a ) and frequency factor (A) was deduced from these experiments. A pair of averaged value is 1.200 × 10 5 J mol -1 and 4.016 × 10 17 s -1 . The results were verified for their reasonableness and practicality. Therefore, it is feasible to modeling liver thermal damage based on minimally invasive measurement of optical property and in situ kinetic analysis of damage progress with Arrhenius model. These parameters and this method are beneficial for preoperative planning and real-time efficacy assessment of liver ablation therapy. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Burgner, J.; Simpson, A. L.; Fitzpatrick, J. M.; Lathrop, R. A.; Herrell, S. D.; Miga, M. I.; Webster, R. J.
2013-01-01
Background Registered medical images can assist with surgical navigation and enable image-guided therapy delivery. In soft tissues, surface-based registration is often used and can be facilitated by laser surface scanning. Tracked conoscopic holography (which provides distance measurements) has been recently proposed as a minimally invasive way to obtain surface scans. Moving this technique from concept to clinical use requires a rigorous accuracy evaluation, which is the purpose of our paper. Methods We adapt recent non-homogeneous and anisotropic point-based registration results to provide a theoretical framework for predicting the accuracy of tracked distance measurement systems. Experiments are conducted a complex objects of defined geometry, an anthropomorphic kidney phantom and a human cadaver kidney. Results Experiments agree with model predictions, producing point RMS errors consistently < 1 mm, surface-based registration with mean closest point error < 1 mm in the phantom and a RMS target registration error of 0.8 mm in the human cadaver kidney. Conclusions Tracked conoscopic holography is clinically viable; it enables minimally invasive surface scan accuracy comparable to current clinical methods that require open surgery. PMID:22761086
The Dominant Folding Route Minimizes Backbone Distortion in SH3
Lammert, Heiko; Noel, Jeffrey K.; Onuchic, José N.
2012-01-01
Energetic frustration in protein folding is minimized by evolution to create a smooth and robust energy landscape. As a result the geometry of the native structure provides key constraints that shape protein folding mechanisms. Chain connectivity in particular has been identified as an essential component for realistic behavior of protein folding models. We study the quantitative balance of energetic and geometrical influences on the folding of SH3 in a structure-based model with minimal energetic frustration. A decomposition of the two-dimensional free energy landscape for the folding reaction into relevant energy and entropy contributions reveals that the entropy of the chain is not responsible for the folding mechanism. Instead the preferred folding route through the transition state arises from a cooperative energetic effect. Off-pathway structures are penalized by excess distortion in local backbone configurations and contact pair distances. This energy cost is a new ingredient in the malleable balance of interactions that controls the choice of routes during protein folding. PMID:23166485
A minimal model of epithelial tissue dynamics and its application to the corneal epithelium
NASA Astrophysics Data System (ADS)
Henkes, Silke; Matoz-Fernandez, Daniel; Kostanjevec, Kaja; Coburn, Luke; Sknepnek, Rastko; Collinson, J. Martin; Martens, Kirsten
Epithelial cell sheets are characterized by a complex interplay of active drivers, including cell motility, cell division and extrusion. Here we construct a particle-based minimal model tissue with only division/death dynamics and show that it always corresponds to a liquid state with a single dynamic time scale set by the division rate, and that no glassy phase is possible. Building on this, we construct an in-silico model of the mammalian corneal epithelium as such a tissue confined to a hemisphere bordered by the limbal stem cell zone. With added cell motility dynamics we are able to explain the steady-state spiral migration on the cornea, including the central vortex defect, and quantitatively compare it to eyes obtained from mice that are X-inactivation mosaic for LacZ.
FEM Modeling of a Magnetoelectric Transducer for Autonomous Micro Sensors in Medical Application
NASA Astrophysics Data System (ADS)
Yang, Gang; Talleb, Hakeim; Gensbittel, Aurélie; Ren, Zhuoxiang
2015-11-01
In the context of wireless and autonomous sensors, this paper presents the multiphysics modeling of an energy transducer based on magnetoelectric (ME) composite for biomedical applications. The study considers the power requirement of an implanted sensor, the communication distance, the size limit of the device for minimal invasive insertion as well as the electromagnetic exposure restriction of the human body. To minimize the electromagnetic absorption by the human body, the energy source is provided by an external reader emitting low frequency magnetic field. The modeling is carried out with the finite element method by solving simultaneously the multiple physics problems including the electric load of the conditioning circuit. The simulation results show that with the T-L mode of a trilayer laminated ME composite, the transducer can deliver the required energy in respecting different constraints.
On the nullspace of TLS multi-station adjustment
NASA Astrophysics Data System (ADS)
Sterle, Oskar; Kogoj, Dušan; Stopar, Bojan; Kregar, Klemen
2018-07-01
In the article we present an analytic aspect of TLS multi-station least-squares adjustment with the main focus on the datum problem. The datum problem is, compared to previously published researches, theoretically analyzed and solved, where the solution is based on nullspace derivation of the mathematical model. The importance of datum problem solution is seen in a complete description of TLS multi-station adjustment solutions from a set of all minimally constrained least-squares solutions. On a basis of known nullspace, estimable parameters are described and the geometric interpretation of all minimally constrained least squares solutions is presented. At the end a simulated example is used to analyze the results of TLS multi-station minimally constrained and inner constrained least-squares adjustment solutions.
Sparse RNA folding revisited: space-efficient minimum free energy structure prediction.
Will, Sebastian; Jabbari, Hosna
2016-01-01
RNA secondary structure prediction by energy minimization is the central computational tool for the analysis of structural non-coding RNAs and their interactions. Sparsification has been successfully applied to improve the time efficiency of various structure prediction algorithms while guaranteeing the same result; however, for many such folding problems, space efficiency is of even greater concern, particularly for long RNA sequences. So far, space-efficient sparsified RNA folding with fold reconstruction was solved only for simple base-pair-based pseudo-energy models. Here, we revisit the problem of space-efficient free energy minimization. Whereas the space-efficient minimization of the free energy has been sketched before, the reconstruction of the optimum structure has not even been discussed. We show that this reconstruction is not possible in trivial extension of the method for simple energy models. Then, we present the time- and space-efficient sparsified free energy minimization algorithm SparseMFEFold that guarantees MFE structure prediction. In particular, this novel algorithm provides efficient fold reconstruction based on dynamically garbage-collected trace arrows. The complexity of our algorithm depends on two parameters, the number of candidates Z and the number of trace arrows T; both are bounded by [Formula: see text], but are typically much smaller. The time complexity of RNA folding is reduced from [Formula: see text] to [Formula: see text]; the space complexity, from [Formula: see text] to [Formula: see text]. Our empirical results show more than 80 % space savings over RNAfold [Vienna RNA package] on the long RNAs from the RNA STRAND database (≥2500 bases). The presented technique is intentionally generalizable to complex prediction algorithms; due to their high space demands, algorithms like pseudoknot prediction and RNA-RNA-interaction prediction are expected to profit even stronger than "standard" MFE folding. SparseMFEFold is free software, available at http://www.bioinf.uni-leipzig.de/~will/Software/SparseMFEFold.
Dense mesh sampling for video-based facial animation
NASA Astrophysics Data System (ADS)
Peszor, Damian; Wojciechowska, Marzena
2016-06-01
The paper describes an approach for selection of feature points on three-dimensional, triangle mesh obtained using various techniques from several video footages. This approach has a dual purpose. First, it allows to minimize the data stored for the purpose of facial animation, so that instead of storing position of each vertex in each frame, one could store only a small subset of vertices for each frame and calculate positions of others based on the subset. Second purpose is to select feature points that could be used for anthropometry-based retargeting of recorded mimicry to another model, with sampling density beyond that which can be achieved using marker-based performance capture techniques. Developed approach was successfully tested on artificial models, models constructed using structured light scanner, and models constructed from video footages using stereophotogrammetry.
ERIC Educational Resources Information Center
Davis, Laurie Laughlin; Pastor, Dena A.; Dodd, Barbara G.; Chiang, Claire; Fitzpatrick, Steven J.
2003-01-01
Examined the effectiveness of the Sympson-Hetter technique and rotated content balancing relative to no exposure control and no content rotation conditions in a computerized adaptive testing system based on the partial credit model. Simulation results show the Sympson-Hetter technique can be used with minimal impact on measurement precision,…
Yohan Lee; Jeremy S. Fried; Heidi J. Albers; Robert G. Haight
2013-01-01
We combine a scenario-based, standard-response optimization model with stochastic simulation to improve the efficiency of resource deployment for initial attack on wildland fires in three planning units in California. The optimization model minimizes the expected number of fires that do not receive a standard response--defined as the number of resources by type that...
A supercritical airfoil experiment
NASA Technical Reports Server (NTRS)
Mateer, G. G.; Seegmiller, H. L.; Hand, L. A.; Szodruck, J.
1994-01-01
The purpose of this investigation is to provide a comprehensive data base for the validation of numerical simulations. The objective of the present paper is to provide a tabulation of the experimental data. The data were obtained in the two-dimensional, transonic flowfield surrounding a supercritical airfoil. A variety of flows were studied in which the boundary layer at the trailing edge of the model was either attached or separated. Unsteady flows were avoided by controlling the Mach number and angle of attack. Surface pressures were measured on both the model and wind tunnel walls, and the flowfield surrounding the model was documented using a laser Doppler velocimeter (LDV). Although wall interference could not be completely eliminated, its effect was minimized by employing the following techniques. Sidewall boundary layers were reduced by aspiration, and upper and lower walls were contoured to accommodate the flow around the model and the boundary-layer growth on the tunnel walls. A data base with minimal interference from a tunnel with solid walls provides an ideal basis for evaluating the development of codes for the transonic speed range because the codes can include the wall boundary conditions more precisely than interference connections can be made to the data sets.
Chatrchyan, S; Khachatryan, V; Sirunyan, A M; Tumasyan, A; Adam, W; Bergauer, T; Dragicevic, M; Erö, J; Fabjan, C; Friedl, M; Frühwirth, R; Ghete, V M; Hammer, J; Hänsel, S; Hoch, M; Hörmann, N; Hrubec, J; Jeitler, M; Kasieczka, G; Kiesenhofer, W; Krammer, M; Liko, D; Mikulec, I; Pernicka, M; Rohringer, H; Schöfbeck, R; Strauss, J; Teischinger, F; Wagner, P; Waltenberger, W; Walzel, G; Widl, E; Wulz, C-E; Mossolov, V; Shumeiko, N; Suarez Gonzalez, J; Benucci, L; De Wolf, E A; Janssen, X; Maes, T; Mucibello, L; Ochesanu, S; Roland, B; Rougny, R; Selvaggi, M; Van Haevermaet, H; Van Mechelen, P; Van Remortel, N; Blekman, F; Blyweert, S; D'Hondt, J; Devroede, O; Gonzalez Suarez, R; Kalogeropoulos, A; Maes, J; Maes, M; Van Doninck, W; Van Mulders, P; Van Onsem, G P; Villella, I; Charaf, O; Clerbaux, B; De Lentdecker, G; Dero, V; Gay, A P R; Hammad, G H; Hreus, T; Marage, P E; Thomas, L; Vander Velde, C; Vanlaer, P; Adler, V; Cimmino, A; Costantini, S; Grunewald, M; Klein, B; Lellouch, J; Marinov, A; McCartin, J; Ryckbosch, D; Thyssen, F; Tytgat, M; Vanelderen, L; Verwilligen, P; Walsh, S; Zaganidis, N; Basegmez, S; Bruno, G; Caudron, J; Ceard, L; Cortina Gil, E; De Favereau De Jeneret, J; Delaere, C; Favart, D; Giammanco, A; Grégoire, G; Hollar, J; Lemaitre, V; Liao, J; Militaru, O; Ovyn, S; Pagano, D; Pin, A; Piotrzkowski, K; Schul, N; Beliy, N; Caebergs, T; Daubie, E; Alves, G A; Damiao, D De Jesus; Pol, M E; Souza, M H G; Carvalho, W; Da Costa, E M; Martins, C De Oliveira; De Souza, S Fonseca; Mundim, L; Nogima, H; Oguri, V; Da Silva, W L Prado; Santoro, A; Do Amaral, S M Silva; Sznajder, A; De Araujo, F Torres Da Silva; Dias, F A; Tomei, T R Fernandez Perez; Gregores, E M; Lagana, C; Marinho, F; Mercadante, P G; Novaes, S F; Padula, Sandra S; Darmenov, N; Dimitrov, L; Genchev, V; Iaydjiev, P; Piperov, S; Rodozov, M; Stoykova, S; Sultanov, G; Tcholakov, V; Trayanov, R; Vankov, I; Dimitrov, A; Hadjiiska, R; Karadzhinova, A; Kozhuharov, V; Litov, L; Mateev, M; Pavlov, B; Petkov, P; Bian, J G; Chen, G M; Chen, H S; Jiang, C H; Liang, D; Liang, S; Meng, X; Tao, J; Wang, J; Wang, J; Wang, X; Wang, Z; Xiao, H; Xu, M; Zang, J; Zhang, Z; Ban, Y; Guo, S; Guo, Y; Li, W; Mao, Y; Qian, S J; Teng, H; Zhang, L; Zhu, B; Zou, W; Cabrera, A; Moreno, B Gomez; Rios, A A Ocampo; Oliveros, A F Osorio; Sanabria, J C; Godinovic, N; Lelas, D; Lelas, K; Plestina, R; Polic, D; Puljak, I; Antunovic, Z; Dzelalija, M; Brigljevic, V; Duric, S; Kadija, K; Morovic, S; Attikis, A; Galanti, M; Mousa, J; Nicolaou, C; Ptochos, F; Razis, P A; Finger, M; Finger, M; Assran, Y; Khalil, S; Mahmoud, M A; Hektor, A; Kadastik, M; Müntel, M; Raidal, M; Rebane, L; Azzolini, V; Eerola, P; Fedi, G; Czellar, S; Härkönen, J; Heikkinen, A; Karimäki, V; Kinnunen, R; Kortelainen, M J; Lampén, T; Lassila-Perini, K; Lehti, S; Lindén, T; Luukka, P; Mäenpää, T; Tuominen, E; Tuominiemi, J; Tuovinen, E; Ungaro, D; Wendland, L; Banzuzi, K; Korpela, A; Tuuva, T; Sillou, D; Besancon, M; Choudhury, S; Dejardin, M; Denegri, D; Fabbro, B; Faure, J L; Ferri, F; Ganjour, S; Gentit, F X; Givernaud, A; Gras, P; de Monchenault, G Hamel; Jarry, P; Locci, E; Malcles, J; Marionneau, M; Millischer, L; Rander, J; Rosowsky, A; Shreyber, I; Titov, M; Verrecchia, P; Baffioni, S; Beaudette, F; Benhabib, L; Bianchini, L; Bluj, M; Broutin, C; Busson, P; Charlot, C; Dahms, T; Dobrzynski, L; Elgammal, S; de Cassagnac, R Granier; Haguenauer, M; Miné, P; Mironov, C; Ochando, C; Paganini, P; Sabes, D; Salerno, R; Sirois, Y; Thiebaux, C; Wyslouch, B; Zabi, A; Agram, J-L; Andrea, J; Bloch, D; Bodin, D; Brom, J-M; Cardaci, M; Chabert, E C; Collard, C; Conte, E; Drouhin, F; Ferro, C; Fontaine, J-C; Gelé, D; Goerlach, U; Greder, S; Juillot, P; Karim, M; Le Bihan, A-C; Mikami, Y; Van Hove, P; Fassi, F; Mercier, D; Baty, C; Beauceron, S; Beaupere, N; Bedjidian, M; Bondu, O; Boudoul, G; Boumediene, D; Brun, H; Chierici, R; Contardo, D; Depasse, P; El Mamouni, H; Fay, J; Gascon, S; Ille, B; Kurca, T; Le Grand, T; Lethuillier, M; Mirabito, L; Perries, S; Sordini, V; Tosi, S; Tschudi, Y; Verdier, P; Lomidze, D; Anagnostou, G; Edelhoff, M; Feld, L; Heracleous, N; Hindrichs, O; Jussen, R; Klein, K; Merz, J; Mohr, N; Ostapchuk, A; Perieanu, A; Raupach, F; Sammet, J; Schael, S; Sprenger, D; Weber, H; Weber, M; Wittmer, B; Ata, M; Bender, W; Dietz-Laursonn, E; Erdmann, M; Frangenheim, J; Hebbeker, T; Hinzmann, A; Hoepfner, K; Klimkovich, T; Klingebiel, D; Kreuzer, P; Lanske, D; Magass, C; Merschmeyer, M; Meyer, A; Papacz, P; Pieta, H; Reithler, H; Schmitz, S A; Sonnenschein, L; Steggemann, J; Teyssier, D; Tonutti, M; Bontenackels, M; Davids, M; Duda, M; Flügge, G; Geenen, H; Giffels, M; Ahmad, W Haj; Heydhausen, D; Kress, T; Kuessel, Y; Linn, A; Nowack, A; Perchalla, L; Pooth, O; Rennefeld, J; Sauerland, P; Stahl, A; Thomas, M; Tornier, D; Zoeller, M H; Martin, M Aldaya; Behrenhoff, W; Behrens, U; Bergholz, M; Bethani, A; Borras, K; Cakir, A; Campbell, A; Castro, E; Dammann, D; Eckerlin, G; Eckstein, D; Flossdorf, A; Flucke, G; Geiser, A; Hauk, J; Jung, H; Kasemann, M; Katkov, I; Katsas, P; Kleinwort, C; Kluge, H; Knutsson, A; Krämer, M; Krücker, D; Kuznetsova, E; Lange, W; Lohmann, W; Mankel, R; Marienfeld, M; Melzer-Pellmann, I-A; Meyer, A B; Mnich, J; Mussgiller, A; Olzem, J; Pitzl, D; Raspereza, A; Raval, A; Rosin, M; Schmidt, R; Schoerner-Sadenius, T; Sen, N; Spiridonov, A; Stein, M; Tomaszewska, J; Walsh, R; Wissing, C; Autermann, C; Blobel, V; Bobrovskyi, S; Draeger, J; Enderle, H; Gebbert, U; Kaschube, K; Kaussen, G; Klanner, R; Lange, J; Mura, B; Naumann-Emme, S; Nowak, F; Pietsch, N; Sander, C; Schettler, H; Schleper, P; Schröder, M; Schum, T; Schwandt, J; Stadie, H; Steinbrück, G; Thomsen, J; Barth, C; Bauer, J; Buege, V; Chwalek, T; De Boer, W; Dierlamm, A; Dirkes, G; Feindt, M; Gruschke, J; Hackstein, C; Hartmann, F; Heinrich, M; Held, H; Hoffmann, K H; Honc, S; Komaragiri, J R; Kuhr, T; Martschei, D; Mueller, S; Müller, Th; Niegel, M; Oberst, O; Oehler, A; Ott, J; Peiffer, T; Piparo, D; Quast, G; Rabbertz, K; Ratnikov, F; Ratnikova, N; Renz, M; Saout, C; Scheurer, A; Schieferdecker, P; Schilling, F-P; Schmanau, M; Schott, G; Simonis, H J; Stober, F M; Troendle, D; Wagner-Kuhr, J; Weiler, T; Zeise, M; Zhukov, V; Ziebarth, E B; Daskalakis, G; Geralis, T; Karafasoulis, K; Kesisoglou, S; Kyriakis, A; Loukas, D; Manolakos, I; Markou, A; Markou, C; Mavrommatis, C; Ntomari, E; Petrakou, E; Gouskos, L; Mertzimekis, T J; Panagiotou, A; Stiliaris, E; Evangelou, I; Foudas, C; Kokkas, P; Manthos, N; Papadopoulos, I; Patras, V; Triantis, F A; Aranyi, A; Bencze, G; Boldizsar, L; Hajdu, C; Hidas, P; Horvath, D; Kapusi, A; Krajczar, K; Sikler, F; Veres, G I; Vesztergombi, G; Beni, N; Molnar, J; Palinkas, J; Szillasi, Z; Veszpremi, V; Raics, P; Trocsanyi, Z L; Ujvari, B; Bansal, S; Beri, S B; Bhatnagar, V; Dhingra, N; Gupta, R; Jindal, M; Kaur, M; Kohli, J M; Mehta, M Z; Nishu, N; Saini, L K; Sharma, A; Singh, A P; Singh, J B; Singh, S P; Ahuja, S; Bhattacharya, S; Choudhary, B C; Gupta, P; Jain, S; Jain, S; Kumar, A; Ranjan, K; Shivpuri, R K; Choudhury, R K; Dutta, D; Kailas, S; Kumar, V; Mohanty, A K; Pant, L M; Shukla, P; Aziz, T; Guchait, M; Gurtu, A; Maity, M; Majumder, D; Majumder, G; Mazumdar, K; Mohanty, G B; Saha, A; Sudhakar, K; Wickramage, N; Banerjee, S; Dugad, S; Mondal, N K; Arfaei, H; Bakhshiansohi, H; Etesami, S M; Fahim, A; Hashemi, M; Jafari, A; Khakzad, M; Mohammadi, A; Najafabadi, M Mohammadi; Mehdiabadi, S Paktinat; Safarzadeh, B; Zeinali, M; Abbrescia, M; Barbone, L; Calabria, C; Colaleo, A; Creanza, D; De Filippis, N; De Palma, M; Fiore, L; Iaselli, G; Lusito, L; Maggi, G; Maggi, M; Manna, N; Marangelli, B; My, S; Nuzzo, S; Pacifico, N; Pierro, G A; Pompili, A; Pugliese, G; Romano, F; Roselli, G; Selvaggi, G; Silvestris, L; Trentadue, R; Tupputi, S; Zito, G; Abbiendi, G; Benvenuti, A C; Bonacorsi, D; Braibant-Giacomelli, S; Brigliadori, L; Capiluppi, P; Castro, A; Cavallo, F R; Cuffiani, M; Dallavalle, G M; Fabbri, F; Fanfani, A; Fasanella, D; Giacomelli, P; Giunta, M; Marcellini, S; Masetti, G; Meneghelli, M; Montanari, A; Navarria, F L; Odorici, F; Perrotta, A; Primavera, F; Rossi, A M; Rovelli, T; Siroli, G; Travaglini, R; Albergo, S; Cappello, G; Chiorboli, M; Costa, S; Tricomi, A; Tuve, C; Barbagli, G; Ciulli, V; Civinini, C; D'Alessandro, R; Focardi, E; Frosali, S; Gallo, E; Gonzi, S; Lenzi, P; Meschini, M; Paoletti, S; Sguazzoni, G; Tropiano, A; Benussi, L; Bianco, S; Colafranceschi, S; Fabbri, F; Piccolo, D; Fabbricatore, P; Musenich, R; Benaglia, A; De Guio, F; Di Matteo, L; Ghezzi, A; Malvezzi, S; Martelli, A; Massironi, A; Menasce, D; Moroni, L; Paganoni, M; Pedrini, D; Ragazzi, S; Redaelli, N; Sala, S; Tabarelli de Fatis, T; Tancini, V; Buontempo, S; Montoya, C A Carrillo; Cavallo, N; De Cosa, A; Fabozzi, F; Iorio, A O M; Lista, L; Merola, M; Paolucci, P; Azzi, P; Bacchetta, N; Bellan, P; Bisello, D; Branca, A; Carlin, R; Checchia, P; De Mattia, M; Dorigo, T; Dosselli, U; Fanzago, F; Gasparini, F; Gasparini, U; Lacaprara, S; Lazzizzera, I; Margoni, M; Mazzucato, M; Meneguzzo, A T; Nespolo, M; Perrozzi, L; Pozzobon, N; Ronchese, P; Simonetto, F; Torassa, E; Tosi, M; Vanini, S; Zotto, P; Zumerle, G; Baesso, P; Berzano, U; Ratti, S P; Riccardi, C; Torre, P; Vitulo, P; Viviani, C; Biasini, M; Bilei, G M; Caponeri, B; Fanò, L; Lariccia, P; Lucaroni, A; Mantovani, G; Menichelli, M; Nappi, A; Romeo, F; Santocchia, A; Taroni, S; Valdata, M; Azzurri, P; Bagliesi, G; Bernardini, J; Boccali, T; Broccolo, G; Castaldi, R; D'Agnolo, R T; Dell'Orso, R; Fiori, F; Foà, L; Giassi, A; Kraan, A; Ligabue, F; Lomtadze, T; Martini, L; Messineo, A; Palla, F; Segneri, G; Serban, A T; Spagnolo, P; Tenchini, R; Tonelli, G; Venturi, A; Verdini, P G; Barone, L; Cavallari, F; Del Re, D; Di Marco, E; Diemoz, M; Franci, D; Grassi, M; Longo, E; Nourbakhsh, S; Organtini, G; Pandolfi, F; Paramatti, R; Rahatlou, S; Amapane, N; Arcidiacono, R; Argiro, S; Arneodo, M; Biino, C; Botta, C; Cartiglia, N; Castello, R; Costa, M; Demaria, N; Graziano, A; Mariotti, C; Marone, M; Maselli, S; Migliore, E; Mila, G; Monaco, V; Musich, M; Obertino, M M; Pastrone, N; Pelliccioni, M; Romero, A; Ruspa, M; Sacchi, R; Sola, V; Solano, A; Staiano, A; Vilela Pereira, A; Belforte, S; Cossutti, F; Della Ricca, G; Gobbo, B; Montanino, D; Penzo, A; Heo, S G; Nam, S K; Chang, S; Chung, J; Kim, D H; Kim, G N; Kim, J E; Kong, D J; Park, H; Ro, S R; Son, D; Son, D C; Son, T; Kim, Zero; Kim, J Y; Song, S; Choi, S; Hong, B; Jeong, M S; Jo, M; Kim, H; Kim, J H; Kim, T J; Lee, K S; Moon, D H; Park, S K; Rhee, H B; Seo, E; Shin, S; Sim, K S; Choi, M; Kang, S; Kim, H; Park, C; Park, I C; Park, S; Ryu, G; Choi, Y; Choi, Y K; Goh, J; Kim, M S; Kwon, E; Lee, J; Lee, S; Seo, H; Yu, I; Bilinskas, M J; Grigelionis, I; Janulis, M; Martisiute, D; Petrov, P; Sabonis, T; Castilla-Valdez, H; De La Cruz-Burelo, E; Lopez-Fernandez, R; Magaña Villalba, R; Sánchez-Hernández, A; Villasenor-Cendejas, L M; Carrillo Moreno, S; Vazquez Valencia, F; Salazar Ibarguen, H A; Casimiro Linares, E; Morelos Pineda, A; Reyes-Santos, M A; Krofcheck, D; Tam, J; Butler, P H; Doesburg, R; Silverwood, H; Ahmad, M; Ahmed, I; Asghar, M I; Hoorani, H R; Khan, W A; Khurshid, T; Qazi, S; Brona, G; Cwiok, M; Dominik, W; Doroba, K; Kalinowski, A; Konecki, M; Krolikowski, J; Frueboes, T; Gokieli, R; Górski, M; Kazana, M; Nawrocki, K; Romanowska-Rybinska, K; Szleper, M; Wrochna, G; Zalewski, P; Almeida, N; Bargassa, P; David, A; Faccioli, P; Parracho, P G Ferreira; Gallinaro, M; Musella, P; Nayak, A; Ribeiro, P Q; Seixas, J; Varela, J; Afanasiev, S; Belotelov, I; Bunin, P; Golutvin, I; Kamenev, A; Karjavin, V; Kozlov, G; Lanev, A; Moisenz, P; Palichik, V; Perelygin, V; Shmatov, S; Smirnov, V; Volodko, A; Zarubin, A; Golovtsov, V; Ivanov, Y; Kim, V; Levchenko, P; Murzin, V; Oreshkin, V; Smirnov, I; Sulimov, V; Uvarov, L; Vavilov, S; Vorobyev, A; Vorobyev, A; Andreev, Yu; Dermenev, A; Gninenko, S; Golubev, N; Kirsanov, M; Krasnikov, N; Matveev, V; Pashenkov, A; Toropin, A; Troitsky, S; Epshteyn, V; Gavrilov, V; Kaftanov, V; Kossov, M; Krokhotin, A; Lychkovskaya, N; Popov, V; Safronov, G; Semenov, S; Stolin, V; Vlasov, E; Zhokin, A; Boos, E; Dubinin, M; Dudko, L; Ershov, A; Gribushin, A; Kodolova, O; Lokhtin, I; Markina, A; Obraztsov, S; Perfilov, M; Petrushanko, S; Sarycheva, L; Savrin, V; Snigirev, A; Andreev, V; Azarkin, M; Dremin, I; Kirakosyan, M; Leonidov, A; Rusakov, S V; Vinogradov, A; Azhgirey, I; Bitioukov, S; Grishin, V; Kachanov, V; Konstantinov, D; Korablev, A; Krychkine, V; Petrov, V; Ryutin, R; Slabospitsky, S; Sobol, A; Tourtchanovitch, L; Troshin, S; Tyurin, N; Uzunian, A; Volkov, A; Adzic, P; Djordjevic, M; Krpic, D; Milosevic, J; Aguilar-Benitez, M; Alcaraz Maestre, J; Arce, P; Battilana, C; Calvo, E; Cepeda, M; Cerrada, M; Chamizo Llatas, M; Colino, N; De La Cruz, B; Delgado Peris, A; Diez Pardos, C; Domínguez Vázquez, D; Fernandez Bedoya, C; Fernández Ramos, J P; Ferrando, A; Flix, J; Fouz, M C; Garcia-Abia, P; Gonzalez Lopez, O; Goy Lopez, S; Hernandez, J M; Josa, M I; Merino, G; Puerta Pelayo, J; Redondo, I; Romero, L; Santaolalla, J; Soares, M S; Willmott, C; Albajar, C; Codispoti, G; de Trocóniz, J F; Cuevas, J; Fernandez Menendez, J; Folgueras, S; Gonzalez Caballero, I; Lloret Iglesias, L; Vizan Garcia, J M; Brochero Cifuentes, J A; Cabrillo, I J; Calderon, A; Chuang, S H; Duarte Campderros, J; Felcini, M; Fernandez, M; Gomez, G; Gonzalez Sanchez, J; Jorda, C; Lobelle Pardo, P; Lopez Virto, A; Marco, J; Marco, R; Martinez Rivero, C; Matorras, F; Munoz Sanchez, F J; Piedra Gomez, J; Rodrigo, T; Rodríguez-Marrero, A Y; Ruiz-Jimeno, A; Scodellaro, L; Sobron Sanudo, M; Vila, I; Vilar Cortabitarte, R; Abbaneo, D; Auffray, E; Auzinger, G; Baillon, P; Ball, A H; Barney, D; Bell, A J; Benedetti, D; Bernet, C; Bialas, W; Bloch, P; Bocci, A; Bolognesi, S; Bona, M; Breuker, H; Bunkowski, K; Camporesi, T; Cerminara, G; Coarasa Perez, J A; Curé, B; D'Enterria, D; De Roeck, A; Di Guida, S; Elliott-Peisert, A; Frisch, B; Funk, W; Gaddi, A; Gennai, S; Georgiou, G; Gerwig, H; Gigi, D; Gill, K; Giordano, D; Glege, F; Garrido, R Gomez-Reino; Gouzevitch, M; Govoni, P; Gowdy, S; Guiducci, L; Hansen, M; Hartl, C; Harvey, J; Hegeman, J; Hegner, B; Hoffmann, H F; Honma, A; Innocente, V; Janot, P; Kaadze, K; Karavakis, E; Lecoq, P; Lourenço, C; Mäki, T; Malberti, M; Malgeri, L; Mannelli, M; Masetti, L; Maurisset, A; Meijers, F; Mersi, S; Meschi, E; Moser, R; Mozer, M U; Mulders, M; Nesvold, E; Nguyen, M; Orimoto, T; Orsini, L; Perez, E; Petrilli, A; Pfeiffer, A; Pierini, M; Pimiä, M; Polese, G; Racz, A; Antunes, J Rodrigues; Rolandi, G; Rommerskirchen, T; Rovelli, C; Rovere, M; Sakulin, H; Schäfer, C; Schwick, C; Segoni, I; Sharma, A; Siegrist, P; Simon, M; Sphicas, P; Spiropulu, M; Stoye, M; Tropea, P; Tsirou, A; Vichoudis, P; Voutilainen, M; Zeuner, W D; Bertl, W; Deiters, K; Erdmann, W; Gabathuler, K; Horisberger, R; Ingram, Q; Kaestli, H C; König, S; Kotlinski, D; Langenegger, U; Meier, F; Renker, D; Rohe, T; Sibille, J; Starodumov, A; Bortignon, P; Caminada, L; Chanon, N; Chen, Z; Cittolin, S; Dissertori, G; Dittmar, M; Eugster, J; Freudenreich, K; Grab, C; Hervé, A; Hintz, W; Lecomte, P; Lustermann, W; Marchica, C; Del Arbol, P Martinez Ruiz; Meridiani, P; Milenovic, P; Moortgat, F; Nägeli, C; Nef, P; Nessi-Tedaldi, F; Pape, L; Pauss, F; Punz, T; Rizzi, A; Ronga, F J; Rossini, M; Sala, L; Sanchez, A K; Sawley, M-C; Stieger, B; Tauscher, L; Thea, A; Theofilatos, K; Treille, D; Urscheler, C; Wallny, R; Weber, M; Wehrli, L; Weng, J; Aguiló, E; Amsler, C; Chiochia, V; De Visscher, S; Favaro, C; Rikova, M Ivova; Mejias, B Millan; Otiougova, P; Regenfus, C; Robmann, P; Schmidt, A; Snoek, H; Chang, Y H; Chen, K H; Kuo, C M; Li, S W; Lin, W; Liu, Z K; Lu, Y J; Mekterovic, D; Volpe, R; Wu, J H; Yu, S S; Bartalini, P; Chang, P; Chang, Y H; Chang, Y W; Chao, Y; Chen, K F; Hou, W-S; Hsiung, Y; Kao, K Y; Lei, Y J; Lu, R-S; Shiu, J G; Tzeng, Y M; Wang, M; Adiguzel, A; Bakirci, M N; Cerci, S; Dozen, C; Dumanoglu, I; Eskut, E; Girgis, S; Gokbulut, G; Guler, Y; Gurpinar, E; Hos, I; Kangal, E E; Karaman, T; Topaksu, A Kayis; Nart, A; Onengut, G; Ozdemir, K; Ozturk, S; Polatoz, A; Sogut, K; Cerci, D Sunar; Tali, B; Topakli, H; Uzun, D; Vergili, L N; Vergili, M; Zorbilmez, C; Akin, I V; Aliev, T; Bilmis, S; Deniz, M; Gamsizkan, H; Guler, A M; Ocalan, K; Ozpineci, A; Serin, M; Sever, R; Surat, U E; Yildirim, E; Zeyrek, M; Deliomeroglu, M; Demir, D; Gülmez, E; Isildak, B; Kaya, M; Kaya, O; Ozkorucuklu, S; Sonmez, N; Levchuk, L; Bostock, F; Brooke, J J; Cheng, T L; Clement, E; Cussans, D; Frazier, R; Goldstein, J; Grimes, M; Hansen, M; Hartley, D; Heath, G P; Heath, H F; Jackson, J; Kreczko, L; Metson, S; Newbold, D M; Nirunpong, K; Poll, A; Senkin, S; Smith, V J; Ward, S; Basso, L; Bell, K W; Belyaev, A; Brew, C; Brown, R M; Camanzi, B; Cockerill, D J A; Coughlan, J A; Harder, K; Harper, S; Kennedy, B W; Olaiya, E; Petyt, D; Radburn-Smith, B C; Shepherd-Themistocleous, C H; Tomalin, I R; Womersley, W J; Worm, S D; Bainbridge, R; Ball, G; Ballin, J; Beuselinck, R; Buchmuller, O; Colling, D; Cripps, N; Cutajar, M; Davies, G; Della Negra, M; Ferguson, W; Fulcher, J; Futyan, D; Gilbert, A; Bryer, A Guneratne; Hall, G; Hatherell, Z; Hays, J; Iles, G; Jarvis, M; Karapostoli, G; Lyons, L; Macevoy, B C; Magnan, A-M; Marrouche, J; Mathias, B; Nandi, R; Nash, J; Nikitenko, A; Papageorgiou, A; Pesaresi, M; Petridis, K; Pioppi, M; Raymond, D M; Rogerson, S; Rompotis, N; Rose, A; Ryan, M J; Seez, C; Sharp, P; Sparrow, A; Tapper, A; Tourneur, S; Acosta, M Vazquez; Virdee, T; Wakefield, S; Wardle, N; Wardrope, D; Whyntie, T; Barrett, M; Chadwick, M; Cole, J E; Hobson, P R; Khan, A; Kyberd, P; Leslie, D; Martin, W; Reid, I D; Teodorescu, L; Hatakeyama, K; Bose, T; Jarrin, E Carrera; Fantasia, C; Heister, A; St John, J; Lawson, P; Lazic, D; Rohlf, J; Sperka, D; Sulak, L; Avetisyan, A; Bhattacharya, S; Chou, J P; Cutts, D; Ferapontov, A; Heintz, U; Jabeen, S; Kukartsev, G; Landsberg, G; Narain, M; Nguyen, D; Segala, M; Sinthuprasith, T; Speer, T; Tsang, K V; Breedon, R; Sanchez, M Calderon De La Barca; Chauhan, S; Chertok, M; Conway, J; Cox, P T; Dolen, J; Erbacher, R; Friis, E; Ko, W; Kopecky, A; Lander, R; Liu, H; Maruyama, S; Miceli, T; Nikolic, M; Pellett, D; Robles, J; Salur, S; Schwarz, T; Searle, M; Smith, J; Squires, M; Tripathi, M; Sierra, R Vasquez; Veelken, C; Andreev, V; Arisaka, K; Cline, D; Cousins, R; Deisher, A; Duris, J; Erhan, S; Farrell, C; Hauser, J; Ignatenko, M; Jarvis, C; Plager, C; Rakness, G; Schlein, P; Tucker, J; Valuev, V; Babb, J; Chandra, A; Clare, R; Ellison, J; Gary, J W; Giordano, F; Hanson, G; Jeng, G Y; Kao, S C; Liu, F; Liu, H; Long, O R; Luthra, A; Nguyen, H; Shen, B C; Stringer, R; Sturdy, J; Sumowidagdo, S; Wilken, R; Wimpenny, S; Andrews, W; Branson, J G; Cerati, G B; Dusinberre, E; Evans, D; Golf, F; Holzner, A; Kelley, R; Lebourgeois, M; Letts, J; Mangano, B; Padhi, S; Palmer, C; Petrucciani, G; Pi, H; Pieri, M; Ranieri, R; Sani, M; Sharma, V; Simon, S; Tu, Y; Vartak, A; Wasserbaech, S; Würthwein, F; Yagil, A; Yoo, J; Barge, D; Bellan, R; Campagnari, C; D'Alfonso, M; Danielson, T; Flowers, K; Geffert, P; Incandela, J; Justus, C; Kalavase, P; Koay, S A; Kovalskyi, D; Krutelyov, V; Lowette, S; McColl, N; Pavlunin, V; Rebassoo, F; Ribnik, J; Richman, J; Rossin, R; Stuart, D; To, W; Vlimant, J R; Apresyan, A; Bornheim, A; Bunn, J; Chen, Y; Gataullin, M; Ma, Y; Mott, A; Newman, H B; Rogan, C; Shin, K; Timciuc, V; Traczyk, P; Veverka, J; Wilkinson, R; Yang, Y; Zhu, R Y; Akgun, B; Carroll, R; Ferguson, T; Iiyama, Y; Jang, D W; Jun, S Y; Liu, Y F; Paulini, M; Russ, J; Vogel, H; Vorobiev, I; Cumalat, J P; Dinardo, M E; Drell, B R; Edelmaier, C J; Ford, W T; Gaz, A; Heyburn, B; Lopez, E Luiggi; Nauenberg, U; Smith, J G; Stenson, K; Ulmer, K A; Wagner, S R; Zang, S L; Agostino, L; Alexander, J; Cassel, D; Chatterjee, A; Das, S; Eggert, N; Gibbons, L K; Heltsley, B; Hopkins, W; Khukhunaishvili, A; Kreis, B; Kaufman, G Nicolas; Patterson, J R; Puigh, D; Ryd, A; Salvati, E; Shi, X; Sun, W; Teo, W D; Thom, J; Thompson, J; Vaughan, J; Weng, Y; Winstrom, L; Wittich, P; Biselli, A; Cirino, G; Winn, D; Abdullin, S; Albrow, M; Anderson, J; Apollinari, G; Atac, M; Bakken, J A; Banerjee, S; Bauerdick, L A T; Beretvas, A; Berryhill, J; Bhat, P C; Bloch, I; Borcherding, F; Burkett, K; Butler, J N; Chetluru, V; Cheung, H W K; Chlebana, F; Cihangir, S; Cooper, W; Eartly, D P; Elvira, V D; Esen, S; Fisk, I; Freeman, J; Gao, Y; Gottschalk, E; Green, D; Gunthoti, K; Gutsche, O; Hanlon, J; Harris, R M; Hirschauer, J; Hooberman, B; Jensen, H; Johnson, M; Joshi, U; Khatiwada, R; Klima, B; Kousouris, K; Kunori, S; Kwan, S; Leonidopoulos, C; Limon, P; Lincoln, D; Lipton, R; Lykken, J; Maeshima, K; Marraffino, J M; Mason, D; McBride, P; Miao, T; Mishra, K; Mrenna, S; Musienko, Y; Newman-Holmes, C; O'Dell, V; Pordes, R; Prokofyev, O; Saoulidou, N; Sexton-Kennedy, E; Sharma, S; Spalding, W J; Spiegel, L; Tan, P; Taylor, L; Tkaczyk, S; Uplegger, L; Vaandering, E W; Vidal, R; Whitmore, J; Wu, W; Yang, F; Yumiceva, F; Yun, J C; Acosta, D; Avery, P; Bourilkov, D; Chen, M; De Gruttola, M; Di Giovanni, G P; Dobur, D; Drozdetskiy, A; Field, R D; Fisher, M; Fu, Y; Furic, I K; Gartner, J; Kim, B; Konigsberg, J; Korytov, A; Kropivnitskaya, A; Kypreos, T; Matchev, K; Mitselmakher, G; Muniz, L; Prescott, C; Remington, R; Schmitt, M; Scurlock, B; Sellers, P; Skhirtladze, N; Snowball, M; Wang, D; Yelton, J; Zakaria, M; Ceron, C; Gaultney, V; Kramer, L; Lebolo, L M; Linn, S; Markowitz, P; Martinez, G; Mesa, D; Rodriguez, J L; Adams, T; Askew, A; Bandurin, D; Bochenek, J; Chen, J; Diamond, B; Gleyzer, S V; Haas, J; Hagopian, S; Hagopian, V; Jenkins, M; Johnson, K F; Prosper, H; Quertenmont, L; Sekmen, S; Veeraraghavan, V; Baarmand, M M; Dorney, B; Guragain, S; Hohlmann, M; Kalakhety, H; Ralich, R; Vodopiyanov, I; Adams, M R; Anghel, I M; Apanasevich, L; Bai, Y; Bazterra, V E; Betts, R R; Callner, J; Cavanaugh, R; Dragoiu, C; Gauthier, L; Gerber, C E; Hofman, D J; Khalatyan, S; Kunde, G J; Lacroix, F; Malek, M; O'Brien, C; Silvestre, C; Smoron, A; Strom, D; Varelas, N; Akgun, U; Albayrak, E A; Bilki, B; Clarida, W; Duru, F; Lae, C K; McCliment, E; Merlo, J-P; Mermerkaya, H; Mestvirishvili, A; Moeller, A; Nachtman, J; Newsom, C R; Norbeck, E; Olson, J; Onel, Y; Ozok, F; Sen, S; Wetzel, J; Yetkin, T; Yi, K; Barnett, B A; Blumenfeld, B; Bonato, A; Eskew, C; Fehling, D; Giurgiu, G; Gritsan, A V; Guo, Z J; Hu, G; Maksimovic, P; Rappoccio, S; Swartz, M; Tran, N V; Whitbeck, A; Baringer, P; Bean, A; Benelli, G; Grachov, O; Kenny Iii, R P; Murray, M; Noonan, D; Sanders, S; Wood, J S; Zhukova, V; Barfuss, A F; Bolton, T; Chakaberia, I; Ivanov, A; Khalil, S; Makouski, M; Maravin, Y; Shrestha, S; Svintradze, I; Wan, Z; Gronberg, J; Lange, D; Wright, D; Baden, A; Boutemeur, M; Eno, S C; Ferencek, D; Gomez, J A; Hadley, N J; Kellogg, R G; Kirn, M; Lu, Y; Mignerey, A C; Rossato, K; Rumerio, P; Santanastasio, F; Skuja, A; Temple, J; Tonjes, M B; Tonwar, S C; Twedt, E; Alver, B; Bauer, G; Bendavid, J; Busza, W; Butz, E; Cali, I A; Chan, M; Dutta, V; Everaerts, P; Ceballos, G Gomez; Goncharov, M; Hahn, K A; Harris, P; Kim, Y; Klute, M; Lee, Y-J; Li, W; Loizides, C; Luckey, P D; Ma, T; Nahn, S; Paus, C; Ralph, D; Roland, C; Roland, G; Rudolph, M; Stephans, G S F; Stöckli, F; Sumorok, K; Sung, K; Wenger, E A; Xie, S; Yang, M; Yilmaz, Y; Yoon, A S; Zanetti, M; Cooper, S I; Cushman, P; Dahmes, B; De Benedetti, A; Dudero, P R; Franzoni, G; Haupt, J; Klapoetke, K; Kubota, Y; Mans, J; Rekovic, V; Rusack, R; Sasseville, M; Singovsky, A; Cremaldi, L M; Godang, R; Kroeger, R; Perera, L; Rahmat, R; Sanders, D A; Summers, D; Bloom, K; Bose, S; Butt, J; Claes, D R; Dominguez, A; Eads, M; Keller, J; Kelly, T; Kravchenko, I; Lazo-Flores, J; Malbouisson, H; Malik, S; Snow, G R; Baur, U; Godshalk, A; Iashvili, I; Jain, S; Kharchilava, A; Kumar, A; Shipkowski, S P; Smith, K; Alverson, G; Barberis, E; Baumgartel, D; Boeriu, O; Chasco, M; Reucroft, S; Swain, J; Trocino, D; Wood, D; Zhang, J; Anastassov, A; Kubik, A; Odell, N; Ofierzynski, R A; Pollack, B; Pozdnyakov, A; Schmitt, M; Stoynev, S; Velasco, M; Won, S; Antonelli, L; Berry, D; Hildreth, M; Jessop, C; Karmgard, D J; Kolb, J; Kolberg, T; Lannon, K; Luo, W; Lynch, S; Marinelli, N; Morse, D M; Pearson, T; Ruchti, R; Slaunwhite, J; Valls, N; Wayne, M; Ziegler, J; Bylsma, B; Durkin, L S; Gu, J; Hill, C; Killewald, P; Kotov, K; Ling, T Y; Rodenburg, M; Williams, G; Adam, N; Berry, E; Elmer, P; Gerbaudo, D; Halyo, V; Hebda, P; Hunt, A; Jones, J; Laird, E; Pegna, D Lopes; Marlow, D; Medvedeva, T; Mooney, M; Olsen, J; Piroué, P; Quan, X; Saka, H; Stickland, D; Tully, C; Werner, J S; Zuranski, A; Acosta, J G; Huang, X T; Lopez, A; Mendez, H; Oliveros, S; Vargas, J E Ramirez; Zatserklyaniy, A; Alagoz, E; Barnes, V E; Bolla, G; Borrello, L; Bortoletto, D; Everett, A; Garfinkel, A F; Gutay, L; Hu, Z; Jones, M; Koybasi, O; Kress, M; Laasanen, A T; Leonardo, N; Liu, C; Maroussov, V; Merkel, P; Miller, D H; Neumeister, N; Shipsey, I; Silvers, D; Svyatkovskiy, A; Yoo, H D; Zablocki, J; Zheng, Y; Jindal, P; Parashar, N; Boulahouache, C; Cuplov, V; Ecklund, K M; Geurts, F J M; Padley, B P; Redjimi, R; Roberts, J; Zabel, J; Betchart, B; Bodek, A; Chung, Y S; Covarelli, R; de Barbaro, P; Demina, R; Eshaq, Y; Flacher, H; Garcia-Bellido, A; Goldenzweig, P; Gotra, Y; Han, J; Harel, A; Miner, D C; Orbaker, D; Petrillo, G; Vishnevskiy, D; Zielinski, M; Bhatti, A; Ciesielski, R; Demortier, L; Goulianos, K; Lungu, G; Malik, S; Mesropian, C; Yan, M; Atramentov, O; Barker, A; Duggan, D; Gershtein, Y; Gray, R; Halkiadakis, E; Hidas, D; Hits, D; Lath, A; Panwalkar, S; Patel, R; Richards, A; Rose, K; Schnetzer, S; Somalwar, S; Stone, R; Thomas, S; Cerizza, G; Hollingsworth, M; Spanier, S; Yang, Z C; York, A; Asaadi, J; Eusebi, R; Gilmore, J; Gurrola, A; Kamon, T; Khotilovich, V; Montalvo, R; Nguyen, C N; Osipenkov, I; Pakhotin, Y; Pivarski, J; Safonov, A; Sengupta, S; Tatarinov, A; Toback, D; Weinberger, M; Akchurin, N; Bardak, C; Damgov, J; Jeong, C; Kovitanggoon, K; Lee, S W; Roh, Y; Sill, A; Volobouev, I; Wigmans, R; Yazgan, E; Appelt, E; Brownson, E; Engh, D; Florez, C; Gabella, W; Issah, M; Johns, W; Kurt, P; Maguire, C; Melo, A; Sheldon, P; Snook, B; Tuo, S; Velkovska, J; Arenton, M W; Balazs, M; Boutle, S; Cox, B; Francis, B; Hirosky, R; Ledovskoy, A; Lin, C; Neu, C; Yohay, R; Gollapinni, S; Harr, R; Karchin, P E; Lamichhane, P; Mattson, M; Milstène, C; Sakharov, A; Anderson, M; Bachtis, M; Bellinger, J N; Carlsmith, D; Dasu, S; Efron, J; Flood, K; Gray, L; Grogg, K S; Grothe, M; Hall-Wilton, R; Herndon, M; Klabbers, P; Klukas, J; Lanaro, A; Lazaridis, C; Leonard, J; Loveless, R; Mohapatra, A; Palmonari, F; Reeder, D; Ross, I; Savin, A; Smith, W H; Swanson, J; Weinberg, M
2011-06-10
A search for neutral minimal supersymmetric standard model (MSSM) Higgs bosons in pp collisions at the LHC at a center-of-mass energy of 7 TeV is presented. The results are based on a data sample corresponding to an integrated luminosity of 36 pb(-1) recorded by the CMS experiment. The search uses decays of the Higgs bosons to tau pairs. No excess is observed in the tau-pair invariant-mass spectrum. The resulting upper limits on the Higgs boson production cross section times branching fraction to tau pairs, as a function of the pseudoscalar Higgs boson mass, yield stringent new bounds in the MSSM parameter space.
Modeling of tool path for the CNC sheet cutting machines
NASA Astrophysics Data System (ADS)
Petunin, Aleksandr A.
2015-11-01
In the paper the problem of tool path optimization for CNC (Computer Numerical Control) cutting machines is considered. The classification of the cutting techniques is offered. We also propose a new classification of toll path problems. The tasks of cost minimization and time minimization for standard cutting technique (Continuous Cutting Problem, CCP) and for one of non-standard cutting techniques (Segment Continuous Cutting Problem, SCCP) are formalized. We show that the optimization tasks can be interpreted as discrete optimization problem (generalized travel salesman problem with additional constraints, GTSP). Formalization of some constraints for these tasks is described. For the solution GTSP we offer to use mathematical model of Prof. Chentsov based on concept of a megalopolis and dynamic programming.
Closed Loop System Identification with Genetic Algorithms
NASA Technical Reports Server (NTRS)
Whorton, Mark S.
2004-01-01
High performance control design for a flexible space structure is challenging since high fidelity plant models are di.cult to obtain a priori. Uncertainty in the control design models typically require a very robust, low performance control design which must be tuned on-orbit to achieve the required performance. Closed loop system identi.cation is often required to obtain a multivariable open loop plant model based on closed-loop response data. In order to provide an accurate initial plant model to guarantee convergence for standard local optimization methods, this paper presents a global parameter optimization method using genetic algorithms. A minimal representation of the state space dynamics is employed to mitigate the non-uniqueness and over-parameterization of general state space realizations. This control-relevant system identi.cation procedure stresses the joint nature of the system identi.cation and control design problem by seeking to obtain a model that minimizes the di.erence between the predicted and actual closed-loop performance.
2011-07-31
officers select their own BOLC-B dates completely divorced of their unit assignment and that unit’s ARFORGEN cycle. We reschedule all FY10 cohort LTs...for BOLC-B based upon unit priority based upon number of days until LAD. Rescheduling all FY10 cohort LTs for BOLC-B based upon unit priority...with specialty branches (doctors, lawyers, nurses , chaplains, etc) which have minimal representation in BCT-level units. DCs are not generally
NASA Astrophysics Data System (ADS)
Pradanti, Paskalia; Hartono
2018-03-01
Determination of insulin injection dose in diabetes mellitus treatment can be considered as an optimal control problem. This article is aimed to simulate optimal blood glucose control for patient with diabetes mellitus. The blood glucose regulation of diabetic patient is represented by Ackerman’s Linear Model. This problem is then solved using dynamic programming method. The desired blood glucose level is obtained by minimizing the performance index in Lagrange form. The results show that dynamic programming based on Ackerman’s Linear Model is quite good to solve the problem.
Mathematical Analysis for Non-reciprocal-interaction-based Model of Collective Behavior
NASA Astrophysics Data System (ADS)
Kano, Takeshi; Osuka, Koichi; Kawakatsu, Toshihiro; Ishiguro, Akio
2017-12-01
In many natural and social systems, collective behaviors emerge as a consequence of non-reciprocal interaction between their constituents. As a first step towards understanding the core principle that underlies these phenomena, we previously proposed a minimal model of collective behavior based on non-reciprocal interactions by drawing inspiration from friendship formation in human society, and demonstrated via simulations that various non-trivial patterns emerge by changing parameters. In this study, a mathematical analysis of the proposed model wherein the system size is small is performed. Through the analysis, the mechanism of the transition between several patterns is elucidated.
Greenland, S
1996-03-15
This paper presents an approach to back-projection (back-calculation) of human immunodeficiency virus (HIV) person-year infection rates in regional subgroups based on combining a log-linear model for subgroup differences with a penalized spline model for trends. The penalized spline approach allows flexible trend estimation but requires far fewer parameters than fully non-parametric smoothers, thus saving parameters that can be used in estimating subgroup effects. Use of reasonable prior curve to construct the penalty function minimizes the degree of smoothing needed beyond model specification. The approach is illustrated in application to acquired immunodeficiency syndrome (AIDS) surveillance data from Los Angeles County.
Fundamentals and Recent Developments in Approximate Bayesian Computation
Lintusaari, Jarno; Gutmann, Michael U.; Dutta, Ritabrata; Kaski, Samuel; Corander, Jukka
2017-01-01
Abstract Bayesian inference plays an important role in phylogenetics, evolutionary biology, and in many other branches of science. It provides a principled framework for dealing with uncertainty and quantifying how it changes in the light of new evidence. For many complex models and inference problems, however, only approximate quantitative answers are obtainable. Approximate Bayesian computation (ABC) refers to a family of algorithms for approximate inference that makes a minimal set of assumptions by only requiring that sampling from a model is possible. We explain here the fundamentals of ABC, review the classical algorithms, and highlight recent developments. [ABC; approximate Bayesian computation; Bayesian inference; likelihood-free inference; phylogenetics; simulator-based models; stochastic simulation models; tree-based models.] PMID:28175922
Soley, Micheline B; Markmann, Andreas; Batista, Victor S
2018-06-12
We introduce the so-called "Classical Optimal Control Optimization" (COCO) method for global energy minimization based on the implementation of the diffeomorphic modulation under observable-response-preserving homotopy (DMORPH) gradient algorithm. A probe particle with time-dependent mass m( t;β) and dipole μ( r, t;β) is evolved classically on the potential energy surface V( r) coupled to an electric field E( t;β), as described by the time-dependent density of states represented on a grid, or otherwise as a linear combination of Gaussians generated by the k-means clustering algorithm. Control parameters β defining m( t;β), μ( r, t;β), and E( t;β) are optimized by following the gradients of the energy with respect to β, adapting them to steer the particle toward the global minimum energy configuration. We find that the resulting COCO algorithm is capable of resolving near-degenerate states separated by large energy barriers and successfully locates the global minima of golf potentials on flat and rugged surfaces, previously explored for testing quantum annealing methodologies and the quantum optimal control optimization (QuOCO) method. Preliminary results show successful energy minimization of multidimensional Lennard-Jones clusters. Beyond the analysis of energy minimization in the specific model systems investigated, we anticipate COCO should be valuable for solving minimization problems in general, including optimization of parameters in applications to machine learning and molecular structure determination.
AMMOS2: a web server for protein-ligand-water complexes refinement via molecular mechanics.
Labbé, Céline M; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O; Pajeva, Ilza; Miteva, Maria A
2017-07-03
AMMOS2 is an interactive web server for efficient computational refinement of protein-small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein-ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein-ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein-ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein-ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein-ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein-ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
AMMOS2: a web server for protein–ligand–water complexes refinement via molecular mechanics
Labbé, Céline M.; Pencheva, Tania; Jereva, Dessislava; Desvillechabrol, Dimitri; Becot, Jérôme; Villoutreix, Bruno O.; Pajeva, Ilza
2017-01-01
Abstract AMMOS2 is an interactive web server for efficient computational refinement of protein–small organic molecule complexes. The AMMOS2 protocol employs atomic-level energy minimization of a large number of experimental or modeled protein–ligand complexes. The web server is based on the previously developed standalone software AMMOS (Automatic Molecular Mechanics Optimization for in silico Screening). AMMOS utilizes the physics-based force field AMMP sp4 and performs optimization of protein–ligand interactions at five levels of flexibility of the protein receptor. The new version 2 of AMMOS implemented in the AMMOS2 web server allows the users to include explicit water molecules and individual metal ions in the protein–ligand complexes during minimization. The web server provides comprehensive analysis of computed energies and interactive visualization of refined protein–ligand complexes. The ligands are ranked by the minimized binding energies allowing the users to perform additional analysis for drug discovery or chemical biology projects. The web server has been extensively tested on 21 diverse protein–ligand complexes. AMMOS2 minimization shows consistent improvement over the initial complex structures in terms of minimized protein–ligand binding energies and water positions optimization. The AMMOS2 web server is freely available without any registration requirement at the URL: http://drugmod.rpbs.univ-paris-diderot.fr/ammosHome.php. PMID:28486703
Dwell time algorithm based on the optimization theory for magnetorheological finishing
NASA Astrophysics Data System (ADS)
Zhang, Yunfei; Wang, Yang; Wang, Yajun; He, Jianguo; Ji, Fang; Huang, Wen
2010-10-01
Magnetorheological finishing (MRF) is an advanced polishing technique capable of rapidly converging to the required surface figure. This process can deterministically control the amount of the material removed by varying a time to dwell at each particular position on the workpiece surface. The dwell time algorithm is one of the most important key techniques of the MRF. A dwell time algorithm based on the1 matrix equation and optimization theory was presented in this paper. The conventional mathematical model of the dwell time was transferred to a matrix equation containing initial surface error, removal function and dwell time function. The dwell time to be calculated was just the solution to the large, sparse matrix equation. A new mathematical model of the dwell time based on the optimization theory was established, which aims to minimize the 2-norm or ∞-norm of the residual surface error. The solution meets almost all the requirements of precise computer numerical control (CNC) without any need for extra data processing, because this optimization model has taken some polishing condition as the constraints. Practical approaches to finding a minimal least-squares solution and a minimal maximum solution are also discussed in this paper. Simulations have shown that the proposed algorithm is numerically robust and reliable. With this algorithm an experiment has been performed on the MRF machine developed by ourselves. After 4.7 minutes' polishing, the figure error of a flat workpiece with a 50 mm diameter is improved by PV from 0.191λ(λ = 632.8 nm) to 0.087λ and RMS 0.041λ to 0.010λ. This algorithm can be constructed to polish workpieces of all shapes including flats, spheres, aspheres, and prisms, and it is capable of improving the polishing figures dramatically.
Train, Arianne T; Harmon, Carroll M; Rothstein, David H
2017-10-01
Although disparities in access to minimally invasive surgery are thought to exist in pediatric surgical patients in the United States, hospital-level practice patterns have not been evaluated as a possible contributing factor. Retrospective cohort study using the Kids' Inpatient Database, 2012. Odds ratios of undergoing a minimally invasive compared to open operation were calculated for six typical pediatric surgical operations after adjustment for multiple patient demographic and hospital-level variables. Further adjustment to the regression model was made by incorporating hospital practice patterns, defined as operation-specific minimally invasive frequency and volume. Age was the most significant patient demographic factor affecting application of minimally invasive surgery for all procedures. For several procedures, adjusting for individual hospital practice patterns removed race- and income-based disparities seen in performance of minimally invasive operations. Disparities related to insurance status were not affected by the same adjustment. Variation in the application of minimally invasive surgery in pediatric surgical patients is primarily influenced by patient age and the type of procedure performed. Perceived disparities in access related to some socioeconomic factors are decreased but not eliminated by accounting for individual hospital practice patterns, suggesting that complex underlying factors influence application of advanced surgical techniques. II. Copyright © 2017 Elsevier Inc. All rights reserved.
Discovery of Boolean metabolic networks: integer linear programming based approach.
Qiu, Yushan; Jiang, Hao; Ching, Wai-Ki; Cheng, Xiaoqing
2018-04-11
Traditional drug discovery methods focused on the efficacy of drugs rather than their toxicity. However, toxicity and/or lack of efficacy are produced when unintended targets are affected in metabolic networks. Thus, identification of biological targets which can be manipulated to produce the desired effect with minimum side-effects has become an important and challenging topic. Efficient computational methods are required to identify the drug targets while incurring minimal side-effects. In this paper, we propose a graph-based computational damage model that summarizes the impact of enzymes on compounds in metabolic networks. An efficient method based on Integer Linear Programming formalism is then developed to identify the optimal enzyme-combination so as to minimize the side-effects. The identified target enzymes for known successful drugs are then verified by comparing the results with those in the existing literature. Side-effects reduction plays a crucial role in the study of drug development. A graph-based computational damage model is proposed and the theoretical analysis states the captured problem is NP-completeness. The proposed approaches can therefore contribute to the discovery of drug targets. Our developed software is available at " http://hkumath.hku.hk/~wkc/APBC2018-metabolic-network.zip ".
A Systematic Approach for Model-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A requirement for effective aircraft engine performance estimation is the ability to account for engine degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. This paper presents a linear point design methodology for minimizing the degradation-induced error in model-based aircraft engine performance estimation applications. The technique specifically focuses on the underdetermined estimation problem, where there are more unknown health parameters than available sensor measurements. A condition for Kalman filter-based estimation is that the number of health parameters estimated cannot exceed the number of sensed measurements. In this paper, the estimated health parameter vector will be replaced by a reduced order tuner vector whose dimension is equivalent to the sensed measurement vector. The reduced order tuner vector is systematically selected to minimize the theoretical mean squared estimation error of a maximum a posteriori estimator formulation. This paper derives theoretical estimation errors at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the estimation accuracy achieved through conventional maximum a posteriori and Kalman filter estimation approaches. Maximum a posteriori estimation results demonstrate that reduced order tuning parameter vectors can be found that approximate the accuracy of estimating all health parameters directly. Kalman filter estimation results based on the same reduced order tuning parameter vectors demonstrate that significantly improved estimation accuracy can be achieved over the conventional approach of selecting a subset of health parameters to serve as the tuner vector. However, additional development is necessary to fully extend the methodology to Kalman filter-based estimation applications.
NASA Astrophysics Data System (ADS)
Kuruma, Yutetsu
2007-10-01
Self-reproduction is one of main properties that define living cells. In order to explore the self-reproduction process for the study of early cells, and to develop a research line somehow connected to the origin of life, we have built up a constructive ‘synthetic cells (minimal cells)’ approach. The minimal cells approach consists in the investigation of the minimal number of elements to accomplish simple cell-like processes like self-reproduction. Such approach belongs to the field of synthetic biology. The minimal cells are reconstructed from a totally reconstituted cell-free protein synthesis system (PURESYSTEM) and liposome compartments as containers. Based on this approach, we synthesized two membrane proteins (enzymes), GPAT and LPAAT, which are involved in the phosphatidic acid biosynthesis in bacteria. Both membrane proteins were successfully synthesized by PURESYSTEM encapsulated inside POPC liposomes. Additionally, the enzymatic activity of GPAT was restored by mixing the expressed enzyme with lipid and by forming liposomes in situ. Through these experimental evidences, here we present a possible model to achieve self-reproduction in minimal cells. Our results would contribute to the idea that early cells could have been built by an extremely small number of genes.
The NASA modern technology rotors program
NASA Technical Reports Server (NTRS)
Watts, M. E.; Cross, J. L.
1986-01-01
Existing data bases regarding helicopters are based on work conducted on 'old-technology' rotor systems. The Modern Technology Rotors (MTR) Program is to provide extensive data bases on rotor systems using present and emerging technology. The MTR is concerned with modern, four-bladed, rotor systems presently being manufactured or under development. Aspects of MTR philosophy are considered along with instrumentation, the MTR test program, the BV 360 Rotor, and the UH-60 Black Hawk. The program phases include computer modelling, shake test, model-scale test, minimally instrumented flight test, extensively pressure-instrumented-blade flight test, and full-scale wind tunnel test.
Qualitative properties of the minimal model of carbon circulation in the biosphere
NASA Astrophysics Data System (ADS)
Pestunov, Aleksandr; Fedotov, Anatoliy; Medvedev, Sergey
2014-05-01
Substantial changes in the biosphere during recent decades have caused legitimate concern in the international community. The fact that feedbacks between the atmospheric CO2 concentration, global temperature, permafrost, ocean CO2 concentration and air humidity increases the risk of catastrophic phenomena on the planetary scale. The precautionary principle allows us to consider greenhouse effect using the mathematical models of the biosphere-climate system. Minimal models do not allow us to make a quantitative description of the "biosphere-climate" system dynamics, which is determined by the aggregate effect of the set of known climatic and biosphere processes. However, the study of such models makes it possible to understand the qualitative mechanisms of biosphere processes and to evaluate their possible consequences. The global minimal model of long-term dynamics of carbon in biosphere is considered basing on assumption that anthropogenous carbon emissions in atmosphere are absent [1]. Qualitative analysis of the model shows that there exists a set of model parameters (taken from the current estimation ranges), such that the system becomes unstable. It is also shown that external influences on the carbon circulation can lead either to degradation of the biosphere or to global temperature change [2]. This work is aimed at revealing the conditions under which the biosphere model can become unstable, which can result in catastrophic changes in the Earth's biogeocenoses. The minimal model of the biosphere-climate system describes an improbable, but, nevertheless, a possible worst-case scenario of the biosphere evolution takes into consideration only the most dangerous biosphere mechanisms and ignores some climate feedbacks (such as transpiration). This work demonstrates the possibility of implementing the trigger mode in the biosphere, which can lead to dramatic changes in the state of the biosphere even without additional burning of fossil fuels. This mode implementation is possible under parameter values of the biosphere, lying within the ranges of their existing estimates. Hence a potential hazard of any drastic change of the biosphere conditions that may speed up possible shift of the biosphere to a new stable state. References 1. Bartsev S.I., Degermendzhi A.G., Fedotov A.M., Medvedev S.B., Pestunov A.I., Pestunov I.A. The Biosphere Trigger Mechanism in the Minimal Model for the Global Carbon Cycle of the Earth // Doklady Earth Sciences, 2012, Vol. 443, Part 2, pp. 489-492. © Pleiades Publishing, Ltd., 2012. 2. Fedotov A.M., Medvedev S.B., Pestunov A.I., Pestunov I.A., Bartsev S.I., Degermendzhi A.G. Qualitative analysis of the minimal model of carbon dynamics in the biosphere // Computational Technologies. 2012. Vol. 17. N 3. pp. 91-108 (in Russian).
NASA Astrophysics Data System (ADS)
Shahverdi, Masood
The cost and fuel economy of hybrid electrical vehicles (HEVs) are significantly dependent on the power-train energy storage system (ESS). A series HEV with a minimal all-electric mode (AEM) permits minimizing the size and cost of the ESS. This manuscript, pursuing the minimal size tactic, introduces a bandwidth based methodology for designing an efficient ESS. First, for a mid-size reference vehicle, a parametric study is carried out over various minimal-size ESSs, both hybrid (HESS) and non-hybrid (ESS), for finding the highest fuel economy. The results show that a specific type of high power battery with 4.5 kWh capacity can be selected as the winning candidate to study for further minimization. In a second study, following the twin goals of maximizing Fuel Economy (FE) and improving consumer acceptance, a sports car class Series-HEV (SHEV) was considered as a potential application which requires even more ESS minimization. The challenge with this vehicle is to reduce the ESS size compared to 4.5 kWh, because the available space allocation is only one fourth of the allowed battery size in the mid-size study by volume. Therefore, an advanced bandwidth-based controller is developed that allows a hybridized Subaru BRZ model to be realized with a light ESS. The result allows a SHEV to be realized with 1.13 kWh ESS capacity. In a third study, the objective is to find optimum SHEV designs with minimal AEM assumption which cover the design space between the fuel economies in the mid-size car study and the sports car study. Maximizing FE while minimizing ESS cost is more aligned with customer acceptance in the current state of market. The techniques applied to manage the power flow between energy sources of the power-train significantly affect the results of this optimization. A Pareto Frontier, including ESS cost and FE, for a SHEV with limited AEM, is introduced using an advanced bandwidth-based control strategy teamed up with duty ratio control. This controller allows the series hybrid's advantage of tightly managing engine efficiency to be extended to lighter ESS, as compared to the size of the ESS in available products in the market.
Adopting epidemic model to optimize medication and surgical intervention of excess weight
NASA Astrophysics Data System (ADS)
Sun, Ruoyan
2017-01-01
We combined an epidemic model with an objective function to minimize the weighted sum of people with excess weight and the cost of a medication and surgical intervention in the population. The epidemic model is consisted of ordinary differential equations to describe three subpopulation groups based on weight. We introduced an intervention using medication and surgery to deal with excess weight. An objective function is constructed taking into consideration the cost of the intervention as well as the weight distribution of the population. Using empirical data, we show that fixed participation rate reduces the size of obese population but increases the size for overweight. An optimal participation rate exists and decreases with respect to time. Both theoretical analysis and empirical example confirm the existence of an optimal participation rate, u*. Under u*, the weighted sum of overweight (S) and obese (O) population as well as the cost of the program is minimized. This article highlights the existence of an optimal participation rate that minimizes the number of people with excess weight and the cost of the intervention. The time-varying optimal participation rate could contribute to designing future public health interventions of excess weight.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hao; Garzoglio, Gabriele; Ren, Shangping
FermiCloud is a private cloud developed in Fermi National Accelerator Laboratory to provide elastic and on-demand resources for different scientific research experiments. The design goal of the FermiCloud is to automatically allocate resources for different scientific applications so that the QoS required by these applications is met and the operational cost of the FermiCloud is minimized. Our earlier research shows that VM launching overhead has large variations. If such variations are not taken into consideration when making resource allocation decisions, it may lead to poor performance and resource waste. In this paper, we show how we may use an VMmore » launching overhead reference model to minimize VM launching overhead. In particular, we first present a training algorithm that automatically tunes a given refer- ence model to accurately reflect FermiCloud environment. Based on the tuned reference model for virtual machine launching overhead, we develop an overhead-aware-best-fit resource allocation algorithm that decides where and when to allocate resources so that the average virtual machine launching overhead is minimized. The experimental results indicate that the developed overhead-aware-best-fit resource allocation algorithm can significantly improved the VM launching time when large number of VMs are simultaneously launched.« less
SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, B; Gao, H
Purpose: Accelerated dynamic MRI is important for MRI guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI has been an active research area, i.e., sparse sampling in k-t space for accelerated dynamic MRI. This work is to investigate sub-Nyquist dynamic MRI via a previously developed CS model, namely Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify themore » residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from L1-sparsity method and PRISM method with 30% undersampled data and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub- Nyquist MRI reconstruction method based on PRISM is developed with improved image quality from the L1-sparsity method.« less
Singularity-free dynamic equations of spacecraft-manipulator systems
NASA Astrophysics Data System (ADS)
From, Pål J.; Ytterstad Pettersen, Kristin; Gravdahl, Jan T.
2011-12-01
In this paper we derive the singularity-free dynamic equations of spacecraft-manipulator systems using a minimal representation. Spacecraft are normally modeled using Euler angles, which leads to singularities, or Euler parameters, which is not a minimal representation and thus not suited for Lagrange's equations. We circumvent these issues by introducing quasi-coordinates which allows us to derive the dynamics using minimal and globally valid non-Euclidean configuration coordinates. This is a great advantage as the configuration space of a spacecraft is non-Euclidean. We thus obtain a computationally efficient and singularity-free formulation of the dynamic equations with the same complexity as the conventional Lagrangian approach. The closed form formulation makes the proposed approach well suited for system analysis and model-based control. This paper focuses on the dynamic properties of free-floating and free-flying spacecraft-manipulator systems and we show how to calculate the inertia and Coriolis matrices in such a way that this can be implemented for simulation and control purposes without extensive knowledge of the mathematical background. This paper represents the first detailed study of modeling of spacecraft-manipulator systems with a focus on a singularity free formulation using the proposed framework.
A UV-complete Composite Higgs model for Electroweak Symmetry Breaking: Minimal Conformal Technicolor
NASA Astrophysics Data System (ADS)
Tacchi, Ruggero Altair
The Large Hadron Collider is currently collecting data. One of the main goals of the experiment is to find evidence of the mechanism responsible for the breaking of the electroweak symmetry. There are many different models attempting to explain this breaking and traditionally most of them involve the use of supersymmetry near the scale of the breaking. This work is focused on exploring a viable model that is not based on a weakly coupled low scale supersymmetry sector to explain the electroweak symmetry breaking. We build a model based on a new strong interaction, in the fashion of theories commonly called "technicolor", name that is reminiscent of one of the first attempts of explaining the electroweak symmetry breaking using a strong interaction similar to the one whose charges are called colors. We explicitly study the minimal model of conformal technicolor, an SU(2) gauge theory near a strongly coupled conformal fixed point, with conformal symmetry softly broken by technifermion mass terms. Conformal symmetry breaking triggers chiral symmetry breaking in the pattern SU(4) → Sp (4), which gives rise to a pseudo-Nambu-Goldstone boson that can act as a composite Higgs boson. There is an additional composite pseudoscalar A with mass larger than mh and suppressed direct production at LHC. We discuss the electroweak fit in this model in detail. A good fit requires fine tuning at the 10% level. We construct a complete, realistic, and natural UV completion of the model, that explains the origin of quark and lepton masses and mixing angles. We embed conformal technicolor in a supersymmetric theory, with supersymmetry broken at a high scale. The effective theory below the supersymmetry breaking scale is minimal conformal technicolor with an additional light technicolor gaugino that might give rise to an additional pseudo Nambu-Goldstone boson that is observable at the LHC.
NASA Astrophysics Data System (ADS)
Jung, Youngjean
This dissertation concerns the constitutive description of superelasticity in NiTi alloys and the finite element analysis of a corresponding material model at large strains. Constitutive laws for shape-memory alloys subject to biaxial loading, which are based on direct experimental observations, are generally not available. A reliable constitutive model for shape-memory alloys is important for various applications because Nitinol is now widely used in biotechnology devices such as endovascular stents, vena cava filters, dental files, archwires and guidewires, etc. As part of a broader project, tension-torsion tests are conducted on thin-walled tubes (thickness/radius ratio of 1:10) of the polycrystalline superelastic Nitinol using various loading/unloading paths under isothermal conditions. This biaxial loading/unloading test was carefully designed to avoid torsional buckling and strain non-uniformities. A micromechanical constitutive model, algorithmic implementation and numerical simulation of polycrystalline superelastic alloys under biaxial loading are developed. The constitutive model is based on the micromechanical structure of Ni-Ti crystals and accounts for the physical observation of solid-solid phase transformations through the minimization of the Helmholtz energy with dissipation. The model is formulated in finite deformations and incorporates the effect of texture which is of profound significance in the mechanical response of polycrystalline Nitinol tubes. The numerical implementation is based on the constrained minimization of a functional corresponding to the Helmholtz energy with dissipation. Special treatment of loading/unloading conditions is also developed to distinguish between forward/reverse transformation state. Simulations are conducted for thin tubes of Nitinol under tension-torsion, as well as for a simplified model of a biomedical stent.
Geographic information system/watershed model interface
Fisher, Gary T.
1989-01-01
Geographic information systems allow for the interactive analysis of spatial data related to water-resources investigations. A conceptual design for an interface between a geographic information system and a watershed model includes functions for the estimation of model parameter values. Design criteria include ease of use, minimal equipment requirements, a generic data-base management system, and use of a macro language. An application is demonstrated for a 90.1-square-kilometer subbasin of the Patuxent River near Unity, Maryland, that performs automated derivation of watershed parameters for hydrologic modeling.
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.
NASA Astrophysics Data System (ADS)
Wu, H.; Zhou, L.; Xu, T.; Fang, W. L.; He, W. G.; Liu, H. M.
2017-11-01
In order to improve the situation of voltage violation caused by the grid-connection of photovoltaic (PV) system in a distribution network, a bi-level programming model is proposed for battery energy storage system (BESS) deployment. The objective function of inner level programming is to minimize voltage violation, with the power of PV and BESS as the variables. The objective function of outer level programming is to minimize the comprehensive function originated from inner layer programming and all the BESS operating parameters, with the capacity and rated power of BESS as the variables. The differential evolution (DE) algorithm is applied to solve the model. Based on distribution network operation scenarios with photovoltaic generation under multiple alternative output modes, the simulation results of IEEE 33-bus system prove that the deployment strategy of BESS proposed in this paper is well adapted to voltage violation regulation invariable distribution network operation scenarios. It contributes to regulating voltage violation in distribution network, as well as to improve the utilization of PV systems.
Zhang, Cheng; Zhang, Tao; Zheng, Jian; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui
2015-01-01
In recent years, X-ray computed tomography (CT) is becoming widely used to reveal patient's anatomical information. However, the side effect of radiation, relating to genetic or cancerous diseases, has caused great public concern. The problem is how to minimize radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparse constraint, which makes it possible and effective to get a reconstruction image of high quality in the undersampling situation. On the other hand, a preliminary attempt of low-dose CT reconstruction based on dictionary learning seems to be another effective choice. But some critical parameters, such as the regularization parameter, cannot be determined by detecting datasets. In this paper, we propose a reweighted objective function that contributes to a numerical calculation model of the regularization parameter. A number of experiments demonstrate that this strategy performs well with better reconstruction images and saving of a large amount of time.
Stable Local Volatility Calibration Using Kernel Splines
NASA Astrophysics Data System (ADS)
Coleman, Thomas F.; Li, Yuying; Wang, Cheng
2010-09-01
We propose an optimization formulation using L1 norm to ensure accuracy and stability in calibrating a local volatility function for option pricing. Using a regularization parameter, the proposed objective function balances the calibration accuracy with the model complexity. Motivated by the support vector machine learning, the unknown local volatility function is represented by a kernel function generating splines and the model complexity is controlled by minimizing the 1-norm of the kernel coefficient vector. In the context of the support vector regression for function estimation based on a finite set of observations, this corresponds to minimizing the number of support vectors for predictability. We illustrate the ability of the proposed approach to reconstruct the local volatility function in a synthetic market. In addition, based on S&P 500 market index option data, we demonstrate that the calibrated local volatility surface is simple and resembles the observed implied volatility surface in shape. Stability is illustrated by calibrating local volatility functions using market option data from different dates.
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Gray, Wayne D; Sims, Chris R; Fu, Wai-Tat; Schoelles, Michael J
2006-07-01
Soft constraints hypothesis (SCH) is a rational analysis approach that holds that the mixture of perceptual-motor and cognitive resources allocated for interactive behavior is adjusted based on temporal cost-benefit tradeoffs. Alternative approaches maintain that cognitive resources are in some sense protected or conserved in that greater amounts of perceptual-motor effort will be expended to conserve lesser amounts of cognitive effort. One alternative, the minimum memory hypothesis (MMH), holds that people favor strategies that minimize the use of memory. SCH is compared with MMH across 3 experiments and with predictions of an Ideal Performer Model that uses ACT-R's memory system in a reinforcement learning approach that maximizes expected utility by minimizing time. Model and data support the SCH view of resource allocation; at the under 1000-ms level of analysis, mixtures of cognitive and perceptual-motor resources are adjusted based on their cost-benefit tradeoffs for interactive behavior. ((c) 2006 APA, all rights reserved).
Leff, Daniel Richard; Orihuela-Espina, Felipe; Leong, Julian; Darzi, Ara; Yang, Guang-Zhong
2008-01-01
Learning to perform Minimally Invasive Surgery (MIS) requires considerable attention, concentration and spatial ability. Theoretically, this leads to activation in executive control (prefrontal) and visuospatial (parietal) centres of the brain. A novel approach is presented in this paper for analysing the flow of fronto-parietal haemodynamic behaviour and the associated variability between subjects. Serially acquired functional Near Infrared Spectroscopy (fNIRS) data from fourteen laparoscopic novices at different stages of learning is projected into a low-dimensional 'geospace', where sequentially acquired data is mapped to different locations. A trip distribution matrix based on consecutive directed trips between locations in the geospace reveals confluent fronto-parietal haemodynamic changes and a gravity model is applied to populate this matrix. To model global convergence in haemodynamic behaviour, a Markov chain is constructed and by comparing sequential haemodynamic distributions to the Markov's stationary distribution, inter-subject variability in learning an MIS task can be identified.
Optimality Principles for Model-Based Prediction of Human Gait
Ackermann, Marko; van den Bogert, Antonie J.
2010-01-01
Although humans have a large repertoire of potential movements, gait patterns tend to be stereotypical and appear to be selected according to optimality principles such as minimal energy. When applied to dynamic musculoskeletal models such optimality principles might be used to predict how a patient’s gait adapts to mechanical interventions such as prosthetic devices or surgery. In this paper we study the effects of different performance criteria on predicted gait patterns using a 2D musculoskeletal model. The associated optimal control problem for a family of different cost functions was solved utilizing the direct collocation method. It was found that fatigue-like cost functions produced realistic gait, with stance phase knee flexion, as opposed to energy-related cost functions which avoided knee flexion during the stance phase. We conclude that fatigue minimization may be one of the primary optimality principles governing human gait. PMID:20074736
NASA Astrophysics Data System (ADS)
Yusriski, R.; Sukoyo; Samadhi, T. M. A. A.; Halim, A. H.
2018-03-01
This research deals with a single machine batch scheduling model considering the influenced of learning, forgetting, and machine deterioration effects. The objective of the model is to minimize total inventory holding cost, and the decision variables are the number of batches (N), batch sizes (Q[i], i = 1, 2, .., N) and the sequence of processing the resulting batches. The parts to be processed are received at the right time and the right quantities, and all completed parts must be delivered at a common due date. We propose a heuristic procedure based on the Lagrange method to solve the problem. The effectiveness of the procedure is evaluated by comparing the resulting solution to the optimal solution obtained from the enumeration procedure using the integer composition technique and shows that the average effectiveness is 94%.
The Properties and the Nature of Light: The Study of Newton's Work and the Teaching of Optics
ERIC Educational Resources Information Center
Raftopoulos, Athanasios; Kalyfommatou, Niki; Constantinou, Constantinos P.
2005-01-01
The history of science shows that for each scientific issue there may be more than one models that are simultaneously accepted by the scientific community. One such case concerns the wave and corpuscular models of light. Newton claimed that he had proved some properties of light based on a set of minimal assumptions, without any commitments to any…
Extended robust support vector machine based on financial risk minimization.
Takeda, Akiko; Fujiwara, Shuhei; Kanamori, Takafumi
2014-11-01
Financial risk measures have been used recently in machine learning. For example, ν-support vector machine ν-SVM) minimizes the conditional value at risk (CVaR) of margin distribution. The measure is popular in finance because of the subadditivity property, but it is very sensitive to a few outliers in the tail of the distribution. We propose a new classification method, extended robust SVM (ER-SVM), which minimizes an intermediate risk measure between the CVaR and value at risk (VaR) by expecting that the resulting model becomes less sensitive than ν-SVM to outliers. We can regard ER-SVM as an extension of robust SVM, which uses a truncated hinge loss. Numerical experiments imply the ER-SVM's possibility of achieving a better prediction performance with proper parameter setting.
Real-Time Control of Lean Blowout in a Turbine Engine for Minimizing No(x) Emissions
NASA Technical Reports Server (NTRS)
Zinn, Ben
2004-01-01
This report describes research on the development and demonstration of a controlled combustor operates with minimal NO, emissions, thus meeting one of NASA s UEET program goals. NO(x) emissions have been successfully minimized by operating a premixed, lean burning combustor (modeling a lean prevaporized, premixed LPP combustor) safely near its lean blowout (LBO) limit over a range of operating conditions. This was accomplished by integrating the combustor with an LBO precursor sensor and closed-loop, rule-based control system that allowed the combustor to operate far closer to the point of LBO than an uncontrolled combustor would be allowed to in a current engine. Since leaner operation generally leads to lower NO, emissions, engine NO, was reduced without loss of safety.
Chen, Xi; Jiang, Xiling; Doddareddy, Rajitha; Geist, Brian; McIntosh, Thomas; Jusko, William J; Zhou, Honghui; Wang, Weirong
2018-04-01
The interleukin (IL)-23/T h 17/IL-17 immune pathway has been identified to play an important role in the pathogenesis of psoriasis. Many therapeutic proteins targeting IL-23 or IL-17 are currently under development for the treatment of psoriasis. In the present study, a mechanistic pharmacokinetics (PK)/pharmacodynamics (PD) study was conducted to assess the target-binding and disposition kinetics of a monoclonal antibody (mAb), CNTO 3723, and its soluble target, mouse IL-23, in an IL-23-induced psoriasis-like mouse model. A minimal physiologically based pharmacokinetic model with target-mediated drug disposition features was developed to quantitatively assess the kinetics and interrelationship between CNTO 3723 and exogenously administered, recombinant mouse IL-23 in both serum and lesional skin site. Furthermore, translational applications of the developed model were evaluated with incorporation of human PK for ustekinumab, an anti-human IL-23/IL-12 mAb developed for treatment of psoriasis, and human disease pathophysiology information in psoriatic patients. The results agreed well with the observed clinical data for ustekinumab. Our work provides an example on how mechanism-based PK/PD modeling can be applied during early drug discovery and how preclinical data can be used for human efficacious dose projection and guide decision making during early clinical development of therapeutic proteins. Copyright © 2018 by The Author(s).
NASA Astrophysics Data System (ADS)
Cobo-Lopez, Sergio; Saeed Bahramy, Mohammad; Arita, Ryotaro; Akbari, Alireza; Eremin, Ilya
2018-04-01
We develop the realistic minimal electronic model for recently discovered BiS2 superconductors including the spin–orbit (SO) coupling based on the first-principles band structure calculations. Due to strong SO coupling, characteristic for the Bi-based systems, the tight-binding low-energy model necessarily includes p x , p y , and p z orbitals. We analyze a potential Cooper-pairing instability from purely repulsive interaction for the moderate electronic correlations using the so-called leading angular harmonics approximation. For small and intermediate doping concentrations we find the dominant instabilities to be {d}{x2-{y}2}-wave, and s ±-wave symmetries, respectively. At the same time, in the absence of the sizable spin fluctuations the intra and interband Coulomb repulsions are of the same strength, which yield the strongly anisotropic behavior of the superconducting gaps on the Fermi surface. This agrees with recent angle resolved photoemission spectroscopy findings. In addition, we find that the Fermi surface topology for BiS2 layered systems at large electron doping can resemble the doped iron-based pnictide superconductors with electron and hole Fermi surfaces maintaining sufficient nesting between them. This could provide further boost to increase T c in these systems.
NASA Astrophysics Data System (ADS)
Shokri, Ali
2017-04-01
The hydrological cycle contains a wide range of linked surface and subsurface flow processes. In spite of natural connections between surface water and groundwater, historically, these processes have been studied separately. The current trend in hydrological distributed physically based model development is to combine distributed surface water models with distributed subsurface flow models. This combination results in a better estimation of the temporal and spatial variability of the interaction between surface and subsurface flow. On the other hand, simple lumped models such as the Soil Conservation Service Curve Number (SCS-CN) are still quite common because of their simplicity. In spite of the popularity of the SCS-CN method, there have always been concerns about the ambiguity of the SCS-CN method in explaining physical mechanism of rainfall-runoff processes. The aim of this study is to minimize these ambiguity by establishing a method to find an equivalence of the SCS-CN solution to the DrainFlow model, which is a fully distributed physically based coupled surface-subsurface flow model. In this paper, two hypothetical v-catchment tests are designed and the direct runoff from a storm event are calculated by both SCS-CN and DrainFlow models. To find a comparable solution to runoff prediction through the SCS-CN and DrainFlow, the variance between runoff predictions by the two models are minimized by changing Curve Number (CN) and initial abstraction (Ia) values. Results of this study have led to a set of lumped model parameters (CN and Ia) for each catchment that is comparable to a set of physically based parameters including hydraulic conductivity, Manning roughness coefficient, ground surface slope, and specific storage. Considering the lack of physical interpretation in CN and Ia is often argued as a weakness of SCS-CN method, the novel method in this paper gives a physical explanation to CN and Ia.
Minimizing student’s faults in determining the design of experiment through inquiry-based learning
NASA Astrophysics Data System (ADS)
Nilakusmawati, D. P. E.; Susilawati, M.
2017-10-01
The purpose of this study were to describe the used of inquiry method in an effort to minimize student’s fault in designing an experiment and to determine the effectiveness of the implementation of the inquiry method in minimizing student’s faults in designing experiments on subjects experimental design. This type of research is action research participants, with a model of action research design. The data source were students of the fifth semester who took a subject of experimental design at Mathematics Department, Faculty of Mathematics and Natural Sciences, Udayana University. Data was collected through tests, interviews, and observations. The hypothesis was tested by t-test. The result showed that the implementation of inquiry methods to minimize of students fault in designing experiments, analyzing experimental data, and interpret them in cycle 1 students can reduce fault by an average of 10.5%. While implementation in Cycle 2, students managed to reduce fault by an average of 8.78%. Based on t-test results can be concluded that the inquiry method effectively used to minimize of student’s fault in designing experiments, analyzing experimental data, and interpreting them. The nature of the teaching materials on subject of Experimental Design that demand the ability of students to think in a systematic, logical, and critical in analyzing the data and interpret the test cases makes the implementation of this inquiry become the proper method. In addition, utilization learning tool, in this case the teaching materials and the students worksheet is one of the factors that makes this inquiry method effectively minimizes of student’s fault when designing experiments.
Sakhteman, Amirhossein; Zare, Bijan
2016-01-01
An interactive application, Modelface, was presented for Modeller software based on windows platform. The application is able to run all steps of homology modeling including pdb to fasta generation, running clustal, model building and loop refinement. Other modules of modeler including energy calculation, energy minimization and the ability to make single point mutations in the PDB structures are also implemented inside Modelface. The API is a simple batch based application with no memory occupation and is free of charge for academic use. The application is also able to repair missing atom types in the PDB structures making it suitable for many molecular modeling studies such as docking and molecular dynamic simulation. Some successful instances of modeling studies using Modelface are also reported. PMID:28243276
NASA Astrophysics Data System (ADS)
Niakan, F.; Vahdani, B.; Mohammadi, M.
2015-12-01
This article proposes a multi-objective mixed-integer model to optimize the location of hubs within a hub network design problem under uncertainty. The considered objectives include minimizing the maximum accumulated travel time, minimizing the total costs including transportation, fuel consumption and greenhouse emissions costs, and finally maximizing the minimum service reliability. In the proposed model, it is assumed that for connecting two nodes, there are several types of arc in which their capacity, transportation mode, travel time, and transportation and construction costs are different. Moreover, in this model, determining the capacity of the hubs is part of the decision-making procedure and balancing requirements are imposed on the network. To solve the model, a hybrid solution approach is utilized based on inexact programming, interval-valued fuzzy programming and rough interval programming. Furthermore, a hybrid multi-objective metaheuristic algorithm, namely multi-objective invasive weed optimization (MOIWO), is developed for the given problem. Finally, various computational experiments are carried out to assess the proposed model and solution approaches.
NASA Astrophysics Data System (ADS)
Alfi, V.; Cristelli, M.; Pietronero, L.; Zaccaria, A.
2009-02-01
We present a detailed study of the statistical properties of the Agent Based Model introduced in paper I [Eur. Phys. J. B, DOI: 10.1140/epjb/e2009-00028-4] and of its generalization to the multiplicative dynamics. The aim of the model is to consider the minimal elements for the understanding of the origin of the stylized facts and their self-organization. The key elements are fundamentalist agents, chartist agents, herding dynamics and price behavior. The first two elements correspond to the competition between stability and instability tendencies in the market. The herding behavior governs the possibility of the agents to change strategy and it is a crucial element of this class of models. We consider a linear approximation for the price dynamics which permits a simple interpretation of the model dynamics and, for many properties, it is possible to derive analytical results. The generalized non linear dynamics results to be extremely more sensible to the parameter space and much more difficult to analyze and control. The main results for the nature and self-organization of the stylized facts are, however, very similar in the two cases. The main peculiarity of the non linear dynamics is an enhancement of the fluctuations and a more marked evidence of the stylized facts. We will also discuss some modifications of the model to introduce more realistic elements with respect to the real markets.
Rowe, Rachel K.; Harrison, Jordan L.; Thomas, Theresa C.; Pauly, James R.; Adelson, P. David; Lifshitz, Jonathan
2013-01-01
The use of animal modeling in traumatic brain injury (TBI) research is justified by the lack of sufficiently comprehensive in vitro and computer modeling that incorporates all components of the neurovascular unit. Valid animal modeling of TBI requires accurate replication of both the mechanical forces and secondary injury conditions observed in human patients. Regulatory requirements for animal modeling emphasize the administration of appropriate anesthetics and analgesics unless withholding these drugs is scientifically justified. The objective of this review is to present scientific justification for standardizing the use of anesthetics and analgesics, within a study, when modeling TBI in order to preserve study validity. Evidence for the interference of anesthetics and analgesics in the natural course of brain injury calls for consistent consideration of pain management regimens when conducting TBI research. Anesthetics administered at the time of or shortly after induction of brain injury can alter cognitive, motor, and histological outcomes following TBI. A consistent anesthesia protocol based on experimental objectives within each individual study is imperative when conducting TBI studies to control for the confounding effects of anesthesia on outcome parameters. Experimental studies that replicate the clinical condition are essential to gain further understanding and evaluate possible treatments for TBI. However, with animal models of TBI it is essential that investigators assure a uniform drug delivery protocol that minimizes confounding variables, while minimizing pain and suffering. PMID:23877609
Dynamic Modeling, Model-Based Control, and Optimization of Solid Oxide Fuel Cells
NASA Astrophysics Data System (ADS)
Spivey, Benjamin James
2011-07-01
Solid oxide fuel cells are a promising option for distributed stationary power generation that offers efficiencies ranging from 50% in stand-alone applications to greater than 80% in cogeneration. To advance SOFC technology for widespread market penetration, the SOFC should demonstrate improved cell lifetime and load-following capability. This work seeks to improve lifetime through dynamic analysis of critical lifetime variables and advanced control algorithms that permit load-following while remaining in a safe operating zone based on stress analysis. Control algorithms typically have addressed SOFC lifetime operability objectives using unconstrained, single-input-single-output control algorithms that minimize thermal transients. Existing SOFC controls research has not considered maximum radial thermal gradients or limits on absolute temperatures in the SOFC. In particular, as stress analysis demonstrates, the minimum cell temperature is the primary thermal stress driver in tubular SOFCs. This dissertation presents a dynamic, quasi-two-dimensional model for a high-temperature tubular SOFC combined with ejector and prereformer models. The model captures dynamics of critical thermal stress drivers and is used as the physical plant for closed-loop control simulations. A constrained, MIMO model predictive control algorithm is developed and applied to control the SOFC. Closed-loop control simulation results demonstrate effective load-following, constraint satisfaction for critical lifetime variables, and disturbance rejection. Nonlinear programming is applied to find the optimal SOFC size and steady-state operating conditions to minimize total system costs.
Flattening the inflaton potential beyond minimal gravity
NASA Astrophysics Data System (ADS)
Lee, Hyun Min
2018-01-01
We review the status of the Starobinsky-like models for inflation beyond minimal gravity and discuss the unitarity problem due to the presence of a large non-minimal gravity coupling. We show that the induced gravity models allow for a self-consistent description of inflation and discuss the implications of the inflaton couplings to the Higgs field in the Standard Model.
A manifold learning approach to data-driven computational materials and processes
NASA Astrophysics Data System (ADS)
Ibañez, Ruben; Abisset-Chavanne, Emmanuelle; Aguado, Jose Vicente; Gonzalez, David; Cueto, Elias; Duval, Jean Louis; Chinesta, Francisco
2017-10-01
Standard simulation in classical mechanics is based on the use of two very different types of equations. The first one, of axiomatic character, is related to balance laws (momentum, mass, energy, …), whereas the second one consists of models that scientists have extracted from collected, natural or synthetic data. In this work we propose a new method, able to directly link data to computers in order to perform numerical simulations. These simulations will employ universal laws while minimizing the need of explicit, often phenomenological, models. They are based on manifold learning methodologies.
Stabilized High-order Galerkin Methods Based on a Parameter-free Dynamic SGS Model for LES
2015-01-01
stresses obtained via Dyn-SGS are residual-based, the effect of the artificial diffusion is minimal in the regions where the solution is smooth. The direct...used in the analysis of the results rather than in the definition and analysis of the LES equations described from now on. 2.1 LES and the Dyn-SGS model... definition is sucient given the scope of the current study; nevertheless, a more proper defi- nition of for LES should be used in future work
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brinkman, J.J.; Griffioen, P.S.; Groot, S.
1987-03-01
The Netherlands have a rather complex water-management system consisting of a number of major rivers, canals, lakes and ditches. Water-quantity management on a regional scale is necessary for an effective water-quality policy. To support water management, a computer model was developed that includes both water quality and water quantity, based on three submodels: ABOPOL for the water movement, DELWAQ for the calculation of water quality variables and BLOOM-II for the phytoplankton growth. The northern province of Friesland was chosen as a test case for the integrated model to be developed, where water quality is highly related to the water distributionmore » and the main trade-off is minimizing the intake of (eutrophicated) alien water in order to minimize external nutrient load and maximizing the intake in order to flush channels and lakes. The results of the application of these models to this and to a number of hypothetical future situations are described.« less
Research on Collection System Optimal Design of Wind Farm with Obstacles
NASA Astrophysics Data System (ADS)
Huang, W.; Yan, B. Y.; Tan, R. S.; Liu, L. F.
2017-05-01
To the collection system optimal design of offshore wind farm, the factors considered are not only the reasonable configuration of the cable and switch, but also the influence of the obstacles on the topology design of the offshore wind farm. This paper presents a concrete topology optimization algorithm with obstacles. The minimal area rectangle encasing box of the obstacle is obtained by using the method of minimal area encasing box. Then the optimization algorithm combining the advantages of Dijkstra algorithm and Prim algorithm is used to gain the scheme of avoidance obstacle path planning. Finally a fuzzy comprehensive evaluation model based on the analytic hierarchy process is constructed to compare the performance of the different topologies. Case studies demonstrate the feasibility of the proposed algorithm and model.
Higgs boson from an extended symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbieri, Riccardo; Bellazzini, Brando; Rychkov, Vyacheslav S.
The variety of ideas put forward in the context of a composite picture for the Higgs boson calls for a simple and effective description of the related phenomenology. Such a description is given here by means of a minimal model and is explicitly applied to the example of a Higgs-top sector from an SO(5) symmetry. We discuss the spectrum, the electroweak precision tests, B-physics, and naturalness. We show the difficulty in complying with the different constraints. The extended gauge sector relative to the standard SU(2)xU(1), if there is any, has little or no impact on these considerations. We also discussmore » the relation of the minimal model with its 'little Higgs' or holographic extensions based on the same symmetry.« less
Electroweak symmetry breaking and collider signatures in the next-to-minimal composite Higgs model
NASA Astrophysics Data System (ADS)
Niehoff, Christoph; Stangl, Peter; Straub, David M.
2017-04-01
We conduct a detailed numerical analysis of the composite pseudo-Nambu-Goldstone Higgs model based on the next-to-minimal coset SO(6)/SO(5) ≅ SU(4)/Sp(4), featuring an additional SM singlet scalar in the spectrum, which we allow to mix with the Higgs boson. We identify regions in parameter space compatible with all current exper-imental constraints, including radiative electroweak symmetry breaking, flavour physics, and direct searches at colliders. We find the additional scalar, with a mass predicted to be below a TeV, to be virtually unconstrained by current LHC data, but potentially in reach of run 2 searches. Promising indirect searches include rare semi-leptonic B decays, CP violation in B s mixing, and the electric dipole moment of the neutron.
Biological applications of phase-contrast electron microscopy.
Nagayama, Kuniaki
2014-01-01
Here, I review the principles and applications of phase-contrast electron microscopy using phase plates. First, I develop the principle of phase contrast based on a minimal model of microscopy, introducing a double Fourier-transform process to mathematically formulate the image formation. Next, I explain four phase-contrast (PC) schemes, defocus PC, Zernike PC, Hilbert differential contrast, and schlieren optics, as image-filtering processes in the context of the minimal model, with particular emphases on the Zernike PC and corresponding Zernike phase plates. Finally, I review applications of Zernike PC cryo-electron microscopy to biological systems such as protein molecules, virus particles, and cells, including single-particle analysis to delineate three-dimensional (3D) structures of protein and virus particles and cryo-electron tomography to reconstruct 3D images of complex protein systems and cells.
Price, A.; Peterson, James T.
2010-01-01
Stream fish managers often use fish sample data to inform management decisions affecting fish populations. Fish sample data, however, can be biased by the same factors affecting fish populations. To minimize the effect of sample biases on decision making, biologists need information on the effectiveness of fish sampling methods. We evaluated single-pass backpack electrofishing and seining combined with electrofishing by following a dual-gear, mark–recapture approach in 61 blocknetted sample units within first- to third-order streams. We also estimated fish movement out of unblocked units during sampling. Capture efficiency and fish abundances were modeled for 50 fish species by use of conditional multinomial capture–recapture models. The best-approximating models indicated that capture efficiencies were generally low and differed among species groups based on family or genus. Efficiencies of single-pass electrofishing and seining combined with electrofishing were greatest for Catostomidae and lowest for Ictaluridae. Fish body length and stream habitat characteristics (mean cross-sectional area, wood density, mean current velocity, and turbidity) also were related to capture efficiency of both methods, but the effects differed among species groups. We estimated that, on average, 23% of fish left the unblocked sample units, but net movement varied among species. Our results suggest that (1) common warmwater stream fish sampling methods have low capture efficiency and (2) failure to adjust for incomplete capture may bias estimates of fish abundance. We suggest that managers minimize bias from incomplete capture by adjusting data for site- and species-specific capture efficiency and by choosing sampling gear that provide estimates with minimal bias and variance. Furthermore, if block nets are not used, we recommend that managers adjust the data based on unconditional capture efficiency.
Automating the selection of standard parallels for conic map projections
NASA Astrophysics Data System (ADS)
Šavriǒ, Bojan; Jenny, Bernhard
2016-05-01
Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes. Conic projections are appropriate for these cases because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. There also exist more sophisticated methods that determine standard parallels such that distortion in the mapped area is minimized. These methods are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area. The spatial extent is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps-each with a different spatial extent and computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.
Identifying chemicals that provide a specific function within a product, yet have minimal impact on the human body or environment, is the goal of most formulation chemists and engineers practicing green chemistry. We present a methodology to identify potential chemical functional...
Adjacency Matrix-Based Transmit Power Allocation Strategies in Wireless Sensor Networks
Consolini, Luca; Medagliani, Paolo; Ferrari, Gianluigi
2009-01-01
In this paper, we present an innovative transmit power control scheme, based on optimization theory, for wireless sensor networks (WSNs) which use carrier sense multiple access (CSMA) with collision avoidance (CA) as medium access control (MAC) protocol. In particular, we focus on schemes where several remote nodes send data directly to a common access point (AP). Under the assumption of finite overall network transmit power and low traffic load, we derive the optimal transmit power allocation strategy that minimizes the packet error rate (PER) at the AP. This approach is based on modeling the CSMA/CA MAC protocol through a finite state machine and takes into account the network adjacency matrix, depending on the transmit power distribution and determining the network connectivity. It will be then shown that the transmit power allocation problem reduces to a convex constrained minimization problem. Our results show that, under the assumption of low traffic load, the power allocation strategy, which guarantees minimal delay, requires the maximization of network connectivity, which can be equivalently interpreted as the maximization of the number of non-zero entries of the adjacency matrix. The obtained theoretical results are confirmed by simulations for unslotted Zigbee WSNs. PMID:22346705
Minimal Model of Prey Localization through the Lateral-Line System
NASA Astrophysics Data System (ADS)
Franosch, Jan-Moritz P.; Sobotka, Marion C.; Elepfandt, Andreas; van Hemmen, J. Leo
2003-10-01
The clawed frog Xenopus is an aquatic predator catching prey at night by detecting water movements caused by its prey. We present a general method, a “minimal model” based on a minimum-variance estimator, to explain prey detection through the frog's many lateral-line organs, even in case several of them are defunct. We show how waveform reconstruction allows Xenopus' neuronal system to determine both the direction and the character of the prey and even to distinguish two simultaneous wave sources. The results can be applied to many aquatic amphibians, fish, or reptiles such as crocodilians.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mayle, Scott; Gupta, Tanuj; Davis, Sam
Monitoring of the intrinsic temperature and the thermal management is discussed for the carbon nanotube nano-circuits. The experimental results concerning fabricating and testing of a thermometer able to monitor the intrinsic temperature on nanoscale are reported. We also suggest a model which describes a bi-metal multilayer system able to filter the heat flow, based on separating the electron and phonon components one from another. The bi-metal multilayer structure minimizes the phonon component of the heat flow, while retaining the electronic part. The method allows one to improve the overall performance of the electronic nano-circuits due to minimizing the energy dissipation.
Bayard, David S.; Neely, Michael
2016-01-01
An experimental design approach is presented for individualized therapy in the special case where the prior information is specified by a nonparametric (NP) population model. Here, a nonparametric model refers to a discrete probability model characterized by a finite set of support points and their associated weights. An important question arises as to how to best design experiments for this type of model. Many experimental design methods are based on Fisher Information or other approaches originally developed for parametric models. While such approaches have been used with some success across various applications, it is interesting to note that they largely fail to address the fundamentally discrete nature of the nonparametric model. Specifically, the problem of identifying an individual from a nonparametric prior is more naturally treated as a problem of classification, i.e., to find a support point that best matches the patient’s behavior. This paper studies the discrete nature of the NP experiment design problem from a classification point of view. Several new insights are provided including the use of Bayes Risk as an information measure, and new alternative methods for experiment design. One particular method, denoted as MMopt (Multiple-Model Optimal), will be examined in detail and shown to require minimal computation while having distinct advantages compared to existing approaches. Several simulated examples, including a case study involving oral voriconazole in children, are given to demonstrate the usefulness of MMopt in pharmacokinetics applications. PMID:27909942
Kou, Weibin; Chen, Xumei; Yu, Lei; Gong, Huibo
2018-04-18
Most existing signal timing models are aimed to minimize the total delay and stops at intersections, without considering environmental factors. This paper analyzes the trade-off between vehicle emissions and traffic efficiencies on the basis of field data. First, considering the different operating modes of cruising, acceleration, deceleration, and idling, field data of emissions and Global Positioning System (GPS) are collected to estimate emission rates for heavy-duty and light-duty vehicles. Second, multiobjective signal timing optimization model is established based on a genetic algorithm to minimize delay, stops, and emissions. Finally, a case study is conducted in Beijing. Nine scenarios are designed considering different weights of emission and traffic efficiency. The results compared with those using Highway Capacity Manual (HCM) 2010 show that signal timing optimized by the model proposed in this paper can decrease vehicles delay and emissions more significantly. The optimization model can be applied in different cities, which provides supports for eco-signal design and development. Vehicle emissions are heavily at signal intersections in urban area. The multiobjective signal timing optimization model is proposed considering the trade-off between vehicle emissions and traffic efficiencies on the basis of field data. The results indicate that signal timing optimized by the model proposed in this paper can decrease vehicle emissions and delays more significantly. The optimization model can be applied in different cities, which provides supports for eco-signal design and development.
Bayard, David S; Neely, Michael
2017-04-01
An experimental design approach is presented for individualized therapy in the special case where the prior information is specified by a nonparametric (NP) population model. Here, a NP model refers to a discrete probability model characterized by a finite set of support points and their associated weights. An important question arises as to how to best design experiments for this type of model. Many experimental design methods are based on Fisher information or other approaches originally developed for parametric models. While such approaches have been used with some success across various applications, it is interesting to note that they largely fail to address the fundamentally discrete nature of the NP model. Specifically, the problem of identifying an individual from a NP prior is more naturally treated as a problem of classification, i.e., to find a support point that best matches the patient's behavior. This paper studies the discrete nature of the NP experiment design problem from a classification point of view. Several new insights are provided including the use of Bayes Risk as an information measure, and new alternative methods for experiment design. One particular method, denoted as MMopt (multiple-model optimal), will be examined in detail and shown to require minimal computation while having distinct advantages compared to existing approaches. Several simulated examples, including a case study involving oral voriconazole in children, are given to demonstrate the usefulness of MMopt in pharmacokinetics applications.
NASA Astrophysics Data System (ADS)
Foufoula-Georgiou, E.; Czuba, J. A.; Belmont, P.; Wilcock, P. R.; Gran, K. B.; Kumar, P.
2015-12-01
Climatic trends and agricultural intensification in Midwestern U.S. landscapes has contributed to hydrologic regime shifts and a cascade of changes to water quality and river ecosystems. Informing management and policy to mitigate undesired consequences requires a careful scientific analysis that includes data-based inference and conceptual/physical modeling. It also calls for a systems approach that sees beyond a single stream to the whole watershed, favoring the adoption of minimal complexity rather than highly parameterized models for scenario evaluation and comparison. Minimal complexity models can focus on key dynamic processes of the system of interest, reducing problems of model structure bias and equifinality. Here we present a comprehensive analysis of climatic, hydrologic, and ecologic trends in the Minnesota River basin, a 45,000 km2 basin undergoing continuous agricultural intensification and suffering from declining water quality and aquatic biodiversity. We show that: (a) it is easy to arrive at an erroneous view of the system using traditional analyses and modeling tools; (b) even with a well-founded understanding of the key drivers and processes contributing to the problem, there are multiple pathways for minimizing/reversing environmental degradation; and (c) addressing the underlying driver of change (i.e., increased streamflows and reduced water storage due to agricultural drainage practices) by restoring a small amount of water storage in the landscape results in multiple non-linear improvements in downstream water quality. We argue that "optimization" between ecosystem services and economic considerations requires simple modeling frameworks, which include the most essential elements of the whole system and allow for evaluation of alternative management scenarios. Science-based approaches informing management and policy are urgent in this region calling for a new era of watershed management to new and accelerating stressors at the intersection of the food-water-energy-environment nexus.
Glaser, Robert; Venus, Joachim
2017-04-01
The data presented in this article are related to the research article entitled "Model-based characterization of growth performance and l-lactic acid production with high optical purity by thermophilic Bacillus coagulans in a lignin-supplemented mixed substrate medium (R. Glaser and J. Venus, 2016) [1]". This data survey provides the information on characterization of three Bacillus coagulans strains. Information on cofermentation of lignocellulose-related sugars in lignin-containing media is given. Basic characterization data are supported by optical-density high-throughput screening and parameter adjustment to logistic growth models. Lab scale fermentation procedures are examined by model adjustment of a Monod kinetics-based growth model. Lignin consumption is analyzed using the data on decolorization of a lignin-supplemented minimal medium.
Frampton, Sally; Kneebone, Roger L.
2017-01-01
Abstract The term ‘minimally invasive’ was coined in 1986 to describe a range of procedures that involved making very small incisions or no incision at all for diseases traditionally treated by open surgery. We examine this major shift in British medical practice as a means of probing the nature of surgical innovation in the twentieth century. We first consider how concerns regarding surgical invasiveness had long been present in surgery, before examining how changing notions of post-operative care formed a foundation for change. We then go on to focus on a professional network involved in the promotion of minimally invasive therapy led by the urologist John Wickham. The minimally invasive movement, we contend, brought into focus tensions between surgical innovation and the evidence-based model of medical practice. Premised upon professional collaborations beyond surgery and a re-positioning of the patient role, we show how the movement elucidated changing notions of surgical authority. PMID:29713119
NASA Astrophysics Data System (ADS)
Chabab, M.; El Batoul, A.; Lahbas, A.; Oulne, M.
2018-05-01
Based on the minimal length concept, inspired by Heisenberg algebra, a closed analytical formula is derived for the energy spectrum of the prolate γ-rigid Bohr-Mottelson Hamiltonian of nuclei, within a quantum perturbation method (QPM), by considering a scaled Davidson potential in β shape variable. In the resulting solution, called X(3)-D-ML, the ground state and the first β-band are all studied as a function of the free parameters. The fact of introducing the minimal length concept with a QPM makes the model very flexible and a powerful approach to describe nuclear collective excitations of a variety of vibrational-like nuclei. The introduction of scaling parameters in the Davidson potential enables us to get a physical minimum of this latter in comparison with previous works. The analysis of the corrected wave function, as well as the probability density distribution, shows that the minimal length parameter has a physical upper bound limit.
Radhakrishnan, Nitin; Park, Jongwon; Kim, Chang-Soo
2012-01-01
Utilizing a simple fluidic structure, we demonstrate the improved performance of oxidase-based enzymatic biosensors. Electrolysis of water is utilized to generate bubbles to manipulate the oxygen microenvironment close to the biosensor in a fluidic channel. For the proper enzyme reactions to occur, a simple mechanical procedure of manipulating bubbles was developed to maximize the oxygen level while minimizing the pH change after electrolysis. The sensors show improved sensitivities based on the oxygen dependency of enzyme reaction. In addition, this oxygen-rich operation minimizes the ratio of electrochemical interference signal by ascorbic acid during sensor operation (i.e., amperometric detection of hydrogen peroxide). Although creatinine sensors have been used as the model system in this study, this method is applicable to many other biosensors that can use oxidase enzymes (e.g., glucose, alcohol, phenol, etc.) to implement a viable component for in-line fluidic sensor systems. PMID:23012527
Optimal design of the satellite constellation arrangement reconfiguration process
NASA Astrophysics Data System (ADS)
Fakoor, Mahdi; Bakhtiari, Majid; Soleymani, Mahshid
2016-08-01
In this article, a novel approach is introduced for the satellite constellation reconfiguration based on Lambert's theorem. Some critical problems are raised in reconfiguration phase, such as overall fuel cost minimization, collision avoidance between the satellites on the final orbital pattern, and necessary maneuvers for the satellites in order to be deployed in the desired position on the target constellation. To implement the reconfiguration phase of the satellite constellation arrangement at minimal cost, the hybrid Invasive Weed Optimization/Particle Swarm Optimization (IWO/PSO) algorithm is used to design sub-optimal transfer orbits for the satellites existing in the constellation. Also, the dynamic model of the problem will be modeled in such a way that, optimal assignment of the satellites to the initial and target orbits and optimal orbital transfer are combined in one step. Finally, we claim that our presented idea i.e. coupled non-simultaneous flight of satellites from the initial orbital pattern will lead to minimal cost. The obtained results show that by employing the presented method, the cost of reconfiguration process is reduced obviously.
Moioli, Renan C; Vargas, Patricia A; Husbands, Phil
2012-09-01
Oscillatory activity is ubiquitous in nervous systems, with solid evidence that synchronisation mechanisms underpin cognitive processes. Nevertheless, its informational content and relationship with behaviour are still to be fully understood. In addition, cognitive systems cannot be properly appreciated without taking into account brain-body- environment interactions. In this paper, we developed a model based on the Kuramoto Model of coupled phase oscillators to explore the role of neural synchronisation in the performance of a simulated robotic agent in two different minimally cognitive tasks. We show that there is a statistically significant difference in performance and evolvability depending on the synchronisation regime of the network. In both tasks, a combination of information flow and dynamical analyses show that networks with a definite, but not too strong, propensity for synchronisation are more able to reconfigure, to organise themselves functionally and to adapt to different behavioural conditions. The results highlight the asymmetry of information flow and its behavioural correspondence. Importantly, it also shows that neural synchronisation dynamics, when suitably flexible and reconfigurable, can generate minimally cognitive embodied behaviour.
Shape optimization of self-avoiding curves
NASA Astrophysics Data System (ADS)
Walker, Shawn W.
2016-04-01
This paper presents a softened notion of proximity (or self-avoidance) for curves. We then derive a sensitivity result, based on shape differential calculus, for the proximity. This is combined with a gradient-based optimization approach to compute three-dimensional, parameterized curves that minimize the sum of an elastic (bending) energy and a proximity energy that maintains self-avoidance by a penalization technique. Minimizers are computed by a sequential-quadratic-programming (SQP) method where the bending energy and proximity energy are approximated by a finite element method. We then apply this method to two problems. First, we simulate adsorbed polymer strands that are constrained to be bound to a surface and be (locally) inextensible. This is a basic model of semi-flexible polymers adsorbed onto a surface (a current topic in material science). Several examples of minimizing curve shapes on a variety of surfaces are shown. An advantage of the method is that it can be much faster than using molecular dynamics for simulating polymer strands on surfaces. Second, we apply our proximity penalization to the computation of ideal knots. We present a heuristic scheme, utilizing the SQP method above, for minimizing rope-length and apply it in the case of the trefoil knot. Applications of this method could be for generating good initial guesses to a more accurate (but expensive) knot-tightening algorithm.
Asquith, William H.; Thompson, David B.
2008-01-01
The U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, investigated a refinement of the regional regression method and developed alternative equations for estimation of peak-streamflow frequency for undeveloped watersheds in Texas. A common model for estimation of peak-streamflow frequency is based on the regional regression method. The current (2008) regional regression equations for 11 regions of Texas are based on log10 transformations of all regression variables (drainage area, main-channel slope, and watershed shape). Exclusive use of log10-transformation does not fully linearize the relations between the variables. As a result, some systematic bias remains in the current equations. The bias results in overestimation of peak streamflow for both the smallest and largest watersheds. The bias increases with increasing recurrence interval. The primary source of the bias is the discernible curvilinear relation in log10 space between peak streamflow and drainage area. Bias is demonstrated by selected residual plots with superimposed LOWESS trend lines. To address the bias, a statistical framework based on minimization of the PRESS statistic through power transformation of drainage area is described and implemented, and the resulting regression equations are reported. Compared to log10-exclusive equations, the equations derived from PRESS minimization have PRESS statistics and residual standard errors less than the log10 exclusive equations. Selected residual plots for the PRESS-minimized equations are presented to demonstrate that systematic bias in regional regression equations for peak-streamflow frequency estimation in Texas can be reduced. Because the overall error is similar to the error associated with previous equations and because the bias is reduced, the PRESS-minimized equations reported here provide alternative equations for peak-streamflow frequency estimation.
Non-minimally coupled f(R) cosmology
NASA Astrophysics Data System (ADS)
Thakur, Shruti; Sen, Anjan A.; Seshadri, T. R.
2011-02-01
We investigate the consequences of non-minimal gravitational coupling to matter and study how it differs from the case of minimal coupling by choosing certain simple forms for the nature of coupling. The values of the parameters are specified at z=0 (present epoch) and the equations are evolved backwards to calculate the evolution of cosmological parameters. We find that the Hubble parameter evolves more slowly in non-minimal coupling case as compared to the minimal coupling case. In both the cases, the universe accelerates around present time, and enters the decelerating regime in the past. Using the latest Union2 dataset for supernova Type Ia observations as well as the data for baryon acoustic oscillation (BAO) from SDSS observations, we constraint the parameters of Linder exponential model in the two different approaches. We find that there is an upper bound on model parameter in minimal coupling. But for non-minimal coupling case, there is range of allowed values for the model parameter.
Thermal Model of Laser-Induced Eye Damage
1974-10-08
Identify by. block ntber) Ocular Damage Laser Effect3 Thermal Model Temperature Rise Prediction Retinal, Corneal, Lenticular Damage 20. ABSTR ACT (CoIfn...routine available to predict retinal or lenticular beam characteristics based on beam de- scripton at the cornea and distance of the last beam waist 5...used are selected for minimal aberrations of the astigmatic kind and that coma is negligible because of nearly axial "illumination. Secondly, the thermal
A Novel Approach to Adaptive Flow Separation Control
2016-09-03
particular, it considers control of flow separation over a NACA-0025 airfoil using microjet actuators and develops Adaptive Sampling Based Model...Predictive Control ( Adaptive SBMPC), a novel approach to Nonlinear Model Predictive Control that applies the Minimal Resource Allocation Network...Distribution Unlimited UU UU UU UU 03-09-2016 1-May-2013 30-Apr-2016 Final Report: A Novel Approach to Adaptive Flow Separation Control The views, opinions
A method of hidden Markov model optimization for use with geophysical data sets
NASA Technical Reports Server (NTRS)
Granat, R. A.
2003-01-01
Geophysics research has been faced with a growing need for automated techniques with which to process large quantities of data. A successful tool must meet a number of requirements: it should be consistent, require minimal parameter tuning, and produce scientifically meaningful results in reasonable time. We introduce a hidden Markov model (HMM)-based method for analysis of geophysical data sets that attempts to address these issues.
Zhou, D; Bui, K; Sostek, M; Al-Huniti, N
2016-05-01
Naloxegol, a peripherally acting μ-opioid receptor antagonist for the treatment of opioid-induced constipation, is a substrate for cytochrome P450 (CYP) 3A4/3A5 and the P-glycoprotein (P-gp) transporter. By integrating in silico, preclinical, and clinical pharmacokinetic (PK) findings, minimal and full physiologically based pharmacokinetic (PBPK) models were developed to predict the drug-drug interaction (DDI) potential for naloxegol. The models reasonably predicted the observed changes in naloxegol exposure with ketoconazole (increase of 13.1-fold predicted vs. 12.9-fold observed), diltiazem (increase of 2.8-fold predicted vs. 3.4-fold observed), rifampin (reduction of 76% predicted vs. 89% observed), and quinidine (increase of 1.2-fold predicted vs. 1.4-fold observed). The moderate CYP3A4 inducer efavirenz was predicted to reduce naloxegol exposure by ∼50%, whereas weak CYP3A inhibitors were predicted to minimally affect exposure. In summary, the PBPK models reasonably estimated interactions with various CYP3A modulators and can be used to guide dosing in clinical practice when naloxegol is coadministered with such agents. © 2016 The Authors CPT: Pharmacometrics & Systems Pharmacology published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
Zeng, Canjun; Xiao, Jidong; Wu, Zhanglin; Huang, Wenhua
2015-01-01
The aim of this study is to evaluate the efficacy and feasibility of three-dimensional printing (3D printing) assisted internal fixation of unstable pelvic fracture from minimal invasive para-rectus abdominis approach. A total of 38 patients with unstable pelvic fractures were analyzed retrospectively from August 2012 to February 2014. All cases were treated operatively with internal fixation assisted by three-dimensional printing from minimal invasive para-rectus abdominis approach. Both preoperative CT and three-dimensional reconstruction were performed. Pelvic model was created by 3D printing. Data including the best entry points, plate position and direction and length of screw were obtained from simulated operation based on 3D printing pelvic model. The diaplasis and internal fixation were performed by minimal invasive para-rectus abdominis approach according to the optimized dada in real surgical procedure. Matta and Majeed score were used to evaluate currative effects after operation. According to the Matta standard, the outcome of the diaplasis achieved 97.37% with excellent and good. Majeed assessment showed 94.4% with excellent and good. The imageological examination showed consistency of internal fixation and simulated operation. The mean operation time was 110 minutes, mean intraoperative blood loss 320 ml, and mean incision length 6.5 cm. All patients have achieved clinical healing, with mean healing time of 8 weeks. Three-dimensional printing assisted internal fixation of unstable pelvic fracture from minimal invasive para-rectus abdominis approach is feasible and effective. This method has the advantages of trauma minimally, bleeding less, healing rapidly and satisfactory reduction, and worthwhile for spreading in clinical practice.
1987 Oak Ridge model conference: Proceedings: Volume I, Part 3, Waste Management
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-01-01
A conference sponsored by the United States Department of Energy (DOE), was held on waste management. Topics of discussion were transuranic waste management, chemical and physical treatment technologies, waste minimization, land disposal technology and characterization and analysis. Individual projects are processed separately for the data bases. (CBS)
Developmental Outcomes after Early Prefrontal Cortex Damage
ERIC Educational Resources Information Center
Eslinger, Paul J.; Flaherty-Craig, Claire V.; Benton, Arthur L.
2004-01-01
The neuropsychological bases of cognitive, social, and moral development are minimally understood, with a seemingly wide chasm between developmental theories and brain maturation models. As one approach to bridging ideas in these areas, we review 10 cases of early prefrontal cortex damage from the clinical literature, highlighting overall clinical…
DOT National Transportation Integrated Search
2012-08-01
With the purpose to minimize or prevent crash-induced fires in road and rail transportation, the : current interest in bio-derived and blended transportation fuels is increasing. Based on two years : of preliminary testing and analysis, it appears to...
DOT National Transportation Integrated Search
2006-01-01
As important habitats are being lost to human development, transportation agencies are facing increased expectations that their road projects avoid or minimize further habitat destruction and adverse effects on wildlife populations. Wildlife linkage ...
Sulcal set optimization for cortical surface registration.
Joshi, Anand A; Pantazis, Dimitrios; Li, Quanzheng; Damasio, Hanna; Shattuck, David W; Toga, Arthur W; Leahy, Richard M
2010-04-15
Flat mapping based cortical surface registration constrained by manually traced sulcal curves has been widely used for inter subject comparisons of neuroanatomical data. Even for an experienced neuroanatomist, manual sulcal tracing can be quite time consuming, with the cost increasing with the number of sulcal curves used for registration. We present a method for estimation of an optimal subset of size N(C) from N possible candidate sulcal curves that minimizes a mean squared error metric over all combinations of N(C) curves. The resulting procedure allows us to estimate a subset with a reduced number of curves to be traced as part of the registration procedure leading to optimal use of manual labeling effort for registration. To minimize the error metric we analyze the correlation structure of the errors in the sulcal curves by modeling them as a multivariate Gaussian distribution. For a given subset of sulci used as constraints in surface registration, the proposed model estimates registration error based on the correlation structure of the sulcal errors. The optimal subset of constraint curves consists of the N(C) sulci that jointly minimize the estimated error variance for the subset of unconstrained curves conditioned on the N(C) constraint curves. The optimal subsets of sulci are presented and the estimated and actual registration errors for these subsets are computed. Copyright 2009 Elsevier Inc. All rights reserved.
Hybrid architecture for encoded measurement-based quantum computation
Zwerger, M.; Briegel, H. J.; Dür, W.
2014-01-01
We present a hybrid scheme for quantum computation that combines the modular structure of elementary building blocks used in the circuit model with the advantages of a measurement-based approach to quantum computation. We show how to construct optimal resource states of minimal size to implement elementary building blocks for encoded quantum computation in a measurement-based way, including states for error correction and encoded gates. The performance of the scheme is determined by the quality of the resource states, where within the considered error model a threshold of the order of 10% local noise per particle for fault-tolerant quantum computation and quantum communication. PMID:24946906
Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm.
Chang, Joshua; Paydarfar, David
2014-12-01
Inducing a switch in neuronal state using energy optimal stimuli is relevant to a variety of problems in neuroscience. Analytical techniques from optimal control theory can identify such stimuli; however, solutions to the optimization problem using indirect variational approaches can be elusive in models that describe neuronal behavior. Here we develop and apply a direct gradient-based optimization algorithm to find stimulus waveforms that elicit a change in neuronal state while minimizing energy usage. We analyze standard models of neuronal behavior, the Hodgkin-Huxley and FitzHugh-Nagumo models, to show that the gradient-based algorithm: (1) enables automated exploration of a wide solution space, using stochastically generated initial waveforms that converge to multiple locally optimal solutions; and (2) finds optimal stimulus waveforms that achieve a physiological outcome condition, without a priori knowledge of the optimal terminal condition of all state variables. Analysis of biological systems using stochastically-seeded gradient methods can reveal salient dynamical mechanisms underlying the optimal control of system behavior. The gradient algorithm may also have practical applications in future work, for example, finding energy optimal waveforms for therapeutic neural stimulation that minimizes power usage and diminishes off-target effects and damage to neighboring tissue.
Froese, Tom; Lenay, Charles; Ikegami, Takashi
2012-01-01
One of the major challenges faced by explanations of imitation is the “correspondence problem”: how is an agent able to match its bodily expression to the observed bodily expression of another agent, especially when there is no possibility of external self-observation? Current theories only consider the possibility of an innate or acquired matching mechanism belonging to an isolated individual. In this paper we evaluate an alternative that situates the explanation of imitation in the inter-individual dynamics of the interaction process itself. We implemented a minimal model of two interacting agents based on a recent psychological study of imitative behavior during minimalist perceptual crossing. The agents cannot sense the configuration of their own body, and do not have access to other's body configuration, either. And yet surprisingly they are still capable of converging on matching bodily configurations. Analysis revealed that the agents solved this version of the correspondence problem in terms of collective properties of the interaction process. Contrary to the assumption that such properties merely serve as external input or scaffolding for individual mechanisms, it was found that the behavioral dynamics were distributed across the model as a whole. PMID:23060768
Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri
2017-05-01
To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute percent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (P < 0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute percent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement. © 2017, American College of Rheumatology.
Dukes, Kimberly; Tripp, Tara; Willinger, Marian; Odendaal, Hein; Elliott, Amy J; Kinney, Hannah C; Robinson, Fay; Petersen, Julie M; Raffo, Cheryl; Hereld, Dale; Groenewald, Coen; Angal, Jyoti; Hankins, Gary; Burd, Larry; Fifer, William P; Myers, Michael M; Hoffman, Howard J; Sullivan, Lisa
2017-08-01
Precise identification of drinking and smoking patterns during pregnancy is crucial to better understand the risk to the fetus. The purpose of this manuscript is to describe the methodological approach used to define prenatal drinking and smoking trajectories from a large prospective pregnancy cohort, and to describe maternal characteristics associated with different exposure patterns. In the Safe Passage Study, detailed information regarding quantity, frequency, and timing of exposure was self-reported up to four times during pregnancy and at 1 month post-delivery. Exposure trajectories were developed using data from 11,692 pregnancies (9912 women) where pregnancy outcome was known. Women were from three diverse populations: white (23%) and American Indian (17%) in the Northern Plains, US, and mixed ancestry (59%) in South Africa (other/not specified [1%]). Group-based trajectory modeling was used to identify 5 unique drinking trajectories (1 none/minimal, 2 quitting groups, 2 continuous groups) and 7 smoking trajectories (1 none/minimal, 2 quitting groups, 4 continuous groups). Women with pregnancies assigned to the low- or high-continuous drinking groups were less likely to have completed high school and were more likely to have enrolled in the study in the third trimester, be of mixed ancestry, or be depressed than those assigned to the none/minimal or quit-drinking groups. Results were similar when comparing continuous smokers to none/minimal and quit-smoking groups. Further, women classified as high- or low-continuous drinkers were more likely to smoke at moderate-, high-, and very high-continuous levels, as compared to women classified as non-drinkers and quitters. This is the first study of this size to utilize group-based trajectory modeling to identify unique prenatal drinking and smoking trajectories. These trajectories will be used in future analyses to determine which specific exposure patterns subsequently manifest as poor peri- and postnatal outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.
Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin
2018-04-18
Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from the low dose and few-view dataset in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is l0-norm, which cannot guarantee the global convergence of the proposed algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into SIR framework for low dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulation based on sheep lung CT image and chest image. Both visual assessment and quantitative comparison using terms of root mean square error (RMSE) and structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded similar performance with l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.
Minimal camera networks for 3D image based modeling of cultural heritage objects.
Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma
2014-03-25
3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.
Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects
Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma
2014-01-01
3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718
Hyperopt: a Python library for model selection and hyperparameter optimization
NASA Astrophysics Data System (ADS)
Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.
2015-01-01
Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
Gorguluarslan, Recep M; Choi, Seung-Kyum; Saldana, Christopher J
2017-07-01
A methodology is proposed for uncertainty quantification and validation to accurately predict the mechanical response of lattice structures used in the design of scaffolds. Effective structural properties of the scaffolds are characterized using a developed multi-level stochastic upscaling process that propagates the quantified uncertainties at strut level to the lattice structure level. To obtain realistic simulation models for the stochastic upscaling process and minimize the experimental cost, high-resolution finite element models of individual struts were reconstructed from the micro-CT scan images of lattice structures which are fabricated by selective laser melting. The upscaling method facilitates the process of determining homogenized strut properties to reduce the computational cost of the detailed simulation model for the scaffold. Bayesian Information Criterion is utilized to quantify the uncertainties with parametric distributions based on the statistical data obtained from the reconstructed strut models. A systematic validation approach that can minimize the experimental cost is also developed to assess the predictive capability of the stochastic upscaling method used at the strut level and lattice structure level. In comparison with physical compression test results, the proposed methodology of linking the uncertainty quantification with the multi-level stochastic upscaling method enabled an accurate prediction of the elastic behavior of the lattice structure with minimal experimental cost by accounting for the uncertainties induced by the additive manufacturing process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Hidden long evolutionary memory in a model biochemical network
NASA Astrophysics Data System (ADS)
Ali, Md. Zulfikar; Wingreen, Ned S.; Mukhopadhyay, Ranjan
2018-04-01
We introduce a minimal model for the evolution of functional protein-interaction networks using a sequence-based mutational algorithm, and apply the model to study neutral drift in networks that yield oscillatory dynamics. Starting with a functional core module, random evolutionary drift increases network complexity even in the absence of specific selective pressures. Surprisingly, we uncover a hidden order in sequence space that gives rise to long-term evolutionary memory, implying strong constraints on network evolution due to the topology of accessible sequence space.
Strategies for minimizing sample size for use in airborne LiDAR-based forest inventory
Junttila, Virpi; Finley, Andrew O.; Bradford, John B.; Kauranne, Tuomo
2013-01-01
Recently airborne Light Detection And Ranging (LiDAR) has emerged as a highly accurate remote sensing modality to be used in operational scale forest inventories. Inventories conducted with the help of LiDAR are most often model-based, i.e. they use variables derived from LiDAR point clouds as the predictive variables that are to be calibrated using field plots. The measurement of the necessary field plots is a time-consuming and statistically sensitive process. Because of this, current practice often presumes hundreds of plots to be collected. But since these plots are only used to calibrate regression models, it should be possible to minimize the number of plots needed by carefully selecting the plots to be measured. In the current study, we compare several systematic and random methods for calibration plot selection, with the specific aim that they be used in LiDAR based regression models for forest parameters, especially above-ground biomass. The primary criteria compared are based on both spatial representativity as well as on their coverage of the variability of the forest features measured. In the former case, it is important also to take into account spatial auto-correlation between the plots. The results indicate that choosing the plots in a way that ensures ample coverage of both spatial and feature space variability improves the performance of the corresponding models, and that adequate coverage of the variability in the feature space is the most important condition that should be met by the set of plots collected.
NASA Astrophysics Data System (ADS)
Moayedi, S. K.; Setare, M. R.; Khosropour, B.
2013-11-01
In the 1990s, Kempf and his collaborators Mangano and Mann introduced a D-dimensional (β, β‧)-two-parameter deformed Heisenberg algebra which leads to an isotropic minimal length (\\triangle Xi)\\min = \\hbar √ {Dβ +β '}, \\forall i\\in \\{1, 2, ..., D\\}. In this work, the Lagrangian formulation of a magnetostatic field in three spatial dimensions (D = 3) described by Kempf algebra is presented in the special case of β‧ = 2β up to the first-order over β. We show that at the classical level there is a similarity between magnetostatics in the presence of a minimal length scale (modified magnetostatics) and the magnetostatic sector of the Abelian Lee-Wick model in three spatial dimensions. The integral form of Ampere's law and the energy density of a magnetostatic field in the modified magnetostatics are obtained. Also, the Biot-Savart law in the modified magnetostatics is found. By studying the effect of minimal length corrections to the gyromagnetic moment of the muon, we conclude that the upper bound on the isotropic minimal length scale in three spatial dimensions is 4.42×10-19 m. The relationship between magnetostatics with a minimal length and the Gaete-Spallucci nonlocal magnetostatics [J. Phys. A: Math. Theor. 45, 065401 (2012)] is investigated.
Optimal design method to minimize users' thinking mapping load in human-machine interactions.
Huang, Yanqun; Li, Xu; Zhang, Jie
2015-01-01
The discrepancy between human cognition and machine requirements/behaviors usually results in serious mental thinking mapping loads or even disasters in product operating. It is important to help people avoid human-machine interaction confusions and difficulties in today's mental work mastered society. Improving the usability of a product and minimizing user's thinking mapping and interpreting load in human-machine interactions. An optimal human-machine interface design method is introduced, which is based on the purpose of minimizing the mental load in thinking mapping process between users' intentions and affordance of product interface states. By analyzing the users' thinking mapping problem, an operating action model is constructed. According to human natural instincts and acquired knowledge, an expected ideal design with minimized thinking loads is uniquely determined at first. Then, creative alternatives, in terms of the way human obtains operational information, are provided as digital interface states datasets. In the last, using the cluster analysis method, an optimum solution is picked out from alternatives, by calculating the distances between two datasets. Considering multiple factors to minimize users' thinking mapping loads, a solution nearest to the ideal value is found in the human-car interaction design case. The clustering results show its effectiveness in finding an optimum solution to the mental load minimizing problems in human-machine interaction design.
Advances in Global Full Waveform Inversion
NASA Astrophysics Data System (ADS)
Tromp, J.; Bozdag, E.; Lei, W.; Ruan, Y.; Lefebvre, M. P.; Modrak, R. T.; Orsvuran, R.; Smith, J. A.; Komatitsch, D.; Peter, D. B.
2017-12-01
Information about Earth's interior comes from seismograms recorded at its surface. Seismic imaging based on spectral-element and adjoint methods has enabled assimilation of this information for the construction of 3D (an)elastic Earth models. These methods account for the physics of wave excitation and propagation by numerically solving the equations of motion, and require the execution of complex computational procedures that challenge the most advanced high-performance computing systems. Current research is petascale; future research will require exascale capabilities. The inverse problem consists of reconstructing the characteristics of the medium from -often noisy- observations. A nonlinear functional is minimized, which involves both the misfit to the measurements and a Tikhonov-type regularization term to tackle inherent ill-posedness. Achieving scalability for the inversion process on tens of thousands of multicore processors is a task that offers many research challenges. We initiated global "adjoint tomography" using 253 earthquakes and produced the first-generation model named GLAD-M15, with a transversely isotropic model parameterization. We are currently running iterations for a second-generation anisotropic model based on the same 253 events. In parallel, we continue iterations for a transversely isotropic model with a larger dataset of 1,040 events to determine higher-resolution plume and slab images. A significant part of our research has focused on eliminating I/O bottlenecks in the adjoint tomography workflow. This has led to the development of a new Adaptable Seismic Data Format based on HDF5, and post-processing tools based on the ADIOS library developed by Oak Ridge National Laboratory. We use the Ensemble Toolkit for workflow stabilization & management to automate the workflow with minimal human interaction.
Conformal standard model, leptogenesis, and dark matter
NASA Astrophysics Data System (ADS)
Lewandowski, Adrian; Meissner, Krzysztof A.; Nicolai, Hermann
2018-02-01
The conformal standard model is a minimal extension of the Standard Model (SM) of particle physics based on the assumed absence of large intermediate scales between the TeV scale and the Planck scale, which incorporates only right-chiral neutrinos and a new complex scalar in addition to the usual SM degrees of freedom, but no other features such as supersymmetric partners. In this paper, we present a comprehensive quantitative analysis of this model, and show that all outstanding issues of particle physics proper can in principle be solved "in one go" within this framework. This includes in particular the stabilization of the electroweak scale, "minimal" leptogenesis and the explanation of dark matter, with a small mass and very weakly interacting Majoron as the dark matter candidate (for which we propose to use the name "minoron"). The main testable prediction of the model is a new and almost sterile scalar boson that would manifest itself as a narrow resonance in the TeV region. We give a representative range of parameter values consistent with our assumptions and with observation.
NASA Astrophysics Data System (ADS)
Sidibe, Souleymane
The implementation and monitoring of operational flight plans is a major occupation for a crew of commercial flights. The purpose of this operation is to set the vertical and lateral trajectories followed by airplane during phases of flight: climb, cruise, descent, etc. These trajectories are subjected to conflicting economical constraints: minimization of flight time and minimization of fuel consumed and environmental constraints. In its task of mission planning, the crew is assisted by the Flight Management System (FMS) which is used to construct the path to follow and to predict the behaviour of the aircraft along the flight plan. The FMS considered in our research, particularly includes an optimization model of flight only by calculating the optimal speed profile that minimizes the overall cost of flight synthesized by a criterion of cost index following a steady cruising altitude. However, the model based solely on optimization of the speed profile is not sufficient. It is necessary to expand the current optimization for simultaneous optimization of the speed and altitude in order to determine an optimum cruise altitude that minimizes the overall cost when the path is flown with the optimal speed profile. Then, a new program was developed. The latter is based on the method of dynamic programming invented by Bellman to solve problems of optimal paths. In addition, the improvement passes through research new patterns of trajectories integrating ascendant cruises and using the lateral plane with the effect of the weather: wind and temperature. Finally, for better optimization, the program takes into account constraint of flight domain of aircrafts which utilize the FMS.
Exploring non-holomorphic soft terms in the framework of gauge mediated supersymmetry breaking
NASA Astrophysics Data System (ADS)
Chattopadhyay, Utpal; Das, Debottam; Mukherjee, Samadrita
2018-01-01
It is known that in the absence of a gauge singlet field, a specific class of supersymmetry (SUSY) breaking non-holomorphic (NH) terms can be soft breaking in nature so that they may be considered along with the Minimal Supersymmetric Standard Model (MSSM) and beyond. There have been studies related to these terms in minimal supergravity based models. Consideration of an F-type SUSY breaking scenario in the hidden sector with two chiral superfields however showed Planck scale suppression of such terms. In an unbiased point of view for the sources of SUSY breaking, the NH terms in a phenomenological MSSM (pMSSM) type of analysis showed a possibility of a large SUSY contribution to muon g - 2, a reasonable amount of corrections to the Higgs boson mass and a drastic reduction of the electroweak fine-tuning for a higgsino dominated {\\tilde{χ}}_1^0 in some regions of parameter space. We first investigate here the effects of the NH terms in a low scale SUSY breaking scenario. In our analysis with minimal gauge mediated supersymmetry breaking (mGMSB) we probe how far the results can be compared with the previous pMSSM plus NH terms based study. We particularly analyze the Higgs, stop and the electroweakino sectors focusing on a higgsino dominated {\\tilde{χ}}_1^0 and {\\tilde{χ}}_1^{± } , a feature typically different from what appears in mGMSB. The effect of a limited degree of RG evolutions and vanishing of the trilinear coupling terms at the messenger scale can be overcome by choosing a non-minimal GMSB scenario, such as one with a matter-messenger interaction.
Panel Flutter Emulation Using a Few Concentrated Forces
NASA Astrophysics Data System (ADS)
Dhital, Kailash; Han, Jae-Hung
2018-04-01
The objective of this paper is to study the feasibility of panel flutter emulation using a few concentrated forces. The concentrated forces are considered to be equivalent to aerodynamic forces. The equivalence is carried out using surface spline method and principle of virtual work. The structural modeling of the plate is based on the classical plate theory and the aerodynamic modeling is based on the piston theory. The present approach differs from the linear panel flutter analysis in scheming the modal aerodynamics forces with unchanged structural properties. The solutions for the flutter problem are obtained numerically using the standard eigenvalue procedure. A few concentrated forces were considered with an optimization effort to decide their optimal locations. The optimization process is based on minimizing the error between the flutter bounds from emulated and linear flutter analysis method. The emulated flutter results for the square plate of four different boundary conditions using six concentrated forces are obtained with minimal error to the reference value. The results demonstrated the workability and viability of using concentrated forces in emulating real panel flutter. In addition, the paper includes the parametric studies of linear panel flutter whose proper literatures are not available.
Chindelevitch, Leonid; Trigg, Jason; Regev, Aviv; Berger, Bonnie
2014-01-01
Constraint-based models are currently the only methodology that allows the study of metabolism at the whole-genome scale. Flux balance analysis is commonly used to analyse constraint-based models. Curiously, the results of this analysis vary with the software being run, a situation that we show can be remedied by using exact rather than floating-point arithmetic. Here we introduce MONGOOSE, a toolbox for analysing the structure of constraint-based metabolic models in exact arithmetic. We apply MONGOOSE to the analysis of 98 existing metabolic network models and find that the biomass reaction is surprisingly blocked (unable to sustain non-zero flux) in nearly half of them. We propose a principled approach for unblocking these reactions and extend it to the problems of identifying essential and synthetic lethal reactions and minimal media. Our structural insights enable a systematic study of constraint-based metabolic models, yielding a deeper understanding of their possibilities and limitations. PMID:25291352
Optimization of power systems with voltage security constraints
NASA Astrophysics Data System (ADS)
Rosehart, William Daniel
As open access market principles are applied to power systems, significant changes in their operation and control are occurring. In the new marketplace, power systems are operating under higher loading conditions as market influences demand greater attention to operating cost versus stability margins. Since stability continues to be a basic requirement in the operation of any power system, new tools are being considered to analyze the effect of stability on the operating cost of the system, so that system stability can be incorporated into the costs of operating the system. In this thesis, new optimal power flow (OPF) formulations are proposed based on multi-objective methodologies to optimize active and reactive power dispatch while maximizing voltage security in power systems. The effects of minimizing operating costs, minimizing reactive power generation and/or maximizing voltage stability margins are analyzed. Results obtained using the proposed Voltage Stability Constrained OPF formulations are compared and analyzed to suggest possible ways of costing voltage security in power systems. When considering voltage stability margins the importance of system modeling becomes critical, since it has been demonstrated, based on bifurcation analysis, that modeling can have a significant effect of the behavior of power systems, especially at high loading levels. Therefore, this thesis also examines the effects of detailed generator models and several exponential load models. Furthermore, because of its influence on voltage stability, a Static Var Compensator model is also incorporated into the optimization problems.
NASA Astrophysics Data System (ADS)
Kim, U.; Parker, J.
2016-12-01
Many dense non-aqueous phase liquid (DNAPL) contaminated sites in the U.S. are reported as "remediation in progress" (RIP). However, the cost to complete (CTC) remediation at these sites is highly uncertain and in many cases, the current remediation plan may need to be modified or replaced to achieve remediation objectives. This study evaluates the effectiveness of iterative stochastic cost optimization that incorporates new field data for periodic parameter recalibration to incrementally reduce prediction uncertainty and implement remediation design modifications as needed to minimize the life cycle cost (i.e., CTC). This systematic approach, using the Stochastic Cost Optimization Toolkit (SCOToolkit), enables early identification and correction of problems to stay on track for completion while minimizing the expected (i.e., probability-weighted average) CTC. This study considers a hypothetical site involving multiple DNAPL sources in an unconfined aquifer using thermal treatment for source reduction and electron donor injection for dissolved plume control. The initial design is based on stochastic optimization using model parameters and their joint uncertainty based on calibration to site characterization data. The model is periodically recalibrated using new monitoring data and performance data for the operating remediation systems. Projected future performance using the current remediation plan is assessed and reoptimization of operational variables for the current system or consideration of alternative designs are considered depending on the assessment results. We compare remediation duration and cost for the stepwise re-optimization approach with single stage optimization as well as with a non-optimized design based on typical engineering practice.
Evaluation of Inventory Reduction Strategies: Balad Air Base Case Study
2012-03-01
produced by conducting individual simulations using a unique random seed generated by the default Anylogic © random number generator. The...develops an agent-based simulation model of the sustainment supply chain supporting Balad AB during its closure using the software AnyLogic ®. The...research. The goal of USAF Stockage Policy is to maximize customer support while minimizing inventory costs (DAF, 2011:1). USAF stocking decisions
Analytical evaluation of two motion washout techniques
NASA Technical Reports Server (NTRS)
Young, L. R.
1977-01-01
Practical tools were developed which extend the state of the art of moving base flight simulation for research and training purposes. The use of visual and vestibular cues to minimize the actual motion of the simulator itself was a primary consideration. The investigation consisted of optimum programming of motion cues based on a physiological model of the vestibular system to yield 'ideal washout logic' for any given simulator constraints.
Minimization of bovine tuberculosis control costs in US dairy herds
Smith, Rebecca L.; Tauer, Loren W.; Schukken, Ynte H.; Lu, Zhao; Grohn, Yrjo T.
2013-01-01
The objective of this study was to minimize the cost of controlling an isolated bovine tuberculosis (bTB) outbreak in a US dairy herd, using a stochastic simulation model of bTB with economic and biological layers. A model optimizer produced a control program that required 2-month testing intervals (TI) with 2 negative whole-herd tests to leave quarantine. This control program minimized both farm and government costs. In all cases, test-and-removal costs were lower than depopulation costs, although the variability in costs increased for farms with high holding costs or small herd sizes. Increasing herd size significantly increased costs for both the farm and the government, while increasing indemnity payments significantly decreased farm costs and increasing testing costs significantly increased government costs. Based on the results of this model, we recommend 2-month testing intervals for herds after an outbreak of bovine tuberculosis, with 2 negative whole herd tests being sufficient to lift quarantine. A prolonged test and cull program may cause a state to lose its bTB-free status during the testing period. When the cost of losing the bTB-free status is greater than $1.4 million then depopulation of farms could be preferred over a test and cull program. PMID:23953679
Developing a model for hospital inherent safety assessment: Conceptualization and validation.
Yari, Saeed; Akbari, Hesam; Gholami Fesharaki, Mohammad; Khosravizadeh, Omid; Ghasemi, Mohammad; Barsam, Yalda; Akbari, Hamed
2018-01-01
Paying attention to the safety of hospitals, as the most crucial institute for providing medical and health services wherein a bundle of facilities, equipment, and human resource exist, is of significant importance. The present research aims at developing a model for assessing hospitals' safety based on principles of inherent safety design. Face validity (30 experts), content validity (20 experts), construct validity (268 examples), convergent validity, and divergent validity have been employed to validate the prepared questionnaire; and the items analysis, the Cronbach's alpha test, ICC test (to measure reliability of the test), composite reliability coefficient have been used to measure primary reliability. The relationship between variables and factors has been confirmed at 0.05 significance level by conducting confirmatory factor analysis (CFA) and structural equations modeling (SEM) technique with the use of Smart-PLS. R-square and load factors values, which were higher than 0.67 and 0.300 respectively, indicated the strong fit. Moderation (0.970), simplification (0.959), substitution (0.943), and minimization (0.5008) have had the most weights in determining the inherent safety of hospital respectively. Moderation, simplification, and substitution, among the other dimensions, have more weight on the inherent safety, while minimization has the less weight, which could be due do its definition as to minimize the risk.
Sid, S; Volant, A; Lesage, G; Heran, M
2017-11-01
Energy consumption and sludge production minimization represent rising challenges for wastewater treatment plants (WWTPs). The goal of this study is to investigate how energy is consumed throughout the whole plant and how operating conditions affect this energy demand. A WWTP based on the activated sludge process was selected as a case study. Simulations were performed using a pre-compiled model implemented in GPS-X simulation software. Model validation was carried out by comparing experimental and modeling data of the dynamic behavior of the mixed liquor suspended solids (MLSS) concentration and nitrogen compounds concentration, energy consumption for aeration, mixing and sludge treatment and annual sludge production over a three year exercise. In this plant, the energy required for bioreactor aeration was calculated at approximately 44% of the total energy demand. A cost optimization strategy was applied by varying the MLSS concentrations (from 1 to 8 gTSS/L) while recording energy consumption, sludge production and effluent quality. An increase of MLSS led to an increase of the oxygen requirement for biomass aeration, but it also reduced total sludge production. Results permit identification of a key MLSS concentration allowing identification of the best compromise between levels of treatment required, biological energy demand and sludge production while minimizing the overall costs.
NASA Astrophysics Data System (ADS)
Luo, Keqin
1999-11-01
The electroplating industry of over 10,000 planting plants nationwide is one of the major waste generators in the industry. Large quantities of wastewater, spent solvents, spent process solutions, and sludge are the major wastes generated daily in plants, which costs the industry tremendously for waste treatment and disposal and hinders the further development of the industry. It becomes, therefore, an urgent need for the industry to identify technically most effective and economically most attractive methodologies and technologies to minimize the waste, while the production competitiveness can be still maintained. This dissertation aims at developing a novel WM methodology using artificial intelligence, fuzzy logic, and fundamental knowledge in chemical engineering, and an intelligent decision support tool. The WM methodology consists of two parts: the heuristic knowledge-based qualitative WM decision analysis and support methodology and fundamental knowledge-based quantitative process analysis methodology for waste reduction. In the former, a large number of WM strategies are represented as fuzzy rules. This becomes the main part of the knowledge base in the decision support tool, WMEP-Advisor. In the latter, various first-principles-based process dynamic models are developed. These models can characterize all three major types of operations in an electroplating plant, i.e., cleaning, rinsing, and plating. This development allows us to perform a thorough process analysis on bath efficiency, chemical consumption, wastewater generation, sludge generation, etc. Additional models are developed for quantifying drag-out and evaporation that are critical for waste reduction. The models are validated through numerous industrial experiments in a typical plating line of an industrial partner. The unique contribution of this research is that it is the first time for the electroplating industry to (i) use systematically available WM strategies, (ii) know quantitatively and accurately what is going on in each tank, and (iii) identify all WM opportunities through process improvement. This work has formed a solid foundation for the further development of powerful WM technologies for comprehensive WM in the following decade.
Application of Inverse Modeling to Estimate Groundwater Recharge under Future Climate Scenario
NASA Astrophysics Data System (ADS)
Akbariyeh, S.; Wang, T.; Bartelt-Hunt, S.; Li, Y.
2016-12-01
Climate variability and change will impose profound influences on groundwater systems. Accurate estimation of groundwater recharge is extremely important for predicting the flow and contaminant transport in the subsurface, which, however, remains as one of the most challenging tasks in the field of hydrology. Using an inverse modeling technique and HYDRUS 1D software, we predicted the spatial distribution of groundwater recharge across the Upper Platte basin in Nebraska, USA, based on 5-year projected future climate and soil moisture data (2057-2060). The climate data was obtained from Weather Research and Forecasting (WRF) model under RCP 8.5 scenario, which was downscaled from global CCSM4 model to a resolution of 24 by 24 km2. Precipitation, potential evapotranspiration, and soil moisture data were extracted from 76 grids located within the Upper Platte basin to perform the inverse modeling. Hargreaves equation was used to calculate the potential evapotranspiration according to latitude, maximum and minimum temperature, and leaf area index (LAI) data at each node. Van-Genuchten parameters were optimized using the inverse algorithm to minimize the error between input and modeled soil moisture data. The groundwater recharge was calculated as the amount of water that passed the lower boundary of the best fitted model. The year of 2057 was used as a spin-up period to minimize the impact of initial conditions. The model was calibrated for years 2058 to 2059 and validation was performed for 2060. This work demonstrates an efficient approach to estimating groundwater recharge based on climate modeling results, which will aid groundwater resources management under future climate scenarios.
NASA Astrophysics Data System (ADS)
Wu, W. H.; Chao, D. Y.
2016-07-01
Traditional region-based liveness-enforcing supervisors focus on (1) maximal permissiveness of not losing legal states, (2) structural simplicity of minimal number of monitors, and (3) fast computation. Lately, a number of similar approaches can achieve minimal configuration using efficient linear programming. However, it is unclear as to the relationship between the minimal configuration and the net structure. It is important to explore the structures involved for the fewest monitors required. Once the lower bound is achieved, further iteration to merge (or reduce the number of) monitors is not necessary. The minimal strongly connected resource subnet (i.e., all places are resources) that contains the set of resource places in a basic siphon is an elementary circuit. Earlier, we showed that the number of monitors required for liveness-enforcing and maximal permissiveness equals that of basic siphons for a subclass of Petri nets modelling manufacturing, called α systems. This paper extends this to systems more powerful than the α one so that the number of monitors in a minimal configuration remains to be lower bounded by that of basic siphons. This paper develops the theory behind and shows examples.
Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.
Dash, Tirtharaj; Sahu, Prabhat K
2015-05-30
The adaptation of novel techniques developed in the field of computational chemistry to solve the concerned problems for large and flexible molecules is taking the center stage with regard to efficient algorithm, computational cost and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, using analytical gradients for a fast minimization to the next local minimum has been reported. Its efficiency as metaheuristic approach has also been compared with Gradient Tabu Search and others like: Gravitational Search, Cuckoo Search, and Back Tracking Search algorithms for global optimization. Moreover, the GGS approach has also been applied to computational chemistry problems for finding the minimal value potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of protein models with efficient computational cost. © 2015 Wiley Periodicals, Inc.
Mature red blood cells: from optical model to inverse light-scattering problem.
Gilev, Konstantin V; Yurkin, Maxim A; Chernyshova, Ekaterina S; Strokotov, Dmitry I; Chernyshev, Andrei V; Maltsev, Valeri P
2016-04-01
We propose a method for characterization of mature red blood cells (RBCs) morphology, based on measurement of light-scattering patterns (LSPs) of individual RBCs with the scanning flow cytometer and on solution of the inverse light-scattering (ILS) problem for each LSP. We considered a RBC shape model, corresponding to the minimal bending energy of the membrane with isotropic elasticity, and constructed an analytical approximation, which allows rapid simulation of the shape, given the diameter and minimal and maximal thicknesses. The ILS problem was solved by the nearest-neighbor interpolation using a preliminary calculated database of 250,000 theoretical LSPs. For each RBC in blood sample we determined three abovementioned shape characteristics and refractive index, which also allows us to calculate volume, surface area, sphericity index, spontaneous curvature, hemoglobin concentration and content.
Mature red blood cells: from optical model to inverse light-scattering problem
Gilev, Konstantin V.; Yurkin, Maxim A.; Chernyshova, Ekaterina S.; Strokotov, Dmitry I.; Chernyshev, Andrei V.; Maltsev, Valeri P.
2016-01-01
We propose a method for characterization of mature red blood cells (RBCs) morphology, based on measurement of light-scattering patterns (LSPs) of individual RBCs with the scanning flow cytometer and on solution of the inverse light-scattering (ILS) problem for each LSP. We considered a RBC shape model, corresponding to the minimal bending energy of the membrane with isotropic elasticity, and constructed an analytical approximation, which allows rapid simulation of the shape, given the diameter and minimal and maximal thicknesses. The ILS problem was solved by the nearest-neighbor interpolation using a preliminary calculated database of 250,000 theoretical LSPs. For each RBC in blood sample we determined three abovementioned shape characteristics and refractive index, which also allows us to calculate volume, surface area, sphericity index, spontaneous curvature, hemoglobin concentration and content. PMID:27446656
Modification of Schrödinger-Newton equation due to braneworld models with minimal length
NASA Astrophysics Data System (ADS)
Bhat, Anha; Dey, Sanjib; Faizal, Mir; Hou, Chenguang; Zhao, Qin
2017-07-01
We study the correction of the energy spectrum of a gravitational quantum well due to the combined effect of the braneworld model with infinite extra dimensions and generalized uncertainty principle. The correction terms arise from a natural deformation of a semiclassical theory of quantum gravity governed by the Schrödinger-Newton equation based on a minimal length framework. The two fold correction in the energy yields new values of the spectrum, which are closer to the values obtained in the GRANIT experiment. This raises the possibility that the combined theory of the semiclassical quantum gravity and the generalized uncertainty principle may provide an intermediate theory between the semiclassical and the full theory of quantum gravity. We also prepare a schematic experimental set-up which may guide to the understanding of the phenomena in the laboratory.
Land transportation model for supply chain manufacturing industries
NASA Astrophysics Data System (ADS)
Kurniawan, Fajar
2017-12-01
Supply chain is a system that integrates production, inventory, distribution and information processes for increasing productivity and minimize costs. Transportation is an important part of the supply chain system, especially for supporting the material distribution process, work in process products and final products. In fact, Jakarta as the distribution center of manufacturing industries for the industrial area. Transportation system has a large influences on the implementation of supply chain process efficiency. The main problem faced in Jakarta is traffic jam that will affect on the time of distribution. Based on the system dynamic model, there are several scenarios that can provide solutions to minimize timing of distribution that will effect on the cost such as the construction of ports approaching industrial areas other than Tanjung Priok, widening road facilities, development of railways system, and the development of distribution center.
Impact of Soft Tissue Heterogeneity on Augmented Reality for Liver Surgery.
Haouchine, Nazim; Cotin, Stephane; Peterlik, Igor; Dequidt, Jeremie; Lopez, Mario Sanz; Kerrien, Erwan; Berger, Marie-Odile
2015-05-01
This paper presents a method for real-time augmented reality of internal liver structures during minimally invasive hepatic surgery. Vessels and tumors computed from pre-operative CT scans can be overlaid onto the laparoscopic view for surgery guidance. Compared to current methods, our method is able to locate the in-depth positions of the tumors based on partial three-dimensional liver tissue motion using a real-time biomechanical model. This model permits to properly handle the motion of internal structures even in the case of anisotropic or heterogeneous tissues, as it is the case for the liver and many anatomical structures. Experimentations conducted on phantom liver permits to measure the accuracy of the augmentation while real-time augmentation on in vivo human liver during real surgery shows the benefits of such an approach for minimally invasive surgery.
Nazeer, Shaiju S; Sandhyamani, S; Jayasree, Ramapurath S
2015-06-07
Worldwide, liver cancer is the fifth most common cancer in men and seventh most common cancer in women. Intoxicant-induced liver injury is one of the major causes for severe structural damage with fibrosis and functional derangement of the liver leading to cancer in its later stages. This report focuses on the minimally invasive autofluorescence spectroscopic (AFS) studies on intoxicant, carbon tetrachloride (CCl4)-induced liver damage in a rodent model. Different stages of liver damage, including the reversed stage, on stoppage of the intoxicant are examined. Emission from prominent fluorophores, such as collagen, nicotinamide adenine dinucleotide (NADH), and flavin adenine dinucleotide (FAD), and variations in redox ratio have been studied. A direct correlation between the severity of the disease and the levels of collagen and redox ratio was observed. On withdrawal of the intoxicant, a gradual reversal of the disease to normal conditions was observed as indicated by the decrease in collagen levels and redox ratio. Multivariate statistical techniques and principal component analysis followed by linear discriminant analysis (PC-LDA) were used to develop diagnostic algorithms for distinguishing different stages of the liver disease based on spectral features. The PC-LDA modeling on a minimally invasive AFS dataset yielded diagnostic sensitivities of 93%, 87% and 87% and specificities of 90%, 98% and 98% for pairwise classification among normal, fibrosis, cirrhosis and reversal conditions. We conclude that AFS along with PC-LDA algorithm has the potential for rapid and accurate minimally invasive diagnosis and detection of structural changes due to liver injury resulting from various intoxicants.
The impact of joint responses of devices in an airport security system.
Nie, Xiaofeng; Batta, Rajan; Drury, Colin G; Lin, Li
2009-02-01
In this article, we consider a model for an airport security system in which the declaration of a threat is based on the joint responses of inspection devices. This is in contrast to the typical system in which each check station independently declares a passenger as having a threat or not having a threat. In our framework the declaration of threat/no-threat is based upon the passenger scores at the check stations he/she goes through. To do this we use concepts from classification theory in the field of multivariate statistics analysis and focus on the main objective of minimizing the expected cost of misclassification. The corresponding correct classification and misclassification probabilities can be obtained by using a simulation-based method. After computing the overall false alarm and false clear probabilities, we compare our joint response system with two other independently operated systems. A model that groups passengers in a manner that minimizes the false alarm probability while maintaining the false clear probability within specifications set by a security authority is considered. We also analyze the staffing needs at each check station for such an inspection scheme. An illustrative example is provided along with sensitivity analysis on key model parameters. A discussion is provided on some implementation issues, on the various assumptions made in the analysis, and on potential drawbacks of the approach.
A new mathematical modeling approach for the energy of threonine molecule
NASA Astrophysics Data System (ADS)
Sahiner, Ahmet; Kapusuz, Gulden; Yilmaz, Nurullah
2017-07-01
In this paper, we propose an improved new methodology in energy conformation problems for finding optimum energy values. First, we construct the Bezier surfaces near local minimizers based on the data obtained from Density Functional Theory (DFT) calculations. Second, we blend the constructed surfaces in order to obtain a single smooth model. Finally, we apply the global optimization algorithm to find two torsion angles those make the energy of the molecule minimum.
Energy-Based Design of Reconfigurable Micro Air Vehicle (MAV) Flight Structures
2014-02-01
plate bending element derived herein. The purpose of the six degree-of-freedom model was to accommodate in-plane and out-of-plane aerodynamic loading...combinations. The FE model was validated and the MATLAB implementation was verified with classical beam and plate solutions. A compliance minimization...formulation was not found among the finite element literature. Therefore a formulation of such a bending element was derived using classic Kirchoff plate
NASA Technical Reports Server (NTRS)
Holms, A. G.
1980-01-01
Population model coefficients were chosen to simulate a saturated 2 to the fourth power fixed effects experiment having an unfavorable distribution of relative values. Using random number studies, deletion strategies were compared that were based on the F distribution, on an order statistics distribution of Cochran's, and on a combination of the two. Results of the comparisons and a recommended strategy are given.
Formulation analysis and computation of an optimization-based local-to-nonlocal coupling method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Elia, Marta; Bochev, Pavel Blagoveston
2017-01-01
In this paper, we present an optimization-based coupling method for local and nonlocal continuum models. Our approach couches the coupling of the models into a control problem where the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the local and nonlocal problem domains, and the virtual controls are the nonlocal volume constraint and the local boundary condition. We present the method in the context of Local-to-Nonlocal di usion coupling. Numerical examples illustrate the theoretical properties of the approach.
Rapid identification of bacterial biofilms and biofilm wound models using a multichannel nanosensor.
Li, Xiaoning; Kong, Hao; Mout, Rubul; Saha, Krishnendu; Moyano, Daniel F; Robinson, Sandra M; Rana, Subinoy; Zhang, Xinrong; Riley, Margaret A; Rotello, Vincent M
2014-12-23
Identification of infectious bacteria responsible for biofilm-associated infections is challenging due to the complex and heterogeneous biofilm matrix. To address this issue and minimize the impact of heterogeneity on biofilm identification, we developed a gold nanoparticle (AuNP)-based multichannel sensor to detect and identify biofilms based on their physicochemical properties. Our results showed that the sensor can discriminate six bacterial biofilms including two composed of uropathogenic bacteria. The capability of the sensor was further demonstrated through discrimination of biofilms in a mixed bacteria/mammalian cell in vitro wound model.
Barbarich-Marsteller, Nicole C.; Underwood, Mark D.; Foltin, Richard W.; Myers, Michael M.; Walsh, B. Timothy; Barrett, Jeffrey S.; Marsteller, Douglas A.
2018-01-01
Objective Activity-based anorexia is a translational rodent model that results in severe weight loss, hyperactivity, and voluntary self-starvation. The goal of our investigation was to identify vulnerable and resistant phenotypes of activity-based anorexia in adolescent female rats. Method Sprague-Dawley rats were maintained under conditions of restricted access to food (N = 64; or unlimited access, N = 16) until experimental exit, predefined as a target weight loss of 30–35% or meeting predefined criteria for animal health. Nonlinear mixed effects statistical modeling was used to describe wheel running behavior, time to event analysis was used to assess experimental exit, and a regressive partitioning algorithm was used to classify phenotypes. Results Objective criteria were identified for distinguishing novel phenotypes of activity-based anorexia, including a vulnerable phenotype that conferred maximal hyperactivity, minimal food intake, and the shortest time to experimental exit, and a resistant phenotype that conferred minimal activity and the longest time to experimental exit. Discussion The identification of objective criteria for defining vulnerable and resistant phenotypes of activity-based anorexia in adolescent female rats provides an important framework for studying the neural mechanisms that promote vulnerability to or protection against the development of self-starvation and hyperactivity during adolescence. Ultimately, future studies using these novel phenotypes may provide important translational insights into the mechanisms that promote these maladaptive behaviors characteristic of anorexia nervosa. PMID:23853140
Barbarich-Marsteller, Nicole C; Underwood, Mark D; Foltin, Richard W; Myers, Michael M; Walsh, B Timothy; Barrett, Jeffrey S; Marsteller, Douglas A
2013-11-01
Activity-based anorexia is a translational rodent model that results in severe weight loss, hyperactivity, and voluntary self-starvation. The goal of our investigation was to identify vulnerable and resistant phenotypes of activity-based anorexia in adolescent female rats. Sprague-Dawley rats were maintained under conditions of restricted access to food (N = 64; or unlimited access, N = 16) until experimental exit, predefined as a target weight loss of 30-35% or meeting predefined criteria for animal health. Nonlinear mixed effects statistical modeling was used to describe wheel running behavior, time to event analysis was used to assess experimental exit, and a regressive partitioning algorithm was used to classify phenotypes. Objective criteria were identified for distinguishing novel phenotypes of activity-based anorexia, including a vulnerable phenotype that conferred maximal hyperactivity, minimal food intake, and the shortest time to experimental exit, and a resistant phenotype that conferred minimal activity and the longest time to experimental exit. The identification of objective criteria for defining vulnerable and resistant phenotypes of activity-based anorexia in adolescent female rats provides an important framework for studying the neural mechanisms that promote vulnerability to or protection against the development of self-starvation and hyperactivity during adolescence. Ultimately, future studies using these novel phenotypes may provide important translational insights into the mechanisms that promote these maladaptive behaviors characteristic of anorexia nervosa. Copyright © 2013 Wiley Periodicals, Inc.
Thermometry and thermal management of carbon nanotube circuits
NASA Astrophysics Data System (ADS)
Mayle, Scott; Gupta, Tanuj; Davis, Sam; Chandrasekhar, Venkat; Shafraniuk, Serhii
2015-05-01
Monitoring of the intrinsic temperature and the thermal management is discussed for the carbon nanotube nano-circuits. The experimental results concerning fabricating and testing of a thermometer able to monitor the intrinsic temperature on nanoscale are reported. We also suggest a model which describes a bi-metal multilayer system able to filter the heat flow, based on separating the electron and phonon components one from another. The bi-metal multilayer structure minimizes the phonon component of the heat flow, while retaining the electronic part. The method allows one to improve the overall performance of the electronic nano-circuits due to minimizing the energy dissipation.
Wagner, Martin G; Hatt, Charles R; Dunkerley, David A P; Bodart, Lindsay E; Raval, Amish N; Speidel, Michael A
2018-04-16
Transcatheter aortic valve replacement (TAVR) is a minimally invasive procedure in which a prosthetic heart valve is placed and expanded within a defective aortic valve. The device placement is commonly performed using two-dimensional (2D) fluoroscopic imaging. Within this work, we propose a novel technique to track the motion and deformation of the prosthetic valve in three dimensions based on biplane fluoroscopic image sequences. The tracking approach uses a parameterized point cloud model of the valve stent which can undergo rigid three-dimensional (3D) transformation and different modes of expansion. Rigid elements of the model are individually rotated and translated in three dimensions to approximate the motions of the stent. Tracking is performed using an iterative 2D-3D registration procedure which estimates the model parameters by minimizing the mean-squared image values at the positions of the forward-projected model points. Additionally, an initialization technique is proposed, which locates clusters of salient features to determine the initial position and orientation of the model. The proposed algorithms were evaluated based on simulations using a digital 4D CT phantom as well as experimentally acquired images of a prosthetic valve inside a chest phantom with anatomical background features. The target registration error was 0.12 ± 0.04 mm in the simulations and 0.64 ± 0.09 mm in the experimental data. The proposed algorithm could be used to generate 3D visualization of the prosthetic valve from two projections. In combination with soft-tissue sensitive-imaging techniques like transesophageal echocardiography, this technique could enable 3D image guidance during TAVR procedures. © 2018 American Association of Physicists in Medicine.
Dik, Jan-Willem H; Hendrix, Ron; Friedrich, Alex W; Luttjeboer, Jos; Panday, Prashant Nannan; Wilting, Kasper R; Lo-Ten-Foe, Jerome R; Postma, Maarten J; Sinha, Bhanu
2015-01-01
In order to stimulate appropriate antimicrobial use and thereby lower the chances of resistance development, an Antibiotic Stewardship Team (A-Team) has been implemented at the University Medical Center Groningen, the Netherlands. Focus of the A-Team was a pro-active day 2 case-audit, which was financially evaluated here to calculate the return on investment from a hospital perspective. Effects were evaluated by comparing audited patients with a historic cohort with the same diagnosis-related groups. Based upon this evaluation a cost-minimization model was created that can be used to predict the financial effects of a day 2 case-audit. Sensitivity analyses were performed to deal with uncertainties. Finally, the model was used to financially evaluate the A-Team. One whole year including 114 patients was evaluated. Implementation costs were calculated to be €17,732, which represent total costs spent to implement this A-Team. For this specific patient group admitted to a urology ward and consulted on day 2 by the A-Team, the model estimated total savings of €60,306 after one year for this single department, leading to a return on investment of 5.9. The implemented multi-disciplinary A-Team performing a day 2 case-audit in the hospital had a positive return on investment caused by a reduced length of stay due to a more appropriate antibiotic therapy. Based on the extensive data analysis, a model of this intervention could be constructed. This model could be used by other institutions, using their own data to estimate the effects of a day 2 case-audit in their hospital.
Information needs for increasing log transport efficiency
Timothy P. McDonald; Steven E. Taylor; Robert B. Rummer; Jorge Valenzuela
2001-01-01
Three methods of dispatching trucks to loggers were tested using a log transport simulation model: random allocation, fixed assignment of trucks to loggers, and dispatch based on knowledge of the current status of trucks and loggers within the system. This 'informed' dispatch algorithm attempted to minimize the difference in time between when a logger would...
Hydrological processes at the urban residential scale
Q. Xiao; E.G. McPherson; J.R. Simpson; S.L. Ustin
2007-01-01
In the face of increasing urbanization, there is growing interest in application of microscale hydrologic solutions to minimize storm runoff and conserve water at the source. In this study, a physically based numerical model was developed to understand hydrologic processes better at the urban residential scale and the interaction of these processes among different...
Proceedings of the 1984 IEEE international conference on systems, man and cybernetics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1984-01-01
This conference contains papers on artificial intelligence, pattern recognition, and man-machine systems. Topics considered include concurrent minimization, a robot programming system, system modeling and simulation, camera calibration, thermal power plants, image processing, fault diagnosis, knowledge-based systems, power systems, hydroelectric power plants, expert systems, and electrical transients.
Persistence in Distance Education: A Study Case Using Bayesian Network to Understand Retention
ERIC Educational Resources Information Center
Eliasquevici, Marianne Kogut; da Rocha Seruffo, Marcos César; Resque, Sônia Nazaré Fernandes
2017-01-01
This article presents a study on the variables promoting student retention in distance undergraduate courses at Federal University of Pará, aiming to help school managers minimize student attrition and maximize retention until graduation. The theoretical background is based on Rovai's Composite Model and the methodological approach is conditional…
Zhang, Cheng; Zhang, Tao; Li, Ming; Lu, Yanfei; You, Jiali; Guan, Yihui
2015-01-01
In recent years, X-ray computed tomography (CT) is becoming widely used to reveal patient's anatomical information. However, the side effect of radiation, relating to genetic or cancerous diseases, has caused great public concern. The problem is how to minimize radiation dose significantly while maintaining image quality. As a practical application of compressed sensing theory, one category of methods takes total variation (TV) minimization as the sparse constraint, which makes it possible and effective to get a reconstruction image of high quality in the undersampling situation. On the other hand, a preliminary attempt of low-dose CT reconstruction based on dictionary learning seems to be another effective choice. But some critical parameters, such as the regularization parameter, cannot be determined by detecting datasets. In this paper, we propose a reweighted objective function that contributes to a numerical calculation model of the regularization parameter. A number of experiments demonstrate that this strategy performs well with better reconstruction images and saving of a large amount of time. PMID:26550024
Two Methods for Efficient Solution of the Hitting-Set Problem
NASA Technical Reports Server (NTRS)
Vatan, Farrokh; Fijany, Amir
2005-01-01
A paper addresses much of the same subject matter as that of Fast Algorithms for Model-Based Diagnosis (NPO-30582), which appears elsewhere in this issue of NASA Tech Briefs. However, in the paper, the emphasis is more on the hitting-set problem (also known as the transversal problem), which is well known among experts in combinatorics. The authors primary interest in the hitting-set problem lies in its connection to the diagnosis problem: it is a theorem of model-based diagnosis that in the set-theory representation of the components of a system, the minimal diagnoses of a system are the minimal hitting sets of the system. In the paper, the hitting-set problem (and, hence, the diagnosis problem) is translated from a combinatorial to a computational problem by mapping it onto the Boolean satisfiability and integer- programming problems. The paper goes on to describe developments nearly identical to those summarized in the cited companion NASA Tech Briefs article, including the utilization of Boolean-satisfiability and integer- programming techniques to reduce the computation time and/or memory needed to solve the hitting-set problem.
Differential geometry based solvation model II: Lagrangian formulation
Chen, Zhan; Baker, Nathan A.; Wei, G. W.
2010-01-01
Solvation is an elementary process in nature and is of paramount importance to more sophisticated chemical, biological and biomolecular processes. The understanding of solvation is an essential prerequisite for the quantitative description and analysis of biomolecular systems. This work presents a Lagrangian formulation of our differential geometry based solvation model. The Lagrangian representation of biomolecular surfaces has a few utilities/advantages. First, it provides an essential basis for biomolecular visualization, surface electrostatic potential map and visual perception of biomolecules. Additionally, it is consistent with the conventional setting of implicit solvent theories and thus, many existing theoretical algorithms and computational software packages can be directly employed. Finally, the Lagrangian representation does not need to resort to artificially enlarged van der Waals radii as often required by the Eulerian representation in solvation analysis. The main goal of the present work is to analyze the connection, similarity and difference between the Eulerian and Lagrangian formalisms of the solvation model. Such analysis is important to the understanding of the differential geometry based solvation model. The present model extends the scaled particle theory (SPT) of nonpolar solvation model with a solvent-solute interaction potential. The nonpolar solvation model is completed with a Poisson-Boltzmann (PB) theory based polar solvation model. The differential geometry theory of surfaces is employed to provide a natural description of solvent-solute interfaces. The minimization of the total free energy functional, which encompasses the polar and nonpolar contributions, leads to coupled potential driven geometric flow and Poisson-Boltzmann equations. Due to the development of singularities and nonsmooth manifolds in the Lagrangian representation, the resulting potential-driven geometric flow equation is embedded into the Eulerian representation for the purpose of computation, thanks to the equivalence of the Laplace-Beltrami operator in the two representations. The coupled partial differential equations (PDEs) are solved with an iterative procedure to reach a steady state, which delivers desired solvent-solute interface and electrostatic potential for problems of interest. These quantities are utilized to evaluate the solvation free energies and protein-protein binding affinities. A number of computational methods and algorithms are described for the interconversion of Lagrangian and Eulerian representations, and for the solution of the coupled PDE system. The proposed approaches have been extensively validated. We also verify that the mean curvature flow indeed gives rise to the minimal molecular surface (MMS) and the proposed variational procedure indeed offers minimal total free energy. Solvation analysis and applications are considered for a set of 17 small compounds and a set of 23 proteins. The salt effect on protein-protein binding affinity is investigated with two protein complexes by using the present model. Numerical results are compared to the experimental measurements and to those obtained by using other theoretical methods in the literature. PMID:21279359
Feldmann, Arne; Anso, Juan; Bell, Brett; Williamson, Tom; Gavaghan, Kate; Gerber, Nicolas; Rohrbach, Helene; Weber, Stefan; Zysset, Philippe
2016-05-01
Surgical robots have been proposed ex vivo to drill precise holes in the temporal bone for minimally invasive cochlear implantation. The main risk of the procedure is damage of the facial nerve due to mechanical interaction or due to temperature elevation during the drilling process. To evaluate the thermal risk of the drilling process, a simplified model is proposed which aims to enable an assessment of risk posed to the facial nerve for a given set of constant process parameters for different mastoid bone densities. The model uses the bone density distribution along the drilling trajectory in the mastoid bone to calculate a time dependent heat production function at the tip of the drill bit. Using a time dependent moving point source Green's function, the heat equation can be solved at a certain point in space so that the resulting temperatures can be calculated over time. The model was calibrated and initially verified with in vivo temperature data. The data was collected in minimally invasive robotic drilling of 12 holes in four different sheep. The sheep were anesthetized and the temperature elevations were measured with a thermocouple which was inserted in a previously drilled hole next to the planned drilling trajectory. Bone density distributions were extracted from pre-operative CT data by averaging Hounsfield values over the drill bit diameter. Post-operative [Formula: see text]CT data was used to verify the drilling accuracy of the trajectories. The comparison of measured and calculated temperatures shows a very good match for both heating and cooling phases. The average prediction error of the maximum temperature was less than 0.7 °C and the average root mean square error was approximately 0.5 °C. To analyze potential thermal damage, the model was used to calculate temperature profiles and cumulative equivalent minutes at 43 °C at a minimal distance to the facial nerve. For the selected drilling parameters, temperature elevation profiles and cumulative equivalent minutes suggest that thermal elevation of this minimally invasive cochlear implantation surgery may pose a risk to the facial nerve, especially in sclerotic or high density mastoid bones. Optimized drilling parameters need to be evaluated and the model could be used for future risk evaluation.
Human systems immunology: hypothesis-based modeling and unbiased data-driven approaches.
Arazi, Arnon; Pendergraft, William F; Ribeiro, Ruy M; Perelson, Alan S; Hacohen, Nir
2013-10-31
Systems immunology is an emerging paradigm that aims at a more systematic and quantitative understanding of the immune system. Two major approaches have been utilized to date in this field: unbiased data-driven modeling to comprehensively identify molecular and cellular components of a system and their interactions; and hypothesis-based quantitative modeling to understand the operating principles of a system by extracting a minimal set of variables and rules underlying them. In this review, we describe applications of the two approaches to the study of viral infections and autoimmune diseases in humans, and discuss possible ways by which these two approaches can synergize when applied to human immunology. Copyright © 2012 Elsevier Ltd. All rights reserved.
Development of Unsteady Aerodynamic and Aeroelastic Reduced-Order Models Using the FUN3D Code
NASA Technical Reports Server (NTRS)
Silva, Walter A.; Vatsa, Veer N.; Biedron, Robert T.
2009-01-01
Recent significant improvements to the development of CFD-based unsteady aerodynamic reduced-order models (ROMs) are implemented into the FUN3D unstructured flow solver. These improvements include the simultaneous excitation of the structural modes of the CFD-based unsteady aerodynamic system via a single CFD solution, minimization of the error between the full CFD and the ROM unsteady aero- dynamic solution, and computation of a root locus plot of the aeroelastic ROM. Results are presented for a viscous version of the two-dimensional Benchmark Active Controls Technology (BACT) model and an inviscid version of the AGARD 445.6 aeroelastic wing using the FUN3D code.
Ross, Rachel A.; Mandelblat-Cerf, Yael; Verstegen, Anne M.J.
2017-01-01
Anorexia nervosa (AN) is a psychiatric illness with minimal effective treatments and a very high rate of mortality. Understanding the neurobiological underpinnings of the disease is imperative for improving outcomes and can be aided by the study of animal models. The activity-based anorexia rodent model (ABA) is the current best parallel for the study of AN. This review describes the basic neurobiology of feeding and hyperactivity seen in both ABA and AN, and compiles the research on the role that stress-response and reward pathways play in modulating the homeostatic drive to eat and to expend energy, which become dysfunctional in ABA and AN. PMID:27824637
Ross, Rachel A; Mandelblat-Cerf, Yael; Verstegen, Anne M J
Anorexia nervosa (AN) is a psychiatric illness with minimal effective treatments and a very high rate of mortality. Understanding the neurobiological underpinnings of the disease is imperative for improving outcomes and can be aided by the study of animal models. The activity-based anorexia rodent model (ABA) is the current best parallel for the study of AN. This review describes the basic neurobiology of feeding and hyperactivity seen in both ABA and AN, and compiles the research on the role that stress-response and reward pathways play in modulating the homeostatic drive to eat and to expend energy, which become dysfunctional in ABA and AN.
Salgado, Iván; Mera-Hernández, Manuel; Chairez, Isaac
2017-11-01
This study addresses the problem of designing an output-based controller to stabilize multi-input multi-output (MIMO) systems in the presence of parametric disturbances as well as uncertainties in the state model and output noise measurements. The controller design includes a linear state transformation which separates uncertainties matched to the control input and the unmatched ones. A differential neural network (DNN) observer produces a nonlinear approximation of the matched perturbation and the unknown states simultaneously in the transformed coordinates. This study proposes the use of the Attractive Ellipsoid Method (AEM) to optimize the gains of the controller and the gain observer in the DNN structure. As a consequence, the obtained control input minimizes the convergence zone for the estimation error. Moreover, the control design uses the estimated disturbance provided by the DNN to obtain a better performance in the stabilization task in comparison with a quasi-minimal output feedback controller based on a Luenberger observer and a sliding mode controller. Numerical results pointed out the advantages obtained by the nonlinear control based on the DNN observer. The first example deals with the stabilization of an academic linear MIMO perturbed system and the second example stabilizes the trajectories of a DC-motor into a predefined operation point. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Daigle, Matthew John; Goebel, Kai Frank
2010-01-01
Model-based prognostics captures system knowledge in the form of physics-based models of components, and how they fail, in order to obtain accurate predictions of end of life (EOL). EOL is predicted based on the estimated current state distribution of a component and expected profiles of future usage. In general, this requires simulations of the component using the underlying models. In this paper, we develop a simulation-based prediction methodology that achieves computational efficiency by performing only the minimal number of simulations needed in order to accurately approximate the mean and variance of the complete EOL distribution. This is performed through the use of the unscented transform, which predicts the means and covariances of a distribution passed through a nonlinear transformation. In this case, the EOL simulation acts as that nonlinear transformation. In this paper, we review the unscented transform, and describe how this concept is applied to efficient EOL prediction. As a case study, we develop a physics-based model of a solenoid valve, and perform simulation experiments to demonstrate improved computational efficiency without sacrificing prediction accuracy.
Gu, Deqing; Jian, Xingxing; Zhang, Cheng; Hua, Qiang
2017-01-01
Genome-scale metabolic network models (GEMs) have played important roles in the design of genetically engineered strains and helped biologists to decipher metabolism. However, due to the complex gene-reaction relationships that exist in model systems, most algorithms have limited capabilities with respect to directly predicting accurate genetic design for metabolic engineering. In particular, methods that predict reaction knockout strategies leading to overproduction are often impractical in terms of gene manipulations. Recently, we proposed a method named logical transformation of model (LTM) to simplify the gene-reaction associations by introducing intermediate pseudo reactions, which makes it possible to generate genetic design. Here, we propose an alternative method to relieve researchers from deciphering complex gene-reactions by adding pseudo gene controlling reactions. In comparison to LTM, this new method introduces fewer pseudo reactions and generates a much smaller model system named as gModel. We showed that gModel allows two seldom reported applications: identification of minimal genomes and design of minimal cell factories within a modified OptKnock framework. In addition, gModel could be used to integrate expression data directly and improve the performance of the E-Fmin method for predicting fluxes. In conclusion, the model transformation procedure will facilitate genetic research based on GEMs, extending their applications.
Sampling strategies based on singular vectors for assimilated models in ocean forecasting systems
NASA Astrophysics Data System (ADS)
Fattorini, Maria; Brandini, Carlo; Ortolani, Alberto
2016-04-01
Meteorological and oceanographic models do need observations, not only as a ground truth element to verify the quality of the models, but also to keep model forecast error acceptable: through data assimilation techniques which merge measured and modelled data, natural divergence of numerical solutions from reality can be reduced / controlled and a more reliable solution - called analysis - is computed. Although this concept is valid in general, its application, especially in oceanography, raises many problems due to three main reasons: the difficulties that have ocean models in reaching an acceptable state of equilibrium, the high measurements cost and the difficulties in realizing them. The performances of the data assimilation procedures depend on the particular observation networks in use, well beyond the background quality and the used assimilation method. In this study we will present some results concerning the great impact of the dataset configuration, in particular measurements position, on the evaluation of the overall forecasting reliability of an ocean model. The aim consists in identifying operational criteria to support the design of marine observation networks at regional scale. In order to identify the observation network able to minimize the forecast error, a methodology based on Singular Vectors Decomposition of the tangent linear model is proposed. Such a method can give strong indications on the local error dynamics. In addition, for the purpose of avoiding redundancy of information contained in the data, a minimal distance among data positions has been chosen on the base of a spatial correlation analysis of the hydrodynamic fields under investigation. This methodology has been applied for the choice of data positions starting from simplified models, like an ideal double-gyre model and a quasi-geostrophic one. Model configurations and data assimilation are based on available ROMS routines, where a variational assimilation algorithm (4D-var) is included as part of the code These first applications have provided encouraging results in terms of increased predictability time and reduced forecast error, also improving the quality of the analysis used to recover the real circulation patterns from a first guess quite far from the real state.
Common elements of adolescent prevention programs: minimizing burden while maximizing reach.
Boustani, Maya M; Frazier, Stacy L; Becker, Kimberly D; Bechor, Michele; Dinizulu, Sonya M; Hedemann, Erin R; Ogle, Robert R; Pasalich, Dave S
2015-03-01
A growing number of evidence-based youth prevention programs are available, but challenges related to dissemination and implementation limit their reach and impact. The current review identifies common elements across evidence-based prevention programs focused on the promotion of health-related outcomes in adolescents. We reviewed and coded descriptions of the programs for common practice and instructional elements. Problem-solving emerged as the most common practice element, followed by communication skills, and insight building. Psychoeducation, modeling, and role play emerged as the most common instructional elements. In light of significant comorbidity in poor outcomes for youth, and corresponding overlap in their underlying skills deficits, we propose that synthesizing the prevention literature using a common elements approach has the potential to yield novel information and inform prevention programming to minimize burden and maximize reach and impact for youth.
A New Distributed Optimization for Community Microgrids Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starke, Michael R; Tomsovic, Kevin
This paper proposes a distributed optimization model for community microgrids considering the building thermal dynamics and customer comfort preference. The microgrid central controller (MCC) minimizes the total cost of operating the community microgrid, including fuel cost, purchasing cost, battery degradation cost and voluntary load shedding cost based on the customers' consumption, while the building energy management systems (BEMS) minimize their electricity bills as well as the cost associated with customer discomfort due to room temperature deviation from the set point. The BEMSs and the MCC exchange information on energy consumption and prices. When the optimization converges, the distributed generation scheduling,more » energy storage charging/discharging and customers' consumption as well as the energy prices are determined. In particular, we integrate the detailed thermal dynamic characteristics of buildings into the proposed model. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of proposed model.« less
Augmented halal food traceability system: analysis and design using UML
NASA Astrophysics Data System (ADS)
Usman, Y. V.; Fauzi, A. M.; Irawadi, T. T.; Djatna, T.
2018-04-01
Augmented halal food traceability is expanding the range of halal traceability in food supply chain where currently only available for tracing from the source of raw material to the industrial warehouse or inbound logistic. The halal traceability system must be developed in the integrated form that includes inbound and outbound logistics. The objective of this study was to develop a reliable initial model of integrated traceability system of halal food supply chain. The method was based on unified modeling language (UML) such as use case, sequence, and business process diagram. A goal programming model was formulated considering two objective functions which include (1) minimization of risk of halal traceability failures happened potentially during outbound logistics activities and (2) maximization of quality of halal product information. The result indicates the supply of material is the most important point to be considered in minimizing the risk of failure of halal food traceability system whereas no risk observed in manufacturing and distribution.
Minimal-assumption inference from population-genomic data
NASA Astrophysics Data System (ADS)
Weissman, Daniel; Hallatschek, Oskar
Samples of multiple complete genome sequences contain vast amounts of information about the evolutionary history of populations, much of it in the associations among polymorphisms at different loci. Current methods that take advantage of this linkage information rely on models of recombination and coalescence, limiting the sample sizes and populations that they can analyze. We introduce a method, Minimal-Assumption Genomic Inference of Coalescence (MAGIC), that reconstructs key features of the evolutionary history, including the distribution of coalescence times, by integrating information across genomic length scales without using an explicit model of recombination, demography or selection. Using simulated data, we show that MAGIC's performance is comparable to PSMC' on single diploid samples generated with standard coalescent and recombination models. More importantly, MAGIC can also analyze arbitrarily large samples and is robust to changes in the coalescent and recombination processes. Using MAGIC, we show that the inferred coalescence time histories of samples of multiple human genomes exhibit inconsistencies with a description in terms of an effective population size based on single-genome data.
Minimal model for tag-based cooperation
NASA Astrophysics Data System (ADS)
Traulsen, Arne; Schuster, Heinz Georg
2003-10-01
Recently, Riolo et al. [Nature (London) 414, 441 (2001)] showed by computer simulations that cooperation can arise without reciprocity when agents donate only to partners who are sufficiently similar to themselves. One striking outcome of their simulations was the observation that the number of tolerant agents that support a wide range of players was not constant in time, but showed characteristic fluctuations. The cause and robustness of these tides of tolerance remained to be explored. Here we clarify the situation by solving a minimal version of the model of Riolo et al. It allows us to identify a net surplus of random changes from intolerant to tolerant agents as a necessary mechanism that produces these oscillations of tolerance, which segregate different agents in time. This provides a new mechanism for maintaining different agents, i.e., for creating biodiversity. In our model the transition to the oscillating state is caused by a saddle node bifurcation. The frequency of the oscillations increases linearly with the transition rate from tolerant to intolerant agents.
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; ...
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulicmore » head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.« less
Modeling of substrate and inhibitor binding to phospholipase A2.
Sessions, R B; Dauber-Osguthorpe, P; Campbell, M M; Osguthorpe, D J
1992-09-01
Molecular graphics and molecular mechanics techniques have been used to study the mode of ligand binding and mechanism of action of the enzyme phospholipase A2. A substrate-enzyme complex was constructed based on the crystal structure of the apoenzyme. The complex was minimized to relieve initial strain, and the structural and energetic features of the resultant complex analyzed in detail, at the molecular and residue level. The minimized complex was then used as a basis for examining the action of the enzyme on modified substrates, binding of inhibitors to the enzyme, and possible reaction intermediate complexes. The model is compatible with the suggested mechanism of hydrolysis and with experimental data about stereoselectivity, efficiency of hydrolysis of modified substrates, and inhibitor potency. In conclusion, the model can be used as a tool in evaluating new ligands as possible substrates and in the rational design of inhibitors, for the therapeutic treatment of diseases such as rheumatoid arthritis, atherosclerosis, and asthma.
Reduction of shock induced noise in imperfectly expanded supersonic jets using convex optimization
NASA Astrophysics Data System (ADS)
Adhikari, Sam
2007-11-01
Imperfectly expanded jets generate screech noise. The imbalance between the backpressure and the exit pressure of the imperfectly expanded jets produce shock cells and expansion or compression waves from the nozzle. The instability waves and the shock cells interact to generate the screech sound. The mathematical model consists of cylindrical coordinate based full Navier-Stokes equations and large-eddy-simulation turbulence modeling. Analytical and computational analysis of the three-dimensional helical effects provide a model that relates several parameters with shock cell patterns, screech frequency and distribution of shock generation locations. Convex optimization techniques minimize the shock cell patterns and the instability waves. The objective functions are (convex) quadratic and the constraint functions are affine. In the quadratic optimization programs, minimization of the quadratic functions over a set of polyhedrons provides the optimal result. Various industry standard methods like regression analysis, distance between polyhedra, bounding variance, Markowitz optimization, and second order cone programming is used for Quadratic Optimization.
Model-based tomographic reconstruction
Chambers, David H; Lehman, Sean K; Goodman, Dennis M
2012-06-26
A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.
Wang, Yu; Zhang, Yaonan; Yao, Zhaomin; Zhao, Ruixue; Zhou, Fengfeng
2016-01-01
Non-lethal macular diseases greatly impact patients’ life quality, and will cause vision loss at the late stages. Visual inspection of the optical coherence tomography (OCT) images by the experienced clinicians is the main diagnosis technique. We proposed a computer-aided diagnosis (CAD) model to discriminate age-related macular degeneration (AMD), diabetic macular edema (DME) and healthy macula. The linear configuration pattern (LCP) based features of the OCT images were screened by the Correlation-based Feature Subset (CFS) selection algorithm. And the best model based on the sequential minimal optimization (SMO) algorithm achieved 99.3% in the overall accuracy for the three classes of samples. PMID:28018716
Lin, Yunyue; Wu, Qishi; Cai, Xiaoshan; ...
2010-01-01
Data transmission from sensor nodes to a base station or a sink node often incurs significant energy consumption, which critically affects network lifetime. We generalize and solve the problem of deploying multiple base stations to maximize network lifetime in terms of two different metrics under one-hop and multihop communication models. In the one-hop communication model, the sensors far away from base stations always deplete their energy much faster than others. We propose an optimal solution and a heuristic approach based on the minimal enclosing circle algorithm to deploy a base station at the geometric center of each cluster. In themore » multihop communication model, both base station location and data routing mechanism need to be considered in maximizing network lifetime. We propose an iterative algorithm based on rigorous mathematical derivations and use linear programming to compute the optimal routing paths for data transmission. Simulation results show the distinguished performance of the proposed deployment algorithms in maximizing network lifetime.« less
Stochastic simulation by image quilting of process-based geological models
NASA Astrophysics Data System (ADS)
Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef
2017-09-01
Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse-a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models
Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. In This Paper, Extensions Of The D-Optimal Minimal Designs Are Developed For A General Mixture Model To Allow Additional Interior Points In The Design Space To Enable Prediction Of The Entire Response Surface Also a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell (1986) two ten-point designs for the Lack of Fit test by simulations. PMID:29081574
Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes
Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian
2015-01-01
Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses addressing the cases on road with multiple bus routes is proposed in this paper, based on support vector machines (SVMs) and Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm can adjust bus travel times with the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with the real-world data on road with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction performance among all the five models proposed in the study in terms of prediction accuracy on road with multiple bus routes. PMID:26294903
Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes.
Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian
2015-01-01
Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses addressing the cases on road with multiple bus routes is proposed in this paper, based on support vector machines (SVMs) and Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm can adjust bus travel times with the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with the real-world data on road with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction performance among all the five models proposed in the study in terms of prediction accuracy on road with multiple bus routes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rongle Zhang; Jie Chang; Yuanyuan Xu
A new kinetic model of the Fischer-Tropsch synthesis (FTS) is proposed to describe the non-Anderson-Schulz-Flory (ASF) product distribution. The model is based on the double-polymerization monomers hypothesis, in which the surface C{sub 2}{asterisk} species acts as a chain-growth monomer in the light-product range, while C{sub 1}{asterisk} species acts as a chain-growth monomer in the heavy-product range. The detailed kinetic model in the Langmuir-Hinshelwood-Hougen-Watson type based on the elementary reactions is derived for FTS and the water-gas-shift reaction. Kinetic model candidates are evaluated by minimization of multiresponse objective functions with a genetic algorithm approach. The model of hydrocarbon product distribution ismore » consistent with experimental data (
2016-01-01
Elucidating the underlying mechanisms of fatal cardiac arrhythmias requires a tight integration of electrophysiological experiments, models, and theory. Existing models of transmembrane action potential (AP) are complex (resulting in over parameterization) and varied (leading to dissimilar predictions). Thus, simpler models are needed to elucidate the “minimal physiological requirements” to reproduce significant observable phenomena using as few parameters as possible. Moreover, models have been derived from experimental studies from a variety of species under a range of environmental conditions (for example, all existing rabbit AP models incorporate a formulation of the rapid sodium current, INa, based on 30 year old data from chick embryo cell aggregates). Here we develop a simple “parsimonious” rabbit AP model that is mathematically identifiable (i.e., not over parameterized) by combining a novel Hodgkin-Huxley formulation of INa with a phenomenological model of repolarization similar to the voltage dependent, time-independent rectifying outward potassium current (IK). The model was calibrated using the following experimental data sets measured from the same species (rabbit) under physiological conditions: dynamic current-voltage (I-V) relationships during the AP upstroke; rapid recovery of AP excitability during the relative refractory period; and steady-state INa inactivation via voltage clamp. Simulations reproduced several important “emergent” phenomena including cellular alternans at rates > 250 bpm as observed in rabbit myocytes, reentrant spiral waves as observed on the surface of the rabbit heart, and spiral wave breakup. Model variants were studied which elucidated the minimal requirements for alternans and spiral wave break up, namely the kinetics of INa inactivation and the non-linear rectification of IK.The simplicity of the model, and the fact that its parameters have physiological meaning, make it ideal for engendering generalizable mechanistic insight and should provide a solid “building-block” to generate more detailed ionic models to represent complex rabbit electrophysiology. PMID:27749895
Gray, Richard A; Pathmanathan, Pras
2016-10-01
Elucidating the underlying mechanisms of fatal cardiac arrhythmias requires a tight integration of electrophysiological experiments, models, and theory. Existing models of transmembrane action potential (AP) are complex (resulting in over parameterization) and varied (leading to dissimilar predictions). Thus, simpler models are needed to elucidate the "minimal physiological requirements" to reproduce significant observable phenomena using as few parameters as possible. Moreover, models have been derived from experimental studies from a variety of species under a range of environmental conditions (for example, all existing rabbit AP models incorporate a formulation of the rapid sodium current, INa, based on 30 year old data from chick embryo cell aggregates). Here we develop a simple "parsimonious" rabbit AP model that is mathematically identifiable (i.e., not over parameterized) by combining a novel Hodgkin-Huxley formulation of INa with a phenomenological model of repolarization similar to the voltage dependent, time-independent rectifying outward potassium current (IK). The model was calibrated using the following experimental data sets measured from the same species (rabbit) under physiological conditions: dynamic current-voltage (I-V) relationships during the AP upstroke; rapid recovery of AP excitability during the relative refractory period; and steady-state INa inactivation via voltage clamp. Simulations reproduced several important "emergent" phenomena including cellular alternans at rates > 250 bpm as observed in rabbit myocytes, reentrant spiral waves as observed on the surface of the rabbit heart, and spiral wave breakup. Model variants were studied which elucidated the minimal requirements for alternans and spiral wave break up, namely the kinetics of INa inactivation and the non-linear rectification of IK.The simplicity of the model, and the fact that its parameters have physiological meaning, make it ideal for engendering generalizable mechanistic insight and should provide a solid "building-block" to generate more detailed ionic models to represent complex rabbit electrophysiology.
A bottom-up approach to the strong CP problem
NASA Astrophysics Data System (ADS)
Diaz-Cruz, J. L.; Hollik, W. G.; Saldana-Salazar, U. J.
2018-05-01
The strong CP problem is one of many puzzles in the theoretical description of elementary particle physics that still lacks an explanation. While top-down solutions to that problem usually comprise new symmetries or fields or both, we want to present a rather bottom-up perspective. The main problem seems to be how to achieve small CP violation in the strong interactions despite the large CP violation in weak interactions. In this paper, we show that with minimal assumptions on the structure of mass (Yukawa) matrices, they do not contribute to the strong CP problem and thus we can provide a pathway to a solution of the strong CP problem within the structures of the Standard Model and no extension at the electroweak scale is needed. However, to address the flavor puzzle, models based on minimal SU(3) flavor groups leading to the proposed flavor matrices are favored. Though we refrain from an explicit UV completion of the Standard Model, we provide a simple requirement for such models not to show a strong CP problem by construction.
Text-Based On-Line Conferencing: A Conceptual and Empirical Analysis Using a Minimal Prototype.
ERIC Educational Resources Information Center
McCarthy, John C.; And Others
1993-01-01
Analyzes requirements for text-based online conferencing through the use of a minimal prototype. Topics discussed include prototyping with a minimal system; text-based communication; the system as a message passer versus the system as a shared data structure; and three exercises that showed how users worked with the prototype. (Contains 61…
Cost-effective cloud computing: a case study using the comparative genomics tool, roundup.
Kudtarkar, Parul; Deluca, Todd F; Fusaro, Vincent A; Tonellato, Peter J; Wall, Dennis P
2010-12-22
Comparative genomics resources, such as ortholog detection tools and repositories are rapidly increasing in scale and complexity. Cloud computing is an emerging technological paradigm that enables researchers to dynamically build a dedicated virtual cluster and may represent a valuable alternative for large computational tools in bioinformatics. In the present manuscript, we optimize the computation of a large-scale comparative genomics resource-Roundup-using cloud computing, describe the proper operating principles required to achieve computational efficiency on the cloud, and detail important procedures for improving cost-effectiveness to ensure maximal computation at minimal costs. Utilizing the comparative genomics tool, Roundup, as a case study, we computed orthologs among 902 fully sequenced genomes on Amazon's Elastic Compute Cloud. For managing the ortholog processes, we designed a strategy to deploy the web service, Elastic MapReduce, and maximize the use of the cloud while simultaneously minimizing costs. Specifically, we created a model to estimate cloud runtime based on the size and complexity of the genomes being compared that determines in advance the optimal order of the jobs to be submitted. We computed orthologous relationships for 245,323 genome-to-genome comparisons on Amazon's computing cloud, a computation that required just over 200 hours and cost $8,000 USD, at least 40% less than expected under a strategy in which genome comparisons were submitted to the cloud randomly with respect to runtime. Our cost savings projections were based on a model that not only demonstrates the optimal strategy for deploying RSD to the cloud, but also finds the optimal cluster size to minimize waste and maximize usage. Our cost-reduction model is readily adaptable for other comparative genomics tools and potentially of significant benefit to labs seeking to take advantage of the cloud as an alternative to local computing infrastructure.
Minimally invasive surgery: national trends in adoption and future directions for hospital strategy.
Tsui, Charlotte; Klein, Rachel; Garabrant, Matthew
2013-07-01
Surgeons have rapidly adopted minimally invasive surgical (MIS) techniques for a wide range of applications since the first laparoscopic appendectomy was performed in 1983. At the helm of this MIS shift has been laparoscopy, with robotic surgery also gaining ground in a number of areas. Researchers estimated national volumes, growth forecasts, and MIS adoption rates for the following procedures: cholecystectomy, appendectomy, gastric bypass, ventral hernia repair, colectomy, prostatectomy, tubal ligation, hysterectomy, and myomectomy. MIS adoption rates are based on secondary research, interviews with clinicians and administrators involved in MIS, and a review of clinical literature, where available. Overall volume estimates and growth forecasts are sourced from The Advisory Board Company's national demand model which provides current and future utilization rate projections for inpatient and outpatient services. The model takes into account demographics (growth and aging of the population) as well as non demographic factors such as inpatient to outpatient shift, increase in disease prevalence, technological advancements, coverage expansion, and changing payment models. Surgeons perform cholecystectomy, a relatively simple procedure, laparoscopically in 96 % of the cases. Use of the robot as a tool in laparoscopy is gaining traction in general surgery and seeing particular growth within colorectal surgery. Surgeons use robotic surgery in 15 % of colectomy cases, far behind that of prostatectomy but similar to that of hysterectomy, which have robotic adoption rates of 90 and 20 %, respectively. Surgeons are using minimally invasive surgical techniques, primarily laparoscopy and robotic surgery, to perform procedures that were previously done as open surgery. As risk-based pressures mount, hospital executives will increasingly scrutinize the cost of new technology and the impact it has on patient outcomes. These changing market dynamics may thwart the expansion of new surgical techniques and heighten emphasis on competency standards.
Optimal Tuner Selection for Kalman-Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2011-01-01
An emerging approach in the field of aircraft engine controls and system health management is the inclusion of real-time, onboard models for the inflight estimation of engine performance variations. This technology, typically based on Kalman-filter concepts, enables the estimation of unmeasured engine performance parameters that can be directly utilized by controls, prognostics, and health-management applications. A challenge that complicates this practice is the fact that an aircraft engine s performance is affected by its level of degradation, generally described in terms of unmeasurable health parameters such as efficiencies and flow capacities related to each major engine module. Through Kalman-filter-based estimation techniques, the level of engine performance degradation can be estimated, given that there are at least as many sensors as health parameters to be estimated. However, in an aircraft engine, the number of sensors available is typically less than the number of health parameters, presenting an under-determined estimation problem. A common approach to address this shortcoming is to estimate a subset of the health parameters, referred to as model tuning parameters. The problem/objective is to optimally select the model tuning parameters to minimize Kalman-filterbased estimation error. A tuner selection technique has been developed that specifically addresses the under-determined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine that seeks to minimize the theoretical mean-squared estimation error of the Kalman filter. This approach can significantly reduce the error in onboard aircraft engine parameter estimation applications such as model-based diagnostic, controls, and life usage calculations. The advantage of the innovation is the significant reduction in estimation errors that it can provide relative to the conventional approach of selecting a subset of health parameters to serve as the model tuning parameter vector. Because this technique needs only to be performed during the system design process, it places no additional computation burden on the onboard Kalman filter implementation. The technique has been developed for aircraft engine onboard estimation applications, as this application typically presents an under-determined estimation problem. However, this generic technique could be applied to other industries using gas turbine engine technology.
Rule extraction from minimal neural networks for credit card screening.
Setiono, Rudy; Baesens, Bart; Mues, Christophe
2011-08-01
While feedforward neural networks have been widely accepted as effective tools for solving classification problems, the issue of finding the best network architecture remains unresolved, particularly so in real-world problem settings. We address this issue in the context of credit card screening, where it is important to not only find a neural network with good predictive performance but also one that facilitates a clear explanation of how it produces its predictions. We show that minimal neural networks with as few as one hidden unit provide good predictive accuracy, while having the added advantage of making it easier to generate concise and comprehensible classification rules for the user. To further reduce model size, a novel approach is suggested in which network connections from the input units to this hidden unit are removed by a very straightaway pruning procedure. In terms of predictive accuracy, both the minimized neural networks and the rule sets generated from them are shown to compare favorably with other neural network based classifiers. The rules generated from the minimized neural networks are concise and thus easier to validate in a real-life setting.
Sia, Sheau Fung; Zhao, Xihai; Li, Rui; Zhang, Yu; Chong, Winston; He, Le; Chen, Yu
2016-11-01
Internal carotid artery stenosis requires an accurate risk assessment for the prevention of stroke. Although the internal carotid artery area stenosis ratio at the common carotid artery bifurcation can be used as one of the diagnostic methods of internal carotid artery stenosis, the accuracy of results would still depend on the measurement techniques. The purpose of this study is to propose a novel method to estimate the effect of internal carotid artery stenosis on the blood flow based on the concept of minimization of energy loss. Eight internal carotid arteries from different medical centers were diagnosed as stenosed internal carotid arteries, as plaques were found at different locations on the vessel. A computational fluid dynamics solver was developed based on an open-source code (OpenFOAM) to test the flow ratio and energy loss of those stenosed internal carotid arteries. For comparison, a healthy internal carotid artery and an idealized internal carotid artery model have also been tested and compared with stenosed internal carotid artery in terms of flow ratio and energy loss. We found that at a given common carotid artery bifurcation, there must be a certain flow distribution in the internal carotid artery and external carotid artery, for which the total energy loss at the bifurcation is at a minimum; for a given common carotid artery flow rate, an irregular shaped plaque at the bifurcation constantly resulted in a large value of minimization of energy loss. Thus, minimization of energy loss can be used as an indicator for the estimation of internal carotid artery stenosis.
NASA Astrophysics Data System (ADS)
Dijkgraaf, Robbert; Verlinde, Herman; Verlinde, Erik
1991-03-01
We calculate correlation functions in minimal topological field theories. These twisted versions of N = 2 minimal models have recently been proposed to describe d < 1 matrix models, once coupled to topological gravity. In our calculation we make use of the Landau-Ginzburg formulation of the N = 2 models, and we find a direct relation between the Landau-Ginzburg superpotential and the KdV differential operator. Using this correspondence we show that the minimal topological models are in perfect agreement with the matrix models as solved in terms of the KdV hierarchy. This proves the equivalence at tree-level of topological and ordinary string thoery in d < 1.
Job shop scheduling model for non-identic machine with fixed delivery time to minimize tardiness
NASA Astrophysics Data System (ADS)
Kusuma, K. K.; Maruf, A.
2016-02-01
Scheduling non-identic machines problem with low utilization characteristic and fixed delivery time are frequent in manufacture industry. This paper propose a mathematical model to minimize total tardiness for non-identic machines in job shop environment. This model will be categorized as an integer linier programming model and using branch and bound algorithm as the solver method. We will use fixed delivery time as main constraint and different processing time to process a job. The result of this proposed model shows that the utilization of production machines can be increase with minimal tardiness using fixed delivery time as constraint.
The Blueprint of a Minimal Cell: MiniBacillus
Reuß, Daniel R.; Commichau, Fabian M.; Gundlach, Jan; Zhu, Bingyao
2016-01-01
SUMMARY Bacillus subtilis is one of the best-studied organisms. Due to the broad knowledge and annotation and the well-developed genetic system, this bacterium is an excellent starting point for genome minimization with the aim of constructing a minimal cell. We have analyzed the genome of B. subtilis and selected all genes that are required to allow life in complex medium at 37°C. This selection is based on the known information on essential genes and functions as well as on gene and protein expression data and gene conservation. The list presented here includes 523 and 119 genes coding for proteins and RNAs, respectively. These proteins and RNAs are required for the basic functions of life in information processing (replication and chromosome maintenance, transcription, translation, protein folding, and secretion), metabolism, cell division, and the integrity of the minimal cell. The completeness of the selected metabolic pathways, reactions, and enzymes was verified by the development of a model of metabolism of the minimal cell. A comparison of the MiniBacillus genome to the recently reported designed minimal genome of Mycoplasma mycoides JCVI-syn3.0 indicates excellent agreement in the information-processing pathways, whereas each species has a metabolism that reflects specific evolution and adaptation. The blueprint of MiniBacillus presented here serves as the starting point for a successive reduction of the B. subtilis genome. PMID:27681641
IMFIT: A FAST, FLEXIBLE NEW PROGRAM FOR ASTRONOMICAL IMAGE FITTING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erwin, Peter; Universitäts-Sternwarte München, Scheinerstrasse 1, D-81679 München
2015-02-01
I describe a new, open-source astronomical image-fitting program called IMFIT, specialized for galaxies but potentially useful for other sources, which is fast, flexible, and highly extensible. A key characteristic of the program is an object-oriented design that allows new types of image components (two-dimensional surface-brightness functions) to be easily written and added to the program. Image functions provided with IMFIT include the usual suspects for galaxy decompositions (Sérsic, exponential, Gaussian), along with Core-Sérsic and broken-exponential profiles, elliptical rings, and three components that perform line-of-sight integration through three-dimensional luminosity-density models of disks and rings seen at arbitrary inclinations. Available minimization algorithmsmore » include Levenberg-Marquardt, Nelder-Mead simplex, and Differential Evolution, allowing trade-offs between speed and decreased sensitivity to local minima in the fit landscape. Minimization can be done using the standard χ{sup 2} statistic (using either data or model values to estimate per-pixel Gaussian errors, or else user-supplied error images) or Poisson-based maximum-likelihood statistics; the latter approach is particularly appropriate for cases of Poisson data in the low-count regime. I show that fitting low-signal-to-noise ratio galaxy images using χ{sup 2} minimization and individual-pixel Gaussian uncertainties can lead to significant biases in fitted parameter values, which are avoided if a Poisson-based statistic is used; this is true even when Gaussian read noise is present.« less
High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis
Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher
2015-01-01
Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m2 exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (rmse) with as few as six control points. Rmse decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.
Measuring, modeling, and minimizing capacitances in heterojunction bipolar transistors
NASA Astrophysics Data System (ADS)
Anholt, R.; Bozada, C.; Dettmer, R.; Via, D.; Jenkins, T.; Barrette, J.; Ebel, J.; Havasy, C.; Sewell, J.; Quach, T.
1996-07-01
We demonstrate methods to separate junction and pad capacitances from on-wafer S-parameter measurements of HBTs with different areas and layouts. The measured junction capacitances are in good agreement with models, indicating that large-area devices are suitable for monitoring vendor epi-wafer doping. Measuring open HBTs does not give the correct pad capacitances. Finally, a capacitance comparison for a variety of layouts shows that bar-devices consistently give smaller base-collector values than multiple dot HBTs.
Critical research and advanced technology (CRT) support project
NASA Technical Reports Server (NTRS)
Furman, E. R.; Anderson, D. N.; Hodge, P. E.; Lowell, C. E.; Nainiger, J. J.; Schultz, D. F.
1983-01-01
A critical technology base for utility and industrial gas turbines by planning the use of coal-derived fuels was studied. Development tasks were included in the following areas: (1) Combustion - investigate the combustion of coal-derived fuels and methods to minimize the conversion of fuel-bound nitrogen to NOx; (2) materials - understand and minimize hot corrosion; (3) system studies - integrate and focus the technological efforts. A literature survey of coal-derived fuels was completed and a NOx emissions model was developed. Flametube tests of a two-stage (rich-lean) combustor defined optimum equivalence ratios for minimizing NOx emissions. Sector combustor tests demonstrated variable air control to optimize equivalence ratios over a wide load range and steam cooling of the primary zone liner. The catalytic combustion of coal-derived fuels was demonstrated. The combustion of coal-derived gases is very promising. A hot-corrosion life prediction model was formulated and verified with laboratory testing of doped fuels. Fuel additives to control sulfur corrosion were studied. The intermittent application of barium proved effective. Advanced thermal barrier coatings were developed and tested. Coating failure modes were identified and new material formulations and fabrication parameters were specified. System studies in support of the thermal barrier coating development were accomplished.
Short cell-penetrating peptides: a model of interactions with gene promoter sites.
Khavinson, V Kh; Tarnovskaya, S I; Linkova, N S; Pronyaeva, V E; Shataeva, L K; Yakutseni, P P
2013-01-01
Analysis of the main parameters of molecular mechanics (number of hydrogen bonds, hydrophobic and electrostatic interactions, DNA-peptide complex minimization energy) provided the data to validate the previously proposed qualitative models of peptide-DNA interactions and to evaluate their quantitative characteristics. Based on these estimations, a three-dimensional model of Lys-Glu and Ala-Glu-Asp-Gly peptide interactions with DNA sites (GCAG and ATTTC) located in the promoter zones of genes encoding CD5, IL-2, MMP2, and Tram1 signal molecules.
Modal phase measuring deflectometry
Huang, Lei; Xue, Junpeng; Gao, Bo; ...
2016-10-14
Here in this work, a model based method is applied to phase measuring deflectometry, which is named as modal phase measuring deflectometry. The height and slopes of the surface under test are represented by mathematical models and updated by optimizing the model coefficients to minimize the discrepancy between the reprojection in ray tracing and the actual measurement. The pose of the screen relative to the camera is pre-calibrated and further optimized together with the shape coefficients of the surface under test. Simulations and experiments are conducted to demonstrate the feasibility of the proposed approach.
Electromagnetic on-aircraft antenna radiation in the presence of composite plates
NASA Technical Reports Server (NTRS)
Kan, S. H-T.; Rojas, R. G.
1994-01-01
The UTD-based NEWAIR3 code is modified such that it can model modern aircraft by composite plates. One good model of conductor-backed composites is the impedance boundary condition where the composites are replaced by surfaces with complex impedances. This impedance-plate model is then used to model the composite plates in the NEWAIR3 code. In most applications, the aircraft distorts the desired radiation pattern of the antenna. However, test examples conducted in this report have shown that the undesired scattered fields are minimized if the right impedance values are chosen for the surface impedance plates.
An Algebraic Implicitization and Specialization of Minimum KL-Divergence Models
NASA Astrophysics Data System (ADS)
Dukkipati, Ambedkar; Manathara, Joel George
In this paper we study representation of KL-divergence minimization, in the cases where integer sufficient statistics exists, using tools from polynomial algebra. We show that the estimation of parametric statistical models in this case can be transformed to solving a system of polynomial equations. In particular, we also study the case of Kullback-Csisźar iteration scheme. We present implicit descriptions of these models and show that implicitization preserves specialization of prior distribution. This result leads us to a Gröbner bases method to compute an implicit representation of minimum KL-divergence models.
Description of bioremediation of soils using the model of a multistep system of microorganisms
NASA Astrophysics Data System (ADS)
Lubysheva, A. I.; Potashev, K. A.; Sofinskaya, O. A.
2018-01-01
The paper deals with the development of a mathematical model describing the interaction of a multi-step system of microorganisms in soil polluted with oil products. Each step in this system uses products of vital activity of the previous step to feed. Six different models of the multi-step system are considered. The equipping of the models with coefficients was carried out from the condition of minimizing the residual of the calculated and experimental data using an original algorithm based on the Levenberg-Marquardt method in combination with the Monte Carlo method for the initial approximation finding.
Optimizing global liver function in radiation therapy treatment planning
NASA Astrophysics Data System (ADS)
Wu, Victor W.; Epelman, Marina A.; Wang, Hesheng; Romeijn, H. Edwin; Feng, Mary; Cao, Yue; Ten Haken, Randall K.; Matuszak, Martha M.
2016-09-01
Liver stereotactic body radiation therapy (SBRT) patients differ in both pre-treatment liver function (e.g. due to degree of cirrhosis and/or prior treatment) and radiosensitivity, leading to high variability in potential liver toxicity with similar doses. This work investigates three treatment planning optimization models that minimize risk of toxicity: two consider both voxel-based pre-treatment liver function and local-function-based radiosensitivity with dose; one considers only dose. Each model optimizes different objective functions (varying in complexity of capturing the influence of dose on liver function) subject to the same dose constraints and are tested on 2D synthesized and 3D clinical cases. The normal-liver-based objective functions are the linearized equivalent uniform dose (\\ell \\text{EUD} ) (conventional ‘\\ell \\text{EUD} model’), the so-called perfusion-weighted \\ell \\text{EUD} (\\text{fEUD} ) (proposed ‘fEUD model’), and post-treatment global liver function (GLF) (proposed ‘GLF model’), predicted by a new liver-perfusion-based dose-response model. The resulting \\ell \\text{EUD} , fEUD, and GLF plans delivering the same target \\ell \\text{EUD} are compared with respect to their post-treatment function and various dose-based metrics. Voxel-based portal venous liver perfusion, used as a measure of local function, is computed using DCE-MRI. In cases used in our experiments, the GLF plan preserves up to 4.6 % ≤ft(7.5 % \\right) more liver function than the fEUD (\\ell \\text{EUD} ) plan does in 2D cases, and up to 4.5 % ≤ft(5.6 % \\right) in 3D cases. The GLF and fEUD plans worsen in \\ell \\text{EUD} of functional liver on average by 1.0 Gy and 0.5 Gy in 2D and 3D cases, respectively. Liver perfusion information can be used during treatment planning to minimize the risk of toxicity by improving expected GLF; the degree of benefit varies with perfusion pattern. Although fEUD model optimization is computationally inexpensive and often achieves better GLF than \\ell \\text{EUD} model optimization does, the GLF model directly optimizes a more clinically relevant metric and can further improve fEUD plan quality.
Fuzzy multi-objective chance-constrained programming model for hazardous materials transportation
NASA Astrophysics Data System (ADS)
Du, Jiaoman; Yu, Lean; Li, Xiang
2016-04-01
Hazardous materials transportation is an important and hot issue of public safety. Based on the shortest path model, this paper presents a fuzzy multi-objective programming model that minimizes the transportation risk to life, travel time and fuel consumption. First, we present the risk model, travel time model and fuel consumption model. Furthermore, we formulate a chance-constrained programming model within the framework of credibility theory, in which the lengths of arcs in the transportation network are assumed to be fuzzy variables. A hybrid intelligent algorithm integrating fuzzy simulation and genetic algorithm is designed for finding a satisfactory solution. Finally, some numerical examples are given to demonstrate the efficiency of the proposed model and algorithm.
Minimal wave speed for a class of non-cooperative reaction-diffusion systems of three equations
NASA Astrophysics Data System (ADS)
Zhang, Tianran
2017-05-01
In this paper, we study the traveling wave solutions and minimal wave speed for a class of non-cooperative reaction-diffusion systems consisting of three equations. Based on the eigenvalues, a pair of upper-lower solutions connecting only the invasion-free equilibrium are constructed and the Schauder's fixed-point theorem is applied to show the existence of traveling semi-fronts for an auxiliary system. Then the existence of traveling semi-fronts of original system is obtained by limit arguments. The traveling semi-fronts are proved to connect another equilibrium if natural birth and death rates are not considered and to be persistent if these rates are incorporated. Then non-existence of bounded traveling semi-fronts is obtained by two-sided Laplace transform. Then the above results are applied to some disease-transmission models and a predator-prey model.
NLEAP/GIS approach for identifying and mitigating regional nitrate-nitrogen leaching
Shaffer, M.J.; Hall, M.D.; Wylie, B.K.; Wagner, D.G.; Corwin, D.L.; Loague, K.
1996-01-01
Improved simulation-based methodology is needed to help identify broad geographical areas where potential NO3-N leaching may be occurring from agriculture and suggest management alternatives that minimize the problem. The Nitrate Leaching and Economic Analysis Package (NLEAP) model was applied to estimate regional NO3-N leaching in eastern Colorado. Results show that a combined NLEAP/GIS technology can be used to identify potential NO3-N hot spots in shallow alluvial aquifers under irrigated agriculture. The NLEAP NO3-N Leached (NL) index provided the most promising single index followed by NO3-N Available for Leaching (NAL). The same combined technology also shows promise in identifying Best Management Practice (BMP) methods that help minimize NO3-N leaching in vulnerable areas. Future plans call for linkage of the NLEAP/GIS procedures with groundwater modeling to establish a mechanistic analysis of agriculture-aquifer interactions at a regional scale.
Generalized One-Band Model Based on Zhang-Rice Singlets for Tetragonal CuO.
Hamad, I J; Manuel, L O; Aligia, A A
2018-04-27
Tetragonal CuO (T-CuO) has attracted attention because of its structure similar to that of the cuprates. It has been recently proposed as a compound whose study can give an end to the long debate about the proper microscopic modeling for cuprates. In this work, we rigorously derive an effective one-band generalized t-J model for T-CuO, based on orthogonalized Zhang-Rice singlets, and make an estimative calculation of its parameters, based on previous ab initio calculations. By means of the self-consistent Born approximation, we then evaluate the spectral function and the quasiparticle dispersion for a single hole doped in antiferromagnetically ordered half filled T-CuO. Our predictions show very good agreement with angle-resolved photoemission spectra and with theoretical multiband results. We conclude that a generalized t-J model remains the minimal Hamiltonian for a correct description of single-hole dynamics in cuprates.
Extremum Seeking Control of Smart Inverters for VAR Compensation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Daniel; Negrete-Pincetic, Matias; Stewart, Emma
2015-09-04
Reactive power compensation is used by utilities to ensure customer voltages are within pre-defined tolerances and reduce system resistive losses. While much attention has been paid to model-based control algorithms for reactive power support and Volt Var Optimization (VVO), these strategies typically require relatively large communications capabilities and accurate models. In this work, a non-model-based control strategy for smart inverters is considered for VAR compensation. An Extremum Seeking control algorithm is applied to modulate the reactive power output of inverters based on real power information from the feeder substation, without an explicit feeder model. Simulation results using utility demand informationmore » confirm the ability of the control algorithm to inject VARs to minimize feeder head real power consumption. In addition, we show that the algorithm is capable of improving feeder voltage profiles and reducing reactive power supplied by the distribution substation.« less
Generalized One-Band Model Based on Zhang-Rice Singlets for Tetragonal CuO
NASA Astrophysics Data System (ADS)
Hamad, I. J.; Manuel, L. O.; Aligia, A. A.
2018-04-01
Tetragonal CuO (T-CuO) has attracted attention because of its structure similar to that of the cuprates. It has been recently proposed as a compound whose study can give an end to the long debate about the proper microscopic modeling for cuprates. In this work, we rigorously derive an effective one-band generalized t -J model for T-CuO, based on orthogonalized Zhang-Rice singlets, and make an estimative calculation of its parameters, based on previous ab initio calculations. By means of the self-consistent Born approximation, we then evaluate the spectral function and the quasiparticle dispersion for a single hole doped in antiferromagnetically ordered half filled T-CuO. Our predictions show very good agreement with angle-resolved photoemission spectra and with theoretical multiband results. We conclude that a generalized t -J model remains the minimal Hamiltonian for a correct description of single-hole dynamics in cuprates.
Tawhai, Merryn H.; Clark, Alys R.; Burrowes, Kelly S.
2011-01-01
Biophysically-based computational models provide a tool for integrating and explaining experimental data, observations, and hypotheses. Computational models of the pulmonary circulation have evolved from minimal and efficient constructs that have been used to study individual mechanisms that contribute to lung perfusion, to sophisticated multi-scale and -physics structure-based models that predict integrated structure-function relationships within a heterogeneous organ. This review considers the utility of computational models in providing new insights into the function of the pulmonary circulation, and their application in clinically motivated studies. We review mathematical and computational models of the pulmonary circulation based on their application; we begin with models that seek to answer questions in basic science and physiology and progress to models that aim to have clinical application. In looking forward, we discuss the relative merits and clinical relevance of computational models: what important features are still lacking; and how these models may ultimately be applied to further increasing our understanding of the mechanisms occurring in disease of the pulmonary circulation. PMID:22034608
Maes, Wouter H; Heuvelmans, Griet; Muys, Bart
2009-10-01
Although the importance of green (evaporative) water flows in delivering ecosystem services has been recognized, most operational impact assessment methods still focus only on blue water flows. In this paper, we present a new model to evaluate the effect of land use occupation and transformation on water quantity. Conceptually based on the supply of ecosystem services by terrestrial and aquatic ecosystems, the model is developed for, but not limited to, land use impact assessment in life cycle assessment (LCA) and requires a minimum amount of input data. Impact is minimal when evapotranspiration is equal to that of the potential natural vegetation, and maximal when evapotranspiration is zero or when it exceeds a threshold value derived from the concept of environmental water requirement. Three refinements to the model, requiring more input data, are proposed. The first refinement considers a minimal impact over a certain range based on the boundary evapotranspiration of the potential natural vegetation. In the second refinement the effects of evaporation and transpiration are accounted for separately, and in the third refinement a more correct estimate of evaporation from a fully sealed surface is incorporated. The simplicity and user friendliness of the proposed impact assessment method are illustrated with two examples.
Jeong, Hyundoo; Yoon, Byung-Jun
2017-03-14
Network querying algorithms provide computational means to identify conserved network modules in large-scale biological networks that are similar to known functional modules, such as pathways or molecular complexes. Two main challenges for network querying algorithms are the high computational complexity of detecting potential isomorphism between the query and the target graphs and ensuring the biological significance of the query results. In this paper, we propose SEQUOIA, a novel network querying algorithm that effectively addresses these issues by utilizing a context-sensitive random walk (CSRW) model for network comparison and minimizing the network conductance of potential matches in the target network. The CSRW model, inspired by the pair hidden Markov model (pair-HMM) that has been widely used for sequence comparison and alignment, can accurately assess the node-to-node correspondence between different graphs by accounting for node insertions and deletions. The proposed algorithm identifies high-scoring network regions based on the CSRW scores, which are subsequently extended by maximally reducing the network conductance of the identified subnetworks. Performance assessment based on real PPI networks and known molecular complexes show that SEQUOIA outperforms existing methods and clearly enhances the biological significance of the query results. The source code and datasets can be downloaded from http://www.ece.tamu.edu/~bjyoon/SEQUOIA .
Chu, Hone-Jay; Lin, Bo-Cheng; Yu, Ming-Run; Chan, Ta-Chien
2016-12-13
Outbreaks of infectious diseases or multi-casualty incidents have the potential to generate a large number of patients. It is a challenge for the healthcare system when demand for care suddenly surges. Traditionally, valuation of heath care spatial accessibility was based on static supply and demand information. In this study, we proposed an optimal model with the three-step floating catchment area (3SFCA) to account for the supply to minimize variability in spatial accessibility. We used empirical dengue fever outbreak data in Tainan City, Taiwan in 2015 to demonstrate the dynamic change in spatial accessibility based on the epidemic trend. The x and y coordinates of dengue-infected patients with precision loss were provided publicly by the Tainan City government, and were used as our model's demand. The spatial accessibility of heath care during the dengue outbreak from August to October 2015 was analyzed spatially and temporally by producing accessibility maps, and conducting capacity change analysis. This study also utilized the particle swarm optimization (PSO) model to decrease the spatial variation in accessibility and shortage areas of healthcare resources as the epidemic went on. The proposed method in this study can help decision makers reallocate healthcare resources spatially when the ratios of demand and supply surge too quickly and form clusters in some locations.
Modeling and control for closed environment plant production systems
NASA Technical Reports Server (NTRS)
Fleisher, David H.; Ting, K. C.; Janes, H. W. (Principal Investigator)
2002-01-01
A computer program was developed to study multiple crop production and control in controlled environment plant production systems. The program simulates crop growth and development under nominal and off-nominal environments. Time-series crop models for wheat (Triticum aestivum), soybean (Glycine max), and white potato (Solanum tuberosum) are integrated with a model-based predictive controller. The controller evaluates and compensates for effects of environmental disturbances on crop production scheduling. The crop models consist of a set of nonlinear polynomial equations, six for each crop, developed using multivariate polynomial regression (MPR). Simulated data from DSSAT crop models, previously modified for crop production in controlled environments with hydroponics under elevated atmospheric carbon dioxide concentration, were used for the MPR fitting. The model-based predictive controller adjusts light intensity, air temperature, and carbon dioxide concentration set points in response to environmental perturbations. Control signals are determined from minimization of a cost function, which is based on the weighted control effort and squared-error between the system response and desired reference signal.
A global approach for using kinematic redundancy to minimize base reactions of manipulators
NASA Technical Reports Server (NTRS)
Chung, C. L.; Desa, S.
1989-01-01
An important consideration in the use of manipulators in microgravity environments is the minimization of the base reactions, i.e. the magnitude of the force and the moment exerted by the manipulator on its base as it performs its tasks. One approach which was proposed and implemented is to use the redundant degree of freedom in a kinematically redundant manipulator to plan manipulator trajectories to minimize base reactions. A global approach was developed for minimizing the magnitude of the base reactions for kinematically redundant manipulators which integrates the Partitioned Jacobian method of redundancy resolution, a 4-3-4 joint-trajectory representation and the minimization of a cost function which is the time-integral of the magnitude of the base reactions. The global approach was also compared with a local approach developed earlier for the case of point-to-point motion of a three degree-of-freedom planar manipulator with one redundant degree-of-freedom. The results show that the global approach is more effective in reducing and smoothing the base force while the local approach is superior in reducing the base moment.
An Event-Based Approach to Distributed Diagnosis of Continuous Systems
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Roychoudhurry, Indranil; Biswas, Gautam; Koutsoukos, Xenofon
2010-01-01
Distributed fault diagnosis solutions are becoming necessary due to the complexity of modern engineering systems, and the advent of smart sensors and computing elements. This paper presents a novel event-based approach for distributed diagnosis of abrupt parametric faults in continuous systems, based on a qualitative abstraction of measurement deviations from the nominal behavior. We systematically derive dynamic fault signatures expressed as event-based fault models. We develop a distributed diagnoser design algorithm that uses these models for designing local event-based diagnosers based on global diagnosability analysis. The local diagnosers each generate globally correct diagnosis results locally, without a centralized coordinator, and by communicating a minimal number of measurements between themselves. The proposed approach is applied to a multi-tank system, and results demonstrate a marked improvement in scalability compared to a centralized approach.
Experimental Research on the Dense CFB's Riser and the Simulation Based on the EMMS Model
NASA Astrophysics Data System (ADS)
Wang, X. Y.; Wang, S. D.; Fan, B. G.; Liao, L. L.; Jiang, F.; Xu, X.; Wu, X. Z.; Xiao, Y. H.
2010-03-01
The flow structure in the CFB (circulating fluidized bed) riser has been investigated. Experimental studies were performed in a cold square section unit with 270 mm×270 mm×10 m. Since the drag force model based on homogeneous two-phase flow such as the Gidaspow drag model could not depict the heterogeneous structures of the gas-solid flow, the structure-dependent energy-minimization multi-scale (EMMS) model based on the heterogenerity was applied in the paper and a revised drag force model based on the EMMS model was proposed. A 2D two-fluid model was used to simulate a bench-scale square cross-section riser of a cold CFB. The typical core-annulus structure and the back-mixing near the wall of the riser were observed and the assembly and fragmentation processes of clusters were captured. By comparing with the Gidaspow drag model, the results obtained by the revised drag model based on EMMS shows better consistency with the experimental data. The model can also depict the difference from the two exit configurations. This study once again proves the key role of drag force in CFD (Computational Fluid Dynamics) simulation and also shows the availability of the revised drag model to describe the gas-solid flow in CFB risers.
Extensions of D-optimal Minimal Designs for Symmetric Mixture Models.
Li, Yanyan; Raghavarao, Damaraju; Chervoneva, Inna
2017-01-01
The purpose of mixture experiments is to explore the optimum blends of mixture components, which will provide desirable response characteristics in finished products. D-optimal minimal designs have been considered for a variety of mixture models, including Scheffé's linear, quadratic, and cubic models. Usually, these D-optimal designs are minimally supported since they have just as many design points as the number of parameters. Thus, they lack the degrees of freedom to perform the Lack of Fit tests. Also, the majority of the design points in D-optimal minimal designs are on the boundary: vertices, edges, or faces of the design simplex. Also a new strategy for adding multiple interior points for symmetric mixture models is proposed. We compare the proposed designs with Cornell (1986) two ten-point designs for the Lack of Fit test by simulations.
Non-minimal gravitational reheating during kination
NASA Astrophysics Data System (ADS)
Dimopoulos, Konstantinos; Markkanen, Tommi
2018-06-01
A new mechanism is presented which can reheat the Universe in non-oscillatory models of inflation, where the inflation period is followed by a period dominated by the kinetic density for the inflaton field (kination). The mechanism considers an auxiliary field non-minimally coupled to gravity. The auxiliary field is a spectator during inflation, rendered heavy by the non-minimal coupling to gravity. During kination however, the non-minimal coupling generates a tachyonic mass, which displaces the field, until its bare mass becomes important, leading to coherent oscillations. Then, the field decays into the radiation bath of the hot big bang. The model is generic and predictive, in that the resulting reheating temperature is a function only of the model parameters (masses and couplings) and not of initial conditions. It is shown that reheating can be very efficient also when considering only the Standard Model.
Predictive Cache Modeling and Analysis
2011-11-01
metaheuristic /bin-packing algorithm to optimize task placement based on task communication characterization. Our previous work on task allocation showed...Cache Miss Minimization Technology To efficiently explore combinations and discover nearly-optimal task-assignment algorithms , we extended to our...it was possible to use our algorithmic techniques to decrease network bandwidth consumption by ~25%. In this effort, we adapted these existing
ERIC Educational Resources Information Center
Moran, Thomas Eugene; Taliaferro, Andrea R.; Pate, Joshua R.
2014-01-01
Community-based physical activity programs for people with disabilities have barriers that are unique to their program leader qualifications and the population they serve. Moran and Block (2010) argued that there is a need for practical strategies that are easy for communities to implement, maximize resources, and minimize the impact of barriers…
ERIC Educational Resources Information Center
Bakhsh, Muhammad; Mahmood, Amjad; Sangi, Nazir Ahmed
2018-01-01
It is important for distance learning institutions to be well prepared before designing and implementing any new technology based learning system to justify the investment and minimize failure risk. It can be achieved by systematically assessing the readiness of all stakeholders. This paper first proposes an m-readiness assessment process and…
Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Younes, Mohammed Y
2016-09-01
Solid waste prediction is crucial for sustainable solid waste management. The collection of accurate waste data records is challenging in developing countries. Solid waste generation is usually correlated with economic, demographic and social factors. However, these factors are not constant due to population and economic growth. The objective of this research is to minimize the land requirements for solid waste disposal for implementation of the Malaysian vision of waste disposal options. This goal has been previously achieved by integrating the solid waste forecasting model, waste composition and the Malaysian vision. The modified adaptive neural fuzzy inference system (MANFIS) was employed to develop a solid waste prediction model and search for the optimum input factors. The performance of the model was evaluated using the root mean square error (RMSE) and the coefficient of determination (R(2)). The model validation results are as follows: RMSE for training=0.2678, RMSE for testing=3.9860 and R(2)=0.99. Implementation of the Malaysian vision for waste disposal options can minimize the land requirements for waste disposal by up to 43%. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sun, Biao; Zhao, Wenfeng; Zhu, Xinshan
2017-06-01
Objective. Data compression is crucial for resource-constrained wireless neural recording applications with limited data bandwidth, and compressed sensing (CS) theory has successfully demonstrated its potential in neural recording applications. In this paper, an analytical, training-free CS recovery method, termed group weighted analysis {{\\ell}1} -minimization (GWALM), is proposed for wireless neural recording. Approach. The GWALM method consists of three parts: (1) the analysis model is adopted to enforce sparsity of the neural signals, therefore overcoming the drawbacks of conventional synthesis models and enhancing the recovery performance. (2) A multi-fractional-order difference matrix is constructed as the analysis operator, thus avoiding the dictionary learning procedure and reducing the need for previously acquired data and computational complexities. (3) By exploiting the statistical properties of the analysis coefficients, a group weighting approach is developed to enhance the performance of analysis {{\\ell}1} -minimization. Main results. Experimental results on synthetic and real datasets reveal that the proposed approach outperforms state-of-the-art CS-based methods in terms of both spike recovery quality and classification accuracy. Significance. Energy and area efficiency of the GWALM make it an ideal candidate for resource-constrained, large scale wireless neural recording applications. The training-free feature of the GWALM further improves its robustness to spike shape variation, thus making it more practical for long term wireless neural recording.
Sun, Biao; Zhao, Wenfeng; Zhu, Xinshan
2017-06-01
Data compression is crucial for resource-constrained wireless neural recording applications with limited data bandwidth, and compressed sensing (CS) theory has successfully demonstrated its potential in neural recording applications. In this paper, an analytical, training-free CS recovery method, termed group weighted analysis [Formula: see text]-minimization (GWALM), is proposed for wireless neural recording. The GWALM method consists of three parts: (1) the analysis model is adopted to enforce sparsity of the neural signals, therefore overcoming the drawbacks of conventional synthesis models and enhancing the recovery performance. (2) A multi-fractional-order difference matrix is constructed as the analysis operator, thus avoiding the dictionary learning procedure and reducing the need for previously acquired data and computational complexities. (3) By exploiting the statistical properties of the analysis coefficients, a group weighting approach is developed to enhance the performance of analysis [Formula: see text]-minimization. Experimental results on synthetic and real datasets reveal that the proposed approach outperforms state-of-the-art CS-based methods in terms of both spike recovery quality and classification accuracy. Energy and area efficiency of the GWALM make it an ideal candidate for resource-constrained, large scale wireless neural recording applications. The training-free feature of the GWALM further improves its robustness to spike shape variation, thus making it more practical for long term wireless neural recording.
Uzun, Harun; Yıldız, Zeynep; Goldfarb, Jillian L; Ceylan, Selim
2017-06-01
As biomass becomes more integrated into our energy feedstocks, the ability to predict its combustion enthalpies from routine data such as carbon, ash, and moisture content enables rapid decisions about utilization. The present work constructs a novel artificial neural network model with a 3-3-1 tangent sigmoid architecture to predict biomasses' higher heating values from only their proximate analyses, requiring minimal specificity as compared to models based on elemental composition. The model presented has a considerably higher correlation coefficient (0.963) and lower root mean square (0.375), mean absolute (0.328), and mean bias errors (0.010) than other models presented in the literature which, at least when applied to the present data set, tend to under-predict the combustion enthalpy. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Gambino, James; Tarver, Craig; Springer, H. Keo; White, Bradley; Fried, Laurence
2017-06-01
We present a novel method for optimizing parameters of the Ignition and Growth reactive flow (I&G) model for high explosives. The I&G model can yield accurate predictions of experimental observations. However, calibrating the model is a time-consuming task especially with multiple experiments. In this study, we couple the differential evolution global optimization algorithm to simulations of shock initiation experiments in the multi-physics code ALE3D. We develop parameter sets for HMX based explosives LX-07 and LX-10. The optimization finds the I&G model parameters that globally minimize the difference between calculated and experimental shock time of arrival at embedded pressure gauges. This work was performed under the auspices of the U.S. DOE by LLNL under contract DE-AC52-07NA27344. LLNS, LLC LLNL-ABS- 724898.
A New Perspective on Modeling Groundwater-Driven Health Risk With Subjective Information
NASA Astrophysics Data System (ADS)
Ozbek, M. M.
2003-12-01
Fuzzy rule-based systems provide an efficient environment for the modeling of expert information in the context of risk management for groundwater contamination problems. In general, their use in the form of conditional pieces of knowledge, has been either as a tool for synthesizing control laws from data (i.e., conjunction-based models), or in a knowledge representation and reasoning perspective in Artificial Intelligence (i.e., implication-based models), where only the latter may lead to coherence problems (e.g., input data that leads to logical inconsistency when added to the knowledge base). We implement a two-fold extension to an implication-based groundwater risk model (Ozbek and Pinder, 2002) including: 1) the implementation of sufficient conditions for a coherent knowledge base, and 2) the interpolation of expert statements to supplement gaps in knowledge. The original model assumes statements of public health professionals for the characterization of the exposed individual and the relation of dose and pattern of exposure to its carcinogenic effects. We demonstrate the utility of the extended model in that it: 1)identifies inconsistent statements and establishes coherence in the knowledge base, and 2) minimizes the burden of knowledge elicitation from the experts for utilizing existing knowledge in an optimal fashion.ÿÿ
Translational PK/PD of Anti-Infective Therapeutics
Rathi, Chetan; Lee, Richard E.; Meibohm, Bernd
2016-01-01
Translational PK/PD modeling has emerged as a critical technique for quantitative analysis of the relationship between dose, exposure and response of antibiotics. By combining model components for pharmacokinetics, bacterial growth kinetics and concentration-dependent drug effects, these models are able to quantitatively capture and simulate the complex interplay between antibiotic, bacterium and host organism. Fine-tuning of these basic model structures allows to further account for complicating factors such as resistance development, combination therapy, or host responses. With this tool set at hand, mechanism-based PK/PD modeling and simulation allows to develop optimal dosing regimens for novel and established antibiotics for maximum efficacy and minimal resistance development. PMID:27978987
KoBaMIN: a knowledge-based minimization web server for protein structure refinement.
Rodrigues, João P G L M; Levitt, Michael; Chopra, Gaurav
2012-07-01
The KoBaMIN web server provides an online interface to a simple, consistent and computationally efficient protein structure refinement protocol based on minimization of a knowledge-based potential of mean force. The server can be used to refine either a single protein structure or an ensemble of proteins starting from their unrefined coordinates in PDB format. The refinement method is particularly fast and accurate due to the underlying knowledge-based potential derived from structures deposited in the PDB; as such, the energy function implicitly includes the effects of solvent and the crystal environment. Our server allows for an optional but recommended step that optimizes stereochemistry using the MESHI software. The KoBaMIN server also allows comparison of the refined structures with a provided reference structure to assess the changes brought about by the refinement protocol. The performance of KoBaMIN has been benchmarked widely on a large set of decoys, all models generated at the seventh worldwide experiments on critical assessment of techniques for protein structure prediction (CASP7) and it was also shown to produce top-ranking predictions in the refinement category at both CASP8 and CASP9, yielding consistently good results across a broad range of model quality values. The web server is fully functional and freely available at http://csb.stanford.edu/kobamin.
A particle swarm-based algorithm for optimization of multi-layered and graded dental ceramics.
Askari, Ehsan; Flores, Paulo; Silva, Filipe
2018-01-01
The thermal residual stresses (TRSs) generated owing to the cooling down from the processing temperature in layered ceramic systems can lead to crack formation as well as influence the bending stress distribution and the strength of the structure. The purpose of this study is to minimize the thermal residual and bending stresses in dental ceramics to enhance their strength as well as to prevent the structure failure. Analytical parametric models are developed to evaluate thermal residual stresses in zirconia-porcelain multi-layered and graded discs and to simulate the piston-on-ring test. To identify optimal designs of zirconia-based dental restorations, a particle swarm optimizer is also developed. The thickness of each interlayer and compositional distribution are referred to as design variables. The effect of layers number constituting the interlayer between two based materials on the performance of graded prosthetic systems is also investigated. The developed methodology is validated against results available in literature and a finite element model constructed in the present study. Three different cases are considered to determine the optimal design of graded prosthesis based on minimizing (a) TRSs; (b) bending stresses; and (c) both TRS and bending stresses. It is demonstrated that each layer thickness and composition profile have important contributions into the resulting stress field and magnitude. Copyright © 2017 Elsevier Ltd. All rights reserved.
A New Model for Solving Time-Cost-Quality Trade-Off Problems in Construction
Fu, Fang; Zhang, Tao
2016-01-01
A poor quality affects project makespan and its total costs negatively, but it can be recovered by repair works during construction. We construct a new non-linear programming model based on the classic multi-mode resource constrained project scheduling problem considering repair works. In order to obtain satisfactory quality without a high increase of project cost, the objective is to minimize total quality cost which consists of the prevention cost and failure cost according to Quality-Cost Analysis. A binary dependent normal distribution function is adopted to describe the activity quality; Cumulative quality is defined to determine whether to initiate repair works, according to the different relationships among activity qualities, namely, the coordinative and precedence relationship. Furthermore, a shuffled frog-leaping algorithm is developed to solve this discrete trade-off problem based on an adaptive serial schedule generation scheme and adjusted activity list. In the program of the algorithm, the frog-leaping progress combines the crossover operator of genetic algorithm and a permutation-based local search. Finally, an example of a construction project for a framed railway overpass is provided to examine the algorithm performance, and it assist in decision making to search for the appropriate makespan and quality threshold with minimal cost. PMID:27911939
Numerical simulation of residual stress in laser based additive manufacturing process
NASA Astrophysics Data System (ADS)
Kalyan Panda, Bibhu; Sahoo, Seshadev
2018-03-01
Minimizing the residual stress build-up in metal-based additive manufacturing plays a pivotal role in selecting a particular material and technique for making an industrial part. In beam-based additive manufacturing, although a great deal of effort has been made to minimize the residual stresses, it is still elusive how to do so by simply optimizing the processing parameters, such as beam size, beam power, and scan speed. Amid different types of additive manufacturing processes, Direct Metal Laser Sintering (DMLS) process uses a high-power laser to melt and sinter layers of metal powder. The rapid solidification and heat transfer on powder bed endows a high cooling rate which leads to the build-up of residual stresses, that will affect the mechanical properties of the build parts. In the present work, the authors develop a numerical thermo-mechanical model for the measurement of residual stress in the AlSi10Mg build samples by using finite element method. Transient temperature distribution in the powder bed was assessed using the coupled thermal to structural model. Subsequently, the residual stresses were estimated with varying laser power. From the simulation result, it found that the melt pool dimensions increase with increasing the laser power and the magnitude of residual stresses in the built part increases.
NASA Astrophysics Data System (ADS)
Ghanbari Mardasi, Amir; Ghanbari, Mahmood; Salmani Tehrani, Mehdi
2014-09-01
Although recently Minimal Invasive Robotic Surgery (MIRS) has been more addressed because of its wide range of benefits, however there are still some limitations in this regard. In order to address the shortcomings of MIRS systems, various types of tactile sensors with different sensing principles have been presented in the last few years. In the present paper a MEMS-based optical sensor, which has been recently proposed by researchers, is investigated using numerical simulation. By this type of sensors real time quantification of both dynamic and statics contact forces between the tissue and surgical instrument would be possible. The presented sensor has one moving part and works based on the intensity modulation principle of optical fibers. It is electrically-passive, MRI-compatible and it is possible to be fabricated using available standard micro fabrication techniques. The behavior of the sensor has been simulated using COMSOL MULTIPHYSICS 3.5 software. Stress analysis is conducted on the sensor to assess the deflection of the moving part of the sensor due to applied force. The optical simulation is then conducted to estimate the power loss due to the moving part deflection. Using FEM modeling, the relation between force and deflection is derived which is necessary for the calibration of the sensor.
Lee, J; Scheraga, H A; Rackovsky, S
1996-01-01
The lateral packing of a collagen-like molecule, CH3CO-(Gly-L-Pro-L-Pro)4-NHCH3, has been examined by energy minimization with the ECEPP/3 force field. Two current packing models, the Smith collagen microfibril twisted equilateral pentagonal model and the quasi-hexagonal packing model, have been extensively investigated. In treating the Smith microfibril model, energy minimization was carried out on various conformations including those with the symmetry of equivalent packing, i.e., in which the triple helices were arranged equivalently with respect to each other. Both models are based on the experimental observation of the characteristic axial periodicity, D = 67 nm, of light and dark bands, indicating that, if any superstructure exists, it should consist of five triple helices. The quasi-hexagonal packing structure is found to be energetically more favorable than the Smith microfibril model by as much as 31.2 kcal/mol of five triple helices. This is because the quasi-hexagonal packing geometry provides more nonbonded interaction possibilities between triple helices than does the Smith microfibril geometry. Our results are consistent with recent x-ray studies with synthetic collagen-like molecules and rat tail tendon, in which the data were interpreted as being consistent with either a quasi-hexagonal or a square-triangular structure.
A minimal titration model of the mammalian dynamical heat shock response
NASA Astrophysics Data System (ADS)
Sivéry, Aude; Courtade, Emmanuel; Thommen, Quentin
2016-12-01
Environmental stress, such as oxidative or heat stress, induces the activation of the heat shock response (HSR) and leads to an increase in the heat shock proteins (HSPs) level. These HSPs act as molecular chaperones to maintain cellular proteostasis. Controlled by highly intricate regulatory mechanisms, having stress-induced activation and feedback regulations with multiple partners, the HSR is still incompletely understood. In this context, we propose a minimal molecular model for the gene regulatory network of the HSR that reproduces quantitatively different heat shock experiments both on heat shock factor 1 (HSF1) and HSPs activities. This model, which is based on chemical kinetics laws, is kept with a low dimensionality without altering the biological interpretation of the model dynamics. This simplistic model highlights the titration of HSF1 by chaperones as the guiding line of the network. Moreover, by a steady states analysis of the network, three different temperature stress regimes appear: normal, acute, and chronic, where normal stress corresponds to pseudo thermal adaption. The protein triage that governs the fate of damaged proteins or the different stress regimes are consequences of the titration mechanism. The simplicity of the present model is of interest in order to study detailed modelling of cross regulation between the HSR and other major genetic networks like the cell cycle or the circadian clock.
Corrêa, Elizabeth Nappi; Retondario, Anabelle; Alves, Mariane de Almeida; Bricarello, Liliana Paula; Rockenbach, Gabriele; Hinnig, Patrícia de Fragas; Neves, Janaina das; Vasconcelos, Francisco de Assis Guedes de
2018-03-29
Access to food retailers is an environmental determinant that influences what people consume. This study aimed to test the association between the use of food outlets and schoolchildren's intake of minimally processed and ultra-processed foods. This was a cross-sectional study conducted in public and private schools in Florianópolis, state of Santa Catarina, southern Brazil, from September 2012 to June 2013. The sample consisted of randomly selected clusters of schoolchildren aged 7 to 14 years, who were attending 30 schools. Parents or guardians provided socioeconomic and demographic data and answered questions about use of food outlets. Dietary intake was surveyed using a dietary recall questionnaire based on the previous day's intake. The foods or food groups were classified according to the level of processing. Negative binomial regression was used for data analysis. We included 2,195 schoolchildren in the study. We found that buying foods from snack bars or fast-food outlets was associated with the intake frequency of ultra-processed foods among 11-14 years old in an adjusted model (incidence rate ratio, IRR: 1.11; 95% confidence interval, CI: 1.01;1.23). Use of butchers was associated with the intake frequency of unprocessed/minimally processed foods among children 11-14 years old in the crude model (IRR: 1.11; 95% CI: 1.01;1.22) and in the adjusted model (IRR: 1.11; 95% CI: 1.06;1.17). Use of butchers was associated with higher intake of unprocessed/minimally processed foods while use of snack bars or fast-food outlets may have a negative impact on schoolchildren's dietary habits.
Data splitting for artificial neural networks using SOM-based stratified sampling.
May, R J; Maier, H R; Dandy, G C
2010-03-01
Data splitting is an important consideration during artificial neural network (ANN) development where hold-out cross-validation is commonly employed to ensure generalization. Even for a moderate sample size, the sampling methodology used for data splitting can have a significant effect on the quality of the subsets used for training, testing and validating an ANN. Poor data splitting can result in inaccurate and highly variable model performance; however, the choice of sampling methodology is rarely given due consideration by ANN modellers. Increased confidence in the sampling is of paramount importance, since the hold-out sampling is generally performed only once during ANN development. This paper considers the variability in the quality of subsets that are obtained using different data splitting approaches. A novel approach to stratified sampling, based on Neyman sampling of the self-organizing map (SOM), is developed, with several guidelines identified for setting the SOM size and sample allocation in order to minimize the bias and variance in the datasets. Using an example ANN function approximation task, the SOM-based approach is evaluated in comparison to random sampling, DUPLEX, systematic stratified sampling, and trial-and-error sampling to minimize the statistical differences between data sets. Of these approaches, DUPLEX is found to provide benchmark performance with good model performance, with no variability. The results show that the SOM-based approach also reliably generates high-quality samples and can therefore be used with greater confidence than other approaches, especially in the case of non-uniform datasets, with the benefit of scalability to perform data splitting on large datasets. Copyright 2009 Elsevier Ltd. All rights reserved.
Direction of Arrival Estimation for MIMO Radar via Unitary Nuclear Norm Minimization
Wang, Xianpeng; Huang, Mengxing; Wu, Xiaoqin; Bi, Guoan
2017-01-01
In this paper, we consider the direction of arrival (DOA) estimation issue of noncircular (NC) source in multiple-input multiple-output (MIMO) radar and propose a novel unitary nuclear norm minimization (UNNM) algorithm. In the proposed method, the noncircular properties of signals are used to double the virtual array aperture, and the real-valued data are obtained by utilizing unitary transformation. Then a real-valued block sparse model is established based on a novel over-complete dictionary, and a UNNM algorithm is formulated for recovering the block-sparse matrix. In addition, the real-valued NC-MUSIC spectrum is used to design a weight matrix for reweighting the nuclear norm minimization to achieve the enhanced sparsity of solutions. Finally, the DOA is estimated by searching the non-zero blocks of the recovered matrix. Because of using the noncircular properties of signals to extend the virtual array aperture and an additional real structure to suppress the noise, the proposed method provides better performance compared with the conventional sparse recovery based algorithms. Furthermore, the proposed method can handle the case of underdetermined DOA estimation. Simulation results show the effectiveness and advantages of the proposed method. PMID:28441770
Adequacy of depression treatment among college students in the United States.
Eisenberg, Daniel; Chung, Henry
2012-01-01
There is no published evidence on the adequacy of depression care among college students and how this varies by subpopulations and provider types. We estimated the prevalence of minimally adequate treatment among students with significant past-year depressive symptoms. Data were collected via a confidential online survey of a random sample of 8488 students from 15 colleges and universities in the 2009 Healthy Minds Study. Depressive symptoms were assessed by the Patient Health Questionnaire-2, adapted to a past-year time frame. Students with probable depression were coded as having received minimally adequate depression care based on the criteria from Wang and colleagues (2005). Minimally adequate treatment was received by only 22% of depressed students. The likelihood of minimally adequate treatment was similarly low for both psychiatric medication and psychotherapy. Minimally adequate care was lower for students prescribed medication by a primary care provider as compared to a psychiatrist (P<.01). Racial/ethnic minority students were less likely to receive depression care (P<.01). Adequacy of depression care is a significant problem in the college population. Solutions will likely require greater availability of psychiatry care, better coordination between specialty and primary care using collaborative care models, and increased efforts to retain students in psychotherapy. Copyright © 2012 Elsevier Inc. All rights reserved.
Model-Based Thermal System Design Optimization for the James Webb Space Telescope
NASA Technical Reports Server (NTRS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-01-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Model-based thermal system design optimization for the James Webb Space Telescope
NASA Astrophysics Data System (ADS)
Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.
2017-10-01
Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
Automated knowledge-base refinement
NASA Technical Reports Server (NTRS)
Mooney, Raymond J.
1994-01-01
Over the last several years, we have developed several systems for automatically refining incomplete and incorrect knowledge bases. These systems are given an imperfect rule base and a set of training examples and minimally modify the knowledge base to make it consistent with the examples. One of our most recent systems, FORTE, revises first-order Horn-clause knowledge bases. This system can be viewed as automatically debugging Prolog programs based on examples of correct and incorrect I/O pairs. In fact, we have already used the system to debug simple Prolog programs written by students in a programming language course. FORTE has also been used to automatically induce and revise qualitative models of several continuous dynamic devices from qualitative behavior traces. For example, it has been used to induce and revise a qualitative model of a portion of the Reaction Control System (RCS) of the NASA Space Shuttle. By fitting a correct model of this portion of the RCS to simulated qualitative data from a faulty system, FORTE was also able to correctly diagnose simple faults in this system.
Systems Biology Perspectives on Minimal and Simpler Cells
Xavier, Joana C.; Patil, Kiran Raosaheb
2014-01-01
SUMMARY The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. PMID:25184563
Gomez-Ramirez, Jaime; Costa, Tommaso
2017-12-01
Here we investigate whether systems that minimize prediction error e.g. predictive coding, can also show creativity, or on the contrary, prediction error minimization unqualifies for the design of systems that respond in creative ways to non-recurrent problems. We argue that there is a key ingredient that has been overlooked by researchers that needs to be incorporated to understand intelligent behavior in biological and technical systems. This ingredient is boredom. We propose a mathematical model based on the Black-Scholes-Merton equation which provides mechanistic insights into the interplay between boredom and prediction pleasure as the key drivers of behavior. Copyright © 2017 Elsevier B.V. All rights reserved.
Verification of Autonomous Systems for Space Applications
NASA Technical Reports Server (NTRS)
Brat, G.; Denney, E.; Giannakopoulou, D.; Frank, J.; Jonsson, A.
2006-01-01
Autonomous software, especially if it is based on model, can play an important role in future space applications. For example, it can help streamline ground operations, or, assist in autonomous rendezvous and docking operations, or even, help recover from problems (e.g., planners can be used to explore the space of recovery actions for a power subsystem and implement a solution without (or with minimal) human intervention). In general, the exploration capabilities of model-based systems give them great flexibility. Unfortunately, it also makes them unpredictable to our human eyes, both in terms of their execution and their verification. The traditional verification techniques are inadequate for these systems since they are mostly based on testing, which implies a very limited exploration of their behavioral space. In our work, we explore how advanced V&V techniques, such as static analysis, model checking, and compositional verification, can be used to gain trust in model-based systems. We also describe how synthesis can be used in the context of system reconfiguration and in the context of verification.
Formal Semantics and Implementation of BPMN 2.0 Inclusive Gateways
NASA Astrophysics Data System (ADS)
Christiansen, David Raymond; Carbone, Marco; Hildebrandt, Thomas
We present the first direct formalization of the semantics of inclusive gateways as described in the Business Process Modeling Notation (BPMN) 2.0 Beta 1 specification. The formal semantics is given for a minimal subset of BPMN 2.0 containing just the inclusive and exclusive gateways and the start and stop events. By focusing on this subset we achieve a simple graph model that highlights the particular non-local features of the inclusive gateway semantics. We sketch two ways of implementing the semantics using algorithms based on incrementally updated data structures and also discuss distributed communication-based implementations of the two algorithms.
$L^1$ penalization of volumetric dose objectives in optimal control of PDEs
Barnard, Richard C.; Clason, Christian
2017-02-11
This work is concerned with a class of PDE-constrained optimization problems that are motivated by an application in radiotherapy treatment planning. Here the primary design objective is to minimize the volume where a functional of the state violates a prescribed level, but prescribing these levels in the form of pointwise state constraints leads to infeasible problems. We therefore propose an alternative approach based on L 1 penalization of the violation that is also applicable when state constraints are infeasible. We establish well-posedness of the corresponding optimal control problem, derive first-order optimality conditions, discuss convergence of minimizers as the penalty parametermore » tends to infinity, and present a semismooth Newton method for their efficient numerical solution. Finally, the performance of this method for a model problem is illustrated and contrasted with an alternative approach based on (regularized) state constraints.« less
Model-Based Battery Management Systems: From Theory to Practice
NASA Astrophysics Data System (ADS)
Pathak, Manan
Lithium-ion batteries are now extensively being used as the primary storage source. Capacity and power fade, and slow recharging times are key issues that restrict its use in many applications. Battery management systems are critical to address these issues, along with ensuring its safety. This dissertation focuses on exploring various control strategies using detailed physics-based electrochemical models developed previously for lithium-ion batteries, which could be used in advanced battery management systems. Optimal charging profiles for minimizing capacity fade based on SEI-layer formation are derived and the benefits of using such control strategies are shown by experimentally testing them on a 16 Ah NMC-based pouch cell. This dissertation also explores different time-discretization strategies for non-linear models, which gives an improved order of convergence for optimal control problems. Lastly, this dissertation also explores a physics-based model for predicting the linear impedance of a battery, and develops a freeware that is extremely robust and computationally fast. Such a code could be used for estimating transport, kinetic and material properties of the battery based on the linear impedance spectra.
Ebell, Mark H; Jang, Woncheol; Shen, Ye; Geocadin, Romergryko G
2013-11-11
Informing patients and providers of the likelihood of survival after in-hospital cardiac arrest (IHCA), neurologically intact or with minimal deficits, may be useful when discussing do-not-attempt-resuscitation orders. To develop a simple prearrest point score that can identify patients unlikely to survive IHCA, neurologically intact or with minimal deficits. The study included 51,240 inpatients experiencing an index episode of IHCA between January 1, 2007, and December 31, 2009, in 366 hospitals participating in the Get With the Guidelines-Resuscitation registry. Dividing data into training (44.4%), test (22.2%), and validation (33.4%) data sets, we used multivariate methods to select the best independent predictors of good neurologic outcome, created a series of candidate decision models, and used the test data set to select the model that best classified patients as having a very low (<1%), low (1%-3%), average (>3%-15%), or higher than average (>15%) likelihood of survival after in-hospital cardiopulmonary resuscitation for IHCA with good neurologic status. The final model was evaluated using the validation data set. Survival to discharge after in-hospital cardiopulmonary resuscitation for IHCA with good neurologic status (neurologically intact or with minimal deficits) based on a Cerebral Performance Category score of 1. The best performing model was a simple point score based on 13 prearrest variables. The C statistic was 0.78 when applied to the validation set. It identified the likelihood of a good outcome as very low in 9.4% of patients (good outcome in 0.9%), low in 18.9% (good outcome in 1.7%), average in 54.0% (good outcome in 9.4%), and above average in 17.7% (good outcome in 27.5%). Overall, the score can identify more than one-quarter of patients as having a low or very low likelihood of survival to discharge, neurologically intact or with minimal deficits after IHCA (good outcome in 1.4%). The Good Outcome Following Attempted Resuscitation (GO-FAR) scoring system identifies patients who are unlikely to benefit from a resuscitation attempt should they experience IHCA. This information can be used as part of a shared decision regarding do-not-attempt-resuscitation orders.
In-flight propulsion system characterization for both Mars Exploration Rover Spacecraft
NASA Technical Reports Server (NTRS)
Barber, Todd J.; Picha, Frank Q.
2004-01-01
Two Mars Exploration Rover spacecraft were dispensed to red planet in 2003, culminating in a phenomenally successful prime science mission. Twin cruise stage propulsion systems were developed in record time, largely through heritage with Mars Pathfinder. As expected, consumable usage was minimal during the short seven-month cruise for both spacecraft. Propellant usage models based on pressure and temperature agreed with throughput models with in a few percent. Trajectory correction maneuver performance was nominal, allowing the cancellation of near-Mars maneuvers. Spin thruster delivered impulse was 10-12% high vs. ground based models for the intial spin-down maneuvers, while turn performance was XX-XX% high/low vs. expectations. No clear indications for pressure transducer drift were noted during the brief MER missions.
Natural Aggregation Approach based Home Energy Manage System with User Satisfaction Modelling
NASA Astrophysics Data System (ADS)
Luo, F. J.; Ranzi, G.; Dong, Z. Y.; Murata, J.
2017-07-01
With the prevalence of advanced sensing and two-way communication technologies, Home Energy Management System (HEMS) has attracted lots of attentions in recent years. This paper proposes a HEMS that optimally schedules the controllable Residential Energy Resources (RERs) in a Time-of-Use (TOU) pricing and high solar power penetrated environment. The HEMS aims to minimize the overall operational cost of the home, and the user’s satisfactions and requirements on the operation of different household appliances are modelled and considered in the HEMS. Further, a new biological self-aggregation intelligence based optimization technique previously proposed by the authors, i.e., Natural Aggregation Algorithm (NAA), is applied to solve the proposed HEMS optimization model. Simulations are conducted to validate the proposed method.
Negotiation-based Order Lot-Sizing Approach for Two-tier Supply Chain
NASA Astrophysics Data System (ADS)
Chao, Yuan; Lin, Hao Wen; Chen, Xili; Murata, Tomohiro
This paper focuses on a negotiation based collaborative planning process for the determination of order lot-size over multi-period planning, and confined to a two-tier supply chain scenario. The aim is to study how negotiation based planning processes would be used to refine locally preferred ordering patterns, which would consequently affect the overall performance of the supply chain in terms of costs and service level. Minimal information exchanges in the form of mathematical models are suggested to represent the local preferences and used to support the negotiation processes.
Novel Driving Control of Power Assisted Wheelchair Based on Minimum Jerk Trajectory
NASA Astrophysics Data System (ADS)
Seki, Hirokazu; Sugimoto, Takeaki; Tadakuma, Susumu
This paper describes a novel trajectory control scheme for power assisted wheelchair. Human input torque patterns are always intermittent in power assisted wheelchairs, therefore, the suitable trajectories must be generated also after the human decreases his/her input torque. This paper tries to solve this significant problem based on minimum jerk model minimizing the changing rate of acceleration. The proposed control system based on minimum jerk trajectory is expected to improve the ride quality, stability and safety. Some experiments show the effectiveness of the proposed method.
Cao, Yuansheng; Gong, Zongping; Quan, H T
2015-06-01
Motivated by the recent proposed models of the information engine [Proc. Natl. Acad. Sci. USA 109, 11641 (2012)] and the information refrigerator [Phys. Rev. Lett. 111, 030602 (2013)], we propose a minimal model of the information pump and the information eraser based on enzyme kinetics. This device can either pump molecules against the chemical potential gradient by consuming the information to be encoded in the bit stream or (partially) erase the information initially encoded in the bit stream by consuming the Gibbs free energy. The dynamics of this model is solved exactly, and the "phase diagram" of the operation regimes is determined. The efficiency and the power of the information machine is analyzed. The validity of the second law of thermodynamics within our model is clarified. Our model offers a simple paradigm for the investigating of the thermodynamics of information processing involving the chemical potential in small systems.
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Poissant, Dominique; Brissette, François
2015-11-01
This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability, however this improvement was found to be minimal and was not transposed in the regionalization mode. It was shown that 11 of the HSAMI models' 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.
Foraging optimally for home ranges
Mitchell, Michael S.; Powell, Roger A.
2012-01-01
Economic models predict behavior of animals based on the presumption that natural selection has shaped behaviors important to an animal's fitness to maximize benefits over costs. Economic analyses have shown that territories of animals are structured by trade-offs between benefits gained from resources and costs of defending them. Intuitively, home ranges should be similarly structured, but trade-offs are difficult to assess because there are no costs of defense, thus economic models of home-range behavior are rare. We present economic models that predict how home ranges can be efficient with respect to spatially distributed resources, discounted for travel costs, under 2 strategies of optimization, resource maximization and area minimization. We show how constraints such as competitors can influence structure of homes ranges through resource depression, ultimately structuring density of animals within a population and their distribution on a landscape. We present simulations based on these models to show how they can be generally predictive of home-range behavior and the mechanisms that structure the spatial distribution of animals. We also show how contiguous home ranges estimated statistically from location data can be misleading for animals that optimize home ranges on landscapes with patchily distributed resources. We conclude with a summary of how we applied our models to nonterritorial black bears (Ursus americanus) living in the mountains of North Carolina, where we found their home ranges were best predicted by an area-minimization strategy constrained by intraspecific competition within a social hierarchy. Economic models can provide strong inference about home-range behavior and the resources that structure home ranges by offering falsifiable, a priori hypotheses that can be tested with field observations.
Fader, Amanda N; Xu, Tim; Dunkin, Brian J; Makary, Martin A
2016-11-01
Surgery is one of the highest priced services in health care, and complications from surgery can be serious and costly. Recently, advances in surgical techniques have allowed surgeons to perform many common operations using minimally invasive methods that result in fewer complications. Despite this, the rates of open surgery remain high across multiple surgical disciplines. This is an expert commentary and review of the contemporary literature regarding minimally invasive surgery practices nationwide, the benefits of less invasive approaches, and how minimally invasive compared with open procedures are differentially reimbursed in the United States. We explore the incentive of the current surgeon reimbursement fee schedule and its potential implications. A surgeon's preference to perform minimally invasive compared with open surgery remains highly variable in the U.S., even after adjustment for patient comorbidities and surgical complexity. Nationwide administrative claims data across several surgical disciplines demonstrates that minimally invasive surgery utilization in place of open surgery is associated with reduced adverse events and cost savings. Reducing surgical complications by increasing adoption of minimally invasive operations has significant cost implications for health care. However, current U.S. payment structures may perversely incentivize open surgery and financially reward physicians who do not necessarily embrace newer or best minimally invasive surgery practices. Utilization of minimally invasive surgery varies considerably in the U.S., representing one of the greatest disparities in health care. Existing physician payment models must translate the growing body of research in surgical care into physician-level rewards for quality, including choice of operation. Promoting safe surgery should be an important component of a strong, value-based healthcare system. Resolving the potentially perverse incentives in paying for surgical approaches may help address disparities in surgical care, reduce the prevalent problem of variation, and help contain health care costs.
Minimal string theories and integrable hierarchies
NASA Astrophysics Data System (ADS)
Iyer, Ramakrishnan
Well-defined, non-perturbative formulations of the physics of string theories in specific minimal or superminimal model backgrounds can be obtained by solving matrix models in the double scaling limit. They provide us with the first examples of completely solvable string theories. Despite being relatively simple compared to higher dimensional critical string theories, they furnish non-perturbative descriptions of interesting physical phenomena such as geometrical transitions between D-branes and fluxes, tachyon condensation and holography. The physics of these theories in the minimal model backgrounds is succinctly encoded in a non-linear differential equation known as the string equation, along with an associated hierarchy of integrable partial differential equations (PDEs). The bosonic string in (2,2m-1) conformal minimal model backgrounds and the type 0A string in (2,4 m) superconformal minimal model backgrounds have the Korteweg-de Vries system, while type 0B in (2,4m) backgrounds has the Zakharov-Shabat system. The integrable PDE hierarchy governs flows between backgrounds with different m. In this thesis, we explore this interesting connection between minimal string theories and integrable hierarchies further. We uncover the remarkable role that an infinite hierarchy of non-linear differential equations plays in organizing and connecting certain minimal string theories non-perturbatively. We are able to embed the type 0A and 0B (A,A) minimal string theories into this single framework. The string theories arise as special limits of a rich system of equations underpinned by an integrable system known as the dispersive water wave hierarchy. We find that there are several other string-like limits of the system, and conjecture that some of them are type IIA and IIB (A,D) minimal string backgrounds. We explain how these and several other string-like special points arise and are connected. In some cases, the framework endows the theories with a non-perturbative definition for the first time. Notably, we discover that the Painleve IV equation plays a key role in organizing the string theory physics, joining its siblings, Painleve I and II, whose roles have previously been identified in this minimal string context. We then present evidence that the conjectured type II theories have smooth non-perturbative solutions, connecting two perturbative asymptotic regimes, in a 't Hooft limit. Our technique also demonstrates evidence for new minimal string theories that are not apparent in a perturbative analysis.
2013-12-01
leukemia (AML) and glioblastoma ( GBM ). Our laboratory is interested in the potential of F10 for improved treatment of prostate cancer based upon...displays strong anti-cancer activity and minimal systemic toxicity in pre-clinical models of AML and GBM and that in previous studies demonstrated...of the low toxicity and strong anti-cancer activity of F10 in animal models of AML and GBM this combination is likely to be effective and well
1998-04-28
be discussed. 2.1 ECONOMIC REPLACEMENT THEORY Decisions about heavy equipment should be made based on sound economic principles , not emotions...Life) will be less than L*. The converse is also true. 2.1.3 The Repair Limit Theory A different way of looking at the economic replacement decision...Summary Three different economic models have been reviewed in this section. The output of each is distinct. One seeks to minimize costs, one seeks to
Hybrid optimal scheduling for intermittent androgen suppression of prostate cancer
NASA Astrophysics Data System (ADS)
Hirata, Yoshito; di Bernardo, Mario; Bruchovsky, Nicholas; Aihara, Kazuyuki
2010-12-01
We propose a method for achieving an optimal protocol of intermittent androgen suppression for the treatment of prostate cancer. Since the model that reproduces the dynamical behavior of the surrogate tumor marker, prostate specific antigen, is piecewise linear, we can obtain an analytical solution for the model. Based on this, we derive conditions for either stopping or delaying recurrent disease. The solution also provides a design principle for the most favorable schedule of treatment that minimizes the rate of expansion of the malignant cell population.
Directions for model building from asymptotic safety
NASA Astrophysics Data System (ADS)
Bond, Andrew D.; Hiller, Gudrun; Kowalska, Kamila; Litim, Daniel F.
2017-08-01
Building on recent advances in the understanding of gauge-Yukawa theories we explore possibilities to UV-complete the Standard Model in an asymptotically safe manner. Minimal extensions are based on a large flavor sector of additional fermions coupled to a scalar singlet matrix field. We find that asymptotic safety requires fermions in higher representations of SU(3) C × SU(2) L . Possible signatures at colliders are worked out and include R-hadron searches, diboson signatures and the evolution of the strong and weak coupling constants.
Aggarwal, Rohit; Rider, Lisa G; Ruperto, Nicolino; Bayat, Nastaran; Erman, Brian; Feldman, Brian M; Oddis, Chester V; Amato, Anthony A; Chinoy, Hector; Cooper, Robert G; Dastmalchi, Maryam; Fiorentino, David; Isenberg, David; Katz, James D; Mammen, Andrew; de Visser, Marianne; Ytterberg, Steven R; Lundberg, Ingrid E; Chung, Lorinda; Danko, Katalin; García-De la Torre, Ignacio; Song, Yeong Wook; Villa, Luca; Rinaldi, Mariangela; Rockette, Howard; Lachenbruch, Peter A; Miller, Frederick W; Vencovsky, Jiri
2017-05-01
To develop response criteria for adult dermatomyositis (DM) and polymyositis (PM). Expert surveys, logistic regression, and conjoint analysis were used to develop 287 definitions using core set measures. Myositis experts rated greater improvement among multiple pairwise scenarios in conjoint analysis surveys, where different levels of improvement in 2 core set measures were presented. The PAPRIKA (Potentially All Pairwise Rankings of All Possible Alternatives) method determined the relative weights of core set measures and conjoint analysis definitions. The performance characteristics of the definitions were evaluated on patient profiles using expert consensus (gold standard) and were validated using data from a clinical trial. The nominal group technique was used to reach consensus. Consensus was reached for a conjoint analysis-based continuous model using absolute per cent change in core set measures (physician, patient, and extramuscular global activity, muscle strength, Health Assessment Questionnaire, and muscle enzyme levels). A total improvement score (range 0-100), determined by summing scores for each core set measure, was based on improvement in and relative weight of each core set measure. Thresholds for minimal, moderate, and major improvement were ≥20, ≥40, and ≥60 points in the total improvement score. The same criteria were chosen for juvenile DM, with different improvement thresholds. Sensitivity and specificity in DM/PM patient cohorts were 85% and 92%, 90% and 96%, and 92% and 98% for minimal, moderate, and major improvement, respectively. Definitions were validated in the clinical trial analysis for differentiating the physician rating of improvement (p<0.001). The response criteria for adult DM/PM consisted of the conjoint analysis model based on absolute per cent change in 6 core set measures, with thresholds for minimal, moderate, and major improvement. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
Genetic Algorithm Based Framework for Automation of Stochastic Modeling of Multi-Season Streamflows
NASA Astrophysics Data System (ADS)
Srivastav, R. K.; Srinivasan, K.; Sudheer, K.
2009-05-01
Synthetic streamflow data generation involves the synthesis of likely streamflow patterns that are statistically indistinguishable from the observed streamflow data. The various kinds of stochastic models adopted for multi-season streamflow generation in hydrology are: i) parametric models which hypothesize the form of the periodic dependence structure and the distributional form a priori (examples are PAR, PARMA); disaggregation models that aim to preserve the correlation structure at the periodic level and the aggregated annual level; ii) Nonparametric models (examples are bootstrap/kernel based methods), which characterize the laws of chance, describing the stream flow process, without recourse to prior assumptions as to the form or structure of these laws; (k-nearest neighbor (k-NN), matched block bootstrap (MABB)); non-parametric disaggregation model. iii) Hybrid models which blend both parametric and non-parametric models advantageously to model the streamflows effectively. Despite many of these developments that have taken place in the field of stochastic modeling of streamflows over the last four decades, accurate prediction of the storage and the critical drought characteristics has been posing a persistent challenge to the stochastic modeler. This is partly because, usually, the stochastic streamflow model parameters are estimated by minimizing a statistically based objective function (such as maximum likelihood (MLE) or least squares (LS) estimation) and subsequently the efficacy of the models is being validated based on the accuracy of prediction of the estimates of the water-use characteristics, which requires large number of trial simulations and inspection of many plots and tables. Still accurate prediction of the storage and the critical drought characteristics may not be ensured. In this study a multi-objective optimization framework is proposed to find the optimal hybrid model (blend of a simple parametric model, PAR(1) model and matched block bootstrap (MABB) ) based on the explicit objective functions of minimizing the relative bias and relative root mean square error in estimating the storage capacity of the reservoir. The optimal parameter set of the hybrid model is obtained based on the search over a multi- dimensional parameter space (involving simultaneous exploration of the parametric (PAR(1)) as well as the non-parametric (MABB) components). This is achieved using the efficient evolutionary search based optimization tool namely, non-dominated sorting genetic algorithm - II (NSGA-II). This approach helps in reducing the drudgery involved in the process of manual selection of the hybrid model, in addition to predicting the basic summary statistics dependence structure, marginal distribution and water-use characteristics accurately. The proposed optimization framework is used to model the multi-season streamflows of River Beaver and River Weber of USA. In case of both the rivers, the proposed GA-based hybrid model yields a much better prediction of the storage capacity (where simultaneous exploration of both parametric and non-parametric components is done) when compared with the MLE-based hybrid models (where the hybrid model selection is done in two stages, thus probably resulting in a sub-optimal model). This framework can be further extended to include different linear/non-linear hybrid stochastic models at other temporal and spatial scales as well.
Minimal realization of right-handed gauge symmetry
NASA Astrophysics Data System (ADS)
Nomura, Takaaki; Okada, Hiroshi
2018-01-01
We propose a minimally extended gauge symmetry model with U (1 )R , where only the right-handed fermions have nonzero charges in the fermion sector. To achieve both anomaly cancellations and minimality, three right-handed neutrinos are naturally required, and the standard model Higgs has to have nonzero charge under this symmetry. Then we find that its breaking scale(Λ ) is restricted by precise measurement of neutral gauge boson in the standard model; therefore, O (10 ) TeV ≲Λ . We also discuss its testability of the new gauge boson and discrimination of U (1 )R model from U (1 )B-L one at collider physics such as LHC and ILC.
Frequency response function (FRF) based updating of a laser spot welded structure
NASA Astrophysics Data System (ADS)
Zin, M. S. Mohd; Rani, M. N. Abdul; Yunus, M. A.; Sani, M. S. M.; Wan Iskandar Mirza, W. I. I.; Mat Isa, A. A.
2018-04-01
The objective of this paper is to present frequency response function (FRF) based updating as a method for matching the finite element (FE) model of a laser spot welded structure with a physical test structure. The FE model of the welded structure was developed using CQUAD4 and CWELD element connectors, and NASTRAN was used to calculate the natural frequencies, mode shapes and FRF. Minimization of the discrepancies between the finite element and experimental FRFs was carried out using the exceptional numerical capability of NASTRAN Sol 200. The experimental work was performed under free-free boundary conditions using LMS SCADAS. Avast improvement in the finite element FRF was achieved using the frequency response function (FRF) based updating with two different objective functions proposed.
PCTDSE: A parallel Cartesian-grid-based TDSE solver for modeling laser-atom interactions
NASA Astrophysics Data System (ADS)
Fu, Yongsheng; Zeng, Jiaolong; Yuan, Jianmin
2017-01-01
We present a parallel Cartesian-grid-based time-dependent Schrödinger equation (TDSE) solver for modeling laser-atom interactions. It can simulate the single-electron dynamics of atoms in arbitrary time-dependent vector potentials. We use a split-operator method combined with fast Fourier transforms (FFT), on a three-dimensional (3D) Cartesian grid. Parallelization is realized using a 2D decomposition strategy based on the Message Passing Interface (MPI) library, which results in a good parallel scaling on modern supercomputers. We give simple applications for the hydrogen atom using the benchmark problems coming from the references and obtain repeatable results. The extensions to other laser-atom systems are straightforward with minimal modifications of the source code.
Subsite mapping of enzymes. Depolymerase computer modelling.
Allen, J D; Thoma, J A
1976-01-01
We have developed a depolymerase computer model that uses a minimization routine. The model is designed so that, given experimental bond-cleavage frequencies for oligomeric substrates and experimental Michaelis parameters as a function of substrate chain length, the optimum subsite map is generated. The minimized sum of the weighted-squared residuals of the experimental and calculated data is used as a criterion of the goodness-of-fit for the optimized subsite map. The application of the minimization procedure to subsite mapping is explored through the use of simulated data. A procedure is developed whereby the minimization model can be used to determine the number of subsites in the enzymic binding region and to locate the position of the catalytic amino acids among these subsites. The degree of propagation of experimental variance into the subsite-binding energies is estimated. The question of whether hydrolytic rate coefficients are constant or a function of the number of filled subsites is examined. PMID:999629
Congestion Pricing for Aircraft Pushback Slot Allocation.
Liu, Lihua; Zhang, Yaping; Liu, Lan; Xing, Zhiwei
2017-01-01
In order to optimize aircraft pushback management during rush hour, aircraft pushback slot allocation based on congestion pricing is explored while considering monetary compensation based on the quality of the surface operations. First, the concept of the "external cost of surface congestion" is proposed, and a quantitative study on the external cost is performed. Then, an aircraft pushback slot allocation model for minimizing the total surface cost is established. An improved discrete differential evolution algorithm is also designed. Finally, a simulation is performed on Xinzheng International Airport using the proposed model. By comparing the pushback slot control strategy based on congestion pricing with other strategies, the advantages of the proposed model and algorithm are highlighted. In addition to reducing delays and optimizing the delay distribution, the model and algorithm are better suited for use for actual aircraft pushback management during rush hour. Further, it is also observed they do not result in significant increases in the surface cost. These results confirm the effectiveness and suitability of the proposed model and algorithm.
Large eddy simulations of time-dependent and buoyancy-driven channel flows
NASA Technical Reports Server (NTRS)
Cabot, William H.
1993-01-01
The primary goal of this work has been to assess the performance of the dynamic SGS model in the large eddy simulation (LES) of channel flows in a variety of situations, viz., in temporal development of channel flow turned by a transverse pressure gradient and especially in buoyancy-driven turbulent flows such as Rayleigh-Benard and internally heated channel convection. For buoyancy-driven flows, there are additional buoyant terms that are possible in the base models, and one objective has been to determine if the dynamic SGS model results are sensitive to such terms. The ultimate goal is to determine the minimal base model needed in the dynamic SGS model to provide accurate results in flows with more complicated physical features. In addition, a program of direct numerical simulation (DNS) of fully compressible channel convection has been undertaken to determine stratification and compressibility effects. These simulations are intended to provide a comparative base for performing the LES of compressible (or highly stratified, pseudo-compressible) convection at high Reynolds number in the future.
Congestion Pricing for Aircraft Pushback Slot Allocation
Zhang, Yaping
2017-01-01
In order to optimize aircraft pushback management during rush hour, aircraft pushback slot allocation based on congestion pricing is explored while considering monetary compensation based on the quality of the surface operations. First, the concept of the “external cost of surface congestion” is proposed, and a quantitative study on the external cost is performed. Then, an aircraft pushback slot allocation model for minimizing the total surface cost is established. An improved discrete differential evolution algorithm is also designed. Finally, a simulation is performed on Xinzheng International Airport using the proposed model. By comparing the pushback slot control strategy based on congestion pricing with other strategies, the advantages of the proposed model and algorithm are highlighted. In addition to reducing delays and optimizing the delay distribution, the model and algorithm are better suited for use for actual aircraft pushback management during rush hour. Further, it is also observed they do not result in significant increases in the surface cost. These results confirm the effectiveness and suitability of the proposed model and algorithm. PMID:28114429
NASA Astrophysics Data System (ADS)
Deng, Bo; Shi, Yaoyao
2017-11-01
The tape winding technology is an effective way to fabricate rotationally composite materials. Nevertheless, some inevitable defects will seriously influence the performance of winding products. One of the crucial ways to identify the quality of fiber-reinforced composite material products is examining its void content. Significant improvement in products' mechanical properties can be achieved by minimizing the void defect. Two methods were applied in this study, finite element analysis and experimental testing, respectively, to investigate the mechanism of how void forming in composite tape winding processing. Based on the theories of interlayer intimate contact and Domain Superposition Technique (DST), a three-dimensional model of prepreg tape void with SolidWorks has been modeled in this paper. Whereafter, ABAQUS simulation software was used to simulate the void content change with pressure and temperature. Finally, a series of experiments were performed to determine the accuracy of the model-based predictions. The results showed that the model is effective for predicting the void content in the composite tape winding process.
An inverse finance problem for estimation of the volatility
NASA Astrophysics Data System (ADS)
Neisy, A.; Salmani, K.
2013-01-01
Black-Scholes model, as a base model for pricing in derivatives markets has some deficiencies, such as ignoring market jumps, and considering market volatility as a constant factor. In this article, we introduce a pricing model for European-Options under jump-diffusion underlying asset. Then, using some appropriate numerical methods we try to solve this model with integral term, and terms including derivative. Finally, considering volatility as an unknown parameter, we try to estimate it by using our proposed model. For the purpose of estimating volatility, in this article, we utilize inverse problem, in which inverse problem model is first defined, and then volatility is estimated using minimization function with Tikhonov regularization.
Møller, Jonas B; Overgaard, Rune V; Madsen, Henrik; Hansen, Torben; Pedersen, Oluf; Ingwersen, Steen H
2010-02-01
Several articles have investigated stochastic differential equations (SDEs) in PK/PD models, but few have quantitatively investigated the benefits to predictive performance of models based on real data. Estimation of first phase insulin secretion which reflects beta-cell function using models of the OGTT is a difficult problem in need of further investigation. The present work aimed at investigating the power of SDEs to predict the first phase insulin secretion (AIR (0-8)) in the IVGTT based on parameters obtained from the minimal model of the OGTT, published by Breda et al. (Diabetes 50(1):150-158, 2001). In total 174 subjects underwent both an OGTT and a tolbutamide modified IVGTT. Estimation of parameters in the oral minimal model (OMM) was performed using the FOCE-method in NONMEM VI on insulin and C-peptide measurements. The suggested SDE models were based on a continuous AR(1) process, i.e. the Ornstein-Uhlenbeck process, and the extended Kalman filter was implemented in order to estimate the parameters of the models. Inclusion of the Ornstein-Uhlenbeck (OU) process caused improved description of the variation in the data as measured by the autocorrelation function (ACF) of one-step prediction errors. A main result was that application of SDE models improved the correlation between the individual first phase indexes obtained from OGTT and AIR (0-8) (r = 0.36 to r = 0.49 and r = 0.32 to r = 0.47 with C-peptide and insulin measurements, respectively). In addition to the increased correlation also the properties of the indexes obtained using the SDE models more correctly assessed the properties of the first phase indexes obtained from the IVGTT. In general it is concluded that the presented SDE approach not only caused autocorrelation of errors to decrease but also improved estimation of clinical measures obtained from the glucose tolerance tests. Since, the estimation time of extended models was not heavily increased compared to basic models, the applied method is concluded to have high relevance not only in theory but also in practice.
Materials science research in microgravity
NASA Technical Reports Server (NTRS)
Perepezko, John H.
1992-01-01
There are several important attributes of an extended duration microgravity environment that offer a new dimension in the control of the microstructure, processing, and properties of materials. First, when gravitational effects are minimized, buoyancy driven convection flows are also minimized. The flows due to density differences, brought about either by composition or temperature gradients will then be reduced or eliminated to permit a more precise control of the temperature and the composition of a melt which is critical in achieving high quality crystal growth of electronic materials or alloy structures. Secondly, body force effects such as sedimentation, hydrostatic pressure, and deformation are similarly reduced. These effects may interfere with attempts to produce uniformly dispersed or aligned second phases during melt solidification. Thirdly, operating in a microgravity environment will facilitate the containerless processing of melts to eliminate the limitations of containment for reactive melts. The noncontacting forces such as those developed from electromagnet, electrostatic, or acoustic fields can be used to position samples. With this mode of operation, contamination can be minimized to enable the study of reactive melts and to eliminate extraneous crystal nucleation so that novel crystalline structures and new glass compositions may be produced. In order to take advantage of the microgravity environment for materials research, it has become clear that reliable processing models based on a sound ground based experimental experience and an established thermophysical property data base are essential.
NASA Astrophysics Data System (ADS)
Le, Zichun; Suo, Kaihua; Fu, Minglei; Jiang, Ling; Dong, Wen
2012-03-01
In order to minimize the average end to end delay for data transporting in hybrid wireless optical broadband access network, a novel routing algorithm named MSTMCF (minimum spanning tree and minimum cost flow) is devised. The routing problem is described as a minimum spanning tree and minimum cost flow model and corresponding algorithm procedures are given. To verify the effectiveness of MSTMCF algorithm, extensively simulations based on OWNS have been done under different types of traffic source.
JPEG2000 encoding with perceptual distortion control.
Liu, Zhen; Karam, Lina J; Watson, Andrew B
2006-07-01
In this paper, a new encoding approach is proposed to control the JPEG2000 encoding in order to reach a desired perceptual quality. The new method is based on a vision model that incorporates various masking effects of human visual perception and a perceptual distortion metric that takes spatial and spectral summation of individual quantization errors into account. Compared with the conventional rate-based distortion minimization JPEG2000 encoding, the new method provides a way to generate consistent quality images at a lower bit rate.
Model Based Iterative Reconstruction for Bright Field Electron Tomography (Postprint)
2013-02-01
which is based on the iterative coordinate descent (ICD), works by constructing a substitute to the original cost4 at every point, and minimizing this...using Beer’s law. Thus the projection integral corresponding to the ith measurement is given by log ( λD λi ) . There can be cases in which the dosage λD...Inputs: Measurements g, Initial reconstruction f ′, Initial dosage d′, Fraction of entries to reject R %Outputs: Reconstruction f̂ and dosage parameter d̂
Use of Reference Frames for Interplanetary Navigation at JPL
NASA Technical Reports Server (NTRS)
Heflin, Michael; Jacobs, Chris; Sovers, Ojars; Moore, Angelyn; Owen, Sue
2010-01-01
Navigation of interplanetary spacecraft is typically based on range, Doppler, and differential interferometric measurements made by ground-based telescopes. Acquisition and interpretation of these observations requires accurate knowledge of the terrestrial reference frame and its orientation with respect to the celestial frame. Work is underway at JPL to reprocess historical VLBI and GPS data to improve realizations of the terrestrial and celestial frames. Improvements include minimal constraint alignment, improved tropospheric modeling, better orbit determination, and corrections for antenna phase center patterns.
Lu, Zhonghua; Arikatla, Venkata S; Han, Zhongqing; Allen, Brian F; De, Suvranu
2014-12-01
High-frequency electricity is used in the majority of surgical interventions. However, modern computer-based training and simulation systems rely on physically unrealistic models that fail to capture the interplay of the electrical, mechanical and thermal properties of biological tissue. We present a real-time and physically realistic simulation of electrosurgery by modelling the electrical, thermal and mechanical properties as three iteratively solved finite element models. To provide subfinite-element graphical rendering of vaporized tissue, a dual-mesh dynamic triangulation algorithm based on isotherms is proposed. The block compressed row storage (BCRS) structure is shown to be critical in allowing computationally efficient changes in the tissue topology due to vaporization. We have demonstrated our physics-based electrosurgery cutting algorithm through various examples. Our matrix manipulation algorithms designed for topology changes have shown low computational cost. Our simulator offers substantially greater physical fidelity compared to previous simulators that use simple geometry-based heat characterization. Copyright © 2013 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Widhiarso, Wahyu; Rosyidi, Cucuk Nur
2018-02-01
Minimizing production cost in a manufacturing company will increase the profit of the company. The cutting parameters will affect total processing time which then will affect the production cost of machining process. Besides affecting the production cost and processing time, the cutting parameters will also affect the environment. An optimization model is needed to determine the optimum cutting parameters. In this paper, we develop an optimization model to minimize the production cost and the environmental impact in CNC turning process. The model is used a multi objective optimization. Cutting speed and feed rate are served as the decision variables. Constraints considered are cutting speed, feed rate, cutting force, output power, and surface roughness. The environmental impact is converted from the environmental burden by using eco-indicator 99. Numerical example is given to show the implementation of the model and solved using OptQuest of Oracle Crystal Ball software. The results of optimization indicate that the model can be used to optimize the cutting parameters to minimize the production cost and the environmental impact.
Terence L. Wagner; Eric J. Villavaso
1999-01-01
This study examines the effects of temperature and adult diet on the development of hypertrophied fat bodies in prediapausing adult boll weevils, Anthonomus grandis grandis Boheman. Simulation models derived from this work are used to estimate the minimal ages at which male and female boll weevils exhibit diapause morphology, based on conditions...
Development and application of a probabilistic method for wildfire suppression cost modeling
Matthew P. Thompson; Jessica R. Haas; Mark A. Finney; David E. Calkin; Michael S. Hand; Mark J. Browne; Martin Halek; Karen C. Short; Isaac C. Grenfell
2015-01-01
Wildfire activity and escalating suppression costs continue to threaten the financial health of federal land management agencies. In order to minimize and effectively manage the cost of financial risk, agencies need the ability to quantify that risk. A fundamental aim of this research effort, therefore, is to develop a process for generating risk-based metrics for...
Scott W. Bailey; Patricia A. Brousseau; Kevin J. McGuire; Donald S. Ross
2014-01-01
Upland headwater catchments, such as those in the AppalachianMountain region, are typified by coarse textured soils, flashy hydrologic response, and low baseflow of streams, suggesting well drained soils and minimal groundwater storage. Model formulations of soil genesis, nutrient cycling, critical loads and rainfall/runoff response are typically based on vertical...
Impossibility Theorem in Proportional Representation Problem
NASA Astrophysics Data System (ADS)
Karpov, Alexander
2010-09-01
The study examines general axiomatics of Balinski and Young and analyzes existed proportional representation methods using this approach. The second part of the paper provides new axiomatics based on rational choice models. New system of axioms is applied to study known proportional representation systems. It is shown that there is no proportional representation method satisfying a minimal set of the axioms (monotonicity and neutrality).
ERIC Educational Resources Information Center
Department of Housing and Urban Development, Washington, DC.
This student manual comprises the United States Environmental Protection Agency's model renovation training course designed for renovation, remodeling, and painting contractors. It provides information regarding the containment, minimization, and cleanup of lead hazards during activities that disturb lead painted surfaces. Introductory material…
DOE Office of Scientific and Technical Information (OSTI.GOV)
STADLER, MICHAEL; MASHAYEKH, SALMAN; DEFOREST, NICHOLAS
The ODC Microgrid Controller is an optimization-based model predicative microgrid controller (MPMC) to minimize operation cost (and/or CO2 emissions) in a microgrid in the grid-connected mode. It is composed of several modules, including a) forecasting, b) optimization, c) data exchange and d) power balancing modules. In the presence of a multi-layered control system architecture, these modules will reside in the supervisory control layer.
Systems biology perspectives on minimal and simpler cells.
Xavier, Joana C; Patil, Kiran Raosaheb; Rocha, Isabel
2014-09-01
The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Optimization of HAART with genetic algorithms and agent-based models of HIV infection.
Castiglione, F; Pappalardo, F; Bernaschi, M; Motta, S
2007-12-15
Highly Active AntiRetroviral Therapies (HAART) can prolong life significantly to people infected by HIV since, although unable to eradicate the virus, they are quite effective in maintaining control of the infection. However, since HAART have several undesirable side effects, it is considered useful to suspend the therapy according to a suitable schedule of Structured Therapeutic Interruptions (STI). In the present article we describe an application of genetic algorithms (GA) aimed at finding the optimal schedule for a HAART simulated with an agent-based model (ABM) of the immune system that reproduces the most significant features of the response of an organism to the HIV-1 infection. The genetic algorithm helps in finding an optimal therapeutic schedule that maximizes immune restoration, minimizes the viral count and, through appropriate interruptions of the therapy, minimizes the dose of drug administered to the simulated patient. To validate the efficacy of the therapy that the genetic algorithm indicates as optimal, we ran simulations of opportunistic diseases and found that the selected therapy shows the best survival curve among the different simulated control groups. A version of the C-ImmSim simulator is available at http://www.iac.cnr.it/~filippo/c-ImmSim.html
Rosi, Francesca; Legan, Lea; Miliani, Costanza; Ropret, Polonca
2017-05-01
A new analytical approach, based on micro-transflection measurements from a diamond-coated metal sampling stick, is presented for the analysis of painting varnishes. Minimally invasive sampling is performed from the varnished surface using the stick, which is directly used as a transflection substrate for micro Fourier transform infrared (FTIR) measurements. With use of a series of varnished model paints, the micro-transflection method has been proved to be a valuable tool for the identification of surface components thanks to the selectivity of the sampling, the enhancement of the absorbance signal, and the easier spectral interpretation because the profiles are similar to transmission mode ones. Driven by these positive outcomes, the method was then tested as tool supporting noninvasive reflection FTIR spectroscopy during the assessment of varnish removal by solvent cleaning on paint models. Finally, the integrated analytical approach based on the two reflection methods was successfully applied for the monitoring of the cleaning of the sixteenth century painting Presentation in the Temple by Vittore Carpaccio. Graphical Abstract Micro-transflection FTIR on a metallic stick for the identification of varnishes during painting cleanings.
Feng, Haihua; Karl, William Clem; Castañon, David A
2008-05-01
In this paper, we develop a new unified approach for laser radar range anomaly suppression, range profiling, and segmentation. This approach combines an object-based hybrid scene model for representing the range distribution of the field and a statistical mixture model for the range data measurement noise. The image segmentation problem is formulated as a minimization problem which jointly estimates the target boundary together with the target region range variation and background range variation directly from the noisy and anomaly-filled range data. This formulation allows direct incorporation of prior information concerning the target boundary, target ranges, and background ranges into an optimal reconstruction process. Curve evolution techniques and a generalized expectation-maximization algorithm are jointly employed as an efficient solver for minimizing the objective energy, resulting in a coupled pair of object and intensity optimization tasks. The method directly and optimally extracts the target boundary, avoiding a suboptimal two-step process involving image smoothing followed by boundary extraction. Experiments are presented demonstrating that the proposed approach is robust to anomalous pixels (missing data) and capable of producing accurate estimation of the target boundary and range values from noisy data.
Qi, Fei; Ju, Feng; Bai, Dong Ming; Chen, Bai
2018-02-01
For the outstanding compliance and dexterity of continuum robot, it is increasingly used in minimally invasive surgery. The wide workspace, high dexterity and strong payload capacity are essential to the continuum robot. In this article, we investigate the workspace of a cable-driven continuum robot that we proposed. The influence of section number on the workspace is discussed when robot is operated in narrow environment. Meanwhile, the structural parameters of this continuum robot are optimized to achieve better kinematic performance. Moreover, an indicator based on the dexterous solid angle for evaluating the dexterity of robot is introduced and the distal end dexterity is compared for the three-section continuum robot with different range of variables. Results imply that the wider range of variables achieve the better dexterity. Finally, the static model of robot based on the principle of virtual work is derived to analyze the relationship between the bending shape deformation and the driven force. The simulations and experiments for plane and spatial motions are conducted to validate the feasibility of model, respectively. Results of this article can contribute to the real-time control and movement and can be a design reference for cable-driven continuum robot.
Jonkers, Ilse; De Schutter, Joris; De Groote, Friedl
2016-01-01
Experimental studies have shown that a continuum of ankle and hip strategies is used to restore posture following an external perturbation. Postural responses can be modeled by feedback control with feedback gains that optimize a specific objective. On the one hand, feedback gains that minimize effort have been used to predict muscle activity during perturbed standing. On the other hand, hip and ankle strategies have been predicted by minimizing postural instability and deviation from upright posture. It remains unclear, however, whether and how effort minimization influences the selection of a specific postural response. We hypothesize that the relative importance of minimizing mechanical work vs. postural instability influences the strategy used to restore upright posture. This hypothesis was investigated based on experiments and predictive simulations of the postural response following a backward support surface translation. Peak hip flexion angle was significantly correlated with three experimentally determined measures of effort, i.e., mechanical work, mean muscle activity and metabolic energy. Furthermore, a continuum of ankle and hip strategies was predicted in simulation when changing the relative importance of minimizing mechanical work and postural instability, with increased weighting of mechanical work resulting in an ankle strategy. In conclusion, the combination of experimental measurements and predictive simulations of the postural response to a backward support surface translation showed that the trade-off between effort and postural instability minimization can explain the selection of a specific postural response in the continuum of potential ankle and hip strategies. PMID:27489362
Clarke, John R
2009-01-01
Surgical errors with minimally invasive surgery differ from those in open surgery. Perforations are typically the result of trocar introduction or electrosurgery. Infections include bioburdens, notably enteric viruses, on complex instruments. Retained foreign objects are primarily unretrieved device fragments and lost gallstones or other specimens. Fires and burns come from illuminated ends of fiber-optic cables and from electrosurgery. Pressure ischemia is more likely with longer endoscopic surgical procedures. Gas emboli can occur. Minimally invasive surgery is more dependent on complex equipment, with high likelihood of failures. Standardization, checklists, and problem reporting are solutions for minimizing failures. The necessity of electrosurgery makes education about best electrosurgical practices important. The recording of minimally invasive surgical procedures is an opportunity to debrief in a way that improves the reliability of future procedures. Safety depends on reliability, designing systems to withstand inevitable human errors. Safe systems are characterized by a commitment to safety, formal protocols for communications, teamwork, standardization around best practice, and reporting of problems for improvement of the system. Teamwork requires shared goals, mental models, and situational awareness in order to facilitate mutual monitoring and backup. An effective team has a flat hierarchy; team members are empowered to speak up if they are concerned about problems. Effective teams plan, rehearse, distribute the workload, and debrief. Surgeons doing minimally invasive surgery have a unique opportunity to incorporate the principles of safety into the development of their discipline.
On the formulation of a minimal uncertainty model for robust control with structured uncertainty
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Chang, B.-C.; Fischl, Robert
1991-01-01
In the design and analysis of robust control systems for uncertain plants, representing the system transfer matrix in the form of what has come to be termed an M-delta model has become widely accepted and applied in the robust control literature. The M represents a transfer function matrix M(s) of the nominal closed loop system, and the delta represents an uncertainty matrix acting on M(s). The nominal closed loop system M(s) results from closing the feedback control system, K(s), around a nominal plant interconnection structure P(s). The uncertainty can arise from various sources, such as structured uncertainty from parameter variations or multiple unsaturated uncertainties from unmodeled dynamics and other neglected phenomena. In general, delta is a block diagonal matrix, but for real parameter variations delta is a diagonal matrix of real elements. Conceptually, the M-delta structure can always be formed for any linear interconnection of inputs, outputs, transfer functions, parameter variations, and perturbations. However, very little of the currently available literature addresses computational methods for obtaining this structure, and none of this literature addresses a general methodology for obtaining a minimal M-delta model for a wide class of uncertainty, where the term minimal refers to the dimension of the delta matrix. Since having a minimally dimensioned delta matrix would improve the efficiency of structured singular value (or multivariable stability margin) computations, a method of obtaining a minimal M-delta would be useful. Hence, a method of obtaining the interconnection system P(s) is required. A generalized procedure for obtaining a minimal P-delta structure for systems with real parameter variations is presented. Using this model, the minimal M-delta model can then be easily obtained by closing the feedback loop. The procedure involves representing the system in a cascade-form state-space realization, determining the minimal uncertainty matrix, delta, and constructing the state-space representation of P(s). Three examples are presented to illustrate the procedure.
Model Based Autonomy for Robust Mars Operations
NASA Technical Reports Server (NTRS)
Kurien, James A.; Nayak, P. Pandurang; Williams, Brian C.; Lau, Sonie (Technical Monitor)
1998-01-01
Space missions have historically relied upon a large ground staff, numbering in the hundreds for complex missions, to maintain routine operations. When an anomaly occurs, this small army of engineers attempts to identify and work around the problem. A piloted Mars mission, with its multiyear duration, cost pressures, half-hour communication delays and two-week blackouts cannot be closely controlled by a battalion of engineers on Earth. Flight crew involvement in routine system operations must also be minimized to maximize science return. It also may be unrealistic to require the crew have the expertise in each mission subsystem needed to diagnose a system failure and effect a timely repair, as engineers did for Apollo 13. Enter model-based autonomy, which allows complex systems to autonomously maintain operation despite failures or anomalous conditions, contributing to safe, robust, and minimally supervised operation of spacecraft, life support, In Situ Resource Utilization (ISRU) and power systems. Autonomous reasoning is central to the approach. A reasoning algorithm uses a logical or mathematical model of a system to infer how to operate the system, diagnose failures and generate appropriate behavior to repair or reconfigure the system in response. The 'plug and play' nature of the models enables low cost development of autonomy for multiple platforms. Declarative, reusable models capture relevant aspects of the behavior of simple devices (e.g. valves or thrusters). Reasoning algorithms combine device models to create a model of the system-wide interactions and behavior of a complex, unique artifact such as a spacecraft. Rather than requiring engineers to all possible interactions and failures at design time or perform analysis during the mission, the reasoning engine generates the appropriate response to the current situation, taking into account its system-wide knowledge, the current state, and even sensor failures or unexpected behavior.
Keeping speed and distance for aligned motion
NASA Astrophysics Data System (ADS)
Farkas, Illés J.; Kun, Jeromos; Jin, Yi; He, Gaoqi; Xu, Mingliang
2015-01-01
The cohesive collective motion (flocking, swarming) of autonomous agents is ubiquitously observed and exploited in both natural and man-made settings, thus, minimal models for its description are essential. In a model with continuous space and time we find that if two particles arrive symmetrically in a plane at a large angle, then (i) radial repulsion and (ii) linear self-propelling toward a fixed preferred speed are sufficient for them to depart at a smaller angle. For this local gain of momentum explicit velocity alignment is not necessary, nor are adhesion or attraction, inelasticity or anisotropy of the particles, or nonlinear drag. With many particles obeying these microscopic rules of motion we find that their spatial confinement to a square with periodic boundaries (which is an indirect form of attraction) leads to stable macroscopic ordering. As a function of the strength of added noise we see—at finite system sizes—a critical slowing down close to the order-disorder boundary and a discontinuous transition. After varying the density of particles at constant system size and varying the size of the system with constant particle density we predict that in the infinite system size (or density) limit the hysteresis loop disappears and the transition becomes continuous. We note that animals, humans, drones, etc., tend to move asynchronously and are often more responsive to motion than positions. Thus, for them velocity-based continuous models can provide higher precision than coordinate-based models. An additional characteristic and realistic feature of the model is that convergence to the ordered state is fastest at a finite density, which is in contrast to models applying (discontinuous) explicit velocity alignments and discretized time. To summarize, we find that the investigated model can provide a minimal description of flocking.
NASA Astrophysics Data System (ADS)
Javed, Hassan; Armstrong, Peter
2015-08-01
The efficiency bar for a Minimum Equipment Performance Standard (MEPS) generally aims to minimize energy consumption and life cycle cost of a given chiller type and size category serving a typical load profile. Compressor type has a significant chiller performance impact. Performance of screw and reciprocating compressors is expressed in terms of pressure ratio and speed for a given refrigerant and suction density. Isentropic efficiency for a screw compressor is strongly affected by under- and over-compression (UOC) processes. The theoretical simple physical UOC model involves a compressor-specific (but sometimes unknown) volume index parameter and the real gas properties of the refrigerant used. Isentropic efficiency is estimated by the UOC model and a bi-cubic, used to account for flow, friction and electrical losses. The unknown volume index, a smoothing parameter (to flatten the UOC model peak) and bi-cubic coefficients are identified by curve fitting to minimize an appropriate residual norm. Chiller performance maps are produced for each compressor type by selecting optimized sub-cooling and condenser fan speed options in a generic component-based chiller model. SEER is the sum of hourly load (from a typical building in the climate of interest) and specific power for the same hourly conditions. An empirical UAE cooling load model, scalable to any equipment capacity, is used to establish proposed UAE MEPS. Annual electricity use and cost, determined from SEER and annual cooling load, and chiller component cost data are used to find optimal chiller designs and perform life-cycle cost comparison between screw and reciprocating compressor-based chillers. This process may be applied to any climate/load model in order to establish optimized MEPS for any country and/or region.