Assessing alternative measures of wealth in health research.
Cubbin, Catherine; Pollack, Craig; Flaherty, Brian; Hayward, Mark; Sania, Ayesha; Vallone, Donna; Braveman, Paula
2011-05-01
We assessed whether it would be feasible to replace the standard measure of net worth with simpler measures of wealth in population-based studies examining associations between wealth and health. We used data from the 2004 Survey of Consumer Finances (respondents aged 25-64 years) and the 2004 Health and Retirement Survey (respondents aged 50 years or older) to construct logistic regression models relating wealth to health status and smoking. For our wealth measure, we used the standard measure of net worth as well as 9 simpler measures of wealth, and we compared results among the 10 models. In both data sets and for both health indicators, models using simpler wealth measures generated conclusions about the association between wealth and health that were similar to the conclusions generated by models using net worth. The magnitude and significance of the odds ratios were similar for the covariates in multivariate models, and the model-fit statistics for models using these simpler measures were similar to those for models using net worth. Our findings suggest that simpler measures of wealth may be acceptable in population-based studies of health.
Billing code algorithms to identify cases of peripheral artery disease from administrative data
Fan, Jin; Arruda-Olson, Adelaide M; Leibson, Cynthia L; Smith, Carin; Liu, Guanghui; Bailey, Kent R; Kullo, Iftikhar J
2013-01-01
Objective To construct and validate billing code algorithms for identifying patients with peripheral arterial disease (PAD). Methods We extracted all encounters and line item details including PAD-related billing codes at Mayo Clinic Rochester, Minnesota, between July 1, 1997 and June 30, 2008; 22 712 patients evaluated in the vascular laboratory were divided into training and validation sets. Multiple logistic regression analysis was used to create an integer code score from the training dataset, and this was tested in the validation set. We applied a model-based code algorithm to patients evaluated in the vascular laboratory and compared this with a simpler algorithm (presence of at least one of the ICD-9 PAD codes 440.20–440.29). We also applied both algorithms to a community-based sample (n=4420), followed by a manual review. Results The logistic regression model performed well in both training and validation datasets (c statistic=0.91). In patients evaluated in the vascular laboratory, the model-based code algorithm provided better negative predictive value. The simpler algorithm was reasonably accurate for identification of PAD status, with lesser sensitivity and greater specificity. In the community-based sample, the sensitivity (38.7% vs 68.0%) of the simpler algorithm was much lower, whereas the specificity (92.0% vs 87.6%) was higher than the model-based algorithm. Conclusions A model-based billing code algorithm had reasonable accuracy in identifying PAD cases from the community, and in patients referred to the non-invasive vascular laboratory. The simpler algorithm had reasonable accuracy for identification of PAD in patients referred to the vascular laboratory but was significantly less sensitive in a community-based sample. PMID:24166724
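The simpler of the two algorithms described above reduces to a code-set membership test. Below is a minimal sketch in Python, assuming each patient's billing history is available as a list of ICD-9 code strings; the data access layer and the model-based integer code score are not reproduced here.

```python
# Minimal sketch of the "simpler" PAD algorithm from the abstract:
# flag a patient as a PAD case if any billing code falls in ICD-9 440.20-440.29.
# The patient data structure and code formatting are illustrative assumptions.

PAD_CODES = {f"440.2{i}" for i in range(10)}  # 440.20 ... 440.29

def has_pad_code(billing_codes):
    """Return True if any ICD-9 code in the patient's history is a PAD code."""
    return any(code in PAD_CODES for code in billing_codes)

# Example usage with hypothetical patients
patients = {
    "A": ["401.9", "440.21", "250.00"],   # has a PAD code -> flagged
    "B": ["414.01", "272.4"],             # no PAD code -> not flagged
}
for pid, codes in patients.items():
    print(pid, has_pad_code(codes))
```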
Water balance models in one-month-ahead streamflow forecasting
Alley, William M.
1985-01-01
Techniques are tested that incorporate information from water balance models in making 1-month-ahead streamflow forecasts in New Jersey. The results are compared to those based on simple autoregressive time series models. The relative performance of the models is dependent on the month of the year in question. The water balance models are most useful for forecasts of April and May flows. For the stations in northern New Jersey, the April and May forecasts were made in order of decreasing reliability using the water-balance-based approaches, using the historical monthly means, and using simple autoregressive models. The water balance models were useful to a lesser extent for forecasts during the fall months. For the rest of the year the improvements in forecasts over those obtained using the simpler autoregressive models were either very small or the simpler models provided better forecasts. When using the water balance models, monthly corrections for bias are found to improve minimum mean-square-error forecasts as well as to improve estimates of the forecast conditional distributions.
Mechatronics by Analogy and Application to Legged Locomotion
NASA Astrophysics Data System (ADS)
Ragusila, Victor
A new design methodology for mechatronic systems, dubbed Mechatronics by Analogy (MbA), is introduced and applied to designing a leg mechanism. The new methodology argues that by establishing a similarity relation between a complex system and a number of simpler models it is possible to design the former using the analysis and synthesis means developed for the latter. The methodology provides a framework for concurrent engineering of complex systems while maintaining the transparency of the system behaviour through making formal analogies between the system and those with more tractable dynamics. The application of the MbA methodology to the design of a monopod robot leg, called the Linkage Leg, is also studied. A series of simulations show that the dynamic behaviour of the Linkage Leg is similar to that of a combination of a double pendulum and a spring-loaded inverted pendulum, based on which the system kinematic, dynamic, and control parameters can be designed concurrently. The first stage of Mechatronics by Analogy is a method of extracting significant features of system dynamics through simpler models. The goal is to determine a set of simpler mechanisms with similar dynamic behaviour to that of the original system in various phases of its motion. A modular bond-graph representation of the system is determined, and subsequently simplified using two simplification algorithms. The first algorithm determines the relevant dynamic elements of the system for each phase of motion, and the second algorithm finds the simple mechanism described by the remaining dynamic elements. In addition to greatly simplifying the controller for the system, using simpler mechanisms with similar behaviour provides a greater insight into the dynamics of the system. This is seen in the second stage of the new methodology, which concurrently optimizes the simpler mechanisms together with a control system based on their dynamics. Once the optimal configuration of the simpler system is determined, the original mechanism is optimized such that its dynamic behaviour is analogous. It is shown that, if this analogy is achieved, the control system designed based on the simpler mechanisms can be directly applied to the more complex system, and their dynamic behaviours are close enough for the system performance to be effectively the same. Finally, it is shown that, for the employed objective of fast legged locomotion, the proposed methodology achieves a better design than Reduction-by-Feedback, a competing methodology that uses control layers to simplify the dynamics of the system.
Heuristics for the Hodgkin-Huxley system.
Hoppensteadt, Frank
2013-09-01
Hodgkin and Huxley (HH) discovered that voltages control ionic currents in nerve membranes. This led them to describe electrical activity in a neuronal membrane patch in terms of an electronic circuit whose characteristics were determined using empirical data. Due to the complexity of this model, a variety of heuristics, including relaxation oscillator circuits and integrate-and-fire models, have been used to investigate activity in neurons, and these simpler models have been successful in suggesting experiments and explaining observations. Connections between most of the simpler models had not been made clear until recently. Shown here are connections between these heuristics and the full HH model. In particular, we study a new model (Type III circuit): It includes the van der Pol-based models; it can be approximated by a simple integrate-and-fire model; and it creates voltages and currents that correspond, respectively, to the h and V components of the HH system.
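One of the heuristics named above, the integrate-and-fire model, can be stated in a few lines. The following is a minimal sketch of a standard leaky integrate-and-fire neuron; the parameter values are illustrative placeholders and are not taken from the paper.

```python
import numpy as np

# Minimal leaky integrate-and-fire sketch: dV/dt = (-(V - V_rest) + R*I) / tau,
# with a reset to V_reset whenever V crosses the firing threshold.
# All parameter values below are illustrative, not fitted to any data.
dt, T = 0.1, 200.0          # time step and duration (ms)
tau, R = 10.0, 1.0          # membrane time constant (ms), input resistance
V_rest, V_th, V_reset = -65.0, -50.0, -65.0  # mV
I = 20.0                    # constant input current

t = np.arange(0.0, T, dt)
V = np.full_like(t, V_rest)
spikes = []
for k in range(1, len(t)):
    dV = (-(V[k - 1] - V_rest) + R * I) * dt / tau
    V[k] = V[k - 1] + dV
    if V[k] >= V_th:        # threshold crossing: record a spike and reset
        spikes.append(t[k])
        V[k] = V_reset

print(f"{len(spikes)} spikes in {T} ms")
```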
ERIC Educational Resources Information Center
Fan, Yi; Lance, Charles E.
2017-01-01
The correlated trait-correlated method (CTCM) model for the analysis of multitrait-multimethod (MTMM) data is known to suffer convergence and admissibility (C&A) problems. We describe a little known and seldom applied reparameterized version of this model (CTCM-R) based on Rindskopf's reparameterization of the simpler confirmatory factor…
A computational approach to climate science education with CLIMLAB
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2017-12-01
CLIMLAB is a Python-based software toolkit for interactive, process-oriented climate modeling for use in education and research. It is motivated by the need for simpler tools and more reproducible workflows with which to "fill in the gaps" between blackboard-level theory and the results of comprehensive climate models. With CLIMLAB you can interactively mix and match physical model components, or combine simpler process models together into a more comprehensive model. I use CLIMLAB in the classroom to put models in the hands of students (undergraduate and graduate), and emphasize a hierarchical, process-oriented approach to understanding the key emergent properties of the climate system. CLIMLAB is equally a tool for climate research, where the same needs exist for more robust, process-based understanding and reproducible computational results. I will give an overview of CLIMLAB and an update on recent developments, including: a full-featured, well-documented, interactive implementation of a widely-used radiation model (RRTM); packaging with conda-forge for compiler-free (and hassle-free!) installation on Mac, Windows and Linux; interfacing with xarray for i/o and graphics with gridded model data; and a rich and growing collection of examples and self-computing lecture notes in Jupyter notebook format.
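As a rough illustration of the "mix and match" workflow described above, the sketch below builds a single-column model by coupling the RRTMG radiation component with a convective adjustment process. It assumes the climlab package is installed and that the names used (climlab.column_state, climlab.radiation.RRTMG, climlab.convection.ConvectiveAdjustment, climlab.couple, integrate_years) match the documented API; consult the CLIMLAB documentation for the authoritative interface.

```python
# Hedged sketch of a CLIMLAB single-column model: couple RRTMG radiation with
# convective adjustment and integrate toward equilibrium. Class and function
# names follow the climlab documentation as recalled here; verify against the
# current climlab release before use.
import climlab

state = climlab.column_state(num_lev=30)                 # temperature state on 30 levels
rad = climlab.radiation.RRTMG(state=state)               # full radiation component
conv = climlab.convection.ConvectiveAdjustment(state=state,
                                               adj_lapse_rate=6.5)  # K/km, illustrative
model = climlab.couple([rad, conv], name='Radiative-Convective Model')

model.integrate_years(2.0)                               # step the coupled model forward
print(model.Ts)                                          # surface temperature
```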
NASA Astrophysics Data System (ADS)
Hopp, L.; Ivanov, V. Y.
2010-12-01
There is still a debate in rainfall-runoff modeling over the advantage of using three-dimensional models based on partial differential equations describing variably saturated flow vs. models with simpler infiltration and flow routing algorithms. Fully explicit 3D models are computationally demanding but allow the representation of spatially complex domains, heterogeneous soils, conditions of ponded infiltration, and solute transport, among others. Models with simpler infiltration and flow routing algorithms provide faster run times and are likely to be more versatile in the treatment of extreme conditions such as soil drying but suffer from underlying assumptions and ad-hoc parameterizations. In this numerical study, we explore the question of whether these two model strategies are competing approaches or if they complement each other. As a 3D physics-based model we use HYDRUS-3D, a finite element model that numerically solves the Richards equation for variably-saturated water flow. As an example of a simpler model, we use tRIBS+VEGGIE that solves the 1D Richards equation for vertical flow and applies Dupuit-Forchheimer approximation for saturated lateral exchange and gravity-driven flow for unsaturated lateral exchange. The flow can be routed using either the D-8 (steepest descent) or D-infinity flow routing algorithms. We study lateral subsurface stormflow and moisture dynamics at the hillslope-scale, using a zero-order basin topography, as a function of storm size, antecedent moisture conditions and slope angle. The domain and soil characteristics are representative of a forested hillslope with conductive soils in a humid environment, where the major runoff generating process is lateral subsurface stormflow. We compare spatially integrated lateral subsurface flow at the downslope boundary as well as spatial patterns of soil moisture. We illustrate situations where both model approaches perform equally well and identify conditions under which the application of a fully-explicit 3D model may be required for a realistic description of the hydrologic response.
Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation
2004-12-01
area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image-Based Rendering. ... Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal) ... can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the ...
Spontaneous emergence of milling (vortex state) in a Vicsek-like model
NASA Astrophysics Data System (ADS)
Costanzo, A.; Hemelrijk, C. K.
2018-04-01
Collective motion is of interest to laymen and scientists in different fields. In groups of animals, many patterns of collective motion arise, such as polarized schools and mills (i.e. circular motion). Collective motion can be generated in computational models of different degrees of complexity. In these models, moving individuals coordinate with others nearby. In the more complex models, individuals attract each other, align their headings, and avoid collisions. Simpler models may include only one or two of these types of interactions. The collective pattern that interests us here is milling, which is observed in many animal species. It has been reproduced in the more complex models, but not in simpler models that are based only on alignment, such as the well-known Vicsek model. Our aim is to provide insight into the minimal conditions required for milling by making minimal modifications to the Vicsek model. Our results show that milling occurs when both the field of view and the maximal angular velocity are decreased. Remarkably, apart from milling, our minimal model also exhibits many of the other patterns of collective motion observed in animal groups.
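For readers unfamiliar with the underlying model, a minimal sketch of one alignment update of a Vicsek-type model follows, with the two modifications named in the abstract (a restricted field of view and a cap on angular velocity) included as simple geometric filters. All parameter values and the exact form of the restrictions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# One update step of a Vicsek-like alignment model (sketch).
# Modifications from the abstract: neighbours outside the field of view are
# ignored, and the heading change per step is capped (maximal angular velocity).
# Parameter values are illustrative only.
N, L, r = 200, 10.0, 1.0          # particles, box size, interaction radius
v0, eta = 0.05, 0.1               # speed, noise amplitude
fov = np.pi / 2                   # half-angle of the field of view (assumption)
omega_max = 0.3                   # maximal heading change per step (assumption)

rng = np.random.default_rng(0)
pos = rng.uniform(0, L, (N, 2))
theta = rng.uniform(-np.pi, np.pi, N)

def step(pos, theta):
    new_theta = theta.copy()
    for i in range(N):
        d = pos - pos[i]
        d -= L * np.round(d / L)                      # periodic boundaries
        dist = np.hypot(d[:, 0], d[:, 1])
        bearing = np.arctan2(d[:, 1], d[:, 0]) - theta[i]
        bearing = np.arctan2(np.sin(bearing), np.cos(bearing))
        mask = (dist < r) & ((dist == 0) | (np.abs(bearing) <= fov))
        mean_dir = np.arctan2(np.sin(theta[mask]).mean(), np.cos(theta[mask]).mean())
        dtheta = np.arctan2(np.sin(mean_dir - theta[i]), np.cos(mean_dir - theta[i]))
        dtheta += eta * rng.uniform(-np.pi, np.pi)    # angular noise
        new_theta[i] = theta[i] + np.clip(dtheta, -omega_max, omega_max)
    vel = v0 * np.column_stack((np.cos(new_theta), np.sin(new_theta)))
    return (pos + vel) % L, new_theta

pos, theta = step(pos, theta)
```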
On macromolecular refinement at subatomic resolution with interatomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V., E-mail: pafonine@lbl.gov; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-01
Modelling deformation electron density using interatomic scatterers is simpler than multipolar methods, produces comparable results at subatomic resolution and can easily be applied to macromolecules. A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
Application of powder densification models to the consolidation processing of composites
NASA Technical Reports Server (NTRS)
Wadley, H. N. G.; Elzey, D. M.
1991-01-01
Unidirectional fiber reinforced metal matrix composite tapes (containing a single layer of parallel fibers) can now be produced by plasma deposition. These tapes can be stacked and subjected to a thermomechanical treatment that results in a fully dense near net shape component. The mechanisms by which this consolidation step occurs are explored, and models to predict the effect of different thermomechanical conditions (during consolidation) upon the kinetics of densification are developed. The approach is based upon a methodology developed by Ashby and others for the simpler problem of hot isostatic pressing (HIP) of spherical powders. The complex problem is divided into six much simpler subproblems, and their predicted contributions to densification are then summed. The initial problem decomposition is to treat the two extreme geometries encountered (contact deformation occurring between foils and shrinkage of isolated, internal pores). Deformation of these two geometries is modelled for plastic, power law creep and diffusional flow. The results are reported in the form of a densification map.
NASA Astrophysics Data System (ADS)
Shen, C.; Fang, K.
2017-12-01
Deep Learning (DL) methods have made revolutionary strides in recent years. A core value proposition of DL is that abstract notions and patterns can be extracted purely from data, without the need for domain expertise. Process-based models (PBM), on the other hand, can be regarded as repositories of human knowledge or hypotheses about how systems function. Here, through computational examples, we argue that there is merit in integrating PBMs with DL due to the imbalance and lack of data in many situations, especially in hydrology. We trained a deep-in-time neural network, the Long Short-Term Memory (LSTM), to learn soil moisture dynamics from Soil Moisture Active Passive (SMAP) Level 3 product. We show that when PBM solutions are integrated into LSTM, the network is able to better generalize across regions. LSTM is able to better utilize PBM solutions than simpler statistical methods. Our results suggest PBMs have generalization value which should be carefully assessed and utilized. We also emphasize that when properly regularized, the deep network is robust and is of superior testing performance compared to simpler methods.
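A minimal sketch of the kind of integration described above (feeding a process-based model's output to an LSTM as an additional input feature) is shown below using PyTorch. The tensor shapes, feature counts, and training details are placeholders; SMAP data handling and the authors' actual network configuration are not reproduced.

```python
import torch
import torch.nn as nn

# Sketch: an LSTM that maps meteorological forcings plus a process-based-model
# (PBM) soil-moisture estimate to observed soil moisture. Shapes and feature
# counts are illustrative assumptions, not the configuration used in the paper.
class SoilMoistureLSTM(nn.Module):
    def __init__(self, n_forcings=5, hidden=64):
        super().__init__()
        # +1 input channel for the PBM solution appended to the forcings
        self.lstm = nn.LSTM(input_size=n_forcings + 1, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, forcings, pbm):
        # forcings: (batch, time, n_forcings); pbm: (batch, time, 1)
        x = torch.cat([forcings, pbm], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out)            # (batch, time, 1) soil-moisture estimate

# Dummy training step on random data, to show the mechanics only
model = SoilMoistureLSTM()
forcings = torch.randn(8, 30, 5)        # 8 sites, 30 time steps
pbm = torch.rand(8, 30, 1)              # PBM soil-moisture output
target = torch.rand(8, 30, 1)           # e.g. a SMAP-derived soil-moisture series
loss = nn.MSELoss()(model(forcings, pbm), target)
loss.backward()
print(float(loss))
```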
NASA Astrophysics Data System (ADS)
Li, Yutong; Wang, Yuxin; Duffy, Alex H. B.
2014-11-01
Computer-based conceptual design for routine design has made great strides, yet non-routine design has not been given due attention, and it is still poorly automated. Considering that the function-behavior-structure (FBS) model is widely used for modeling the conceptual design process, a computer-based creativity enhanced conceptual design model (CECD) for non-routine design of mechanical systems is presented. In the model, the leaf functions in the FBS model are decomposed into and represented with fine-grain basic operation actions (BOA), and the corresponding BOA set in the function domain is then constructed. Choosing building blocks from the database, and expressing their multiple functions with BOAs, the BOA set in the structure domain is formed. Through rule-based dynamic partition of the BOA set in the function domain, many variants of regenerated functional schemes are generated. For enhancing the capability to introduce new design variables into the conceptual design process, and dig out more innovative physical structure schemes, the indirect function-structure matching strategy based on reconstructing the combined structure schemes is adopted. By adjusting the tightness of the partition rules and the granularity of the divided BOA subsets, and making full use of the main function and secondary functions of each basic structure in the process of reconstructing the physical structures, new design variables and variants are introduced into the physical structure scheme reconstructing process, and a great number of simpler physical structure schemes to accomplish the overall function organically are figured out. The creativity enhanced conceptual design model presented has a dominant capability in introducing new design variables in the function domain and digging out simpler physical structures to accomplish the overall function, therefore it can be utilized to solve non-routine conceptual design problems.
The Krylov accelerated SIMPLE(R) method for flow problems in industrial furnaces
NASA Astrophysics Data System (ADS)
Vuik, C.; Saghir, A.; Boerstoel, G. P.
2000-08-01
Numerical modeling of the melting and combustion process is an important tool in gaining understanding of the physical and chemical phenomena that occur in a gas- or oil-fired glass-melting furnace. The incompressible Navier-Stokes equations are used to model the gas flow in the furnace. The discrete Navier-Stokes equations are solved by the SIMPLE(R) pressure-correction method. In these applications, many SIMPLE(R) iterations are necessary to obtain an accurate solution. In this paper, Krylov accelerated versions are proposed: GCR-SIMPLE(R). The properties of these methods are investigated for a simple two-dimensional flow. Thereafter, the efficiencies of the methods are compared for three-dimensional flows in industrial glass-melting furnaces.
Simple systems that exhibit self-directed replication
NASA Technical Reports Server (NTRS)
Reggia, James A.; Armentrout, Steven L.; Chou, Hui-Hsien; Peng, Yun
1993-01-01
Biological experience and intuition suggest that self-replication is an inherently complex phenomenon, and early cellular automata models support that conception. More recently, simpler computational models of self-directed replication called sheathed loops have been developed. It is shown here that 'unsheathing' these structures and altering certain assumptions about the symmetry of their components leads to a family of nontrivial self-replicating structures, some substantially smaller and simpler than those previously reported. The dependence of replication time and transition function complexity on initial structure size, cell state symmetry, and neighborhood is examined. These results support the view that self-replication is not an inherently complex phenomenon but rather an emergent property arising from local interactions in systems that can be much simpler than is generally believed.
Expanded Processing Techniques for EMI Systems
2012-07-01
possible to perform better target detection using physics-based algorithms and the entire data set, rather than simulating a simpler data set and mapping... [Figure 4.25: Plots of simulated MetalMapper data for two oblate spheroidal targets.]
CLIMLAB: a Python-based software toolkit for interactive, process-oriented climate modeling
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2015-12-01
Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) is invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models together into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The IPython notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields. However, CLIMLAB is well suited to be deployed as a computational back-end for a graphical gaming environment based on earth-system modeling.
Adaptive reduction of constitutive model-form error using a posteriori error estimation techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, Joseph E.; Brown, Judith Alice
In engineering practice, models are typically kept as simple as possible for ease of setup and use, computational efficiency, maintenance, and overall reduced complexity to achieve robustness. In solid mechanics, a simple and efficient constitutive model may be favored over one that is more predictive, but is difficult to parameterize, is computationally expensive, or is simply not available within a simulation tool. In order to quantify the modeling error due to the choice of a relatively simple and less predictive constitutive model, we adopt the use of a posteriori model-form error-estimation techniques. Based on local error indicators in the energy norm, an algorithm is developed for reducing the modeling error by spatially adapting the material parameters in the simpler constitutive model. The resulting material parameters are not material properties per se, but depend on the given boundary-value problem. As a first step to the more general nonlinear case, we focus here on linear elasticity in which the “complex” constitutive model is general anisotropic elasticity and the chosen simpler model is isotropic elasticity. As a result, the algorithm for adaptive error reduction is demonstrated using two examples: (1) A transversely-isotropic plate with hole subjected to tension, and (2) a transversely-isotropic tube with two side holes subjected to torsion.
MOSES: A Matlab-based open-source stochastic epidemic simulator.
Varol, Huseyin Atakan
2016-08-01
This paper presents an open-source stochastic epidemic simulator. A discrete-time Markov chain based simulator is implemented in Matlab. The simulator, which is capable of simulating the SEQIJR (susceptible, exposed, quarantined, infected, isolated and recovered) model, can be reduced to simpler models by setting some of the parameters (transition probabilities) to zero. Similarly, it can be extended to more complicated models by editing the source code. It is designed to be used for testing different control algorithms to contain epidemics. The simulator is also designed to be compatible with a network-based epidemic simulator and can be used in the network-based scheme for the simulation of a node. Simulations show the capability of reproducing different epidemic model behaviors successfully in a computationally efficient manner.
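To illustrate the reduction mechanism described above (dropping compartments by zeroing transition probabilities), here is a minimal discrete-time stochastic compartment sketch, written in Python rather than Matlab. The transition structure is a simplified stand-in, not the full SEQIJR model implemented in MOSES.

```python
import numpy as np

# Minimal discrete-time stochastic S-E-I-R chain sketch. Setting the
# quarantine/isolation probabilities of a fuller SEQIJR model to zero collapses
# it to this structure; zeroing further transitions would reduce it toward SIR.
# All parameter values are illustrative.
rng = np.random.default_rng(1)
N = 10_000
S, E, I, R = N - 10, 0, 10, 0
beta, p_EI, p_IR = 0.3, 0.2, 0.1        # infection, incubation, recovery probabilities

history = []
for t in range(200):
    p_inf = 1.0 - np.exp(-beta * I / N)          # per-susceptible infection probability
    new_E = rng.binomial(S, p_inf)
    new_I = rng.binomial(E, p_EI)
    new_R = rng.binomial(I, p_IR)
    S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
    history.append((S, E, I, R))

print("peak infected:", max(h[2] for h in history))
```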
Matlab-Excel Interface for OpenDSS
DOE Office of Scientific and Technical Information (OSTI.GOV)
The software allows users of the OpenDSS grid modeling software to access their load flow models using a GUI interface developed in MATLAB. The circuit definitions are entered into a Microsoft Excel spreadsheet which makes circuit creation and editing a much simpler process than the basic text-based editors used in the native OpenDSS interface. Plot tools have been developed which can be accessed through a MATLAB GUI once the desired parameters have been simulated.
Tufto, Jarle
2010-01-01
Domesticated species frequently spread their genes into populations of wild relatives through interbreeding. The domestication process often involves artificial selection for economically desirable traits. This can lead to an indirect response in unknown correlated traits and a reduction in fitness of domesticated individuals in the wild. Previous models for the effect of gene flow from domesticated species to wild relatives have assumed that evolution occurs in one dimension. Here, I develop a quantitative genetic model for the balance between migration and multivariate stabilizing selection. Different forms of correlational selection consistent with a given observed ratio between average fitness of domesticated and wild individuals offset the phenotypic means at migration-selection balance away from predictions based on simpler one-dimensional models. For almost all parameter values, correlational selection leads to a reduction in the migration load. For ridge selection, this reduction arises because the distance the immigrants deviate from the local optimum is, in effect, reduced. For realistic parameter values, however, the effect of correlational selection on the load is small, suggesting that simpler one-dimensional models may still be adequate in terms of predicting mean population fitness and viability.
Optimal synchronization of Kuramoto oscillators: A dimensional reduction approach
NASA Astrophysics Data System (ADS)
Pinto, Rafael S.; Saa, Alberto
2015-12-01
A recently proposed dimensional reduction approach for studying synchronization in the Kuramoto model is employed to build optimal network topologies to favor or to suppress synchronization. The approach is based on the introduction of a collective coordinate for the time evolution of the phase-locked oscillators, in the spirit of the Ott-Antonsen ansatz. We show that the optimal synchronization of a Kuramoto network demands the maximization of the quadratic function ω^T L ω, where ω stands for the vector of the natural frequencies of the oscillators and L for the network Laplacian matrix. Many recently obtained numerical results can be reobtained analytically and in a simpler way from our maximization condition. A computationally efficient hill-climb rewiring algorithm is proposed to generate networks with optimal synchronization properties. Our approach can be easily adapted to the case of the Kuramoto models with both attractive and repulsive interactions, and again many recent numerical results can be rederived in a simpler and clearer analytical manner.
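A minimal sketch of the hill-climb rewiring idea stated above (accept a random rewiring only if it increases ω^T L ω at fixed edge count) follows; it uses networkx for bookkeeping and is meant to illustrate the objective, not to reproduce the authors' algorithm.

```python
import numpy as np
import networkx as nx

# Hill-climb rewiring sketch: greedily increase the objective w^T L w, where w
# is the vector of natural frequencies and L the network Laplacian, while
# keeping the number of edges fixed. Parameter values are illustrative.
rng = np.random.default_rng(0)
n, k = 50, 4
G = nx.random_regular_graph(k, n, seed=0)
w = rng.normal(size=n)                       # natural frequencies

def objective(G):
    L = nx.laplacian_matrix(G, nodelist=range(n)).toarray()
    return w @ L @ w

best = objective(G)
for _ in range(2000):
    u, v = list(G.edges())[rng.integers(G.number_of_edges())]
    a, b = rng.integers(n), rng.integers(n)
    if a == b or G.has_edge(a, b):
        continue
    H = G.copy()
    H.remove_edge(u, v)
    H.add_edge(a, b)
    if not nx.is_connected(H):               # keep the network connected
        continue
    val = objective(H)
    if val > best:                            # accept only improving rewirings
        G, best = H, val

print("final w^T L w:", best)
```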
Human sleep and circadian rhythms: a simple model based on two coupled oscillators.
Strogatz, S H
1987-01-01
We propose a model of the human circadian system. The sleep-wake and body temperature rhythms are assumed to be driven by a pair of coupled nonlinear oscillators described by phase variables alone. The novel aspect of the model is that its equations may be solved analytically. Computer simulations are used to test the model against sleep-wake data pooled from 15 studies of subjects living for weeks in unscheduled, time-free environments. On these tests the model performs about as well as the existing models, although its mathematical structure is far simpler.
The algorithmic anatomy of model-based evaluation
Daw, Nathaniel D.; Dayan, Peter
2014-01-01
Despite many debates in the first half of the twentieth century, it is now largely a truism that humans and other animals build models of their environments and use them for prediction and control. However, model-based (MB) reasoning presents severe computational challenges. Alternative, computationally simpler, model-free (MF) schemes have been suggested in the reinforcement learning literature, and have afforded influential accounts of behavioural and neural data. Here, we study the realization of MB calculations, and the ways that this might be woven together with MF values and evaluation methods. There are as yet mostly only hints in the literature as to the resulting tapestry, so we offer more preview than review. PMID:25267820
Comparative evaluation of urban storm water quality models
NASA Astrophysics Data System (ADS)
Vaze, J.; Chiew, Francis H. S.
2003-10-01
The estimation of urban storm water pollutant loads is required for the development of mitigation and management strategies to minimize impacts to receiving environments. Event pollutant loads are typically estimated using either regression equations or "process-based" water quality models. The relative merit of using regression models compared to process-based models is not clear. A modeling study is carried out here to evaluate the comparative ability of the regression equations and process-based water quality models to estimate event diffuse pollutant loads from impervious surfaces. The results indicate that, once calibrated, both the regression equations and the process-based model can estimate event pollutant loads satisfactorily. In fact, the loads estimated using the regression equation as a function of rainfall intensity and runoff rate are better than the loads estimated using the process-based model. Therefore, if only estimates of event loads are required, regression models should be used because they are simpler and require less data compared to process-based models.
Venous thromboembolism prevention guidelines for medical inpatients: mind the (implementation) gap.
Maynard, Greg; Jenkins, Ian H; Merli, Geno J
2013-10-01
Hospital-associated nonsurgical venous thromboembolism (VTE) is an important problem addressed by new guidelines from the American College of Physicians (ACP) and American College of Chest Physicians (AT9). Narrative review and critique. Both guidelines discount asymptomatic VTE outcomes and caution against overprophylaxis, but have different methodologies and estimates of risk/benefit. Guideline complexity and lack of consensus on VTE risk assessment contribute to an implementation gap. Methods to estimate prophylaxis benefit have significant limitations because major trials included mostly screening-detected events. AT9 relies on a single Italian cohort study to conclude that those with a Padua score ≥4 have a very high VTE risk, whereas patients with a score <4 (60% of patients) have a very small risk. However, the cohort population has less comorbidity than US inpatients, and over 1% of patients with a score of 3 suffered pulmonary emboli. The ACP guideline does not endorse any risk-assessment model. AT9 includes the Padua model and Caprini point-based system for nonsurgical inpatients and surgical inpatients, respectively, but there is no evidence they are more effective than simpler risk-assessment models. New VTE prevention guidelines provide varied guidance on important issues including risk assessment. If Padua is used, a threshold of 3, as well as 4, should be considered. Simpler VTE risk-assessment models may be superior to complicated point-based models in environments without sophisticated clinical decision support. © 2013 Society of Hospital Medicine.
NASA Astrophysics Data System (ADS)
Sandfeld, Stefan; Budrikis, Zoe; Zapperi, Stefano; Fernandez Castellanos, David
2015-02-01
Crystalline plasticity is strongly interlinked with dislocation mechanics and nowadays is relatively well understood. Concepts and physical models of plastic deformation in amorphous materials, on the other hand, where the concept of linear lattice defects is not applicable, are still lagging behind. We introduce an eigenstrain-based finite element lattice model for simulations of shear band formation and strain avalanches. Our model allows us to study the influence of surfaces and finite size effects on the statistics of avalanches. We find that even with relatively complex loading conditions and open boundary conditions, critical exponents describing avalanche statistics are unchanged, which validates the use of simpler scalar lattice-based models to study these phenomena.
NASA Technical Reports Server (NTRS)
Flowers, George T.
1994-01-01
Substantial progress has been made toward the goals of this research effort in the past six months. A simplified rotor model with a flexible shaft and backup bearings has been developed. The model is based upon the work of Ishii and Kirk. Parameter studies of the behavior of this model are currently being conducted. A simple rotor model which includes a flexible disk and bearings with clearance has been developed and the dynamics of the model investigated. The study consists of simulation work coupled with experimental verification. The work is documented in the attached paper. A rotor model based upon the T-501 engine has been developed which includes backup bearing effects. The dynamics of this model are currently being studied with the objective of verifying the conclusions obtained from the simpler models. Parallel simulation runs are being conducted using an ANSYS based finite element model of the T-501.
VizieR Online Data Catalog: Bayesian method for detecting stellar flares (Pitkin+, 2014)
NASA Astrophysics Data System (ADS)
Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.
2015-05-01
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of 'quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N. (1 data file).
A Bayesian method for detecting stellar flares
NASA Astrophysics Data System (ADS)
Pitkin, M.; Williams, D.; Fletcher, L.; Grant, S. D. T.
2014-12-01
We present a Bayesian-odds-ratio-based algorithm for detecting stellar flares in light-curve data. We assume flares are described by a model in which there is a rapid rise with a half-Gaussian profile, followed by an exponential decay. Our signal model also contains a polynomial background model required to fit underlying light-curve variations in the data, which could otherwise partially mimic a flare. We characterize the false alarm probability and efficiency of this method under the assumption that any unmodelled noise in the data is Gaussian, and compare it with a simpler thresholding method based on that used in Walkowicz et al. We find our method has a significant increase in detection efficiency for low signal-to-noise ratio (S/N) flares. For a conservative false alarm probability our method can detect 95 per cent of flares with S/N less than 20, as compared to S/N of 25 for the simpler method. We also test how well the assumption of Gaussian noise holds by applying the method to a selection of `quiet' Kepler stars. As an example we have applied our method to a selection of stars in Kepler Quarter 1 data. The method finds 687 flaring stars with a total of 1873 flares after vetoes have been applied. For these flares we have made preliminary characterizations of their durations and S/N.
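A minimal sketch of the flare shape assumed in the signal model above (a half-Gaussian rise followed by an exponential decay, on top of a polynomial background) is given below. The amplitude, time-scale, and background values are illustrative, and the Bayesian odds-ratio machinery itself is not reproduced.

```python
import numpy as np

# Flare template from the abstract: half-Gaussian rise, exponential decay,
# superposed on a low-order polynomial background. Parameter values are
# illustrative placeholders, not fitted quantities.
def flare(t, t0, amplitude, sigma_rise, tau_decay):
    """Flare profile peaking at t0, scaled by amplitude."""
    rise = np.exp(-0.5 * ((t - t0) / sigma_rise) ** 2)   # half-Gaussian before t0
    decay = np.exp(-(t - t0) / tau_decay)                # exponential after t0
    return amplitude * np.where(t < t0, rise, decay)

def background(t, coeffs=(1.0, 1e-3)):
    """Low-order polynomial background (here linear) underlying the light curve."""
    return np.polyval(coeffs[::-1], t)

t = np.linspace(0.0, 10.0, 1000)          # arbitrary time units
model = background(t) + flare(t, t0=4.0, amplitude=0.5, sigma_rise=0.1, tau_decay=1.0)
noisy = model + np.random.default_rng(2).normal(0.0, 0.02, t.size)  # Gaussian noise
print(noisy.max())
```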
DOE Office of Scientific and Technical Information (OSTI.GOV)
Childs, Andrew M.; Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139; Leung, Debbie W.
We present unified, systematic derivations of schemes in the two known measurement-based models of quantum computation. The first model (introduced by Raussendorf and Briegel [Phys. Rev. Lett. 86, 5188 (2001)]) uses a fixed entangled state, adaptive measurements on single qubits, and feedforward of the measurement results. The second model (proposed by Nielsen [Phys. Lett. A 308, 96 (2003)] and further simplified by Leung [Int. J. Quant. Inf. 2, 33 (2004)]) uses adaptive two-qubit measurements that can be applied to arbitrary pairs of qubits, and feedforward of the measurement results. The underlying principle of our derivations is a variant of teleportation introduced by Zhou, Leung, and Chuang [Phys. Rev. A 62, 052316 (2000)]. Our derivations unify these two measurement-based models of quantum computation and provide significantly simpler schemes.
NASA Astrophysics Data System (ADS)
Cisneros, Rafael; Gao, Rui; Ortega, Romeo; Husain, Iqbal
2016-10-01
The present paper proposes a maximum power extraction control for a wind system consisting of a turbine, a permanent magnet synchronous generator, a rectifier, a load and one constant voltage source, which is used to form the DC bus. We propose a linear PI controller, based on passivity, whose stability is guaranteed under practically reasonable assumptions. PI structures are widely accepted in practice as they are easier to tune and simpler than other existing model-based methods. Real switching based simulations have been performed to assess the performance of the proposed controller.
Model Hierarchies in Edge-Based Compartmental Modeling for Infectious Disease Spread
Miller, Joel C.; Volz, Erik M.
2012-01-01
We consider the family of edge-based compartmental models for epidemic spread developed in [11]. These models allow for a range of complex behaviors, and in particular allow us to explicitly incorporate duration of a contact into our mathematical models. Our focus here is to identify conditions under which simpler models may be substituted for more detailed models, and in so doing we define a hierarchy of epidemic models. In particular we provide conditions under which it is appropriate to use the standard mass action SIR model, and we show what happens when these conditions fail. Using our hierarchy, we provide a procedure leading to the choice of the appropriate model for a given population. Our result about the convergence of models to the Mass Action model gives clear, rigorous conditions under which the Mass Action model is accurate. PMID:22911242
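For reference, the standard mass-action SIR model that sits at the bottom of the hierarchy discussed above can be written in a few lines; the sketch below integrates it with SciPy, with all rate values chosen purely for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Standard mass-action SIR model:
#   dS/dt = -beta*S*I/N,  dI/dt = beta*S*I/N - gamma*I,  dR/dt = gamma*I.
# Parameter values are illustrative only.
N, beta, gamma = 1.0, 1.5, 0.5           # normalised population, transmission, recovery

def sir(t, y):
    S, I, R = y
    new_inf = beta * S * I / N
    return [-new_inf, new_inf - gamma * I, gamma * I]

sol = solve_ivp(sir, (0.0, 40.0), [0.99, 0.01, 0.0], dense_output=True)
t = np.linspace(0.0, 40.0, 200)
S, I, R = sol.sol(t)
print("peak prevalence:", I.max())
```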
AFC-Enabled Simplified High-Lift System Integration Study
NASA Technical Reports Server (NTRS)
Hartwich, Peter M.; Dickey, Eric D.; Sclafani, Anthony J.; Camacho, Peter; Gonzales, Antonio B.; Lawson, Edward L.; Mairs, Ron Y.; Shmilovich, Arvin
2014-01-01
The primary objective of this trade study report is to explore the potential of using Active Flow Control (AFC) for achieving lighter and mechanically simpler high-lift systems for transonic commercial transport aircraft. This assessment was conducted in four steps. First, based on the Common Research Model (CRM) outer mold line (OML) definition, two high-lift concepts were developed. One concept, representative of current production-type commercial transonic transports, features leading edge slats and slotted trailing edge flaps with Fowler motion. The other CRM-based design relies on drooped leading edges and simply hinged trailing edge flaps for high-lift generation. The relative high-lift performance of these two high-lift CRM variants is established using Computational Fluid Dynamics (CFD) solutions to the Reynolds-Averaged Navier-Stokes (RANS) equations for steady flow. These CFD assessments identify the high-lift performance that needs to be recovered through AFC to have the CRM variant with the lighter and mechanically simpler high-lift system match the performance of the conventional high-lift system. Conceptual design integration studies for the AFC-enhanced high-lift systems were conducted with a NASA Environmentally Responsible Aircraft (ERA) reference configuration, the so-called ERA-0003 concept. These design trades identify AFC performance targets that need to be met to produce economically feasible ERA-0003-like concepts with lighter and mechanically simpler high-lift designs that match the performance of conventional high-lift systems. Finally, technical challenges are identified associated with the application of AFC-enabled high-lift systems to modern transonic commercial transports for future technology maturation efforts.
Simulating Complex Satellites and a Space-Based Surveillance Sensor Simulation
2009-09-01
high-resolution imagery (Fig. 1). Thus other means for characterizing satellites will need to be developed. Research into non-resolvable space object... computing power and time. The second way, which we are using here, is to create simpler models of satellite bodies and use albedo-area calculations... their position, movement, size, and physical features. However, there are many satellites in orbit that are simply too small or too far away to resolve by...
Karnon, Jonathan; Haji Ali Afzali, Hossein
2014-06-01
Modelling in economic evaluation is an unavoidable fact of life. Cohort-based state transition models are most common, though discrete event simulation (DES) is increasingly being used to implement more complex model structures. The benefits of DES relate to the greater flexibility around the implementation and population of complex models, which may provide more accurate or valid estimates of the incremental costs and benefits of alternative health technologies. The costs of DES relate to the time and expertise required to implement and review complex models, when perhaps a simpler model would suffice. The costs are not borne solely by the analyst, but also by reviewers. In particular, modelled economic evaluations are often submitted to support reimbursement decisions for new technologies, for which detailed model reviews are generally undertaken on behalf of the funding body. This paper reports the results from a review of published DES-based economic evaluations. Factors underlying the use of DES were defined, and the characteristics of applied models were considered, to inform options for assessing the potential benefits of DES in relation to each factor. Four broad factors underlying the use of DES were identified: baseline heterogeneity, continuous disease markers, time varying event rates, and the influence of prior events on subsequent event rates. If relevant, individual-level data are available, representation of the four factors is likely to improve model validity, and it is possible to assess the importance of their representation in individual cases. A thorough model performance evaluation is required to overcome the costs of DES from the users' perspective, but few of the reviewed DES models reported such a process. More generally, further direct, empirical comparisons of complex models with simpler models would better inform the benefits of DES to implement more complex models, and the circumstances in which such benefits are most likely.
The sensitivity of ecosystem service models to choices of input data and spatial resolution
Bagstad, Kenneth J.; Cohen, Erika; Ancona, Zachary H.; McNulty, Steven; Sun, Ge
2018-01-01
Although ecosystem service (ES) modeling has progressed rapidly in the last 10–15 years, comparative studies on data and model selection effects have become more common only recently. Such studies have drawn mixed conclusions about whether different data and model choices yield divergent results. In this study, we compared the results of different models to address these questions at national, provincial, and subwatershed scales in Rwanda. We compared results for carbon, water, and sediment as modeled using InVEST and WaSSI using (1) land cover data at 30 and 300 m resolution and (2) three different input land cover datasets. WaSSI and simpler InVEST models (carbon storage and annual water yield) were relatively insensitive to the choice of spatial resolution, but more complex InVEST models (seasonal water yield and sediment regulation) produced large differences when applied at differing resolution. Six out of nine ES metrics (InVEST annual and seasonal water yield and WaSSI) gave similar predictions for at least two different input land cover datasets. Despite differences in mean values when using different data sources and resolution, we found significant and highly correlated results when using Spearman's rank correlation, indicating consistent spatial patterns of high and low values. Our results confirm and extend conclusions of past studies, showing that in certain cases (e.g., simpler models and national-scale analyses), results can be robust to data and modeling choices. For more complex models, those with different output metrics, and subnational to site-based analyses in heterogeneous environments, data and model choices may strongly influence study findings.
The Effect of Sensor Performance on Safe Minefield Transit
2002-12-01
the results of the simpler model are not good approximations of the results obtained with the more complex model, suggesting that even greater complexity in maneuver modeling may be desirable for some purposes.
Applying the compound Poisson process model to the reporting of injury-related mortality rates.
Kegler, Scott R
2007-02-16
Injury-related mortality rate estimates are often analyzed under the assumption that case counts follow a Poisson distribution. Certain types of injury incidents occasionally involve multiple fatalities, however, resulting in dependencies between cases that are not reflected in the simple Poisson model and which can affect even basic statistical analyses. This paper explores the compound Poisson process model as an alternative, emphasizing adjustments to some commonly used interval estimators for population-based rates and rate ratios. The adjusted estimators involve relatively simple closed-form computations, which in the absence of multiple-case incidents reduce to familiar estimators based on the simpler Poisson model. Summary data from the National Violent Death Reporting System are referenced in several examples demonstrating application of the proposed methodology.
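As a concrete illustration of the distinction drawn above, the following sketch simulates case counts under a compound Poisson process (incidents arrive as a Poisson process, and each incident contributes a random number of fatalities) and compares the resulting variance to the mean. The cluster-size distribution is an illustrative assumption, and the paper's specific interval-estimator adjustments are not reproduced.

```python
import numpy as np

# Compound Poisson sketch: the number of incidents is Poisson(lam), and each
# incident contributes a random cluster of fatalities. When every cluster has
# size one this reduces to the simple Poisson model. Values are illustrative.
rng = np.random.default_rng(3)
lam = 50.0                                  # expected incidents per period
cluster_sizes = np.array([1, 2, 3, 4])      # possible fatalities per incident
cluster_probs = np.array([0.90, 0.06, 0.03, 0.01])

def simulate_period():
    n_incidents = rng.poisson(lam)
    return rng.choice(cluster_sizes, size=n_incidents, p=cluster_probs).sum()

counts = np.array([simulate_period() for _ in range(20_000)])
print("mean deaths per period:", counts.mean())
print("variance:", counts.var())            # exceeds the mean: overdispersion
print("variance/mean ratio:", counts.var() / counts.mean())
```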
On the limitations of General Circulation Climate Models
NASA Technical Reports Server (NTRS)
Stone, Peter H.; Risbey, James S.
1990-01-01
General Circulation Models (GCMs) by definition calculate large-scale dynamical and thermodynamical processes and their associated feedbacks from first principles. This aspect of GCMs is widely believed to give them an advantage in simulating global scale climate changes as compared to simpler models which do not calculate the large-scale processes from first principles. However, it is pointed out that the meridional transports of heat simulated by GCMs used in climate change experiments differ from observational analyses and from other GCMs by as much as a factor of two. It is also demonstrated that GCM simulations of the large scale transports of heat are sensitive to the (uncertain) subgrid scale parameterizations. This leads to the question of whether current GCMs are in fact superior to simpler models for simulating temperature changes associated with global scale climate change.
NASA Technical Reports Server (NTRS)
Stieglitz, Marc; Ducharne, Agnes; Koster, Randy; Suarez, Max; Busalacchi, Antonio J. (Technical Monitor)
2000-01-01
The three-layer snow model is coupled to the global catchment-based Land Surface Model (LSM) of the NASA Seasonal to Interannual Prediction Project (NSIPP) project, and the combined models are used to simulate the growth and ablation of snow cover over the North American continent for the period 1987-1988. The various snow processes included in the three-layer model, such as snow melting and re-freezing, dynamic changes in snow density, and snow insulating properties, are shown (through a comparison with the corresponding simulation using a much simpler snow model) to lead to an improved simulation of ground thermodynamics on the continental scale.
Zhou, Kun; Gao, Chun-Fang; Zhao, Yun-Peng; Liu, Hai-Lin; Zheng, Rui-Dan; Xian, Jian-Chun; Xu, Hong-Tao; Mao, Yi-Min; Zeng, Min-De; Lu, Lun-Gen
2010-09-01
In recent years, a great interest has been dedicated to the development of noninvasive predictive models to substitute liver biopsy for fibrosis assessment and follow-up. Our aim was to provide a simpler model consisting of routine laboratory markers for predicting liver fibrosis in patients chronically infected with hepatitis B virus (HBV) in order to optimize their clinical management. Liver fibrosis was staged in 386 chronic HBV carriers who underwent liver biopsy and routine laboratory testing. Correlations between routine laboratory markers and fibrosis stage were statistically assessed. After logistic regression analysis, a novel predictive model was constructed. This S index was validated in an independent cohort of 146 chronic HBV carriers in comparison to the SLFG model, Fibrometer, Hepascore, Hui model, Forns score and APRI using receiver operating characteristic (ROC) curves. The diagnostic value of each marker panel was better than that of single routine laboratory markers. The S index, consisting of gamma-glutamyltransferase (GGT), platelets (PLT) and albumin (ALB) (S index = 1000 x GGT/(PLT x ALB^2)), had a higher diagnostic accuracy in predicting degree of fibrosis than any other mathematical model tested. The areas under the ROC curves (AUROC) were 0.812 and 0.890 for predicting significant fibrosis and cirrhosis in the validation cohort, respectively. The S index, a simpler mathematical model consisting of routine laboratory markers, predicts significant fibrosis and cirrhosis in patients with chronic HBV infection with a high degree of accuracy, potentially decreasing the need for liver biopsy.
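Because the S index is a closed-form expression of three routine labs, it can be computed directly; the sketch below implements the formula quoted above. The example values are hypothetical, the units are assumed to match those used by the authors, and no diagnostic cut-offs are implied.

```python
# S index from the abstract: S = 1000 * GGT / (PLT * ALB^2), using
# gamma-glutamyltransferase (GGT), platelet count (PLT) and albumin (ALB).
# The example values below are hypothetical; no diagnostic thresholds are implied.
def s_index(ggt, plt, alb):
    return 1000.0 * ggt / (plt * alb ** 2)

# Hypothetical patient values
print(round(s_index(ggt=45.0, plt=180.0, alb=40.0), 4))
```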
Preparation of name and address data for record linkage using hidden Markov models
Churches, Tim; Christen, Peter; Lim, Kim; Zhu, Justin Xi
2002-01-01
Background Record linkage refers to the process of joining records that relate to the same entity or event in one or more data collections. In the absence of a shared, unique key, record linkage involves the comparison of ensembles of partially-identifying, non-unique data items between pairs of records. Data items with variable formats, such as names and addresses, need to be transformed and normalised in order to validly carry out these comparisons. Traditionally, deterministic rule-based data processing systems have been used to carry out this pre-processing, which is commonly referred to as "standardisation". This paper describes an alternative approach to standardisation, using a combination of lexicon-based tokenisation and probabilistic hidden Markov models (HMMs). Methods HMMs were trained to standardise typical Australian name and address data drawn from a range of health data collections. The accuracy of the results was compared to that produced by rule-based systems. Results Training of HMMs was found to be quick and did not require any specialised skills. For addresses, HMMs produced equal or better standardisation accuracy than a widely-used rule-based system. However, accuracy was worse when used with simpler name data. Possible reasons for this poorer performance are discussed. Conclusion Lexicon-based tokenisation and HMMs provide a viable and effort-effective alternative to rule-based systems for pre-processing more complex variably formatted data such as addresses. Further work is required to improve the performance of this approach with simpler data such as names. Software which implements the methods described in this paper is freely available under an open source license for other researchers to use and improve. PMID:12482326
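The core of the HMM-based standardisation step is assigning a hidden label (such as house number, street name, or street type) to each token of an address; a minimal Viterbi-decoding sketch is given below. The state set, lexicon, and probabilities are invented for illustration and bear no relation to the trained models described in the paper.

```python
import math

# Minimal Viterbi sketch for labelling address tokens with hidden states.
# States, transition/emission probabilities and the lexicon are invented
# placeholders; a real system would train these from data, as the paper describes.
states = ["house_number", "street_name", "street_type"]
start_p = {"house_number": 0.6, "street_name": 0.3, "street_type": 0.1}
trans_p = {
    "house_number": {"house_number": 0.1, "street_name": 0.8, "street_type": 0.1},
    "street_name":  {"house_number": 0.05, "street_name": 0.45, "street_type": 0.5},
    "street_type":  {"house_number": 0.3, "street_name": 0.4, "street_type": 0.3},
}

def emission(state, token):
    """Toy emission model based on simple token features."""
    if token.isdigit():
        return 0.9 if state == "house_number" else 0.05
    if token.lower() in {"st", "street", "rd", "road", "ave", "avenue"}:
        return 0.85 if state == "street_type" else 0.05
    return 0.7 if state == "street_name" else 0.1

def viterbi(tokens):
    V = [{s: math.log(start_p[s]) + math.log(emission(s, tokens[0])) for s in states}]
    back = []
    for tok in tokens[1:]:
        row, ptr = {}, {}
        for s in states:
            best_prev = max(states, key=lambda p: V[-1][p] + math.log(trans_p[p][s]))
            row[s] = V[-1][best_prev] + math.log(trans_p[best_prev][s]) + math.log(emission(s, tok))
            ptr[s] = best_prev
        V.append(row)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

print(list(zip("42 main st".split(), viterbi("42 main st".split()))))
```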
Chaining for Flexible and High-Performance Key-Value Systems
2012-09-01
store that is fault tolerant achieves high performance and availability, and offers strong data consistency? We present a new replication protocol...effective high performance data access and analytics, many sites use simpler data model "NoSQL" systems. These systems store and retrieve data only by...DRAM, Flash, and disk-based storage; can act as an unreliable cache or a durable store; and can offer strong or weak data consistency. The value of
Graff, Mario; Poli, Riccardo; Flores, Juan J
2013-01-01
Modeling the behavior of algorithms is the realm of evolutionary algorithm theory. From a practitioner's point of view, theory must provide some guidelines regarding which algorithm/parameters to use in order to solve a particular problem. Unfortunately, most theoretical models of evolutionary algorithms are difficult to apply to realistic situations. However, in recent work (Graff and Poli, 2008, 2010), where we developed a method to practically estimate the performance of evolutionary program-induction algorithms (EPAs), we started addressing this issue. The method was quite general; however, it suffered from some limitations: it required the identification of a set of reference problems, it required hand picking a distance measure in each particular domain, and the resulting models were opaque, typically being linear combinations of 100 features or more. In this paper, we propose a significant improvement of this technique that overcomes the three limitations of our previous method. We achieve this through the use of a novel set of features for assessing problem difficulty for EPAs which are very general, essentially based on the notion of finite difference. To show the capabilities of our technique and to compare it with our previous performance models, we create models for the same two important classes of problems (symbolic regression on rational functions and Boolean function induction) used in our previous work. We model a variety of EPAs. The comparison showed that for the majority of the algorithms and problem classes, the new method produced much simpler and more accurate models than before. To further illustrate the practicality of the technique and its generality (beyond EPAs), we have also used it to predict the performance of both autoregressive models and EPAs on the problem of wind speed forecasting, obtaining simpler and more accurate models that outperform our previous performance models in all cases.
Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.
Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J
2012-09-01
Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.
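The proposed algorithm itself is not reproduced here, but the underlying idea, eliminating unobserved state variables with a lexicographic Gröbner basis to obtain an input-output relation, can be sketched on a toy one-compartment model dx/dt = -k x with output y = c x. The parameters k and c are appended as lowest-priority generators so the computation stays over the rationals.

```python
# Toy illustration (not the paper's algorithm): for the one-compartment model
#   dx/dt = -k*x,  y = c*x
# eliminate the state x and its derivative from the polynomial system
#   xdot + k*x = 0,  y - c*x = 0,  ydot - c*xdot = 0
# using a lexicographic Groebner basis (x and xdot ordered first so they are eliminated).
from sympy import symbols, groebner

x, xdot, y, ydot, k, c = symbols("x xdot y ydot k c")
polys = [xdot + k * x, y - c * x, ydot - c * xdot]

G = groebner(polys, x, xdot, y, ydot, k, c, order="lex")

# Basis elements free of the state variables are candidate input-output equations.
io_eqs = [g for g in G.exprs if not g.has(x) and not g.has(xdot)]
print(io_eqs)   # expected to contain k*y + ydot, i.e. dy/dt + k*y = 0
```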
NASA Astrophysics Data System (ADS)
Almbladh, C.-O.; Morales, A. L.
1989-02-01
Auger CVV spectra of simple metals are generally believed to be well described by one-electron-like theories in the bulk which account for matrix elements and, in some cases, also static core-hole screening effects. We present here detailed calculations on Li, Be, Na, Mg, and Al using self-consistent bulk wave functions and proper matrix elements. The resulting spectra differ markedly from experiment and peak at too low energies. To explain this discrepancy we investigate effects of the surface and dynamical effects of the sudden disappearance of the core hole in the final state. To study core-hole effects we solve Mahan-Nozières-De Dominicis (MND) model numerically over the entire band. The core-hole potential and other parameters in the MND model are determined by self-consistent calculations of the core-hole impurity. The results are compared with simpler approximations based on the final-state rule due to von Barth and Grossmann. To study surface and mean-free-path effects we perform slab calculations for Al but use a simpler infinite-barrier model in the remaining cases. The model reproduces the slab spectra for Al with very good accuracy. In all cases investigated either the effects of the surface or the effects of the core hole give important modifications and a much improved agreement with experiment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lindskog, M., E-mail: martin.lindskog@teorfys.lu.se; Wacker, A.; Wolf, J. M.
2014-09-08
We study the operation of an 8.5 μm quantum cascade laser based on GaInAs/AlInAs lattice matched to InP using three different simulation models based on density matrix (DM) and non-equilibrium Green's function (NEGF) formulations. The latter advanced scheme serves as a validation for the simpler DM schemes and, at the same time, provides additional insight, such as the temperatures of the sub-band carrier distributions. We find that for the particular quantum cascade laser studied here, the behavior is well described by simple quantum mechanical estimates based on Fermi's golden rule. As a consequence, the DM model, which includes second order currents, agrees well with the NEGF results. Both these simulations are in accordance with previously reported data and a second regrown device.
Control algorithms and applications of the wavefront sensorless adaptive optics
NASA Astrophysics Data System (ADS)
Ma, Liang; Wang, Bin; Zhou, Yuanshen; Yang, Huizhen
2017-10-01
Compared with the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need to measure and reconstruct the wavefront. It is simpler than conventional AO in system architecture and can be applied under complex conditions. Based on an analysis of the principle and system model of the WFSless AO system, wavefront correction methods for WFSless AO were divided into two categories: model-free and model-based control algorithms. A WFSless AO system based on a model-free control algorithm typically treats the performance metric as a function of the control parameters and then uses a particular control algorithm to improve that metric. Model-based control algorithms include modal control algorithms, nonlinear control algorithms and control algorithms based on geometrical optics. After a brief description of these typical control algorithms, hybrid methods combining model-free and model-based control algorithms are summarized. Additionally, the characteristics of the various control algorithms are compared and analyzed. We also discuss the extensive applications of WFSless AO systems in free space optical communication (FSO), retinal imaging in the human eye, confocal microscopy, coherent beam combination (CBC) techniques and extended objects.
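None of the surveyed controllers is reproduced here; as a hedged illustration of the model-free idea, the sketch below implements a generic stochastic parallel gradient descent (SPGD) loop that improves a scalar performance metric by randomly perturbing the control vector. The metric, gains and dimensions are toy values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sharpness(u, u_opt):
    """Toy performance metric (higher is better). In a real wavefront-sensorless
    AO loop this would be measured from the image, e.g. focal-plane sharpness."""
    return -np.sum((u - u_opt) ** 2)

def spgd(u0, u_opt, gain=2.0, perturb=0.1, iters=500):
    """Model-free stochastic parallel gradient descent; all constants are illustrative."""
    u = u0.copy()
    for _ in range(iters):
        du = perturb * rng.choice([-1.0, 1.0], size=u.shape)   # random bipolar perturbation
        dj = sharpness(u + du, u_opt) - sharpness(u - du, u_opt)  # two-sided metric difference
        u = u + gain * dj * du                                  # gradient-like correction
    return u

u_opt = rng.normal(size=12)            # "ideal" corrector commands (toy)
u = spgd(np.zeros(12), u_opt)
print(np.linalg.norm(u - u_opt))       # residual error, should be close to 0
```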
NASA Astrophysics Data System (ADS)
Jackson-Blake, L. A.; Sample, J. E.; Wade, A. J.; Helliwell, R. C.; Skeffington, R. A.
2017-07-01
Catchment-scale water quality models are increasingly popular tools for exploring the potential effects of land management, land use change and climate change on water quality. However, the dynamic, catchment-scale nutrient models in common usage are complex, with many uncertain parameters requiring calibration, limiting their usability and robustness. A key question is whether this complexity is justified. To explore this, we developed a parsimonious phosphorus model, SimplyP, incorporating a rainfall-runoff model and a biogeochemical model able to simulate daily streamflow, suspended sediment, and particulate and dissolved phosphorus dynamics. The model's complexity was compared to that of one popular nutrient model, INCA-P, and the performance of the two models was compared in a small rural catchment in northeast Scotland. For three land use classes, fewer than six SimplyP parameters must be determined through calibration; the rest may be based on measurements, while INCA-P has around 40 unmeasurable parameters. Despite substantially simpler process representation, SimplyP performed comparably to INCA-P in both calibration and validation and produced similar long-term projections in response to changes in land management. Results support the hypothesis that INCA-P is overly complex for the study catchment. We hope our findings will help prompt wider model comparison exercises, as well as debate among the water quality modeling community as to whether today's models are fit for purpose. Simpler models such as SimplyP have the potential to be useful management and research tools, building blocks for future model development (prototype code is freely available), or benchmarks against which more complex models could be evaluated.
NASA Astrophysics Data System (ADS)
Małoszewski, P.; Zuber, A.
1982-06-01
Three new lumped-parameter models have been developed for the interpretation of environmental radioisotope data in groundwater systems. Two of these models combine other simpler models, i.e. the piston flow model is combined either with the exponential model (exponential distribution of transit times) or with the linear model (linear distribution of transit times). The third model is based on a new solution to the dispersion equation which represents real systems more adequately than the conventional solution generally applied so far. The applicability of the models was tested by the reinterpretation of several known case studies (Modry Dul, Cheju Island, Rasche Spring and Grafendorf). It has been shown that two of these models, i.e. the exponential-piston flow model and the dispersive model, give better fits than other simpler models. Thus, the obtained values of turnover times are more reliable, whereas the additional fitting parameter gives some information about the structure of the system. In the examples considered, in spite of a lower number of fitting parameters, the new models gave practically the same fits as the multiparameter finite state mixing-cell models. It has been shown that, in the case of a constant tracer input, prior physical knowledge of the groundwater system is indispensable for determining the turnover time. The piston flow model commonly used for age determinations by the 14C method is an approximation applicable only in cases of low dispersion. In some cases the stable-isotope method aids in the interpretation of systems containing mixed waters of different ages. However, when the 14C method is used for mixed-water systems, a serious error may arise from neglecting the different bicarbonate contents of the individual water components.
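The paper's exact formulation is not reproduced here; the sketch below evaluates the transit-time distribution of an exponential-piston-flow model in the form commonly quoted in the lumped-parameter literature, with the mean transit time and volume-ratio values chosen purely for illustration.

```python
import numpy as np

def epm_transit_time_pdf(t, tau, eta):
    """Transit-time distribution of the exponential-piston-flow model, in the
    form commonly quoted in the lumped-parameter literature:
        g(t) = (eta/tau) * exp(-eta*t/tau + eta - 1)   for t >= tau*(1 - 1/eta)
        g(t) = 0                                        otherwise
    tau: mean transit (turnover) time; eta: total-to-exponential volume ratio
    (eta = 1 recovers the purely exponential model). Verify against the
    original paper before quantitative use."""
    t = np.asarray(t, dtype=float)
    g = (eta / tau) * np.exp(-eta * t / tau + eta - 1.0)
    return np.where(t >= tau * (1.0 - 1.0 / eta), g, 0.0)

t = np.linspace(0.0, 200.0, 4001)                  # years
g = epm_transit_time_pdf(t, tau=25.0, eta=1.5)     # illustrative parameter values
print(g.sum() * (t[1] - t[0]))                     # should be close to 1
```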
NASA Astrophysics Data System (ADS)
Daniel, M.; Lemonsu, Aude; Déqué, M.; Somot, S.; Alias, A.; Masson, V.
2018-06-01
Most climate models do not explicitly model urban areas and at best describe them as rock covers. Nonetheless, the very high resolutions now reached by regional climate models may justify and require a more realistic parameterization of surface exchanges between the urban canopy and the atmosphere. To quantify the potential impact of urbanization on the regional climate, and to evaluate the benefits of a detailed urban canopy model compared with a simpler approach, a sensitivity study was carried out over France at a 12-km horizontal resolution with the ALADIN-Climate regional model for the 1980-2009 time period. Different descriptions of land use and urban modeling were compared, corresponding to an explicit modeling of cities with the urban canopy model TEB, a conventional and simpler approach representing urban areas as rocks, and a vegetated experiment in which cities are replaced by natural covers. A general evaluation of ALADIN-Climate was first done, which showed an overestimation of the incoming solar radiation but satisfying results in terms of precipitation and near-surface temperatures. The sensitivity analysis then highlighted that urban areas had a significant impact on modeled near-surface temperature. A further analysis of a few large French cities indicated that over the 30 years of simulation they all induced a warming effect both during daytime and nighttime, with values up to +1.5 °C for the city of Paris. The urban model also led to a regional warming extending beyond the urban area boundaries. Finally, the comparison with temperature observations available for the Paris area highlighted that the detailed urban canopy model improved the modeling of the urban heat island compared with a simpler approach.
Equation-based languages – A new paradigm for building energy modeling, simulation and optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.
2016-04-01
Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper. We explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. As a result, exploiting the equation-based language led to a 2200 times faster solution.
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1000, thereby greatly improving the applicability of the method.
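The authors' optimized implementation is not shown here; as a simple reference point for what a slower, generic baseline looks like, a 3D Gaussian kernel density estimate over synthetic (x, y, altitude) fixes can be built with SciPy.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

# Synthetic 3D "GPS fixes" (x, y, altitude in metres), for illustration only
fixes = np.vstack([
    rng.normal(0, 50, 500),      # x
    rng.normal(0, 80, 500),      # y
    rng.normal(120, 15, 500),    # z (vertical dimension)
])                                # shape (3, n) as expected by gaussian_kde

kde = gaussian_kde(fixes)                       # bandwidth via Scott's rule by default
grid_point = np.array([[0.0], [0.0], [120.0]])  # query location
print(kde(grid_point))                          # estimated utilisation density there
```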
High-Fidelity Dynamic Modeling of Spacecraft in the Continuum--Rarefied Transition Regime
NASA Astrophysics Data System (ADS)
Turansky, Craig P.
The state of the art of spacecraft rarefied aerodynamics seldom accounts for detailed rigid-body dynamics. In part because of computational constraints, simpler models based upon the ballistic and drag coefficients are employed. Of particular interest is the continuum-rarefied transition regime of Earth's thermosphere where gas dynamic simulation is difficult yet wherein many spacecraft operate. The feasibility of increasing the fidelity of modeling spacecraft dynamics is explored by coupling rarefied aerodynamics with rigid-body dynamics modeling similar to that traditionally used for aircraft in atmospheric flight. Presented is a framework of analysis and guiding principles which capitalize on the availability of increasing computational methods and resources. Aerodynamic force inputs for modeling spacecraft in two dimensions in a rarefied flow are provided by analytical equations in the free-molecular regime, and the direct simulation Monte Carlo method in the transition regime. The application of the direct simulation Monte Carlo method to this class of problems is examined in detail with a new code specifically designed for engineering-level rarefied aerodynamic analysis. Time-accurate simulations of two distinct geometries in low thermospheric flight and atmospheric entry are performed, demonstrating non-linear dynamics that cannot be predicted using simpler approaches. The results of this straightforward approach to the aero-orbital coupled-field problem highlight the possibilities for future improvements in drag prediction, control system design, and atmospheric science. Furthermore, a number of challenges for future work are identified in the hope of stimulating the development of a new subfield of spacecraft dynamics.
Appleton, D J; Rand, J S; Sunvold, G D
2005-06-01
The objective of this study was to compare simpler indices of insulin sensitivity with the minimal model-derived insulin sensitivity index to identify a simple and reliable alternative method for assessing insulin sensitivity in cats. In addition, we aimed to determine whether this simpler measure or measures showed consistency of association across differing body weights and glucose tolerance levels. Data from glucose tolerance and insulin sensitivity tests performed in 32 cats with varying body weights (underweight to obese), including seven cats with impaired glucose tolerance, were used to assess the relationship between Bergman's minimal model-derived insulin sensitivity index (S(I)), and various simpler measures of insulin sensitivity. The most useful overall predictors of insulin sensitivity were basal plasma insulin concentrations and the homeostasis model assessment (HOMA), which is the product of basal glucose and insulin concentrations divided by 22.5. It is concluded that measurement of plasma insulin concentrations in cats with food withheld for 24 h, in conjunction with HOMA, could be used in clinical research projects and by practicing veterinarians to screen for reduced insulin sensitivity in cats. Such cats may be at increased risk of developing impaired glucose tolerance and type 2 diabetes mellitus. Early detection of these cats would enable preventative intervention programs such as weight reduction, increased physical activity and dietary modifications to be instigated.
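As a small worked example of the index described above (the values are hypothetical; the units shown are the ones conventionally paired with the 22.5 divisor and should be confirmed against the study):

```python
def homa(glucose_mmol_per_l: float, insulin_mu_u_per_ml: float) -> float:
    """Homeostasis model assessment as described in the abstract:
    (basal glucose x basal insulin) / 22.5. Units are assumptions for illustration."""
    return glucose_mmol_per_l * insulin_mu_u_per_ml / 22.5

# Hypothetical fasted values
print(round(homa(glucose_mmol_per_l=5.5, insulin_mu_u_per_ml=12.0), 2))  # 2.93
```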
An ODE-Based Wall Model for Turbulent Flow Simulations
NASA Technical Reports Server (NTRS)
Berger, Marsha J.; Aftosmis, Michael J.
2017-01-01
Fully automated meshing for Reynolds-Averaged Navier-Stokes (RANS) simulations. Mesh generation for complex geometry continues to be the biggest bottleneck in the RANS simulation process. Fully automated Cartesian methods are routinely used for inviscid simulations about arbitrarily complex geometry, but these methods lack an obvious and robust way to achieve near-wall anisotropy. Goal: extend these methods for RANS simulation without sacrificing automation, at an affordable cost. Note: nothing here is limited to Cartesian methods, and much becomes simpler in a body-fitted setting.
Simulation of a navigator algorithm for a low-cost GPS receiver
NASA Technical Reports Server (NTRS)
Hodge, W. F.
1980-01-01
The analytical structure of an existing navigator algorithm for a low cost global positioning system receiver is described in detail to facilitate its implementation on in-house digital computers and real-time simulators. The material presented includes a simulation of GPS pseudorange measurements, based on a two-body representation of the NAVSTAR spacecraft orbits, and a four component model of the receiver bias errors. A simpler test for loss of pseudorange measurements due to spacecraft shielding is also noted.
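The report's simulation is not reproduced here; below is a minimal sketch of the measurement relation it builds on, pseudorange = geometric range + receiver clock bias (in range units) + noise, with hypothetical satellite positions and without the two-body orbit propagation or the four-component bias model.

```python
import numpy as np

rng = np.random.default_rng(2)
C = 299_792_458.0  # speed of light, m/s

def pseudoranges(sat_positions_m, receiver_pos_m, clock_bias_s, noise_sigma_m=3.0):
    """Toy pseudorange measurements: geometric range plus the range equivalent of
    the receiver clock bias plus Gaussian noise. Satellite clock, ionospheric and
    tropospheric terms are ignored in this sketch."""
    ranges = np.linalg.norm(sat_positions_m - receiver_pos_m, axis=1)
    return ranges + C * clock_bias_s + rng.normal(0.0, noise_sigma_m, size=ranges.shape)

# Hypothetical satellite positions (ECEF, metres) and a receiver on the surface
sats = np.array([
    [15600e3,  7540e3, 20140e3],
    [18760e3,  2750e3, 18610e3],
    [17610e3, 14630e3, 13480e3],
    [19170e3,   610e3, 18390e3],
])
receiver = np.array([6371e3, 0.0, 0.0])
print(pseudoranges(sats, receiver, clock_bias_s=1e-6))
```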
Planar dielectric waveguides in rotation are optical fibers: comparison with the classical model.
Peña García, Antonio; Pérez-Ocón, Francisco; Jiménez, José Ramón
2008-01-21
A novel and simpler method to calculate the main parameters in fiber optics is presented. This method is based on a planar dielectric waveguide in rotation and, as an example, it is applied to calculate the turning points and the inner caustic in an optical fiber with a parabolic refractive index. It is shown that the solution found using this method agrees with the standard (and more complex) method, whose solutions for these points are also summarized in this paper.
Disruptive innovation for social change.
Christensen, Clayton M; Baumann, Heiner; Ruggles, Rudy; Sadtler, Thomas M
2006-12-01
Countries, organizations, and individuals around the globe spend aggressively to solve social problems, but these efforts often fail to deliver. Misdirected investment is the primary reason for that failure. Most of the money earmarked for social initiatives goes to organizations that are structured to support specific groups of recipients, often with sophisticated solutions. Such organizations rarely reach the broader populations that could be served by simpler alternatives. There is, however, an effective way to get to those underserved populations. The authors call it "catalytic innovation." Based on Clayton Christensen's disruptive-innovation model, catalytic innovations challenge organizational incumbents by offering simpler, good-enough solutions aimed at underserved groups. Unlike disruptive innovations, though, catalytic innovations are focused on creating social change. Catalytic innovators are defined by five distinct qualities. First, they create social change through scaling and replication. Second, they meet a need that is either overserved (that is, the existing solution is more complex than necessary for many people) or not served at all. Third, the products and services they offer are simpler and cheaper than alternatives, but recipients view them as good enough. Fourth, they bring in resources in ways that initially seem unattractive to incumbents. And fifth, they are often ignored, put down, or even encouraged by existing organizations, which don't see the catalytic innovators' solutions as viable. As the authors show through examples in health care, education, and economic development, both nonprofit and for-profit groups are finding ways to create catalytic innovation that drives social change.
COSP - A computer model of cyclic oxidation
NASA Technical Reports Server (NTRS)
Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.
1991-01-01
A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
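COSP itself is not reproduced here; the sketch below shows the kind of cycle-by-cycle bookkeeping the abstract describes, with parabolic scale growth during each hot period and a uniform spall fraction on cooling. All constants are placeholders, not values from the model.

```python
import numpy as np

def cyclic_oxidation(n_cycles=200, kp=0.01, spall_frac=0.05, oxide_o_frac=0.47, dt=1.0):
    """Toy uniform-spalling cyclic oxidation bookkeeping (constants are placeholders).
    kp: parabolic rate constant (mg^2 cm^-4 h^-1); spall_frac: fraction of the retained
    scale lost on each cooldown; oxide_o_frac: oxygen mass fraction of the oxide
    (~0.47 for Al2O3); dt: hot time per cycle (h)."""
    retained = 0.0        # oxide currently attached, mg/cm^2
    spalled_total = 0.0   # cumulative oxide lost, mg/cm^2
    net_weight = []
    for _ in range(n_cycles):
        grown = np.sqrt(retained**2 + kp * dt)   # parabolic growth of the attached scale
        lost = spall_frac * grown                # uniform spall on cooling
        retained = grown - lost
        spalled_total += lost
        # specimen weight change = oxygen held in the retained scale
        #                          minus metal carried away in spalled oxide
        net_weight.append(oxide_o_frac * retained - (1 - oxide_o_frac) * spalled_total)
    return np.array(net_weight)

w = cyclic_oxidation()
print(w[0], w[-1])   # early net weight gain, later net loss as spallation dominates
```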
Simplified process model discovery based on role-oriented genetic mining.
Zhao, Weidong; Liu, Xi; Dai, Weihui
2014-01-01
Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts the understandability and quality of the model. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new metric of process complexity in terms of roles as the fitness function, we can find simpler process models. The new role complexity metric of process models is derived from role cohesion and coupling, and applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective for streamlining the process than related approaches.
Holomorphic solutions of the susy Grassmannian σ-model and gauge invariance
NASA Astrophysics Data System (ADS)
Hussin, V.; Lafrance, M.; Yurduşen, İ.; Zakrzewski, W. J.
2018-05-01
We study the gauge invariance of the supersymmetric Grassmannian sigma model. It is richer than its purely bosonic submodel, and we show how to use it in order to reduce some constant curvature holomorphic solutions of the model into simpler expressions.
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when a standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
Measurement system and model for simultaneously measuring 6DOF geometric errors.
Zhao, Yuqiong; Zhang, Bin; Feng, Qibo
2017-09-04
A measurement system to simultaneously measure six degree-of-freedom (6DOF) geometric errors is proposed. The measurement method is based on a combination of mono-frequency laser interferometry and laser fiber collimation. A simpler and more integrated optical configuration is designed. To compensate for the measurement errors introduced by error crosstalk, element fabrication error, laser beam drift, and nonparallelism of the two measurement beams, a unified measurement model, which can improve the measurement accuracy, is deduced and established using the ray-tracing method. A numerical simulation using the optical design software Zemax is conducted, and the results verify the correctness of the model. Several experiments are performed to demonstrate the feasibility and effectiveness of the proposed system and measurement model.
NASA Astrophysics Data System (ADS)
Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed
2017-01-01
For the first time, a new variable selection method based on swarm intelligence namely firefly algorithm is coupled with three different multivariate calibration models namely, concentration residual augmented classical least squares, artificial neural network and support vector regression in UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was developed. The discussion revealed the superiority of using this new powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between all the models regarding their predictabilities. This ensures that simpler and faster models were obtained without any deterioration of the quality of the calibration.
Calibrating cellular automaton models for pedestrians walking through corners
NASA Astrophysics Data System (ADS)
Dias, Charitha; Lovreglio, Ruggiero
2018-05-01
Cellular Automata (CA) based pedestrian simulation models have gained remarkable popularity as they are simpler and easier to implement compared to other microscopic modeling approaches. However, incorporating traditional floor field representations in CA models to simulate pedestrian corner navigation behavior could result in unrealistic behaviors. Even though several previous studies have attempted to enhance CA models to realistically simulate pedestrian maneuvers around bends, such modifications have not been calibrated or validated against empirical data. In this study, two static floor field (SFF) representations, namely 'discrete representation' and 'continuous representation', are calibrated for CA-models to represent pedestrians' walking behavior around 90° bends. Trajectory data collected through a controlled experiment are used to calibrate these model representations. Calibration results indicate that although both floor field representations can represent pedestrians' corner navigation behavior, the 'continuous' representation fits the data better. Output of this study could be beneficial for enhancing the reliability of existing CA-based models by representing pedestrians' corner navigation behaviors more realistically.
Finite volume model for two-dimensional shallow environmental flow
Simoes, F.J.M.
2011-01-01
This paper presents the development of a two-dimensional, depth integrated, unsteady, free-surface model based on the shallow water equations. The development was motivated by the desire of balancing computational efficiency and accuracy by selective and conjunctive use of different numerical techniques. The base framework of the discrete model uses Godunov methods on unstructured triangular grids, but the solution technique emphasizes the use of a high-resolution Riemann solver where needed, switching to a simpler and computationally more efficient upwind finite volume technique in the smooth regions of the flow. Explicit time marching is accomplished with strong stability preserving Runge-Kutta methods, with additional acceleration techniques for steady-state computations. A simplified mass-preserving algorithm is used to deal with wet/dry fronts. Application of the model is made to several benchmark cases that show the interplay of the diverse solution techniques.
Potential formulation of sleep dynamics
NASA Astrophysics Data System (ADS)
Phillips, A. J. K.; Robinson, P. A.
2009-02-01
A physiologically based model of the mechanisms that control the human sleep-wake cycle is formulated in terms of an equivalent nonconservative mechanical potential. The potential is analytically simplified and reduced to a quartic two-well potential, matching the bifurcation structure of the original model. This yields a dynamics-based model that is analytically simpler and has fewer parameters than the original model, allowing easier fitting to experimental data. This model is first demonstrated to semiquantitatively match the dynamics of the physiologically based model from which it is derived, and is then fitted directly to a set of experimentally derived criteria. These criteria place rigorous constraints on the parameter values, and within these constraints the model is shown to reproduce normal sleep-wake dynamics and recovery from sleep deprivation. Furthermore, this approach enables insights into the dynamics by direct analogies to phenomena in well studied mechanical systems. These include the relation between friction in the mechanical system and the timecourse of neurotransmitter action, and the possible relation between stochastic resonance and napping behavior. The model derived here also serves as a platform for future investigations of sleep-wake phenomena from a dynamical perspective.
Towards a climate-dependent paradigm of ammonia emission and deposition
Existing descriptions of bi-directional ammonia (NH3) land–atmosphere exchange incorporate temperature and moisture controls, and are beginning to be used in regional chemical transport models. However, such models have typically applied simpler emission factors to upscale ...
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-07-01
Ocean biogeochemistry (OBGC) models span a wide range of complexities from highly simplified, nutrient-restoring schemes, through nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, through to models that represent a broader trophic structure by grouping organisms as plankton functional types (PFT) based on their biogeochemical role (Dynamic Green Ocean Models; DGOM) and ecosystem models which group organisms by ecological function and trait. OBGC models are now integral components of Earth System Models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here, we present an inter-comparison of six OBGC models that were candidates for implementation within the next UK Earth System Model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the Nucleus for the European Modelling of the Ocean (NEMO) ocean general circulation model (GCM), and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform or underperform all other models across all metrics. Nonetheless, the simpler models that are easier to tune are broadly closer to observations across a number of fields, and thus offer a high-efficiency option for ESMs that prioritise high resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low resolution climate dynamics and high complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Kwiatkowski, L.; Yool, A.; Allen, J. I.; Anderson, T. R.; Barciela, R.; Buitenhuis, E. T.; Butenschön, M.; Enright, C.; Halloran, P. R.; Le Quéré, C.; de Mora, L.; Racault, M.-F.; Sinha, B.; Totterdell, I. J.; Cox, P. M.
2014-12-01
Ocean biogeochemistry (OBGC) models span a wide variety of complexities, including highly simplified nutrient-restoring schemes, nutrient-phytoplankton-zooplankton-detritus (NPZD) models that crudely represent the marine biota, models that represent a broader trophic structure by grouping organisms as plankton functional types (PFTs) based on their biogeochemical role (dynamic green ocean models) and ecosystem models that group organisms by ecological function and trait. OBGC models are now integral components of Earth system models (ESMs), but they compete for computing resources with higher resolution dynamical setups and with other components such as atmospheric chemistry and terrestrial vegetation schemes. As such, the choice of OBGC in ESMs needs to balance model complexity and realism alongside relative computing cost. Here we present an intercomparison of six OBGC models that were candidates for implementation within the next UK Earth system model (UKESM1). The models cover a large range of biological complexity (from 7 to 57 tracers) but all include representations of at least the nitrogen, carbon, alkalinity and oxygen cycles. Each OBGC model was coupled to the ocean general circulation model Nucleus for European Modelling of the Ocean (NEMO) and results from physically identical hindcast simulations were compared. Model skill was evaluated for biogeochemical metrics of global-scale bulk properties using conventional statistical techniques. The computing cost of each model was also measured in standardised tests run at two resource levels. No model is shown to consistently outperform all other models across all metrics. Nonetheless, the simpler models are broadly closer to observations across a number of fields and thus offer a high-efficiency option for ESMs that prioritise high-resolution climate dynamics. However, simpler models provide limited insight into more complex marine biogeochemical processes and ecosystem pathways, and a parallel approach of low-resolution climate dynamics and high-complexity biogeochemistry is desirable in order to provide additional insights into biogeochemistry-climate interactions.
NASA Astrophysics Data System (ADS)
Samborski, Sylwester; Valvo, Paolo S.
2018-01-01
The paper deals with the numerical and analytical modelling of the end-loaded split test for multi-directional laminates affected by the typical elastic couplings. Numerical analysis of three-dimensional finite element models was performed with the Abaqus software exploiting the virtual crack closure technique (VCCT). The results show possible asymmetries in the widthwise deflections of the specimen, as well as in the strain energy release rate (SERR) distributions along the delamination front. Analytical modelling based on a beam-theory approach was also conducted in simpler cases, where only bending-extension coupling is present, but no out-of-plane effects. The analytical results matched the numerical ones, thus demonstrating that the analytical models are feasible for test design and experimental data reduction.
Jbabdi, Saad; Sotiropoulos, Stamatios N; Savio, Alexander M; Graña, Manuel; Behrens, Timothy EJ
2012-01-01
In this article, we highlight an issue that arises when using multiple b-values in a model-based analysis of diffusion MR data for tractography. The non-mono-exponential decay, commonly observed in experimental data, is shown to induce over-fitting in the distribution of fibre orientations when not considered in the model. Extra fibre orientations perpendicular to the main orientation arise to compensate for the slower apparent signal decay at higher b-values. We propose a simple extension to the ball and stick model based on a continuous Gamma distribution of diffusivities, which significantly improves the fitting and reduces the over-fitting. Using in-vivo experimental data, we show that this model outperforms a simpler, noise floor model, especially at the interfaces between brain tissues, suggesting that partial volume effects are a major cause of the observed non-mono-exponential decay. This model may be helpful for future data acquisition strategies that may attempt to combine multiple shells to improve estimates of fibre orientations in white matter and near the cortex. PMID:22334356
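The full ball-and-stick extension is not reproduced here; the distributional ingredient it adds, replacing exp(-bD) by its average over a Gamma distribution of diffusivities, has the closed form (1 + b·theta)^(-k) and is illustrated below with assumed parameter values.

```python
import numpy as np

def mono_exponential(b, d):
    """Single-diffusivity (mono-exponential) signal decay."""
    return np.exp(-b * d)

def gamma_mixture(b, shape_k, scale_theta):
    """Signal decay averaged over a Gamma distribution of diffusivities,
    E[exp(-b*D)] with D ~ Gamma(k, theta), which has the closed form
    (1 + b*theta)^(-k); the mean diffusivity is k*theta. Illustrative values only;
    this is the distributional ingredient, not the full ball-and-stick model."""
    return (1.0 + b * scale_theta) ** (-shape_k)

b = np.array([0.0, 1000.0, 2000.0, 3000.0])   # s/mm^2
d_mean = 0.8e-3                               # mm^2/s, a typical tissue-scale value
print(mono_exponential(b, d_mean))
print(gamma_mixture(b, shape_k=2.0, scale_theta=d_mean / 2.0))  # same mean, slower tail
```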
A Toy Model of Electrodynamics in (1 + 1) Dimensions
ERIC Educational Resources Information Center
Boozer, A. D.
2007-01-01
A model is presented that describes a scalar field interacting with a point particle in (1+1) dimensions. The model exhibits many of the same phenomena that appear in classical electrodynamics, such as radiation and radiation damping, yet has a much simpler mathematical structure. By studying these phenomena in a highly simplified model, the…
The Simplest Complete Model of Choice Response Time: Linear Ballistic Accumulation
ERIC Educational Resources Information Center
Brown, Scott D.; Heathcote, Andrew
2008-01-01
We propose a linear ballistic accumulator (LBA) model of decision making and reaction time. The LBA is simpler than other models of choice response time, with independent accumulators that race towards a common response threshold. Activity in the accumulators increases in a linear and deterministic manner. The simplicity of the model allows…
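As a compact, hedged illustration of the model's structure (not the authors' implementation), a single LBA trial can be simulated by giving each accumulator a uniform start point and a normally distributed drift rate and racing them to a common threshold; all parameter values below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(3)

def lba_trial(drifts=(0.8, 0.6), sd_drift=0.25, upper_start=0.5, threshold=1.0, t0=0.2):
    """Simulate one linear ballistic accumulator trial (illustrative parameters).
    Each accumulator starts at Uniform(0, upper_start), rises linearly with a drift
    rate drawn from Normal(mean, sd_drift), and the first to reach the common
    threshold determines the response and its time (plus non-decision time t0)."""
    starts = rng.uniform(0.0, upper_start, size=len(drifts))
    rates = rng.normal(drifts, sd_drift)
    rates = np.where(rates <= 0, 1e-6, rates)   # crude guard against non-positive rates
    times = (threshold - starts) / rates
    choice = int(np.argmin(times))
    return choice, t0 + times[choice]

trials = [lba_trial() for _ in range(5000)]
choices = np.array([c for c, _ in trials])
rts = np.array([t for _, t in trials])
print(choices.mean(), rts.mean())   # share won by the slower accumulator, mean RT
```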
On supermatrix models, Poisson geometry, and noncommutative supersymmetric gauge theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klimčík, Ctirad
2015-12-15
We construct a new supermatrix model which represents a manifestly supersymmetric noncommutative regularisation of the UOSp(2|1) supersymmetric Schwinger model on the supersphere. Our construction is much simpler than those already existing in the literature and it was found by using Poisson geometry in a substantial way.
Strategic directions for agent-based modeling: avoiding the YAAWN syndrome
O’Sullivan, David; Evans, Tom; Manson, Steven; Metcalf, Sara; Ligmann-Zielinska, Arika; Bone, Chris
2015-01-01
In this short communication, we examine how agent-based modeling has become common in land change science and is increasingly used to develop case studies for particular times and places. There is a danger that the research community is missing a prime opportunity to learn broader lessons from the use of agent-based modeling (ABM), or at the very least not sharing these lessons more widely. How do we find an appropriate balance between empirically rich, realistic models and simpler theoretically grounded models? What are appropriate and effective approaches to model evaluation in light of uncertainties not only in model parameters but also in model structure? How can we best explore hybrid model structures that enable us to better understand the dynamics of the systems under study, recognizing that no single approach is best suited to this task? Under what circumstances – in terms of model complexity, model evaluation, and model structure – can ABMs be used most effectively to lead to new insight for stakeholders? We explore these questions in the hope of helping the growing community of land change scientists using models in their research to move from ‘yet another model’ to doing better science with models. PMID:27158257
NASA Technical Reports Server (NTRS)
Hackett, J. E.; Sampath, S.; Phillips, C. G.
1981-01-01
The development of an improved jet-in-crossflow model for estimating wind tunnel blockage and angle-of-attack interference is described. Experiments showed that the simpler existing models fall seriously short of representing far-field flows properly. A new, vortex-source-doublet (VSD) model was therefore developed which employs curved trajectories and experimentally-based singularity strengths. The new model is consistent with existing and new experimental data and it predicts tunnel wall (i.e. far-field) pressures properly. It is implemented as a preprocessor to the wall-pressure-signature-based tunnel interference predictor. The supporting experiments and theoretical studies revealed some new results. Comparative flow field measurements with 1-inch "free-air" and 3-inch impinging jets showed that vortex penetration into the flow, in diameters, was almost unaltered until 'hard' impingement occurred. In modeling impinging cases, a 'plume redirection' term was introduced which is apparently absent in previous models. The effects of this term were found to be very significant.
Simultaneous Co-Clustering and Classification in Customers Insight
NASA Astrophysics Data System (ADS)
Anggistia, M.; Saefuddin, A.; Sartono, B.
2017-04-01
Building a predictive model on a heterogeneous dataset may cause many problems, such as imprecise parameter estimates and poor prediction accuracy. Such problems can be addressed by segmenting the data into relatively homogeneous groups and then building a predictive model for each cluster. This strategy usually yields models that are simpler, more interpretable, and more actionable, without any loss in accuracy or reliability. This work concerns a marketing data set that records customer behaviour across products, with several variables describing customer and product attributes. The basic idea of the approach is to combine co-clustering and classification simultaneously. The objective of this research is to analyse customer characteristics across products so that marketing strategies can be targeted precisely.
A simple computational algorithm of model-based choice preference.
Toyama, Asako; Katahira, Kentaro; Ohira, Hideki
2017-08-01
A broadly used computational framework posits that two learning systems operate in parallel during the learning of choice preferences, namely the model-free and model-based reinforcement-learning systems. In this study, we examined another possibility, through which model-free learning is the basic system and model-based information is its modulator. Accordingly, we proposed several modified versions of a temporal-difference learning model to explain the choice-learning process. Using the two-stage decision task developed by Daw, Gershman, Seymour, Dayan, and Dolan (2011), we compared their original computational model, which assumes a parallel learning process, and our proposed models, which assume a sequential learning process. Choice data from 23 participants showed a better fit with the proposed models. More specifically, the proposed eligibility adjustment model, which assumes that the environmental model can weight the degree of the eligibility trace, can explain choices better under both model-free and model-based controls and has a simpler computational algorithm than the original model. In addition, the forgetting learning model and its variation, which assume changes in the values of unchosen actions, substantially improved the fits to the data. Overall, we show that a hybrid computational model best fits the data. The parameters used in this model succeed in capturing individual tendencies with respect to both model use in learning and exploration behavior. This computational model provides novel insights into learning with interacting model-free and model-based components.
Toward a More Robust Pruning Procedure for MLP Networks
NASA Technical Reports Server (NTRS)
Stepniewski, Slawomir W.; Jorgensen, Charles C.
1998-01-01
Choosing a proper neural network architecture is a problem of great practical importance. Smaller models mean not only simpler designs but also lower variance for parameter estimation and network prediction. The widespread utilization of neural networks in modeling highlights an issue in human factors. The procedure of building neural models should find an appropriate level of model complexity in a more or less automatic fashion to make it less prone to human subjectivity. In this paper we present a Singular Value Decomposition based node elimination technique and enhanced implementation of the Optimal Brain Surgeon algorithm. Combining both methods creates a powerful pruning engine that can be used for tuning feedforward connectionist models. The performance of the proposed method is demonstrated by adjusting the structure of a multi-input multi-output model used to calibrate a six-component wind tunnel strain gage.
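The combined SVD-plus-OBS pruning engine is not reproduced here; the sketch below shows only the basic Optimal Brain Surgeon step, the saliency w_q^2 / (2 [H^-1]_qq) followed by the corrective update of the remaining weights, on a toy linear model where the squared-error Hessian is exact.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy linear model so the error Hessian is exact (H = X^T X for squared error).
# This illustrates the Optimal Brain Surgeon step only, not the paper's full engine.
X = rng.normal(size=(200, 6))
true_w = np.array([2.0, -1.0, 0.0, 0.5, 0.0, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.linalg.lstsq(X, y, rcond=None)[0]        # trained weights
H_inv = np.linalg.inv(X.T @ X)                  # inverse Hessian of the squared error

saliency = w**2 / (2.0 * np.diag(H_inv))        # OBS saliency of each weight
q = int(np.argmin(saliency))                    # least important weight
delta_w = -(w[q] / H_inv[q, q]) * H_inv[:, q]   # OBS correction to the remaining weights
w_pruned = w + delta_w                          # weight q is now (numerically) zero

print(q, np.round(w_pruned, 3))
```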
Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling
NASA Astrophysics Data System (ADS)
Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.
2017-12-01
Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: uncertainty in HETT is relatively small for early times and increases with transit times; uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution; introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias; and hydraulic head observations alone cannot constrain the uncertainty of HETT, whereas an estimate of hyporheic exchange flux proves more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model. This complex model then serves as the basis to compare simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.
Applicability of Similarity Principles to Structural Models
NASA Technical Reports Server (NTRS)
Goodier, J N; Thomson, W T
1944-01-01
A systematic account is given in part I of the use of dimensional analysis in constructing similarity conditions for models and structures. The analysis covers large deflections, buckling, plastic behavior, and materials with nonlinear stress-strain characteristics, as well as the simpler structural problems. (author)
Global horizontal irradiance clear sky models : implementation and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Joshua S.; Hansen, Clifford W.; Reno, Matthew J.
2012-03-01
Clear sky models estimate the terrestrial solar radiation under a cloudless sky as a function of the solar elevation angle, site altitude, aerosol concentration, water vapor, and various atmospheric conditions. This report provides an overview of a number of global horizontal irradiance (GHI) clear sky models from very simple to complex. Validation of clear-sky models requires comparison of model results to measured irradiance during clear-sky periods. To facilitate validation, we present a new algorithm for automatically identifying clear-sky periods in a time series of GHI measurements. We evaluate the performance of selected clear-sky models using measured data from 30 different sites, totaling about 300 site-years of data. We analyze the variation of these errors across time and location. In terms of error averaged over all locations and times, we found that complex models that correctly account for all the atmospheric parameters are slightly more accurate than other models, but, primarily at low elevations, comparable accuracy can be obtained from some simpler models. However, simpler models often exhibit errors that vary with time of day and season, whereas the errors for complex models vary less over time.
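None of the report's specific models is reproduced here; as a hedged illustration of what a very simple clear-sky model looks like, the sketch below uses the functional form a·cos(z)·exp(-b/cos(z)). The default coefficients only approximately correspond to the commonly quoted Haurwitz model and should be treated as placeholders to be verified against the report.

```python
import numpy as np

def simple_clear_sky_ghi(zenith_deg, a=1098.0, b=0.059):
    """Very simple clear-sky GHI model of the form a*cos(z)*exp(-b/cos(z)).
    The default coefficients approximately match the commonly quoted Haurwitz
    model; treat them as placeholders and verify against the original reference
    before quantitative use. Returns W/m^2, zero for sun below the horizon."""
    cz = np.cos(np.radians(np.asarray(zenith_deg, dtype=float)))
    return np.where(cz > 0, a * cz * np.exp(-b / np.maximum(cz, 1e-6)), 0.0)

print(simple_clear_sky_ghi([0.0, 30.0, 60.0, 85.0]))  # GHI versus solar zenith angle
```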
Optical eye simulator for laser dazzle events.
Coelho, João M P; Freitas, José; Williamson, Craig A
2016-03-20
An optical simulator of the human eye and its application to laser dazzle events are presented. The simulator combines optical design software (ZEMAX) with a scientific programming language (MATLAB) and allows the user to implement and analyze a dazzle scenario using practical, real-world parameters. Contrary to conventional analytical glare analysis, this work uses ray tracing and the scattering model and parameters for each optical element of the eye. The theoretical background of each such element is presented in relation to the model. The overall simulator's calibration, validation, and performance analysis are achieved by comparison with a simpler model based upon CIE disability glare data. Results demonstrate that this kind of advanced optical eye simulation can be used to represent laser dazzle and has the potential to extend the range of applicability of analytical models.
Attia, Khalid A M; Nassar, Mohammed W I; El-Zeiny, Mohamed B; Serag, Ahmed
2017-01-05
For the first time, a new variable selection method based on swarm intelligence namely firefly algorithm is coupled with three different multivariate calibration models namely, concentration residual augmented classical least squares, artificial neural network and support vector regression in UV spectral data. A comparative study between the firefly algorithm and the well-known genetic algorithm was developed. The discussion revealed the superiority of using this new powerful algorithm over the well-known genetic algorithm. Moreover, different statistical tests were performed and no significant differences were found between all the models regarding their predictabilities. This ensures that simpler and faster models were obtained without any deterioration of the quality of the calibration. Copyright © 2016 Elsevier B.V. All rights reserved.
Skill of Ensemble Seasonal Probability Forecasts
NASA Astrophysics Data System (ADS)
Smith, Leonard A.; Binter, Roman; Du, Hailiang; Niehoerster, Falk
2010-05-01
In operational forecasting, the computational complexity of large simulation models is, ideally, justified by enhanced performance over simpler models. We will consider probability forecasts and contrast the skill of ENSEMBLES-based seasonal probability forecasts of interest to the finance sector (specifically temperature forecasts for Nino 3.4 and the Atlantic Main Development Region (MDR)). The ENSEMBLES model simulations will be contrasted against forecasts from statistical models based on the observations (climatological distributions) and empirical dynamics based on the observations but conditioned on the current state (dynamical climatology). For some start dates, individual ENSEMBLES models yield significant skill even at a lead-time of 14 months. The nature of this skill is discussed, and chances of application are noted. Questions surrounding the interpretation of probability forecasts based on these multi-model ensemble simulations are then considered; the distributions considered are formed by kernel dressing the ensemble and blending with the climatology. The sources of apparent (RMS) skill in distributions based on multi-model simulations are discussed, and it is demonstrated that the inclusion of "zero-skill" models in the long range can improve Root-Mean-Square-Error scores, casting some doubt on the common justification for the claim that all models should be included in forming an operational probability forecast. It is argued that the rational response varies with lead time.
Machine learning approaches to the social determinants of health in the health and retirement study.
Seligman, Benjamin; Tuljapurkar, Shripad; Rehkopf, David
2018-04-01
Social and economic factors are important predictors of health and of recognized importance for health systems. However, machine learning, used elsewhere in the biomedical literature, has not been extensively applied to study relationships between society and health. We investigate how machine learning may add to our understanding of social determinants of health using data from the Health and Retirement Study. A linear regression of age and gender, and a parsimonious theory-based regression additionally incorporating income, wealth, and education, were used to predict systolic blood pressure, body mass index, waist circumference, and telomere length. Prediction, fit, and interpretability were compared across four machine learning methods: linear regression, penalized regressions, random forests, and neural networks. All models had poor out-of-sample prediction. Most machine learning models performed similarly to the simpler models; however, neural networks greatly outperformed the three other methods. Neural networks also had good fit to the data (R² between 0.4 and 0.6, versus <0.3 for all others). Across machine learning models, nine variables were frequently selected or highly weighted as predictors: dental visits, current smoking, self-rated health, serial-seven subtractions, probability of receiving an inheritance, probability of leaving an inheritance of at least $10,000, number of children ever born, African-American race, and gender. Some of the machine learning methods do not improve prediction or fit beyond simpler models; neural networks, however, performed well. The predictors identified across models suggest underlying social factors that are important predictors of biological indicators of chronic disease, and that the non-linear and interactive relationships between variables fundamental to the neural network approach may be important to consider.
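A minimal sketch of this kind of model comparison, using scikit-learn and synthetic stand-in data rather than the restricted HRS variables, might look as follows; the predictors, outcome, and hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for HRS-style data: social/behavioural predictors -> a biomarker
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 20))
y = 120 + 5 * X[:, 0] - 3 * X[:, 1] * X[:, 2] + 4 * np.tanh(X[:, 3]) + 5 * rng.standard_normal(2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "linear": LinearRegression(),
    "lasso": make_pipeline(StandardScaler(), LassoCV(cv=5)),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0),
    "neural_net": make_pipeline(StandardScaler(),
                                MLPRegressor(hidden_layer_sizes=(64, 32),
                                             max_iter=2000, random_state=0)),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name:14s} in-sample R2 = {model.score(X_tr, y_tr):.2f}  "
          f"out-of-sample R2 = {model.score(X_te, y_te):.2f}")
```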
High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.
The Purpose of Analytical Models from the Perspective of a Data Provider.
ERIC Educational Resources Information Center
Sheehan, Bernard S.
The purpose of analytical models is to reduce complex institutional management problems and situations to simpler proportions and compressed time frames so that human skills of decision makers can be brought to bear most effectively. Also, modeling cultivates the art of management by forcing explicit and analytical consideration of important…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faby, Sebastian; Maier, Joscha; Sawall, Stefan
2016-07-15
Purpose: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors, including spatial–spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied. Methods: An IMA describing the counter increase patterns in a photon counting detector is proposed. This IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations by pursuing an approach based on convolutions. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector in a simplified manner, e.g., by neglecting count rate-dependent effects. In this way, the spatial–spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images and the corresponding detector performance in image-based material decomposition is evaluated using a statistically optimal decomposition algorithm. Results: The results of IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, finding a good agreement. Correlations between the different reconstructed energy bin images could be observed, and turned out to be of weak nature. These correlations were found to be not relevant in image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account. Conclusions: The IMA is computationally efficient as it required about 10² random numbers per ray incident on a detector pixel instead of an estimated 10⁸ random numbers per ray as Monte Carlo approaches would need. The spatial–spectral correlations as described by IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.
Eze, Valentine C; Phan, Anh N; Harvey, Adam P
2014-03-01
A more robust kinetic model of base-catalysed transesterification than the conventional reaction scheme has been developed. All the relevant reactions in the base-catalysed transesterification of rapeseed oil (RSO) to fatty acid methyl ester (FAME) were investigated experimentally, and validated numerically in a model implemented using MATLAB. It was found that including the saponification of RSO and FAME side reactions and hydroxide-methoxide equilibrium data explained various effects that are not captured by simpler conventional models. Both the experiment and modelling showed that the "biodiesel reaction" can reach the desired level of conversion (>95%) in less than 2 min. Given the right set of conditions, the transesterification can reach over 95% conversion before the saponification losses become significant. This means that the reaction must be performed in a reactor exhibiting good mixing and good control of residence time, and the reaction mixture must be quenched rapidly as it leaves the reactor. Copyright © 2014 Elsevier Ltd. All rights reserved.
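A lumped, hypothetical version of such a kinetic model can be integrated with a few lines of SciPy; the scheme below collapses the stepwise glyceride reactions into one reversible step plus a FAME saponification loss, with made-up rate constants, so it is only a sketch of the modelling approach, not the paper's full MATLAB model.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lumped, hypothetical kinetic scheme (not the paper's full reaction network):
#   TG + 3 MeOH <-> 3 FAME + GL      (k1 forward, k2 reverse, base-catalysed)
#   FAME + OH-   -> soap + MeOH      (k3, saponification loss)
k1, k2, k3 = 2.0, 0.1, 0.05          # illustrative rate constants, L mol^-1 min^-1

def rhs(t, c):
    tg, meoh, fame, gl, oh = c
    r1 = k1 * tg * meoh - k2 * fame * gl
    r2 = k3 * fame * oh
    return [-r1, -3 * r1 + r2, 3 * r1 - r2, r1, -r2]

c0 = [0.9, 5.4, 0.0, 0.0, 0.05]      # mol/L: TG, MeOH (6:1 ratio), FAME, glycerol, OH-
sol = solve_ivp(rhs, (0.0, 10.0), c0, dense_output=True, rtol=1e-8)

t = np.linspace(0.0, 10.0, 11)
tg = sol.sol(t)[0]
conversion = 1.0 - tg / c0[0]
for ti, xi in zip(t, conversion):
    print(f"t = {ti:4.1f} min  TG conversion = {100 * xi:5.1f} %")
```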
Pérez-Rodríguez, Gael; Dias, Sónia; Pérez-Pérez, Martín; Fdez-Riverola, Florentino; Azevedo, Nuno F; Lourenço, Anália
2018-03-08
Experimental incapacity to track microbe-microbe interactions in structures like biofilms, and the complexity inherent to the mathematical modelling of those interactions, raise the need for feasible, alternative modelling approaches. This work proposes an agent-based representation of the diffusion of N-acyl homoserine lactones (AHL) in a multicellular environment formed by Pseudomonas aeruginosa and Candida albicans. Depending on the spatial location, C. albicans cells were variably exposed to AHLs, an observation that might help explain why phenotypic switching of individual cells in biofilms occurred at different time points. The simulation and algebraic results were similar for simpler scenarios, although some statistically significant differences could be observed (p < 0.05). The model was also successfully applied to a more complex scenario representing a small multicellular environment containing C. albicans and P. aeruginosa cells encased in a 3-D matrix. Further development of this model may help create a predictive tool to depict biofilm heterogeneity at the single-cell level.
Bursting Transition Dynamics Within the Pre-Bötzinger Complex
NASA Astrophysics Data System (ADS)
Duan, Lixia; Chen, Xi; Tang, Xuhui; Su, Jianzhong
The pre-Bötzinger complex of the mammalian brain stem plays a crucial role in the generation of respiratory rhythms. Neurons within the pre-Bötzinger complex have been found experimentally to yield different firing activities. In this paper, we study the spiking and bursting activities related to the respiratory rhythms in the pre-Bötzinger complex based on a mathematical model proposed by Butera. We derive a one-dimensional first recurrence map from the dynamical characteristics of the differential equations and use it to investigate the different bursting patterns of pre-Bötzinger complex neurons and the conditions under which these patterns transition into one another. These analytical results were verified through numerical simulations. We conclude that the one-dimensional map exhibits rhythmic patterns similar to those of the Butera model and can be used as a simpler modeling tool to study fast-slow models such as the pre-Bötzinger complex neural circuit.
TOPICS IN THEORY OF GENERALIZED PARTON DISTRIBUTIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Radyushkin, Anatoly V.
Several topics in the theory of generalized parton distributions (GPDs) are reviewed. First, we give a brief overview of the basics of the theory of generalized parton distributions and their relationship with simpler phenomenological functions, viz. form factors, parton densities and distribution amplitudes. Then, we discuss recent developments in building models for GPDs that are based on the formalism of double distributions (DDs). Special attention is given to a careful analysis of the singularity structure of DDs. The DD formalism is applied to the construction of model GPDs with a singular Regge behavior. Within the developed DD-based approach, we discuss the structure of GPD sum rules. It is shown that separation of DDs into the so-called "plus" part and the $D$-term part may be treated as a renormalization procedure for the GPD sum rules. This approach is compared with an alternative prescription based on analytic regularization.
An Informal Overview of the Unitary Group Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonnad, V.; Escher, J.; Kruse, M.
The Unitary Group Approach (UGA) is an elegant and conceptually unified approach to quantum structure calculations. It has been widely used in molecular structure calculations and holds the promise of a single computational approach to structure calculations in a variety of different fields. We explore the possibility of extending the UGA to computations in atomic and nuclear structure as a simpler alternative to traditional Racah algebra-based approaches. We provide a simple introduction to the basic UGA and consider some of the issues in using the UGA with spin-dependent, multi-body Hamiltonians requiring multi-shell bases adapted to additional symmetries. While the UGA is perfectly capable of dealing with such problems, it is seen that the complexity rises dramatically, and the UGA is not, at this time, a simpler alternative to Racah algebra-based approaches.
Monotone Boolean approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hulme, B.L.
1982-12-01
This report presents a theory of approximation of arbitrary Boolean functions by simpler, monotone functions. Monotone increasing functions can be expressed without the use of complements. Nonconstant monotone increasing functions are important in their own right since they model a special class of systems known as coherent systems. It is shown here that when Boolean expressions for noncoherent systems become too large to treat exactly, then monotone approximations are easily defined. The algorithms proposed here not only provide simpler formulas but also produce best possible upper and lower monotone bounds for any Boolean function. This theory has practical application for the analysis of noncoherent fault trees and event tree sequences.
NASA Astrophysics Data System (ADS)
Hadia, Sarman K.; Thakker, R. A.; Bhatt, Kirit R.
2016-05-01
The study proposes an application of evolutionary algorithms, specifically an artificial bee colony (ABC), a variant ABC and particle swarm optimisation (PSO), to extract the parameters of a metal oxide semiconductor field effect transistor (MOSFET) model. These algorithms are applied to the MOSFET parameter extraction problem using a Pennsylvania surface potential model. MOSFET parameter extraction procedures involve reducing the error between measured and modelled data. This study shows that the ABC algorithm optimises the parameter values based on intelligent activities of honey bee swarms. Some modifications have also been applied to the basic ABC algorithm. Particle swarm optimisation is a population-based stochastic optimisation method based on bird flocking behaviour. The performances of these algorithms are compared with respect to the quality of the solutions. The simulation results of this study show that the PSO algorithm performs better than the variant ABC and basic ABC algorithms for the parameter extraction of the MOSFET model; the implementation of the ABC algorithm is, however, simpler than that of the PSO algorithm.
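A minimal PSO-based extraction loop is sketched below on a toy square-law transistor model with synthetic "measured" data; the bounds, swarm settings, and device model are illustrative assumptions (the study itself fits a Pennsylvania surface-potential model).

```python
import numpy as np

def pso(cost, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser for least-squares parameter extraction."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = lo + (hi - lo) * rng.random((n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Toy MOSFET-like fitting problem: extract (I0, n, Vth) of a square-law model
# from synthetic "measured" I-V data.
Vgs = np.linspace(0.6, 1.8, 25)
true = (2e-4, 1.3, 0.45)
I_meas = true[0] * np.maximum(Vgs - true[2], 0.0) ** true[1]

def cost(p):
    I0, n, Vth = p
    I_model = I0 * np.maximum(Vgs - Vth, 0.0) ** n
    return np.sqrt(np.mean((I_model - I_meas) ** 2))

params, err = pso(cost, bounds=[(1e-5, 1e-3), (1.0, 2.0), (0.2, 0.8)])
print("extracted parameters:", params, "RMS error:", err)
```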
Kamensky, David; Evans, John A; Hsu, Ming-Chen; Bazilevs, Yuri
2017-11-01
This paper discusses a method of stabilizing Lagrange multiplier fields used to couple thin immersed shell structures and surrounding fluids. The method retains essential conservation properties by stabilizing only the portion of the constraint orthogonal to a coarse multiplier space. This stabilization can easily be applied within iterative methods or semi-implicit time integrators that avoid directly solving a saddle point problem for the Lagrange multiplier field. Heart valve simulations demonstrate applicability of the proposed method to 3D unsteady simulations. An appendix sketches the relation between the proposed method and a high-order-accurate approach for simpler model problems.
Fast calculation of the line-spread-function by transversal directions decoupling
NASA Astrophysics Data System (ADS)
Parravicini, Jacopo; Tartara, Luca; Hasani, Elton; Tomaselli, Alessandra
2016-07-01
We propose a simplified method to calculate the optical spread function of a paradigmatic system constituted by a pupil-lens with a line-shaped illumination (‘line-spread-function’). Our approach is based on decoupling the two transversal directions of the beam and treating the propagation by means of the Fourier optics formalism. This requires simpler calculations with respect to the more usual Bessel-function-based method. The model is discussed and compared with standard calculation methods by carrying out computer simulations. The proposed approach is found to be much faster than the Bessel-function-based one (CPU time ≲ 5% of the standard method), while the results of the two methods present a very good mutual agreement.
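The decoupling idea can be illustrated for a diffraction-limited circular pupil: by the projection-slice theorem, the line-spread function equals a single 1-D transform of a cut through the OTF, which can be checked against the Bessel-based line integral of the Airy pattern. The sketch below, in normalised coordinates (lengths in units of λN), is only an illustration of the principle, not the authors' ZEMAX/MATLAB simulator.

```python
import numpy as np
from scipy.special import j1

# 1) "Bessel route": LSF as the line integral of the Airy PSF.
def psf_airy(x, y):
    r = np.pi * np.hypot(x, y)              # r in units of lambda*N
    out = np.ones_like(r)
    nz = r > 1e-12
    out[nz] = (2 * j1(r[nz]) / r[nz]) ** 2
    return out

x = np.linspace(-4.0, 4.0, 401)
y = np.linspace(-40.0, 40.0, 8001)           # long integration range along the line
X, Y = np.meshgrid(x, y, indexing="ij")
lsf_bessel = psf_airy(X, Y).sum(axis=1) * (y[1] - y[0])

# 2) "Decoupled route": one 1-D cosine transform of the diffraction-limited OTF cut
#    (projection-slice theorem), no Bessel functions required.
f = np.linspace(0.0, 1.0, 2001)              # frequency normalised to the cutoff
otf = (2 / np.pi) * (np.arccos(f) - f * np.sqrt(1 - f ** 2))
df = f[1] - f[0]
lsf_fourier = 2 * np.array([(otf * np.cos(2 * np.pi * f * xi)).sum() * df for xi in x])

# Compare after normalising both to unit peak
lsf_bessel /= lsf_bessel.max()
lsf_fourier /= lsf_fourier.max()
print("max |difference| between the two routes:", np.abs(lsf_bessel - lsf_fourier).max())
```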
NASA Astrophysics Data System (ADS)
Zhou, Cong; Chase, J. Geoffrey; Rodgers, Geoffrey W.; Xu, Chao
2017-02-01
The model-free hysteresis loop analysis (HLA) method for structural health monitoring (SHM) has significant advantages over traditional model-based SHM methods, which require a suitable baseline model to represent the actual system response. This paper provides a unique validation against both an experimental reinforced concrete (RC) building and a calibrated numerical model to delineate the capability of the model-free HLA method and the adaptive least mean squares (LMS) model-based method in detecting, localizing and quantifying damage that may not be visible or observable in the overall structural response. Results clearly show the model-free HLA method is capable of adapting to changes in how structures transfer load or demand across structural elements over time and multiple events of different size. However, the adaptive LMS model-based method indicated a wider spread of lesser damage over time and story when the baseline model was not well defined. Finally, the two algorithms are tested on a typical steel structure with simpler hysteretic behaviour to quantify the impact of model mismatch between the baseline model used for identification and the actual response. The overall results highlight the need for model-based methods to have an appropriate model that can capture the observed response, in order to yield accurate results, even in small events where the structure remains linear.
Effect of Shear Deformation and Continuity on Delamination Modelling with Plate Elements
NASA Technical Reports Server (NTRS)
Glaessgen, E. H.; Riddell, W. T.; Raju, I. S.
1998-01-01
The effects of several critical assumptions and parameters on the computation of strain energy release rates for delamination and debond configurations modeled with plate elements have been quantified. The method of calculation is based on the virtual crack closure technique (VCCT) and on models that represent the upper and lower surfaces of the delamination or debond with two-dimensional (2D) plate elements rather than three-dimensional (3D) solid elements. The major advantages of the plate element modeling technique are a smaller model size and simpler geometric modeling. Specific issues that are discussed include: constraint of translational degrees of freedom, rotational degrees of freedom, or both in the neighborhood of the crack tip; element order and assumed shear deformation; and continuity of material properties and section stiffness in the vicinity of the debond front. Where appropriate, the plate element analyses are compared with corresponding two-dimensional plane strain analyses.
On macromolecular refinement at subatomic resolution withinteratomic scatterers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.
2007-11-09
A study of the accurate electron density distribution in molecular crystals at subatomic resolution, better than ~1.0 Å, requires more detailed models than those based on independent spherical atoms. A tool conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for the full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark datasets gave results comparable in quality with results of multipolar refinement and superior to those for conventional models. Applications to several datasets of both small- and macro-molecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
On macromolecular refinement at subatomic resolution with interatomic scatterers
Afonine, Pavel V.; Grosse-Kunstleve, Ralf W.; Adams, Paul D.; Lunin, Vladimir Y.; Urzhumtsev, Alexandre
2007-01-01
A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than ∼1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package. PMID:18007035
On macromolecular refinement at subatomic resolution with interatomic scatterers.
Afonine, Pavel V; Grosse-Kunstleve, Ralf W; Adams, Paul D; Lunin, Vladimir Y; Urzhumtsev, Alexandre
2007-11-01
A study of the accurate electron-density distribution in molecular crystals at subatomic resolution (better than approximately 1.0 Å) requires more detailed models than those based on independent spherical atoms. A tool that is conventionally used in small-molecule crystallography is the multipolar model. Even at upper resolution limits of 0.8–1.0 Å, the number of experimental data is insufficient for full multipolar model refinement. As an alternative, a simpler model composed of conventional independent spherical atoms augmented by additional scatterers to model bonding effects has been proposed. Refinement of these mixed models for several benchmark data sets gave results that were comparable in quality with the results of multipolar refinement and superior to those for conventional models. Applications to several data sets of both small molecules and macromolecules are shown. These refinements were performed using the general-purpose macromolecular refinement module phenix.refine of the PHENIX package.
ASSESSING THE INFLUENCE OF THE SOLAR ORBIT ON TERRESTRIAL BIODIVERSITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, F.; Bailer-Jones, C. A. L.
The terrestrial record shows a significant variation in the extinction and origination rates of species during the past half-billion years. Numerous studies have claimed an association between this variation and the motion of the Sun around the Galaxy, invoking the modulation of cosmic rays, gamma rays, and comet impact frequency as a cause of this biodiversity variation. However, some of these studies exhibit methodological problems, or were based on coarse assumptions (such as a strict periodicity of the solar orbit). Here we investigate this link in more detail, using a model of the Galaxy to reconstruct the solar orbit and thus a predictive model of the temporal variation of the extinction rate due to astronomical mechanisms. We compare these predictions as well as those of various reference models with paleontological data. Our approach involves Bayesian model comparison, which takes into account the uncertainties in the paleontological data as well as the distribution of solar orbits consistent with the uncertainties in the astronomical data. We find that various versions of the orbital model are not favored beyond simpler reference models. In particular, the distribution of mass extinction events can be explained just as well by a uniform random distribution as by any other model tested. Although our negative results on the orbital model are robust to changes in the Galaxy model, the Sun's coordinates, and the errors in the data, we also find that it would be very difficult to positively identify the orbital model even if it were the true one. (In contrast, we do find evidence against simpler periodic models.) Thus, while we cannot rule out there being some connection between solar motion and biodiversity variations on the Earth, we conclude that it is difficult to give convincing positive conclusions of such a connection using current data.
Light scattering by marine algae: two-layer spherical and nonspherical models
NASA Astrophysics Data System (ADS)
Quirantes, Arturo; Bernard, Stewart
2004-11-01
Light scattering properties of algae-like particles are modeled using the T-matrix for coated scatterers. Two basic geometries have been considered: off-centered coated spheres and centered spheroids. Extinction, scattering and absorption efficiencies, plus scattering in the backward plane, are compared to simpler models such as homogeneous (Mie) and coated (Aden-Kerker) models. The anomalous diffraction approximation (ADA), of widespread use in the oceanographic light-scattering community, has also been used as a first approximation, for both homogeneous and coated spheres. T-matrix calculations show that some light scattering values, such as extinction and scattering efficiencies, have little dependence on particle shape, thus reinforcing the view that simpler (Mie, Aden-Kerker) models can be applied to infer refractive index (RI) data from absorption curves. The backscattering efficiency, on the other hand, is quite sensitive to shape. This calls into question the use of light scattering techniques where the phase function plays a pivotal role, and can help explain the discrepancy between theoretical and experimental values of the backscattering coefficient observed in oceanic studies.
Simple protocols for oblivious transfer and secure identification in the noisy-quantum-storage model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaffner, Christian
2010-09-15
We present simple protocols for oblivious transfer and password-based identification which are secure against general attacks in the noisy-quantum-storage model as defined in R. Koenig, S. Wehner, and J. Wullschleger [e-print arXiv:0906.1030]. We argue that a technical tool from Koenig et al. suffices to prove security of the known protocols. Whereas the more involved protocol for oblivious transfer from Koenig et al. requires less noise in storage to achieve security, our "canonical" protocols have the advantage of being simpler to implement, and the security error is easier to control. Therefore, our protocols yield higher OT rates for many realistic noise parameters. Furthermore, a proof of security of a direct protocol for password-based identification against general noisy-quantum-storage attacks is given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsumoto, Munehisa; Akai, Hisazumi; Doi, Shotaro
2016-06-07
A classical spin model derived ab initio for rare-earth-based permanent magnet compounds is presented. Our target compound, NdFe₁₂N, is a material that goes beyond today's champion magnet compound Nd₂Fe₁₄B in its intrinsic magnetic properties with a simpler crystal structure. Calculated temperature dependence of the magnetization and the anisotropy field agrees with the latest experimental results in the leading order. Having put the realistic observables under our numerical control, we propose that engineering 5d-electron-mediated indirect exchange coupling between 4f-electrons in Nd and 3d-electrons from Fe would most critically help enhance the material's utility over the operation-temperature range.
A comparative study of four major approaches to predicting ATES performance
NASA Astrophysics Data System (ADS)
Doughty, C.; Buscheck, T. A.; Bodvarsson, G. S.; Tsang, C. F.
1982-09-01
The International Energy Agency test problem involving Aquifer Thermal Energy Storage was solved using four approaches: the numerical model PF (formerly CCC), the simpler numerical model SFM, and two graphical characterization schemes. Each of the four techniques is discussed, along with its advantages and disadvantages.
A Toy Model of Quantum Electrodynamics in (1 + 1) Dimensions
ERIC Educational Resources Information Center
Boozer, A. D.
2008-01-01
We present a toy model of quantum electrodynamics (QED) in (1 + 1) dimensions. The QED model is much simpler than QED in (3 + 1) dimensions but exhibits many of the same physical phenomena, and serves as a pedagogical introduction to both QED and quantum field theory in general. We show how the QED model can be derived by quantizing a toy model of…
Jobson, Harvey E.; Keefer, Thomas N.
1979-01-01
A coupled flow-temperature model has been developed and verified for a 27.9-km reach of the Chattahoochee River between Buford Dam and Norcross, Ga. Flow in this reach of the Chattahoochee is continuous but highly regulated by Buford Dam, a flood-control and hydroelectric facility located near Buford, Ga. Calibration and verification utilized two sets of data collected under highly unsteady discharge conditions. Existing solution techniques, with certain minor improvements, were applied to verify the existing technology of flow and transport modeling. A linear, implicit finite-difference flow model was coupled with implicit, finite-difference transport and temperature models. Both the conservative and nonconservative forms of the transport equation were solved, and the difference in the predicted concentrations of dye were found to be insignificant. The temperature model, therefore, was based on the simpler nonconservative form of the transport equation. (Woodard-USGS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R.W.; Phillips, A.M.
1990-02-01
Low-permeability reservoirs are currently being propped with sand, resin-coated sand, intermediate-density proppants, and bauxite. This wide range of proppant cost and performance has resulted in the proliferation of proppant selection models. Initially, a rather vague relationship between well depth and proppant strength dictated the choice of proppant. More recently, computerized models of varying complexity that use net-present-value (NPV) calculations have become available. The input is based on the operator's performance goals for each well and specific reservoir properties. Simpler, noncomputerized approaches include cost/performance comparisons and nomographs. Each type of model, including several of the computerized models, is examined here. By use of these models and NPV calculations, optimum fracturing treatment designs have been developed for such low-permeability reservoirs as the Prue in Oklahoma. Typical well conditions are used in each of the selection models, and the results are compared.
Sears, Clinton; Andersson, Zach; Cann, Meredith
2016-01-01
ABSTRACT Background: Supporting the diverse needs of people living with HIV (PLHIV) can help reduce the individual and structural barriers they face in adhering to antiretroviral treatment (ART). The Livelihoods and Food Security Technical Assistance II (LIFT) project sought to improve adherence in Malawi by establishing 2 referral systems linking community-based economic strengthening and livelihoods services to clinical health facilities. One referral system in Balaka district, started in October 2013, connected clients to more than 20 types of services while the other simplified approach in Kasungu and Lilongwe districts, started in July 2014, connected PLHIV attending HIV and nutrition support facilities directly to community savings groups. Methods: From June to July 2015, LIFT visited referral sites in Balaka, Kasungu, and Lilongwe districts to collect qualitative data on referral utility, the perceived association of referrals with client and household health and vulnerability, and the added value of the referral system as perceived by network member providers. We interviewed a random sample of 152 adult clients (60 from Balaka, 57 from Kasungu, and 35 from Lilongwe) who had completed their referral. We also conducted 2 focus group discussions per district with network providers. Findings: Clients in all 3 districts indicated their ability to save money had improved after receiving a referral, although the percentage was higher among clients in the simplified Kasungu and Lilongwe model than the more complex Balaka model (85.6% vs. 56.0%, respectively). Nearly 70% of all clients interviewed had HIV infection; 72.7% of PLHIV in Balaka and 95.7% of PLHIV in Kasungu and Lilongwe credited referrals for helping them stay on their ART. After the referral, 76.0% of clients in Balaka and 92.3% of clients in Kasungu and Lilongwe indicated they would be willing to spend their savings on health costs. The more diverse referral network and use of an mHealth app to manage data in Balaka hindered provider uptake of the system, while the simpler system in Kasungu and Lilongwe, which included only 2 referral options and use of a paper-based referral tool, seemed simpler for the providers to manage. Conclusions: Participation in the referral systems was perceived positively by clients and providers in both models, but more so in Kasungu and Lilongwe where the referral process was simpler. Future referral networks should consider limiting the number of service options included in the network and simplify referral tools to the extent possible to facilitate uptake among network providers. PMID:28031300
Planner-Based Control of Advanced Life Support Systems
NASA Technical Reports Server (NTRS)
Muscettola, Nicola; Kortenkamp, David; Fry, Chuck; Bell, Scott
2005-01-01
The paper describes an approach to the integration of qualitative and quantitative modeling techniques for advanced life support (ALS) systems. Developing reliable control strategies that scale up to fully integrated life support systems requires augmenting quantitative models and control algorithms with the abstractions provided by qualitative, symbolic models and their associated high-level control strategies. This will allow for effective management of the combinatorics due to the integration of a large number of ALS subsystems. By focusing control actions at different levels of detail and reactivity, we can use faster, simpler responses at the lowest level and predictive but more complex responses at the higher levels of abstraction. In particular, methods from model-based planning and scheduling can provide effective resource management over long time periods. We describe a reference implementation of an advanced control system using the IDEA control architecture developed at NASA Ames Research Center. IDEA uses planning/scheduling as the sole reasoning method for predictive and reactive closed loop control. We describe preliminary experiments in planner-based control of ALS carried out on an integrated ALS simulation developed at NASA Johnson Space Center.
NASA Astrophysics Data System (ADS)
Al-Rabadi, Anas N.
2009-10-01
This research introduces a new intelligent control method for the Buck converter using a newly developed small-signal model of the pulse width modulation (PWM) switch. The new method uses a supervised neural network to estimate certain parameters of the transformed system matrix [Ã]. Then, a numerical technique used in robust control, the linear matrix inequality (LMI) optimization technique, is used to determine the permutation matrix [P] so that a complete system transformation {[B˜], [C˜], [Ẽ]} is possible. The transformed model is then reduced using the method of singular perturbation, and state feedback control is applied to enhance system performance. The experimental results show that the new control methodology simplifies the model of the Buck converter and thus uses a simpler controller that produces the desired system response for performance enhancement.
Recognition and source memory as multivariate decision processes.
Banks, W P
2000-07-01
Recognition memory, source memory, and exclusion performance are three important domains of study in memory, each with its own findings, its specific theoretical developments, and its separate research literature. It is proposed here that results from all three domains can be treated with a single analytic model. This article shows how to generate a comprehensive memory representation based on multidimensional signal detection theory and how to make predictions for each of these paradigms using decision axes drawn through the space. The detection model is simpler than the comparable multinomial model, it is more easily generalizable, and it does not make threshold assumptions. An experiment using the same memory set for all three tasks demonstrates the analysis and tests the model. The results show that some seemingly complex relations between the paradigms derive from an underlying simplicity of structure.
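The decision-axis construction can be illustrated with a toy two-dimensional detection space: old items from two sources and new items are bivariate Gaussians, recognition projects the evidence onto a "sum" axis, and source judgements project it onto a "difference" axis. The means, criterion, and axes below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Two memory-strength dimensions, one per study source (e.g. heard vs. read).
# Each item class is a bivariate Gaussian in this space (unit variances assumed).
n = 10_000
new_items      = rng.multivariate_normal([0.0, 0.0], np.eye(2), n)
source_a_items = rng.multivariate_normal([1.5, 0.3], np.eye(2), n)
source_b_items = rng.multivariate_normal([0.3, 1.5], np.eye(2), n)

# Recognition: project onto the "sum" axis (old vs. new evidence).
recog_axis  = np.array([1.0, 1.0]) / np.sqrt(2)
# Source judgement: project onto the "difference" axis (A vs. B evidence).
source_axis = np.array([1.0, -1.0]) / np.sqrt(2)

old = np.vstack([source_a_items, source_b_items])
recog_criterion = 1.0
hit_rate = (old @ recog_axis > recog_criterion).mean()
fa_rate  = (new_items @ recog_axis > recog_criterion).mean()
d_prime_recognition = norm.ppf(hit_rate) - norm.ppf(fa_rate)

source_correct = ((source_a_items @ source_axis > 0).mean()
                  + (source_b_items @ source_axis < 0).mean()) / 2

print(f"recognition d' ~ {d_prime_recognition:.2f}, source accuracy ~ {source_correct:.2%}")
```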
A Particle Model Explaining Mass and Relativity in a Physical Way
NASA Astrophysics Data System (ADS)
Giese, Albrecht
Physicists' understanding of relativity and the way it is handled is to the present day dominated by the interpretation of Albert Einstein, who related relativity to specific properties of space and time. The principal alternative to Einstein's interpretation is based on a concept proposed by Hendrik A. Lorentz, which uses knowledge of classical physics alone to explain relativistic phenomena. In this paper, we will show that on the one hand the Lorentz-based interpretation provides a simpler mathematical way of arriving at the known results for both Special and General Relativity. On the other hand, it is able to solve problems which have remained open to this day. Furthermore, a particle model will be presented, based on Lorentzian relativity and the quantum mechanical concept of Louis de Broglie, which explains the origin of mass without the use of the Higgs mechanism. It is based on the finiteness of the speed of light and provides classical results for particle properties which are currently only accessible through quantum mechanics.
Photometric functions for photoclinometry and other applications
McEwen, A.S.
1991-01-01
Least-squares fits to the brightness profiles across a disk, or "limb darkening," described by Hapke's photometric function are found for the simpler Minnaert and lunar-Lambert functions. The simpler functions are needed to reduce the number of unknown parameters in photoclinometry, especially to distinguish the brightness variations of the surface materials from those due to the resolved topography. The limb darkening varies with the Hapke parameters for macroscopic roughness (θ̄), the single-scattering albedo (w), and the asymmetry factor of the particle phase function (g). Both of the simpler functions generally provide good matches to the limb darkening described by Hapke's function, but the lunar-Lambert function is superior when viewing angles are high and when θ̄ is less than 30°. Although a nonunique solution for the Minnaert function at high phase angles has been described for smooth surfaces, the discrepancy decreases with increasing θ̄ and virtually disappears when θ̄ reaches 30° to 40°. The variation in limb darkening with w and g, pronounced for smooth surfaces, is reduced or eliminated when the Hapke parameters are in the range typical of most planetary surfaces; this result simplifies the problem of photoclinometry across terrains with variable surface materials. The Minnaert or lunar-Lambert fits to published Hapke models will give photoclinometric solutions that are very similar (<1° slope discrepancy) to the Hapke-function solutions for nearly all of the bodies and terrains thus far modeled by Hapke's function. © 1991.
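For reference, the two simpler functions have compact closed forms; the sketch below evaluates the Minnaert and lunar-Lambert limb-darkening expressions for illustrative parameter values (in practice k and L are obtained from fits to Hapke models, as in the paper).

```python
import numpy as np

def minnaert(mu0, mu, k):
    """Minnaert limb darkening: I/F proportional to mu0**k * mu**(k - 1)."""
    return mu0 ** k * mu ** (k - 1)

def lunar_lambert(mu0, mu, L):
    """Lunar-Lambert function: weighted mix of lunar (Lommel-Seeliger) and Lambert
    scattering, controlled by the partition parameter L."""
    return 2 * L * mu0 / (mu0 + mu) + (1 - L) * mu0

# Example: relative brightness across a disk at fixed emission angle,
# with illustrative parameter values for k and L.
incidence = np.radians([0.0, 20.0, 40.0, 60.0, 80.0])
emission  = np.radians([30.0] * 5)
mu0, mu = np.cos(incidence), np.cos(emission)
print("Minnaert     :", np.round(minnaert(mu0, mu, k=0.7), 3))
print("Lunar-Lambert:", np.round(lunar_lambert(mu0, mu, L=0.6), 3))
```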
A Brief Review of Elasticity and Viscoelasticity
2010-05-27
…through electromagnetic or acoustic means. Creating a model that accurately describes these Rayleigh waves is key to modeling and understanding the… technology to be feasible, a mathematical model that describes the propagation of the acoustic wave from the stenosis to the chest wall will be necessary… viscoelastic model is simpler to use than poroelastic models but yields similar results for a wide range of soils and dynamic loadings. In addition…
Preliminary report on electromagnetic model studies
Frischknecht, F.C.; Mangan, G.B.
1960-01-01
More than 70 response curves for various models have been obtained using the slingram and turam electromagnetic methods. Results show that for the slingram method, horizontal co-planar coils are usually more sensitive than vertical co-axial or vertical co-planar coils. The shape of the anomaly is usually simpler for the vertical coils.
ERIC Educational Resources Information Center
Rea, Shane L.; Graham, Brett H.; Nakamaru-Ogiso, Eiko; Kar, Adwitiya; Falk, Marni J.
2010-01-01
The extensive conservation of mitochondrial structure, composition, and function across evolution offers a unique opportunity to expand our understanding of human mitochondrial biology and disease. By investigating the biology of much simpler model organisms, it is often possible to answer questions that are unreachable at the clinical level.…
Vindbjerg, Erik; Carlsson, Jessica; Mortensen, Erik Lykke; Elklit, Ask; Makransky, Guido
2016-09-05
Refugees are known to have high rates of post-traumatic stress disorder (PTSD). Although recent years have seen an increase in the number of refugees from Arabic-speaking countries in the Middle East, no study so far has validated the construct of PTSD in an Arabic-speaking sample of refugees. Responses to the Harvard Trauma Questionnaire (HTQ) were obtained from 409 Arabic-speaking refugees diagnosed with PTSD and undergoing treatment in Denmark. Confirmatory factor analysis was used to test and compare five alternative models. All four- and five-factor models provided sufficient fit indices. However, a combination of excessively small clusters and a case of mistranslation in the official Arabic translation of the HTQ rendered the results of two of the models inadmissible. A post hoc analysis revealed that a simpler factor structure is supported once local dependence is addressed. Overall, the construct of PTSD is supported in this sample of Arabic-speaking refugees. Apart from pursuing maximum fit, future studies may wish to test simpler, potentially more stable models, which allow a more informative analysis of individual items.
Real-time advanced spinal surgery via visible patient model and augmented reality system.
Wu, Jing-Ren; Wang, Min-Liang; Liu, Kai-Che; Hu, Ming-Hsien; Lee, Pei-Yuan
2014-03-01
This paper presents an advanced augmented reality system for spinal surgery assistance and develops entry-point guidance prior to vertebroplasty spinal surgery. Based on image-based marker detection and tracking, the proposed camera-projector system superimposes pre-operative 3-D images onto patients. The patient's preoperative 3-D image model is registered by projecting it onto the patient such that the synthetic 3-D model merges with the real patient image, enabling the surgeon to see through the patient's anatomy. The proposed method is much simpler than heavy and computationally challenging navigation systems, and also reduces radiation exposure. The system is experimentally tested on a preoperative 3-D model, a dummy patient model and an animal cadaver model. The feasibility and accuracy of the proposed system are verified on three patients undergoing spinal surgery in the operating theater. The results of these clinical trials are extremely promising, with surgeons reporting favorably on the reduced time to find a suitable entry point and the reduced radiation dose to patients. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
On the Reproduction Number of a Gut Microbiota Model.
Barril, Carles; Calsina, Àngel; Ripoll, Jordi
2017-11-01
A spatially structured linear model of the growth of intestinal bacteria is analysed from two generational viewpoints. Firstly, the basic reproduction number associated with the bacterial population, i.e. the expected number of daughter cells per bacterium, is given explicitly in terms of biological parameters. Secondly, an alternative quantity is introduced based on the number of bacteria produced within the intestine by one bacterium originally in the external media. The latter depends on the parameters in a simpler way and provides more biological insight than the standard reproduction number, allowing the design of experimental procedures. Both quantities coincide and are equal to one at the extinction threshold, below which the bacterial population becomes extinct. Optimal values of both reproduction numbers are derived assuming parameter trade-offs.
Koopman Operator Framework for Time Series Modeling and Analysis
NASA Astrophysics Data System (ADS)
Surana, Amit
2018-01-01
We propose an interdisciplinary framework for time series classification, forecasting, and anomaly detection by combining concepts from Koopman operator theory, machine learning, and linear systems and control theory. At the core of this framework is nonlinear dynamic generative modeling of time series using the Koopman operator, which is an infinite-dimensional but linear operator. Rather than working with the underlying nonlinear model, we propose two simpler linear representations, or model forms, based on Koopman spectral properties. We show that these model forms are invariants of the generative model and can be identified directly from data using techniques for computing Koopman spectral properties, without requiring explicit knowledge of the generative model. We also introduce different notions of distance on the space of such model forms, which is essential for model comparison and clustering. We employ the space of Koopman model forms equipped with distance, in conjunction with classical machine learning techniques, to develop a framework for automatic feature generation for time series classification. The forecasting/anomaly detection framework is based on using Koopman model forms along with classical linear systems and control approaches. We demonstrate the proposed framework for human activity classification and for time series forecasting/anomaly detection in a power grid application.
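One standard way to identify such a linear model form directly from snapshot data is dynamic mode decomposition, sketched below on a synthetic multichannel time series; this is only a minimal illustration of the Koopman-style identification step, not the full framework with its model-space distances and classification machinery.

```python
import numpy as np

def dmd(X, rank):
    """Exact dynamic mode decomposition: identifies a finite-dimensional linear
    (Koopman-style) model x_{k+1} ~ A x_k directly from snapshot data."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)   # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W              # DMD modes
    return A_tilde, eigvals, modes

# Toy time series: two damped oscillations observed through 10 mixed channels
t = np.linspace(0.0, 10.0, 500)
latent = np.vstack([
    np.exp(-0.1 * t) * np.cos(2 * np.pi * t),
    np.exp(-0.1 * t) * np.sin(2 * np.pi * t),
    np.exp(-0.05 * t) * np.cos(2 * np.pi * 0.4 * t),
    np.exp(-0.05 * t) * np.sin(2 * np.pi * 0.4 * t),
])
mix = np.random.default_rng(0).standard_normal((10, 4))
X = mix @ latent

A_tilde, eigvals, modes = dmd(X, rank=4)
dt = t[1] - t[0]
# Expect roughly -0.1 +/- 6.283j and -0.05 +/- 2.513j (decay rates and angular frequencies)
print("continuous-time eigenvalues:", np.round(np.log(eigvals) / dt, 3))
```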
Calculus domains modelled using an original bool algebra based on polygons
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2016-08-01
Analytical and numerical computer-based models require analytical definitions of the calculus domains. The paper presents a method to model a calculus domain based on a Boolean algebra which uses solid and hollow polygons. The general calculus relations of the geometrical characteristics that are widely used in mechanical engineering are tested using several shapes of the calculus domain in order to draw conclusions regarding the most effective methods to discretize the domain. The paper also tests the results of several commercial CAD software applications which are able to compute the geometrical characteristics, and interesting conclusions are drawn. The tests also targeted the accuracy of the results vs. the number of nodes on the curved boundary of the cross section. The study required the development of an original software application consisting of more than 1700 lines of computer code. In comparison with other calculus methods, discretization using convex polygons is a simpler approach. Moreover, this method does not lead to the very large numbers that the spline approximation did, which required special software packages offering multiple, arbitrary precision. The knowledge resulting from this study may be used to develop complex computer-based models in engineering.
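A compact sketch of the underlying idea, assuming simple polygons given counter-clockwise, computes the area and centroid of a domain built from solid and hollow polygons with the shoelace formulas; it is a toy stand-in for the paper's 1700-line implementation, which covers further geometrical characteristics.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Polygon:
    vertices: List[Tuple[float, float]]   # counter-clockwise, not self-intersecting
    hollow: bool = False                  # hollow polygons subtract from the domain

def area_and_centroid(polys: List[Polygon]):
    """Area and centroid of a calculus domain built from solid and hollow polygons
    (shoelace formulas; hollow regions contribute with a negative sign)."""
    A = Sx = Sy = 0.0
    for poly in polys:
        sign = -1.0 if poly.hollow else 1.0
        pts = poly.vertices
        for (x0, y0), (x1, y1) in zip(pts, pts[1:] + pts[:1]):
            cross = x0 * y1 - x1 * y0
            A  += sign * cross / 2.0
            Sx += sign * (x0 + x1) * cross / 6.0
            Sy += sign * (y0 + y1) * cross / 6.0
    return A, (Sx / A, Sy / A)

# Example: a 4x2 rectangle with a 1x1 square hole
domain = [
    Polygon([(0, 0), (4, 0), (4, 2), (0, 2)]),
    Polygon([(1, 0.5), (2, 0.5), (2, 1.5), (1, 1.5)], hollow=True),
]
area, centroid = area_and_centroid(domain)
print(area, centroid)   # 7.0, centroid shifted away from the hole
```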
Giacomo, Della Riccia; Stefania, Del Zotto
2013-12-15
Fumonisins are mycotoxins produced by Fusarium species that commonly live in maize. Whereas the fungi damage plants, fumonisins cause disease in both cattle and human beings. Legal limits set the tolerable daily intake of fumonisins for several maize-based feeds and foods. Chemical techniques assure the most reliable and accurate measurements, but they are expensive and time consuming. A method based on near-infrared spectroscopy and multivariate statistical regression is described as a simpler, cheaper and faster alternative. We apply partial least squares with full cross-validation. Two models are described, having high correlations of calibration (0.995, 0.998) and of validation (0.908, 0.909), respectively. The description of the observed phenomenon is accurate and overfitting is avoided. Screening of contaminated maize with respect to the European legal limit of 4 mg kg⁻¹ should be assured. Copyright © 2013 Elsevier Ltd. All rights reserved.
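A minimal sketch of PLS calibration with full (leave-one-out) cross-validation, using scikit-learn and synthetic stand-in spectra rather than the actual NIR data, could look as follows; the sample size, wavelengths, and number of latent variables are assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Synthetic stand-in for NIR spectra (n samples x p wavelengths) and fumonisin levels;
# real data would come from the spectrometer and reference chemical analysis.
rng = np.random.default_rng(0)
n, p = 60, 200
spectra = rng.random((n, p))
fumonisin = 3.0 * spectra[:, 50] + 2.0 * spectra[:, 120] + 0.1 * rng.standard_normal(n)

pls = PLSRegression(n_components=5)
pls.fit(spectra, fumonisin)
cal_pred = pls.predict(spectra).ravel()

# "Full cross-validation" = leave-one-out prediction of each sample
val_pred = cross_val_predict(pls, spectra, fumonisin, cv=LeaveOneOut()).ravel()

r_cal = np.corrcoef(fumonisin, cal_pred)[0, 1]
r_val = np.corrcoef(fumonisin, val_pred)[0, 1]
rmsecv = np.sqrt(np.mean((fumonisin - val_pred) ** 2))
print(f"r(calibration) = {r_cal:.3f}, r(validation) = {r_val:.3f}, RMSECV = {rmsecv:.3f}")
# A sample could then be screened against the 4 mg/kg legal limit via its predicted value.
```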
Managing Disease Risks from Trade: Strategic Behavior with Many Choices and Price Effects.
Chitchumnong, Piyayut; Horan, Richard D
2018-03-16
An individual's infectious disease risks, and hence the individual's incentives for risk mitigation, may be influenced by others' risk management choices. If so, then there will be strategic interactions among individuals, whereby each makes his or her own risk management decisions based, at least in part, on the expected decisions of others. Prior work has shown that multiple equilibria could arise in this setting, with one equilibrium being a coordination failure in which individuals make too few investments in protection. However, these results are largely based on simplified models involving a single management choice and fixed prices that may influence risk management incentives. Relaxing these assumptions, we find strategic interactions influence, and are influenced by, choices involving multiple management options and market price effects. In particular, we find these features can reduce or eliminate concerns about multiple equilibria and coordination failure. This has important policy implications relative to simpler models.
A nonlinear CDM based damage growth law for ductile materials
NASA Astrophysics Data System (ADS)
Gautam, Abhinav; Priya Ajit, K.; Sarkar, Prabir Kumar
2018-02-01
A nonlinear ductile damage growth criterion is proposed based on a continuum damage mechanics (CDM) approach. The model is derived in the framework of thermodynamically consistent CDM, assuming damage to be isotropic. In this study, the damage dissipation potential is also derived to be a function of the varying strain-hardening exponent in addition to the damage strain energy release rate density. Uniaxial tensile tests and load-unload-cyclic tensile tests for AISI 1020 steel, AISI 1030 steel and Al 2024 aluminum alloy are considered for the determination of their respective damage variable D and the other parameters required for the model(s). The experimental results are very closely predicted by the proposed model for each of the materials, with a deviation of 0%-3%. The model is also tested against the damage-growth predictions of other models in the literature. The present model detects the state of damage quantitatively at any level of plastic strain and uses simpler material tests to find its parameters, so it should be useful in metal-forming industries to assess damage growth a priori for a desired deformation level. The superiority of the new model is demonstrated by the larger deviations of the other models' predictions from the test results.
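As an illustration of how the damage variable D is typically obtained from load-unload-reload cycles, the sketch below uses the standard stiffness-degradation relation D = 1 - E_d/E_0 with made-up unloading moduli; the paper's specific growth law and fitted parameters are not reproduced here.

```python
import numpy as np

def damage_from_unloading(E0, E_unload):
    """Isotropic CDM damage variable from stiffness degradation, D = 1 - E_d / E_0,
    as commonly extracted from load-unload-reload tensile cycles."""
    return 1.0 - np.asarray(E_unload) / E0

# Illustrative (made-up) unloading moduli measured at increasing plastic strain
E0 = 200e3                                    # MPa, virgin elastic modulus
eps_p = np.array([0.00, 0.02, 0.05, 0.10])    # plastic strain levels
E_unload = np.array([200e3, 192e3, 181e3, 166e3])
D = damage_from_unloading(E0, E_unload)
for e, d in zip(eps_p, D):
    print(f"plastic strain {e:4.2f}:  D = {d:.3f}")
```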
Bu, Xiangwei; Wu, Xiaoyan; Tian, Mingyan; Huang, Jiaqi; Zhang, Rui; Ma, Zhen
2015-09-01
In this paper, an adaptive neural controller is developed for a constrained flexible air-breathing hypersonic vehicle (FAHV) based on a high-order tracking differentiator (HTD). By utilizing a functional decomposition methodology, the dynamic model is reasonably decomposed into a velocity subsystem and an altitude subsystem. For the velocity subsystem, a dynamic inversion based neural controller is constructed. By introducing the HTD to adaptively estimate the newly defined states generated in the process of model transformation, a novel neural altitude controller that is considerably simpler than those derived from back-stepping is developed based on the normal output-feedback form instead of the strict-feedback formulation. Based on a minimal-learning-parameter scheme, only two neural networks with two adaptive parameters are needed for neural approximation. In particular, a novel auxiliary system is introduced to deal with the problem of control input constraints. Finally, simulation results are presented to test the effectiveness of the proposed control strategy in the presence of system uncertainties and actuator constraints. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Poudel, Deepesh; Klumpp, John A.; Waters, Tom L.; ...
2017-07-14
The NCRP-156 Report proposes seven different biokinetic models for wound cases, depending on the physicochemistry of the contaminant. Because the models were heavily based on experimental animal data, the authors of the report encouraged application and validation of the models using bioassay data from actual human exposures. Each of the wound models was applied to three plutonium-contaminated wounds, and the models agreed well with only one of the cases. We then applied a simpler biokinetic model structure to the bioassay data and showed that fitting the transfer rates of this model structure yielded better agreement with the data than does the best-fitting NCRP-156 model. Because the biokinetics of radioactive material in each wound is different, it is impractical to propose a discrete set of model parameters to describe the biokinetics of radionuclides in all wounds, and thus each wound should be treated empirically.
Interactive, process-oriented climate modeling with CLIMLAB
NASA Astrophysics Data System (ADS)
Rose, B. E. J.
2016-12-01
Global climate is a complex emergent property of the rich interactions between simpler components of the climate system. We build scientific understanding of this system by breaking it down into component process models (e.g. radiation, large-scale dynamics, boundary layer turbulence), understanding each component, and putting them back together. Hands-on experience and freedom to tinker with climate models (whether simple or complex) is invaluable for building physical understanding. CLIMLAB is an open-ended software engine for interactive, process-oriented climate modeling. With CLIMLAB you can interactively mix and match model components, or combine simpler process models together into a more comprehensive model. It was created primarily to support classroom activities, using hands-on modeling to teach fundamentals of climate science at both undergraduate and graduate levels. CLIMLAB is written in Python and ties in with the rich ecosystem of open-source scientific Python tools for numerics and graphics. The Jupyter Notebook format provides an elegant medium for distributing interactive example code. I will give an overview of the current capabilities of CLIMLAB, the curriculum we have developed thus far, and plans for the future. Using CLIMLAB requires some basic Python coding skills. We consider this an educational asset, as we are targeting upper-level undergraduates and Python is an increasingly important language in STEM fields.
Relativity Based on Physical Processes Rather Than Space-Time
NASA Astrophysics Data System (ADS)
Giese, Albrecht
2013-09-01
Physicists' understanding of relativity and the way it is handled is at present dominated by the interpretation of Albert Einstein, who related relativity to specific properties of space and time. The principal alternative to Einstein's interpretation is based on a concept proposed by Hendrik A. Lorentz, which uses knowledge of classical physics to explain relativistic phenomena. In this paper, we will show that on the one hand the Lorentz-based interpretation provides a simpler mathematical way of arriving at the known results for both Special and General Relativity. On the other hand, it is able to solve problems which have remained open to this day. Furthermore, a particle model will be presented, based on Lorentzian relativity, which explains the origin of mass without the use of the Higgs mechanism, based on the finiteness of the speed of light, and which provides the classical results for particle properties that are currently only accessible through quantum mechanics.
NASA Technical Reports Server (NTRS)
Baldwin, B. S.; Maccormack, R. W.; Deiwert, G. S.
1975-01-01
The time-splitting explicit numerical method of MacCormack is applied to separated turbulent boundary layer flow problems. Modifications of this basic method are developed to counter difficulties associated with complicated geometry and severe numerical resolution requirements of turbulence model equations. The accuracy of solutions is investigated by comparison with exact solutions for several simple cases. Procedures are developed for modifying the basic method to improve the accuracy. Numerical solutions of high-Reynolds-number separated flows over an airfoil and shock-separated flows over a flat plate are obtained. A simple mixing length model of turbulence is used for the transonic flow past an airfoil. A nonorthogonal mesh of arbitrary configuration facilitates the description of the flow field. For the simpler geometry associated with the flat plate, a rectangular mesh is used, and solutions are obtained based on a two-equation differential model of turbulence.
NASA Technical Reports Server (NTRS)
Lopez, Armando E.; Buell, Donald A.; Tinling, Bruce E.
1959-01-01
Wind-tunnel measurements were made of the static and dynamic rotary stability derivatives of an airplane model having sweptback wing and tail surfaces. The Mach number range of the tests was from 0.23 to 0.94. The components of the model were tested in various combinations so that the separate contribution to the stability derivatives of the component parts and the interference effects could be determined. Estimates of the dynamic rotary derivatives based on some of the simpler existing procedures which utilize static force data were found to be in reasonable agreement with the experimental results at low angles of attack. The results of the static and dynamic measurements were used to compute the short-period oscillatory characteristics of an airplane geometrically similar to the test model. The results of these calculations are compared with military flying qualities requirements.
A new MRI land surface model HAL
NASA Astrophysics Data System (ADS)
Hosaka, M.
2011-12-01
A land surface model, HAL, has been newly developed for MRI-ESM1 and is used for the CMIP simulations. HAL consists of three submodels in the current version: SiByl (vegetation), SNOWA (snow) and SOILA (soil). It also contains a land coupler, LCUP, which connects the submodels with an atmospheric model. The vegetation submodel SiByl has surface vegetation processes similar to JMA/SiB (Sato et al. 1987, Hirai et al. 2007). SiByl has 2 vegetation layers (canopy and grass) and calculates heat, moisture, and momentum fluxes between the land surface and the atmosphere. The snow submodel SNOWA can have any number of snow layers; the maximum is set to 8 for the CMIP5 experiments. Temperature, SWE, density, grain size and the aerosol deposition content of each layer are predicted. The snow properties, including the grain size, are predicted with snow metamorphism processes (Niwano et al., 2011), and the snow albedo is diagnosed from the aerosol mixing ratio, the snow properties and the temperature (Aoki et al., 2011). The soil submodel SOILA can also have any number of soil layers, and is composed of 14 soil layers in the CMIP5 experiments. The temperature of each layer is predicted by solving heat conduction equations. The soil moisture is predicted by solving the Darcy equation, in which the hydraulic conductivity depends on the soil moisture. The land coupler LCUP is designed to handle complicated configurations of the submodels. HAL can include competing submodels (precise, detailed ones alongside simpler ones), and they can run in the same simulation. LCUP enables a 2-step model validation, in which we first compare the results of the detailed submodels directly with in-situ observations, and then compare them with the results of the simpler submodels. When the performance of the detailed submodels is good, we can improve the simpler ones by using the detailed ones as reference models.
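As a purely illustrative aside (not HAL's actual code), the layer-by-layer heat conduction step described for SOILA can be sketched with an explicit finite-difference update; the layer count, soil properties, and forcing below are hypothetical values, not MRI's settings.

    import numpy as np

    # Explicit update for 1-D soil heat conduction, rho*c * dT/dt = d/dz(k dT/dz).
    n_layers, dz, dt = 14, 0.1, 600.0      # 14 layers of 10 cm, 10-minute step
    k, rho_c = 1.0, 2.0e6                  # conductivity (W/m/K), volumetric heat capacity (J/m3/K)
    T = np.full(n_layers, 283.0)           # initial soil temperature profile (K)
    T_surface = 290.0                      # upper boundary condition from the atmosphere/snow model

    for _ in range(144):                   # one day of time steps
        # downward heat flux at the top interface of each layer
        flux = -k * np.diff(np.concatenate(([T_surface], T))) / dz
        flux = np.append(flux, 0.0)        # zero-flux lower boundary
        T += dt * (flux[:-1] - flux[1:]) / (rho_c * dz)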
Towards Run-time Assurance of Advanced Propulsion Algorithms
NASA Technical Reports Server (NTRS)
Wong, Edmond; Schierman, John D.; Schlapkohl, Thomas; Chicatelli, Amy
2014-01-01
This paper covers the motivation and rationale for investigating the application of run-time assurance methods as a potential means of providing safety assurance for advanced propulsion control systems. Certification is becoming increasingly infeasible for such systems using current verification practices. Run-time assurance systems hold the promise of certifying these advanced systems by continuously monitoring the state of the feedback system during operation and reverting to a simpler, certified system if anomalous behavior is detected. The discussion will also cover initial efforts underway to apply a run-time assurance framework to NASA's model-based engine control approach. Preliminary experimental results are presented and discussed.
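A minimal sketch of the generic run-time assurance switching idea described above is given below; the function and monitor names are hypothetical and do not represent NASA's actual framework.

    # Generic run-time assurance switching logic (illustrative only).
    def run_time_assurance(state, advanced_controller, baseline_controller, in_safe_envelope):
        """Use the advanced controller while the monitored state stays inside the
        certified safety envelope; otherwise revert to the certified baseline."""
        if in_safe_envelope(state):
            return advanced_controller(state)
        return baseline_controller(state)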
Schryver, Jack; Nutaro, James; Shankar, Mallikarjun
2015-10-30
An agent-based simulation model hierarchy emulating disease states and behaviors critical to the progression of type 2 diabetes was designed and implemented in the DEVS framework. The models are translations of basic elements of an established system dynamics model of diabetes. In this model hierarchy, diabetes progression over an aggregated U.S. population was dis-aggregated and reconstructed bottom-up at the individual (agent) level. Four levels of model complexity were defined in order to systematically evaluate which parameters are needed to mimic outputs of the system dynamics model. Moreover, the four estimated models attempted to replicate stock counts representing disease states in the system dynamics model, while estimating impacts of an elderliness factor, an obesity factor, and health-related behavioral parameters. Health-related behavior was modeled as a simple realization of the Theory of Planned Behavior, a joint function of individual attitude and diffusion of social norms that spread over each agent's social network. Although the most complex agent-based simulation model contained 31 adjustable parameters, all models were considerably less complex than the system dynamics model, which required numerous time series inputs to make its predictions. All three elaborations of the baseline model provided significantly improved fits to the output of the system dynamics model. The performances of the baseline agent-based model and its extensions illustrate a promising approach to translating complex system dynamics models into agent-based alternatives that are both conceptually simpler and capable of capturing the main effects of complex local agent-agent interactions.
Earth observation data based rapid flood-extent modelling for tsunami-devastated coastal areas
NASA Astrophysics Data System (ADS)
Hese, Sören; Heyer, Thomas
2016-04-01
Earth observation (EO)-based mapping and analysis of natural hazards plays a critical role in various aspects of post-disaster aid management. Very high-spatial-resolution Earth observation data provide important information for managing post-tsunami activities on devastated land and monitoring re-cultivation and reconstruction. The automatic and fast use of high-resolution EO data for rapid mapping is, however, complicated by high spectral variability in densely populated urban areas and unpredictable textural and spectral land-surface changes. The present paper presents the results of the SENDAI project, which developed an automatic post-tsunami flood-extent modelling concept using RapidEye multispectral satellite data and ASTER Global Digital Elevation Model Version 2 (GDEM V2) data of the eastern coast of Japan (captured after the Tohoku earthquake). In this paper, the authors developed both a bathtub-modelling approach and a cost-distance approach, and integrated the roughness parameters of different land-use types to increase the accuracy of flood-extent modelling. Overall, the accuracy of the developed models reached 87-92%, depending on the analysed test site. The flood-modelling approach was explained and results were compared with published approaches. We came to the conclusion that the cost-factor-based approach reaches accuracy comparable to published results from hydrological modelling. However, the proposed cost-factor approach is based on a much simpler dataset, which is available globally.
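As a hedged illustration of the baseline bathtub step described above (without the cost-distance and roughness factors the SENDAI project adds), a DEM cell is flooded when it lies below the water level and is connected to the coastline; the array names are hypothetical.

    import numpy as np
    from scipy import ndimage

    # Minimal bathtub flood-extent sketch (illustrative only).
    def bathtub_flood(dem, water_level, coast_mask):
        """dem: 2-D elevation array; coast_mask: boolean array of coastline seed cells."""
        below = dem <= water_level                        # cells low enough to flood
        labels, _ = ndimage.label(below)                  # connected low-lying regions
        coastal_labels = np.unique(labels[coast_mask & below])
        return np.isin(labels, coastal_labels[coastal_labels > 0])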
Molitor, John
2012-03-01
Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
Adaptation of a general circulation model to ocean dynamics
NASA Technical Reports Server (NTRS)
Turner, R. E.; Rees, T. H.; Woodbury, G. E.
1976-01-01
A primitive-variable general circulation model of the ocean was formulated in which fast external gravity waves are suppressed with rigid-lid surface constraint pressures, which also provide a means for simulating the effects of large-scale free-surface topography. The surface pressure method is simpler to apply than the conventional stream function models, and the resulting model can be applied to both global ocean and limited-region situations. Strengths and weaknesses of the model are also presented.
Connector For Embedded Optical Fiber
NASA Technical Reports Server (NTRS)
Wilkerson, Charles; Hiles, Steven; Houghton, J. Richard; Holland, Brent W.
1994-01-01
Partly embedded fixture is simpler and sturdier than other types of outlets for optical fibers embedded in solid structures. No need to align coupling prism and lenses. Fixture includes base, tube bent at 45 degree angle, and ceramic ferrule.
Velocity-image model for online signature verification.
Khan, Mohammad A U; Niazi, Muhammad Khalid Khan; Khan, Muhammad Aurangzeb
2006-11-01
In general, online signature capturing devices provide outputs in the form of shape and velocity signals. In the past, strokes have been extracted by tracking velocity signal minima. However, the resulting strokes are large and complicated in shape and thus make the subsequent job of generating a discriminative template difficult. We propose a new stroke-based algorithm that splits the velocity signal into various bands. Based on these bands, strokes are extracted which are smaller and simpler in nature. Training of our proposed system revealed that the low- and high-velocity bands of the signal are unstable, whereas the medium-velocity band can be used for discrimination purposes. Euclidean distances of strokes extracted on the basis of the medium-velocity band are used for verification. The experiments conducted show an improvement in the discriminative capability of the proposed stroke-based system.
Blur identification by multilayer neural network based on multivalued neurons.
Aizenberg, Igor; Paliy, Dmitriy V; Zurada, Jacek M; Astola, Jaakko T
2008-05-01
A multilayer neural network based on multivalued neurons (MLMVN) is a neural network with a traditional feedforward architecture. At the same time, this network has a number of specific distinguishing features. Its backpropagation learning algorithm is derivative-free. The functionality of MLMVN is superior to that of traditional feedforward neural networks and of a variety of kernel-based networks. Its higher flexibility and faster adaptation to the target mapping make it possible to model complex problems using simpler networks. In this paper, the MLMVN is used to identify both the type and the parameters of the point spread function, whose precise identification is of crucial importance for image deblurring. The simulation results show the high efficiency of the proposed approach. It is confirmed that the MLMVN is a powerful tool for solving classification problems, especially multiclass ones.
Cobelli, Claudio; Dalla Man, Chiara; Toffolo, Gianna; Basu, Rita; Vella, Adrian; Rizza, Robert
2014-01-01
The simultaneous assessment of insulin action, secretion, and hepatic extraction is key to understanding postprandial glucose metabolism in nondiabetic and diabetic humans. We review the oral minimal method (i.e., models that allow the estimation of insulin sensitivity, β-cell responsivity, and hepatic insulin extraction from a mixed-meal or an oral glucose tolerance test). Both of these oral tests are more physiologic and simpler to administer than those based on an intravenous test (e.g., a glucose clamp or an intravenous glucose tolerance test). The focus of this review is on indices provided by physiological-based models and their validation against the glucose clamp technique. We discuss first the oral minimal model method rationale, data, and protocols. Then we present the three minimal models and the indices they provide. The disposition index paradigm, a widely used β-cell function metric, is revisited in the context of individual versus population modeling. Adding a glucose tracer to the oral dose significantly enhances the assessment of insulin action by segregating insulin sensitivity into its glucose disposal and hepatic components. The oral minimal model method, by quantitatively portraying the complex relationships between the major players of glucose metabolism, is able to provide novel insights regarding the regulation of postprandial metabolism. PMID:24651807
Jardine, Bartholomew; Raymond, Gary M; Bassingthwaighte, James B
2015-01-01
The Modular Program Constructor (MPC) is an open-source Java based modeling utility, built upon JSim's Mathematical Modeling Language (MML) ( http://www.physiome.org/jsim/) that uses directives embedded in model code to construct larger, more complicated models quickly and with less error than manually combining models. A major obstacle in writing complex models for physiological processes is the large amount of time it takes to model the myriad processes taking place simultaneously in cells, tissues, and organs. MPC replaces this task with code-generating algorithms that take model code from several different existing models and produce model code for a new JSim model. This is particularly useful during multi-scale model development where many variants are to be configured and tested against data. MPC encodes and preserves information about how a model is built from its simpler model modules, allowing the researcher to quickly substitute or update modules for hypothesis testing. MPC is implemented in Java and requires JSim to use its output. MPC source code and documentation are available at http://www.physiome.org/software/MPC/.
Estimating economic thresholds for pest control: an alternative procedure.
Ramirez, O A; Saunders, J L
1999-04-01
An alternative methodology to determine profit-maximizing economic thresholds is developed and illustrated. An optimization problem based on the main biological and economic relations involved in determining a profit-maximizing economic threshold is first advanced. From it, a more manageable model of 2 nonsimultaneous reduced-form equations is derived, which represents a simpler but conceptually and statistically sound alternative. The model recognizes that yields and pest control costs are a function of the economic threshold used. Higher (less strict) economic thresholds can result in lower yields and, therefore, a lower gross income from the sale of the product, but could also be less costly to maintain. The highest possible profits will be obtained by using the economic threshold that results in a maximum difference between the gross income and pest control cost functions.
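The optimization amounts to choosing the threshold that maximizes the gap between gross income and pest control cost. The sketch below illustrates that search with hypothetical functional forms, not the paper's estimated reduced-form equations.

    import numpy as np

    # Illustrative profit-maximizing threshold search with made-up income/cost curves.
    thresholds = np.linspace(0.5, 10.0, 200)      # candidate economic thresholds (pests per plant)
    gross_income = 1200.0 - 35.0 * thresholds     # yield-driven revenue falls as the threshold rises
    control_cost = 400.0 / thresholds             # stricter thresholds require more frequent control
    profit = gross_income - control_cost
    best = thresholds[np.argmax(profit)]
    print("profit-maximizing threshold:", round(best, 2))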
Gureckis, Todd M.; Love, Bradley C.
2009-01-01
We evaluate two broad classes of cognitive mechanisms that might support the learning of sequential patterns. According to the first, learning is based on the gradual accumulation of direct associations between events, based on simple conditioning principles. The other view describes learning as the process of inducing the transformational structure that defines the material. Each of these learning mechanisms predicts differences in the rate of acquisition for differently organized sequences. Across a set of empirical studies, we compare the predictions of each class of model with the behavior of human subjects. We find that learning mechanisms based on transformations of an internal state, such as recurrent network architectures (e.g., Elman, 1990), have difficulty accounting for the pattern of human results relative to a simpler (but more limited) learning mechanism based on learning direct associations. Our results suggest new constraints on the cognitive mechanisms supporting sequential learning behavior. PMID:20396653
Novel Phenotype Issues Raised in Cross-National Epidemiological Research on Drug Dependence
Anthony, James C.
2010-01-01
Stage-transition models based on the American Diagnostic and Statistical Manual (DSM) generally are applied in epidemiology and genetics research on drug dependence syndromes associated with cannabis, cocaine, and other internationally regulated drugs (IRD). Difficulties with DSM stage-transition models have surfaced during cross-national research intended to provide a truly global perspective, such as the work of the World Mental Health Surveys (WMHS) Consortium. Alternative simpler dependence-related phenotypes are possible, including population-level count process models for steps early and before coalescence of clinical features into a coherent syndrome (e.g., zero-inflated Poisson regression). Selected findings are reviewed, based on ZIP modeling of alcohol, tobacco, and IRD count processes, with an illustration that may stimulate new research on genetic susceptibility traits. The annual National Surveys on Drug Use and Health can be readily modified for this purpose, along the lines of a truly anonymous research approach that can help make NSDUH-type cross-national epidemiological surveys more useful in the context of subsequent genome wide association (GWAS) research and post-GWAS investigations with a truly global health perspective. PMID:20201862
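For reference, the zero-inflated Poisson (ZIP) model mentioned above has the standard two-part likelihood sketched below; this is a generic illustration of the form, not the NSDUH analysis itself, and the example counts and parameters are made up.

    import numpy as np
    from scipy.special import gammaln

    # Standard ZIP log-likelihood: with probability pi an individual is a structural zero,
    # otherwise counts follow Poisson(lam).
    def zip_loglik(counts, pi, lam):
        counts = np.asarray(counts, dtype=float)
        log_p_zero = np.log(pi + (1.0 - pi) * np.exp(-lam))
        log_p_pos = np.log(1.0 - pi) - lam + counts * np.log(lam) - gammaln(counts + 1.0)
        return np.where(counts == 0, log_p_zero, log_p_pos).sum()

    print(zip_loglik([0, 0, 3, 1, 0, 7], pi=0.4, lam=2.5))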
Slash pine plantation site index curves for the West Gulf
Stanley J. Zarnoch; D.P. Feduccia
1984-01-01
New slash pine (Pinus elliottii var. elliottii Engelm) plantation site index curves have been developed for the West Gulf. The guide curve is mathematically simpler than other available models, tracks the data well, and is more biologically reasonable outside the range of data.
NASA Astrophysics Data System (ADS)
Song, H. S.; Li, M.; Qian, W.; Song, X.; Chen, X.; Scheibe, T. D.; Fredrickson, J.; Zachara, J. M.; Liu, C.
2016-12-01
Modeling environmental microbial communities at individual organism level is currently intractable due to overwhelming structural complexity. Functional guild-based approaches alleviate this problem by lumping microorganisms into fewer groups based on their functional similarities. This reduction may become ineffective, however, when individual species perform multiple functions as environmental conditions vary. In contrast, the functional enzyme-based modeling approach we present here describes microbial community dynamics based on identified functional enzymes (rather than individual species or their groups). Previous studies in the literature along this line used biomass or functional genes as surrogate measures of enzymes due to the lack of analytical methods for quantifying enzymes in environmental samples. Leveraging our recent development of a signature peptide-based technique enabling sensitive quantification of functional enzymes in environmental samples, we developed a genetically structured microbial community model (GSMCM) to incorporate enzyme concentrations and various other omics measurements (if available) as key modeling input. We formulated the GSMCM based on the cybernetic metabolic modeling framework to rationally account for cellular regulation without relying on empirical inhibition kinetics. In the case study of modeling denitrification process in Columbia River hyporheic zone sediments collected from the Hanford Reach, our GSMCM provided a quantitative fit to complex experimental data in denitrification, including the delayed response of enzyme activation to the change in substrate concentration. Our future goal is to extend the modeling scope to the prediction of carbon and nitrogen cycles and contaminant fate. Integration of a simpler version of the GSMCM with PFLOTRAN for multi-scale field simulations is in progress.
NASA Astrophysics Data System (ADS)
Nguyen, Tien Long; Sansour, Carlo; Hjiaj, Mohammed
2017-05-01
In this paper, an energy-momentum method for geometrically exact Timoshenko-type beams is proposed. The classical time integration schemes in dynamics are known to exhibit instability in the non-linear regime. The so-called Timoshenko-type beam, with the use of a rotational degree of freedom, leads to simpler strain relations and simpler expressions of the inertial terms compared to the well-known Bernoulli-type model. The treatment of the Bernoulli-type model has recently been addressed by the authors. In the present work, we extend our approach of using the strain rates to define the strain fields to in-plane geometrically exact Timoshenko-type beams. The large rotational degrees of freedom are computed exactly. The well-known enhanced strain method is used to avoid locking phenomena. Conservation of energy, momentum and angular momentum is proved formally and numerically. The excellent performance of the formulation is demonstrated through a range of examples.
USDA-ARS?s Scientific Manuscript database
Molecular detection of bacterial pathogens based on LAMP methods is a faster and simpler approach than conventional culture methods. Although different LAMP-based methods for pathogenic bacterial detection are available, a systematic comparison of these different LAMP assays has not been performed. ...
Rebreathed air as a reference for breath-alcohol testers
DOT National Transportation Integrated Search
1975-01-01
A technique has been devised for a reference measurement of the performance of breath-alcohol measuring instruments directly from the respiratory system. It is shown that this technique is superior to, and simpler than, comparison measurements based on bl...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Peng, Ren-Ci; Nan, Ce-Wen, E-mail: jzw12@psu.edu, E-mail: cwnan@tsinghua.edu.cn; Wang, J. J., E-mail: jzw12@psu.edu, E-mail: cwnan@tsinghua.edu.cn
Based on phase field modeling and thermodynamic analysis, purely electric-field-driven magnetization reversal was shown to be possible in a multiferroic heterostructure of a square-shaped amorphous Co40Fe40B20 nanomagnet on top of a ferroelectric layer through electrostrain. The reversal is made possible by engineering the mutual interactions among the built-in uniaxial magnetic anisotropy, the geometry-dependent magnetic configuration anisotropy, and the magnetoelastic anisotropy. Particularly, the incorporation of the built-in uniaxial anisotropy made it possible to reverse magnetization with one single unipolar electrostrain pulse, which is simpler than previous designs involving the use of bipolar electrostrains and may alleviate ferroelectric fatigue. Critical conditions for triggering the magnetization reversal are identified.
Thermal and structural analysis of the GOES scan mirror's on orbit performance
NASA Technical Reports Server (NTRS)
Zurmehly, G. E.; Hookman, R. A.
1991-01-01
The on-orbit performance of the GOES satellite's scan mirror has been predicted by means of thermal, structural, and optical models. A simpler-than-conventional thermal model was used to reduce the time required to obtain orbital predictions, and the structural model was used to predict on-earth gravity sag and on-orbit distortions. The transfer of data from the thermal model to the structural model was automated for a given set of thermal nodes and structural grids.
Wavelet-based 3-D inversion for frequency-domain airborne EM data
NASA Astrophysics Data System (ADS)
Liu, Yunhe; Farquharson, Colin G.; Yin, Changchun; Baranwal, Vikas C.
2018-04-01
In this paper, we propose a new wavelet-based 3-D inversion method for frequency-domain airborne electromagnetic (FDAEM) data. Instead of inverting the model in the space domain using a smoothing constraint, this new method recovers the model in the wavelet domain based on a sparsity constraint. In the wavelet domain, the model is represented by two types of coefficients, which contain both large- and fine-scale information about the model, meaning the wavelet-domain inversion has inherent multiresolution. In order to accomplish a sparsity constraint, we minimize an L1-norm measure in the wavelet domain that mostly gives a sparse solution. The final inversion system is solved by an iteratively reweighted least-squares method. We investigate different orders of Daubechies wavelets in our inversion algorithm, and test them on a synthetic frequency-domain AEM data set. The results show that higher-order wavelets, having larger vanishing moments and regularity, can deliver a more stable inversion process and give better local resolution, while the lower-order wavelets are simpler and less smooth, and thus capable of recovering sharp discontinuities if the model is simple. Finally, we test this new inversion algorithm on a frequency-domain helicopter EM (HEM) field data set acquired in Byneset, Norway. The wavelet-based 3-D inversion of the HEM data is compared with the result of an L2-norm-based 3-D inversion to further investigate the features of the new method.
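A generic sketch of the iteratively reweighted least-squares treatment of an L1 (sparsity) penalty, of the kind described above, is shown below for a small dense problem; it is illustrative only and is not the authors' 3-D AEM implementation.

    import numpy as np

    # Generic IRLS sketch for min ||G m - d||^2 + lam * ||m||_1, with m the
    # (e.g. wavelet-domain) model coefficients.
    def irls_l1(G, d, lam=1e-2, n_iter=20, eps=1e-6):
        m = np.linalg.lstsq(G, d, rcond=None)[0]      # least-squares warm start
        for _ in range(n_iter):
            w = 1.0 / (np.abs(m) + eps)               # reweighting approximates the L1 norm
            A = G.T @ G + lam * np.diag(w)
            m = np.linalg.solve(A, G.T @ d)
        return m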
A Selected Library of Transport Coefficients for Combustion and Plasma Physics Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cloutman, L.D.
2000-08-01
COYOTE and similar combustion programs based on the multicomponent Navier-Stokes equations require the mixture viscosity, thermal conductivity, and species transport coefficients as input. This report documents a model of these molecular transport coefficients that is simpler than the general theory, but which provides adequate accuracy for many purposes. This model leads to a computationally convenient, self-contained, and easy-to-use source of such data in a format suitable for use by such programs. We present the data for various neutral species in two forms. The first form is a simple functional fit to the transport coefficients. The second form is the use of tabulated Lennard-Jones parameters in simple theoretical expressions for the gas-phase transport coefficients. The model then is extended to the case of a two-temperature plasma. Lennard-Jones parameters are given for a number of chemical species of interest in combustion research.
Simple estimate of critical volume
NASA Technical Reports Server (NTRS)
Fedors, R. F.
1980-01-01
Method for estimating critical molar volume of materials is faster and simpler than previous procedures. Formula sums no more than 18 different contributions from components of chemical structure of material, and is as accurate (within 3 percent) as older more complicated models. Method should expedite many thermodynamic design calculations.
Getting SaaS-y. Why the sisters of Mercy Health System opted for on-demand portfolio management.
Carter, Jay
2011-03-01
Sisters of Mercy Health System chose the SaaS model as a simpler way to plan, execute, and monitor strategic business initiatives. It also provided something that was easy to use and offered quick time to value.
Petersen, James H.; DeAngelis, Donald L.
1992-01-01
The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on a comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
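For reference, the Holling Type 2 and Type 3 functional responses referred to above take the standard forms sketched below; the attack rate and handling time values are arbitrary illustrative choices, not the fitted Columbia River parameters.

    import numpy as np

    # Standard Holling functional-response forms (prey captured per predator per unit time).
    def holling_type2(N, a=0.5, h=0.1):
        return a * N / (1.0 + a * h * N)

    def holling_type3(N, a=0.5, h=0.1):
        return a * N**2 / (1.0 + a * h * N**2)

    prey_density = np.linspace(0.0, 50.0, 6)
    print(holling_type2(prey_density))
    print(holling_type3(prey_density))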
Church, Sheri A; Livingstone, Kevin; Lai, Zhao; Kozik, Alexander; Knapp, Steven J; Michelmore, Richard W; Rieseberg, Loren H
2007-02-01
Using likelihood-based variable selection models, we determined whether positive selection was acting on 523 EST sequence pairs from two lineages of sunflower and lettuce. Variable rate models are generally not used for comparisons of sequence pairs due to the limited information and the inaccuracy of estimates of specific substitution rates. However, previous studies have shown that the likelihood ratio test (LRT) is reliable for detecting positive selection, even with low numbers of sequences. These analyses identified 56 genes that show a signature of selection, of which 75% were not identified by simpler models that average selection across codons. Subsequent mapping studies in sunflower show that four of the five positively selected genes identified by these methods mapped to domestication QTLs. We discuss the validity and limitations of using variable rate models for comparisons of sequence pairs, as well as the limitations of using ESTs for identification of positively selected genes.
Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher
2017-05-18
Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.
Lobach, Irvna; Fan, Ruzone; Carroll, Raymond T.
2011-01-01
With the advent of dense single nucleotide polymorphism genotyping, population-based association studies have become the major tools for identifying human disease genes and for fine gene mapping of complex traits. We develop a genotype-based approach for association analysis of case-control studies of gene-environment interactions in the case when environmental factors are measured with error and genotype data are available on multiple genetic markers. To directly use the observed genotype data, we propose two genotype-based models: genotype effect and additive effect models. Our approach offers several advantages. First, the proposed risk functions can directly incorporate the observed genotype data while modeling the linkage disequilibrium information in the regression coefficients, thus eliminating the need to infer haplotype phase. Compared with the haplotype-based approach, an estimating procedure based on the proposed methods can be much simpler and significantly faster. In addition, there is no potential risk due to haplotype phase estimation. Further, by fitting the proposed models, it is possible to analyze the risk alleles/variants of complex diseases, including their dominant or additive effects. To model measurement error, we adopt the pseudo-likelihood method by Lobach et al. [2008]. Performance of the proposed method is examined using simulation experiments. An application of our method is illustrated using a population-based case-control study of association between calcium intake and the risk of colorectal adenoma development. PMID:21031455
Bayes factors for the linear ballistic accumulator model of decision-making.
Evans, Nathan J; Brown, Scott D
2018-04-01
Evidence accumulation models of decision-making have led to advances in several different areas of psychology. These models provide a way to integrate response time and accuracy data, and to describe performance in terms of latent cognitive processes. Testing important psychological hypotheses using cognitive models requires a method to make inferences about different versions of the models which assume different parameters to cause observed effects. The task of model-based inference using noisy data is difficult, and has proven especially problematic with current model selection methods based on parameter estimation. We provide a method for computing Bayes factors through Monte-Carlo integration for the linear ballistic accumulator (LBA; Brown and Heathcote, 2008), a widely used evidence accumulation model. Bayes factors are used frequently for inference with simpler statistical models, and they do not require parameter estimation. In order to overcome the computational burden of estimating Bayes factors via brute force integration, we exploit general purpose graphical processing units; we provide free code for this. This approach allows estimation of Bayes factors via Monte-Carlo integration within a practical time frame. We demonstrate the method using both simulated and real data. We investigate the stability of the Monte-Carlo approximation, and the LBA's inferential properties, in simulation studies.
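A hedged sketch of the underlying idea, brute-force Monte Carlo estimation of a marginal likelihood from prior draws (whose ratio between two models gives the Bayes factor), is given below; it is generic and not the authors' GPU-based LBA implementation, and all names are placeholders.

    import numpy as np

    # Marginal likelihood p(D|M) ~ average of the likelihood over draws from the prior.
    def marginal_likelihood(log_likelihood, prior_sampler, data, n_draws=100_000, rng=None):
        rng = rng or np.random.default_rng(0)
        draws = prior_sampler(n_draws, rng)                          # (n_draws, n_params) prior samples
        log_l = np.array([log_likelihood(theta, data) for theta in draws])
        return np.exp(log_l).mean()                                  # crude; a log-sum-exp form is safer

    # Bayes factor of model 1 over model 2: marginal_likelihood_1 / marginal_likelihood_2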
NASA Astrophysics Data System (ADS)
Bush, Drew; Sieber, Renee; Seiler, Gale; Chandler, Mark
2018-04-01
This study with 79 students in Montreal, Quebec, compared the educational use of a National Aeronautics and Space Administration (NASA) global climate model (GCM) to climate education technologies developed for classroom use that included simpler interfaces and processes. The goal was to show how differing climate education technologies succeed and fail at getting students to evolve in their understanding of anthropogenic global climate change (AGCC). Many available climate education technologies aim to convey key AGCC concepts or Earth systems processes; the educational GCM used here aims to teach students the methods and processes of global climate modeling. We hypothesized that challenges to learning about AGCC make authentic technology-enabled inquiry important in developing accurate understandings of not just the issue but how scientists research it. The goal was to determine if student learning trajectories differed between the comparison and treatment groups based on whether each climate education technology allowed authentic scientific research. We trace learning trajectories using pre/post exams, practice quizzes, and written student reflections. To examine the reasons for differing learning trajectories, we discuss student pre/post questionnaires, student exit interviews, and 535 min of recorded classroom video. Students who worked with a GCM demonstrated learning trajectories with larger gains, higher levels of engagement, and a better idea of how climate scientists conduct research. Students who worked with simpler climate education technologies scored lower in the course because of lower levels of engagement with inquiry processes that were perceived to not actually resemble the work of climate scientists.
A Program Structure for Event-Based Speech Synthesis by Rules within a Flexible Segmental Framework.
ERIC Educational Resources Information Center
Hill, David R.
1978-01-01
A program structure based on recently developed techniques for operating system simulation has the required flexibility for use as a speech synthesis algorithm research framework. This program makes synthesis possible with less rigid time and frequency-component structure than simpler schemes. It also meets real-time operation and memory-size…
Mikhalevich, Irina
2017-01-01
Behavioural flexibility is often treated as the gold standard of evidence for more sophisticated or complex forms of animal cognition, such as planning, metacognition and mindreading. However, the evidential link between behavioural flexibility and complex cognition has not been explicitly or systematically defended. Such a defence is particularly pressing because observed flexible behaviours can frequently be explained by putatively simpler cognitive mechanisms. This leaves complex cognition hypotheses open to ‘deflationary’ challenges that are accorded greater evidential weight precisely because they offer putatively simpler explanations of equal explanatory power. This paper challenges the blanket preference for simpler explanations, and shows that once this preference is dispensed with, and the full spectrum of evidence—including evolutionary, ecological and phylogenetic data—is accorded its proper weight, an argument in support of the prevailing assumption that behavioural flexibility can serve as evidence for complex cognitive mechanisms may begin to take shape. An adaptive model of cognitive-behavioural evolution is proposed, according to which the existence of convergent trait–environment clusters in phylogenetically disparate lineages may serve as evidence for the same trait–environment clusters in other lineages. This, in turn, could permit inferences of cognitive complexity in cases of experimental underdetermination, thereby placing the common view that behavioural flexibility can serve as evidence for complex cognition on firmer grounds. PMID:28479981
ERIC Educational Resources Information Center
Armoni, Michal; Gal-Ezer, Judith
2005-01-01
When dealing with a complex problem, solving it by reduction to simpler problems, or problems for which the solution is already known, is a common method in mathematics and other scientific disciplines, as in computer science and, specifically, in the field of computability. However, when teaching computational models (as part of computability)…
Stupid Tutoring Systems, Intelligent Humans
ERIC Educational Resources Information Center
Baker, Ryan S.
2016-01-01
The initial vision for intelligent tutoring systems involved powerful, multi-faceted systems that would leverage rich models of students and pedagogies to create complex learning interactions. But the intelligent tutoring systems used at scale today are much simpler. In this article, I present hypotheses on the factors underlying this development,…
Comment on ``Spectroscopy of samarium isotopes in the sdg interacting boson model''
NASA Astrophysics Data System (ADS)
Kuyucak, Serdar; Lac, Vi-Sieu
1993-04-01
We point out that the data used in the sdg boson model calculations by Devi and Kota [Phys. Rev. C 45, 2238 (1992)] can be equally well described by the much simpler sd boson model. We present additional data for the Sm isotopes which cannot be explained in the sd model and hence may justify such an extension to the sdg bosons. We also comment on the form of the Hamiltonian and the transition operators used in this paper.
Numerical modeling of reflux solar receivers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hogan, R.E. Jr.
1993-05-01
Using reflux solar receivers to collect solar energy for dish-Stirling electric power generation systems is presently being investigated by several organizations, including Sandia National Laboratories, Albuquerque, N. Mex. In support of this program, Sandia has developed two numerical models describing the thermal performance of pool-boiler and heat-pipe reflux receivers. Both models are applicable to axisymmetric geometries and they both consider the radiative and convective energy transfer within the receiver cavity, the conductive and convective energy transfer from the receiver housing, and the energy transfer to the receiver working fluid. The primary difference between the models is the level of detail in modeling the heat conduction through the receiver walls. The more detailed model uses a two-dimensional finite control volume method, whereas the simpler model uses a one-dimensional thermal resistance approach. The numerical modeling concepts presented are applicable to conventional tube-type solar receivers, as well as to reflux receivers. Good agreement between the two models is demonstrated by comparing the predicted and measured performance of a pool-boiler reflux receiver being tested at Sandia. For design operating conditions, the receiver thermal efficiencies agree within 1 percent and the average receiver cavity temperature within 1.3 percent. The thermal efficiency and receiver temperatures predicted by the simpler thermal resistance model agree well with experimental data from on-sun tests of the Sandia reflux pool-boiler receiver. An analysis of these comparisons identifies several plausible explanations for the differences between the predicted results and the experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumar, Shekhar; Koganti, S.B.
2008-07-01
Acetohydroxamic acid (AHA) is a novel complexant for the recycle of nuclear-fuel materials. It can be used in ordinary centrifugal extractors, eliminating the need for electro-redox equipment or complex maintenance requirements in a remotely maintained hot cell. In this work, the effect of AHA on Pu(IV) distribution ratios in the 30% TBP system was quantified, modeled, and integrated into the SIMPSEX code. Two sets of batch experiments involving macro Pu concentrations (conducted at IGCAR) and one high-Pu flowsheet (literature) were simulated for AHA-based U-Pu separation. Based on the simulation and validation results, AHA-based next-generation reprocessing flowsheets are proposed for co-processing-based FBR and thermal-fuel reprocessing, as well as for an evaporator-less macro-level Pu concentration process required for MOX fuel fabrication. Utilization of AHA results in significant simplification in plant design and simpler technology implementations with significant cost savings. (authors)
An experimental investigation of the flow physics of high-lift systems
NASA Technical Reports Server (NTRS)
Thomas, Flint O.; Nelson, R. C.
1995-01-01
This progress report, a series of viewgraphs, outlines experiments on the flow physics of confluent boundary layers for high lift systems. The design objective is to design high lift systems with improved C(sub Lmax) for landing approach and improved take-off L/D and simultaneously reduce acquisition and maintenance costs. In effect, achieve improved performance with simpler designs. The research objectives include: establish the role of confluent boundary layer flow physics in high-lift production; contrast confluent boundary layer structure for optimum and non-optimum C(sub L) cases; formation of a high quality, detailed archival data base for CFD/modeling; and examination of the role of relaminarization and streamline curvature.
NASA Astrophysics Data System (ADS)
Wang, Jia; Hou, Xi; Wan, Yongjian; Shi, Chunyan
2017-10-01
An optimized method to calculate the error correction capability of the tool influence function (TIF) under given polishing conditions is proposed, based on a smoothing spectral function. The basic mathematical model for this method is established in theory. A set of polishing experimental data obtained with a rigid conformal tool is used to validate the optimized method. The calculated results can quantitatively indicate the error correction capability of the TIF for different spatial frequency errors under given polishing conditions. A comparative analysis with the previous method shows that the optimized method is simpler in form and obtains results of the same accuracy with less computation time.
Effects of capillarity and microtopography on wetland specific yield
Sumner, D.M.
2007-01-01
Hydrologic models aid in describing water flows and levels in wetlands. Frequently, these models use a specific yield conceptualization to relate water flows to water level changes. Traditionally, a simple conceptualization of specific yield is used, composed of two constant values for above- and below-surface water levels and neglecting the effects of soil capillarity and land surface microtopography. The effects of capillarity and microtopography on specific yield were evaluated at three wetland sites in the Florida Everglades. The effect of capillarity on specific yield was incorporated based on the fillable pore space within a soil moisture profile at hydrostatic equilibrium with the water table. The effect of microtopography was based on areal averaging of topographically varying values of specific yield. The results indicate that a more physically based conceptualization of specific yield incorporating capillary and microtopographic considerations can be substantially different from the traditional two-part conceptualization, and from simpler conceptualizations incorporating only capillarity or only microtopography. For the sites considered, traditional estimates of specific yield could under- or overestimate the more physically based estimates by a factor of two or more. The results suggest that consideration of both capillarity and microtopography is important to the formulation of specific yield in physically based hydrologic models of wetlands. © 2007, The Society of Wetland Scientists.
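A small hedged sketch of the areal-averaging idea: point values of specific yield (open-water behaviour where the surface is ponded, a capillarity-limited value elsewhere) are averaged over a distribution of microtopographic elevations. The Sy(depth) curve and elevation sample below are hypothetical, not the Everglades site values.

    import numpy as np

    surface_elev = np.random.default_rng(0).normal(0.0, 0.08, 10_000)   # microtopography (m)
    water_level = -0.05                                                  # water table elevation (m)

    def sy_soil(depth_to_water):
        # crude capillarity-limited curve: little fillable pore space when the water table is shallow
        return 0.25 * (1.0 - np.exp(-depth_to_water / 0.3))

    depth = surface_elev - water_level
    sy_point = np.where(depth <= 0.0, 1.0, sy_soil(depth))   # ponded cells behave like open water
    sy_effective = sy_point.mean()
    print(round(sy_effective, 3))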
Mechanical transduction via a single soft polymer
NASA Astrophysics Data System (ADS)
Hou, Ruizheng; Wang, Nan; Bao, Weizhu; Wang, Zhisong
2018-04-01
Molecular machines from biology and nanotechnology often depend on soft structures to perform mechanical functions, but the underlying mechanisms and advantages or disadvantages over rigid structures are not fully understood. We report here a rigorous study of mechanical transduction along a single soft polymer based on exact solutions to the realistic three-dimensional wormlike-chain model and augmented with analytical relations derived from simpler polymer models. The results reveal surprisingly that a soft polymer with vanishingly small persistence length below a single chemical bond still transduces biased displacement and mechanical work up to practically significant amounts. This "soft" approach possesses unique advantages over the conventional wisdom of rigidity-based transduction, and potentially leads to a unified mechanism for effective allosterylike transduction and relay of mechanical actions, information, control, and molecules from one position to another in molecular devices and motors. This study also identifies an entropy limit unique to the soft transduction, and thereby suggests a possibility of detecting higher efficiency for kinesin motor and mutants in future experiments.
Multiplex High-Throughput Targeted Proteomic Assay To Identify Induced Pluripotent Stem Cells.
Baud, Anna; Wessely, Frank; Mazzacuva, Francesca; McCormick, James; Camuzeaux, Stephane; Heywood, Wendy E; Little, Daniel; Vowles, Jane; Tuefferd, Marianne; Mosaku, Olukunbi; Lako, Majlinda; Armstrong, Lyle; Webber, Caleb; Cader, M Zameel; Peeters, Pieter; Gissen, Paul; Cowley, Sally A; Mills, Kevin
2017-02-21
Induced pluripotent stem cells have great potential as a human model system in regenerative medicine, disease modeling, and drug screening. However, their use in medical research is hampered by laborious reprogramming procedures that yield low numbers of induced pluripotent stem cells. For further applications in research, only the best, competent clones should be used. The standard assays for pluripotency are based on genomic approaches, which take up to 1 week to perform and incur significant cost. Therefore, there is a need for a rapid and cost-effective assay able to distinguish between pluripotent and nonpluripotent cells. Here, we describe a novel multiplexed, high-throughput, and sensitive peptide-based multiple reaction monitoring mass spectrometry assay, allowing for the identification and absolute quantitation of multiple core transcription factors and pluripotency markers. This assay provides simpler and high-throughput classification into either pluripotent or nonpluripotent cells in 7 min analysis while being more cost-effective than conventional genomic tests.
Food-web complexity, meta-community complexity and community stability.
Mougi, A; Kondoh, M
2016-04-13
What allows interacting, diverse species to coexist in nature has been a central question in ecology, ever since the theoretical prediction that a complex community should be inherently unstable. Although the role of spatiality in species coexistence has been recognized, its application to more complex systems has been less explored. Here, using a meta-community model of food web, we show that meta-community complexity, measured by the number of local food webs and their connectedness, elicits a self-regulating, negative-feedback mechanism and thus stabilizes food-web dynamics. Moreover, the presence of meta-community complexity can give rise to a positive food-web complexity-stability effect. Spatiality may play a more important role in stabilizing dynamics of complex, real food webs than expected from ecological theory based on the models of simpler food webs.
Jonnalagadda, Siddhartha; Gonzalez, Graciela
2010-11-13
BioSimplify is an open source tool written in Java that introduces and facilitates the use of a novel model for sentence simplification tuned for automatic discourse analysis and information extraction (as opposed to sentence simplification for improving human readability). The model is based on a "shot-gun" approach that produces many different (simpler) versions of the original sentence by combining variants of its constituent elements. This tool is optimized for processing biomedical scientific literature such as the abstracts indexed in PubMed. We tested our tool on its impact to the task of PPI extraction and it improved the f-score of the PPI tool by around 7%, with an improvement in recall of around 20%. The BioSimplify tool and test corpus can be downloaded from https://biosimplify.sourceforge.net.
NUMERICAL FLOW AND TRANSPORT SIMULATIONS SUPPORTING THE SALTSTONE FACILITY PERFORMANCE ASSESSMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G.
2009-02-28
The Saltstone Disposal Facility Performance Assessment (PA) is being revised to incorporate requirements of Section 3116 of the Ronald W. Reagan National Defense Authorization Act for Fiscal Year 2005 (NDAA), and updated data and understanding of vault performance since the 1992 PA (Cook and Fowler 1992) and related Special Analyses. A hybrid approach was chosen for modeling contaminant transport from vaults and future disposal cells to exposure points. A higher-resolution, largely deterministic analysis is performed on a best-estimate Base Case scenario using the PORFLOW numerical analysis code. A few additional sensitivity cases are simulated to examine alternative scenarios and parameter settings. Stochastic analysis is performed on a simpler representation of the SDF system using the GoldSim code to estimate uncertainty and sensitivity about the Base Case. This report describes the development of PORFLOW models supporting the SDF PA, and presents sample results to illustrate model behaviors and define impacts relative to key facility performance objectives. The SDF PA document, when issued, should be consulted for a comprehensive presentation of results.
Methods for improving simulations of biological systems: systemic computation and fractal proteins
Bentley, Peter J.
2009-01-01
Modelling and simulation are becoming essential for new fields such as synthetic biology. Perhaps the most important aspect of modelling is to follow a clear design methodology that will help to highlight unwanted deficiencies. The use of tools designed to aid the modelling process can be of benefit in many situations. In this paper, the modelling approach called systemic computation (SC) is introduced. SC is an interaction-based language, which enables individual-based expression and modelling of biological systems, and the interactions between them. SC permits a precise description of a hypothetical mechanism to be written using an intuitive graph-based or a calculus-based notation. The same description can then be directly run as a simulation, merging the hypothetical mechanism and the simulation into the same entity. However, even when using well-designed modelling tools to produce good models, the best model is not always the most accurate one. Frequently, computational constraints or lack of data make it infeasible to model an aspect of biology. Simplification may provide one way forward, but with inevitable consequences of decreased accuracy. Instead of attempting to replace an element with a simpler approximation, it is sometimes possible to substitute the element with a different but functionally similar component. In the second part of this paper, this modelling approach is described and its advantages are summarized using an exemplar: the fractal protein model. Finally, the paper ends with a discussion of good biological modelling practice by presenting lessons learned from the use of SC and the fractal protein model. PMID:19324681
Relation between cooperative molecular motors and active Brownian particles.
Touya, Clément; Schwalger, Tilo; Lindner, Benjamin
2011-05-01
Active Brownian particles (ABPs), obeying a nonlinear Langevin equation with speed-dependent drift and noise amplitude, are well-known models used to describe self-propelled motion in biology. In this paper we study a model describing the stochastic dynamics of a group of coupled molecular motors (CMMs). Using two independent numerical methods, one based on the stationary velocity distribution of the motors and the other one on the local increments (also known as the Kramers-Moyal coefficients) of the velocity, we establish a connection between the CMM and the ABP models. The parameters extracted for the ABP via the two methods show good agreement for both symmetric and asymmetric cases and are independent of N, the number of motors, provided that N is not too small. This indicates that one can indeed describe the CMM problem with a simpler ABP model. However, the power spectrum of velocity fluctuations in the CMM model reveals a peak at a finite frequency, a peak which is absent in the velocity spectrum of the ABP model. This implies richer dynamic features of the CMM model which cannot be captured by an ABP model.
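For readers unfamiliar with the ABP half of this comparison, the sketch below integrates a one-dimensional Langevin equation with speed-dependent drift using the Euler-Maruyama scheme. The Rayleigh-Helmholtz drift f(v) = gamma*v*(1 - v^2/v0^2) and the constant noise amplitude are assumed purely for illustration; in the paper the drift and noise are instead extracted from the coupled-motor statistics.

```python
# Minimal sketch of a 1-D active Brownian particle velocity process,
# dv = f(v) dt + sqrt(2 D) dW, integrated with Euler-Maruyama.
# The Rayleigh-Helmholtz drift and constant noise amplitude are assumptions
# made for illustration; the paper fits f and D to coupled-motor data instead.
import numpy as np

def simulate_abp(gamma=1.0, v0=1.0, D=0.05, dt=1e-3, steps=50_000, seed=0):
    rng = np.random.default_rng(seed)
    v = np.empty(steps)
    v[0] = 0.0
    for i in range(1, steps):
        drift = gamma * v[i - 1] * (1.0 - (v[i - 1] / v0) ** 2)
        v[i] = v[i - 1] + drift * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    return v

if __name__ == "__main__":
    v = simulate_abp()
    print("mean speed |v| =", np.abs(v).mean())   # clusters near +/- v0 for weak noise
```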
Cognitive and neural foundations of discrete sequence skill: a TMS study.
Ruitenberg, Marit F L; Verwey, Willem B; Schutter, Dennis J L G; Abrahamse, Elger L
2014-04-01
Executing discrete movement sequences typically involves a shift with practice from a relatively slow, stimulus-based mode to a fast mode in which performance is based on retrieving and executing entire motor chunks. The dual processor model explains the performance of (skilled) discrete key-press sequences in terms of an interplay between a cognitive processor and a motor system. In the present study, we tested and confirmed the core assumptions of this model at the behavioral level. In addition, we explored the involvement of the pre-supplementary motor area (pre-SMA) in discrete sequence skill by applying inhibitory 20 min 1-Hz off-line repetitive transcranial magnetic stimulation (rTMS). Based on previous work, we predicted pre-SMA involvement in the selection/initiation of motor chunks, and this was confirmed by our results. The pre-SMA was further observed to be more involved in more complex than in simpler sequences, while no evidence was found for pre-SMA involvement in direct stimulus-response translations or associative learning processes. In conclusion, support is provided for the dual processor model, and for pre-SMA involvement in the initiation of motor chunks. Copyright © 2014 Elsevier Ltd. All rights reserved.
A review of surrogate models and their application to groundwater modeling
NASA Astrophysics Data System (ADS)
Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.
2015-08-01
The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection, and hierarchical-based approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes, or reducing the numerical resolution. In discussing the application of these methods to groundwater modeling, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks to the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.
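The data-driven category can be illustrated with a minimal sketch: run the expensive model at a handful of design points, fit a cheap emulator to the input-output pairs, and reuse the emulator for the many evaluations needed in calibration or uncertainty analysis. The toy "expensive" model and the cubic polynomial basis below are assumptions for illustration, not methods taken from the review.

```python
# Sketch of a data-driven surrogate: sample an "expensive" model at a few
# design points, fit a cheap polynomial emulator, and reuse the emulator
# for the many evaluations needed in calibration or Monte Carlo loops.
import numpy as np

def expensive_model(k):
    """Stand-in for a slow simulator: drawdown-like response vs. conductivity."""
    return 1.0 / (0.5 + k) + 0.1 * np.sin(3.0 * k)

# design points (few expensive runs) and surrogate fit
k_train = np.linspace(0.1, 2.0, 8)
y_train = expensive_model(k_train)
coeffs = np.polyfit(k_train, y_train, deg=3)      # cubic polynomial surrogate
surrogate = np.poly1d(coeffs)

# many cheap evaluations with the surrogate
k_test = np.linspace(0.1, 2.0, 1000)
err = np.max(np.abs(surrogate(k_test) - expensive_model(k_test)))
print(f"max emulation error over test range: {err:.4f}")
```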
Operational Retrievals of Evapotranspiration: Are we there yet?
NASA Astrophysics Data System (ADS)
Neale, C. M. U.; Anderson, M. C.; Hain, C.; Schull, M.; Isidro, C., Sr.; Goncalves, I. Z.
2017-12-01
Remote sensing based retrievals of evapotranspiration (ET) have progressed significantly over the last two decades with the improvement of methods and algorithms and the availability of multiple satellite sensors with shortwave and thermal infrared bands on polar orbiting platforms. The modeling approaches include simpler vegetation index (VI) based methods such as the reflectance-based crop coefficient approach coupled with surface reference evapotranspiration estimates to derive actual evapotranspiration of crops or, direct inputs to the Penman-Monteith equation through VI relationships with certain input variables. Methods that are more complex include one-layer or two-layer energy balance approaches that make use of both shortwave and longwave spectral band information to estimate different inputs to the energy balance equation. These models mostly differ in the estimation of sensible heat fluxes. For continental and global scale applications, other satellite-based products such as solar radiation, vegetation leaf area and cover are used as inputs, along with gridded re-analysis weather information. This presentation will review the state-of-the-art in satellite-based evapotranspiration estimation, giving examples of existing efforts to obtain operational ET retrievals over continental and global scales and discussing difficulties and challenges.
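The simpler VI-based approach mentioned above can be sketched in a few lines: a basal crop coefficient is estimated from a vegetation index and multiplied by reference ET. The linear NDVI-to-Kcb coefficients below are placeholders, not values from any particular retrieval scheme.

```python
# Sketch of the reflectance-based crop coefficient idea: Kcb estimated
# linearly from NDVI, multiplied by reference ET to give actual ET.
# Coefficients a, b are illustrative placeholders.
def kcb_from_ndvi(ndvi, a=-0.1, b=1.4):
    """Linear reflectance-based crop coefficient (illustrative coefficients)."""
    return max(0.0, a + b * ndvi)

def actual_et(ndvi, et_ref_mm_day):
    """Actual ET (mm/day) = Kcb(NDVI) * reference ET."""
    return kcb_from_ndvi(ndvi) * et_ref_mm_day

if __name__ == "__main__":
    print(actual_et(ndvi=0.65, et_ref_mm_day=6.0))  # ~4.9 mm/day for these inputs
```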
Autonomous Guidance of Agile Small-scale Rotorcraft
NASA Technical Reports Server (NTRS)
Mettler, Bernard; Feron, Eric
2004-01-01
This report describes a guidance system for agile vehicles based on a hybrid closed-loop model of the vehicle dynamics. The hybrid model represents the vehicle dynamics through a combination of linear-time-invariant control modes and pre-programmed, finite-duration maneuvers. This particular hybrid structure can be realized through a control system that combines trim controllers and a maneuvering control logic. The former enable precise trajectory tracking, and the latter enables trajectories at the edge of the vehicle capabilities. The closed-loop model is much simpler than the full vehicle equations of motion, yet it can capture a broad range of dynamic behaviors. It also supports a consistent link between the physical layer and the decision-making layer. The trajectory generation was formulated as an optimization problem using mixed-integer-linear-programming. The optimization is solved in a receding horizon fashion. Several techniques to improve the computational tractability were investigated. Simulation experiments using NASA Ames' R-50 model show that this approach fully exploits the vehicle's agility.
Observations and modeling of cool, evolved stars: from chromospheric to wind regions
NASA Astrophysics Data System (ADS)
Rau, Gioia; Carpenter, Ken G.; Nielsen, Krister E.; Kober, Gladys V.; Hron, Josef; Aringer, Bernard; Eriksson, Kjell; Marigo, Paola; Paladini, Claudia
2018-01-01
Evolved stars are fundamental contributors to the enrichment of the interstellar medium, via their mass loss, with heavy elements produced in their interior, and with the dust formed in their envelope. We present the results of the first systematic comparison (Rau et al. 2017, 2015) of multi-technique observations of a sample of C-rich Mira, semi-regular and irregular stars with the predictions from dynamic model atmospheres (Mattsson et al. 2010) and simpler models based on hydrostatic atmospheres combined with dusty envelopes. The chromosphere, located in the outer atmosphere of these stars, plays a crucial role in driving the mass loss in evolved K-M giant stars (see e.g. Carpenter et al. 2014, 1988). Despite recent efforts, details of the mass-loss scenario remain mysterious, and a complete understanding of the dynamic line formation regions, profiles, and structures is still lacking. To solve these riddles, we present observations of flow and turbulent velocities, together with a preliminary derivation of thermodynamic constraints for theoretical models (Rau, Carpenter, et al., in prep).
Analytical methods for the development of Reynolds stress closures in turbulence
NASA Technical Reports Server (NTRS)
Speziale, Charles G.
1990-01-01
Analytical methods for the development of Reynolds stress models in turbulence are reviewed in detail. Zero, one and two equation models are discussed along with second-order closures. A strong case is made for the superior predictive capabilities of second-order closure models in comparison to the simpler models. The central points are illustrated by examples from both homogeneous and inhomogeneous turbulence. A discussion of the author's views concerning the progress made in Reynolds stress modeling is also provided along with a brief history of the subject.
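For context on the "two equation models" reviewed above, the standard k-epsilon closure can be written as follows; this is the textbook Launder-Spalding form with its usual constants, shown only as background and not quoted from the report itself.

```latex
% Standard two-equation k-epsilon closure (Launder-Spalding form and constants),
% shown as background for the "two equation models" mentioned above.
\nu_t = C_\mu \frac{k^2}{\varepsilon}, \qquad
\frac{Dk}{Dt} = P_k - \varepsilon
  + \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_k}\right)
    \frac{\partial k}{\partial x_j}\right],
\qquad
\frac{D\varepsilon}{Dt} = \frac{\varepsilon}{k}\,\bigl(C_{\varepsilon 1} P_k - C_{\varepsilon 2}\,\varepsilon\bigr)
  + \frac{\partial}{\partial x_j}\left[\left(\nu + \frac{\nu_t}{\sigma_\varepsilon}\right)
    \frac{\partial \varepsilon}{\partial x_j}\right]
% with C_mu = 0.09, C_eps1 = 1.44, C_eps2 = 1.92, sigma_k = 1.0, sigma_eps = 1.3,
% and P_k the production of turbulent kinetic energy.
```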
A flexible framework for process-based hydraulic and water ...
Background Models that allow for design considerations of green infrastructure (GI) practices to control stormwater runoff and associated contaminants have received considerable attention in recent years. While popular, the GI models are generally relatively simplistic. However, GI model predictions are being relied upon by many municipalities and State/Local agencies to make decisions about grey vs. green infrastructure improvement planning. Adding complexity to GI modeling frameworks may preclude their use in simpler urban planning situations. Therefore, the goal here was to develop a sophisticated, yet flexible tool that could be used by design engineers and researchers to capture and explore the effect of design factors and properties of the media used on the performance of GI systems at a relatively small scale. We deemed it essential to have a flexible GI modeling tool that is capable of simulating GI system components and specific biophysical processes affecting contaminants, such as reactions and particle-associated transport, accurately, while maintaining a high degree of flexibility to account for the myriad of GI alternatives. The mathematical framework for a stand-alone GI performance assessment tool has been developed and will be demonstrated. Framework Features The process-based model framework developed here can be used to model a diverse range of GI practices such as green roof, retention pond, bioretention, infiltration trench, permeable pavement and
NASA Astrophysics Data System (ADS)
Dash, Rajashree
2017-11-01
Forecasting the purchasing power of one currency with respect to another is always an interesting topic in the field of financial time series prediction. Despite the existence of several traditional and computational models for currency exchange rate forecasting, there is always a need to develop simpler and more efficient models that produce better prediction capability. In this paper, an evolutionary framework is proposed by using an improved shuffled frog leaping (ISFL) algorithm with a computationally efficient functional link artificial neural network (CEFLANN) for prediction of currency exchange rates. The model is validated by observing the monthly prediction measures obtained for three currency exchange data sets (USD/CAD, USD/CHF, and USD/JPY) accumulated over the same period of time. The model performance is also compared with two other evolutionary learning techniques, the shuffled frog leaping algorithm and particle swarm optimization. Practical analysis of the results suggests that the proposed model, developed using the ISFL algorithm with the CEFLANN network, is a promising predictor for currency exchange rates compared to the other models included in the study.
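A functional link network of the kind referred to above can be sketched compactly: each lagged input is expanded with trigonometric basis functions and a single linear output layer is trained. For brevity the weights below are fit by least squares on synthetic data; the paper instead trains a CEFLANN with the improved shuffled frog leaping algorithm, so everything here is an illustrative stand-in.

```python
# Sketch of a functional link ANN (FLANN) for one-step-ahead prediction:
# lagged inputs are expanded with trigonometric basis functions and a single
# linear output layer is fit. Least-squares training and the synthetic series
# are illustrative stand-ins for the ISFL-trained CEFLANN of the paper.
import numpy as np

def expand(x):
    """Trigonometric functional expansion of a vector of lagged inputs."""
    feats = [x]
    for k in (1, 2):
        feats.append(np.sin(k * np.pi * x))
        feats.append(np.cos(k * np.pi * x))
    return np.concatenate(feats)

def fit_flann(series, lags=3):
    X = np.array([expand(series[t - lags:t]) for t in range(lags, len(series))])
    y = series[lags:]
    X = np.hstack([X, np.ones((len(X), 1))])          # bias term
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    rate = np.cumsum(rng.normal(0, 0.01, 500)) + 1.3   # synthetic exchange rate
    w = fit_flann(rate)
    print("fitted weight vector length:", len(w))
```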
Time series regression and ARIMAX for forecasting currency flow at Bank Indonesia in Sulawesi region
NASA Astrophysics Data System (ADS)
Suharsono, Agus; Suhartono, Masyitha, Aulia; Anuravega, Arum
2015-12-01
The purpose of the study is to forecast the outflow and inflow of currency at the Indonesian Central Bank, Bank Indonesia (BI), in the Sulawesi region. The currency outflow and inflow data tend to have a trend pattern which is influenced by calendar variation effects. Therefore, this research focuses on applying forecasting methods that can handle calendar variation effects, i.e. Time Series Regression (TSR) and ARIMAX models, and on comparing their forecast accuracy with an ARIMA model. The best model is selected based on the lowest Root Mean Square Error (RMSE) on the out-of-sample dataset. The results show that ARIMA is the best model for forecasting the currency outflow and inflow at South Sulawesi, whereas TSR is the best model for forecasting the currency outflow at Central Sulawesi and Southeast Sulawesi, and the currency inflow at South Sulawesi and North Sulawesi. Additionally, ARIMAX is the best model for forecasting the currency outflow at North Sulawesi. Hence, the results show that more complex models do not necessarily yield more accurate forecasts than simpler ones.
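A time series regression with calendar-variation effects, of the kind compared above, can be sketched as an ordinary least squares fit on a trend term, monthly dummies, and a moving-holiday indicator. The drifting holiday month and all coefficients below are synthetic assumptions used only to show the structure of the design matrix.

```python
# Sketch of a time series regression (TSR) with trend, monthly dummies, and a
# moving-holiday (calendar variation) indicator, fit by ordinary least squares.
# The drifting holiday month mimics a lunar-calendar effect; data are synthetic.
import numpy as np

def tsr_design(n_months):
    t = np.arange(n_months)
    month = t % 12
    year = t // 12
    season = np.zeros((n_months, 12))
    season[np.arange(n_months), month] = 1.0            # monthly dummies
    holiday_month = (8 - year) % 12                     # moving holiday drifts yearly
    holiday = (month == holiday_month).astype(float).reshape(-1, 1)
    return np.hstack([t.reshape(-1, 1), season, holiday])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 120
    X = tsr_design(n)
    beta_true = np.r_[0.05, rng.normal(10, 1, 12), 4.0]  # trend, seasonality, holiday lift
    y = X @ beta_true + rng.normal(0, 0.5, n)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("estimated holiday effect:", round(beta_hat[-1], 2))   # close to 4.0
```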
ERIC Educational Resources Information Center
Ito, Hiroyuki; Tani, Iori; Yukihiro, Ryoji; Adachi, Jun; Hara, Koichi; Ogasawara, Megumi; Inoue, Masahiko; Kamio, Yoko; Nakamura, Kazuhiko; Uchiyama, Tokio; Ichikawa, Hironobu; Sugiyama, Toshiro; Hagiwara, Taku; Tsujii, Masatsugu
2012-01-01
The pervasive developmental disorders (PDDs) Autism Society Japan Rating Scale (PARS), an interview-based instrument for evaluating PDDs, has been developed in Japan with the aim of providing a method that (1) can be used to evaluate PDD symptoms and related support needs and (2) is simpler and easier than the currently used "gold…
Mandic, Sandra; Walker, Robert; Stevens, Emily; Nye, Edwin R; Body, Dianne; Barclay, Leanne; Williams, Michael J A
2013-01-01
Compared with the symptom-limited cardiopulmonary exercise test (CPET), timed walking tests are a cheaper, well-tolerated and simpler alternative for assessing exercise capacity in coronary artery disease (CAD) patients. We developed multivariate models for predicting peak oxygen consumption (VO2peak) from 6-minute walk test (6MWT) distance and peak shuttle walk speed for elderly stable CAD patients. Fifty-eight CAD patients (age 72 ± 6 years, 66% men) completed: (1) CPET with expired gas analysis on a cycle ergometer, (2) an incremental 10-meter shuttle walk test, (3) two 6MWTs, (4) anthropometric assessment and (5) 30-second chair stands. Linear regression models were developed for estimating VO2peak from 6MWT distance and peak shuttle walk speed as well as demographic, anthropometric and functional variables. Measured VO2peak was significantly related to 6MWT distance (r = 0.719, p < 0.001) and peak shuttle walk speed (r = 0.717, p < 0.001). The addition of demographic (age, gender), anthropometric (height, weight, body mass index, body composition) and functional characteristics (30-second chair stands) increased the accuracy of predicting VO2peak from both 6MWT distance and peak shuttle walk speed (from 51% to 73% of VO2peak variance explained). Addition of demographic, anthropometric and functional characteristics improves the accuracy of the VO2peak estimate based on walking tests in elderly individuals with stable CAD. Implications for Rehabilitation: Timed walking tests are a cheaper, well-tolerated and simpler alternative for assessing exercise capacity in cardiac patients. Walking tests could be used to assess an individual's functional capacity and response to therapeutic interventions when symptom-limited cardiopulmonary exercise testing is not practical or not necessary for clinical reasons. Addition of demographic, anthropometric and functional characteristics improves the accuracy of the peak oxygen consumption estimate based on 6-minute walk test distance and peak shuttle walk speed in elderly patients with coronary artery disease.
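The form of such a prediction model is a multivariate linear regression of VO2peak on walking-test performance plus demographic covariates. The sketch below fits one on synthetic data; the coefficients and simulated values are illustrative and are not the study's published equations.

```python
# Sketch of a multivariate linear regression of the kind described above:
# VO2peak regressed on 6MWT distance plus demographic covariates.
# Data and coefficients are synthetic/illustrative, not the study's.
import numpy as np

rng = np.random.default_rng(42)
n = 58
distance = rng.normal(420, 80, n)          # 6MWT distance, m
age = rng.normal(72, 6, n)                 # years
male = rng.integers(0, 2, n).astype(float)
vo2peak = 5.0 + 0.025 * distance - 0.05 * age + 2.0 * male + rng.normal(0, 1.2, n)

X = np.column_stack([np.ones(n), distance, age, male])
beta, *_ = np.linalg.lstsq(X, vo2peak, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((vo2peak - pred) ** 2) / np.sum((vo2peak - vo2peak.mean()) ** 2)
print("coefficients:", np.round(beta, 3), " R^2:", round(r2, 2))
```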
Simple, Flexible, Trigonometric Taper Equations
Charles E. Thomas; Bernard R. Parresol
1991-01-01
There have been numerous approaches to modeling stem form in recent decades. The majority have concentrated on the simpler coniferous bole form and have become increasingly complex mathematical expressions. Use of trigonometric equations provides a simple expression of taper that is flexible enough to fit both coniferous and hard-wood bole forms. As an illustration, we...
Who Needs Lewis Structures to Get VSEPR Geometries?
ERIC Educational Resources Information Center
Lindmark, Alan F.
2010-01-01
Teaching the VSEPR (valence shell electron-pair repulsion) model can be a tedious process. Traditionally, Lewis structures are drawn and the number of "electron clouds" (groups) around the central atom are counted and related to the standard VSEPR table of possible geometries. A simpler method to deduce the VSEPR structure without first drawing…
2014-01-01
[Report excerpt: dose calculations (including 50 kT yields) agree to within 30% of the first-principles MCNP code for complicated cities and within 10% for simpler cities; the report covers the use of MCNP for dose calculations, MCNP open-field absorbed dose calculations, and the MCNP urban model. Subject terms include radiation transport.]
Quality Schools, Quality Outcomes
ERIC Educational Resources Information Center
Australian Government Department of Education and Training, 2016
2016-01-01
A strong level of funding is important for Australia's school system. The Government has further committed to a new, simpler and fairer funding model that distributes this funding on the basis of need. However, while funding is important, evidence shows that what you do with that funding matters more. Despite significant funding growth in the past…
78 FR 16808 - Connect America Fund; High-Cost Universal Service Support
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-19
... to use one regression to generate a single cap on total loop costs for each study area. A single cap.... * * * A preferable, and simpler, approach would be to develop one conditional quantile model for aggregate.... Total universal service support for such carriers was approaching $2 billion annually--more than 40...
Implicit Learning of Recursive Context-Free Grammars
Rohrmeier, Martin; Fu, Qiufang; Dienes, Zoltan
2012-01-01
Context-free grammars are fundamental for the description of linguistic syntax. However, most artificial grammar learning experiments have explored learning of simpler finite-state grammars, while studies exploring context-free grammars have not assessed awareness and implicitness. This paper explores the implicit learning of context-free grammars employing features of hierarchical organization, recursive embedding and long-distance dependencies. The grammars also featured the distinction between left- and right-branching structures, as well as between centre- and tail-embedding, both distinctions found in natural languages. People acquired unconscious knowledge of relations between grammatical classes even for dependencies over long distances, in ways that went beyond learning simpler relations (e.g. n-grams) between individual words. The structural distinctions drawn from linguistics also proved important as performance was greater for tail-embedding than centre-embedding structures. The results suggest the plausibility of implicit learning of complex context-free structures, which model some features of natural languages. They support the relevance of artificial grammar learning for probing mechanisms of language learning and challenge existing theories and computational models of implicit learning. PMID:23094021
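The centre- versus tail-embedding contrast can be made concrete with a toy grammar over paired word classes, sketched below; the class labels and pairings are illustrative assumptions, not the experimental stimuli used in the study.

```python
# Toy illustration of the two embedding types contrasted above: centre-embedding
# yields A A A B B B with nested (long-distance) dependencies, tail-embedding
# yields A B A B A B with local dependencies. Word classes are made up.
import random

PAIRS = [("noun1", "verb1"), ("noun2", "verb2"), ("noun3", "verb3")]

def centre_embedded(depth):
    """S -> A_i S B_i : each A pairs with its mirror-image B (nested, long-distance)."""
    if depth == 0:
        return []
    a, b = random.choice(PAIRS)
    return [a] + centre_embedded(depth - 1) + [b]

def tail_embedded(depth):
    """S -> A_i B_i S : each A pairs with the immediately following B (local)."""
    if depth == 0:
        return []
    a, b = random.choice(PAIRS)
    return [a, b] + tail_embedded(depth - 1)

if __name__ == "__main__":
    random.seed(3)
    print("centre-embedded:", " ".join(centre_embedded(3)))
    print("tail-embedded:  ", " ".join(tail_embedded(3)))
```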
NASA Astrophysics Data System (ADS)
Destro, Elisa; Amponsah, William; Nikolopoulos, Efthymios I.; Marchi, Lorenzo; Marra, Francesco; Zoccatelli, Davide; Borga, Marco
2018-03-01
The concurrence of flash floods and debris flows is of particular concern, because it may amplify the hazard corresponding to the individual generative processes. This paper presents a coupled modelling framework for the predictions of flash flood response and of the occurrence of debris flows initiated by channel bed mobilization. The framework combines a spatially distributed flash flood response model and a debris flow initiation model to define a threshold value for the peak flow which permits identification of channelized debris flow initiation. The threshold is defined over the channel network as a function of the upslope area and of the local channel bed slope, and it is based on assumptions concerning the properties of the channel bed material and of the morphology of the channel network. The model is validated using data from an extreme rainstorm that impacted the 140 km2 Vizze basin in the Eastern Italian Alps on August 4-5, 2012. The results show that the proposed methodology has improved skill in identifying the catchments where debris-flows are triggered, compared to the use of simpler thresholds based on rainfall properties.
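The decision step of such a coupled framework can be sketched as comparing the simulated peak flow at each channel cell against a threshold that grows with upslope area and decreases with local bed slope. The power-law form and its coefficients below are hypothetical placeholders, not the calibrated threshold of the paper.

```python
# Sketch of the threshold comparison step: flag channelized debris-flow
# initiation where simulated peak flow exceeds a threshold that depends on
# upslope area and local bed slope. Functional form and coefficients are
# hypothetical placeholders, not the paper's calibrated values.
def peak_flow_threshold(upslope_area_km2, bed_slope, c=1.0, a=0.5, b=-1.0):
    """Hypothetical threshold (m^3/s) above which channel-bed mobilization is flagged."""
    return c * (upslope_area_km2 ** a) * (bed_slope ** b)

def flag_debris_flow(peak_flow_m3s, upslope_area_km2, bed_slope):
    return peak_flow_m3s > peak_flow_threshold(upslope_area_km2, bed_slope)

if __name__ == "__main__":
    # steep small catchment vs. gentler larger one, same simulated peak flow
    print(flag_debris_flow(12.0, upslope_area_km2=3.0, bed_slope=0.25))  # True
    print(flag_debris_flow(12.0, upslope_area_km2=9.0, bed_slope=0.05))  # False
```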
NASA Astrophysics Data System (ADS)
Yahia, Eman; Premnath, Kannan
2017-11-01
Resolving multiscale flow physics (e.g. for boundary layer or mixing layer flows) effectively generally requires the use of different grid resolutions in different coordinate directions. Here, we present a new formulation of a multiple relaxation time (MRT)-lattice Boltzmann (LB) model for anisotropic meshes. It is based on a simpler and more stable non-orthogonal moment basis while the use of MRT introduces additional flexibility, and the model maintains a stream-collide procedure; its second order moment equilibria are augmented with additional velocity gradient terms, dependent on the grid aspect ratio, that fully restore the required isotropy of the transport coefficients of the normal and shear stresses. Furthermore, by introducing additional cubic velocity corrections, it maintains Galilean invariance. The consistency of this stretched lattice based LB scheme with the Navier-Stokes equations is shown via a Chapman-Enskog expansion. Numerical studies for a variety of benchmark flow problems demonstrate its ability to perform accurate and effective simulations at relatively high Reynolds numbers. The MRT-LB scheme is also shown to be more stable than prior LB models for rectangular grids, even for grid aspect ratios as small as 0.1 and for Reynolds numbers of 10000.
Manifold Coal-Slurry Transport System
NASA Technical Reports Server (NTRS)
Liddle, S. G.; Estus, J. M.; Lavin, M. L.
1986-01-01
Feeding several slurry pipes into main pipeline reduces congestion in coal mines. System based on manifold concept: feeder pipelines from each working entry joined to main pipeline that carries coal slurry out of panel and onto surface. Manifold concept makes coal-slurry haulage much simpler than existing slurry systems.
Microprocessor-Based Valved Controller
NASA Technical Reports Server (NTRS)
Norman, Arnold M., Jr.
1987-01-01
New controller simpler, more precise, and lighter than predecessors. Mass-flow controller compensates for changing supply pressure and temperature such as occurs when gas-supply tank becomes depleted. By periodically updating calculation of mass-flow rate, controller determines correct new position for valve and keeps mass-flow rate nearly constant.
Optimal harvesting for a predator-prey agent-based model using difference equations.
Oremland, Matthew; Laubenbacher, Reinhard
2015-03-01
In this paper, a method known as Pareto optimization is applied in the solution of a multi-objective optimization problem. The system in question is an agent-based model (ABM) wherein global dynamics emerge from local interactions. A system of discrete mathematical equations is formulated in order to capture the dynamics of the ABM; while the original model is built up analytically from the rules of the model, the paper shows how minor changes to the ABM rule set can have a substantial effect on model dynamics. To address this issue, we introduce parameters into the equation model that track such changes. The equation model is amenable to mathematical theory—we show how stability analysis can be performed and validated using ABM data. We then reduce the equation model to a simpler version and implement changes to allow controls from the ABM to be tested using the equations. Cohen's weighted κ is proposed as a measure of similarity between the equation model and the ABM, particularly with respect to the optimization problem. The reduced equation model is used to solve a multi-objective optimization problem via a technique known as Pareto optimization, a heuristic evolutionary algorithm. Results show that the equation model is a good fit for ABM data; Pareto optimization provides a suite of solutions to the multi-objective optimization problem that can be implemented directly in the ABM.
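The Pareto-dominance criterion at the heart of that optimization step can be sketched as below for two objectives (one maximized, one minimized); the candidate scores are made up, and the evolutionary search that generates candidates in the paper is not reproduced here.

```python
# Sketch of the Pareto-dominance filter only: given candidate controls scored
# on two objectives (obj1 to maximize, obj2 to minimize), keep the
# non-dominated set. Objective values are illustrative.
def pareto_front(points):
    """Return points not dominated by any other (maximize obj1, minimize obj2)."""
    front = []
    for p in points:
        dominated = any(
            (q[0] >= p[0] and q[1] <= p[1]) and q != p for q in points
        )
        if not dominated:
            front.append(p)
    return front

if __name__ == "__main__":
    candidates = [(10, 5), (12, 7), (8, 3), (12, 4), (9, 6)]
    print(pareto_front(candidates))   # [(8, 3), (12, 4)]
```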
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R^2 = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.
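For readers unfamiliar with fractional polynomials, an FP2 function combines two powers from a fixed set (power 0 read as log x, a repeated power adding an extra log factor), and the best pair is chosen by fit. The sketch below enumerates FP2 candidates on synthetic data; it mimics the functional form only and is not the full MFP selection procedure with its significance-based testing.

```python
# Sketch of the FP2 functional form: two powers from a fixed set, with power 0
# read as log x and a repeated power adding an extra log factor. Data are
# synthetic; this illustrates the form, not the full MFP procedure.
import itertools
import numpy as np

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

def fp_term(x, p):
    return np.log(x) if p == 0 else x ** p

def fp2_design(x, p1, p2):
    t1 = fp_term(x, p1)
    t2 = fp_term(x, p2) if p2 != p1 else fp_term(x, p1) * np.log(x)
    return np.column_stack([np.ones_like(x), t1, t2])

def best_fp2(x, y):
    best = None
    for p1, p2 in itertools.combinations_with_replacement(POWERS, 2):
        X = fp2_design(x, p1, p2)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        if best is None or rss < best[0]:
            best = (rss, (p1, p2), beta)
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(0.2, 5.0, 300)
    y = 1.0 + 2.0 * np.log(x) + 0.5 * x ** 2 + rng.normal(0, 0.3, 300)
    rss, powers, _ = best_fp2(x, y)
    print("selected powers:", powers)   # expected near (0, 2)
```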
Chatrath, Jatin; Aziz, Mohsin; Helaoui, Mohamed
2018-01-01
Reconfigurable and multi-standard RF front-ends for wireless communication and sensor networks have gained importance as building blocks for the Internet of Things. Simpler and highly-efficient transmitter architectures, which can transmit better quality signals with reduced impairments, are an important step in this direction. In this regard, mixer-less transmitter architecture, namely, the three-way amplitude modulator-based transmitter, avoids the use of imperfect mixers and frequency up-converters, and their resulting distortions, leading to an improved signal quality. In this work, an augmented memory polynomial-based model for the behavioral modeling of such mixer-less transmitter architecture is proposed. Extensive simulations and measurements have been carried out in order to validate the accuracy of the proposed modeling strategy. The performance of the proposed model is evaluated using normalized mean square error (NMSE) for long-term evolution (LTE) signals. NMSE for a LTE signal of 1.4 MHz bandwidth with 100,000 samples for digital combining and analog combining are recorded as −36.41 dB and −36.9 dB, respectively. Similarly, for a 5 MHz signal the proposed models achieves −31.93 dB and −32.08 dB NMSE using digital and analog combining, respectively. For further validation of the proposed model, amplitude-to-amplitude (AM-AM), amplitude-to-phase (AM-PM), and the spectral response of the modeled and measured data are plotted, reasonably meeting the desired modeling criteria. PMID:29510501
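The baseline that such behavioral models build on is the standard memory polynomial, shown below in its usual form; the augmentation terms of the proposed model are not reproduced here, so this is background notation rather than the paper's exact model.

```latex
% Baseline memory-polynomial behavioral model (standard form); the augmentation
% terms of the proposed model are not reproduced here.
y(n) = \sum_{k=1}^{K} \sum_{q=0}^{Q} a_{kq}\; x(n-q)\,\left| x(n-q) \right|^{\,k-1}
```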
Rotationally Actuated Prosthetic Hand
NASA Technical Reports Server (NTRS)
Norton, William E.; Belcher, Jewell G., Jr.; Carden, James R.; Vest, Thomas W.
1991-01-01
Prosthetic hand attached to end of remaining part of forearm and to upper arm just above elbow. Pincerlike fingers pushed apart to degree depending on rotation of forearm. Simpler in design, simpler to operate, weighs less, and takes up less space.
NASA Astrophysics Data System (ADS)
Dai, Mingzhi; Khan, Karim; Zhang, Shengnan; Jiang, Kemin; Zhang, Xingye; Wang, Weiliang; Liang, Lingyan; Cao, Hongtao; Wang, Pengjun; Wang, Peng; Miao, Lijing; Qin, Haiming; Jiang, Jun; Xue, Lixin; Chu, Junhao
2016-06-01
Sub-gap density of states (DOS) is a key parameter affecting the electrical characteristics of semiconductor-based transistors in integrated circuits. Previous spectroscopy methodologies for DOS extraction include static methods, temperature-dependent spectroscopy and photonic spectroscopy. However, they may involve many assumptions and calculations, and temperature or optical effects can perturb the intrinsic distribution of DOS along the bandgap of the materials. A direct and simpler method is developed to extract the DOS distribution from amorphous oxide-based thin-film transistors (TFTs) based on dual gate pulse spectroscopy (GPS), introducing fewer extrinsic factors, such as temperature effects and laborious numerical analysis, than conventional methods. From this direct measurement, the sub-gap DOS distribution shows a peak value at the band-gap edge, on the order of 10^17-10^21/(cm^3·eV), which is consistent with previous results. The results can be described with a model involving both Gaussian and exponential components. This tool is useful as a diagnostic for the electrical properties of oxide materials, and this study will benefit their modeling and the improvement of their electrical properties, thus broadening their applications.
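A commonly assumed parameterization consistent with the "Gaussian and exponential components" mentioned above combines an exponential band-tail term with a Gaussian deep-state term; the symbols below are generic and the expression is illustrative rather than the paper's fitted model.

```latex
% Generic sub-gap DOS parameterization with an exponential band-tail term and a
% Gaussian deep-state term (illustrative symbols, not the paper's fitted values):
g(E) = g_{t}\,\exp\!\left(\frac{E - E_C}{E_t}\right)
     + g_{d}\,\exp\!\left[-\,\frac{(E - E_d)^2}{2\sigma^2}\right]
```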
Fatigue crack growth in fiber reinforced plastics
NASA Technical Reports Server (NTRS)
Mandell, J. F.
1979-01-01
Fatigue crack growth in fiber composites occurs by such complex modes as to frustrate efforts at developing comprehensive theories and models. Under certain loading conditions and with certain types of reinforcement, simpler modes of fatigue crack growth are observed. These modes are more amenable to modeling efforts, and the fatigue crack growth rate can be predicted in some cases. Thus, a formula for prediction of ligamented mode fatigue crack growth rate is available.
Probabilistic Modeling and Simulation of Metal Fatigue Life Prediction
2002-09-01
[Report excerpt: argues by analogy that sampling only extreme specimens biases the result; measuring heights only of NBA players near a locker-room exit would yield a narrow pseudo-normal distribution with a very small standard deviation that does not represent the population, just as in materials fatigue testing investigators must understand how outliers affect the total solution. The excerpt ends at a heading beginning "D. It is much simpler to model the ..."]
Reynolds stress closure modeling in wall-bounded flows
NASA Technical Reports Server (NTRS)
Durbin, Paul A.
1993-01-01
This report describes two projects. Firstly, a Reynolds stress closure for near-wall turbulence is described. It was motivated by the simpler k-ε-v̄² model described in last year's annual research brief. Direct Numerical Simulation of three-dimensional channel flow shows a curious decrease of the turbulent kinetic energy. The second topic of this report is a model which reproduces this effect. That model is described and used to discuss the relevance of the three dimensional channel flow simulation to swept wing boundary layers.
Comment on "Spectroscopy of samarium isotopes in the sdg interacting boson model"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuyucak, S.; Lac, V.
We point out that the data used in the sdg boson model calculations by Devi and Kota [Phys. Rev. C 45, 2238 (1992)] can be equally well described by the much simpler sd boson model. We present additional data for the Sm isotopes which cannot be explained in the sd model and hence may justify such an extension to the sdg bosons. We also comment on the form of the Hamiltonian and the transition operators used in this paper.
Gurarie, David; King, Charles H; Yoon, Nara; Li, Emily
2016-08-04
Schistosoma parasites sustain a complex transmission process that cycles between a definitive human host, two free-swimming larval stages, and an intermediate snail host. Multiple factors modify their transmission and affect their control, including heterogeneity in host populations and environment, the aggregated distribution of human worm burdens, and features of parasite reproduction and host snail biology. Because these factors serve to enhance local transmission, their inclusion is important in attempting accurate quantitative prediction of the outcomes of schistosomiasis control programs. However, their inclusion raises many mathematical and computational challenges. To address these, we have recently developed a tractable stratified worm burden (SWB) model that occupies an intermediate place between simpler deterministic mean worm burden models and the very computationally-intensive, autonomous agent models. To refine the accuracy of model predictions, we modified an earlier version of the SWB by incorporating factors representing essential in-host biology (parasite mating, aggregation, density-dependent fecundity, and random egg-release) into demographically structured host communities. We also revised the snail component of the transmission model to reflect a saturable form of human-to-snail transmission. The new model allowed us to realistically simulate overdispersed egg-test results observed in individual-level field data. We further developed a Bayesian-type calibration methodology that accounted for model and data uncertainties. The new model methodology was applied to multi-year, individual-level field data on S. haematobium infections in coastal Kenya. We successfully derived age-specific estimates of worm burden distributions and worm fecundity and crowding functions for children and adults. Estimates from the new SWB model were compared with those from the older, simpler SWB with some substantial differences noted. We validated our new SWB estimates in prediction of drug treatment-based control outcomes for a typical Kenyan community. The new version of the SWB model provides a better tool to predict the outcomes of ongoing schistosomiasis control programs. It reflects parasite features that augment and perpetuate transmission, while it also readily incorporates differences in diagnostic testing and human sub-population differences in treatment coverage. Once extended to other Schistosoma species and transmission environments, it will provide a useful and efficient tool for planning control and elimination strategies.
Modeling human target acquisition in ground-to-air weapon systems
NASA Technical Reports Server (NTRS)
Phatak, A. V.; Mohr, R. L.; Vikmanis, M.; Wei, K. C.
1982-01-01
The problems associated with formulating and validating mathematical models for describing and predicting human target acquisition response are considered. In particular, the extension of the human observer model to include the acquisition phase as well as the tracking segment is presented. Relationship of the Observer model structure to the more complex Standard Optimal Control model formulation and to the simpler Transfer Function/Noise representation is discussed. Problems pertinent to structural identifiability and the form of the parameterization are elucidated. A systematic approach toward the identification of the observer acquisition model parameters from ensemble tracking error data is presented.
NASA Astrophysics Data System (ADS)
Pocebneva, Irina; Belousov, Vadim; Fateeva, Irina
2018-03-01
This article provides a methodical description of resource-time analysis for a wide range of requirements imposed on resource consumption processes in scheduling tasks during the construction of high-rise buildings and facilities. The core of the proposed approach is the resource models being determined. Generalized network models are the elements of those resource models, and their number can be too large for each element to be analyzed individually. The problem, therefore, is to approximate the original resource model by simpler time models whose number is not very large.
Raymer, James; Abel, Guy J.; Rogers, Andrei
2012-01-01
Population projection models that introduce uncertainty are a growing subset of projection models in general. In this paper, we focus on the importance of decisions made with regard to the model specifications adopted. We compare the forecasts and prediction intervals associated with four simple regional population projection models: an overall growth rate model, a component model with net migration, a component model with in-migration and out-migration rates, and a multiregional model with destination-specific out-migration rates. Vector autoregressive models are used to forecast future rates of growth, birth, death, net migration, in-migration and out-migration, and destination-specific out-migration for the North, Midlands and South regions in England. They are also used to forecast different international migration measures. The base data represent a time series of annual data provided by the Office for National Statistics from 1976 to 2008. The results illustrate how both the forecasted subpopulation totals and the corresponding prediction intervals differ for the multiregional model in comparison to other simpler models, as well as for different assumptions about international migration. The paper ends with a discussion of our results and possible directions for future research. PMID:23236221
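Two of the simpler specifications compared above, the overall growth-rate model and the component model with net migration, can be sketched as the recursions below; the rates and starting population are illustrative numbers, not ONS estimates, and the VAR forecasting of the rates themselves is omitted.

```python
# Sketch of two of the simpler projection specifications: an overall
# growth-rate recursion and a component recursion with net migration.
# Rates and starting population are illustrative, not ONS estimates.
def project_growth(pop0, growth_rate, years):
    traj = [pop0]
    for _ in range(years):
        traj.append(traj[-1] * (1.0 + growth_rate))
    return traj

def project_components(pop0, birth_rate, death_rate, net_migration, years):
    traj = [pop0]
    for _ in range(years):
        p = traj[-1]
        traj.append(p + p * birth_rate - p * death_rate + net_migration)
    return traj

if __name__ == "__main__":
    print(round(project_growth(15_000_000, 0.004, 25)[-1]))
    print(round(project_components(15_000_000, 0.012, 0.010, 20_000, 25)[-1]))
```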
NASA Technical Reports Server (NTRS)
Plumb, R. A.
1985-01-01
Two dimensional modeling has become an established technique for the simulation of the global structure of trace constituents. Such models are simpler to formulate and cheaper to operate than three dimensional general circulation models, while avoiding some of the gross simplifications of one dimensional models. Nevertheless, the parameterization of eddy fluxes required in a 2-D model is not a trivial problem. This fact has apparently led some to interpret the shortcomings of existing 2-D models as indicating that the parameterization procedure is wrong in principle. There are grounds to believe that these shortcomings result primarily from incorrect implementations of the predictions of eddy transport theory and that a properly based parameterization may provide a good basis for atmospheric modeling. The existence of these GCM-derived coefficients affords an unprecedented opportunity to test the validity of the flux-gradient parameterization. To this end, a zonally averaged (2-D) model was developed, using these coefficients in the transport parameterization. Results from this model for a number of contrived tracer experiments were compared with the parent GCM. The generally good agreement substantially validates the flux-gradient parameterization, and thus the basic principle of 2-D modeling.
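The flux-gradient parameterization tested above is usually written with a diffusivity tensor relating the zonal-mean eddy tracer fluxes to the mean gradients; the form below is the standard textbook expression, with K representing the GCM-derived coefficients, and is not quoted from the report.

```latex
% Flux-gradient (diffusive) closure for zonally averaged eddy tracer transport;
% K contains the GCM-derived coefficients referred to in the abstract.
\begin{pmatrix} \overline{v'\chi'} \\ \overline{w'\chi'} \end{pmatrix}
  = -\begin{pmatrix} K_{yy} & K_{yz} \\ K_{zy} & K_{zz} \end{pmatrix}
    \begin{pmatrix} \partial\bar{\chi}/\partial y \\ \partial\bar{\chi}/\partial z \end{pmatrix}
```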
Designing Distance Learning Tasks to Help Maximize Vocabulary Development
ERIC Educational Resources Information Center
Loucky, John Paul
2012-01-01
Task-based language learning using the benefits of online computer-assisted language learning (CALL) can be effective for rapid vocabulary expansion, especially when target vocabulary has been pre-arranged into bilingual categories under simpler, common Semantic Field Keywords. Results and satisfaction levels for both Chinese English majors and…
Enhanced polyhydroxyalkanoate production from organic wastes via process control.
Vargas, Alejandro; Montaño, Liliana; Amaya, Rodolfo
2014-03-01
This work explores the use of a model-based control scheme to enhance the productivity of polyhydroxyalkanoate (PHA) production in a mixed culture two-stage system fed with synthetic wastewater. The controller supplies pulses of substrate while regulating the dissolved oxygen (DO) concentration and uses the data to fit a dynamic mathematical model, which in turn is used to predict the time until the next pulse addition. Experiments in a bench scale system first determined the optimal DO set-point and initial substrate concentration. Then the proposed feedback control strategy was compared with a simpler empiric algorithm. The results show that a substrate conversion rate of 1.370±0.598mgPHA/mgCOD/d was achieved. The proposed strategy can also indicate when to stop the accumulation of PHA upon saturation, which occurred with a PHA content of 71.0±7.2wt.%. Copyright © 2014 Elsevier Ltd. All rights reserved.
Entropy-based financial asset pricing.
Ormos, Mihály; Zibriczky, Dávid
2014-01-01
We investigate entropy as a financial risk measure. Entropy explains the equity premium of securities and portfolios in a simpler way and, at the same time, with higher explanatory power than the beta parameter of the capital asset pricing model. For asset pricing we define the continuous entropy as an alternative measure of risk. Our results show that entropy decreases in the function of the number of securities involved in a portfolio in a similar way to the standard deviation, and that efficient portfolios are situated on a hyperbola in the expected return-entropy system. For empirical investigation we use daily returns of 150 randomly selected securities for a period of 27 years. Our regression results show that entropy has a higher explanatory power for the expected return than the capital asset pricing model beta. Furthermore we show the time varying behavior of the beta along with entropy.
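The continuous (differential) entropy used above as a risk measure can be estimated directly from a return series; the histogram estimator and synthetic Gaussian returns below are an illustrative sketch, not the estimator or data of the paper.

```python
# Sketch of a histogram-based estimate of the differential (continuous) entropy
# of a return series, compared with its standard deviation. Returns are synthetic.
import numpy as np

def differential_entropy(samples, bins=50):
    """Histogram-based estimate of -integral f(x) log f(x) dx."""
    counts, edges = np.histogram(samples, bins=bins, density=True)
    widths = np.diff(edges)
    p = counts * widths                           # probability mass per bin
    nz = p > 0
    return -np.sum(p[nz] * np.log(counts[nz]))    # sum p_i * log f_i

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    returns = rng.normal(0.0005, 0.012, 252 * 27)    # ~27 years of daily returns
    print("entropy estimate:", round(differential_entropy(returns), 3))
    print("std dev:", round(returns.std(), 4))
    # For a Gaussian, entropy = 0.5 * ln(2*pi*e*sigma^2), so the estimate
    # should land near the theoretical value below.
    print("gaussian theory:", round(0.5 * np.log(2 * np.pi * np.e * returns.var()), 3))
```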
NASA Technical Reports Server (NTRS)
Wen, John T.; Kreutz-Delgado, Kenneth; Bayard, David S.
1992-01-01
A new class of joint level control laws for all-revolute robot arms is introduced. The analysis is similar to a recently proposed energy-like Liapunov function approach, except that the closed-loop potential function is shaped in accordance with the underlying joint space topology. This approach gives way to a much simpler analysis and leads to a new class of control designs which guarantee both global asymptotic stability and local exponential stability. When Coulomb and viscous friction and parameter uncertainty are present as model perturbations, a sliding mode-like modification of the control law results in a robustness-enhancing outer loop. Adaptive control is formulated within the same framework. A linear-in-the-parameters formulation is adopted and globally asymptotically stable adaptive control laws are derived by simply replacing unknown model parameters by their estimates (i.e., certainty equivalence adaptation).
NASA Technical Reports Server (NTRS)
Wen, John T.; Kreutz, Kenneth; Bayard, David S.
1988-01-01
A class of joint-level control laws for all-revolute robot arms is introduced. The analysis is similar to the recently proposed energy Liapunov function approach except that the closed-loop potential function is shaped in accordance with the underlying joint space topology. By using energy Liapunov functions with the modified potential energy, a much simpler analysis can be used to show closed-loop global asymptotic stability and local exponential stability. When Coulomb and viscous friction and model parameter errors are present, a sliding-mode-like modification of the control law is proposed to add a robustness-enhancing outer loop. Adaptive control is also addressed within the same framework. A linear-in-the-parameters formulation is adopted, and globally asymptotically stable adaptive control laws are derived by replacing the model parameters in the nonadaptive control laws by their estimates.
NASA Astrophysics Data System (ADS)
Selker, Ted
1983-05-01
A lens focusing system using a hardware model of a retina (Reticon RL256 light-sensitive array) with a low-cost processor (8085 with 512 bytes of ROM and 512 bytes of RAM) was built. This system was developed and tested on a variety of visual stimuli to demonstrate that: a) an algorithm which moves a lens to maximize the sum of the differences of light level on adjacent light sensors will converge to best focus in all but contrived situations. This is a simpler algorithm than any previously suggested; b) it is feasible to use unmodified video sensor arrays with inexpensive processors to aid video camera use. In the future, software could be developed to extend the processor's usefulness, possibly to track an actor by panning and zooming to give a camera operator increased ease of framing; c) lateral inhibition is an adequate basis for determining best focus. This supports a simple anatomically motivated model of how our brain focuses our eyes.
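The focus criterion and hill-climbing loop described in point a) can be sketched as below: sharpness is the sum of absolute differences between adjacent sensor elements, and the lens position is stepped in whichever direction increases it. The toy blurred-bar-pattern "sensor" and all step-size choices are assumptions made for illustration, not details of the 1983 hardware.

```python
# Sketch of the focus criterion and hill-climbing loop described above: the
# sharpness score is the sum of absolute differences of adjacent sensor
# elements, and the lens is stepped in the direction that increases it.
# The blurred-bar-pattern "sensor" below is a stand-in for real optics.
import numpy as np

def read_sensor(lens_pos, best_pos=3.7, n=256, seed=0):
    """Toy 1-D sensor: a bar pattern blurred more as |lens_pos - best_pos| grows."""
    rng = np.random.default_rng(seed)
    scene = (np.arange(n) % 16 < 8).astype(float)            # bar pattern
    sigma = 0.5 + 4.0 * abs(lens_pos - best_pos)
    kernel = np.exp(-0.5 * (np.arange(-15, 16) / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(scene, kernel, mode="same") + rng.normal(0, 0.002, n)

def sharpness(signal):
    return np.sum(np.abs(np.diff(signal)))

def autofocus(pos=0.0, step=0.5, iters=40):
    score = sharpness(read_sensor(pos))
    direction = 1.0
    for _ in range(iters):
        new_score = sharpness(read_sensor(pos + direction * step))
        if new_score > score:
            pos, score = pos + direction * step, new_score
        else:
            direction, step = -direction, step * 0.5          # reverse and refine
    return pos

print("lens settles near:", round(autofocus(), 2))            # close to best_pos = 3.7
```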
Gmz: a Gml Compression Model for Webgis
NASA Astrophysics Data System (ADS)
Khandelwal, A.; Rajan, K. S.
2017-09-01
Geography markup language (GML) is an XML specification for expressing geographical features. Defined by Open Geospatial Consortium (OGC), it is widely used for storage and transmission of maps over the Internet. XML schemas provide the convenience to define custom features profiles in GML for specific needs as seen in widely popular cityGML, simple features profile, coverage, etc. Simple features profile (SFP) is a simpler subset of GML profile with support for point, line and polygon geometries. SFP has been constructed to make sure it covers most commonly used GML geometries. Web Feature Service (WFS) serves query results in SFP by default. But it falls short of being an ideal choice due to its high verbosity and size-heavy nature, which provides immense scope for compression. GMZ is a lossless compression model developed to work for SFP compliant GML files. Our experiments indicate GMZ achieves reasonably good compression ratios and can be useful in WebGIS based applications.
On the heat capacity of elements in the WDM regime
NASA Astrophysics Data System (ADS)
Hamel, Sebastien
2014-03-01
Once thought to get simpler with increasing pressure, elemental systems have been discovered to exhibit complex structures and multiple phases at high pressure. For carbon, QMD/PIMC simulations have been performed and the results are guiding alternative modelling methodologies for constructing a carbon equation-of-state covering the warm dense matter regime. One of the main results of our new QMD/PIMC carbon equation of state is that the decay of the ion-thermal specific heat with temperature is much faster than previously expected. An important question is whether this is found only in carbon and not in other elements. In this presentation, based on QMD calculations for several elements, we explore trends in the transition from the condensed matter to the warm dense matter regime.
FinFET-based Miller encoder for UHF and SHF RFID application
NASA Astrophysics Data System (ADS)
Srinivasulu, Avireni; Sravanthi, G.; Sarada, M.; Pal, Dipankar
2018-01-01
This paper proposes a T-flip-flop and a Miller encoder design for ultra-high-frequency and super-high-frequency radio-frequency identification (RFID) applications using FinFETs. Miller encoding is used in magnetic recording, in the optical domain and also in RFID. Performance of the proposed circuit was examined by installing the model parameters of a 20-nm FinFET (obtained from an open source) on the Cadence platform with a +0.4 V supply rail at frequencies of 1, 2 and 10 GHz. Simulation results have confirmed that the proposed Miller encoder offers a simpler design with a reduced transistor count and gives lower power dissipation and a higher frequency range of operation at a lower supply rail compared to other candidate designs. The proposed design also promises lower propagation delay.
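The encoding rule realized by such a circuit can be sketched behaviorally: a '1' produces a transition at mid-bit, a '0' produces none unless it follows another '0', in which case the transition falls at the bit boundary. The sketch below models only this rule, not the FinFET implementation in the paper.

```python
# Behavioral sketch of Miller (delay) encoding as used in RFID links.
# This models the encoding rule only, not the FinFET circuit of the paper.
def miller_encode(bits, start_level=0):
    """Return two half-bit samples per input bit (levels 0/1)."""
    level = start_level
    prev = None
    out = []
    for b in bits:
        if b == 1:
            out.append(level)        # first half
            level ^= 1               # mid-bit transition for a '1'
            out.append(level)        # second half
        else:
            if prev == 0:
                level ^= 1           # boundary transition between two zeros
            out.extend([level, level])
        prev = b
    return out

if __name__ == "__main__":
    data = [1, 0, 1, 1, 0, 0, 1, 0]
    print(miller_encode(data))
```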
Measurement of the electron structure function F2e at LEP energies
NASA Astrophysics Data System (ADS)
Abdallah, J.; Abreu, P.; Adam, W.; Adzic, P.; Albrecht, T.; Alemany-Fernandez, R.; Allmendinger, T.; Allport, P. P.; Amaldi, U.; Amapane, N.; Amato, S.; Anashkin, E.; Andreazza, A.; Andringa, S.; Anjos, N.; Antilogus, P.; Apel, W.-D.; Arnoud, Y.; Ask, S.; Asman, B.; Augustin, J. E.; Augustinus, A.; Baillon, P.; Ballestrero, A.; Bambade, P.; Barbier, R.; Bardin, D.; Barker, G. J.; Baroncelli, A.; Battaglia, M.; Baubillier, M.; Becks, K.-H.; Begalli, M.; Behrmann, A.; Belous, K.; Ben-Haim, E.; Benekos, N.; Benvenuti, A.; Berat, C.; Berggren, M.; Bertrand, D.; Besancon, M.; Besson, N.; Bloch, D.; Blom, M.; Bluj, M.; Bonesini, M.; Boonekamp, M.; Booth, P. S. L.; Borisov, G.; Botner, O.; Bouquet, B.; Bowcock, T. J. V.; Boyko, I.; Bracko, M.; Brenner, R.; Brodet, E.; Bruckman, P.; Brunet, J. M.; Buschbeck, B.; Buschmann, P.; Calvi, M.; Camporesi, T.; Canale, V.; Carena, F.; Castro, N.; Cavallo, F.; Chapkin, M.; Charpentier, Ph.; Checchia, P.; Chierici, R.; Chliapnikov, P.; Chudoba, J.; Chung, S. U.; Cieslik, K.; Collins, P.; Contri, R.; Cosme, G.; Cossutti, F.; Costa, M. J.; Crennell, D.; Cuevas, J.; D'Hondt, J.; da Silva, T.; da Silva, W.; Della Ricca, G.; de Angelis, A.; de Boer, W.; de Clercq, C.; de Lotto, B.; de Maria, N.; de Min, A.; de Paula, L.; di Ciaccio, L.; di Simone, A.; Doroba, K.; Drees, J.; Eigen, G.; Ekelof, T.; Ellert, M.; Elsing, M.; Espirito Santo, M. C.; Fanourakis, G.; Fassouliotis, D.; Feindt, M.; Fernandez, J.; Ferrer, A.; Ferro, F.; Flagmeyer, U.; Foeth, H.; Fokitis, E.; Fulda-Quenzer, F.; Fuster, J.; Gandelman, M.; Garcia, C.; Gavillet, Ph.; Gazis, E.; Gokieli, R.; Golob, B.; Gomez-Ceballos, G.; Gonçalves, P.; Graziani, E.; Grosdidier, G.; Grzelak, K.; Guy, J.; Haag, C.; Hallgren, A.; Hamacher, K.; Hamilton, K.; Haug, S.; Hauler, F.; Hedberg, V.; Hennecke, M.; Hoffman, J.; Holmgren, S.-O.; Holt, P. J.; Houlden, M. A.; Jackson, J. N.; Jarlskog, G.; Jarry, P.; Jeans, D.; Johansson, E. K.; Jonsson, P.; Joram, C.; Jungermann, L.; Kapusta, F.; Katsanevas, S.; Katsoufis, E.; Kernel, G.; Kersevan, B. P.; Kerzel, U.; King, B. T.; Kjaer, N. J.; Kluit, P.; Kokkinias, P.; Kourkoumelis, C.; Kouznetsov, O.; Krumstein, Z.; Kucharczyk, M.; Lamsa, J.; Leder, G.; Ledroit, F.; Leinonen, L.; Leitner, R.; Lemonne, J.; Lepeltier, V.; Lesiak, T.; Liebig, W.; Liko, D.; Lipniacka, A.; Lopes, J. H.; Lopez, J. M.; Loukas, D.; Lutz, P.; Lyons, L.; MacNaughton, J.; Malek, A.; Maltezos, S.; Mandl, F.; Marco, J.; Marco, R.; Marechal, B.; Margoni, M.; Marin, J.-C.; Mariotti, C.; Markou, A.; Martinez-Rivero, C.; Masik, J.; Mastroyiannopoulos, N.; Matorras, F.; Matteuzzi, C.; Mazzucato, F.; Mazzucato, M.; Mc Nulty, R.; Meroni, C.; Migliore, E.; Mitaroff, W.; Mjoernmark, U.; Moa, T.; Moch, M.; Moenig, K.; Monge, R.; Montenegro, J.; Moraes, D.; Moreno, S.; Morettini, P.; Mueller, U.; Muenich, K.; Mulders, M.; Mundim, L.; Murray, W.; Muryn, B.; Myatt, G.; Myklebust, T.; Nassiakou, M.; Navarria, F.; Nawrocki, K.; Nemecek, S.; Nicolaidou, R.; Nikolenko, M.; Oblakowska-Mucha, A.; Obraztsov, V.; Olshevski, A.; Onofre, A.; Orava, R.; Osterberg, K.; Ouraou, A.; Oyanguren, A.; Paganoni, M.; Paiano, S.; Palacios, J. P.; Palka, H.; Papadopoulou, Th. D.; Pape, L.; Parkes, C.; Parodi, F.; Parzefall, U.; Passeri, A.; Passon, O.; Peralta, L.; Perepelitsa, V.; Perrotta, A.; Petrolini, A.; Piedra, J.; Pieri, L.; Pierre, F.; Pimenta, M.; Piotto, E.; Podobnik, T.; Poireau, V.; Pol, M. 
E.; Polok, G.; Pozdniakov, V.; Pukhaeva, N.; Pullia, A.; Radojicic, D.; Rebecchi, P.; Rehn, J.; Reid, D.; Reinhardt, R.; Renton, P.; Richard, F.; Ridky, J.; Rivero, M.; Rodriguez, D.; Romero, A.; Ronchese, P.; Roudeau, P.; Rovelli, T.; Ruhlmann-Kleider, V.; Ryabtchikov, D.; Sadovsky, A.; Salmi, L.; Salt, J.; Sander, C.; Savoy-Navarro, A.; Schwickerath, U.; Sekulin, R.; Siebel, M.; Sisakian, A.; Slominski, W.; Smadja, G.; Smirnova, O.; Sokolov, A.; Sopczak, A.; Sosnowski, R.; Spassov, T.; Stanitzki, M.; Stocchi, A.; Strauss, J.; Stugu, B.; Szczekowski, M.; Szeptycka, M.; Szumlak, T.; Szwed, J.; Tabarelli, T.; Tegenfeldt, F.; Timmermans, J.; Tkatchev, L.; Tobin, M.; Todorovova, S.; Tomé, B.; Tonazzo, A.; Tortosa, P.; Travnicek, P.; Treille, D.; Tristram, G.; Trochimczuk, M.; Troncon, C.; Turluer, M.-L.; Tyapkin, I. A.; Tyapkin, P.; Tzamarias, S.; Uvarov, V.; Valenti, G.; van Dam, P.; van Eldik, J.; van Remortel, N.; van Vulpen, I.; Vegni, G.; Veloso, F.; Venus, W.; Verdier, P.; Verzi, V.; Vilanova, D.; Vitale, L.; Vrba, V.; Wahlen, H.; Washbrook, A. J.; Weiser, C.; Wicke, D.; Wickens, J.; Wilkinson, G.; Winter, M.; Witek, M.; Yushchenko, O.; Zalewska, A.; Zalewski, P.; Zavrtanik, D.; Zhuravlov, V.; Zimin, N. I.; Zintchenko, A.; Zupan, M.; Delphi Collaboration
2014-10-01
The hadronic part of the electron structure function F2e has been measured for the first time, using e+e- data collected by the DELPHI experiment at LEP, at centre-of-mass energies of √s = 91.2-209.5 GeV. The data analysis is simpler than that of the measurement of the photon structure function. The electron structure function F2e data are compared to predictions of phenomenological models based on the photon structure function. It is shown that the contribution of large target photon virtualities is significant. The data presented can serve as a cross-check of the photon structure function F2γ analyses and help in refining existing parameterisations.
State medical licensure for telemedicine and teleradiology.
Hunter, Tim B; Weinstein, Ronald S; Krupinski, Elizabeth A
2015-04-01
Physician medical licensure is state based for historical and constitutional reasons. It may also provide the best method for guaranteeing patient protection from unqualified, incompetent, impaired, or unprofessional practitioners of medicine. However, a significant cost for physicians practicing telemedicine is having to obtain multiple state medical licenses. There is reasonable likelihood that model legislation for the practice of telemedicine across state boundaries will be passed in the next few years, providing physicians with a simpler process for license reciprocity in multiple states via interstate licensing compacts. Physicians would have to be licensed in the state in which the patient resides. Patient complaints would still be adjudicated by the medical licensing board in the state where the patient resides, according to applicable state legislation.
On directionality of phrase structure building.
Chesi, Cristiano
2015-02-01
Minimalism in grammatical theorizing (Chomsky in The minimalist program. MIT Press, Cambridge, 1995) led to simpler linguistic devices and a better focalization of the core properties of the structure building engine: a lexicon and a free (recursive) phrase formation operation, dubbed Merge, are the basic components that serve in building syntactic structures. Here I suggest that by looking at the elementary restrictions that apply to Merge (i.e., selection and licensing of functional features), we could conclude that a re-orientation of the syntactic derivation (from bottom-up/right-left to top-down/left-right) is necessary to make the theory simpler, especially for long-distance (filler-gap) dependencies, and is also empirically more adequate. If the structure building operations assembled lexical items in the order they are pronounced (Phillips in Order and structure. PhD thesis, MIT, 1996; Chesi in Phases and cartography in linguistic computation: Toward a cognitively motivated computational model of linguistic competence. PhD thesis, Università di Siena, 2004; Chesi in Competence and computation: Toward a processing friendly minimalist grammar. Unipress, Padova, 2012), on-line performance data could better fit the grammatical model, without resorting to external "performance factors." The phase-based, top-down (and, as a consequence, left-right) Minimalist Grammar discussed here goes in this direction, ultimately showing how strong Islands (Huang in Logical relations in Chinese and the theory of grammar. PhD thesis, MIT, 1982) and intervention effects (Gordon et al. in J Exp Psychol Learn Mem Cogn 27:1411-1423, 2001, Gordon et al. in J Mem Lang 51:97-114, 2004) could be better explained in structural terms assuming this unconventional derivational direction.
Towards a climate-dependent paradigm of ammonia emission and deposition
Sutton, Mark A.; Reis, Stefan; Riddick, Stuart N.; Dragosits, Ulrike; Nemitz, Eiko; Theobald, Mark R.; Tang, Y. Sim; Braban, Christine F.; Vieno, Massimo; Dore, Anthony J.; Mitchell, Robert F.; Wanless, Sarah; Daunt, Francis; Fowler, David; Blackall, Trevor D.; Milford, Celia; Flechard, Chris R.; Loubet, Benjamin; Massad, Raia; Cellier, Pierre; Personne, Erwan; Coheur, Pierre F.; Clarisse, Lieven; Van Damme, Martin; Ngadi, Yasmine; Clerbaux, Cathy; Skjøth, Carsten Ambelas; Geels, Camilla; Hertel, Ole; Wichink Kruit, Roy J.; Pinder, Robert W.; Bash, Jesse O.; Walker, John T.; Simpson, David; Horváth, László; Misselbrook, Tom H.; Bleeker, Albert; Dentener, Frank; de Vries, Wim
2013-01-01
Existing descriptions of bi-directional ammonia (NH3) land–atmosphere exchange incorporate temperature and moisture controls, and are beginning to be used in regional chemical transport models. However, such models have typically applied simpler emission factors to upscale the main NH3 emission terms. While this approach has successfully simulated the main spatial patterns on local to global scales, it fails to address the environment- and climate-dependence of emissions. To handle these issues, we outline the basis for a new modelling paradigm where both NH3 emissions and deposition are calculated online according to diurnal, seasonal and spatial differences in meteorology. We show how measurements reveal a strong, but complex pattern of climatic dependence, which is increasingly being characterized using ground-based NH3 monitoring and satellite observations, while advances in process-based modelling are illustrated for agricultural and natural sources, including a global application for seabird colonies. A future architecture for NH3 emission–deposition modelling is proposed that integrates the spatio-temporal interactions, and provides the necessary foundation to assess the consequences of climate change. Based on available measurements, a first empirical estimate suggests that 5°C warming would increase emissions by 42 per cent (28–67%). Together with increased anthropogenic activity, global NH3 emissions may increase from 65 (45–85) Tg N in 2008 to reach 132 (89–179) Tg by 2100. PMID:23713128
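As a rough illustration of the climate dependence quoted above, the sketch below scales a baseline NH3 emission with an exponential temperature response calibrated to the stated 42 per cent increase per 5°C of warming. The functional form, the calibration to a single point, and the baseline value are illustrative assumptions, not the paper's model.

    import numpy as np

    # Exponential temperature response calibrated so that +5 degC gives +42%
    # (the central estimate quoted in the abstract); this form is an assumption.
    k = np.log(1.42) / 5.0                    # ~0.070 per degC

    def scale_nh3_emission(e_base_tg, delta_t_degc):
        """Scale a baseline NH3 emission (Tg N per year) for a warming delta_t."""
        return e_base_tg * np.exp(k * delta_t_degc)

    print(scale_nh3_emission(65.0, 5.0))      # ~92 Tg N, i.e. +42% over the 2008 estimate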
A label field fusion bayesian model and its penalized maximum rand estimator for image segmentation.
Mignotte, Max
2010-06-01
This paper presents a novel segmentation approach based on a Markov random field (MRF) fusion model which aims at combining several segmentation results associated with simpler clustering models in order to achieve a more reliable and accurate segmentation result. The proposed fusion model is derived from the recently introduced probabilistic Rand measure for comparing one segmentation result to one or more manual segmentations of the same image. This non-parametric measure allows us to easily derive an appealing fusion model of label fields, easily expressed as a Gibbs distribution, or as a nonstationary MRF model defined on a complete graph. Concretely, this Gibbs energy model encodes the set of binary constraints, in terms of pairs of pixel labels, provided by each segmentation result to be fused. Combined with a prior distribution, this energy-based Gibbs model also allows for definition of an interesting penalized maximum probabilistic rand estimator with which the fusion of simple, quickly estimated, segmentation results appears as an interesting alternative to complex segmentation models existing in the literature. This fusion framework has been successfully applied on the Berkeley image database. The experiments reported in this paper demonstrate that the proposed method is efficient in terms of visual evaluation and quantitative performance measures and performs well compared to the best existing state-of-the-art segmentation methods recently proposed in the literature.
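The Rand-style agreement underlying the fusion criterion can be sketched in a few lines: for every pair of pixels, two label fields either agree (same label in both, or different labels in both) or disagree, and the fusion favours the candidate labelling that agrees most, on average, with the ensemble of simple segmentations. The snippet below is a brute-force illustration of that quantity only; the paper's Gibbs-energy formulation and penalty term are not reproduced.

    import numpy as np
    from itertools import combinations

    def rand_agreement(seg_a, seg_b):
        """Fraction of pixel pairs on which two label fields agree
        (same-label in both, or different-label in both): the Rand index."""
        a, b = np.ravel(seg_a), np.ravel(seg_b)
        pairs = list(combinations(range(a.size), 2))
        agree = sum((a[i] == a[j]) == (b[i] == b[j]) for i, j in pairs)
        return agree / len(pairs)

    def mean_rand(candidate, ensemble):
        """Average agreement of a candidate labelling with an ensemble of
        simple segmentations; the fusion step seeks to maximise this."""
        return np.mean([rand_agreement(candidate, s) for s in ensemble])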
NASA Astrophysics Data System (ADS)
Broadbent, A. M.; Georgescu, M.; Krayenhoff, E. S.; Sailor, D.
2017-12-01
Utility-scale solar power plants are a rapidly growing component of the solar energy sector. Utility-scale photovoltaic (PV) solar power generation in the United States has increased by 867% since 2012 (EIA, 2016). This expansion is likely to continue as the cost of PV technologies decreases. While most agree that solar power can decrease greenhouse gas emissions, the biophysical effects of PV systems on surface energy balance (SEB), and implications for surface climate, are not well understood. To our knowledge, there has never been a detailed observational study of SEB at a utility-scale solar array. This study presents data from an eddy covariance observational tower, temporarily placed above a utility-scale PV array in Southern Arizona. Comparison of PV SEB with a reference (unmodified) site shows that solar panels can alter the SEB and near-surface climate. SEB observations are used to develop and validate a new and more complete SEB PV model. In addition, the PV model is compared to simpler PV modelling methods. The simpler PV models produce results that differ from those of our newly developed model and cannot capture the more complex processes that influence PV SEB. Finally, hypothetical scenarios of PV expansion across the continental United States (CONUS) were developed using various spatial mapping criteria. CONUS simulations of PV expansion reveal regional variability in the biophysical effects of PV expansion. The study presents the first rigorous and validated simulations of the biophysical effects of utility-scale PV arrays.
A Classroom Entry and Exit Game of Supply with Price-Taking Firms
ERIC Educational Resources Information Center
Cheung, Stephen L.
2005-01-01
The author describes a classroom game demonstrating the process of adjustment to long-run equilibrium in a market consisting of price-taking firms. This game unites and extends key insights from several simpler games in a framework more consistent with the standard textbook model of a competitive industry. Because firms have increasing marginal…
General Relativity in (1 + 1) Dimensions
ERIC Educational Resources Information Center
Boozer, A. D.
2008-01-01
We describe a theory of gravity in (1 + 1) dimensions that can be thought of as a toy model of general relativity. The theory should be a useful pedagogical tool, because it is mathematically much simpler than general relativity but shares much of the same conceptual structure; in particular, it gives a simple illustration of how gravity arises…
ERIC Educational Resources Information Center
Bush, Drew; Sieber, Renee; Seiler, Gale; Chandler, Mark
2018-01-01
This study with 79 students in Montreal, Quebec, compared the educational use of a National Aeronautics and Space Administration (NASA) global climate model (GCM) to climate education technologies developed for classroom use that included simpler interfaces and processes. The goal was to show how differing climate education technologies succeed…
Conditional Subspace Clustering of Skill Mastery: Identifying Skills that Separate Students
ERIC Educational Resources Information Center
Nugent, Rebecca; Ayers, Elizabeth; Dean, Nema
2009-01-01
In educational research, a fundamental goal is identifying which skills students have mastered, which skills they have not, and which skills they are in the process of mastering. As the number of examinees, items, and skills increases, the estimation of even simple cognitive diagnosis models becomes difficult. We adopt a faster, simpler approach:…
Photolysis rates in correlated overlapping cloud fields: Cloud-J 7.3
Prather, M. J.
2015-05-27
A new approach for modeling photolysis rates ( J values) in atmospheres with fractional cloud cover has been developed and implemented as Cloud-J – a multi-scattering eight-stream radiative transfer model for solar radiation based on Fast-J. Using observed statistics for the vertical correlation of cloud layers, Cloud-J 7.3 provides a practical and accurate method for modeling atmospheric chemistry. The combination of the new maximum-correlated cloud groups with the integration over all cloud combinations represented by four quadrature atmospheres produces mean J values in an atmospheric column with root-mean-square errors of 4% or less compared with 10–20% errors using simpler approximations. Cloud-J is practical for chemistry-climate models, requiring only an average of 2.8 Fast-J calls per atmosphere, vs. hundreds of calls with the correlated cloud groups, or 1 call with the simplest cloud approximations. Another improvement in modeling J values, the treatment of volatile organic compounds with pressure-dependent cross sections, is also incorporated into Cloud-J.
Photolysis rates in correlated overlapping cloud fields: Cloud-J 7.3c
Prather, M. J.
2015-08-14
A new approach for modeling photolysis rates ( J values) in atmospheres with fractional cloud cover has been developed and is implemented as Cloud-J – a multi-scattering eight-stream radiative transfer model for solar radiation based on Fast-J. Using observations of the vertical correlation of cloud layers, Cloud-J 7.3c provides a practical and accurate method for modeling atmospheric chemistry. The combination of the new maximum-correlated cloud groups with the integration over all cloud combinations by four quadrature atmospheres produces mean J values in an atmospheric column with root mean square (rms) errors of 4 % or less compared with 10–20 % errors using simpler approximations. Cloud-J is practical for chemistry–climate models, requiring only an average of 2.8 Fast-J calls per atmosphere vs. hundreds of calls with the correlated cloud groups, or 1 call with the simplest cloud approximations. Another improvement in modeling J values, the treatment of volatile organic compounds with pressure-dependent cross sections, is also incorporated into Cloud-J.
Statistical models of global Langmuir mixing
NASA Astrophysics Data System (ADS)
Li, Qing; Fox-Kemper, Baylor; Breivik, Øyvind; Webb, Adrean
2017-05-01
The effects of Langmuir mixing on the surface ocean mixing may be parameterized by applying an enhancement factor which depends on wave, wind, and ocean state to the turbulent velocity scale in the K-Profile Parameterization. Diagnosing the appropriate enhancement factor online in global climate simulations is readily achieved by coupling with a prognostic wave model, but with significant computational and code development expenses. In this paper, two alternatives that do not require a prognostic wave model, (i) a monthly mean enhancement factor climatology, and (ii) an approximation to the enhancement factor based on the empirical wave spectra, are explored and tested in a global climate model. Both appear to reproduce the Langmuir mixing effects as estimated using a prognostic wave model, with nearly identical and substantial improvements in the simulated mixed layer depth and intermediate water ventilation over control simulations, but significantly less computational cost. Simpler approaches, such as ignoring Langmuir mixing altogether or setting a globally constant Langmuir number, are found to be deficient. Thus, the consequences of Stokes depth and misaligned wind and waves are important.
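For orientation, the kind of enhancement factor being approximated can be written as a function of the turbulent Langmuir number La_t = sqrt(u*/u_Stokes); the coefficients below follow one commonly used empirical fit and are assumptions for illustration, not necessarily the parameterization evaluated in the paper.

    import numpy as np

    def langmuir_enhancement(u_star, u_stokes):
        """Multiplicative enhancement of the KPP turbulent velocity scale as a
        function of the turbulent Langmuir number La_t = sqrt(u*/u_Stokes).
        Coefficients are one published empirical fit, used here illustratively."""
        la_t = np.sqrt(u_star / u_stokes)
        return np.sqrt(1.0 + (3.1 * la_t) ** -2 + (5.4 * la_t) ** -4)

    # Typical open-ocean values (assumed): friction velocity 1 cm/s and surface
    # Stokes drift ~7 cm/s give La_t ~ 0.38 and an enhancement of roughly 1.3.
    print(langmuir_enhancement(u_star=0.01, u_stokes=0.07))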
Predicting Deforestation Patterns in Loreto, Peru from 2000-2010 Using a Nested GLM Approach
NASA Astrophysics Data System (ADS)
Vijay, V.; Jenkins, C.; Finer, M.; Pimm, S.
2013-12-01
Loreto is the largest province in Peru, covering about 370,000 km2. Because of its remote location in the Amazonian rainforest, it is also one of the most sparsely populated. Though a majority of the region remains covered by forest, deforestation is being driven by human encroachment through industrial activities and the spread of colonization and agriculture. The importance of accurate predictive modeling of deforestation has spawned an extensive body of literature on the topic. We present a nested GLM approach based on predictions of deforestation from 2000-2010 and using variables representing the expected drivers of deforestation. Models were constructed using 2000 to 2005 changes and tested against data for 2005 to 2010. The most complex model, which included transportation variables (roads and navigable rivers), spatial contagion processes, population centers and industrial activities, performed better in predicting the 2005 to 2010 changes (75.8% accurate) than did a simpler model using only transportation variables (69.2% accurate). Finally we contrast the GLM approach with a more complex spatially articulated model.
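The nested comparison described above amounts to fitting a binomial GLM twice, once with transportation predictors only and once with the full predictor set, and scoring both on the later period. A minimal sketch is given below; the variable names and data are placeholders, not the study's dataset.

    from sklearn.linear_model import LogisticRegression

    def fit_and_score(X_train, y_train, X_test, y_test):
        """Fit a logistic (binomial) GLM and return out-of-period accuracy."""
        model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        return model.score(X_test, y_test)

    # Hypothetical arrays: y is 1 where a forest pixel was cleared in the test
    # period; X_transport holds distances to roads and navigable rivers, and
    # X_full adds contagion, population-centre and industrial-activity terms.
    # acc_simple = fit_and_score(X_transport_00_05, y_00_05, X_transport_05_10, y_05_10)
    # acc_full   = fit_and_score(X_full_00_05, y_00_05, X_full_05_10, y_05_10)
    # The study reports the analogous comparison as 69.2% vs 75.8% accuracy.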
Programmable logic construction kits for hyper-real-time neuronal modeling.
Guerrero-Rivera, Ruben; Morrison, Abigail; Diesmann, Markus; Pearce, Tim C
2006-11-01
Programmable logic designs are presented that achieve exact integration of leaky integrate-and-fire soma and dynamical synapse neuronal models and incorporate spike-time dependent plasticity and axonal delays. Highly accurate numerical performance has been achieved by modifying simpler forward-Euler-based circuitry requiring minimal circuit allocation, which, as we show, behaves equivalently to exact integration. These designs have been implemented and simulated at the behavioral and physical device levels, demonstrating close agreement with both numerical and analytical results. By exploiting finely grained parallelism and single clock cycle numerical iteration, these designs achieve simulation speeds at least five orders of magnitude faster than the nervous system, termed here hyper-real-time operation, when deployed on commercially available field-programmable gate array (FPGA) devices. Taken together, our designs form a programmable logic construction kit of commonly used neuronal model elements that supports the building of large and complex architectures of spiking neuron networks for real-time neuromorphic implementation, neurophysiological interfacing, or efficient parameter space investigations.
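The "exact integration" referred to above can be illustrated in software: for a leaky integrator with input held constant over a time step, the update can use the closed-form exponential solution rather than a forward-Euler step. The sketch below shows the two update rules for the passive leak term only; spiking threshold/reset, synapse dynamics and the fixed-point hardware arithmetic of the paper are omitted.

    import numpy as np

    tau, dt, v_rest = 20e-3, 1e-3, 0.0     # membrane time constant, time step, rest level

    def euler_step(v, i_in):
        # Forward-Euler update of dv/dt = (-(v - v_rest) + i_in) / tau
        return v + dt * (-(v - v_rest) + i_in) / tau

    def exact_step(v, i_in):
        # Closed-form solution over one step, assuming i_in is constant during dt
        decay = np.exp(-dt / tau)
        return v * decay + (v_rest + i_in) * (1.0 - decay)

    v_e = v_x = 0.0
    for _ in range(200):                   # constant drive: both converge to v_rest + i_in
        v_e, v_x = euler_step(v_e, 1.0), exact_step(v_x, 1.0)
    print(v_e, v_x)                        # the exact update carries no step-size error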
NASA Technical Reports Server (NTRS)
Lindholm, F. A.
1982-01-01
A simple expression for the capacitance C(V) associated with the transition region of a p-n junction under forward bias is derived by phenomenological reasoning. The treatment of C(V) is based on the conventional Shockley equations, and simpler expressions for C(V) result that are in general accord with previous analytical and numerical results. C(V) consists of two components resulting from changes in majority carrier concentration and from free hole and electron accumulation in the space-charge region. The space-charge region is conceived as the intrinsic region of an n-i-p structure for a space-charge region markedly wider than the extrinsic Debye lengths at its edges. This region is excited in the sense that the forward bias creates hole and electron densities orders of magnitude larger than those in equilibrium. The recent Shirts-Gordon (1979) modeling of the space-charge region using a dielectric response function is contrasted with the more conventional Schottky-Shockley modeling.
Model for determination of mid-gap states in amorphous metal oxides from thin film transistors
NASA Astrophysics Data System (ADS)
Bubel, S.; Chabinyc, M. L.
2013-06-01
The electronic density of states in metal oxide semiconductors like amorphous zinc oxide (a-ZnO) and its ternary and quaternary oxide alloys with indium, gallium, tin, or aluminum is different from that of amorphous silicon or of disordered materials such as pentacene or P3HT. Many ZnO-based semiconductors exhibit a steeply decaying density of acceptor tail states (trap DOS) and a Fermi level (EF) close to the conduction band energy (EC). Considering thin film transistor (TFT) operation in accumulation mode, the quasi-Fermi level for electrons (Eq) moves even closer to EC. Classic analytic TFT simulations use the simplification EC - EF > 'several' kT and cannot reproduce exponential tail states with a characteristic energy smaller than 1/2 kT. We demonstrate an analytic model for tail and deep acceptor states, valid for all amorphous metal oxides, and include the effect of trap-assisted hopping, instead of simpler percolation or mobility edge models, to account for the observed field-dependent mobility.
Feng, Jin-Mei; Sun, Jun; Xin, De-Dong; Wen, Jian-Fan
2012-01-01
5S rRNA is a highly conserved ribosomal component. Eukaryotic 5S rRNA and its associated proteins (5S rRNA system) have become very well understood. Giardia lamblia was thought by some researchers to be the most primitive extant eukaryote, while others considered it a highly evolved parasite. Previous reports have indicated that some aspects of its 5S rRNA system are simpler than those of common eukaryotes. We here explore whether this is true of its entire system, and whether this simplicity is a primitive or parasitic feature. By collecting and confirming pre-existing data and identifying new data, we obtained almost complete datasets of the system for three isolates of G. lamblia, two other parasitic excavates (Trichomonas vaginalis, Trypanosoma cruzi), and one free-living one (Naegleria gruberi). After comprehensively comparing each aspect of the system among these excavates and also with those of archaea and common eukaryotes, we found that all three Giardia isolates harbor the same simplified 5S rRNA system, which is not only much simpler than that of common eukaryotes but also the simplest among those of these excavates, and is surprisingly similar to that of archaea. We also found that, among these excavates, the system in parasitic species is not necessarily simpler than that in free-living species; conversely, the system of the free-living species is even simpler in some respects than those of the parasitic ones. The simplicity of the Giardia 5S rRNA system should be considered a primitive rather than a parasitically-degenerated feature. Therefore, the Giardia 5S rRNA system might be a primitive system that is intermediate between that of archaea and the common eukaryotic model system, and it may reflect the evolutionary history of the eukaryotic 5S rRNA system from the archaeal form. Our results also imply that G. lamblia might be a primitive eukaryote with secondary parasitically-degenerated features.
NASA Technical Reports Server (NTRS)
Haimes, Robert; Follen, Gregory J.
1998-01-01
CAPRI is a CAD-vendor neutral application programming interface designed for the construction of analysis and design systems. By allowing access to the geometry from within all modules (grid generators, solvers and post-processors) such tasks as meshing on the actual surfaces, node enrichment by solvers and defining which mesh faces are boundaries (for the solver and visualization system) become simpler. The overall reliance on file 'standards' is minimized. This 'Geometry Centric' approach makes multi-physics (multi-disciplinary) analysis codes much easier to build. By using the shared (coupled) surface as the foundation, CAPRI provides a single call to interpolate grid-node based data from the surface discretization in one volume to another. Finally, design systems are possible where the results can be brought back into the CAD system (and therefore manufactured) because all geometry construction and modification are performed using the CAD system's geometry kernel.
Multistability in Chua's circuit with two stable node-foci
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, B. C.; Wang, N.; Xu, Q.
2016-04-15
Using only a one-stage op-amp-based negative impedance converter realization, a simplified Chua's diode with a positive outer segment slope is introduced, based on which an improved Chua's circuit realization with a simpler circuit structure is designed. The improved Chua's circuit has an identical mathematical model to the classical Chua's circuit but a completely different nonlinearity, from which multiple attractors including coexisting point attractors, a limit cycle, a double-scroll chaotic attractor, or coexisting chaotic spiral attractors are numerically simulated and experimentally captured. Furthermore, with the dimensionless Chua's equations, the dynamical properties of the Chua's system are studied, including equilibrium and stability, phase portrait, bifurcation diagram, Lyapunov exponent spectrum, and attraction basin. The results indicate that the system has two symmetric stable nonzero node-foci in global adjusting parameter regions and exhibits the unusual and striking dynamical behavior of multiple attractors with multistability.
Persistence in a single species CSTR model with suspended flocs and wall attached biofilms.
Mašić, Alma; Eberl, Hermann J
2012-04-01
We consider a mathematical model for a bacterial population in a continuously stirred tank reactor (CSTR) with wall attachment. This is a modification of the Freter model, in which we model the sessile bacteria as a microbial biofilm. Our analysis indicates that the results of the algebraically simpler original Freter model largely carry over. In a computational simulation study, we find that the vast majority of bacteria in the reactor will eventually be sessile. However, we also find that suspended biomass is relatively more efficient in removing substrate from the reactor than biofilm bacteria.
Benefits of detailed models of muscle activation and mechanics
NASA Technical Reports Server (NTRS)
Lehman, S. L.; Stark, L.
1981-01-01
Recent biophysical and physiological studies identified some of the detailed mechanisms involved in excitation-contraction coupling, muscle contraction, and deactivation. Mathematical models incorporating these mechanisms allow independent estimates of key parameters, direct interplay between basic muscle research and the study of motor control, and realistic model behaviors, some of which are not accessible to previous, simpler, models. The existence of previously unmodeled behaviors has important implications for strategies of motor control and identification of neural signals. New developments in the analysis of differential equations make the more detailed models feasible for simulation in realistic experimental situations.
Target modelling for SAR image simulation
NASA Astrophysics Data System (ADS)
Willis, Chris J.
2014-10-01
This paper examines target models that might be used in simulations of Synthetic Aperture Radar imagery. We examine the basis for scattering phenomena in SAR, and briefly review the Swerling target model set, before considering extensions to this set discussed in the literature. Methods for simulating and extracting parameters for the extended Swerling models are presented. It is shown that in many cases the more elaborate extended Swerling models can be represented, to a high degree of fidelity, by simpler members of the model set. Further, it is shown that it is quite unlikely that these extended models would be selected when fitting models to typical data samples.
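As background, the classical Swerling set that the extended models generalize can be sampled directly: cases 1/2 draw the radar cross-section from an exponential (chi-squared, 2 degrees of freedom) distribution and cases 3/4 from a gamma distribution with shape 2 (chi-squared, 4 degrees of freedom), the paired cases differing only in decorrelation time. The snippet below sketches this standard set; the extended Swerling models of the paper are not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)

    def swerling_rcs(case, n, mean_rcs=1.0):
        """Draw n radar cross-section samples under the classical Swerling models."""
        if case in (1, 2):                          # chi-squared, 2 DOF (exponential)
            return rng.exponential(mean_rcs, n)
        if case in (3, 4):                          # chi-squared, 4 DOF (gamma, shape 2)
            return rng.gamma(2.0, mean_rcs / 2.0, n)
        return np.full(n, mean_rcs)                 # case 0/5: non-fluctuating

    samples = swerling_rcs(1, 10_000)
    print(samples.mean(), samples.var())            # exponential: variance ~ mean**2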
NASA Technical Reports Server (NTRS)
Zimanyi, L.; Lanyi, J. K.
1993-01-01
The bacteriorhodopsin photocycle contains more than five spectrally distinct intermediates, and the complexity of their interconversions has precluded a rigorous solution of the kinetics. A representation of the photocycle of mutated D96N bacteriorhodopsin near neutral pH was given earlier (Varo, G., and J. K. Lanyi. 1991. Biochemistry. 30:5008-5015) as BR -(hv)-> K <=> L <=> M1 -> M2 -> BR. Here we have reduced a set of time-resolved difference spectra for this simpler system to three base spectra, each assumed to consist of an unknown mixture of the pure K, L, and M difference spectra represented by a 3 x 3 matrix of concentration values between 0 and 1. After generating all allowed sets of spectra for K, L, and M (i.e., M1 + M2) at a 1:50 resolution of the matrix elements, invalid solutions were eliminated progressively in a search based on what is expected, empirically and from the theory of polyene excited states, for rhodopsin spectra. Significantly, the average matrix values changed little after the first and simplest of the search criteria that disallowed negative absorptions and more than one maximum for the M intermediate. We conclude from the statistics that during the search the solutions strongly converged into a narrow region of the multidimensional space of the concentration matrix. The data at three temperatures between 5 and 25 degrees C yielded a single set of spectra for K, L, and M; their fits are consistent with the earlier derived photocycle model for the D96N protein.
NASA Astrophysics Data System (ADS)
Nowak, W.; Schöniger, A.; Wöhling, T.; Illman, W. A.
2016-12-01
Model-based decision support requires justifiable models with good predictive capabilities. This, in turn, calls for a fine adjustment between predictive accuracy (small systematic model bias that can be achieved with rather complex models), and predictive precision (small predictive uncertainties that can be achieved with simpler models with fewer parameters). The implied complexity/simplicity trade-off depends on the availability of informative data for calibration. If not available, additional data collection can be planned through optimal experimental design. We present a model justifiability analysis that can compare models of vastly different complexity. It rests on Bayesian model averaging (BMA) to investigate the complexity/performance trade-off dependent on data availability. Then, we disentangle the complexity component from the performance component. We achieve this by replacing actually observed data by realizations of synthetic data predicted by the models. This results in a "model confusion matrix". Based on this matrix, the modeler can identify the maximum model complexity that can be justified by the available (or planned) amount and type of data. As a side product, the matrix quantifies model (dis-)similarity. We apply this analysis to aquifer characterization via hydraulic tomography, comparing four models with a vastly different number of parameters (from a homogeneous model to geostatistical random fields). As a testing scenario, we consider hydraulic tomography data. Using subsets of these data, we determine model justifiability as a function of data set size. The test case shows that geostatistical parameterization requires a substantial amount of hydraulic tomography data to be justified, while a zonation-based model can be justified with more limited data set sizes. The actual model performance (as opposed to model justifiability), however, depends strongly on the quality of prior geological information.
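A minimal sketch of the BMA machinery behind such a justifiability analysis is given below: posterior model weights follow from (log) model evidences, and the "model confusion matrix" simply collects those weights when each model in turn is treated as the generator of synthetic data. The log-evidence numbers are illustrative placeholders, and the evidence computation itself, which the study obtains from the models and data, is not shown.

    import numpy as np

    def bma_weights(log_evidences, prior=None):
        """Posterior model weights from log Bayesian model evidences."""
        log_ev = np.asarray(log_evidences, dtype=float)
        if prior is None:
            prior = np.full(log_ev.size, 1.0 / log_ev.size)
        w = prior * np.exp(log_ev - log_ev.max())    # shift for numerical stability
        return w / w.sum()

    # Row m: synthetic data generated by model m; column k: weight model k receives.
    # A dominant diagonal means each model is identifiable from that amount of data.
    log_ev = np.array([[-10.0, -14.0, -18.0],
                       [-13.0, -11.0, -15.0],
                       [-16.0, -13.0, -12.0]])       # illustrative numbers only
    confusion = np.vstack([bma_weights(row) for row in log_ev])
    print(confusion.round(3))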
Models and numerical methods for the simulation of loss-of-coolant accidents in nuclear reactors
NASA Astrophysics Data System (ADS)
Seguin, Nicolas
2014-05-01
In view of the simulation of the water flows in pressurized water reactors (PWR), many models are available in the literature and their complexity deeply depends on the required accuracy, see for instance [1]. The loss-of-coolant accident (LOCA) may appear when a pipe is broken through. The coolant is composed by light water in its liquid form at very high temperature and pressure (around 300 °C and 155 bar), it then flashes and becomes instantaneously vapor in case of LOCA. A front of liquid/vapor phase transition appears in the pipes and may propagate towards the critical parts of the PWR. It is crucial to propose accurate models for the whole phenomenon, but also sufficiently robust to obtain relevant numerical results. Due to the application we have in mind, a complete description of the two-phase flow (with all the bubbles, droplets, interfaces…) is out of reach and irrelevant. We investigate averaged models, based on the use of void fractions for each phase, which represent the probability of presence of a phase at a given position and at a given time. The most accurate averaged model, based on the so-called Baer-Nunziato model, describes separately each phase by its own density, velocity and pressure. The two phases are coupled by non-conservative terms due to gradients of the void fractions and by source terms for mechanical relaxation, drag force and mass transfer. With appropriate closure laws, it has been proved [2] that this model complies with all the expected physical requirements: positivity of densities and temperatures, maximum principle for the void fraction, conservation of the mixture quantities, decrease of the global entropy… On the basis of this model, it is possible to derive simpler models, which can be used where the flow is still, see [3]. From the numerical point of view, we develop new Finite Volume schemes in [4], which also satisfy the requirements mentioned above. Since they are based on a partial linearization of the physical model, this numerical scheme is also efficient in terms of CPU time. Eventually, simpler models can locally replace the more complex model in order to simplify the overall computation, using some appropriate local error indicators developed in [5], without reducing the accuracy. References 1. Ishii, M., Hibiki, T., Thermo-fluid dynamics of two-phase flow, Springer, New-York, 2006. 2. Gallouët, T. and Hérard, J.-M., Seguin, N., Numerical modeling of two-phase flows using the two-fluid two-pressure approach, Math. Models Methods Appl. Sci., Vol. 14, 2004. 3. Seguin, N., Étude d'équations aux dérivées partielles hyperboliques en mécanique des fluides, Habilitation à diriger des recherches, UPMC-Paris 6, 2011. 4. Coquel, F., Hérard, J-M., Saleh, K., Seguin, N., A Robust Entropy-Satisfying Finite Volume Scheme for the Isentropic Baer-Nunziato Model, ESAIM: Mathematical Modelling and Numerical Analysis, Vol. 48, 2013. 5. Mathis, H., Cancès, C., Godlewski, E., Seguin, N., Dynamic model adaptation for multiscale simulation of hyperbolic systems with relaxation, preprint, 2013.
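For reference, the Baer-Nunziato-type system referred to above has, in one space dimension, the schematic structure below (k = 1, 2 labels the phases, α1 + α2 = 1); the interfacial pressure and velocity p_I, u_I and the relaxation, drag and mass-transfer source terms S are left unspecified here, since their closure is precisely what the cited works address.

\[
\begin{aligned}
&\partial_t \alpha_1 + u_I\,\partial_x \alpha_1 = S_\alpha,\\
&\partial_t(\alpha_k \rho_k) + \partial_x(\alpha_k \rho_k u_k) = S_k^{\mathrm{mass}},\\
&\partial_t(\alpha_k \rho_k u_k) + \partial_x\!\left(\alpha_k \rho_k u_k^2 + \alpha_k p_k\right) - p_I\,\partial_x \alpha_k = S_k^{\mathrm{mom}},\\
&\partial_t(\alpha_k \rho_k E_k) + \partial_x\!\left(\alpha_k u_k (\rho_k E_k + p_k)\right) - p_I u_I\,\partial_x \alpha_k = S_k^{\mathrm{en}}.
\end{aligned}
\]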
Tabassum, Shawana; Dong, Liang; Kumar, Ratnesh
2018-03-05
We present an effective yet simple approach to study the dynamic variations in optical properties (such as the refractive index (RI)) of graphene oxide (GO) when exposed to gases in the visible spectral region, using the thin-film interference method. The dynamic variation in the complex refractive index of GO in response to exposure to a gas is an important factor affecting the performance of GO-based gas sensors. In contrast to conventional ellipsometry, this method alleviates the need to select a dispersion model from among a list of model choices, which is limiting if an applicable model is not known a priori. In addition, the method used is computationally simpler and does not need to employ any functional approximations. A further advantage over ellipsometry is that no bulky optics are required, and as a result it can be easily integrated into the sensing system, thereby allowing the reliable, simple, and dynamic evaluation of the optical performance of any GO-based gas sensor. In addition, the dynamically changing RI values of the GO layer obtained from the method we have employed are corroborated by comparison with the values obtained from ellipsometry.
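The single-layer (Airy) reflectance that underlies such thin-film interference analyses can be written down compactly; the sketch below evaluates it for an absorbing film on a substrate. All numerical values (film thickness, assumed complex index of GO, SiO2 substrate) are placeholders, and the inversion from measured spectra to the dynamic RI, which is the paper's contribution, is not reproduced.

    import numpy as np

    def film_reflectance(n1, n2, n3, d, wavelength, theta1=0.0):
        """Reflectance of a single (possibly absorbing) film of thickness d and
        complex index n2 on a substrate n3, ambient n1, s-polarisation (Airy sum)."""
        c1 = np.cos(theta1)
        c2 = np.sqrt(1 - (n1 * np.sin(theta1) / n2) ** 2)
        c3 = np.sqrt(1 - (n1 * np.sin(theta1) / n3) ** 2)
        r12 = (n1 * c1 - n2 * c2) / (n1 * c1 + n2 * c2)    # Fresnel coefficients
        r23 = (n2 * c2 - n3 * c3) / (n2 * c2 + n3 * c3)
        beta = 2 * np.pi * n2 * d * c2 / wavelength        # complex phase thickness
        r = (r12 + r23 * np.exp(2j * beta)) / (1 + r12 * r23 * np.exp(2j * beta))
        return np.abs(r) ** 2

    # Illustrative values only: a ~50 nm GO film (assumed n ~ 1.9 + 0.1j) on SiO2 in air.
    print(film_reflectance(1.0, 1.9 + 0.1j, 1.46, 50e-9, 550e-9))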
NASA Astrophysics Data System (ADS)
Franklin, Oskar; Han, Wang; Dieckmann, Ulf; Cramer, Wolfgang; Brännström, Åke; Pietsch, Stephan; Rovenskaya, Elena; Prentice, Iain Colin
2017-04-01
Dynamic global vegetation models (DGVMs) are now indispensable for understanding the biosphere and for estimating the capacity of ecosystems to provide services. The models are continuously developed to include an increasing number of processes and to utilize the growing amounts of observed data becoming available. However, while the versatility of the models is increasing as new processes and variables are added, their accuracy suffers from the accumulation of uncertainty, especially in the absence of overarching principles controlling their concerted behaviour. We have initiated a collaborative working group to address this problem based on a 'missing law': adaptation and optimization principles rooted in natural selection. Even though this 'missing law' constrains relationships between traits, and therefore can vastly reduce the number of uncertain parameters in ecosystem models, it has rarely been applied to DGVMs. Our recent research has shown that optimization- and trait-based models of gross primary production can be both much simpler and more accurate than current models based on fixed functional types, and that observed plant carbon allocations and distributions of plant functional traits are predictable with eco-evolutionary models. While there are also many other examples of the usefulness of these and other theoretical principles, it is not always straightforward to make them operational in predictive models. In particular on longer time scales, the representation of functional diversity and the dynamical interactions among individuals and species present a formidable challenge. Here we will present recent ideas on the use of adaptation and optimization principles in vegetation models, including examples of promising developments, but also limitations of the principles and some key challenges.
Supramolecular Based Membrane Sensors
Ganjali, Mohammad Reza; Norouzi, Parviz; Rezapour, Morteza; Faridbod, Farnoush; Pourjavid, Mohammad Reza
2006-01-01
Supramolecular chemistry can be defined as a field of chemistry, which studies the complex multi-molecular species formed from molecular components that have relatively simpler structures. This field has been subject to extensive research over the past four decades. This review discusses classification of supramolecules and their application in design and construction of ion selective sensors.
Agreeing on Validity Arguments
ERIC Educational Resources Information Center
Sireci, Stephen G.
2013-01-01
Kane (this issue) presents a comprehensive review of validity theory and reminds us that the focus of validation is on test score interpretations and use. In reacting to his article, I support the argument-based approach to validity and all of the major points regarding validation made by Dr. Kane. In addition, I call for a simpler, three-step…
Reflections in computer modeling of rooms: Current approaches and possible extensions
NASA Astrophysics Data System (ADS)
Svensson, U. Peter
2005-09-01
Computer modeling of rooms is most commonly done by some calculation technique that is based on decomposing the sound field into separate reflection components. In a first step, a list of possible reflection paths is found, and in a second step, an impulse response is constructed from the list of reflections. Alternatively, the list of reflections is used for generating a simpler echogram, the energy decay as a function of time. A number of geometrical acoustics-based methods can handle specular reflections, diffuse reflections, edge diffraction, curved surfaces, and locally/non-locally reacting surfaces to various degrees. This presentation gives an overview of how reflections are handled in the image source method and variants of the ray-tracing methods, which dominate today's commercial software, as well as in the radiosity method and edge diffraction methods. The use of the recently standardized scattering and diffusion coefficients of surfaces is discussed. Possibilities for combining edge diffraction, surface scattering, and impedance boundaries are demonstrated for an example surface. Finally, the number of reflection paths becomes prohibitively high when all such combinations are included, as demonstrated for a simple concert hall model. [Work supported by the Acoustic Research Centre through NFR, Norway.]
Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis
NASA Astrophysics Data System (ADS)
Tang, Kunkun; Congedo, Pietro; Abgrall, Remi
2014-11-01
Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and analysis of variance, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. In order to address this curse-of-dimensionality problem, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD with PDD coefficients computed by regression. During this adaptive procedure, the model representation by PDD only contains a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
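The link between regression-estimated orthogonal-polynomial coefficients and Sobol' indices mentioned above can be shown on a toy two-variable model: with an orthonormal basis, the output variance is the sum of squared coefficients, and first-order indices sum the squares of the terms that involve only one variable. The sketch below uses ordinary least squares on a small fixed basis; the adaptive sparse selection that is the paper's subject is omitted.

    import numpy as np

    rng = np.random.default_rng(1)

    def leg(k, x):
        """Orthonormal Legendre polynomials on [-1, 1] w.r.t. the uniform measure."""
        return {0: np.ones_like(x), 1: np.sqrt(3) * x,
                2: np.sqrt(5) * (3 * x**2 - 1) / 2}[k]

    def model(x1, x2):                     # toy model standing in for the simulator
        return 1.0 + 2.0 * x1 + 0.5 * x2 + 0.3 * x1 * x2

    n = 200
    x = rng.uniform(-1, 1, size=(n, 2))
    basis = [(0, 0), (1, 0), (0, 1), (2, 0), (0, 2), (1, 1)]   # degrees in (x1, x2)
    A = np.column_stack([leg(i, x[:, 0]) * leg(j, x[:, 1]) for i, j in basis])
    coef, *_ = np.linalg.lstsq(A, model(x[:, 0], x[:, 1]), rcond=None)

    var_total = np.sum(coef[1:] ** 2)      # orthonormal basis: variance = sum of squares
    s1 = sum(c**2 for (i, j), c in zip(basis, coef) if i > 0 and j == 0) / var_total
    s2 = sum(c**2 for (i, j), c in zip(basis, coef) if j > 0 and i == 0) / var_total
    print(round(s1, 3), round(s2, 3))      # first-order Sobol' indices of x1 and x2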
Numerical treatment of free surface problems in ferrohydrodynamics
NASA Astrophysics Data System (ADS)
Lavrova, O.; Matthies, G.; Mitkova, T.; Polevikov, V.; Tobiska, L.
2006-09-01
The numerical treatment of free surface problems in ferrohydrodynamics is considered. Starting from the general model, special attention is paid to field-surface and flow-surface interactions. Since in some situations these feedback interactions can be partly or even fully neglected, simpler models can be derived. The application of such models to the numerical simulation of dissipative systems, rotary shaft seals, equilibrium shapes of ferrofluid drops, and pattern formation in the normal-field instability of ferrofluid layers is given. Our numerical strategy is able to recover solitary surface patterns which were discovered recently in experiments.
Analysis of a New Rocket-Based Combined-Cycle Engine Concept at Low Speed
NASA Technical Reports Server (NTRS)
Yungster, S.; Trefny, C. J.
1999-01-01
An analysis of the Independent Ramjet Stream (IRS) cycle is presented. The IRS cycle is a variation of the conventional ejector-ramjet, and is used at low speed in a rocket-based combined-cycle (RBCC) propulsion system. In this new cycle, complete mixing between the rocket and ramjet streams is not required, and a single rocket chamber can be used without a long mixing duct. Furthermore, this concept allows flexibility in controlling the thermal choke process. The resulting propulsion system is intended to be simpler, more robust, and lighter than an ejector-ramjet. The performance characteristics of the IRS cycle are analyzed for a new single-stage-to-orbit (SSTO) launch vehicle concept, known as "Trailblazer." The study is based on a quasi-one-dimensional model of the rocket and air streams at speeds ranging from lift-off to Mach 3. The numerical formulation is described in detail. A performance comparison between the IRS and ejector-ramjet cycles is also presented.
Segmentation by fusion of histogram-based k-means clusters in different color spaces.
Mignotte, Max
2008-05-01
This paper presents a new, simple, and efficient segmentation approach, based on a fusion procedure which aims at combining several segmentation maps associated with simpler partition models in order to finally get a more reliable and accurate segmentation result. The different label fields to be fused in our application are given by the same and simple (K-means based) clustering technique on an input image expressed in different color spaces. Our fusion strategy aims at combining these segmentation maps with a final clustering procedure using, as input features, the local histograms of the class labels previously estimated and associated with each site for all these initial partitions. This fusion framework remains simple to implement, fast, general enough to be applied to various computer vision applications (e.g., motion detection and segmentation), and has been successfully applied on the Berkeley image database. The experiments reported in this paper illustrate the potential of this approach compared to the state-of-the-art segmentation methods recently proposed in the literature.
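A compressed sketch of the pipeline described above: run the same K-means clustering on the image expressed in several colour spaces, turn each resulting label map into per-pixel local label histograms, and cluster the concatenated histograms to obtain the fused segmentation. The colour spaces, window size and number of clusters below are illustrative choices, and a box-filtered one-hot encoding stands in for the paper's windowed histograms.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.cluster import KMeans
    from skimage import color, data

    img = data.astronaut() / 255.0                          # stand-in RGB image
    spaces = [img, color.rgb2lab(img), color.rgb2hsv(img)]  # several colour spaces
    k = 6

    # Same simple K-means clustering applied in each colour space.
    labels = [KMeans(n_clusters=k, n_init=4, random_state=0)
              .fit_predict(s.reshape(-1, 3)).reshape(img.shape[:2]) for s in spaces]

    # Per-pixel features: local histograms of the class labels (7x7 box average
    # of one-hot label maps), concatenated over the initial partitions.
    feats = []
    for lab in labels:
        onehot = np.stack([(lab == c).astype(float) for c in range(k)], axis=-1)
        feats.append(uniform_filter(onehot, size=(7, 7, 1)))
    X = np.concatenate(feats, axis=-1).reshape(-1, 3 * k)

    # Final clustering of the histogram features gives the fused segmentation.
    fused = KMeans(n_clusters=k, n_init=4, random_state=0).fit_predict(X)
    fused = fused.reshape(img.shape[:2])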
Emerald: an object-based language for distributed programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, N.C.
1987-01-01
Distributed systems have become more common; however, constructing distributed applications remains a very difficult task. Numerous operating systems and programming languages have been proposed that attempt to simplify the programming of distributed applications. Here a programming language called Emerald is presented that simplifies distributed programming by extending the concepts of object-based languages to the distributed environment. Emerald supports a single model of computation: the object. Emerald objects include private entities such as integers and Booleans, as well as shared, distributed entities such as compilers, directories, and entire file systems. Emerald objects may move between machines in the system, but object invocation is location independent. The uniform semantic model used for describing all Emerald objects makes the construction of distributed applications in Emerald much simpler than in systems where the differences in implementation between local and remote entities are visible in the language semantics. Emerald incorporates a type system that deals only with the specification of objects - ignoring differences in implementation. Thus, two different implementations of the same abstraction may be freely mixed.
A rigorous and simpler method of image charges
NASA Astrophysics Data System (ADS)
Ladera, C. L.; Donoso, G.
2016-07-01
The method of image charges relies on the proven uniqueness of the solution of the Laplace differential equation for an electrostatic potential which satisfies some specified boundary conditions. Granted that uniqueness, the method of images is rightly described as nothing but shrewdly guessing which image charges are to be placed, and where, to solve the given electrostatics problem. Here we present an alternative image charges method that is based not on guessing but on rigorous and simpler theoretical grounds, namely the constant potential inside any conductor and the application of powerful geometric symmetries. The aforementioned required uniqueness and, more importantly, guessing are therefore both altogether dispensed with. Our two new theoretical fundaments also allow the image charges method to be introduced in earlier physics courses for engineering and science students, instead of its present and usual introduction in electromagnetic theory courses that demand familiarity with the Laplace differential equation and its boundary conditions.
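The textbook case makes the logic concrete: for a point charge q at height h above a grounded conducting plane, placing an image charge -q at the mirror position reproduces the required constant (zero) potential on the conductor. The short check below verifies that boundary condition numerically; the values are arbitrary.

    import numpy as np

    k = 8.99e9                        # Coulomb constant, 1/(4*pi*eps0), SI units
    q, h = 1e-9, 0.05                 # 1 nC charge, 5 cm above the grounded plane z = 0

    def potential(x, y, z):
        d_real = np.sqrt(x**2 + y**2 + (z - h)**2)    # distance to the real charge
        d_image = np.sqrt(x**2 + y**2 + (z + h)**2)   # distance to the image charge -q
        return k * q / d_real - k * q / d_image

    xs = np.linspace(-0.2, 0.2, 9)
    print(np.max(np.abs(potential(xs, 0.0, 0.0))))    # ~0: the plane stays at zero potential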
NASA Astrophysics Data System (ADS)
Yang, Huizhen; Ma, Liang; Wang, Bin
2018-01-01
In contrast to the conventional adaptive optics (AO) system, the wavefront sensorless (WFSless) AO system does not need a WFS to measure the wavefront aberrations. It is simpler than conventional AO in system architecture and can be applied under complex conditions. The model-based WFSless system has great potential for real-time correction applications because of its fast convergence. The control algorithm of the model-based WFSless system is based on an important theoretical result: the linear relation between the Mean-Square Gradient (MSG) magnitude of the wavefront aberration and the second moment of the masked intensity distribution in the focal plane (also called the Masked Detector Signal, MDS). The linear dependence between MSG and MDS for point source imaging with a CCD sensor is discussed from theory and simulation in this paper. The theoretical relationship between MSG and MDS is given based on our previous work. To verify the linear relation for the point source, we set up an imaging model under atmospheric turbulence. Additionally, the value of MDS will deviate from the theoretical value because of detector noise, and this deviation will affect the correction. The theoretical results under noise are obtained through derivation, and then the linear relation between MSG and MDS under noise is discussed through the imaging model. Results show the linear relation between MSG and MDS under noise is also maintained well, which provides theoretical support for applications of the model-based WFSless system.
NASA Astrophysics Data System (ADS)
Ram Prabhakar, J.; Ragavan, K.
2013-07-01
This article proposes a new power-management-based current control strategy for an integrated wind-solar-hydro system equipped with a battery storage mechanism. In this control technique, an indirect estimation of load current is done through an energy balance model, DC-link voltage control and droop control. This system features a simpler energy management strategy and requires only a few power electronic converters, thereby minimizing the cost of the system. The generation-demand (G-D) management diagram is formulated based on the stochastic weather conditions and demand, which would likely narrow the gap between the two. The features of the management strategy deploying the energy balance model include (1) regulating DC-link voltage within specified tolerances, (2) isolated operation without relying on an external electric power transmission network, (3) indirect current control of the hydro-turbine-driven induction generator and (4) seamless transition between grid-connected and off-grid operation modes. Furthermore, structuring the hybrid system with an appropriate selection of control variables enables power sharing among the energy conversion systems and the battery storage mechanism. By addressing these intricacies, it is viable to regulate the frequency and voltage of the remote network at the load end. The performance of the proposed composite scheme is demonstrated through time-domain simulation in the MATLAB/Simulink environment.
NASA Astrophysics Data System (ADS)
Monfared, Vahid
2016-12-01
An analytically based model is presented for behavioral analysis of the plastic deformations in reinforced materials using circular (trigonometric) functions. The analytical method is proposed to predict the creep behavior of fibrous composites based on basic and constitutive equations under a tensile axial stress. A new insight of the work is the prediction of some important behaviors of the creeping matrix. In the present model, the prediction of these behaviors is simpler than with the available methods. Principal creep strain rate behavior is particularly noteworthy for designing fibrous composites under creep. Analysis of this parameter in reinforced materials is necessary for failure, fracture, and fatigue studies in the creep of short fiber composites. Shuttles, spaceships, turbine blades and discs, and nozzle guide vanes are commonly subjected to creep effects. Also, predicting the creep behavior is significant for designing optoelectronic and photonic advanced composites with optical fibers. As a result, a uniform behavior with constant gradient is seen in the principal creep strain rate, and creep rupture may happen at the fiber end. Finally, good agreement is found when comparing the obtained analytical and FEM results.
ERIC Educational Resources Information Center
Tarhini, Ali; Hassouna, Mohammad; Abbasi, Muhammad Sharif; Orozco, Jorge
2015-01-01
Simpler is better. There are a lot of "needs" in e-Learning, and there's often a limit to the time, talent, and money that can be thrown at them individually. Contemporary pedagogy in technology and engineering disciplines, within the higher education context, champion instructional designs that emphasize peer instruction and rich…
Cook, Heather; Brennan, Kathleen; Azziz, Ricardo
2011-01-01
Objective To determine whether assessing the extent of terminal hair growth in a subset of the traditional 9 areas included in the modified Ferriman-Gallwey (mFG) score can serve as a simpler predictor of total body hirsutism when compared to the full scoring system, and to determine if this new model can accurately distinguish hirsute from non-hirsute women. Design Cross-sectional analysis Setting Two tertiary care academic referral centers. Patients 1951 patients presenting for symptoms of androgen excess. Interventions History and physical examination, including mFG score. Main Outcome Measures Total body hirsutism. Results A regression model using all nine body areas indicated that the combination of upper abdomen, lower abdomen and chin was the best predictor of the total full mFG score. Using this subset of three body areas is accurate in distinguishing true hirsute from non-hirsute women when defining true hirsutism as mFG>7. Conclusion Scoring terminal hair growth only on the chin and abdomen can serve as a simple, yet reliable predictor of total body hirsutism when compared to full body scoring using the traditional mFG system. PMID:21924716
Insulator-based dielectrophoresis of microorganisms: theoretical and experimental results.
Moncada-Hernandez, Hector; Baylon-Cardiel, Javier L; Pérez-González, Victor H; Lapizco-Encinas, Blanca H
2011-09-01
Dielectrophoresis (DEP) is the motion of particles due to polarization effects in nonuniform electric fields. DEP has great potential for handling cells and is a non-destructive phenomenon. It has been utilized for various cell analyses, from viability assessments to concentration enrichment and separation. Insulator-based DEP (iDEP) provides an attractive alternative to conventional electrode-based systems; in iDEP, insulating structures are used to generate nonuniform electric fields, resulting in simpler and more robust devices. Despite the rapid development of iDEP microdevices for applications with cells, the fundamentals behind the dielectrophoretic behavior of cells have not been fully elucidated. Understanding the theory behind iDEP is necessary to continue the progress in this field. This work presents the manipulation and separation of bacterial and yeast cells with iDEP. A computational model in COMSOL Multiphysics was employed to predict the effect of direct current-iDEP on cells suspended in a microchannel containing an array of insulating structures. The model allowed predicting particle behavior, pathlines and the regions where dielectrophoretic immobilization should occur. Experimental work was performed at the same operating conditions employed with the model and results were compared, obtaining good agreement. This is the first report on the mathematical modeling of the dielectrophoretic response of yeast and bacterial cells in a DC-iDEP microdevice. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
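For context, the sign and strength of the DEP response of a homogeneous spherical particle are governed by the real part of the Clausius-Mossotti factor, with the time-averaged force F = 2*pi*eps_m*r^3*Re[K]*grad(|E|^2); at low frequency the factor is dominated by the conductivities, which is the regime relevant to DC insulator-based devices. The property values below are illustrative placeholders, not measured values for the cells in the study.

    import numpy as np

    EPS0 = 8.854e-12

    def clausius_mossotti_real(eps_p, sigma_p, eps_m, sigma_m, freq):
        """Real part of the Clausius-Mossotti factor for a homogeneous sphere;
        its sign distinguishes positive from negative dielectrophoresis."""
        w = 2 * np.pi * freq
        ep = eps_p * EPS0 - 1j * sigma_p / w     # complex permittivity of the particle
        em = eps_m * EPS0 - 1j * sigma_m / w     # complex permittivity of the medium
        return ((ep - em) / (ep + 2 * em)).real

    # Illustrative values: a particle less conductive than the suspending medium
    # shows negative DEP at low frequency (factor close to -0.5).
    print(clausius_mossotti_real(eps_p=60, sigma_p=1e-4, eps_m=78, sigma_m=1e-2, freq=1e3))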
NASA Astrophysics Data System (ADS)
Merkord, C. L.; Liu, Y.; DeVos, M.; Wimberly, M. C.
2015-12-01
Malaria early detection and early warning systems are important tools for public health decision makers in regions where malaria transmission is seasonal and varies from year to year with fluctuations in rainfall and temperature. Here we present a new data-driven dynamic linear model based on the Kalman filter with time-varying coefficients that are used to identify malaria outbreaks as they occur (early detection) and predict the location and timing of future outbreaks (early warning). We fit linear models of malaria incidence with trend and Fourier form seasonal components using three years of weekly malaria case data from 30 districts in the Amhara Region of Ethiopia. We identified past outbreaks by comparing the modeled prediction envelopes with observed case data. Preliminary results demonstrated the potential for improved accuracy and timeliness over commonly-used methods in which thresholds are based on simpler summary statistics of historical data. Other benefits of the dynamic linear modeling approach include robustness to missing data and the ability to fit models with relatively few years of training data. To predict future outbreaks, we started with the early detection model for each district and added a regression component based on satellite-derived environmental predictor variables including precipitation data from the Tropical Rainfall Measuring Mission (TRMM) and land surface temperature (LST) and spectral indices from the Moderate Resolution Imaging Spectroradiometer (MODIS). We included lagged environmental predictors in the regression component of the model, with lags chosen based on cross-correlation of the one-step-ahead forecast errors from the first model. Our results suggest that predictions of future malaria outbreaks can be improved by incorporating lagged environmental predictors.
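A minimal software analogue of the early-detection step can be put together with an off-the-shelf structural time series (state space) model: a local linear trend plus a Fourier-form seasonal component estimated by the Kalman filter, with weeks whose observed counts exceed the one-step-ahead prediction envelope flagged as outbreaks. The snippet uses statsmodels as a stand-in for the paper's dynamic linear model, with simulated data and arbitrary settings (harmonics, envelope level); the environmental-regression component of the early-warning model is omitted.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.structural import UnobservedComponents

    # Placeholder weekly case series for one district (three years, 52-week season).
    rng = np.random.default_rng(0)
    weeks = pd.date_range("2012-01-01", periods=156, freq="W")
    cases = pd.Series(50 + 30 * np.sin(2 * np.pi * np.arange(156) / 52)
                      + rng.normal(0, 5, 156), index=weeks)

    # Local linear trend + Fourier-form seasonality, estimated by the Kalman filter.
    model = UnobservedComponents(cases, level="local linear trend",
                                 freq_seasonal=[{"period": 52, "harmonics": 2}])
    fit = model.fit(disp=False)

    # Flag weeks whose observed counts exceed the one-step-ahead prediction envelope.
    upper = fit.get_prediction().conf_int(alpha=0.05).iloc[:, 1]
    outbreak_weeks = cases.index[cases > upper]
    print(len(outbreak_weeks))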
Cypko, Mario A; Stoehr, Matthaeus; Kozniewski, Marcin; Druzdzel, Marek J; Dietz, Andreas; Berliner, Leonard; Lemke, Heinz U
2017-11-01
Oncological treatment is becoming increasingly complex, and decision making in multidisciplinary teams is therefore becoming the key activity in clinical pathways. The increased complexity is related to the number and variability of possible treatment decisions that may be relevant to a patient. In this paper, we describe validation of a multidisciplinary cancer treatment decision in the clinical domain of head and neck oncology. Probabilistic graphical models and corresponding inference algorithms, in the form of Bayesian networks (BNs), can support complex decision-making processes by providing mathematically reproducible and transparent advice. The quality of BN-based advice depends on the quality of the model. Therefore, it is vital to validate the model before it is applied in practice. For an example BN subnetwork of laryngeal cancer with 303 variables, we evaluated 66 patient records. To validate the model on this dataset, a validation workflow was applied in combination with quantitative and qualitative analyses. In the subsequent analyses, we observed four sources of imprecise predictions: incorrect data, incomplete patient data, outvoting of relevant observations, and an incorrect model. Finally, the four problems were addressed by modifying the data and the model. The presented validation effort scales with model complexity. For simpler models, the validation workflow is the same, although it may require fewer validation methods. Validation success depends on the model having a well-founded knowledge base. The remaining laryngeal cancer model may disclose additional sources of imprecise predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kistler, B.L.
DELSOL3 is a revised and updated version of the DELSOL2 computer program (SAND81-8237) for calculating collector field performance and layout and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design based on energy cost. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. DELSOL3 maintains the advantages of speed and accuracy which are characteristics of DELSOL2.
BRST Quantization of the Proca Model Based on the BFT and the BFV Formalism
NASA Astrophysics Data System (ADS)
Kim, Yong-Wan; Park, Mu-In; Park, Young-Jai; Yoon, Sean J.
The BRST quantization of the Abelian Proca model is performed using the Batalin-Fradkin-Tyutin (BFT) and the Batalin-Fradkin-Vilkovisky (BFV) formalisms. First, the BFT Hamiltonian method is applied to systematically convert the second-class constraint system of the model into an effectively first-class one by introducing new fields. In finding the involutive Hamiltonian we adopt a new approach that is simpler than the usual one. We also show that, owing to the linear character of the constraints, the Dirac brackets of the phase-space variables in the original second-class constraint system are exactly the same as the Poisson brackets of the corresponding modified fields in the extended phase space, in agreement with the Dirac and Faddeev-Jackiw formalisms. Then, following the BFV formalism, we show that the resulting Lagrangian, which preserves BRST symmetry under the standard local gauge-fixing procedure, naturally includes the Stückelberg scalar associated with the explicit gauge-symmetry breaking caused by the mass term. We also analyze a nonstandard nonlocal gauge-fixing procedure.
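For orientation only, the Stückelberg scalar mentioned above enters in the standard textbook form of the gauge-invariant extension of the Proca Lagrangian (this expression is not quoted from the paper):

\mathcal{L}_{\mathrm{St}} = -\tfrac{1}{4}F_{\mu\nu}F^{\mu\nu}
  + \tfrac{m^{2}}{2}\left(A_{\mu}-\tfrac{1}{m}\partial_{\mu}\theta\right)
    \left(A^{\mu}-\tfrac{1}{m}\partial^{\mu}\theta\right),

which is invariant under A_\mu \to A_\mu + \partial_\mu\Lambda, \theta \to \theta + m\Lambda, and reduces to the Proca Lagrangian in the gauge \theta = 0.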
Crustal deformation in great California earthquake cycles
NASA Technical Reports Server (NTRS)
Li, Victor C.; Rice, James R.
1986-01-01
Periodic crustal deformation associated with repeated strike-slip earthquakes is computed for the following model: a depth L (less than or similar to H) extending downward from the Earth's surface at a transform boundary between uniform elastic lithospheric plates of thickness H is locked between earthquakes. It slips an amount consistent with the remote plate velocity V_pl after each lapse of the earthquake cycle time T_cy. Lower portions of the fault zone at the boundary slip continuously so as to maintain constant resistive shear stress. The plates are coupled at their base to a Maxwellian viscoelastic asthenosphere through which steady deep-seated mantle motions, compatible with the plate velocity, are transmitted to the surface plates. The coupling is described approximately through a generalized Elsasser model. It is argued that the model gives a more realistic physical description of tectonic loading, including the time dependence of deep slip and crustal stress build-up throughout the earthquake cycle, than do simpler kinematic models in which loading is represented as imposed uniform dislocation slip on the fault below the locked zone.
Neural Modeling of Fuzzy Controllers for Maximum Power Point Tracking in Photovoltaic Energy Systems
NASA Astrophysics Data System (ADS)
Lopez-Guede, Jose Manuel; Ramos-Hernanz, Josean; Altın, Necmi; Ozdemir, Saban; Kurt, Erol; Azkune, Gorka
2018-06-01
One field in which electronic materials play an important role is energy generation, especially within the scope of photovoltaic energy. This paper deals with one of the most relevant enabling technologies within that scope, i.e., the algorithms for maximum power point tracking implemented in direct current to direct current converters, and their modeling through artificial neural networks (ANNs). More specifically, as a proof of concept, we address the problem of modeling a fuzzy logic controller that has demonstrated its performance in previous works, and in particular the dimensionless duty cycle signal that controls a quadratic boost converter. We achieved a very accurate model: the mean squared error is 3.47 × 10⁻⁶, the maximum error is 16.32 × 10⁻³, and the regression coefficient R is 0.99992, all for the test dataset. This neural implementation has obvious advantages such as higher fault tolerance and a simpler implementation, dispensing with all the complex elements needed to run a fuzzy controller (fuzzifier, defuzzifier, inference engine, and knowledge base) because, ultimately, ANNs are sums and products.
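A minimal sketch of the general idea (illustrative data, network size, and a hypothetical stand-in for the fuzzy controller; the actual converter, controller, and training set in the paper are different): fit a small ANN to input-output samples of an existing duty-cycle controller and use it as a surrogate.

# Sketch: train a small neural network to mimic an existing duty-cycle
# controller from (voltage, current) -> duty cycle samples. The reference
# controller below is a hypothetical stand-in for the fuzzy logic controller.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
v = rng.uniform(10.0, 40.0, 5000)   # PV voltage samples [V], illustrative
i = rng.uniform(0.0, 8.0, 5000)     # PV current samples [A], illustrative

def reference_controller(v, i):
    # Hypothetical duty-cycle law standing in for the fuzzy MPPT controller.
    p = v * i
    return np.clip(0.3 + 0.4 * np.tanh((p - 120.0) / 80.0), 0.0, 1.0)

d = reference_controller(v, i)
X = np.column_stack([v, i])
X_tr, X_te, d_tr, d_te = train_test_split(X, d, test_size=0.2, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
ann.fit(X_tr, d_tr)
print("test MSE:", mean_squared_error(d_te, ann.predict(X_te)))

Once trained, the network replaces the fuzzifier/inference/defuzzifier pipeline with plain sums and products, which is the implementation advantage the abstract points to.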
Structural simplicity as a restraint on the structure of amorphous silicon
NASA Astrophysics Data System (ADS)
Cliffe, Matthew J.; Bartók, Albert P.; Kerber, Rachel N.; Grey, Clare P.; Csányi, Gábor; Goodwin, Andrew L.
2017-06-01
Understanding the structural origins of the properties of amorphous materials remains one of the most important challenges in structural science. In this study, we demonstrate that local "structural simplicity", embodied by the degree to which atomic environments within a material are similar to each other, is a powerful concept for rationalizing the structure of amorphous silicon (a-Si), a canonical amorphous material. We show, by restraining a reverse Monte Carlo refinement against pair distribution function (PDF) data to be simpler, that the simplest model consistent with the PDF is a continuous random network (CRN). A further effect of producing a simple model of a-Si is the generation of a (pseudo)gap in the electronic density of states, suggesting that structural homogeneity drives electronic homogeneity. That this method produces models of a-Si that approach the state-of-the-art without the need for chemically specific restraints (beyond the assumption of homogeneity) suggests that simplicity-based refinement approaches may allow experiment-driven structural modeling techniques to be developed for the wide variety of amorphous semiconductors with strong local order.
Lattice Dynamics of Rare Gas Multilayers on the Ag(111) Surface. Theory and Experiment.
1985-08-01
Phonon spectra were generated from some simpler models, such as a nearest-neighbor central-force model, and also using the Lennard-Jones potential … potentials and one from the Lennard-Jones 6-12 potential, for the rare gases. The value for k0 was defined from the experimentally … derivative divided by the adsorbate mass. It is immediately obvious that the Barker pair-potential value for k0 is about 50% larger than the Lennard-Jones value.
1980-12-31
… surfaces. Reactions involving the Pt(0)-triphenylphosphine complexes Pt(PPh3)n, where n = 2, 3, 4, have been shown to have precise analogues on Pt … ; the triphenylphosphine (PPh3) group is modeled by the simpler but chemically similar phosphine (PH3) group. The appropriate Pt-P bond distances … (typically refractory oxides) are of sufficient magnitude as to suggest significant chemical and electronic modifications of the metal at the metal-support …
ModelPlex: Verified Runtime Validation of Verified Cyber-Physical System Models
2014-07-01
Nondeterministic choice (⟨∪⟩), deterministic assignment (⟨:=⟩), and logical connectives (∧r, etc.) replace current facts with simpler ones or branch the proof … By sequent proof rule ∃r, this existentially quantified variable is instantiated with an arbitrary term θ, which is often a new logical variable … that is implicitly existentially quantified [27]. Weakening (Wr) removes facts that are no longer necessary. The rule ⟨∗⟩ reduces a nondeterministic assignment ⟨x := ∗⟩φ to ∃X ⟨x := X⟩φ …
MODELING MICROBUBBLE DYNAMICS IN BIOMEDICAL APPLICATIONS*
CHAHINE, Georges L.; HSIAO, Chao-Tsung
2012-01-01
Controlling microbubble dynamics to produce desirable biomedical outcomes when and where necessary, and to avoid deleterious effects, requires advanced knowledge, which can be achieved only through a combination of experimental and numerical/analytical techniques. The present communication presents a multi-physics approach to study the dynamics, combining viscous-inviscid effects, liquid and structure dynamics, and multi-bubble interaction. While complex numerical tools are developed and used, the study aims at identifying the key parameters influencing the dynamics, which need to be included in simpler models. PMID:22833696
Building mental models by dissecting physical models.
Srivastava, Anveshna
2016-01-01
When students build physical models from prefabricated components to learn about model systems, there is an implicit trade-off between the physical degrees of freedom in building the model and the intensity of instructor supervision needed. Models that are too flexible, permitting multiple possible constructions, require greater supervision to ensure focused learning; models that are too constrained require less supervision, but can be constructed mechanically, with little to no conceptual engagement. We propose "model-dissection" as an alternative to "model-building," whereby instructors could make efficient use of supervisory resources, while simultaneously promoting focused learning. We report empirical results from a study conducted with biology undergraduate students, where we demonstrate that asking them to "dissect" out specific conceptual structures from an already built 3D physical model leads to a greater improvement in performance than asking them to build the 3D model from simpler components. Using questionnaires to measure understanding both before and after model-based interventions for two cohorts of students, we find that both the "builders" and the "dissectors" improve in the post-test, but it is the latter group who show statistically significant improvement. These results, in addition to the intrinsic time-efficiency of "model dissection," suggest that it could be a valuable pedagogical tool. © 2015 The International Union of Biochemistry and Molecular Biology.
NASA Technical Reports Server (NTRS)
Prahl, J. M.; Hamrock, B. J.
1985-01-01
Two analytical models, one based on simple hydrodynamic lubrication and the other on soft elastohydrodynamic lubrication, are presented and compared to delineate the dominant physical parameters that govern the mechanics of a gaseous film between a small droplet of lubricant and the outer race of a ball bearing. Both models are based on the balance of gravity forces, air drag forces, and air film lubrication forces and incorporate a drag coefficient C_D and a lubrication coefficient C_L to be determined from experiment. The soft elastohydrodynamic lubrication (EHL) model considers the effects of droplet deformation and solid-surface geometry; the simpler hydrodynamic lubrication (HL) model assumes that the droplet remains essentially spherical. The droplet's angular position depended primarily on the ratio of gas inertia to droplet gravity forces and on the gas Reynolds number, and weakly on the ratio of droplet gravity forces to surface tension forces (Bond number) and geometric ratios for the soft EHL model. An experimental configuration in which an oil droplet is supported by an air film on the rotating outer race of a ball bearing within a pressure-controlled chamber produced measurements of droplet angular position as a function of outer-race velocity, droplet size and type, and chamber pressure.
Colorectal Cancer Deaths Attributable to Nonuse of Screening in the United States
Meester, Reinier G.S.; Doubeni, Chyke A.; Lansdorp-Vogelaar, Iris; Goede, S.L.; Levin, Theodore R.; Quinn, Virginia P.; van Ballegooijen, Marjolein; Corley, Douglas A.; Zauber, Ann G.
2015-01-01
Purpose Screening is a major contributor to colorectal cancer (CRC) mortality reductions in the U.S., but is underutilized. We estimated the fraction of CRC deaths attributable to nonuse of screening to demonstrate the potential benefits from targeted interventions. Methods The established MISCAN-colon microsimulation model was used to estimate the population attributable fraction (PAF) in people aged ≥50 years. The model incorporates long-term patterns and effects of screening by age and type of screening test. PAF for 2010 was estimated using currently available data on screening uptake; PAF was also projected assuming constant future screening rates to incorporate lagged effects from past increases in screening uptake. We also computed PAF using Levin's formula to gauge how this simpler approach differs from the model-based approach. Results There were an estimated 51,500 CRC deaths in 2010, about 63% (N∼32,200) of which were attributable to non-screening. The PAF decreases slightly to 58% in 2020. Levin's approach yielded a considerably more conservative PAF of 46% (N∼23,600) for 2010. Conclusions The majority of current U.S. CRC deaths are attributable to non-screening. This underscores the potential benefits of increasing screening uptake in the population. Traditional methods of estimating PAF underestimated screening effects compared with model-based approaches. PMID:25721748
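For reference, Levin's formula mentioned above is the standard population-attributable-fraction expression; the sketch below uses placeholder inputs for illustration, not the study's estimates.

# Levin's formula for the population attributable fraction (PAF):
#   PAF = p_e * (RR - 1) / (1 + p_e * (RR - 1))
# where p_e is the prevalence of the exposure (here, non-use of screening)
# and RR is the relative risk of CRC death among the non-screened.
def levin_paf(prevalence_exposed, relative_risk):
    excess = prevalence_exposed * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# Placeholder inputs for illustration only (not the values used in the study).
print(levin_paf(prevalence_exposed=0.4, relative_risk=3.0))  # ~0.44

Unlike the microsimulation, this formula cannot represent the long lag between screening uptake and mortality benefit, which is one reason the model-based PAF is larger.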
Analysis of Slug Tests in Formations of High Hydraulic Conductivity
Butler, J.J.; Garnett, E.J.; Healey, J.M.
2003-01-01
A new procedure is presented for the analysis of slug tests performed in partially penetrating wells in formations of high hydraulic conductivity. This approach is a simple, spreadsheet-based implementation of existing models that can be used for analysis of tests from confined or unconfined aquifers. Field examples of tests exhibiting oscillatory and nonoscillatory behavior are used to illustrate the procedure and to compare results with estimates obtained using alternative approaches. The procedure is considerably simpler than recently proposed methods for this hydrogeologic setting. Although the simplifications required by the approach can introduce error into hydraulic-conductivity estimates, this additional error becomes negligible when appropriate measures are taken in the field. These measures are summarized in a set of practical field guidelines for slug tests in highly permeable aquifers.
Faraday anomalous dispersion optical tuners
NASA Technical Reports Server (NTRS)
Wanninger, P.; Valdez, E. C.; Shay, T. M.
1992-01-01
Common methods for frequency stabilizing diode laser systems employ gratings, etalons, optical electric double feedback, atomic resonance, and a Faraday cell with low magnetic field. Our method, the Faraday Anomalous Dispersion Optical Transmitter (FADOT) laser locking, is much simpler than other schemes. The FADOT uses commercial laser diodes with no antireflection coatings, an atomic Faraday cell with a single polarizer, and an output coupler to form a compound cavity. This method is vibration insensitive, thermal expansion effects are minimal, and the system has a frequency pull-in range of 443.2 GHz (9 Å). Our technique is based on the Faraday anomalous dispersion optical filter. This method has potential applications in optical communication, remote sensing, and pumping laser excited optical filters. We present the first theoretical model for the FADOT and compare the calculations to our experimental results.
The induced electric field due to a current transient
NASA Astrophysics Data System (ADS)
Beck, Y.; Braunstein, A.; Frankental, S.
2007-05-01
Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel approach to the calculation of the electric fields due to lightning strikes, using a relativistic treatment, is presented. This approach is based on a known current wave-pair model representing the lightning current wave. The model describes the lightning current wave either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The electric fields computed are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using special relativity theory and relativistic considerations. The proposed approach is based on simple expressions (by applying Coulomb's law) compared with the much more complicated partial differential equations based on Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is obtained by using special relativity theory to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a modified, simpler method using sub-models is presented. The sub-models are filaments of either static charges or charges moving at constant velocity only. Combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the problem discussed.
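For orientation, the 'velocity field' of a charge moving at constant velocity, on which sub-models of this kind are built, has the standard textbook form (not an expression quoted from the paper):

\mathbf{E}(\mathbf{R}) = \frac{q}{4\pi\varepsilon_{0}}\,
  \frac{1-\beta^{2}}{\left(1-\beta^{2}\sin^{2}\theta\right)^{3/2}}\,
  \frac{\hat{\mathbf{R}}}{R^{2}},

where \beta = v/c, \mathbf{R} is the vector from the present position of the charge to the field point, and \theta is the angle between the velocity and \mathbf{R}; the 'acceleration field' adds the radiation term proportional to the charge's acceleration.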
An approximate spin design criterion for monoplanes, 1 May 1939
NASA Technical Reports Server (NTRS)
Seidman, O.; Donlan, C. J.
1976-01-01
An approximate empirical criterion, based on the projected side area and the mass distribution of the airplane, was formulated. The British results were analyzed and applied to American designs. A simpler design criterion, based solely on the type and the dimensions of the tail, was developed; it is useful in a rapid estimation of whether a new design is likely to comply with the minimum requirements for safety in spinning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saurav, Kumar; Chandan, Vikas
District-heating-and-cooling (DHC) systems are a proven energy solution that has been deployed for many years in a growing number of urban areas worldwide. They comprise a variety of technologies that seek to develop synergies between the production and supply of heat, cooling, domestic hot water, and electricity. Although the benefits of DHC systems are significant and have been widely acclaimed, the full potential of modern DHC systems remains largely untapped. There are several opportunities for the development of energy-efficient DHC systems, which will enable the effective exploitation of alternative renewable resources, waste heat recovery, etc., in order to increase the overall efficiency and facilitate the transition towards the next generation of DHC systems. This motivates the need for modelling these complex systems. Large-scale modelling of DHC networks is challenging, as they have several interacting components, such as buildings, pipes, valves, and heating sources. In this paper, we focus on building modelling. In particular, we present a gray-box methodology for thermal modelling of buildings. Gray-box modelling is a hybrid of data-driven and physics-based models in which the coefficients of the equations from physics-based models are learned from data. This approach allows us to capture the dynamics of the buildings more effectively than a pure data-driven approach. Additionally, it results in simpler models than pure physics-based models. We first develop the individual components of the building, such as temperature evolution and flow control. These individual models are then integrated into the complete gray-box model of the building. The model is validated using data collected from one of the buildings at Luleå, a city on the coast of northern Sweden.
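A minimal sketch of the gray-box idea, using a single-resistance, single-capacitance (1R1C) thermal model fitted to synthetic data; the building model in the paper has more components and states, and the parameter values below are illustrative only.

# Sketch: gray-box 1R1C building model, dT_in/dt = (T_out - T_in)/(R*C) + Q/C.
# The ODE is advanced with its exact solution for piecewise-constant inputs,
# and R, C are fitted to (synthetic) indoor-temperature measurements.
import numpy as np
from scipy.optimize import least_squares

dt = 3600.0                    # time step [s]
n = 24 * 14                    # two weeks of hourly data
rng = np.random.default_rng(2)
t_out = 0.0 + 5.0 * np.sin(2 * np.pi * np.arange(n) / 24)   # outdoor temp [C]
q_heat = np.where(np.arange(n) % 24 < 8, 5000.0, 2000.0)    # heat input [W]

def simulate(R, C, t0=20.0):
    t_in = np.empty(n)
    t_in[0] = t0
    a = np.exp(-dt / (R * C))
    for k in range(n - 1):
        t_ss = t_out[k] + R * q_heat[k]        # steady state for constant inputs
        t_in[k + 1] = t_ss + (t_in[k] - t_ss) * a
    return t_in

# Synthetic "measurements" from known parameters plus sensor noise.
t_meas = simulate(R=0.002, C=2.0e7) + rng.normal(0.0, 0.1, n)

def residuals(params):
    R, C = params
    return simulate(R, C) - t_meas

fit = least_squares(residuals, x0=[0.005, 1.0e7],
                    bounds=([1e-4, 1e6], [1e-2, 1e9]))
print("fitted R, C:", fit.x)

The physics fixes the form of the equations; the data fix the coefficients, which is the hybrid character of gray-box modelling described above.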
Quantitative Diagnosis of Continuous-Valued, Steady-State Systems
NASA Technical Reports Server (NTRS)
Rouquette, N.
1995-01-01
Quantitative diagnosis involves numerically estimating the values of unobservable parameters that best explain the observed parameter values. We consider quantitative diagnosis for continuous, lumped-parameter, steady-state physical systems because such models are easy to construct and the diagnosis problem is considerably simpler than that for corresponding dynamic models. To further tackle the difficulties of numerically inverting a simulation model to compute a diagnosis, we propose to decompose a physical system model in terms of feedback loops. This decomposition reduces the dimension of the problem and consequently decreases the diagnosis search space. We illustrate this approach on a model of a thermal control system studied in earlier research.
Climate Model Ensemble Methodology: Rationale and Challenges
NASA Astrophysics Data System (ADS)
Vezer, M. A.; Myrvold, W.
2012-12-01
A tractable model of the Earth's atmosphere, or, indeed, any large, complex system, is inevitably unrealistic in a variety of ways. This will have an effect on the model's output. Nonetheless, we want to be able to rely on certain features of the model's output in studies aiming to detect, attribute, and project climate change. For this, we need assurance that these features reflect the target system, and are not artifacts of the unrealistic assumptions that go into the model. One technique for overcoming these limitations is to study ensembles of models which employ different simplifying assumptions and different methods of modelling. One then either takes as reliable certain outputs on which models in the ensemble agree, or takes the average of these outputs as the best estimate. Since the Intergovernmental Panel on Climate Change's Fourth Assessment Report (IPCC AR4), modellers have aimed to improve ensemble analysis by developing techniques to account for dependencies among models, and to ascribe unequal weights to models according to their performance. The goal of this paper is to present as clearly and cogently as possible the rationale for climate model ensemble methodology, the motivation of modellers to account for model dependencies, and their efforts to ascribe unequal weights to models. The method of our analysis is as follows. We will consider a simpler, well-understood case of taking the mean of a number of measurements of some quantity. Contrary to what is sometimes said, it is not a requirement of this practice that the errors of the component measurements be independent; one must, however, compensate for any lack of independence. We will also extend the usual accounts to include cases of unknown systematic error. We draw parallels between this simpler illustration and the more complex example of climate model ensembles, detailing how ensembles can provide more useful information than any of their constituent models. This account emphasizes the epistemic importance of considering degrees of model dependence, and the practice of ascribing unequal weights to models of unequal skill.
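The measurement analogy above can be made concrete: with correlated errors, the minimum-variance (best linear unbiased) combination weights the measurements by the inverse error covariance. A small numerical illustration with made-up values, not climate-model outputs:

# Best linear unbiased combination of correlated measurements:
#   w = C^-1 1 / (1^T C^-1 1),  estimate = w . y,  var = 1 / (1^T C^-1 1)
import numpy as np

y = np.array([15.1, 15.4, 15.3])          # three measurements of the same quantity
C = np.array([[0.04, 0.02, 0.00],         # error covariance: first two correlated
              [0.02, 0.04, 0.00],
              [0.00, 0.00, 0.09]])

ones = np.ones(len(y))
Cinv_ones = np.linalg.solve(C, ones)
w = Cinv_ones / (ones @ Cinv_ones)
estimate = w @ y
variance = 1.0 / (ones @ Cinv_ones)
print("weights:", w, "estimate:", estimate, "variance:", variance)

This mirrors the point made above: independence of the component errors is not required, provided the dependence is compensated for in the weighting.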
Hot cheese: a processed Swiss cheese model.
Li, Y; Thimbleby, H
2014-01-01
James Reason's classic Swiss cheese model is a vivid and memorable way to visualise how patient harm happens only when all system defences fail. Although Reason's model has been criticised for its simplicity and static portrait of complex systems, its use has been growing, largely because of the direct clarity of its simple and memorable metaphor. A more general, more flexible and equally memorable model of accident causation in complex systems is needed. We present the hot cheese model, which is more realistic, particularly in portraying defence layers as dynamic and active - more defences may cause more hazards. The hot cheese model, being more flexible, encourages deeper discussion of incidents than the simpler Swiss cheese model permits.
ERIC Educational Resources Information Center
Burgess, Carol A.
Sixth grade students can use cinquain poems to explore language, learn grammar, and write creatively. Before learning about cinquains, students should be introduced to simpler poetic forms. To introduce cinquains, the teacher writes a simple example on the board and has the students informally figure out the parts of speech and grammatical…
Toward a Standardized ODH Analysis Technique
Degraff, Brian D.
2016-12-01
Standardization of oxygen deficiency hazard (ODH) analysis and mitigation policy thus represents an opportunity for the cryogenic community. There are several benefits for industry and government facilities in developing an applicable unified standard for ODH. The number of reviewers would increase, and reviewing projects across different facilities would be simpler. It would also present an opportunity for the community to broaden the development of expertise in modeling complicated flow geometries.
Introduction to biological complexity as a missing link in drug discovery.
Gintant, Gary A; George, Christopher H
2018-06-06
Despite a burgeoning knowledge of the intricacies and mechanisms responsible for human disease, technological advances in medicinal chemistry, and more efficient assays used for drug screening, it remains difficult to discover novel and effective pharmacologic therapies. Areas covered: By reference to the primary literature and concepts emerging from academic and industrial drug screening landscapes, the authors propose that this disconnect arises from the inability to scale and integrate responses from simpler model systems to outcomes from more complex and human-based biological systems. Expert opinion: Further collaborative efforts combining target-based and phenotypic-based screening along with systems-based pharmacology and informatics will be necessary to harness the technological breakthroughs of today to derive the novel drug candidates of tomorrow. New questions must be asked of enabling technologies-while recognizing inherent limitations-in a way that moves drug development forward. Attempts to integrate mechanistic and observational information acquired across multiple scales frequently expose the gap between our knowledge and our understanding as the level of complexity increases. We hope that the thoughts and actionable items highlighted will help to inform the directed evolution of the drug discovery process.
Surrogate oracles, generalized dependency and simpler models
NASA Technical Reports Server (NTRS)
Wilson, Larry
1990-01-01
Software reliability models require the sequence of interfailure times from the debugging process as input. It was previously illustrated that using data from replicated debugging could greatly improve reliability predictions. However, inexpensive replication of the debugging process requires the existence of a cheap, fast error detector. Laboratory experiments can be designed around a gold version which is used as an oracle or around an n-version error detector. Unfortunately, software developers cannot be expected to have an oracle or to bear the expense of n versions. A generic technique is being investigated for approximating replicated data by using the partially debugged software as a difference detector. It is believed that the failure rate of each fault has significant dependence on the presence or absence of other faults. Thus, in order to discuss a failure rate for a known fault, the presence or absence of each of the other known faults needs to be specified. Simpler models that use shorter input sequences without sacrificing accuracy are also of interest; in fact, a possible gain in performance is conjectured. To investigate these propositions, NASA computers running LIC (RTI) versions are used to generate data. These data will be used to label the debugging graph associated with each version. These labeled graphs will be used to test the utility of a surrogate oracle, to analyze the dependent nature of fault failure rates, and to explore the feasibility of reliability models which use the data of only the most recent failures.
Towards a universal trait-based model of terrestrial primary production
NASA Astrophysics Data System (ADS)
Wang, H.; Prentice, I. C.; Cornwell, W.; Keenan, T. F.; Davis, T.; Wright, I. J.; Evans, B. J.; Peng, C.
2015-12-01
Systematic variations of plant traits along environmental gradients have been observed for decades. For example, the tendencies of leaf nitrogen per unit area to increase, and of the leaf-internal to ambient CO2 concentration ratio (ci:ca) to decrease, with aridity are well established. But ecosystem models typically represent trait variation based purely on empirical relationships, or on untested conjectures, or not at all. Neglect of quantitative trait variation and its adaptive significance probably contributes to the persistent large uncertainties among models in predicting the response of the carbon cycle to environmental change. However, advances in ecological theory and the accumulation of extensive data sets during recent decades suggest that theoretically based and testable predictions of trait variation could be achieved. Based on well-established ecophysiological principles and consideration of the adaptive significance of traits, we propose universal relationships between photosynthetic traits (ci:ca, carbon fixation capacity, and the ratio of electron transport capacity to carbon fixation capacity) and primary environmental variables, which capture observed trait variations both within and between plant functional types. Moreover, incorporating these traits into the standard model of C3 photosynthesis allows gross primary production (GPP) of natural vegetation to be predicted by a single equation with just two free parameters, which can be estimated from independent observations. The resulting model performs as well as much more complex models. Our results provide a fresh perspective with potentially high reward: the possibility of a deeper understanding of the relationships between plant traits and environment, simpler and more robust and reliable representation of land processes in Earth system models, and thus improved predictability for biosphere-atmosphere interactions and climate feedbacks.
Balancing the stochastic description of uncertainties as a function of hydrologic model complexity
NASA Astrophysics Data System (ADS)
Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.
2016-12-01
Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. The latter can include output uncertainty only, if the model is computationally-expensive, or, with simpler models, it can separately account for different sources of errors like in the inputs and the structure of the model.
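A minimal sketch of describing output errors as a first-order autoregressive process, using a synthetic residual series and illustrative parameters rather than the watershed applications discussed above:

# Sketch: residuals between observed and simulated discharge treated as an
# AR(1) process, e_t = phi * e_{t-1} + eta_t, eta_t ~ N(0, sigma^2).
# phi and sigma are estimated by maximizing the conditional Gaussian likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 500
phi_true, sigma_true = 0.7, 0.4
e = np.zeros(n)
for t in range(1, n):                      # synthetic residual series
    e[t] = phi_true * e[t - 1] + rng.normal(0.0, sigma_true)

def neg_log_lik(params):
    phi, log_sigma = params
    sigma = np.exp(log_sigma)
    innov = e[1:] - phi * e[:-1]
    return 0.5 * np.sum(innov**2 / sigma**2 + 2 * log_sigma + np.log(2 * np.pi))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
phi_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print("phi, sigma:", phi_hat, sigma_hat)

# One-step-ahead 95% prediction interval for the next residual.
next_mean = phi_hat * e[-1]
print("interval:", next_mean - 1.96 * sigma_hat, next_mean + 1.96 * sigma_hat)

In the full Bayesian setting described above, parameters like phi and sigma would be inferred jointly with the hydrological model parameters, and the resulting intervals checked for reliability (nominal coverage) and precision (width).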
Children's selective trust decisions: rational competence and limiting performance factors.
Hermes, Jonas; Behne, Tanya; Bich, Anna Elisa; Thielert, Christa; Rakoczy, Hannes
2018-03-01
Recent research has amply documented that even preschoolers learn selectively from others, preferring, for example, reliable over unreliable and competent over incompetent models. It remains unclear, however, what the cognitive foundations of such selective learning are, in particular, whether it builds on rational inferences or on less sophisticated processes. The current study, therefore, was designed to test directly the possibility that children are in principle capable of selective learning based on rational inference, yet revert to simpler strategies such as global impression formation under certain circumstances. Preschoolers (N = 75) were shown pairs of models that either differed in their degree of competence within one domain (strong vs. weak or knowledgeable vs. ignorant) or were both highly competent, but in different domains (e.g., strong vs. knowledgeable model). In the test trials, children chose between the models for strength- or knowledge-related tasks. The results suggest that, in fact, children are capable of rational inference-based selective trust: when both models were highly competent, children preferred the model with the competence most predictive and relevant for a given task. However, when choosing between two models that differed in competence on one dimension, children reverted to halo-style wide generalizations and preferred the competent models for both relevant and irrelevant tasks. These findings suggest that the rational strategies for selective learning, that children master in principle, can get masked by various performance factors. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Joiner, D. A.; Stevenson, D. E.; Panoff, R. M.
2000-12-01
The Computational Science Reference Desk is an online tool designed to provide educators in math, physics, astronomy, biology, chemistry, and engineering with information on how to use computational science to enhance inquiry-based learning in the undergraduate and pre-college classroom. The Reference Desk features a showcase of original content exploration activities, including lesson plans and background materials; a catalog of websites which contain models, lesson plans, software, and instructional resources; and a forum to allow educators to communicate their ideas. Many of the recent advances in astronomy rely on the use of computer simulation, and tools are being developed by CSERD to allow students to experiment with some of the models that have guided scientific discovery. One of these models allows students to study how scientists use spectral information to determine the makeup of the interstellar medium by modeling the interstellar extinction curve using spherical grains of silicate, amorphous carbon, or graphite. Students can directly compare their model to the average interstellar extinction curve, and experiment with how small changes in their model alter the shape of the interstellar extinction curve. A simpler model allows students to visualize spatial relationships between the Earth, Moon, and Sun to understand the cause of the phases of the moon. A report on the usefulness of these models in two classes, the Computational Astrophysics workshop at The Shodor Education Foundation and the Conceptual Astronomy class at the University of North Carolina at Greensboro, will be presented.
Design Optimization Tool for Synthetic Jet Actuators Using Lumped Element Modeling
NASA Technical Reports Server (NTRS)
Gallas, Quentin; Sheplak, Mark; Cattafesta, Louis N., III; Gorton, Susan A. (Technical Monitor)
2005-01-01
The performance specifications of any actuator are quantified in terms of an exhaustive list of parameters such as bandwidth, output control authority, etc. Flow-control applications benefit from a known actuator frequency response function that relates the input voltage to the output property of interest (e.g., maximum velocity, volumetric flow rate, momentum flux, etc.). Clearly, the required performance metrics are application specific, and methods are needed to achieve the optimal design of these devices. Design and optimization studies have been conducted for piezoelectric cantilever-type flow control actuators, but the modeling issues are simpler compared to synthetic jets. Here, lumped element modeling (LEM) is combined with equivalent circuit representations to estimate the nonlinear dynamic response of a synthetic jet as a function of device dimensions, material properties, and external flow conditions. These models provide reasonable agreement between predicted and measured frequency response functions and thus are suitable for use as design tools. In this work, we have developed a Matlab-based design optimization tool for piezoelectric synthetic jet actuators based on the lumped element models mentioned above. Significant improvements were achieved by optimizing the piezoceramic diaphragm dimensions. Synthetic-jet actuators were fabricated and benchtop tested to fully document their behavior and validate a companion optimization effort. It is hoped that the tool developed from this investigation will assist in the design and deployment of these actuators.
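A minimal sketch of the equivalent-circuit idea, reduced to the cavity/orifice branch of a synthetic jet with illustrative acoustic element values; the full lumped element model in the work above also includes the piezoelectric diaphragm and other elements.

# Sketch: frequency response of the cavity/orifice branch of a synthetic jet,
# modeled as an acoustic compliance (cavity) in parallel with an orifice branch
# (acoustic mass + resistance). Element values below are illustrative only.
import numpy as np

rho = 1.2                 # air density [kg/m^3]
c = 343.0                 # speed of sound [m/s]
V = 2.0e-6                # cavity volume [m^3]
a = 0.5e-3                # orifice radius [m]
L = 1.0e-3                # orifice length [m]
S = np.pi * a**2

C_cav = V / (rho * c**2)                  # acoustic compliance of the cavity
M_orf = rho * (L + 1.7 * a) / S           # acoustic mass of orifice (end-corrected)
R_orf = 5.0e6                             # acoustic resistance [Pa*s/m^3], illustrative

f = np.linspace(100.0, 5000.0, 500)
w = 2 * np.pi * f
Z_orf = R_orf + 1j * w * M_orf            # orifice branch impedance
Z_cav = 1.0 / (1j * w * C_cav)            # cavity impedance
# Fraction of the diaphragm volume velocity that exits through the orifice
# (current divider between the cavity and orifice branches).
H = Z_cav / (Z_cav + Z_orf)

f_res = f[np.argmax(np.abs(H))]
print("predicted Helmholtz resonance near %.0f Hz" % f_res)

Sweeping geometric and material parameters through such a model is what makes lumped element modeling attractive as a fast design-optimization tool, as described above.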
1986-12-31
… "synthesize synchronization skeletons," Science of Computer Programming 2, 1982, pp. 241-266; [Gel85] Gelernter, David, "Generative communication in …" … effective computation based on given primitives. An architecture is an abstract object-type whose instances are computing systems. By a parallel computing … explaining the language primitives on this basis. We explain how such a basis can be "simpler" than a general-purpose manual-programming language such as …
Kreienkamp, Amelia B.; Liu, Lucy Y.; Minkara, Mona S.; Knepley, Matthew G.; Bardhan, Jaydeep P.; Radhakrishnan, Mala L.
2013-01-01
We analyze and suggest improvements to a recently developed approximate continuum-electrostatic model for proteins. The model, called BIBEE/I (boundary-integral based electrostatics estimation with interpolation), was able to estimate electrostatic solvation free energies to within a mean unsigned error of 4% on a test set of more than 600 proteins—a significant improvement over previous BIBEE models. In this work, we tested the BIBEE/I model for its capability to predict residue-by-residue interactions in protein–protein binding, using the widely studied model system of trypsin and bovine pancreatic trypsin inhibitor (BPTI). Finding that the BIBEE/I model performs surprisingly less well in this task than simpler BIBEE models, we seek to explain this behavior in terms of the models’ differing spectral approximations of the exact boundary-integral operator. Calculations of analytically solvable systems (spheres and tri-axial ellipsoids) suggest two possibilities for improvement. The first is a modified BIBEE/I approach that captures the asymptotic eigenvalue limit correctly, and the second involves the dipole and quadrupole modes for ellipsoidal approximations of protein geometries. Our analysis suggests that fast, rigorous approximate models derived from reduced-basis approximation of boundary-integral equations might reach unprecedented accuracy, if the dipole and quadrupole modes can be captured quickly for general shapes. PMID:24466561
Eberts, S.M.; Böhlke, J.K.; Kauffman, L.J.; Jurgens, B.C.
2012-01-01
Environmental age tracers have been used in various ways to help assess vulnerability of drinking-water production wells to contamination. The most appropriate approach will depend on the information that is available and that which is desired. To understand how the well will respond to changing nonpoint-source contaminant inputs at the water table, some representation of the distribution of groundwater ages in the well is needed. Such information for production wells is sparse and difficult to obtain, especially in areas lacking detailed field studies. In this study, age distributions derived from detailed groundwater-flow models with advective particle tracking were compared with those generated from lumped-parameter models to examine conditions in which estimates from simpler, less resource-intensive lumped-parameter models could be used in place of estimates from particle-tracking models. In each of four contrasting hydrogeologic settings in the USA, particle-tracking and lumped-parameter models yielded roughly similar age distributions and largely indistinguishable contaminant trends when based on similar conceptual models and calibrated to similar tracer data. Although model calibrations and predictions were variably affected by tracer limitations and conceptual ambiguities, results illustrated the importance of full age distributions, rather than apparent tracer ages or model mean ages, for trend analysis and forecasting.
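A minimal sketch of the lumped-parameter idea: an exponential age distribution (one common lumped-parameter choice) convolved with a hypothetical input history at the water table; the mean age and input curve below are illustrative, not values from the study.

# Sketch: predict the concentration trend at a well by convolving an input
# concentration history with an exponential age distribution,
#   g(a) = (1/tau) * exp(-a/tau),  tau = mean groundwater age.
import numpy as np

tau = 25.0                                   # mean age [yr], illustrative
years = np.arange(1950, 2021)
# Hypothetical nonpoint-source input at the water table: rise, then leveling off.
c_in = np.interp(years, [1950, 1980, 2000, 2020], [0.0, 8.0, 10.0, 10.0])

ages = np.arange(0, 200)                     # age bins [yr]
g = np.exp(-ages / tau) / tau
g = g / g.sum()                              # normalize the discrete distribution

# c_well(t) = sum over ages a of g(a) * c_in(t - a), zero before the record starts.
c_well = np.array([
    sum(g[a] * (c_in[k - a] if k - a >= 0 else 0.0) for a in range(len(ages)))
    for k in range(len(years))
])
print(dict(zip(years[::10].tolist(), np.round(c_well[::10], 2))))

A particle-tracking model would replace g(a) with the age distribution derived from the flow field; the comparison in the study amounts to asking how much that extra detail changes the predicted trend at the well.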
Constitutive modelling of lubricants in concentrated contacts at high slide to roll ratios
NASA Technical Reports Server (NTRS)
Tevaarwerk, J. L.
1985-01-01
A constitutive lubricant friction model for rolling/sliding concentrated contacts such as gears and cams was developed, based upon the Johnson and Tevaarwerk fluid rheology model developed earlier. The friction model reported herein differs from the earlier rheological models in that very large slide to roll ratios can now be accommodated by modifying the thermal response of the model. Also, the elastic response of the fluid has been omitted from the model, thereby making it much simpler for use in high slide to roll contacts. The effects of this simplification on the predicted friction losses are very minimal (less than 1%). In essence, then, the lubricant friction model developed for high slide to roll ratios treats the fluid in the concentrated contact as consisting of a nonlinear viscous element that is pressure-, temperature-, and strain-rate-dependent in its shear response. The fluid rheological constants required for the prediction of the friction losses at different contact conditions are obtained by traction measurements on several of the currently used gear lubricants. An example calculation, using this model and the fluid parameters obtained from the experiments, shows that it correctly predicts the trends and magnitude of gear mesh losses measured elsewhere for the same fluids tested here.
Spatially explicit modeling in ecology: A review
DeAngelis, Donald L.; Yurek, Simeon
2017-01-01
The use of spatially explicit models (SEMs) in ecology has grown enormously in the past two decades. One major advancement has been that fine-scale details of landscapes, and of spatially dependent biological processes, such as dispersal and invasion, can now be simulated with great precision, due to improvements in computer technology. Many areas of modeling have shifted toward a focus on capturing these fine-scale details, to improve mechanistic understanding of ecosystems. However, spatially implicit models (SIMs) have played a dominant role in ecology, and arguments have been made that SIMs, which account for the effects of space without specifying spatial positions, have an advantage of being simpler and more broadly applicable, perhaps contributing more to understanding. We address this debate by comparing SEMs and SIMs in examples from the past few decades of modeling research. We argue that, although SIMs have been the dominant approach in the incorporation of space in theoretical ecology, SEMs have unique advantages for addressing pragmatic questions concerning species populations or communities in specific places, because local conditions, such as spatial heterogeneities, organism behaviors, and other contingencies, produce dynamics and patterns that usually cannot be incorporated into simpler SIMs. SEMs are also able to describe mechanisms at the local scale that can create amplifying positive feedbacks at that scale, creating emergent patterns at larger scales, and therefore are important to basic ecological theory. We review the use of SEMs at the level of populations, interacting populations, food webs, and ecosystems and argue that SEMs are not only essential in pragmatic issues, but must play a role in the understanding of causal relationships on landscapes.
Cho, Sun-Joo; Goodwin, Amanda P
2016-04-01
When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structures, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal designs, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide a deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
NASA Technical Reports Server (NTRS)
Senocak, I.; Ackerman, A. S.; Kirkpatrick, M. P.; Stevens, D. E.; Mansour, N. N.
2004-01-01
Large-eddy simulation (LES) is a widely used technique in atmospheric modeling research. In LES, large, unsteady, three-dimensional structures are resolved, and small structures that are not resolved on the computational grid are modeled. A filtering operation is applied to distinguish between resolved and unresolved scales. We present two near-surface models that have found use in atmospheric modeling. We also suggest a simpler eddy viscosity model that adopts Prandtl's mixing length model (Prandtl 1925) in the vicinity of the surface and blends with the dynamic Smagorinsky model (Germano et al., 1991) away from the surface. We evaluate the performance of these surface models by simulating a neutrally stratified atmospheric boundary layer.
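A minimal sketch of the blending idea described above, with an illustrative strain-rate profile and a simple exponential blending weight chosen purely for illustration; the paper blends toward the dynamic (not constant-coefficient) Smagorinsky model, which is not reproduced here.

# Sketch: eddy viscosity following Prandtl's mixing length near the surface
# and blending toward a Smagorinsky-type value away from it.
#   nu_near = (kappa*z)^2 * |S|,  nu_far = (Cs*Delta)^2 * |S|
import numpy as np

kappa, Cs, Delta = 0.4, 0.17, 20.0        # von Karman const., Smagorinsky const., grid scale [m]
z = np.linspace(1.0, 200.0, 50)           # height above the surface [m]
S = 0.05 * np.exp(-z / 100.0)             # illustrative strain-rate magnitude [1/s]

nu_near = (kappa * z) ** 2 * S
nu_far = (Cs * Delta) ** 2 * S
# Smooth blending weight: ~1 near the surface, ~0 a few Delta above it (assumed form).
w = np.exp(-z / (2.0 * Delta))
nu_t = w * nu_near + (1.0 - w) * nu_far

for zi, nui in list(zip(z, nu_t))[::10]:
    print(f"z = {zi:6.1f} m   nu_t = {nui:7.3f} m^2/s")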
Verification of Orthogrid Finite Element Modeling Techniques
NASA Technical Reports Server (NTRS)
Steeve, B. E.
1996-01-01
The stress analysis of orthogrid structures, specifically those with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process while still adequately capturing the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, a shell, and a mixed beam-and-shell element model. Results show that the shell element model performs the best, but that the simpler beam and mixed beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.
An analytical model for train-induced ground vibrations from railways
NASA Astrophysics Data System (ADS)
Karlström, A.; Boström, A.
2006-04-01
To investigate ground vibrations from railways an analytical approach is taken. The ground is modelled as a stratified half-space with linearly viscoelastic layers. On top of the ground a rectangular embankment is placed, supporting the rails and the sleepers. The rails are modelled as Euler-Bernoulli beams where the propagating forces (wheel loads) are acting and the sleepers are modelled with an anisotropic Kirchhoff plate. The solution is based on Fourier transforms in time and along the track. In the transverse direction the fields in the embankment are developed in Fourier series and in the half-space with Fourier transforms. The resulting numerical scheme is very efficient, permitting displacement fields far outside the track to be calculated. Numerical examples are given for an X2 train that operates at the site Ledsgard in Sweden. The displacements are simulated at 70 and 200 km/h and are compared with the displacements from simpler models. The simulations are also validated against measurements, with very good agreement. At 70 km/h the track displacements agree almost exactly and at 200 km/h the displacements are a very good approximation of the measurement.
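For orientation, the rail model referred to above is the standard Euler-Bernoulli beam under moving point loads; in schematic textbook form (with the reaction term standing in for the coupling to sleepers and embankment, and not quoted from the paper):

EI\,\frac{\partial^{4} w}{\partial x^{4}}
  + m\,\frac{\partial^{2} w}{\partial t^{2}}
  + q(x,t)
  = \sum_{j} P_{j}\,\delta\!\left(x - Vt - x_{j}\right),

where w is the rail deflection, EI the bending stiffness, m the mass per unit length, q(x,t) the distributed reaction from sleepers and embankment, P_j the wheel loads, and V the train speed. Fourier transforming in t and x turns this into algebraic relations that can be coupled to the layered-ground solution, which is the route taken in the paper.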
Xin, De-Dong; Wen, Jian-Fan
2012-01-01
Background 5S rRNA is a highly conserved ribosomal component. Eukaryotic 5S rRNA and its associated proteins (5S rRNA system) have become very well understood. Giardia lamblia was thought by some researchers to be the most primitive extant eukaryote while others considered it a highly evolved parasite. Previous reports have indicated that some aspects of its 5S rRNA system are simpler than that of common eukaryotes. We here explore whether this is true to its entire system, and whether this simplicity is a primitive or parasitic feature. Methodology/Principal Findings By collecting and confirming pre-existing data and identifying new data, we obtained almost complete datasets of the system of three isolates of G. lamblia, two other parasitic excavates (Trichomonas vaginalis, Trypanosoma cruzi), and one free-living one (Naegleria gruberi). After comprehensively comparing each aspect of the system among these excavates and also with those of archaea and common eukaryotes, we found all the three Giardia isolates to harbor a same simplified 5S rRNA system, which is not only much simpler than that of common eukaryotes but also the simplest one among those of these excavates, and is surprisingly very similar to that of archaea; we also found among these excavates the system in parasitic species is not necessarily simpler than that in free-living species, conversely, the system of free-living species is even simpler in some respects than those of parasitic ones. Conclusion/Significance The simplicity of Giardia 5S rRNA system should be considered a primitive rather than parasitically-degenerated feature. Therefore, Giardia 5S rRNA system might be a primitive system that is intermediate between that of archaea and the common eukaryotic model system, and it may reflect the evolutionary history of the eukaryotic 5S rRNA system from the archaeal form. Our results also imply G. lamblia might be a primitive eukaryote with secondary parasitically-degenerated features. PMID:22685540
A symbiotic approach to fluid equations and non-linear flux-driven simulations of plasma dynamics
NASA Astrophysics Data System (ADS)
Halpern, Federico
2017-10-01
The fluid framework is ubiquitous in studies of plasma transport and stability. Typical forms of the fluid equations are motivated by analytical work dating back several decades, before computer simulations were indispensable, and can therefore be suboptimal for numerical computation. We demonstrate a new first-principles approach to obtaining manifestly consistent, skew-symmetric fluid models, ensuring internal consistency and conservation properties even in discrete form. Mass, kinetic energy, and internal energy become quadratic (and always positive) invariants of the system. The model lends itself to a robust, straightforward discretization scheme with inherent non-linear stability. A simpler, drift-ordered form of the equations is obtained, and first results of their numerical implementation as a binary framework for bulk-fluid global plasma simulations are demonstrated. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences, Theory Program, under Award No. DE-FG02-95ER54309.
NASA-IGES Translator and Viewer
NASA Technical Reports Server (NTRS)
Chou, Jin J.; Logan, Michael A.
1995-01-01
NASA-IGES Translator (NIGEStranslator) is a batch program that translates a general IGES (Initial Graphics Exchange Specification) file to a NASA-IGES-Nurbs-Only (NINO) file. IGES is the most popular geometry exchange standard among Computer Aided Geometric Design (CAD) systems. The NINO format is a subset of IGES, implementing the simple yet most popular NURBS (Non-Uniform Rational B-Splines) representation. NIGEStranslator converts a complex IGES file to the simpler NINO file to simplify the tasks of CFD grid generation for models in CAD format. The NASA-IGES Viewer (NIGESview) is an Open-Inventor-based, highly interactive viewer/editor for NINO files. Geometry in the IGES files can be viewed, copied, transformed, deleted, and inquired. Users can use NIGEStranslator to translate IGES files from CAD systems to NINO files. The geometry can then be examined with NIGESview. Extraneous geometries can be interactively removed, and the cleaned model can be written to an IGES file, ready to be used in grid generation.
ERIC Educational Resources Information Center
Foo, Patrick; Warren, William H.; Duchon, Andrew; Tarr, Michael J.
2005-01-01
Do humans integrate experience on specific routes into metric survey knowledge of the environment, or do they depend on a simpler strategy of landmark navigation? The authors tested this question using a novel shortcut paradigm during walking in a virtual environment. The authors find that participants could not take successful shortcuts in a…
ERIC Educational Resources Information Center
Connolly, John J.; Glessner, Joseph T.; Hakonarson, Hakon
2013-01-01
Efforts to understand the causes of autism spectrum disorders (ASDs) have been hampered by genetic complexity and heterogeneity among individuals. One strategy for reducing complexity is to target endophenotypes, simpler biologically based measures that may involve fewer genes and constitute a more homogenous sample. A genome-wide association…
Resolved motion rate and resolved acceleration servo-control of wheeled mobile robots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muir, P.F.; Neuman, C.P.; Carnegie-Mellon Univ., Pittsburgh, PA
1989-01-01
Accurate motion control of wheeled mobile robots (WMRs) is required for their application to autonomous, semi-autonomous and teleoperated tasks. The similarities between WMRs and stationary manipulators suggest that current, successful, model-based manipulator control algorithms may be applied to WMRs. Special characteristics of WMRs including higher-pairs, closed-chains, friction and unactuated and unsensed joints require innovative modeling methodologies. The WMR modeling challenge has been recently overcome, thus enabling the application of manipulator control algorithms to WMRs. This realization lays the foundation for significant technology transfer from manipulator control to WMR control. We apply two Cartesian-space manipulator control algorithms: resolved motion rate (kinematics-based) and resolved acceleration (dynamics-based) control to WMR servo-control. We evaluate simulation studies of two exemplary WMRs: Uranus (a three degree-of-freedom WMR constructed at Carnegie Mellon University), and Bicsun-Bicas (a two degree-of-freedom WMR being constructed at Sandia National Laboratories) under the control of these algorithms. Although resolved motion rate servo-control is adequate for the control of Uranus, resolved acceleration servo-control is required for the control of the mechanically simpler Bicsun-Bicas because it exhibits more dynamic coupling and nonlinearities. Successful accurate motion control of these WMRs in simulation is driving current experimental research studies. 18 refs., 7 figs., 5 tabs.
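As a hedged illustration of the kinematics-based half of this pairing, the sketch below applies resolved motion rate control to a hypothetical omnidirectional WMR: commanded wheel rates are obtained from the desired Cartesian body velocity through the pseudoinverse of an assumed wheel Jacobian. The Jacobian values are placeholders, not the published Uranus or Bicsun-Bicas models.

```python
# Minimal sketch of resolved motion rate control for a hypothetical
# omnidirectional wheeled robot. The Jacobian J maps wheel rates to body
# velocity (vx, vy, wz); its entries below are illustrative placeholders.
import numpy as np

def resolved_rate(J, v_desired):
    """Resolved motion rate control: wheel rates = J^+ * desired body velocity."""
    return np.linalg.pinv(J) @ v_desired

J = np.array([[ 0.05,  0.05,   0.05],
              [ 0.00,  0.043, -0.043],
              [-0.10,  0.05,   0.05]])
v_ref = np.array([0.3, 0.0, 0.1])        # desired (vx [m/s], vy [m/s], wz [rad/s])
q_dot = resolved_rate(J, v_ref)          # commanded wheel angular rates
print(q_dot)
```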
A group evolving-based framework with perturbations for link prediction
NASA Astrophysics Data System (ADS)
Si, Cuiqi; Jiao, Licheng; Wu, Jianshe; Zhao, Jin
2017-06-01
Link prediction is a ubiquitous application in many fields, using partially observed information to predict the absence or presence of links between node pairs. The study of group evolution provides reasonable explanations of the behaviors of nodes, the relations between nodes, and community formation in a network. Possible events in group evolution include continuing, growing, splitting, forming and so on. The changes discovered in networks are to some extent the result of these events. In this work, we present a group-evolution-based characterization of nodes' behavioral patterns, via which we can estimate the probability that node pairs tend to interact. In general, the primary aim of this paper is to offer a minimal toy model to detect missing links based on the evolution of groups and to give a simpler explanation of the rationality of the model. We first introduce perturbations into networks to obtain stable cluster structures, and the stable clusters determine the stability of each node. Then fluctuations, another node behavior, are estimated from the participation of each node in its own group. Finally, we demonstrate that such characteristics allow us to predict link existence and propose a model for link prediction which outperforms many classical methods with reduced computational time at large scales. Encouraging experimental results obtained on real networks show that our approach can effectively predict missing links in networks, and even when nearly 40% of the edges are missing, it retains stable performance.
Lascola, Robert; O'Rourke, Patrick E.; Kyser, Edward A.
2017-10-05
Here, we have developed a piecewise local (PL) partial least squares (PLS) analysis method for total plutonium measurements by absorption spectroscopy in nitric acid-based nuclear material processing streams. Instead of using a single PLS model that covers all expected solution conditions, the method selects one of several local models based on an assessment of solution absorbance, acidity, and Pu oxidation state distribution. The local models match the global model for accuracy against the calibration set, but were observed in several instances to be more robust to variations associated with measurements in the process. The improvements are attributed to the relative parsimony of the local models. Not all of the sources of spectral variation are uniformly present at each part of the calibration range. Thus, the global model is locally overfitting and susceptible to increased variance when presented with new samples. A second set of models quantifies the relative concentrations of Pu(III), (IV), and (VI). Standards containing a mixture of these species were not at equilibrium due to a disproportionation reaction. Therefore, a separate principal component analysis is used to estimate the concentrations of the individual oxidation states in these standards in the absence of independent confirmatory analysis. The PL analysis approach is generalizable to other systems where the analysis of chemically complicated systems can be aided by rational division of the overall range of solution conditions into simpler sub-regions.
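A sketch of the piecewise-local strategy using scikit-learn, with synthetic data standing in for the process spectra; the partitioning rule (a split on an "acidity" variable) and all numbers are assumptions, not the authors' calibration.

```python
# Hedged sketch of a piecewise-local PLS strategy: several PLS models are
# calibrated on sub-ranges of solution conditions, and prediction selects the
# local model whose sub-range matches the sample. Data and the partitioning
# rule are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
acidity = rng.uniform(0.5, 8.0, size=300)                  # mock condition variable
spectra = rng.normal(size=(300, 50)) + acidity[:, None]    # mock absorbance spectra
conc = 2.0 * acidity + spectra[:, :5].sum(axis=1) * 0.1    # mock Pu concentration

edges = [0.5, 3.0, 6.0, 8.0]                               # acidity sub-ranges
models = []
for lo, hi in zip(edges[:-1], edges[1:]):
    mask = (acidity >= lo) & (acidity <= hi)
    models.append(((lo, hi),
                   PLSRegression(n_components=5).fit(spectra[mask], conc[mask])))

def predict_local(x_spectrum, x_acidity):
    """Pick the local PLS model for the sample's acidity sub-range."""
    for (lo, hi), m in models:
        if lo <= x_acidity <= hi:
            return float(m.predict(x_spectrum.reshape(1, -1)).ravel()[0])
    raise ValueError("acidity outside calibrated range")

print(predict_local(spectra[0], acidity[0]), conc[0])
```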
Path integration mediated systematic search: a Bayesian model.
Vickerstaff, Robert J; Merkle, Tobias
2012-08-21
The systematic search behaviour is a backup system that increases the chances of desert ants finding their nest entrance after foraging when the path integrator has failed to guide them home accurately enough. Here we present a mathematical model of the systematic search that is based on extensive behavioural studies in North African desert ants Cataglyphis fortis. First, a simple search heuristic utilising Bayesian inference and a probability density function is developed. This model, which optimises the short-term nest detection probability, is then compared to three simpler search heuristics and to recorded search patterns of Cataglyphis ants. To compare the different searches a method to quantify search efficiency is established as well as an estimate of the error rate in the ants' path integrator. We demonstrate that the Bayesian search heuristic is able to automatically adapt to increasing levels of positional uncertainty to produce broader search patterns, just as desert ants do, and that it outperforms the three other search heuristics tested. The searches produced by it are also arguably the most similar in appearance to the ant's searches. Copyright © 2012 Elsevier Ltd. All rights reserved.
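The sketch below is a toy grid-based version of the same idea, not the authors' model: a Gaussian prior over the nest position (its width standing in for path-integration error) is updated after each unsuccessful visit, and the next search point is the current posterior maximum, which naturally produces a search that broadens as uncertainty grows. Grid size, detection radius, and the error model are illustrative assumptions.

```python
# Toy Bayesian systematic search on a grid: a Gaussian prior over the nest
# position is updated after each unsuccessful visit, and the next search point
# is the posterior maximum. All parameters are assumptions for illustration.
import numpy as np

n, sigma, detect_r = 61, 6.0, 1.5
xs = np.arange(n) - n // 2
X, Y = np.meshgrid(xs, xs, indexing="ij")
prior = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
posterior = prior / prior.sum()

for step in range(200):
    i, j = np.unravel_index(np.argmax(posterior), posterior.shape)
    pos = np.array([X[i, j], Y[i, j]])          # move to the most probable cell
    if step % 50 == 0:
        print(f"step {step:3d}: searching {np.hypot(*pos):.1f} units from the PI origin")
    # Unsuccessful visit: zero out probability within the detection radius.
    miss = (X - pos[0])**2 + (Y - pos[1])**2 > detect_r**2
    posterior = posterior * miss
    posterior /= posterior.sum()
```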
NASA Astrophysics Data System (ADS)
Hou, X. Y.; Koh, C. G.; Kuang, K. S. C.; Lee, W. H.
2017-07-01
This paper investigates the capability of a novel piezoelectric sensor for low-frequency and low-amplitude vibration measurement. The proposed design effectively amplifies the input acceleration via two amplifying mechanisms and thus eliminates the external charge amplifier or conditioning amplifier typically employed in measurement systems. The sensor is also self-powered, i.e. no external power unit is required. Consequently, wiring and electrical insulation for on-site measurement are considerably simpler. In addition, the design also greatly reduces the interference from rotational motion which often accompanies the translational acceleration to be measured. An analytical model is developed based on a set of piezoelectric constitutive equations and beam theory. A closed-form expression is derived to correlate sensor geometry and material properties with its dynamic performance. Experimental calibration is then carried out to validate the analytical model. After calibration, experiments are carried out to check the feasibility of the new sensor in structural vibration detection. From the experimental results, it is concluded that the proposed sensor is suitable for measuring low-frequency and low-amplitude vibrations.
Development and validation of a new soot formation model for gas turbine combustor simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Di Domenico, Massimiliano; Gerlinger, Peter; Aigner, Manfred
2010-02-15
In this paper a new soot formation model for gas turbine combustor simulations is presented. A sectional approach for the description of Polycyclic Aromatic Hydrocarbons (PAHs) and a two-equation model for soot particle dynamics are introduced. By including the PAH chemistry the formulation becomes more general in that the soot formation is neither directly linked to the fuel nor to C2-like species, as is the case in simpler soot models currently available for CFD applications. At the same time, the sectional approach for the PAHs keeps the required computational resources low compared to models based on a more detailed description of the PAH kinetics. These features of the new model allow an accurate yet affordable calculation of soot in complex gas turbine combustion chambers. A careful model validation is presented for diffusion and partially premixed flames. Fuels ranging from methane to kerosene are investigated. Thus, flames with different sooting characteristics are covered. An excellent agreement with experimental data is achieved for all configurations investigated. A fundamental feature of the new model is that with a single set of constants it is able to accurately describe the soot dynamics of different fuels at different operating conditions. (author)
NASA Astrophysics Data System (ADS)
Solovjov, Vladimir P.; Webb, Brent W.; Andre, Frederic
2018-07-01
Following previous theoretical development based on the assumption of a rank correlated spectrum, the Rank Correlated Full Spectrum k-distribution (RC-FSK) method is proposed. The method proves advantageous in modeling radiation transfer in high temperature gases in non-uniform media in two important ways. First, and perhaps most importantly, the method requires no specification of a reference gas thermodynamic state. Second, the spectral construction of the RC-FSK model is simpler than original correlated FSK models, requiring only two cumulative k-distributions. Further, although not exhaustive, example problems presented here suggest that the method may also yield improved accuracy relative to prior methods, and may exhibit less sensitivity to the blackbody source temperature used in the model predictions. This paper outlines the theoretical development of the RC-FSK method, comparing the spectral construction with prior correlated spectrum FSK method formulations. Further the RC-FSK model's relationship to the Rank Correlated Spectral Line Weighted-sum-of-gray-gases (RC-SLW) model is defined. The work presents predictions using the Rank Correlated FSK method and previous FSK methods in three different example problems. Line-by-line benchmark predictions are used to assess the accuracy.
Sanz, Ana B; Sanchez-Niño, María Dolores; Martín-Cleary, Catalina; Ortiz, Alberto; Ramos, Adrián M
2013-07-01
Acute kidney injury (AKI) is a clinical syndrome characterized by the acute loss of kidney function. AKI is increasingly frequent and is associated with impaired survival and chronic kidney disease progression. Experimental AKI models have contributed to a better understanding of pathophysiological mechanisms but they have not yet resulted in routine clinical application of novel therapeutic approaches. The authors present the advances in experimental AKI models over the last decade. Furthermore, the authors review their current and expected impact on novel drug discovery. New AKI models have been developed in rodents and non-rodents. Non-rodents allow the evaluation of specific aspects of AKI in both bigger animals and simpler organisms such as drosophila and zebrafish. New rodent models have recently reproduced described clinical entities, such as aristolochic and warfarin nephropathies, and have also provided better models for old entities such as thrombotic microangiopathy-induced AKI. Several therapies identified in animal models are now undergoing clinical trials in human AKI, including p53 RNAi and bone-marrow derived mesenchymal stem cells. It is conceivable that further refinement of animal models in combination with ongoing trials and novel trials based on already identified potential targets will eventually yield effective therapies for clinical AKI.
Gurarie, David; King, Charles H.
2014-01-01
Mathematical modeling is widely used for predictive analysis of control options for infectious agents. Challenging problems arise for modeling host-parasite systems having complex life-cycles and transmission environments. Macroparasites, like Schistosoma, inhabit highly fragmented habitats that shape their reproductive success and distribution. Overdispersion and mating success are important factors to consider in modeling control options for such systems. Simpler models based on mean worm burden (MWB) formulations do not take these into account and overestimate transmission. Proposed MWB revisions have employed prescribed distributions and mating factor corrections to derive modified MWB models that have qualitatively different equilibria, including ‘breakpoints’ below which the parasite goes to extinction, suggesting the possibility of elimination via long-term mass-treatment control. Despite common use, no one has attempted to validate the scope and hypotheses underlying such MWB approaches. We conducted a systematic analysis of both the classical MWB and more recent “stratified worm burden” (SWB) modeling that accounts for mating and reproductive hurdles (Allee effect). Our analysis reveals some similarities, including breakpoints, between MWB and SWB, but also significant differences between the two types of model. We show the classic MWB has inherent inconsistencies, and propose SWB as a reliable alternative for projection of long-term control outcomes. PMID:25549362
Discrete stochastic analogs of Erlang epidemic models.
Getz, Wayne M; Dougherty, Eric R
2018-12-01
Erlang differential equation models of epidemic processes provide more realistic disease-class transition dynamics from susceptible (S) to exposed (E) to infectious (I) and removed (R) categories than the ubiquitous SEIR model. The latter is itself at one end of the spectrum of Erlang SE^mI^nR models with m concatenated E compartments and n concatenated I compartments. Discrete-time models, however, are computationally much simpler to simulate and fit to epidemic outbreak data than continuous-time differential equations, and are also much more readily extended to include demographic and other types of stochasticity. Here we formulate discrete-time deterministic analogs of the Erlang models, and their stochastic extension, based on a time-to-go distributional principle. Depending on which distributions are used (e.g. discretized Erlang, Gamma, Beta, or Uniform distributions), we demonstrate that our formulation represents both a discretization of Erlang epidemic models and generalizations thereof. We consider the challenges of fitting SE^mI^nR models and our discrete-time analog to data (the recent outbreak of Ebola in Liberia). We demonstrate that the latter performs much better than the former; confining fits to strict SEIR formulations reduces the numerical challenges, but sacrifices best-fit likelihood scores by at least 7%.
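A minimal deterministic sketch of the discrete-time SE^mI^nR idea (the stochastic extension and the time-to-go machinery of the paper are not reproduced): each of the m exposed and n infectious sub-compartments advances with a fixed per-step probability, giving discretized-Erlang stage durations. All parameter values are illustrative, not fitted to the Liberia data.

```python
# Minimal deterministic discrete-time SE^mI^nR sketch: m exposed and n
# infectious sub-compartments give discretized-Erlang stage durations.
# Parameter values are illustrative, not fitted to any outbreak.
import numpy as np

def step(S, E, I, R, beta, pE, pI, N):
    """One time step. pE/pI: per-step advance probability between sub-compartments."""
    new_inf = S * (1.0 - np.exp(-beta * I.sum() / N))   # S -> E_1
    outE = pE * E                                       # advance along the E chain
    outI = pI * I                                       # advance along the I chain
    E = E - outE
    E[0] += new_inf
    E[1:] += outE[:-1]
    I = I - outI
    I[0] += outE[-1]
    I[1:] += outI[:-1]
    return S - new_inf, E, I, R + outI[-1]

m, n_i, N = 3, 3, 1_000_000.0
S, E, I, R = N - 10.0, np.zeros(m), np.zeros(n_i), 0.0
I[0] = 10.0
for t in range(300):
    S, E, I, R = step(S, E, I, R, beta=0.3, pE=0.3, pI=0.25, N=N)
print(f"final size: {R:.0f} ({100 * R / N:.1f}% of the population)")
```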
A computationally tractable version of the collective model
NASA Astrophysics Data System (ADS)
Rowe, D. J.
2004-05-01
A computationally tractable version of the Bohr-Mottelson collective model is presented which makes it possible to diagonalize realistic collective models and obtain convergent results in relatively small appropriately chosen subspaces of the collective model Hilbert space. Special features of the proposed model are that it makes use of the beta wave functions given analytically by the softened-beta version of the Wilets-Jean model, proposed by Elliott et al., and a simple algorithm for computing SO(5)⊃SO(3) spherical harmonics. The latter has much in common with the methods of Chacon, Moshinsky, and Sharp but is conceptually and computationally simpler. Results are presented for collective models ranging from the spherical vibrator to the Wilets-Jean and axially symmetric rotor-vibrator models.
Dry Rainbelts: Understanding Boundary Layer Controls on the ITCZ Using a Dry Dynamical Core
NASA Astrophysics Data System (ADS)
Hill, S. A.; Bordoni, S.; Mitchell, J.
2017-12-01
Though migrations of Earth's Intertropical Convergence Zone (ITCZ) are often interpreted in terms of meridional energy transports, a recent study using an idealized, aquaplanet GCM indicates that the ITCZ's position is also linked to the character of the boundary layer momentum budget. Namely, moist convection within the ITCZ roughly coincides with a transition in the role of relative vorticity advection in the boundary layer, from being of leading-order to lower-order importance. This is insensitive to the presence of mid-latitude eddies or thermal inertia and holds over a range of planetary rotation rates, with this transitional regime and the ITCZ extending farther poleward the slower the planet rotates. We use an even simpler model, a dry dynamical core, to further refine the theoretical understanding of these results, via simulations analogous to and extending the aforementioned moist cases. The importance of planetary rotation, and the unimportance of baroclinic eddies and thermal inertia, also emerge in the dry simulations, implying that the underlying causes are rooted in simpler, steady-state, solsticial, axisymmetric, dry dynamics. We further elucidate the role of boundary layer dynamical processes through comparison with arguments dating to at least 1972 (although largely overlooked in recent literature) that convection is forced by convergence driven by a shallowing of the boundary layer depth, with this shallowing resulting from the transition from an advective to an Ekman balance on frictional drag. We discuss the potential links between this dynamical perspective and the popular energetic framework for ITCZ migrations, and the resulting implications for moist convection on Earth and other planetary bodies.
Computer-automated opponent for manned air-to-air combat simulations
NASA Technical Reports Server (NTRS)
Hankins, W. W., III
1979-01-01
Two versions of a real-time digital-computer program that operates a fighter airplane interactively against a human pilot in simulated air combat were evaluated. They function by replacing one of two pilots in the Langley differential maneuvering simulator. Both versions make maneuvering decisions from identical information and logic; they differ essentially in the aerodynamic models that they control. One is very complete, but the other is much simpler, primarily characterizing the airplane's performance (lift, drag, and thrust). Both models competed extremely well against highly trained U.S. fighter pilots.
Numerical study of combustion processes in afterburners
NASA Technical Reports Server (NTRS)
Zhou, Xiaoqing; Zhang, Xiaochun
1986-01-01
Mathematical models and numerical methods are presented for computer modeling of aeroengine afterburners. A computer code GEMCHIP is described briefly. The algorithms SIMPLER, for gas flow predictions, and DROPLET, for droplet flow calculations, are incorporated in this code. The block correction technique is adopted to facilitate convergence. The method of handling irregular shapes of combustors and flameholders is described. The predicted results for a low-bypass-ratio turbofan afterburner in the cases of gaseous combustion and multiphase spray combustion are provided and analyzed, and engineering guides for afterburner optimization are presented.
NASA Astrophysics Data System (ADS)
Glatzmaier, G. A.
2010-12-01
There has been considerable interest during the past few years about the banded zonal winds and global magnetic field on Saturn (and Jupiter). Questions regarding the depth to which the intense winds extend below the surface and the role they play in maintaining the dynamo continue to be debated. The types of computer models employed to address these questions fall into two main classes: general circulation models (GCMs) based on hydrostatic shallow-water assumptions from the atmospheric and ocean modeling communities and global non-hydrostatic deep convection models from the geodynamo and solar dynamo communities. The latter class can be further divided into Boussinesq models, which do not account for density stratification, and anelastic models, which do. Recent efforts to convert GCMs to deep circulation anelastic models have succeeded in producing fluid flows similar to those obtained from the original deep convection anelastic models. We describe results from one of the original anelastic convective dynamo simulations and compare them to a recent anelastic dynamo benchmark for giant gas planets. This benchmark is based on a polytropic reference state that spans five density scale heights with a radius and rotation rate similar to those of our solar system gas giants. The resulting magnetic Reynolds number is about 3000. Better spatial resolution will be required to produce more realistic predictions that capture the effects of both the density and electrical conductivity stratifications and include enough of the turbulent kinetic energy spectrum. Important additional physics may also be needed in the models. However, the basic models used in all simulation studies of the global dynamics of giant planets will hopefully first be validated by doing these simpler benchmarks.
Solute partitioning in multi-component γ/γ' Co–Ni-base superalloys with near-zero lattice misfit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meher, S.; Carroll, L. J.; Pollock, T. M.
2015-11-21
The addition of nickel to cobalt-base alloys enables alloys with a near-zero γ – γ' lattice misfit. The solute partitioning between ordered γ' precipitates and the disordered γ matrix has been investigated using atom probe tomography. The unique shift in solute partitioning in these alloys, as compared to that in simpler Co-base alloys, derives from changes in the site substitution of solutes as the relative amounts of Co and Ni change, highlighting new opportunities for the development of advanced tailored alloys.
Cooling and Trapping of Neutral Atoms
2009-04-30
Schrodinger equation in which the absence of the rotating wave approximation accounts for the two frequencies [18]. This result can be described in...depict this energy conservation process is the Jaynes-Cummings view, where the light field can be described as a number state. Then it becomes clear...of the problem under consideration. Find a suitable approximation for the normal modes; the simpler, the better. Decide how to model the light
Cognitive Complexity, Attitudinal Affect, and Dispersion in Affect Ratings for Products.
Durand, Richard M
1979-04-01
The purpose of this study was to examine the relationships between cognitive complexity, attitudinal affect, and dispersion of affect scores (N = 102 male business administration undergraduates). Models of automobiles and toothpaste brands were the content domains studied. Analysis using Pearson product-moment correlation supported the hypothesis that cognitively complex Ss had a lower level of affect and greater dispersion of affect scores than did cognitively simpler Ss.
The Epistemic Representation of Information Flow Security in Probabilistic Systems
1995-06-01
The new characterization also means that our security criterion is expressible in a simpler logic and model. 1 Introduction Multilevel security is...ber generator) during its execution. Such probabilistic choices are useful in a multilevel security context for Supported by grants HKUST 608/94E from...
Mirus, B.B.; Ebel, B.A.; Heppner, C.S.; Loague, K.
2011-01-01
Concept development simulation with distributed, physics-based models provides a quantitative approach for investigating runoff generation processes across environmental conditions. Disparities within data sets employed to design and parameterize boundary value problems used in heuristic simulation inevitably introduce various levels of bias. The objective was to evaluate the impact of boundary value problem complexity on process representation for different runoff generation mechanisms. The comprehensive physics-based hydrologic response model InHM has been employed to generate base case simulations for four well-characterized catchments. The C3 and CB catchments are located within steep, forested environments dominated by subsurface stormflow; the TW and R5 catchments are located in gently sloping rangeland environments dominated by Dunne and Horton overland flows. Observational details are well captured within all four of the base case simulations, but the characterization of soil depth, permeability, rainfall intensity, and evapotranspiration differs for each. These differences are investigated through the conversion of each base case into a reduced case scenario, all sharing the same level of complexity. Evaluation of how individual boundary value problem characteristics impact simulated runoff generation processes is facilitated by quantitative analysis of integrated and distributed responses at high spatial and temporal resolution. Generally, the base case reduction causes moderate changes in discharge and runoff patterns, with the dominant process remaining unchanged. Moderate differences between the base and reduced cases highlight the importance of detailed field observations for parameterizing and evaluating physics-based models. Overall, similarities between the base and reduced cases indicate that the simpler boundary value problems may be useful for concept development simulation to investigate fundamental controls on the spectrum of runoff generation mechanisms. Copyright 2011 by the American Geophysical Union.
Crustal deformation in Great California Earthquake cycles
NASA Technical Reports Server (NTRS)
Li, Victor C.; Rice, James R.
1987-01-01
A model in which coupling is described approximately through a generalized Elsasser model is proposed for computation of the periodic crustal deformation associated with repeated strike-slip earthquakes. The model is found to provide a more realistic physical description of tectonic loading than do simpler kinematic models. Parameters are chosen to model the 1857 and 1906 San Andreas ruptures, and predictions are found to be consistent with data on variations of contemporary surface strain and displacement rates as a function of distance from the 1857 and 1906 rupture traces. Results indicate that the asthenosphere appropriate to describe crustal deformation on the earthquake cycle time scale lies in the lower crust and perhaps the crust-mantle transition zone.
Optimal remediation of unconfined aquifers: Numerical applications and derivative calculations
NASA Astrophysics Data System (ADS)
Mansfield, Christopher M.; Shoemaker, Christine A.
1999-05-01
This paper extends earlier work on derivative-based optimization for cost-effective remediation to unconfined aquifers, which have more complex, nonlinear flow dynamics than confined aquifers. Most previous derivative-based optimization of contaminant removal has been limited to consideration of confined aquifers; however, contamination is more common in unconfined aquifers. Exact derivative equations are presented, and two computationally efficient approximations, the quasi-confined (QC) and head independent from previous (HIP) unconfined-aquifer finite element equation derivative approximations, are presented and demonstrated to be highly accurate. The derivative approximations can be used with any nonlinear optimization method requiring derivatives for computation of either time-invariant or time-varying pumping rates. The QC and HIP approximations are combined with the nonlinear optimal control algorithm SALQR into the unconfined-aquifer algorithm, which is shown to compute solutions for unconfined aquifers in CPU times that were not significantly longer than those required by the confined-aquifer optimization model. Two of the three example unconfined-aquifer cases considered obtained pumping policies with substantially lower objective function values with the unconfined model than were obtained with the confined-aquifer optimization, even though the mean differences in hydraulic heads predicted by the unconfined- and confined-aquifer models were small (less than 0.1%). We suggest a possible geophysical index based on differences in drawdown predictions between unconfined- and confined-aquifer models to estimate which aquifers require unconfined-aquifer optimization and which can be adequately approximated by the simpler confined-aquifer analysis.
Park, Tae-Joon; Lee, Sang-Hyun
2012-01-01
Objective The purpose of this study was to develop a superimposition method for the lower arch using 3-dimensional (3D) cone beam computed tomography (CBCT) images and orthodontic 3D digital modeling. Methods Integrated 3D CBCT images were acquired by substituting the dental portion of the 3D CBCT images with precise dental images from an orthodontic 3D digital model. Images were acquired before and after treatment. Two superimposition methods were designed. Surface superimposition was based on the basal bone structure of the mandible by surface-to-surface matching (best-fit method). Plane superimposition was based on anatomical structures (the mental and lingual foramina). For the evaluation, 10 landmarks including teeth and anatomic structures were assigned, and superimposition and measurement were performed 30 times to determine the more reproducible and reliable method. Results All landmarks demonstrated that the surface superimposition method produced relatively more consistent coordinate values. The mean distances of measured landmark values from the means were statistically significantly lower with the surface superimposition method. Conclusions Between the 2 superimposition methods designed for the evaluation of 3D changes in the lower arch, surface superimposition was the simpler, more reproducible, and more reliable method. PMID:23112948
Making Interoperability Easier with the NASA Metadata Management Tool
NASA Astrophysics Data System (ADS)
Shum, D.; Reese, M.; Pilone, D.; Mitchell, A. E.
2016-12-01
ISO 19115 has enabled interoperability amongst tools, yet many users find it hard to build ISO metadata for their collections because the standard can be large and overly flexible for their needs. The Metadata Management Tool (MMT), part of NASA's Earth Observing System Data and Information System (EOSDIS), offers users a modern, easy-to-use, browser-based tool to develop ISO-compliant metadata. Through a simplified UI experience, metadata curators can create and edit collections without any understanding of the complex ISO 19115 format, while still generating compliant metadata. The MMT is also able to assess the completeness of collection-level metadata by evaluating it against a variety of metadata standards. The tool provides users with clear guidance on how to change their metadata in order to improve its quality and compliance. It is based on NASA's Unified Metadata Model for Collections (UMM-C), a simpler metadata model that can be cleanly mapped to ISO 19115. This allows metadata authors and curators to meet ISO compliance requirements faster and more accurately. The MMT and UMM-C have been developed in an agile fashion, with recurring end-user tests and reviews to continually refine the tool, the model, and the ISO mappings. This process allows for continual improvement and evolution to meet the community's needs.
NASA Astrophysics Data System (ADS)
Cao, Qian; Thawait, Gaurav; Gang, Grace J.; Zbijewski, Wojciech; Reigel, Thomas; Brown, Tyler; Corner, Brian; Demehri, Shadpour; Siewerdsen, Jeffrey H.
2015-02-01
Joint space morphology can be indicative of the risk, presence, progression, and/or treatment response of disease or trauma. We describe a novel methodology of characterizing joint space morphology in high-resolution 3D images (e.g. cone-beam CT (CBCT)) using a model based on elementary electrostatics that overcomes a variety of basic limitations of existing 2D and 3D methods. The method models each surface of a joint as a conductor at fixed electrostatic potential and characterizes the intra-articular space in terms of the electric field lines resulting from the solution of Gauss’ Law and the Laplace equation. As a test case, the method was applied to discrimination of healthy and osteoarthritic subjects (N = 39) in 3D images of the knee acquired on an extremity CBCT system. The method demonstrated improved diagnostic performance (area under the receiver operating characteristic curve, AUC > 0.98) compared to simpler methods of quantitative measurement and qualitative image-based assessment by three expert musculoskeletal radiologists (AUC = 0.87, p-value = 0.007). The method is applicable to simple (e.g. the knee or elbow) or multi-axial joints (e.g. the wrist or ankle) and may provide a useful means of quantitatively assessing a variety of joint pathologies.
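A toy 2D version of the electrostatic construction, with synthetic "articular surfaces" in place of segmented CBCT bone: the two surfaces are held at potentials 0 and 1, the Laplace equation is relaxed in the gap, and the reciprocal of the local field magnitude serves as a rough width-like descriptor (field lines could then be traced along the gradient). Geometry and iteration counts are assumptions.

```python
# Toy 2D sketch of the electrostatic joint-space idea: two synthetic surfaces
# at potentials 0 and 1, Jacobi relaxation of the Laplace equation in the gap,
# and 1/|grad(phi)| as a rough local width proxy. Geometry is made up.
import numpy as np

nx, ny = 120, 60
phi = np.zeros((nx, ny))
x = np.linspace(0, 1, nx)
lower = (8 + 6 * np.sin(3 * x)).astype(int)           # lower surface height (pixels)
upper = ny - 10 - (4 * np.cos(2 * x)).astype(int)     # upper surface height (pixels)

fixed = np.zeros_like(phi, dtype=bool)
for i in range(nx):
    phi[i, :lower[i]] = 0.0; fixed[i, :lower[i]] = True   # conductor at V = 0
    phi[i, upper[i]:] = 1.0; fixed[i, upper[i]:] = True   # conductor at V = 1

for _ in range(5000):                                      # Jacobi relaxation
    new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                  + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi = np.where(fixed, phi, new)

gi, gj = np.gradient(phi)
width_proxy = 1.0 / np.maximum(np.hypot(gi, gj), 1e-9)    # ~ local gap width (pixels)
print("median width proxy inside the gap:", np.median(width_proxy[~fixed]))
```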
Wu, Zhen; Jia, Pei-Qiao; Hu, Zhong-Jun; Chen, Li-Qiao; Gu, Zhi-Min; Liu, Qi-Gen
2012-03-01
Based on the 2008-2009 survey data of the fishery resources and eco-environment of Fenshuijiang Reservoir, a mass balance model for the reservoir ecosystem was constructed with Ecopath with Ecosim software. The model was composed of 14 functional groups, including silver carp, bighead carp, Hemibarbus maculatus, Culter alburnus, Microlepis and other fishes, Oligochaeta, aquatic insects, zooplankton, phytoplankton, and organic detritus, etc., and was able to reasonably simulate the Fenshuijiang Reservoir ecosystem. In this ecosystem, there were five trophic levels (TLs), and the nutrient flow mainly occurred in the first three TLs. Grazing and detritus food chains were the main energy flows in the ecosystem, but the food web was simpler and susceptible to disturbance from the outer environment. The transfer efficiency at the lower TLs was relatively low, indicating that the ecosystem had a lower capability in energy utilization, and the excessive stock of nutrients in the ecosystem could lead to eutrophication. The lower connectance index, system omnivory index, Finn's cycling index, and Finn's mean path length demonstrated that the ecosystem was unstable, while the high ecosystem property indices such as Pp/R and Pp/B showed that the ecosystem was immature and highly productive. It was suggested that Fenshuijiang Reservoir is still a developing new reservoir ecosystem, with a very short history and comparatively high primary productivity.
Selection of Worst-Case Pesticide Leaching Scenarios for Pesticide Registration
NASA Astrophysics Data System (ADS)
Vereecken, H.; Tiktak, A.; Boesten, J.; Vanderborght, J.
2010-12-01
The use of pesticides, fertilizers and manure in intensive agriculture may have a negative impact on the quality of ground- and surface water resources. Legislative action has been undertaken in many countries to protect surface and groundwater resources from contamination by surface applied agrochemicals. Of particular concern are pesticides. The registration procedure plays an important role in the regulation of pesticide use in the European Union. In order to register a certain pesticide use, the notifier needs to prove that the use does not entail a risk of groundwater contamination. Therefore, leaching concentrations of the pesticide need to be assessed using model simulations for so called worst-case scenarios. In the current procedure, a worst-case scenario represents a parameterized pesticide fate model for a certain soil and a certain time series of weather conditions that tries to represent all relevant processes such as transient water flow, root water uptake, pesticide transport, sorption, decay and volatilisation as accurate as possible. Since this model has been parameterized for only one soil and weather time series, it is uncertain whether it represents a worst-case condition for a certain pesticide use. We discuss an alternative approach that uses a simpler model that requires less detailed information about the soil and weather conditions but still represents the effect of soil and climate on pesticide leaching using information that is available for the entire European Union. A comparison between the two approaches demonstrates that the higher precision that the detailed model provides for the prediction of pesticide leaching at a certain site is counteracted by its smaller accuracy to represent a worst case condition. The simpler model predicts leaching concentrations less precise at a certain site but has a complete coverage of the area so that it selects a worst-case condition more accurately.
The Levy sections theorem revisited
NASA Astrophysics Data System (ADS)
Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio
2007-06-01
This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets.
Rethinking 'rational imitation' in 14-month-old infants: a perceptual distraction approach.
Beisert, Miriam; Zmyj, Norbert; Liepelt, Roman; Jung, Franziska; Prinz, Wolfgang; Daum, Moritz M
2012-01-01
In their widely noticed study, Gergely, Bekkering, and Király (2002) showed that 14-month-old infants imitated an unusual action only if the model freely chose to perform this action and not if the choice of the action could be ascribed to external constraints. They attributed this kind of selective imitation to the infants' capacity of understanding the principle of rational action. In the current paper, we present evidence that a simpler approach of perceptual distraction may be more appropriate to explain their results. When we manipulated the saliency of context stimuli in the two original conditions, the results were exactly opposite to what rational imitation predicts. Based on these findings, we reject the claim that the notion of rational action plays a key role in selective imitation in 14-month-olds.
YORP torque as the function of shape harmonics
NASA Astrophysics Data System (ADS)
Breiter, Sławomir; Michalska, Hanna
2008-08-01
The second-order analytical approximation of the mean Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) torque components is given as an explicit function of the shape spherical harmonics coefficients for a sufficiently regular minor body. The results are based upon a new expression for the insolation function, significantly simpler than in previous works. A linearized plane-parallel model of the temperature distribution derived from the insolation function allows us to take into account a non-zero conductivity. Final expressions for the three average components of the YORP torque related to rotation period, obliquity, and precession are given in the form of Legendre series in the cosine of obliquity. The series have good numerical properties and can be easily truncated according to the degree of the Legendre polynomials or associated functions, with the first two terms playing the principal role.
Application of dynamic recurrent neural networks in nonlinear system identification
NASA Astrophysics Data System (ADS)
Du, Yun; Wu, Xueli; Sun, Huiqin; Zhang, Suying; Tian, Qiang
2006-11-01
An adaptive identification method based on a simple dynamic recurrent neural network (SRNN) is presented for nonlinear dynamic systems. The method rests on the idea that using the inner-state feedback of a dynamic network to describe the nonlinear kinetic characteristics of a system reflects the dynamics more directly; from this, the recursive prediction error (RPE) learning algorithm for the SRNN is derived, and the algorithm is improved by adopting a recursion-layer topology without weight values. The simulation results indicate that this kind of neural network can be used in real-time control, owing to its fewer weights, simpler learning algorithm, higher identification speed, and higher model precision. It avoids the intricate training and slow convergence caused by the complicated topological structure of the usual dynamic recurrent neural network.
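The sketch below illustrates recurrent-network identification of a toy nonlinear plant; for brevity it uses PyTorch with ordinary gradient descent rather than the paper's recursive prediction error algorithm, and the plant, network size, and training schedule are invented.

```python
# Sketch of recurrent-network system identification on a toy nonlinear plant.
# Ordinary gradient descent stands in for the paper's RPE algorithm.
import torch
import torch.nn as nn

torch.manual_seed(0)
T = 500
u = torch.rand(1, T, 1) * 2 - 1                       # random input sequence
y = torch.zeros(1, T, 1)
for t in range(1, T):                                 # toy nonlinear plant (made up)
    y[0, t, 0] = 0.6 * y[0, t - 1, 0] + 0.4 * torch.tanh(u[0, t - 1, 0])

class SRNN(nn.Module):
    def __init__(self, hidden=8):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)
    def forward(self, x):
        h, _ = self.rnn(x)                            # inner-state feedback
        return self.out(h)

model = SRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(u), y)
    loss.backward()
    opt.step()
print("final one-step identification MSE:", float(loss))
```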
Bankhead, Armand; Magnuson, Nancy S; Heckendorn, Robert B
2007-06-07
A computer simulation is used to model ductal carcinoma in situ, a form of non-invasive breast cancer. The simulation uses known histological morphology, cell types, and stochastic cell proliferation to evolve tumorous growth within a duct. The ductal simulation is based on a hybrid cellular automaton design using genetic rules to determine each cell's behavior. The genetic rules are a mutable abstraction that demonstrate genetic heterogeneity in a population. Our goal was to examine the role (if any) that recently discovered mammary stem cell hierarchies play in genetic heterogeneity, DCIS initiation and aggressiveness. Results show that simpler progenitor hierarchies result in greater genetic heterogeneity and evolve DCIS significantly faster. However, the more complex progenitor hierarchy structure was able to sustain the rapid reproduction of a cancer cell population for longer periods of time.
A novel unsplit perfectly matched layer for the second-order acoustic wave equation.
Ma, Youneng; Yu, Jinhua; Wang, Yuanyuan
2014-08-01
When solving acoustic field equations by numerical approximation techniques, absorbing boundary conditions (ABCs) are widely used to truncate the simulation to a finite space. The perfectly matched layer (PML) technique has exhibited excellent absorbing efficiency as an ABC for the acoustic wave equation formulated as a first-order system. However, as the PML was originally designed for the first-order equation system, it cannot be applied to the second-order equation system directly. In this article, we aim to extend the unsplit PML to the second-order equation system. We developed an efficient unsplit implementation of the PML for the second-order acoustic wave equation based on an auxiliary-differential-equation (ADE) scheme. The proposed method facilitates the use of the PML in simulations based on second-order equations. Compared with existing PMLs, it has a simpler implementation and requires less extra storage. Numerical results from finite-difference time-domain models are provided to illustrate the validity of the approach. Copyright © 2014 Elsevier B.V. All rights reserved.
Adjoint-based optimization of PDEs in moving domains
NASA Astrophysics Data System (ADS)
Protas, Bartosz; Liao, Wenyuan
2008-02-01
In this investigation we address the problem of adjoint-based optimization of PDE systems in moving domains. As an example we consider the one-dimensional heat equation with prescribed boundary temperatures and heat fluxes. We discuss two methods of deriving an adjoint system necessary to obtain a gradient of a cost functional. In the first approach we derive the adjoint system after mapping the problem to a fixed domain, whereas in the second approach we derive the adjoint directly in the moving domain by employing methods of the noncylindrical calculus. We show that the operations of transforming the system from a variable to a fixed domain and deriving the adjoint do not commute and that, while the gradient information contained in both systems is the same, the second approach results in an adjoint problem with a simpler structure which is therefore easier to implement numerically. This approach is then used to solve a moving boundary optimization problem for our model system.
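As a hedged, fixed-domain companion to this discussion (the moving-domain and noncylindrical-calculus machinery is not reproduced), the sketch below builds the discrete adjoint of an explicit 1D heat equation solver: the control is the initial temperature field, the cost is the misfit to a target at the final time, and the adjoint sweep is simply the transposed update applied backwards, verified against a finite-difference derivative.

```python
# Minimal discrete-adjoint sketch for the 1D heat equation on a FIXED domain.
# The adjoint of the explicit update u <- A u is A^T applied backwards.
import numpy as np

n, nt, alpha = 50, 200, 0.4                      # grid points, steps, alpha = k*dt/dx^2
main = np.full(n, 1 - 2 * alpha); off = np.full(n - 1, alpha)
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)   # explicit heat update matrix

target = np.exp(-np.linspace(-3, 3, n) ** 2)

def cost_and_grad(u0):
    u = u0.copy()
    for _ in range(nt):
        u = A @ u                                # forward sweep
    r = u - target
    lam = r.copy()
    for _ in range(nt):
        lam = A.T @ lam                          # adjoint (backward) sweep
    return 0.5 * r @ r, lam                      # cost and gradient w.r.t. u0

u0 = np.zeros(n)
J, g = cost_and_grad(u0)
eps, k = 1e-6, n // 2                            # finite-difference check of one component
e = np.zeros(n); e[k] = eps
J_eps, _ = cost_and_grad(u0 + e)
print("adjoint grad:", g[k], " FD grad:", (J_eps - J) / eps)
```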
Fireside corrosion in oxy-fuel combustion of coal
Holcomb, Gordon R.; Tylczak, Joseph; Meier, Gerald H.; ...
2011-08-01
Oxy-fuel combustion is based on burning fossil fuels in a mixture of recirculated flue gas and oxygen, rather than in air. An optimized oxy-combustion power plant will have ultra-low emissions since the flue gas that results from oxy-fuel combustion consists almost entirely of CO2 and water vapor. Once the water vapor is condensed, it is relatively easy to sequester the CO2 so that it does not escape into the atmosphere. A variety of laboratory tests comparing air-firing to oxy-firing conditions, and tests examining specific simpler combinations of oxidants, were conducted at 650-700 C. Alloys studied included model Fe-Cr and Ni-Cr alloys, commercial ferritic steels, austenitic steels, and nickel base superalloys. Furthermore, the observed corrosion behavior shows accelerated corrosion even with sulfate additions that remain solid at the tested temperatures, encapsulation of ash components in outer iron oxide scales, and a differentiation between oxy-fuel combustion flue gas recirculation choices.
Six new mechanics corresponding to further shape theories
NASA Astrophysics Data System (ADS)
Anderson, Edward
2016-02-01
In this paper, a suite of relational notions of shape is presented at the level of configuration space geometry, with corresponding new theories of shape mechanics and shape statistics. These further generalize two quite well known examples: (i) Kendall's (metric) shape space with his shape statistics and Barbour's mechanics thereupon. (ii) Leibnizian relational space alias metric scale-and-shape space, to which corresponds Barbour-Bertotti mechanics. This paper's new theories include, using the invariant and group namings, (iii) Angle alias conformal shape mechanics. (iv) Area ratio alias e shape mechanics. (v) Area alias e scale-and-shape mechanics. (iii)-(v) rest respectively on angle space, area-ratio space, and area space configuration spaces. Probability and statistics applications are also pointed to in outline. (vi) Various supersymmetric counterparts of (i)-(v) are considered. Since supergravity differs considerably from GR-based conceptions of background independence, some of the new supersymmetric shape mechanics are compared with both. These reveal compatibility between supersymmetry and GR-based conceptions of background independence, at least within these simpler model arenas.
Transmission-grating-based wavefront tilt sensor.
Iwata, Koichi; Fukuda, Hiroki; Moriwaki, Kousuke
2009-07-10
We propose a new type of tilt sensor. It consists of a grating and an image sensor. It detects the tilt of the collimated wavefront reflected from a plane mirror. Its principle is described and analyzed based on wave optics. Experimental results show its validity. Simulations of the ordinary autocollimator and the proposed tilt sensor show that the effect of noise on the measured angle is smaller for the latter. These results show a possibility of making a smaller and simpler tilt sensor.
A simple next-best alternative to seasonal predictions in Europe
NASA Astrophysics Data System (ADS)
Buontempo, Carlo; De Felice, Matteo
2016-04-01
In order to build a climate-proof society, we need to learn how to best use the climate information we have. Having spent time and resources developing complex numerical models has often blinded us to the value some of this information really has in the eyes of a decision maker. An effective way to assess this is to compare the quality (and cost) of a forecast with the quality of the forecast from a prediction system based on simpler assumptions (and thus cheaper to run). Such a practice is common in marketing analysis, where it is often referred to as the next-best alternative. As a way to facilitate such an analysis, climate service providers should always provide a set of skill scores alongside the predictions. These are usually based on climatological means, anomaly persistence or, more recently, multiple linear regressions. We here present an equally simple benchmark based on a Markov chain process locally trained at a monthly or seasonal time-scale. We demonstrate that in spite of its simplicity the model easily outperforms not only the standard benchmark but also most of the seasonal prediction systems, at least in Europe. We suggest that a benchmark of this kind could represent a useful next-best alternative for a number of users.
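A minimal sketch of such a Markov-chain benchmark, with synthetic data in place of an observed monthly series: anomalies are discretized into terciles, a first-order transition matrix is fitted, and the forecast is the most probable tercile given the current one. A real benchmark would be trained separately for each calendar month or season, as the abstract describes.

```python
# Sketch of a first-order Markov-chain benchmark for categorical (tercile)
# monthly forecasts. The series below is synthetic.
import numpy as np

rng = np.random.default_rng(1)
series = rng.normal(size=600).cumsum() * 0.1 + rng.normal(size=600)   # mock anomalies
terciles = np.quantile(series, [1 / 3, 2 / 3])
cat = np.digitize(series, terciles)                 # 0 = below, 1 = normal, 2 = above

T = np.zeros((3, 3))
for a, b in zip(cat[:-1], cat[1:]):
    T[a, b] += 1
T = T / T.sum(axis=1, keepdims=True)                # row-stochastic transition matrix

current = cat[-1]
print("transition matrix:\n", np.round(T, 2))
print("forecast category for next month:", int(np.argmax(T[current])))
```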
Haeufle, D F B; Günther, M; Wunner, G; Schmitt, S
2014-01-01
In biomechanics and biorobotics, muscles are often associated with reduced movement control effort and simplified control compared to technical actuators. This is based on evidence that the nonlinear muscle properties positively influence movement control. It is, however, an open question how to quantify the simplicity aspect of control effort and compare it between systems. Physical measures, such as energy consumption, stability, or jerk, have already been applied to compare biological and technical systems. Here a physical measure of control effort based on information entropy is presented. The idea is that control is simpler if a specific movement is generated with less processed sensor information, depending on the control scheme and the physical properties of the systems being compared. By calculating the Shannon information entropy of all sensor signals required for control, an information cost function can be formulated, allowing the comparison of models of biological and technical control systems. Exemplarily applied to (bio-)mechanical models of hopping, the method reveals that the required information for generating hopping with a muscle driven by a simple reflex control scheme is only I=32 bits versus I=660 bits with a DC motor and a proportional-differential controller. This approach to quantifying control effort captures the simplicity of a control scheme and can be used to compare completely different actuators and control approaches.
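A rough sketch of the information-cost idea: quantize each sensor signal the controller consumes and sum the Shannon entropies of the quantized signals. The example signals, the quantization into 64 bins, and the two hypothetical control schemes are assumptions for illustration; the paper's exact formulation may differ.

```python
# Sketch of an entropy-based information cost: sum the Shannon entropies of
# the quantized sensor signals a controller consumes. Signals are invented.
import numpy as np

def shannon_entropy_bits(signal, n_bins=64):
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 2000)
# Reflex-like control using one sensor vs. a PD controller using two sensors:
length_sensor = np.sin(6 * t) + 0.05 * rng.normal(size=t.size)
position = np.cos(6 * t) + 0.05 * rng.normal(size=t.size)
velocity = np.gradient(position, t)

reflex_cost = shannon_entropy_bits(length_sensor)
pd_cost = shannon_entropy_bits(position) + shannon_entropy_bits(velocity)
print(f"reflex control information cost ~ {reflex_cost:.1f} bits")
print(f"PD control information cost     ~ {pd_cost:.1f} bits")
```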
Data assimilation experiments using the diffusive back and forth nudging for the NEMO ocean model
NASA Astrophysics Data System (ADS)
Ruggiero, G. A.; Ourmières, Y.; Cosme, E.; Blum, J.; Auroux, D.; Verron, J.
2014-07-01
The Diffusive Back and Forth Nudging (DBFN) is an easy-to-implement iterative data assimilation method based on the well-known nudging method. It consists of a sequence of forward and backward model integrations, within a given time window, both of them using a feedback term to the observations. Therefore, in the DBFN, the nudging asymptotic behavior is translated into an infinite number of iterations within a bounded time domain. In this method, the backward integration is carried out with what is called the backward model, which is basically the forward model with reversed time step sign. To maintain numerical stability, the diffusion terms also have their sign reversed, giving a diffusive character to the algorithm. In this article the DBFN performance in controlling a primitive equation ocean model is investigated. In this kind of model, non-resolved scales are modeled by diffusion operators which dissipate energy that cascades from large to small scales. Thus, in this article the DBFN approximations and their consequences for the data assimilation system set-up are analyzed. Our main result is that the DBFN may provide results which are comparable to those produced by a 4D-Var implementation, with a much simpler implementation and a shorter CPU time for convergence.
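A toy DBFN sketch on a 1D advection-diffusion model (not the NEMO configuration): each iteration integrates forward with a nudging term pulling the state toward sparse observations, then integrates backward with the time step sign reversed and the diffusion sign reversed to remain dissipative, again with nudging. Gains, grid, and the observation pattern are illustrative assumptions.

```python
# Toy diffusive back-and-forth nudging on 1D advection-diffusion with periodic
# boundaries. All parameters and the observation pattern are made up.
import numpy as np

n, dx, dt, c, nu, K, nt = 100, 1.0, 0.2, 1.0, 0.5, 0.5, 150
x = np.arange(n) * dx

def dudx(u):  return (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
def lap(u):   return (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
def model_rhs(u, sign_diff=+1.0):
    return -c * dudx(u) + sign_diff * nu * lap(u)

# "Truth" run and sparse observations (every 5th grid point, every 10th step).
truth = np.exp(-((x - 30) / 8.0) ** 2)
obs, u = {}, truth.copy()
for k in range(nt):
    u = u + dt * model_rhs(u)
    if k % 10 == 0:
        obs[k] = (np.arange(0, n, 5), u[::5].copy())

def nudge(u, k, gain):
    if k in obs:
        idx, y = obs[k]
        u = u.copy(); u[idx] += dt * gain * (y - u[idx])
    return u

ua = np.zeros(n)                          # poor first guess of the initial state
for it in range(20):                      # back-and-forth iterations
    u = ua.copy()
    for k in range(nt):                   # forward sweep with nudging
        u = nudge(u + dt * model_rhs(u, +1.0), k, K)
    for k in reversed(range(nt)):         # backward sweep: -dt, reversed diffusion
        u = nudge(u - dt * model_rhs(u, -1.0), k, K)
    ua = u                                # updated estimate of the initial state

print("RMS error of zero first guess:", float(np.sqrt(np.mean(truth ** 2))))
print("RMS error after DBFN:         ", float(np.sqrt(np.mean((ua - truth) ** 2))))
```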
Mass balance modelling of contaminants in river basins: a flexible matrix approach.
Warren, Christopher; Mackay, Don; Whelan, Mick; Fox, Kay
2005-12-01
A novel and flexible approach is described for simulating the behaviour of chemicals in river basins. A number (n) of river reaches are defined and their connectivity is described by entries in an n x n matrix. Changes in segmentation can be readily accommodated by altering the matrix entries, without the need for model revision. Two models are described. The simpler QMX-R model only considers advection and an overall loss due to the combined processes of volatilization, net transfer to sediment and degradation. The rate constant for the overall loss is derived from fugacity calculations for a single segment system. The more rigorous QMX-F model performs fugacity calculations for each segment and explicitly includes the processes of advection, evaporation, water-sediment exchange and degradation in both water and sediment. In this way chemical exposure in all compartments (including equilibrium concentrations in biota) can be estimated. Both models are designed to serve as intermediate-complexity exposure assessment tools for river basins with relatively low data requirements. By considering the spatially explicit nature of emission sources and the changes in concentration which occur with transport in the channel system, the approach offers significant advantages over simple one-segment simulations while being more readily applicable than more sophisticated, highly segmented, GIS-based models.
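A sketch of the flexible-matrix bookkeeping at roughly the QMX-R level of detail: n reaches, a connectivity matrix, per-reach emissions, and a single first-order overall loss per reach, solved with one linear solve for the steady-state outflow loads. The network, rate constants, and flows below are invented, and the real QMX models derive their loss rates from fugacity calculations.

```python
# Hedged sketch of the flexible-matrix idea: C[j, i] = 1 if reach j discharges
# into reach i; outflow loads L satisfy L = diag(survive) @ (emission + C.T @ L).
# All numbers are illustrative placeholders.
import numpy as np

n = 4
C = np.zeros((n, n))
C[0, 2] = C[1, 2] = C[2, 3] = 1.0             # reaches 0 and 1 join into 2, then 3

emission = np.array([1.0, 0.5, 0.0, 0.2])     # g/s discharged into each reach
k = np.array([0.05, 0.05, 0.02, 0.02])        # 1/h overall loss rate (volat.+sed.+deg.)
tau = np.array([2.0, 3.0, 5.0, 4.0])          # h residence time of each reach
flow = np.array([5.0, 4.0, 9.0, 9.5])         # m3/s outflow of each reach

survive = np.exp(-k * tau)                    # fraction of the inflow load surviving
L = np.linalg.solve(np.eye(n) - survive[:, None] * C.T, survive * emission)
conc = L / flow                               # g/m3 at the downstream end of each reach
print("outflow loads (g/s):", np.round(L, 3))
print("concentrations (g/m3):", np.round(conc, 4))
```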
NASA Tech Briefs, November 2007
NASA Technical Reports Server (NTRS)
2007-01-01
Topics include: Wireless Measurement of Contact and Motion Between Contact Surfaces; Wireless Measurement of Rotation and Displacement Rate; Portable Microleak-Detection System; Free-to-Roll Testing of Airplane Models in Wind Tunnels; Cryogenic Shrouds for Testing Thermal-Insulation Panels; Optoelectronic System Measures Distances to Multiple Targets; Tachometers Derived From a Brushless DC Motor; Algorithm-Based Fault Tolerance for Numerical Subroutines; Computational Support for Technology- Investment Decisions; DSN Resource Scheduling; Distributed Operations Planning; Phase-Oriented Gear Systems; Freeze Tape Casting of Functionally Graded Porous Ceramics; Electrophoretic Deposition on Porous Non- Conductors; Two Devices for Removing Sludge From Bioreactor Wastewater; Portable Unit for Metabolic Analysis; Flash Diffusivity Technique Applied to Individual Fibers; System for Thermal Imaging of Hot Moving Objects; Large Solar-Rejection Filter; Improved Readout Scheme for SQUID-Based Thermometry; Error Rates and Channel Capacities in Multipulse PPM; Two Mathematical Models of Nonlinear Vibrations; Simpler Adaptive Selection of Golomb Power-of- Two Codes; VCO PLL Frequency Synthesizers for Spacecraft Transponders; Wide Tuning Capability for Spacecraft Transponders; Adaptive Deadband Synchronization for a Spacecraft Formation; Analysis of Performance of Stereoscopic-Vision Software; Estimating the Inertia Matrix of a Spacecraft; Spatial Coverage Planning for Exploration Robots; and Increasing the Life of a Xenon-Ion Spacecraft Thruster.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hileman, B.
Some changes are noted in the concern shown by top levels of the United States government regarding the problem of acid rain. A recent government report indicates that the problem is serious enough to warrant a search for immediate solutions, that emissions of sulfur dioxide and nitrogen oxides from human activities are at least 10 times greater than those from natural sources, that the areas receiving the highest deposition are within and downwind of the major source regions, and that some lakes in the major receptor areas have become more acidic in the past two decades. A National Research Council report agrees that the current increase of acidic substances in the environment cannot be arising from natural causes. The conclusion is based on analysis of historical trends, a comparison of the historical molar ratio of sulfur dioxide to nitrogen oxides in emissions with the molar ratio of sulfates to nitrates in deposition, and theoretical calculations based on laboratory studies of the chemical reactions involved in conversion of the oxides to sulfates and nitrates. Confidence in current mathematical models describing the movement of acid-forming pollutants over long distances is not high. These models have not been compared with each other or with simpler schemes.
Raisali, Gholamreza; Mirzakhanian, Lalageh; Masoudi, Seyed Farhad; Semsarha, Farid
2013-01-01
In this work the numbers of DNA single-strand breaks (SSB) and double-strand breaks (DSB) due to direct and indirect effects of Auger electrons from incorporated (123)I and (125)I have been calculated using the Geant4-DNA toolkit. We performed and compared calculations for several cases: (125)I versus (123)I, different source positions, and direct versus indirect breaks, to study the capability of Geant4-DNA for calculating DNA damage yields. Two different simple geometries of a 41 base pair segment of B-DNA have been simulated. The (123)I source was located in (123)IdUrd, while three different locations were considered for (125)I. The results showed that the simpler geometry is sufficient for direct break calculations, while the indirect damage yield is more sensitive to the helical shape of DNA. For (123)I Auger electrons, the average number of DSB due to direct hits is almost twice the number due to indirect hits. Furthermore, a comparison of the average numbers of SSB and DSB caused by Auger electrons of (125)I and (123)I in (125)IdUrd and (123)IdUrd shows that (125)I is 1.5 times more effective than (123)I per decay. The results are in reasonable agreement with previous experimental and theoretical results, which demonstrates the applicability of the Geant4-DNA toolkit to nanodosimetry calculations; the toolkit benefits from open-source accessibility, and the DNA models used in this work save computational time. The results also showed that the simpler geometry is suitable for direct break calculations, while for the indirect damage yield the more precise model is preferred.
Modeling, simulation, and analysis of optical remote sensing systems
NASA Technical Reports Server (NTRS)
Kerekes, John Paul; Landgrebe, David A.
1989-01-01
Remote Sensing of the Earth's resources from space-based sensors has evolved in the past 20 years from a scientific experiment to a commonly used technological tool. The scientific applications and engineering aspects of remote sensing systems have been studied extensively. However, most of these studies have been aimed at understanding individual aspects of the remote sensing process while relatively few have studied their interrelations. A motivation for studying these interrelationships has arisen with the advent of highly sophisticated configurable sensors as part of the Earth Observing System (EOS) proposed by NASA for the 1990's. Two approaches to investigating remote sensing systems are developed. In one approach, detailed models of the scene, the sensor, and the processing aspects of the system are implemented in a discrete simulation. This approach is useful in creating simulated images with desired characteristics for use in sensor or processing algorithm development. A less complete, but computationally simpler method based on a parametric model of the system is also developed. In this analytical model the various informational classes are parameterized by their spectral mean vector and covariance matrix. These class statistics are modified by models for the atmosphere, the sensor, and processing algorithms and an estimate made of the resulting classification accuracy among the informational classes. Application of these models is made to the study of the proposed High Resolution Imaging Spectrometer (HRIS). The interrelationships among observational conditions, sensor effects, and processing choices are investigated with several interesting results.
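A hedged illustration of the parametric idea described above: once each informational class is summarized by a mean vector and covariance matrix (after atmosphere, sensor, and processing models have modified them), a standard way to estimate separability is the Bhattacharyya bound on pairwise classification error. This is a generic sketch of that calculation, not the authors' exact formulation; the class statistics are invented.

```python
import numpy as np

def bhattacharyya_error_bound(mu1, cov1, mu2, cov2, p1=0.5, p2=0.5):
    """Upper bound on two-class Gaussian classification error via the
    Bhattacharyya distance, computed from class means and covariances."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    B = term1 + term2
    return np.sqrt(p1 * p2) * np.exp(-B)

# Illustrative class statistics (e.g. after applying sensor-noise and
# atmospheric models to the original spectral statistics).
mu_a, cov_a = np.array([0.2, 0.4]), np.array([[0.01, 0.0], [0.0, 0.02]])
mu_b, cov_b = np.array([0.3, 0.5]), np.array([[0.02, 0.0], [0.0, 0.02]])
print("pairwise error bound:", bhattacharyya_error_bound(mu_a, cov_a, mu_b, cov_b))
```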
Comparison of wavefront sensor models for simulation of adaptive optics.
Wu, Zhiwen; Enmark, Anita; Owner-Petersen, Mette; Andersen, Torben
2009-10-26
The new generation of extremely large telescopes will have adaptive optics. Due to the complexity and cost of such systems, it is important to simulate their performance before construction. Most systems planned will have Shack-Hartmann wavefront sensors. Different mathematical models are available for simulation of such wavefront sensors. The choice of wavefront sensor model strongly influences computation time and simulation accuracy. We have studied the influence of three wavefront sensor models on performance calculations for a generic adaptive optics (AO) system designed for K-band operation of a 42 m telescope. The performance of this AO system has been investigated both for reduced wavelengths and for reduced r(0) in the K band. The telescope AO system was designed for K-band operation, that is, both the subaperture size and the actuator pitch were matched to a fixed value of r(0) in the K band. We find that under certain conditions, such as investigating the limiting guide star magnitude for large Strehl ratios, a full model based on Fraunhofer propagation to the subimages is significantly more accurate. It does, however, require long computation times. The shortcomings of simpler models, based either on direct use of the average wavefront tilt over the subapertures for actuator control or on use of the average tilt to move a precalculated point spread function in the subimages, are most pronounced in studies of system limitations to operating parameter variations. In the long run, efficient parallelization techniques may be developed to overcome the problem.
Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method
NASA Astrophysics Data System (ADS)
Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao
2017-03-01
Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames used in dynamic imaging. The kernel method for image reconstruction has been developed to improve reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, demonstrates that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
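A toy sketch of the general kernel-method idea referenced above (the image represented as x = K alpha, with the kernel folded into the forward projector and updated by an EM-type iteration). This is not the specific HYPR kernel proposed in the paper; the system matrix, kernel, and data are all synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_det = 64, 96

P = rng.random((n_det, n_pix)) * (rng.random((n_det, n_pix)) < 0.1)  # toy system matrix
x_true = rng.random(n_pix)
y = rng.poisson(P @ x_true + 0.1)                                    # noisy projections

# Kernel matrix built from prior features (here a smoothed copy of the image);
# in the paper the kernel is derived from high-count composite data.
prior = x_true + 0.1 * rng.standard_normal(n_pix)
K = np.exp(-(prior[:, None] - prior[None, :])**2 / 0.02)
K /= K.sum(axis=1, keepdims=True)

# Kernelized EM: the image is x = K @ alpha and the kernel is folded into P.
PK = P @ K
alpha = np.ones(n_pix)
sens = PK.T @ np.ones(n_det)
for _ in range(50):
    alpha *= (PK.T @ (y / (PK @ alpha + 1e-9))) / (sens + 1e-9)
x_rec = K @ alpha
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```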
Research on key technology of yacht positioning based on binocular parallax
NASA Astrophysics Data System (ADS)
Wang, Wei; Wei, Ping; Liu, Zengzhi
2016-10-01
Yachting has become a fashionable form of entertainment. However, obtaining the precise location of a yacht docked at a port is a concern for yacht managers. To address this issue, we adopt a positioning method based on the principle of binocular parallax and background difference. Binocular parallax uses two cameras to obtain multiple views of the yacht, based on the geometric principles of imaging. To simplify the localization problem, we install an LED indicator as a key point on the yacht and let it flash at a fixed frequency during both day and night. Once the distance between the LED and the cameras is obtained, locating the yacht is straightforward. Compared with traditional positioning methods, this method is simpler and easier to implement. In this paper, we study the yacht positioning method using the LED indicator. A simulation experiment was conducted with a yacht model at a distance of 3 meters. The experimental results show that our method is feasible and easy to implement, with a positioning error of about 15%.
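The core of the binocular parallax principle is the standard depth-from-disparity relation; this tiny sketch uses the textbook formula with illustrative numbers (not the camera parameters used in the paper).

```python
# Depth from binocular parallax: Z = f * B / d, where f is the focal length in
# pixels, B the camera baseline and d the disparity of the LED key point between
# the left and right images. All numbers below are illustrative.
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(focal_px=800.0, baseline_m=0.25, disparity_px=66.7))  # about 3 m
```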
Web-services-based spatial decision support system to facilitate nuclear waste siting
NASA Astrophysics Data System (ADS)
Huang, L. Xinglai; Sheng, Grant
2006-10-01
The availability of spatial web services enables data sharing among managers, decision and policy makers and other stakeholders in much simpler ways than before and subsequently has created completely new opportunities in the process of spatial decision making. Though generally designed for a certain problem domain, web-services-based spatial decision support systems (WSDSS) can provide a flexible problem-solving environment to explore the decision problem, understand and refine problem definition, and generate and evaluate multiple alternatives for decision. This paper presents a new framework for the development of a web-services-based spatial decision support system. The WSDSS is comprised of distributed web services that either have their own functions or provide different geospatial data and may reside in different computers and locations. WSDSS includes six key components, namely: database management system, catalog, analysis functions and models, GIS viewers and editors, report generators, and graphical user interfaces. In this study, the architecture of a web-services-based spatial decision support system to facilitate nuclear waste siting is described as an example. The theoretical, conceptual and methodological challenges and issues associated with developing web services-based spatial decision support system are described.
DeGiorgio, Michael; Jakobsson, Mattias; Rosenberg, Noah A
2009-09-22
Studies of worldwide human variation have discovered three trends in summary statistics as a function of increasing geographic distance from East Africa: a decrease in heterozygosity, an increase in linkage disequilibrium (LD), and a decrease in the slope of the ancestral allele frequency spectrum. Forward simulations of unlinked loci have shown that the decline in heterozygosity can be described by a serial founder model, in which populations migrate outward from Africa through a process where each of a series of populations is formed from a subset of the previous population in the outward expansion. Here, we extend this approach by developing a retrospective coalescent-based serial founder model that incorporates linked loci. Our model both recovers the observed decline in heterozygosity with increasing distance from Africa and produces the patterns observed in LD and the ancestral allele frequency spectrum. Surprisingly, although migration between neighboring populations and limited admixture between modern and archaic humans can be accommodated in the model while continuing to explain the three trends, a competing model in which a wave of outward modern human migration expands into a series of preexisting archaic populations produces nearly opposite patterns to those observed in the data. We conclude by developing a simpler model to illustrate that the feature that permits the serial founder model but not the archaic persistence model to explain the three trends observed with increasing distance from Africa is its incorporation of a cumulative effect of genetic drift as humans colonized the world.
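The paper's model is coalescent-based; the forward sketch below is only meant to illustrate the qualitative trend it explains, the stepwise loss of heterozygosity under serial founder events, for unlinked loci with illustrative population sizes.

```python
import numpy as np

rng = np.random.default_rng(1)
n_loci, N_large, N_founders, n_colonies, growth_gens = 2000, 1000, 50, 20, 10

p = rng.uniform(0.05, 0.95, n_loci)           # ancestral allele frequencies
het = []
for colony in range(n_colonies):
    # Founding bottleneck: sample 2*N_founders gene copies from the parent deme.
    p = rng.binomial(2 * N_founders, p) / (2 * N_founders)
    # A few generations of drift at the post-expansion size.
    for _ in range(growth_gens):
        p = rng.binomial(2 * N_large, p) / (2 * N_large)
    het.append(np.mean(2 * p * (1 - p)))       # expected heterozygosity

print("heterozygosity at colonies 1, 10, 20:",
      round(het[0], 3), round(het[9], 3), round(het[-1], 3))
```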
Mathematical Model for a Simplified Calculation of the Input Momentum Coefficient for AFC Purposes
NASA Astrophysics Data System (ADS)
Hirsch, Damian; Gharib, Morteza
2016-11-01
Active Flow Control (AFC) is an emerging technology which aims at enhancing the aerodynamic performance of flight vehicles (i.e., to save fuel). A viable AFC system must consider the limited resources available on a plane for attaining performance goals. A higher performance goal (i.e., airplane incremental lift) demands a higher input fluidic requirement (i.e., mass flow rate). Therefore, the key requirement for a successful and practical design is to minimize power input while maximizing performance to achieve design targets. One of the most used design parameters is the input momentum coefficient Cμ. The difficulty associated with Cμ lies in obtaining the parameters for its calculation. In the literature two main approaches can be found, which both have their own disadvantages (assumptions, difficult measurements). A new, much simpler calculation approach will be presented that is based on a mathematical model that can be applied to most jet designs (i.e., steady or sweeping jets). The model-incorporated assumptions will be justified theoretically as well as experimentally. Furthermore, the model's capabilities are exploited to give new insight to the AFC technology and its physical limitations. Supported by Boeing.
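For reference, the conventional definition of the input momentum coefficient referred to above is the jet momentum flux normalized by the free-stream dynamic pressure and a reference area; the authors' simplified calculation model itself is not reproduced here.

```latex
% Conventional definition of the input momentum coefficient for a blowing jet.
C_\mu \;=\; \frac{\dot{m}\, U_{\mathrm{jet}}}{q_\infty\, S_{\mathrm{ref}}}
      \;=\; \frac{\dot{m}\, U_{\mathrm{jet}}}{\tfrac{1}{2}\,\rho_\infty U_\infty^{2}\, S_{\mathrm{ref}}}
```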
Recognizing sights, smells, and sounds with gnostic fields.
Kanan, Christopher
2013-01-01
Mammals rely on vision, audition, and olfaction to remotely sense stimuli in their environment. Determining how the mammalian brain uses this sensory information to recognize objects has been one of the major goals of psychology and neuroscience. Likewise, researchers in computer vision, machine audition, and machine olfaction have endeavored to discover good algorithms for stimulus classification. Almost 50 years ago, the neuroscientist Jerzy Konorski proposed a theoretical model in his final monograph in which competing sets of "gnostic" neurons sitting atop sensory processing hierarchies enabled stimuli to be robustly categorized, despite variations in their presentation. Much of what Konorski hypothesized has been remarkably accurate, and neurons with gnostic-like properties have been discovered in visual, aural, and olfactory brain regions. Surprisingly, there have not been any attempts to directly transform his theoretical model into a computational one. Here, I describe the first computational implementation of Konorski's theory. The model is not domain specific, and it surpasses the best machine learning algorithms on challenging image, music, and olfactory classification tasks, while also being simpler. My results suggest that criticisms of exemplar-based models of object recognition as being computationally intractable due to limited neural resources are unfounded.
New developments in isotropic turbulent models for FENE-P fluids
NASA Astrophysics Data System (ADS)
Resende, P. R.; Cavadas, A. S.
2018-04-01
The evolution of viscoelastic turbulence models in recent years has been significant due to advances in direct numerical simulation (DNS), which have allowed the evolution of viscoelastic effects to be captured in detail and have supported the development of viscoelastic closures. New viscoelastic closures are proposed for viscoelastic fluids described by the finitely extensible nonlinear elastic-Peterlin constitutive model. One of the closures, developed in the context of isotropic turbulence models, consists of a modification of the turbulent viscosity to include an elastic effect, capable of predicting the behaviour for different drag reductions with good accuracy. Another closure, essential for predicting drag reduction, concerns the viscoelastic term involving the velocity and conformation tensor fluctuations. The DNS data show the strong impact of this term on correctly predicting drag reduction, and for this reason a simpler closure capable of predicting the viscoelastic behaviour with good performance is proposed. In addition, a new relation is developed to predict drag reduction based on the trace of the conformation tensor at the wall, eliminating the need for the usual Weissenberg and Reynolds number parameters, which depend on the friction velocity. This allows future developments for complex geometries.
NASA Astrophysics Data System (ADS)
Hajigeorgiou, Photos G.
2016-12-01
An analytical model for the diatomic potential energy function that was recently tested as a universal function (Hajigeorgiou, 2010) has been further modified and tested as a suitable model for direct-potential-fit analysis. Applications are presented for the ground electronic states of three diatomic molecules: oxygen, carbon monoxide, and hydrogen fluoride. The adjustable parameters of the extended Lennard-Jones potential model are determined through nonlinear regression by fits to calculated rovibrational energy term values or experimental spectroscopic line positions. The model is shown to lead to reliable, compact and simple representations for the potential energy functions of these systems and could therefore be classified as a suitable and attractive model for direct-potential-fit analysis.
USDA-ARS?s Scientific Manuscript database
As baits, fermented food products are generally attractive to many types of insects, making it difficult to sort through nontarget insects to monitor a pest species of interest. We test the hypothesis that a chemically simpler and more defined attractant developed for a target insect is more specifi...
Neophytou, Andreas M; Picciotto, Sally; Brown, Daniel M; Gallagher, Lisa E; Checkoway, Harvey; Eisen, Ellen A; Costello, Sadie
2018-02-13
Prolonged exposures can have complex relationships with health outcomes, as timing, duration, and intensity of exposure are all potentially relevant. Summary measures such as cumulative exposure or average intensity of exposure may not fully capture these relationships. We applied penalized and unpenalized distributed lag non-linear models (DLNMs) with flexible exposure-response and lag-response functions in order to examine the association between crystalline silica exposure and mortality from lung cancer and non-malignant respiratory disease in a cohort study of 2,342 California diatomaceous earth workers, followed 1942-2011. We also assessed associations using simple measures of cumulative exposure assuming linear exposure-response and constant lag-response. Measures of association from DLNMs were generally higher than from simpler models. Rate ratios from penalized DLNMs corresponding to average daily exposures of 0.4 mg/m3 during lag years 31-50 prior to the age of observed cases were 1.47 (95% confidence interval (CI) 0.92, 2.35) for lung cancer and 1.80 (95% CI: 1.14, 2.85) for non-malignant respiratory disease. Rate ratios from the simpler models for the same exposure scenario were 1.15 (95% CI: 0.89-1.48) and 1.23 (95% CI: 1.03-1.46) respectively. Longitudinal cohort studies of prolonged exposures and chronic health outcomes should explore methods allowing for flexibility and non-linearities in the exposure-lag-response. © The Author(s) 2018. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
What do we gain from simplicity versus complexity in species distribution models?
Merow, Cory; Smith, Matthew J.; Edwards, Thomas C.; Guisan, Antoine; McMahon, Sean M.; Normand, Signe; Thuiller, Wilfried; Wuest, Rafael O.; Zimmermann, Niklaus E.; Elith, Jane
2014-01-01
Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence–environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence–environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building ‘under fit’ models, having insufficient flexibility to describe observed occurrence–environment relationships, we risk misunderstanding the factors shaping species distributions. By building ‘over fit’ models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing under fitting with over fitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.
Common sense reasoning about petroleum flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, S.
1981-02-01
This paper describes an expert system for understanding and reasoning in a petroleum resources domain. A basic model is implemented in FRL (Frame Representation Language). Expertise is encoded as rule frames. The model consists of a set of episodic contexts which are sequentially generated over time. Reasoning occurs in separate reasoning contexts consisting of a buffer frame and packets of rules; these function similarly to small production systems. Reasoning is linked to the model through an interface of sentinels (instance-driven demons) which notice anomalous conditions. Heuristics and metaknowledge are used through the creation of further reasoning contexts which overlay the simpler ones.
Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario
2016-01-01
The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and measurement uncertainty difficult. Therefore, unlike for other, simpler instruments, error compensation is not standardized. Detailed coordinate error compensation models are generally based on treating the CMM as a rigid body and require a detailed mapping of the CMM's behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length errors by axis and their integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Chao; Xu, Zhijie; Lai, Canhai
A hierarchical model calibration and validation is proposed for quantifying the confidence level of mass transfer prediction using a computational fluid dynamics (CFD) model, where solvent-based carbon dioxide (CO2) capture is simulated and simulation results are compared to parallel bench-scale experimental data. Two unit problems with increasing levels of complexity are proposed to break down the complex physical/chemical processes of solvent-based CO2 capture into relatively simpler problems, separating the effects of physical transport and chemical reaction. This paper focuses on the calibration and validation of the first unit problem, i.e. CO2 mass transfer across a falling ethanolamine (MEA) film in the absence of chemical reaction. This problem is investigated both experimentally and numerically using nitrous oxide (N2O) as a surrogate for CO2. To capture the motion of the gas-liquid interface, a volume of fluid method is employed together with a one-fluid formulation to compute the mass transfer between the two phases. Bench-scale parallel experiments are designed and conducted to validate and calibrate the CFD models using a general Bayesian calibration. Two important transport parameters, namely Henry's constant and gas diffusivity, are calibrated to produce the posterior distributions, which will be used as the input for the second unit problem to address the chemical absorption of CO2 across the MEA falling film, where both mass transfer and chemical reaction are involved.
Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A
2014-01-01
Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
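A minimal sketch of the two analyses described above, applied to a cheap stand-in function rather than an actual ABM: a Monte Carlo uncertainty run over the input space, followed by a crude first-order sensitivity estimate from binned conditional means. The authors use quasi-random sampling and formal variance decomposition; the surrogate model, input names, and estimator here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def abm_surrogate(x):
    """Cheap stand-in for an ABM output (e.g. hectares of farmland conserved);
    the real framework would run the agent-based simulation for each input set."""
    incentive, price, network = x.T
    return 100 * incentive + 20 * incentive * network + 5 * price + rng.normal(0, 2, len(x))

n = 20000
X = rng.uniform(0, 1, size=(n, 3))          # three uncertain inputs
Y = abm_surrogate(X)

# Uncertainty analysis: distribution of outcomes, including the extremes.
print("mean %.1f, 5th pct %.1f, 95th pct %.1f" %
      (Y.mean(), np.percentile(Y, 5), np.percentile(Y, 95)))

# Crude first-order sensitivity: variance of the binned conditional mean E[Y|X_i].
def first_order_index(xcol, y, bins=20):
    edges = np.quantile(xcol, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(xcol, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

for i, name in enumerate(["incentive", "price", "network"]):
    print(name, round(first_order_index(X[:, i], Y), 3))
```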
Interaction in Balanced Cross Nested Designs
NASA Astrophysics Data System (ADS)
Ramos, Paulo; Mexia, João T.; Carvalho, Francisco; Covas, Ricardo
2011-09-01
Commutative Jordan Algebras (CJA) are used in the study of mixed models obtained, through crossing and nesting, from simpler ones. In the study of cross-nested models the interaction between nested factors has been systematically discarded. However, this can constitute an artificial simplification of the models. We point out that, when two crossed factors interact, the interaction is symmetric, with both factors playing equivalent roles, while when two nested factors interact, the interaction is determined by the nesting factor. These interactions will be called interactions with nesting. In this work we present a coherent formulation of the algebraic structure of such models, enabling the choice of families of interactions between crossed and nested factors using binary operations on CJA.
Theoretical and Experimental Analysis of an Evolutionary Social-Learning Game
2012-01-13
Nettle outlines the circumstances in which verbal communication is evolutionarily adaptive, and why few species have developed the ability to use...language despite its apparent advantages [28]. Nettle uses a significantly simpler model than the Cultaptation game, but provides insight that may be useful...provided by Kearns et al. was designed as an online algorithm, so it only returns the near-optimal action for the state at the root of the search tree
Mesoscale Fracture Analysis of Multiphase Cementitious Composites Using Peridynamics
Yaghoobi, Amin; Chorzepa, Mi G.; Kim, S. Sonny; Durham, Stephan A.
2017-01-01
Concrete is a complex heterogeneous material, and thus it is important to develop numerical modeling methods to enhance the prediction accuracy of the fracture mechanism. In this study, a two-dimensional mesoscale model is developed using a non-ordinary state-based peridynamic (NOSBPD) method. Fracture in a concrete cube specimen subjected to pure tension is studied. The heterogeneous material, consisting of coarse aggregates, interfacial transition zones, air voids and cementitious matrix, is characterized as particle points in the two-dimensional mesoscale model. Coarse aggregates and voids are generated using uniform probability distributions, and a statistical study is provided to account for the effect of random distributions of the constituent materials. In obtaining the steady-state response, an incremental and iterative solver is adopted for the dynamic relaxation method. Load-displacement curves and damage patterns are compared with available experimental and finite element analysis (FEA) results. Although the proposed model uses much simpler material damage models and discretization schemes, the load-displacement curves show no difference from the FEA results. Furthermore, no mesh refinement is necessary, as fracture is inherently characterized by bond breakages. Finally, a sensitivity study is conducted to understand the effect of aggregate volume fraction and porosity on the load capacity of the proposed mesoscale model. PMID:28772518
Yobbi, D.K.
2000-01-01
A nonlinear least-squares regression technique for estimation of ground-water flow model parameters was applied to an existing model of the regional aquifer system underlying west-central Florida. The regression technique minimizes the differences between measured and simulated water levels. Regression statistics, including parameter sensitivities and correlations, were calculated for reported parameter values in the existing model. Optimal parameter values for selected hydrologic variables of interest are estimated by nonlinear regression. Optimal estimates of parameter values are about 140 times greater than and about 0.01 times less than reported values. Independently estimating all parameters by nonlinear regression was impossible, given the existing zonation structure and number of observations, because of parameter insensitivity and correlation. Although the model yields parameter values similar to those estimated by other methods and reproduces the measured water levels reasonably accurately, a simpler parameter structure should be considered. Some possible ways of improving model calibration are to: (1) modify the defined parameter-zonation structure by omitting and/or combining parameters to be estimated; (2) carefully eliminate observation data based on evidence that they are likely to be biased; (3) collect additional water-level data; (4) assign values to insensitive parameters, and (5) estimate the most sensitive parameters first, then, using the optimized values for these parameters, estimate the entire data set.
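A hedged sketch of the regression idea described above: minimize the misfit between measured and simulated water levels over a parameter vector, then inspect sensitivities and correlations from the Jacobian at the optimum. A toy analytical head function stands in for the regional ground-water flow model; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

x_obs = np.linspace(0.0, 10.0, 25)                    # observation locations (km)

def simulate_heads(params, x):
    """Toy stand-in for a ground-water flow model: heads as a smooth function of
    log-transmissivity and recharge (the real case would run the regional model)."""
    log_T, recharge = params
    return 30.0 - recharge * x**2 / (2 * 10.0**log_T)

true_params = np.array([2.0, 0.5])
heads_obs = simulate_heads(true_params, x_obs) + 0.1 * np.random.randn(x_obs.size)

def residuals(params):
    # Differences between simulated and measured water levels.
    return simulate_heads(params, x_obs) - heads_obs

fit = least_squares(residuals, x0=[1.0, 0.1])
print("estimated parameters:", fit.x)

# Approximate parameter correlation from the Jacobian at the optimum, analogous
# to the regression statistics (sensitivities, correlations) discussed above.
J = fit.jac
cov = np.linalg.inv(J.T @ J)
corr = cov[0, 1] / np.sqrt(cov[0, 0] * cov[1, 1])
print("parameter correlation:", round(corr, 3))
```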
Modeling the direction-continuous time-of-arrival in head-related transfer functions
Ziegelwanger, Harald; Majdak, Piotr
2015-01-01
Head-related transfer functions (HRTFs) describe the filtering of the incoming sound by the torso, head, and pinna. As a consequence of the propagation path from the source to the ear, each HRTF contains a direction-dependent, broadband time-of-arrival (TOA). TOAs are usually estimated independently for each direction from HRTFs, a method prone to artifacts and limited by the spatial sampling. In this study, a continuous-direction TOA model combined with an outlier-removal algorithm is proposed. The model is based on a simplified geometric representation of the listener, and his/her arbitrary position within the HRTF measurement. The outlier-removal procedure uses the extreme studentized deviation test to remove implausible TOAs. The model was evaluated for numerically calculated HRTFs of sphere, torso, and pinna under various conditions. The accuracy of estimated parameters was within the resolution given by the sampling rate. Applied to acoustically measured HRTFs of 172 listeners, the estimated parameters were consistent with realistic listener geometry. The outlier removal further improved the goodness-of-fit, particularly for some problematic fits. The comparison with a simpler model that fixed the listener position to the center of the measurement geometry showed a clear advantage of listener position as an additional free model parameter. PMID:24606268
Ariyama, Kaoru; Horita, Hiroshi; Yasui, Akemi
2004-09-22
The composition of concentration ratios of 19 inorganic elements to Mg (hereinafter referred to as the 19-element/Mg composition) was combined with chemometric techniques to determine the geographic origin (Japan or China) of Welsh onions (Allium fistulosum L.). Using a composition of element ratios has the advantage of simplified sample preparation, and it was possible to determine the geographic origin of a Welsh onion within 2 days. The classical technique based on 20 element concentrations was also used alongside the new, simpler one based on the 19-element/Mg composition in order to validate the new technique. Twenty elements, Na, P, K, Ca, Mg, Mn, Fe, Cu, Zn, Sr, Ba, Co, Ni, Rb, Mo, Cd, Cs, La, Ce, and Tl, in 244 Welsh onion samples were analyzed by flame atomic absorption spectroscopy, inductively coupled plasma atomic emission spectrometry, and inductively coupled plasma mass spectrometry. Linear discriminant analysis (LDA) was applied to the 20-element concentrations and to the 19-element/Mg composition, and soft independent modeling of class analogy (SIMCA) was applied to the 19-element/Mg composition. The results showed that techniques based on the 19-element/Mg composition were effective. LDA based on the 19-element/Mg composition, used to classify samples from Japan and from Shandong, Shanghai, and Fujian in China, classified the 101 samples used for modeling 97% correctly and predicted another 119 samples (excluding 24 nonauthentic samples) 93% correctly. In 10 repeated SIMCA discriminations based on the 19-element/Mg composition, modeled using 101 samples, 220 samples from known production areas (including the samples used for modeling and excluding 24 nonauthentic samples) were predicted 92% correctly.
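A minimal sketch of the classification mechanics on element/Mg ratios, using synthetic concentrations as stand-ins for the measured data and scikit-learn's LDA; the published discriminant functions and sample sets are not reproduced here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_per_class, n_elements = 60, 20   # 20 elements measured; ratios taken to Mg

# Synthetic element concentrations for two origins (stand-ins for Japan / China).
origin_a = rng.lognormal(mean=0.0, sigma=0.3, size=(n_per_class, n_elements))
origin_b = rng.lognormal(mean=0.2, sigma=0.3, size=(n_per_class, n_elements))
X_conc = np.vstack([origin_a, origin_b])
y = np.array([0] * n_per_class + [1] * n_per_class)

mg_index = 4                                                  # column holding Mg
ratios = np.delete(X_conc, mg_index, axis=1) / X_conc[:, [mg_index]]  # 19 element/Mg ratios

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, ratios, y, cv=5)
print("cross-validated accuracy on element/Mg ratios:", scores.mean().round(2))
```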
NASA Astrophysics Data System (ADS)
Peterson, Gary; Abeytunge, Sanjeewa; Eastman, Zachary; Rajadhyaksha, Milind
2012-02-01
Reflectance confocal microscopy with a line scanning approach potentially offers a smaller, simpler and less expensive alternative to traditional point scanning methods for imaging living tissues. With one moving mechanical element (a galvanometric scanner), a linear array detector and off-the-shelf optics, we designed a compact (102 x 102 x 76 mm) line scanning confocal reflectance microscope (LSCRM) for imaging human tissues in vivo in a clinical setting. Custom-designed electronics based on field programmable gate array (FPGA) logic have been developed. With 405 nm illumination and a custom objective lens of numerical aperture 0.5, the lateral resolution was measured to be 0.8 um (calculated 0.64 um). The calculated optical sectioning is 3.2 um. Preliminary imaging shows nuclear and cellular detail in human skin and oral epithelium in vivo. Blood flow is also visualized in the deeper connective tissue (lamina propria) in oral mucosa. Since a line is confocal in only one dimension (parallel to the line) and not in the other, the detection is more sensitive to multiply scattered, out-of-focus background than the traditional point scanning configuration. Based on the results of our translational studies thus far, a simpler, smaller and lower-cost approach based on an LSCRM appears to be promising for clinical imaging.
Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows
NASA Astrophysics Data System (ADS)
Liang, Hong; Xu, Jiangrong; Chen, Jiangxing; Wang, Huili; Chai, Zhenhua; Shi, Baochang
2018-03-01
In this paper, we present a simple and accurate lattice Boltzmann (LB) model for immiscible two-phase flows which is able to deal with large density contrasts. The model uses two LB equations, one to solve the conservative Allen-Cahn equation and the other to solve the incompressible Navier-Stokes equations. A forcing distribution function is elaborately designed in the LB equation for the Navier-Stokes equations, which makes it much simpler than existing LB models. In addition, the proposed model achieves superior numerical accuracy compared with previous Allen-Cahn-type LB models. Several benchmark two-phase problems, including a static droplet, layered Poiseuille flow, and spinodal decomposition, are simulated to validate the present LB model. The present model achieves relatively small spurious velocities compared with other LB models, and the numerical results show good agreement with analytical solutions or available reference results. Lastly, we use the present model to investigate droplet impact on a thin liquid film at a large density ratio of 1000 and Reynolds numbers ranging from 20 to 500. The fascinating phenomenon of droplet splashing is successfully reproduced by the present model, and the numerically predicted spreading radius is found to obey the power law reported in the literature.
Highly Physical Solar Radiation Pressure Modeling During Penumbra Transitions
NASA Astrophysics Data System (ADS)
Robertson, Robert V.
Solar radiation pressure (SRP) is one of the major non-gravitational forces acting on spacecraft. Acceleration by radiation pressure depends on the radiation flux; on spacecraft shape, attitude, and mass; and on the optical properties of the spacecraft surfaces. Precise modeling of SRP is needed for dynamic satellite orbit determination, space mission design and control, and processing of data from space-based science instruments. During Earth penumbra transitions, sunlight passes through Earth's lower atmosphere and, in the process, its path, intensity, spectral composition, and shape are significantly affected. This dissertation presents a new method for highly physical SRP modeling in Earth's penumbra called Solar radiation pressure with Oblateness and Lower Atmospheric Absorption, Refraction, and Scattering (SOLAARS). The fundamental geometry and approach mirrors past work, where the solar radiation field is modeled using a number of light rays, rather than treating the Sun as a single point source. This dissertation aims to clarify this approach, simplify its implementation, and model previously overlooked factors. The complex geometries involved in modeling penumbra solar radiation fields are described in a more intuitive and complete way to simplify implementation. Atmospheric effects due to solar radiation passing through the troposphere and stratosphere are modeled, and the results are tabulated to significantly reduce computational cost. SOLAARS includes new, more efficient and accurate approaches to modeling atmospheric effects which allow us to consider the spatial and temporal variability in lower atmospheric conditions. A new approach to modeling the influence of Earth's polar flattening draws on past work to provide a relatively simple but accurate method for this important effect. Previous penumbra SRP models tend to lie at two extremes of complexity and computational cost, and so the significant improvement in accuracy provided by the complex models has often been lost in the interest of convenience and efficiency. This dissertation presents a simple model which provides an accurate alternative to the full, high precision SOLAARS model with reduced complexity and computational cost. This simpler method is based on curve fitting to results of the full SOLAARS model and is called SOLAARS Curve Fit (SOLAARS-CF). Both the high precision SOLAARS model and the simpler SOLAARS-CF model are applied to the Gravity Recovery and Climate Experiment (GRACE) satellites. Modeling results are compared to the sub-nm/s2 precision GRACE accelerometer data and the results of a traditional penumbra SRP model. These comparisons illustrate the improved accuracy of the SOLAARS and SOLAARS-CF models. A sensitivity analysis for the GRACE orbit illustrates the significance of various input parameters and features of the SOLAARS model on the results. The SOLAARS-CF model is applied to a study of penumbra SRP and the Earth flyby anomaly. Beyond the value of its results to the scientific community, this study provides an application example where the computational efficiency of the simplified SOLAARS-CF model is necessary. The Earth flyby anomaly is an open question in orbit determination which has gone unsolved for over 20 years. This study quantifies the influence of penumbra SRP modeling errors on the observed anomalies from the Galileo, Cassini, and Rosetta Earth flybys. The results of this study prove that penumbra SRP is not an explanation for or significant contributor to the Earth flyby anomaly.
A statistical approach to develop a detailed soot growth model using PAH characteristics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raj, Abhijeet; Celnik, Matthew; Shirley, Raphael
A detailed PAH growth model is developed, which is solved using a kinetic Monte Carlo algorithm. The model describes the structure and growth of planar PAH molecules and is referred to as the kinetic Monte Carlo-aromatic site (KMC-ARS) model. A detailed PAH growth mechanism based on reactions at radical sites available in the literature, together with additional reactions obtained from quantum chemistry calculations, is used to model the PAH growth processes. New rates for the reactions involved in the cyclodehydrogenation process for the formation of 6-member rings on PAHs are calculated in this work based on density functional theory simulations. The KMC-ARS model is validated by comparing experimentally observed ensembles of PAHs with the computed ensembles for a C2H2 and a C6H6 flame at different heights above the burner. The motivation for this model is the development of a detailed soot particle population balance model which describes the evolution of an ensemble of soot particles based on their PAH structure. However, at present incorporating such a detailed model into a population balance is computationally unfeasible. Therefore, a simpler model referred to as the site-counting model has been developed, which replaces the structural information of the PAH molecules by their functional groups augmented with statistical closure expressions. This closure is obtained from the KMC-ARS model, which is used to develop correlations and statistics in different flame environments that describe such PAH structural information. These correlations and statistics are implemented in the site-counting model, and results from the site-counting model and the KMC-ARS model are in good agreement. Additionally, the effect of steric hindrance in large PAH structures is investigated and correlations for sites unavailable for reaction are presented.
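A generic sketch of the kinetic Monte Carlo event loop underlying this kind of model (Gillespie-style selection of the next reaction weighted by rate, followed by an exponential waiting time). The site types, rate constants, and state update below are made up for illustration and do not represent the KMC-ARS chemistry.

```python
import numpy as np

rng = np.random.default_rng(4)

# Made-up site types and per-site rate constants (1/s); the real KMC-ARS model
# uses reaction rates from the PAH growth mechanism and DFT calculations.
site_counts = {"free_edge": 12, "zigzag": 6, "armchair": 4}
rate_per_site = {"free_edge": 2.0e3, "zigzag": 8.0e2, "armchair": 1.5e3}

t, t_end = 0.0, 1e-2
while t < t_end:
    names = list(site_counts)
    rates = np.array([site_counts[s] * rate_per_site[s] for s in names])
    total = rates.sum()
    if total == 0:
        break
    t += rng.exponential(1.0 / total)                   # waiting time to the next event
    chosen = names[rng.choice(len(names), p=rates / total)]  # pick event class by rate
    # Toy state update: a growth event converts one site of the chosen type into a
    # free edge (real updates follow the jump processes of the detailed mechanism).
    site_counts[chosen] -= 1
    site_counts["free_edge"] += 1

print("site counts after", t_end, "s:", site_counts)
```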
High-fidelity meshes from tissue samples for diffusion MRI simulations.
Panagiotaki, Eleftheria; Hall, Matt G; Zhang, Hui; Siow, Bernard; Lythgoe, Mark F; Alexander, Daniel C
2010-01-01
This paper presents a method for constructing detailed geometric models of tissue microstructure for synthesizing realistic diffusion MRI data. We construct three-dimensional mesh models from confocal microscopy image stacks using the marching cubes algorithm. Random-walk simulations within the resulting meshes provide synthetic diffusion MRI measurements. Experiments optimise simulation parameters and complexity of the meshes to achieve accuracy and reproducibility while minimizing computation time. Finally we assess the quality of the synthesized data from the mesh models by comparison with scanner data as well as synthetic data from simple geometric models and simplified meshes that vary only in two dimensions. The results support the extra complexity of the three-dimensional mesh compared to simpler models although sensitivity to the mesh resolution is quite robust.
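A minimal sketch of the marching cubes step using scikit-image (assuming a recent version where `measure.marching_cubes` is available); a synthetic binary volume stands in for a segmented confocal stack, and the mesh optimization and random-walk simulation steps are omitted.

```python
import numpy as np
from skimage import measure

# Synthetic "tissue" volume standing in for a segmented confocal image stack:
# a sphere of intracellular space inside a cube of extracellular space.
z, y, x = np.mgrid[-24:24, -24:24, -24:24]
volume = (x**2 + y**2 + z**2 < 15**2).astype(float)

# Marching cubes extracts a triangulated surface mesh at the chosen iso-level;
# such a mesh could then bound a random-walk diffusion MRI simulation.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print("mesh with", verts.shape[0], "vertices and", faces.shape[0], "triangles")
```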
Modeling of Biometric Identification System Using the Colored Petri Nets
NASA Astrophysics Data System (ADS)
Petrosyan, G. R.; Ter-Vardanyan, L. A.; Gaboutchian, A. V.
2015-05-01
In this paper we present a model of a biometric identification system transformed into Petri Nets. Petri Nets, as a graphical and mathematical tool, provide a uniform environment for modelling, formal analysis, and design of discrete event systems. The main objective of this paper is to introduce the fundamental concepts of Petri Nets to researchers and practitioners who work on the modelling and analysis of biometric identification systems, as well as to those who may potentially become involved in these areas. In addition, the paper introduces high-level Petri Nets in the form of Colored Petri Nets (CPN). The Colored Petri Net model describes the identification process in a much simpler way.
Should biomedical research be like Airbnb?
Bonazzi, Vivien R; Bourne, Philip E
2017-04-01
The thesis presented here is that biomedical research is based on the trusted exchange of services. That exchange would be conducted more efficiently if the trusted software platforms to exchange those services, if they exist, were more integrated. While simpler and narrower in scope than the services governing biomedical research, comparison to existing internet-based platforms, like Airbnb, can be informative. We illustrate how the analogy to internet-based platforms works and does not work and introduce The Commons, under active development at the National Institutes of Health (NIH) and elsewhere, as an example of the move towards platforms for research.
NASA Astrophysics Data System (ADS)
Wang, F.; Annable, M. D.; Jawitz, J. W.
2012-12-01
The equilibrium streamtube model (EST) has demonstrated the ability to accurately predict dense nonaqueous phase liquid (DNAPL) dissolution in laboratory experiments and numerical simulations. Here the model is applied to predict DNAPL dissolution at a PCE-contaminated dry cleaner site, located in Jacksonville, Florida. The EST is an analytical solution with field-measurable input parameters. Here, measured data from a field-scale partitioning tracer test were used to parameterize the EST model and the predicted PCE dissolution was compared to measured data from an in-situ alcohol (ethanol) flood. In addition, a simulated partitioning tracer test from a calibrated spatially explicit multiphase flow model (UTCHEM) was also used to parameterize the EST analytical solution. The ethanol prediction based on both the field partitioning tracer test and the UTCHEM tracer test simulation closely matched the field data. The PCE EST prediction showed a peak shift to an earlier arrival time that was concluded to be caused by well screen interval differences between the field tracer test and alcohol flood. This observation was based on a modeling assessment of potential factors that may influence predictions by using UTCHEM simulations. The imposed injection and pumping flow pattern at this site for both the partitioning tracer test and alcohol flood was more complex than the natural gradient flow pattern (NGFP). Both the EST model and UTCHEM were also used to predict PCE dissolution under natural gradient conditions, with much simpler flow patterns than the forced-gradient double five spot of the alcohol flood. The NGFP predictions based on parameters determined from tracer tests conducted with complex flow patterns underestimated PCE concentrations and total mass removal. This suggests that the flow patterns influence aqueous dissolution and that the aqueous dissolution under the NGFP is more efficient than dissolution under complex flow patterns.
Gorman, Jamie C; Crites, Michael J
2013-08-01
We report an experiment in which we investigated differential transfer between unimanual (one-handed), bimanual (two-handed), and intermanual (different peoples' hands) coordination modes. People perform some manual tasks faster than others ("mode effects"). However, little is known about transfer between coordination modes. To investigate differential transfer, we draw hypotheses from two perspectives--information based and constraint based--of bimanual and interpersonal coordination and skill acquisition. Participants drove a teleoperated rover around a circular path in sets of two 2-min trials using two of the different coordination modes. Speed and variability of the rover's path were measured. Order of coordination modes was manipulated to examine differential transfer and mode effects. Differential transfer analyses revealed patterns of positive transfer from simpler (localized spatiotemporal constraints) to more complex (distributed spatiotemporal constraints) coordination modes paired with negative transfer in the opposite direction. Mode effects indicated that intermanual performance was significantly faster than unimanual performance, and bimanual performance was intermediate. Importantly, all of these effects disappeared with practice. The observed patterns of differential transfer between coordination modes may be better accounted for by a constraint-based explanation of differential transfer than by an information-based one. Mode effects may be attributable to anticipatory movements based on dyads' access to mutual visual information. Although people may be faster using more-complex coordination modes, when operators transition between modes, they may be more effective transitioning from simpler (e.g., bimanual) to more complex (e.g., intermanual) modes than vice versa. However, this difference may be critical only for novel or rarely practiced tasks.
About the mechanism of ERP-system pilot test
NASA Astrophysics Data System (ADS)
Mitkov, V. V.; Zimin, V. V.
2018-05-01
In this paper the mathematical problem of defining the scope of a pilot test is stated as a quadratic programming task. The solution procedure uses the method of network programming, based on a structurally similar network representation of the criterion and constraints, which reduces the original problem to a sequence of simpler evaluation tasks. The evaluation tasks are solved by the method of dichotomous programming.
Handling Quality Requirements for Advanced Aircraft Design: Longitudinal Mode
1979-08-01
phases of air-to-air combat, for example). This is far simpler than the general problem of control law definition. However, the results of such...
Wu, Fei; Sioshansi, Ramteen
2017-05-25
Electric vehicles (EVs) hold promise to improve the energy efficiency and environmental impacts of transportation. However, widespread EV use can impose significant stress on electricity-distribution systems due to their added charging loads. This paper proposes a centralized EV charging-control model, which schedules the charging of EVs that have flexibility. This flexibility stems from EVs that are parked at the charging station for a longer duration of time than is needed to fully recharge the battery. The model is formulated as a two-stage stochastic optimization problem. The model captures the use of distributed energy resources and uncertainties around EV arrival times and charging demands upon arrival, non-EV loads on the distribution system, energy prices, and availability of energy from the distributed energy resources. We use a Monte Carlo-based sample-average approximation technique and an L-shaped method to solve the resulting optimization problem efficiently. We also apply a sequential sampling technique to dynamically determine the optimal size of the randomly sampled scenario tree to give a solution with a desired quality at minimal computational cost. Here, we demonstrate the use of our model on a Central-Ohio-based case study. We show the benefits of the model in reducing charging costs, negative impacts on the distribution system, and unserved EV-charging demand compared to simpler heuristics. Lastly, we also conduct sensitivity analyses to show how the model performs and the resulting costs and load profiles when the design of the station or EV-usage parameters are changed.
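The sample-average-approximation idea mentioned above can be illustrated with a deliberately simplified toy problem: the expected cost of a candidate charging schedule is approximated by averaging over Monte Carlo scenarios and then minimized. All quantities (time slots, feeder limit, demand, price and load distributions, penalty weight) are hypothetical, and the sketch omits the paper's two-stage structure and L-shaped decomposition.

```python
# Minimal sample-average-approximation sketch (illustrative only; not the
# paper's two-stage model or its L-shaped solution method).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = 24                                    # hourly slots (assumed)
capacity = 100.0                          # feeder limit in kW (assumed)
demand_kwh = 60.0                         # total EV energy to deliver (assumed)

# Monte Carlo scenarios for price and non-EV load (hypothetical distributions).
n_scen = 200
price = rng.normal(0.12, 0.03, size=(n_scen, T)).clip(min=0.01)      # $/kWh
base_load = rng.normal(60.0, 15.0, size=(n_scen, T)).clip(min=0.0)   # kW

def expected_cost(x):
    """Average (over scenarios) of energy cost plus an overload penalty."""
    energy_cost = (price * x).sum(axis=1)                  # per-scenario cost
    overload = np.clip(base_load + x - capacity, 0.0, None).sum(axis=1)
    return np.mean(energy_cost + 10.0 * overload)          # penalty weight assumed

cons = [{"type": "eq", "fun": lambda x: x.sum() - demand_kwh}]   # serve all demand
res = minimize(expected_cost, x0=np.full(T, demand_kwh / T),
               bounds=[(0.0, 22.0)] * T, constraints=cons, method="SLSQP")
print(res.x.round(2))    # charging schedule that is cheap on average
```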
A simple reactive-transport model of calcite precipitation in soils and other porous media
NASA Astrophysics Data System (ADS)
Kirk, G. J. D.; Versteegen, A.; Ritz, K.; Milodowski, A. E.
2015-09-01
Calcite formation in soils and other porous media generally occurs around a localised source of reactants, such as a plant root or soil macro-pore, and the rate depends on the transport of reactants to and from the precipitation zone as well as the kinetics of the precipitation reaction itself. However most studies are made in well mixed systems, in which such transport limitations are largely removed. We developed a mathematical model of calcite precipitation near a source of base in soil, allowing for transport limitations and precipitation kinetics. We tested the model against experimentally-determined rates of calcite precipitation and reactant concentration-distance profiles in columns of soil in contact with a layer of HCO3--saturated exchange resin. The model parameter values were determined independently. The agreement between observed and predicted results was satisfactory given experimental limitations, indicating that the model correctly describes the important processes. A sensitivity analysis showed that all model parameters are important, indicating a simpler treatment would be inadequate. The sensitivity analysis showed that the amount of calcite precipitated and the spread of the precipitation zone were sensitive to parameters controlling rates of reactant transport (soil moisture content, salt content, pH, pH buffer power and CO2 pressure), as well as to the precipitation rate constant. We illustrate practical applications of the model with two examples: pH changes and CaCO3 precipitation in the soil around a plant root, and around a soil macro-pore containing a source of base such as urea.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spence, R.D.; Godbee, H.W.; Tallent, O.K.
1991-01-01
Despite the demonstrated importance of diffusion control in leaching, other mechanisms have been observed to play a role, and leaching from porous solid bodies is not simple diffusion. As yet, only simple diffusion theory has been developed well enough for extrapolation. The well-developed diffusion theory, used in data analysis by ANSI/ANS-16.1 and the NEWBOX program, can help in trying to extrapolate and predict the performance of solidified waste forms over decades and centuries, but the limitations and increased uncertainty must be understood in doing so. Treating leaching as a semi-infinite medium problem, as done in the Cote model, results in simpler equations, but limits application to early leaching behavior when less than 20% of a given component has been leached. 18 refs., 2 tabs.
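For orientation, the textbook semi-infinite-medium diffusion result that underlies this kind of extrapolation (quoted here in its generic form, which may differ in detail from the expressions used by ANSI/ANS-16.1, NEWBOX, or the Cote model) gives the cumulative fraction leached as

```latex
% Textbook semi-infinite-diffusion leaching result (illustrative form only):
\frac{\sum_n a_n}{A_0} \;=\; 2\,\frac{S}{V}\,\sqrt{\frac{D_e\, t}{\pi}}
```

where D_e is the effective diffusivity and S/V the surface-to-volume ratio of the waste form; the expression holds only while the leached fraction remains small, consistent with the roughly 20% limit noted above.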
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2008-01-01
The Space Shuttle Columbia Accident Investigation Board recommended that NASA develop, validate, and maintain a modeling tool capable of predicting the damage threshold for debris impacts on the Space Shuttle Reinforced Carbon-Carbon (RCC) wing leading edge and nosecap assembly. The results presented in this paper are one part of a multi-level approach that supported the development of the predictive tool used to recertify the shuttle for flight following the Columbia Accident. The assessment of predictive capability was largely based on test analysis comparisons for simpler component structures. This paper provides comparisons of finite element simulations with test data for external tank foam debris impacts onto 6-in. square RCC flat panels. Both quantitative displacement and qualitative damage assessment correlations are provided. The comparisons show good agreement and provided the Space Shuttle Program with confidence in the predictive tool.
Noise transmission and reduction in turboprop aircraft
NASA Astrophysics Data System (ADS)
MacMartin, Douglas G.; Basso, Gordon L.; Leigh, Barry
1994-09-01
There is considerable interest in reducing the cabin noise environment in turboprop aircraft. Various approaches have been considered at deHaviland Inc., including passive tuned-vibration absorbers, speaker-based noise cancellation, and structural vibration control of the fuselage. These approaches will be discussed briefly. In addition to controlling the noise, a method of predicting the internal noise is required both to evaluate potential noise reduction approaches, and to validate analytical design models. Instead of costly flight tests, or carrying out a ground simulation of the propeller pressure field, a much simpler reciprocal technique can be used. A capacitive scanner is used to measure the fuselage vibration response on a deHaviland Dash-8 fuselage, due to an internal noise source. The approach is validated by comparing this reciprocal noise transmission measurement with the direct measurement. The fuselage noise transmission information is then combined with computer predictions of the propeller pressure field data to predict the internal noise at two points.
Structure at every scale: A semantic network account of the similarities between unrelated concepts.
De Deyne, Simon; Navarro, Daniel J; Perfors, Amy; Storms, Gert
2016-09-01
Similarity plays an important role in organizing the semantic system. However, given that similarity cannot be defined on purely logical grounds, it is important to understand how people perceive similarities between different entities. Despite this, the vast majority of studies focus on measuring similarity between very closely related items, and little is known about how people judge concepts that are only weakly related. In this article, we present 4 experiments showing that there are reliable and systematic patterns in how people evaluate the similarities between very dissimilar entities. We present a semantic network account of these similarities, showing that a spreading activation mechanism defined over a word association network naturally makes correct predictions about weak similarities, whereas simpler models based on direct neighbors between word pairs derived from the same network cannot. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Mixed-state fidelity susceptibility through iterated commutator series expansion
NASA Astrophysics Data System (ADS)
Tonchev, N. S.
2014-11-01
We present a perturbative approach to the problem of computation of mixed-state fidelity susceptibility (MFS) for thermal states. The mathematical techniques used provide an analytical expression for the MFS as a formal expansion in terms of the thermodynamic mean values of successively higher commutators of the Hamiltonian with the operator involved through the control parameter. That expression is naturally divided into two parts: the usual isothermal susceptibility and a constituent in the form of an infinite series of thermodynamic mean values which encodes the noncommutativity in the problem. If the symmetry properties of the Hamiltonian are given in terms of the generators of some (finite-dimensional) algebra, the obtained expansion may be evaluated in a closed form. This issue is tested on several popular models, for which it is shown that the calculations are much simpler if they are based on the properties from the representation theory of the Heisenberg or SU(1, 1) Lie algebra.
Multicasting in Wireless Communications (Ad-Hoc Networks): Comparison against a Tree-Based Approach
NASA Astrophysics Data System (ADS)
Rizos, G. E.; Vasiliadis, D. C.
2007-12-01
We examine on-demand multicasting in ad hoc networks. The Core Assisted Mesh Protocol (CAMP) is a well-known protocol for multicast routing in ad-hoc networks, generalizing the notion of core-based trees employed for internet multicasting into multicast meshes that have much richer connectivity than trees. On the other hand, wireless tree-based multicast routing protocols use much simpler structures for determining route paths, using only parent-child relationships. In this work, we compare the performance of the CAMP protocol against the performance of wireless tree-based multicast routing protocols, in terms of two important factors, namely packet delay and ratio of dropped packets.
NASA Astrophysics Data System (ADS)
Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul
2015-12-01
Agricultural production typically generates two types of outputs: economically desirable outputs and environmentally undesirable outputs (such as greenhouse gas emissions, nitrate leaching, effects on humans and other organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of a firm's efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, it has been found that the interval data approach is the most suitable way to account for data uncertainty, as it is much simpler to model and needs less information regarding distributions and membership functions. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model is used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
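As a point of reference, a standard directional distance function DEA program of the kind alluded to (the common baseline formulation, not the enhanced interval-data model proposed in the paper) evaluates a farm (x_0, y_0, b_0) against peers j = 1, ..., n by solving

```latex
% Standard DDF DEA program (baseline illustration; the paper's interval-data
% and climatic-factor extensions are not reproduced here):
\max_{\beta,\ \lambda \ge 0}\ \beta
\quad \text{s.t.} \quad
\sum_{j} \lambda_j x_j \le x_0, \qquad
\sum_{j} \lambda_j y_j \ge y_0 + \beta\, g_y, \qquad
\sum_{j} \lambda_j b_j = b_0 - \beta\, g_b
```

Here g = (g_y, g_b) is the chosen direction vector, and β measures how far desirable outputs can be expanded while undesirable outputs are contracted simultaneously.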
Transient high frequency signal estimation: A model-based processing approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, F.L.
1985-03-22
By utilizing the superposition property of linear systems, a method of estimating the incident signal from reflective nondispersive data is developed. One of the basic merits of this approach is that the reflections were removed by direct application of a Wiener-type estimation algorithm, after the appropriate input was synthesized. The structure of the nondispersive signal model is well documented, and thus its credence is established. The model is stated and more effort is devoted to practical methods of estimating the model parameters. Though a general approach was developed for obtaining the reflection weights, a simpler approach was employed here, since a fairly good reflection model is available. The technique essentially consists of calculating ratios of the autocorrelation function at lag zero and at the lag where the incident and first reflection coincide. We initially performed our processing procedure on a measurement of a single signal. Multiple applications of the processing procedure were required when we applied the reflection removal technique to a measurement containing information from the interaction of two physical phenomena. All processing was performed using SIG, an interactive signal processing package. One of the many consequences of using SIG was that repetitive operations were, for the most part, automated. A custom menu was designed to perform the deconvolution process.
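A minimal numerical sketch of the autocorrelation-ratio idea described above is given below; the synthetic trace, lag, and reflection weight are placeholders rather than the report's data, and the estimator shown recovers w/(1 + w^2) for a white incident signal with a single reflection of weight w.

```python
# Hedged sketch: reflection-weight estimate from autocorrelation ratios.
# The synthetic signal, lag, and weight are placeholders, not the report's data.
import numpy as np

rng = np.random.default_rng(1)
n, lag, true_weight = 2048, 120, 0.4
incident = rng.standard_normal(n)
measured = incident.copy()
measured[lag:] += true_weight * incident[:-lag]      # incident + delayed reflection

def autocorr(x, k):
    """Biased sample autocorrelation at lag k."""
    return np.dot(x[:len(x) - k], x[k:]) / len(x)

r0, rk = autocorr(measured, 0), autocorr(measured, lag)
print(rk / r0)   # approx true_weight / (1 + true_weight**2) for this toy model
```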
Control fast or control smart: When should invading pathogens be controlled?
Thompson, Robin N; Gilligan, Christopher A; Cunniffe, Nik J
2018-02-01
The intuitive response to an invading pathogen is to start disease management as rapidly as possible, since this would be expected to minimise the future impacts of disease. However, since more spread data become available as an outbreak unfolds, processes underpinning pathogen transmission can almost always be characterised more precisely later in epidemics. This allows the future progression of any outbreak to be forecast more accurately, and so enables control interventions to be targeted more precisely. There is also the chance that the outbreak might die out without any intervention whatsoever, making prophylactic control unnecessary. Optimal decision-making involves continuously balancing these potential benefits of waiting against the possible costs of further spread. We introduce a generic, extensible data-driven algorithm based on parameter estimation and outbreak simulation for making decisions in real-time concerning when and how to control an invading pathogen. The Control Smart Algorithm (CSA) resolves the trade-off between the competing advantages of controlling as soon as possible and controlling later when more information has become available. We show, using a generic mathematical model representing the transmission of a pathogen of agricultural animals or plants through a population of farms or fields, how the CSA allows the timing and level of deployment of vaccination or chemical control to be optimised. In particular, the algorithm outperforms simpler strategies such as intervening when the outbreak size reaches a pre-specified threshold, or controlling when the outbreak has persisted for a threshold length of time. This remains the case even if the simpler methods are fully optimised in advance. Our work highlights the potential benefits of giving careful consideration to the question of when to start disease management during emerging outbreaks, and provides a concrete framework to allow policy-makers to make this decision.
Signal restoration through deconvolution applied to deep mantle seismic probes
NASA Astrophysics Data System (ADS)
Stefan, W.; Garnero, E.; Renaut, R. A.
2006-12-01
We present a method of signal restoration to improve the signal-to-noise ratio, sharpen seismic arrival onset, and act as an empirical source deconvolution of specific seismic arrivals. Observed time-series g_i are modelled as the convolution of a simpler time-series f_i with an invariant point spread function (PSF) h that attempts to account for the earthquake source process. The method is used on the shear wave time window containing SKS and S, whereby using a Gaussian PSF produces more impulsive, narrower, signals in the wave train. The resulting restored time-series facilitates more accurate and objective relative traveltime estimation of the individual seismic arrivals. We demonstrate the accuracy of the reconstruction method on synthetic seismograms generated by the reflectivity method. Clean and sharp reconstructions are obtained with real data, even for signals with relatively high noise content. Reconstructed signals are simpler, more impulsive, and narrower, which allows highlighting of some details of arrivals that are not readily apparent in raw waveforms. In particular, phases nearly coincident in time can be separately identified after processing. This is demonstrated for two seismic wave pairs used to probe deep mantle and core-mantle boundary structure: (1) the Sab and Scd arrivals, which travel above and within, respectively, a 200-300-km-thick, higher than average shear wave velocity layer at the base of the mantle, observable in the 88-92 deg epicentral distance range and (2) SKS and SPdiff KS, which are core waves with the latter having short arcs of P-wave diffraction, and are nearly identical in timing near 108-110 deg in distance. A Java/Matlab algorithm was developed for the signal restoration, which can be downloaded from the authors' web page, along with example data and synthetic seismograms.
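As a rough illustration of this kind of restoration, the sketch below applies generic Wiener-style frequency-domain deconvolution with a Gaussian PSF to a toy trace; it is not the authors' Java/Matlab algorithm, and the noise level and PSF width are assumed values.

```python
# Hedged sketch: Wiener-style deconvolution of a trace g = f * h + noise,
# with a Gaussian point spread function h. Parameters are illustrative.
import numpy as np

def gaussian_psf(n, width):
    t = np.arange(n) - n // 2
    h = np.exp(-0.5 * (t / width) ** 2)
    return np.fft.ifftshift(h / h.sum())            # unit-area PSF centred at sample 0

def wiener_deconv(g, h, noise_level=1e-2):
    G, H = np.fft.rfft(g), np.fft.rfft(h)
    F = G * np.conj(H) / (np.abs(H) ** 2 + noise_level)   # regularized inverse filter
    return np.fft.irfft(F, n=len(g))

# Toy example: two closely spaced impulsive arrivals blurred by the PSF.
n = 512
f = np.zeros(n); f[200] = 1.0; f[230] = -0.6
h = gaussian_psf(n, width=8.0)
g = np.fft.irfft(np.fft.rfft(f) * np.fft.rfft(h), n=n)
g += 0.01 * np.random.default_rng(2).standard_normal(n)
f_hat = wiener_deconv(g, h)                          # sharper, more impulsive trace
```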
1993-01-01
sixteenth century revolution in astronomy . Until then, astronomers shared the Ptolemaic view of the universe--that all other heavenly bodies orbited the...revolutionary theory that the earth and all other planets orbited the sun. Great tumult rocked the fields of astronomy and religion, but great... Copernicus made no great new discoveries. Instead, he looked at well-known facts from a new perspective and came up with a simpler, more useful model
2013-06-21
potential temperature (Tripoli and Cotton, 1981), total water mixing ratio and cloud microphysics. The microphysics scheme has categories for cloud droplets...components, with diurnal variation, are both activated when the radiation scheme is included. A simpler scheme developed by Chen and Cotton (1987) is an...radiation. Additionally, one more simulation, Experiment 17, was conducted using the Chen–Cotton radiation scheme instead of the Harrington scheme
Heat transfer correlations for multilayer insulation systems
NASA Astrophysics Data System (ADS)
Krishnaprakas, C. K.; Badari Narayana, K.; Dutta, Pradip
2000-01-01
Multilayer insulation (MLI) blankets are extensively used in spacecraft as lightweight thermal protection systems. Heat transfer analysis of MLI is sometimes too complex for use in practical design applications. Hence, for practical engineering design purposes, it is necessary to have simpler procedures to evaluate the heat transfer rate through MLI. In this paper, four different empirical models for heat transfer are evaluated by fitting against experimentally observed heat flux through MLI blankets of various configurations, and the results are discussed.
Methods for predicting properties and tailoring salt solutions for industrial processes
NASA Technical Reports Server (NTRS)
Ally, Moonis R.
1993-01-01
An algorithm developed at Oak Ridge National Laboratory accurately and quickly predicts thermodynamic properties of concentrated aqueous salt solutions. This algorithm is much simpler and much faster than other modeling schemes and is unique because it can predict solution behavior at very high concentrations and under varying conditions. Typical industrial applications of this algorithm would be in manufacture of inorganic chemicals by crystallization, thermal storage, refrigeration and cooling, extraction of metals, emissions controls, etc.
As Simple as Possible, But No Simpler: A Gentle Introduction to Simulation Modeling
2006-12-01
cultures, people waiting for a bus mimic the concept by standing in a row. However, there are some cultures where no line forms but it is considered...mathematical equations such as the equations of motion...
Hefron, Ryan; Borghetti, Brett; Schubert Kabban, Christine; Christensen, James; Estepp, Justin
2018-01-01
Applying deep learning methods to electroencephalograph (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance and temporal specificity yielding three important contributions: (1) The performance of ensembles of individually-trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance. PMID:29701668
Uncertainty in spatially explicit animal dispersal models
Mooij, Wolf M.; DeAngelis, Donald L.
2003-01-01
Uncertainty in estimates of survival of dispersing animals is a vexing difficulty in conservation biology. The current notion is that this uncertainty decreases the usefulness of spatially explicit population models in particular. We examined this problem by comparing dispersal models of three levels of complexity: (1) an event-based binomial model that considers only the occurrence of mortality or arrival, (2) a temporally explicit exponential model that employs mortality and arrival rates, and (3) a spatially explicit grid-walk model that simulates the movement of animals through an artificial landscape. Each model was fitted to the same set of field data. A first objective of the paper is to illustrate how the maximum-likelihood method can be used in all three cases to estimate the means and confidence limits for the relevant model parameters, given a particular set of data on dispersal survival. Using this framework we show that the structure of the uncertainty for all three models is strikingly similar. In fact, the results of our unified approach imply that spatially explicit dispersal models, which take advantage of information on landscape details, suffer less from uncertainty than do simpler models. Moreover, we show that the proposed strategy of model development safeguards one from error propagation in these more complex models. Finally, our approach shows that all models related to animal dispersal, ranging from simple to complex, can be related in a hierarchical fashion, so that the various approaches to modeling such dispersal can be viewed from a unified perspective.
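As a sketch of the maximum-likelihood machinery referred to, applied to the simplest (event-based binomial) model only, the survival probability and its likelihood-based confidence limits can be computed as below; the counts are made up for illustration.

```python
# Hedged sketch: ML estimate and profile-likelihood confidence limits for the
# event-based binomial dispersal-survival model. Counts are hypothetical.
import numpy as np
from scipy.stats import binom, chi2

n_dispersers, n_arrived = 40, 26           # made-up field counts

def loglik(p):
    return binom.logpmf(n_arrived, n_dispersers, p)

p_hat = n_arrived / n_dispersers            # binomial ML estimate
# 95% confidence limits: values of p where the log-likelihood drops by chi2(1)/2.
grid = np.linspace(1e-4, 1 - 1e-4, 10_000)
inside = loglik(grid) >= loglik(p_hat) - chi2.ppf(0.95, df=1) / 2
lower, upper = grid[inside][0], grid[inside][-1]
print(p_hat, (lower, upper))
```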
Explicit solutions for exit-only radioactive decay chains
NASA Astrophysics Data System (ADS)
Yuan, Ding; Kernan, Warnick
2007-05-01
In this study, we extended Bateman's [Proc. Cambridge Philos. Soc. 15, 423 (1910)] original work for solving radioactive decay chains and explicitly derived analytic solutions for generic exit-only radioactive decay problems under given initial conditions. Instead of using the conventional Laplace transform for solving Bateman's equations, we used a much simpler algebraic approach. Finally, we discuss methods of breaking down certain classes of large decay chains into collections of simpler chains for easy handling.
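For context, the classical Bateman solution for a linear chain 1 → 2 → ... → n with distinct decay constants and a pure parent at t = 0 (the textbook result that the paper generalizes, not the extended solutions derived there) can be evaluated directly:

```python
# Classical Bateman solution for a linear decay chain starting from a pure
# parent N_1(0), assuming distinct decay constants (textbook form, for context).
import numpy as np

def bateman(t, lambdas, n1_0=1.0):
    """Return the populations N_k(t) of every member k of the chain."""
    lam = np.asarray(lambdas, dtype=float)
    N = []
    for n in range(1, len(lam) + 1):
        prefactor = n1_0 * np.prod(lam[:n - 1])
        terms = [np.exp(-lam[i] * t) /
                 np.prod([lam[j] - lam[i] for j in range(n) if j != i])
                 for i in range(n)]
        N.append(prefactor * sum(terms))
    return np.array(N)

print(bateman(t=5.0, lambdas=[0.5, 0.1, 0.02]))   # populations of a 3-member chain
```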
Finding idle machines in a workstation-based distributed system
NASA Technical Reports Server (NTRS)
Theimer, Marvin M.; Lantz, Keith A.
1989-01-01
The authors describe the design and performance of scheduling facilities for finding idle hosts in a workstation-based distributed system. They focus on the tradeoffs between centralized and decentralized architectures with respect to scalability, fault tolerance, and simplicity of design, as well as several implementation issues of interest when multicast communication is used. They conclude that the principal tradeoff between the two approaches is that a centralized architecture can be scaled to a significantly greater degree and can more easily monitor global system statistics, whereas a decentralized architecture is simpler to implement.
Atmospheric planetary wave response to external forcing
NASA Technical Reports Server (NTRS)
Stevens, D. E.; Reiter, E. R.
1985-01-01
The tools of observational analysis, complex general circulation modeling, and simpler modeling approaches were combined in order to attack problems on the largest spatial scales of the earth's atmosphere. Two different models were developed and applied. The first is a two level, global spectral model which was designed primarily to test the effects of north-south sea surface temperature anomaly (SSTA) gradients between the equatorial and midlatitude north Pacific. The model is nonlinear, contains both radiation and a moisture budget with associated precipitation and surface evaporation, and utilizes a linear balance dynamical framework. Supporting observational analysis of atmospheric planetary waves is briefly summarized. More extensive general circulation models have also been used to consider the problem of the atmosphere's response, especially in the horizontal propagation of planetary scale waves, to SSTA.
Using CV-GLUE procedure in analysis of wetland model predictive uncertainty.
Huang, Chun-Wei; Lin, Yu-Pin; Chiang, Li-Chi; Wang, Yung-Chieh
2014-07-01
This study develops a procedure that is related to Generalized Likelihood Uncertainty Estimation (GLUE), called the CV-GLUE procedure, for assessing the predictive uncertainty that is associated with different model structures with varying degrees of complexity. The proposed procedure comprises model calibration, validation, and predictive uncertainty estimation in terms of a characteristic coefficient of variation (characteristic CV). The procedure first performed two-stage Monte-Carlo simulations to ensure predictive accuracy by obtaining behavior parameter sets, and then the estimation of CV-values of the model outcomes, which represent the predictive uncertainties for a model structure of interest with its associated behavior parameter sets. Three commonly used wetland models (the first-order K-C model, the plug flow with dispersion model, and the Wetland Water Quality Model; WWQM) were compared based on data that were collected from a free water surface constructed wetland with paddy cultivation in Taipei, Taiwan. The results show that the first-order K-C model, which is simpler than the other two models, has greater predictive uncertainty. This finding shows that predictive uncertainty does not necessarily increase with the complexity of the model structure because in this case, the more simplistic representation (first-order K-C model) of reality results in a higher uncertainty in the prediction made by the model. The CV-GLUE procedure is suggested to be a useful tool not only for designing constructed wetlands but also for other aspects of environmental management. Copyright © 2014 Elsevier Ltd. All rights reserved.
On why dynamic subgrid-scale models work
NASA Technical Reports Server (NTRS)
Jimenez, J.
1995-01-01
Dynamic subgrid models have proved to be remarkably successful in predicting the behavior of turbulent flows. Part of the reason for their success is well understood. Since they are constructed to generate an effective viscosity which is proportional to some measure of the turbulent energy at the high wavenumber end of the spectrum, their eddy viscosity vanishes as the flow becomes laminar. This alone would justify their use over simpler models. But beyond this obvious advantage, which is confined to inhomogeneous and evolving flows, the reason why they also work better in simpler homogeneous cases, and how they do it without any obvious adjustable parameter, is not clear. This lack of understanding of the internal mechanisms of a useful tool is disturbing, not only as an intellectual challenge, but because it raises the doubt of whether it will work in all cases. This note is an attempt to clarify those mechanisms. We will see why dynamic models are robust and how they can get away with even comparatively gross errors in their formulations. This will suggest that they are only particular cases of a larger family of robust models, all of which would be relatively insensitive to large simplifications in the physics of the flow. We will also construct some such models, although mostly as research tools. It will turn out, however, that the standard dynamic formulation is not only robust to errors, but also behaves as if it were substantially well formulated. The details of why this is so will still not be clear at the end of this note, especially since it will be shown that the 'a priori' testing of the stresses gives, as is usual in most subgrid models, very poor results. But it will be argued that the basic reason is that the dynamic formulation mimics the condition that the total dissipation is approximately equal to the production measured at the test filter level.
Kinetic and Stochastic Models of 1D yeast "prions"
NASA Astrophysics Data System (ADS)
Kunes, Kay
2005-03-01
Mammalian prion proteins (PrP) are of public health interest because of mad cow and chronic wasting diseases. Yeasts have proteins which can undergo similar reconformation and aggregation processes to PrP; yeast "prions" are simpler to experimentally study and model. Recent in vitro studies of the SUP35 protein (1) showed long aggregates and pure exponential growth of the misfolded form. To explain these data, we have extended a previous model of aggregation kinetics along with our own stochastic approach (2). Both models assume reconformation only upon aggregation, and include aggregate fissioning and an initial nucleation barrier. We find for sufficiently small nucleation rates or seeding by small dimer concentrations that we can achieve the requisite exponential growth and long aggregates.
NASA Astrophysics Data System (ADS)
Hu, Yanpu; Egbert, Gary; Ji, Yanju; Fang, Guangyou
2017-01-01
In this study, we apply fictitious wave domain (FWD) methods, based on the correspondence principle for the wave and diffusion fields, to finite difference (FD) modeling of transient electromagnetic (TEM) diffusion problems for geophysical applications. A novel complex frequency shifted perfectly matched layer (PML) boundary condition is adapted to the FWD to truncate the computational domain, with the maximum electromagnetic wave propagation velocity in the FWD used to set the absorbing parameters for the boundary layers. Using domains of varying spatial extent we demonstrate that these boundary conditions offer significant improvements over simpler PML approaches, which can result in spurious reflections and large errors in the FWD solutions, especially for low frequencies and late times. In our development, resistive air layers are directly included in the FWD, allowing simulation of TEM responses in the presence of topography, as is commonly encountered in geophysical applications. We compare responses obtained by our new FD-FWD approach and with the spectral Lanczos decomposition method on 3-D resistivity models of varying complexity. The comparisons demonstrate that our absorbing boundary condition in FWD for the TEM diffusion problems works well even in complex high-contrast conductivity models.
Improved Speech Coding Based on Open-Loop Parameter Estimation
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan; Chen, Ya-Chin; Longman, Richard W.
2000-01-01
A nonlinear optimization algorithm for linear predictive speech coding was developed earlier that not only optimizes the linear model coefficients for the open-loop predictor, but also performs the optimization including the effects of quantization of the transmitted residual. It also simultaneously optimizes the quantization levels used for each speech segment. In this paper, we present an improved method for initialization of this nonlinear algorithm, and demonstrate substantial improvements in performance. In addition, the new procedure produces monotonically improving speech quality with increasing numbers of bits used in the transmitted error residual. Examples of speech encoding and decoding are given for 8 speech segments, and signal-to-noise levels as high as 47 dB are produced. As in typical linear predictive coding, the optimization is done on the open-loop speech analysis model. Here we demonstrate that minimizing the error of the closed-loop speech reconstruction, instead of the simpler open-loop optimization, is likely to produce negligible improvement in speech quality. The examples suggest that the algorithm here is close to giving the best performance obtainable from a linear model, for the chosen order with the chosen number of bits for the codebook.
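For orientation, a textbook open-loop linear-prediction step of the kind referred to computes the coefficients from the segment's autocorrelation by solving the normal equations; the sketch below shows only that standard method, not the paper's joint coefficient/quantizer optimization, and the test signal is synthetic.

```python
# Hedged sketch: textbook open-loop LPC via the autocorrelation method, solving
# the normal equations R a = r. This is the standard step only, not the paper's
# joint coefficient/quantizer optimization; the test signal is synthetic.
import numpy as np

def lpc(segment, order=8):
    x = segment - np.mean(segment)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    return np.linalg.solve(R, r[1:order + 1])           # prediction coefficients

rng = np.random.default_rng(3)
segment = np.sin(0.3 * np.arange(400)) + 0.05 * rng.standard_normal(400)
a = lpc(segment)
prediction = np.convolve(segment, np.r_[0.0, a])[:len(segment)]   # one-step predictor
residual = segment - prediction        # open-loop residual, later quantized and sent
```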
Mode-Locking Behavior of Izhikevich Neuron Under Periodic External Forcing
NASA Astrophysics Data System (ADS)
Farokhniaee, Amirali; Large, Edward
2015-03-01
In this study we obtained the regions of existence of various mode-locked states on the periodic-strength plane, which are called Arnold tongues, for Izhikevich neurons. The study is based on the model for neurons introduced by Izhikevich (2003), which is the normal form of the Hodgkin-Huxley neuron. This model is much simpler in terms of the dimension of the coupled non-linear differential equations compared to other existing models, but excellent for generating the complex spiking patterns observed in real neurons. Many neurons in the auditory system of the brain must encode amplitude variations of a periodic signal. These neurons under periodic stimulation display rich dynamical states including mode-locking and chaotic responses. Periodic stimuli such as sinusoidal waves and amplitude modulated (AM) sounds can lead to various forms of n : m mode-locked states, similar to the mode-locking phenomenon in a laser resonance cavity. Obtaining Arnold tongues provides useful insight into the organization of mode-locking behavior of neurons under periodic forcing. Hence we can describe the construction of harmonic and sub-harmonic responses in the early processing stages of the auditory system, such as the auditory nerve and cochlear nucleus.
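A minimal simulation sketch of the setup described: the published Izhikevich (2003) two-variable model driven by a sinusoidal current, integrated with forward Euler. The parameter set is the standard "regular spiking" choice, and the drive amplitude and frequency are illustrative, not the stimuli used in the study.

```python
# Hedged sketch: Izhikevich (2003) two-variable neuron under sinusoidal forcing,
# integrated with forward Euler. Parameters a, b, c, d are the standard
# "regular spiking" values; the drive amplitude and 10 Hz frequency are assumed.
import numpy as np

a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, T = 0.25, 2000.0                       # time step and duration in ms
v, u = -65.0, b * (-65.0)
spike_times = []

for k in range(int(T / dt)):
    t = k * dt
    I = 8.0 + 4.0 * np.sin(2.0 * np.pi * t / 100.0)   # 100 ms period (10 Hz drive)
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:                           # spike detected: apply the reset rule
        spike_times.append(t)
        v, u = c, u + d

# The ratio of spikes to forcing cycles hints at the n:m mode-locked state.
print(len(spike_times), "spikes in", int(T / 100.0), "forcing cycles")
```

Repeating such runs over a grid of drive frequencies and amplitudes, and recording the spike-to-cycle ratio, is one simple way to map out n:m mode-locked regions of the kind the Arnold tongues describe.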
EIT image reconstruction with four dimensional regularization.
Dai, Tao; Soleimani, Manuchehr; Adler, Andy
2008-09-01
Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on the body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although data are actually highly correlated, especially in high-speed EIT systems. This paper proposes a 4-D EIT image reconstruction for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that image reconstruction models which account for inter-element correlations, in both space and time, show improved resolution and noise performance, in comparison to simpler image models.
Shock-wave generation and bubble formation in the retina by lasers
NASA Astrophysics Data System (ADS)
Sun, Jinming; Gerstman, Bernard S.; Li, Bin
2000-06-01
The generation of shock waves and bubbles has been experimentally observed due to absorption of sub-nanosecond laser pulses by melanosomes, which are found in retinal pigment epithelium cells. Both the shock waves and bubbles may be the cause of retinal damage at threshold fluence levels. The theoretical modeling of shock wave parameters such as amplitude, and bubble size, is a complicated problem due to the non-linearity of the phenomena. We have used two different approaches for treating pressure variations in water: the Tait Equation and a full Equation Of State (EOS). The Tait Equation has the advantage of being developed specifically to model pressure variations in water and is therefore simpler, quicker computationally, and allows the liquid to sustain negative pressures. Its disadvantage is that it does not allow for a change of phase, which prevents modeling of bubbles and leads to non-physical behavior such as the sustaining of ridiculously large negative pressures. The full EOS treatment includes more of the true thermodynamic behavior, such as phase changes that produce bubbles and avoids the generation of large negative pressures. Its disadvantage is that the usual stable equilibrium EOS allows for no negative pressures at all, since tensile stress is unstable with respect to a transition to the vapor phase. In addition, the EOS treatment requires longer computational times. In this paper, we compare shock wave generation for various laser pulses using the two different mathematical approaches and determine the laser pulse regime for which the simpler Tait Equation can be used with confidence. We also present results of our full EOS treatment in which both shock waves and bubbles are simultaneously modeled.
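For reference, the modified Tait equation commonly used for water is quoted below in its textbook form; the exponent and stiffness constant are typical literature values and may differ from those adopted in this work.

```latex
% Modified Tait equation of state for water (textbook form; the constants are
% typical literature values, not necessarily those used in this paper):
\frac{p + B}{p_0 + B} \;=\; \left(\frac{\rho}{\rho_0}\right)^{n},
\qquad n \approx 7.15, \quad B \approx 3.0 \times 10^{8}\ \mathrm{Pa}
```

Because this relation is a single monotonic function of density with no phase change, nothing in it prevents large negative pressures, which is the limitation discussed above.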
Evaluation of MODFLOW-LGR in connection with a synthetic regional-scale model
Vilhelmsen, T.N.; Christensen, S.; Mehl, S.W.
2012-01-01
This work studies costs and benefits of utilizing local-grid refinement (LGR) as implemented in MODFLOW-LGR to simulate groundwater flow in a buried tunnel valley interacting with a regional aquifer. Two alternative LGR methods were used: the shared-node (SN) method and the ghost-node (GN) method. To conserve flows the SN method requires correction of sources and sinks in cells at the refined/coarse-grid interface. We found that the optimal correction method is case dependent and difficult to identify in practice. However, the results showed little difference and suggest that identifying the optimal method was of minor importance in our case. The GN method does not require corrections at the models' interface, and it uses a simpler head interpolation scheme than the SN method. The simpler scheme is faster but less accurate so that more iterations may be necessary. However, the GN method solved our flow problem more efficiently than the SN method. The MODFLOW-LGR results were compared with the results obtained using a globally coarse (GC) grid. The LGR simulations required one to two orders of magnitude longer run times than the GC model. However, the improvements of the numerical resolution around the buried valley substantially increased the accuracy of simulated heads and flows compared with the GC simulation. Accuracy further increased locally around the valley flanks when improving the geological resolution using the refined grid. Finally, comparing MODFLOW-LGR simulation with a globally refined (GR) grid showed that the refinement proportion of the model should not exceed 10% to 15% in order to secure method efficiency. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
Zulkifley, Mohd Asyraf; Rawlinson, David; Moran, Bill
2012-01-01
In video analytics, robust observation detection is very important as the content of the videos varies a lot, especially for tracking implementations. In contrast to the image processing field, the problems of blurring, moderate deformation, low-illumination surroundings, illumination change and homogeneous texture are commonly encountered in video analytics. Patch-Based Observation Detection (PBOD) is developed to improve detection robustness in complex scenes by fusing both feature- and template-based recognition methods. While feature-based detectors are more distinctive, matching between frames is best achieved by a collection of points, as in template-based detectors. Two methods of PBOD—the deterministic and probabilistic approaches—have been tested to find the best mode of detection. Both algorithms start by building comparison vectors at each detected point of interest. The vectors are matched to build candidate patches based on their respective coordinates. For the deterministic method, patch matching is done in a two-level test where threshold-based position and size smoothing are applied to the patch with the highest correlation value. For the second approach, patch matching is done probabilistically by modelling the histograms of the patches by Poisson distributions for both RGB and HSV colour models. Then, maximum likelihood is applied for position smoothing while a Bayesian approach is applied for size smoothing. The results showed that probabilistic PBOD outperforms the deterministic approach, with an average distance error of 10.03% compared with 21.03%. This algorithm is best implemented as a complement to other simpler detection methods due to its heavy processing requirements. PMID:23202226
Quantum private query based on single-photon interference
NASA Astrophysics Data System (ADS)
Xu, Sheng-Wei; Sun, Ying; Lin, Song
2016-08-01
Quantum private query (QPQ) has become a research hotspot recently. In particular, quantum key distribution (QKD)-based QPQ attracts much attention because of its practicality. Various QPQ protocols of this kind have been proposed based on different quantum communication technologies. Single-photon interference is one such technology, on which the well-known QKD protocol GV95 is based. In this paper, we propose two QPQ protocols based on single-photon interference. The first one is simpler and easier to realize; the second one is loss-tolerant and flexible, and more practical than the first. Furthermore, we analyze both the user privacy and the database privacy of the proposed protocols.
SKIRT: The design of a suite of input models for Monte Carlo radiative transfer simulations
NASA Astrophysics Data System (ADS)
Baes, M.; Camps, P.
2015-09-01
The Monte Carlo method is the most popular technique to perform radiative transfer simulations in a general 3D geometry. The algorithms behind and acceleration techniques for Monte Carlo radiative transfer are discussed extensively in the literature, and many different Monte Carlo codes are publicly available. By contrast, the design of a suite of components that can be used for the distribution of sources and sinks in radiative transfer codes has received very little attention. The availability of such models, with different degrees of complexity, has many benefits. For example, they can serve as toy models to test new physical ingredients, or as parameterised models for inverse radiative transfer fitting. For 3D Monte Carlo codes, this requires algorithms to efficiently generate random positions from 3D density distributions. We describe the design of a flexible suite of components for the Monte Carlo radiative transfer code SKIRT. The design is based on a combination of basic building blocks (which can be either analytical toy models or numerical models defined on grids or a set of particles) and the extensive use of decorators that combine and alter these building blocks to more complex structures. For a number of decorators, e.g. those that add spiral structure or clumpiness, we provide a detailed description of the algorithms that can be used to generate random positions. Advantages of this decorator-based design include code transparency, the avoidance of code duplication, and an increase in code maintainability. Moreover, since decorators can be chained without problems, very complex models can easily be constructed out of simple building blocks. Finally, based on a number of test simulations, we demonstrate that our design using customised random position generators is superior to a simpler design based on a generic black-box random position generator.
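The decorator idea can be sketched schematically in Python; SKIRT itself is a C++ code, and the class names and the clumpiness recipe below are hypothetical illustrations of the design rather than SKIRT's actual API.

```python
# Hedged illustration of the decorator-based design: a base density model plus a
# decorator that perturbs sampled positions. Class names are hypothetical; this
# is not the SKIRT C++ API.
import numpy as np

rng = np.random.default_rng(4)

class PlummerSphere:
    """Analytical toy model: samples random positions from a Plummer profile."""
    def __init__(self, scale=1.0):
        self.scale = scale
    def random_position(self):
        # Inverse-transform sampling of the Plummer cumulative mass profile.
        u = rng.random()
        r = self.scale / np.sqrt(u ** (-2.0 / 3.0) - 1.0)
        costh, phi = rng.uniform(-1, 1), rng.uniform(0, 2 * np.pi)
        sinth = np.sqrt(1 - costh ** 2)
        return r * np.array([sinth * np.cos(phi), sinth * np.sin(phi), costh])

class ClumpyDecorator:
    """Decorator: relocates a fraction of positions into randomly placed clumps."""
    def __init__(self, base, fraction=0.3, clump_radius=0.1, n_clumps=20):
        self.base, self.fraction, self.r_c = base, fraction, clump_radius
        self.centres = [base.random_position() for _ in range(n_clumps)]
    def random_position(self):
        if rng.random() < self.fraction:
            centre = self.centres[rng.integers(len(self.centres))]
            return centre + self.r_c * rng.standard_normal(3)
        return self.base.random_position()

model = ClumpyDecorator(PlummerSphere(scale=2.0))      # decorators can be chained
positions = np.array([model.random_position() for _ in range(10_000)])
```

Because the decorator exposes the same random_position interface as the model it wraps, further decorators (for example one adding spiral structure) could be chained around it in the same way.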
Flatness-based model inverse for feed-forward braking control
NASA Astrophysics Data System (ADS)
de Vries, Edwin; Fehn, Achim; Rixen, Daniel
2010-12-01
For modern cars, an increasing number of driver assistance systems have been developed. Some of these systems interfere with or assist the braking of the car. Here, a brake actuation algorithm for each individual wheel that can respond to both driver inputs and artificial vehicle deceleration set points is developed. The algorithm consists of a feed-forward control that ensures, within the modelled system plant, the optimal behaviour of the vehicle. For the quarter-car model with a LuGre tyre behavioural model, an inverse model can be derived using v_x as the 'flat output', that is, the input for the inverse model. A number of time derivatives of the flat output are required to calculate the model input, brake torque. Polynomial trajectory planning provides the needed time derivatives of the deceleration request. The transition time of the planning can be adjusted to meet actuator constraints. It is shown that the output of the trajectory planning would ripple and introduce a time delay when a gradual continuous increase of deceleration is requested by the driver. Derivative filters are then considered: the Bessel filter provides the best symmetry in its step response. A filter of the same order with negative real poles is also used, exhibiting neither overshoot nor ringing. For these reasons, the 'real-poles' filter would be preferred over the Bessel filter. The half-car model can be used to predict the change in normal load on the front and rear axles due to the pitching of the vehicle. The anticipated dynamic variation of the wheel load can be included in the inverse model, even though it is based on a quarter-car. Brake force distribution proportional to normal load is established. It provides more natural and simpler equations than a fixed force-ratio strategy.
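A minimal sketch of the polynomial trajectory-planning step described above: a standard quintic polynomial between two deceleration set points with zero slope and curvature at both ends, which is one way to supply the smooth time derivatives the flatness-based inverse model needs. The boundary values and transition time are illustrative.

```python
# Hedged sketch: quintic polynomial trajectory between two deceleration set
# points with zero slope and curvature at both ends, one way to generate the
# smooth time derivatives the flatness-based inverse model requires.
import numpy as np

def quintic_traj(a0, a1, T, t):
    """Deceleration request a(t) plus its first two time derivatives."""
    s = np.clip(t / T, 0.0, 1.0)
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5           # minimum-jerk blend function
    d_blend = (30 * s**2 - 60 * s**3 + 30 * s**4) / T
    dd_blend = (60 * s - 180 * s**2 + 120 * s**3) / T**2
    da = a1 - a0
    return a0 + da * blend, da * d_blend, da * dd_blend

t = np.linspace(0.0, 0.5, 101)
a_req, a_dot, a_ddot = quintic_traj(a0=0.0, a1=-3.0, T=0.5, t=t)   # values assumed
```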
Stein, George Juraj; Múcka, Peter; Chmúrny, Rudolf; Hinz, Barbara; Blüthner, Ralph
2007-01-01
For modelling purposes and for evaluation of driver's seat performance in the vertical direction various mechano-mathematical models of the seated human body have been developed and standardized by the ISO. No such models exist hitherto for human body sitting in an upright position in a cushioned seat upper part, used in industrial environment, where the fore-and-aft vibrations play an important role. The interaction with the steering wheel has to be taken into consideration, as well as, the position of the human body upper torso with respect to the cushioned seat back as observed in real driving conditions. This complex problem has to be simplified first to arrive at manageable simpler models, which still reflect the main problem features. In a laboratory study accelerations and forces in x-direction were measured at the seat base during whole-body vibration in the fore-and-aft direction (random signal in the frequency range between 0.3 and 30 Hz, vibration magnitudes 0.28, 0.96, and 2.03 ms(-2) unweighted rms). Thirteen male subjects with body masses between 62.2 and 103.6 kg were chosen for the tests. They sat on a cushioned driver seat with hands on a support and backrest contact in the lumbar region only. Based on these laboratory measurements a linear model of the system-seated human body and cushioned seat in the fore-and-aft direction has been developed. The model accounts for the reaction from the steering wheel. Model parameters have been identified for each subject-measured apparent mass values (modulus and phase). The developed model structure and the averaged parameters can be used for further bio-dynamical research in this field.
Blade pitch optimization methods for vertical-axis wind turbines
NASA Astrophysics Data System (ADS)
Kozak, Peter
Vertical-axis wind turbines (VAWTs) offer an inherently simpler design than horizontal-axis machines, while their lower blade speed mitigates safety and noise concerns, potentially allowing for installation closer to populated and ecologically sensitive areas. While VAWTs do offer significant operational advantages, development has been hampered by the difficulty of modeling the aerodynamics involved, further complicated by their rotating geometry. This thesis presents results from a simulation of a baseline VAWT computed using Star-CCM+, a commercial finite-volume (FVM) code. VAWT aerodynamics are shown to be dominated at low tip-speed ratios by dynamic stall phenomena and at high tip-speed ratios by wake-blade interactions. Several optimization techniques have been developed for the adjustment of blade pitch based on finite-volume simulations and streamtube models. The effectiveness of the optimization procedure is evaluated and the basic architecture for a feedback control system is proposed. Implementation of variable blade pitch is shown to increase a baseline turbine's power output between 40%-100%, depending on the optimization technique, improving the turbine's competitiveness when compared with a commercially-available horizontal-axis turbine.
Performance of the NEXT Engineering Model Power Processing Unit
NASA Technical Reports Server (NTRS)
Pinero, Luis R.; Hopson, Mark; Todd, Philip C.; Wong, Brian
2007-01-01
NASA's Evolutionary Xenon Thruster (NEXT) project is developing an advanced ion propulsion system for future NASA missions for solar system exploration. An engineering model (EM) power processing unit (PPU) for the NEXT project was designed and fabricated by L-3 Communications under contract with NASA Glenn Research Center (GRC). This modular PPU is capable of processing from 0.5 to 7.0 kW of output power for the NEXT ion thruster. Its design includes many significant improvements for better performance over the state-of-the-art PPU. The most significant difference is the beam supply, which is comprised of six modules and capable of very efficient operation through a wide voltage range because of innovative features like dual controls, module addressing, and a high current mode. The low voltage power supplies are based on elements of the previously validated NASA Solar Electric Propulsion Technology Application Readiness (NSTAR) PPU. The highly modular construction of the PPU resulted in improved manufacturability, simpler scalability, and lower cost. This paper describes the design of the EM PPU and the results of the bench-top performance tests.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marquette, Ian, E-mail: i.marquette@uq.edu.au; Quesne, Christiane, E-mail: cquesne@ulb.ac.be
2015-06-15
We extend the construction of 2D superintegrable Hamiltonians with separation of variables in spherical coordinates using combinations of shift, ladder, and supercharge operators to models involving rational extensions of the two-parameter Lissajous systems on the sphere. These new families of superintegrable systems with integrals of arbitrary order are connected with Jacobi exceptional orthogonal polynomials of type I (or II) and supersymmetric quantum mechanics. Moreover, we present an algebraic derivation of the degenerate energy spectrum for the one- and two-parameter Lissajous systems and the rationally extended models. These results are based on finitely generated polynomial algebras, Casimir operators, realizations as deformed oscillator algebras, and finite-dimensional unitary representations. Such results have only been established so far for 2D superintegrable systems separable in Cartesian coordinates, which are related to a class of polynomial algebras that display a simpler structure. We also point out how the structure function of these deformed oscillator algebras is directly related with the generalized Heisenberg algebras spanned by the nonpolynomial integrals.
Effects of interband transitions on Faraday rotation in metallic nanoparticles.
Wysin, G M; Chikan, Viktor; Young, Nathan; Dani, Raj Kumar
2013-08-14
The Faraday rotation in metallic nanoparticles is considered based on a quantum model for the dielectric function ϵ(ω) in the presence of a DC magnetic field B. We focus on effects in ϵ(ω) due to interband transitions (IBTs), which are important in the blue and ultraviolet for noble metals used in plasmonics. The dielectric function is found using the perturbation of the electron density matrix due to the optical field of the incident electromagnetic radiation. The calculation is applied to transitions between two bands (d and p, for example) separated by a gap, as one finds in gold at the L-point of the Fermi surface. The result of the DC magnetic field is a shift in the effective optical frequency causing IBTs by ±μ_B B/ħ, where opposite signs are associated with left/right circular polarizations. The Faraday rotation for a dilute solution of 17 nm diameter gold nanoparticles is measured and compared with both the IBT theory and a simpler Drude model for the bound electron response. Effects of the plasmon resonance mode on Faraday rotation in nanoparticles are also discussed.
NASA Astrophysics Data System (ADS)
Liu, Quansheng; Tian, Yongchao; Ji, Peiqi; Ma, Hao
2018-04-01
The three-dimensional (3D) morphology of joints is enormously important for the shear mechanical properties of rock. In this study, three-dimensional morphology scanning tests and direct shear tests are conducted to establish a new peak shear strength criterion. The test results show that (1) surface morphology and normal stress exert significant effects on peak shear strength and on the distribution of the damage area, and (2) the damage area is located at the steepest zone facing the shear direction; as the normal stress increases, it extends from the steepest zone toward a less steep zone. Via mechanical analysis, a new formula for the apparent dip angle is developed. The influence of the apparent dip angle and of the average joint height on the potential contact area is discussed. A new peak shear strength criterion, mainly applicable to specimens under compression, is established by using new roughness parameters and taking the effects of normal stress and the rock mechanical properties into account. A comparison of this newly established model with the JRC-JCS model and Grasselli's model shows that the new criterion clearly improves the fit. Compared with earlier models, the new model is simpler and more precise. All the parameters in the new model have clear physical meanings and can be directly determined from the scanned data. In addition, the indexes used in the new model are more rational.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, R.W.; Phillips, A.M.
1988-02-01
Low-permeability reservoirs are currently being propped with sand, resin-coated sand, intermediate-density proppants, and bauxite. This wide range of proppant cost and performance has resulted in a proliferation of proppant selection models. Initially, a rather vague relationship between well depth and proppant strength dictated the choice of proppant. More recently, computerized models of varying complexity have become available that use net-present-value (NPV) calculations. The input is based on the operator's performance goals for each well and on specific reservoir properties. Simpler, noncomputerized approaches also being used include cost/performance comparisons and nomographs. Each type of model, including several of the computerized models, will be examined. By use of these models and NPV calculations, optimum fracturing treatment designs have been developed for such low-permeability reservoirs as the Prue in Oklahoma. Typical well conditions are used in each of the selection models and the results are compared. The computerized models allow the operator to determine, before fracturing, how changes in proppant type, size, and quantity will affect postfracture production over time periods ranging from several months to many years. Thus, the operator can choose the fracturing treatment design that best satisfies the economic performance goals for a particular well, regardless of whether those goals are long or short term.
Rapid insights from remote sensing in the geosciences
NASA Astrophysics Data System (ADS)
Plaza, Antonio
2015-03-01
The growing availability of capacity computing for atomistic materials modeling has encouraged the use of high-accuracy computationally intensive interatomic potentials, such as SNAP. These potentials also happen to scale well on petascale computing platforms. SNAP has a very general form and uses machine-learning techniques to reproduce the energies, forces, and stress tensors of a large set of small configurations of atoms, which are obtained using high-accuracy quantum electronic structure (QM) calculations. The local environment of each atom is characterized by a set of bispectrum components of the local neighbor density projected on to a basis of hyperspherical harmonics in four dimensions. The computational cost per atom is much greater than that of simpler potentials such as Lennard-Jones or EAM, while the communication cost remains modest. We discuss a variety of strategies for implementing SNAP in the LAMMPS molecular dynamics package. We present scaling results obtained running SNAP on three different classes of machine: a conventional Intel Xeon CPU cluster; the Titan GPU-based system; and the combined Sequoia and Vulcan BlueGene/Q. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corp., for the U.S. Dept. of Energy's National Nuclear Security Admin. under Contract DE-AC04-94AL85000.
NASA Technical Reports Server (NTRS)
Martini, W. R.
1980-01-01
Four fully disclosed reference engines and five design methods are discussed. So far, the agreement between theory and experiment is about as good for the simpler calculation methods as it is for the more complicated methods, that is, within 20%. For the simpler methods, a single adjustable constant can be used to reduce the error in predicting power output and efficiency over the entire operating map to less than 10%.
Vertical Photon Transport in Cloud Remote Sensing Problems
NASA Technical Reports Server (NTRS)
Platnick, S.
1999-01-01
Photon transport in plane-parallel, vertically inhomogeneous clouds is investigated and applied to cloud remote sensing techniques that use solar reflectance or transmittance measurements for retrieving droplet effective radius. Transport is couched in terms of weighting functions which approximate the relative contribution of individual layers to the overall retrieval. Two vertical weightings are investigated, including one based on the average number of scatterings encountered by reflected and transmitted photons in any given layer. A simpler vertical weighting based on the maximum penetration of reflected photons proves useful for solar reflectance measurements. These weighting functions are highly dependent on droplet absorption and solar/viewing geometry. A superposition technique, using adding/doubling radiative transfer procedures, is derived to accurately determine both weightings, avoiding time consuming Monte Carlo methods. Superposition calculations are made for a variety of geometries and cloud models, and selected results are compared with Monte Carlo calculations. Effective radius retrievals from modeled vertically inhomogeneous liquid water clouds are then made using the standard near-infrared bands, and compared with size estimates based on the proposed weighting functions. Agreement between the two methods is generally within several tenths of a micrometer, much better than expected retrieval accuracy. Though the emphasis is on photon transport in clouds, the derived weightings can be applied to any multiple scattering plane-parallel radiative transfer problem, including arbitrary combinations of cloud, aerosol, and gas layers.
Systems Biology Perspectives on Minimal and Simpler Cells
Xavier, Joana C.; Patil, Kiran Raosaheb
2014-01-01
SUMMARY The concept of the minimal cell has fascinated scientists for a long time, from both fundamental and applied points of view. This broad concept encompasses extreme reductions of genomes, the last universal common ancestor (LUCA), the creation of semiartificial cells, and the design of protocells and chassis cells. Here we review these different areas of research and identify common and complementary aspects of each one. We focus on systems biology, a discipline that is greatly facilitating the classical top-down and bottom-up approaches toward minimal cells. In addition, we also review the so-called middle-out approach and its contributions to the field with mathematical and computational models. Owing to the advances in genomics technologies, much of the work in this area has been centered on minimal genomes, or rather minimal gene sets, required to sustain life. Nevertheless, a fundamental expansion has been taking place in the last few years wherein the minimal gene set is viewed as a backbone of a more complex system. Complementing genomics, progress is being made in understanding the system-wide properties at the levels of the transcriptome, proteome, and metabolome. Network modeling approaches are enabling the integration of these different omics data sets toward an understanding of the complex molecular pathways connecting genotype to phenotype. We review key concepts central to the mapping and modeling of this complexity, which is at the heart of research on minimal cells. Finally, we discuss the distinction between minimizing the number of cellular components and minimizing cellular complexity, toward an improved understanding and utilization of minimal and simpler cells. PMID:25184563
NASA Technical Reports Server (NTRS)
Burger, R. A.; Moraal, H.; Webb, G. M.
1985-01-01
It is shown that there is a simpler way to derive the average guiding center drift of a distribution of particles than via the so-called single particle analysis. Based on this derivation it is shown that the entire drift formalism can be considerably simplified, and that results for low order anisotropies are more generally valid than is usually appreciated. This drift analysis leads to a natural alternative derivation of the drift velocity along a neutral sheet.
Initialization of distributed spacecraft for precision formation flying
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Scharf, D. P.; Ploen, S. R.
2003-01-01
In this paper we present a solution to the formation initialization (FI) problem for N distributed spacecraft located in deep space. Our solution to the FI problem is based on a three-stage sky search procedure that reduces the FI problem for N spacecraft to the simpler problem of initializing a set of sub-formations. We demonstrate our FI algorithm in simulation using NASA's five-spacecraft Terrestrial Planet Finder mission as an example.
JPRS Report, Science & Technology Europe
1992-08-12
Head on Chip Industry, Plans [Heinrich von Pierer Interview; Bonn DIE WELT, 15 Jun 92]; Swiss Contraves Develops High-Density Multichip Module... Investment costs are low because the method is based on low pressure, 0.1-1.0 MPa, during injection. This permits the use of simpler molds and... For example, ABS Pumpen AG [ABS Pumps German Stock Corporation] in Lohmar needed 1.5 years to define the "Ceramic Components for Friction...
CFD-Based Design of Turbopump Inlet Duct for Reduced Dynamic Loads
NASA Technical Reports Server (NTRS)
Rothermel, Jeffry; Dorney, Suzanne M.; Dorney, Daniel J.
2003-01-01
Numerical simulations have been completed for a variety of designs for a 90 deg elbow duct. The objective is to identify a design that minimizes the dynamic load entering a LOX turbopump located at the elbow exit. Designs simulated to date indicate that simpler duct geometries result in lower losses. Benchmark simulations have verified that the compressible flow codes used in this study are applicable to these incompressible flow simulations.
CFD-based Design of LOX Pump Inlet Duct for Reduced Dynamic Loads
NASA Technical Reports Server (NTRS)
Rothermel, Jeffry; Dorney, Daniel J.; Dorney, Suzanne M.
2003-01-01
Numerical simulations have been completed for a variety of designs for a 90 deg elbow duct. The objective is to identify a design that minimizes the dynamic load entering a LOX turbopump located at the elbow exit. Designs simulated to date indicate that simpler duct geometries result in lower losses. Benchmark simulations have verified that the compressible flow code used in this study is applicable to these incompressible flow simulations.
Proteins QSAR with Markov average electrostatic potentials.
González-Díaz, Humberto; Uriarte, Eugenio
2005-11-15
Classic physicochemical and topological indices have been largely used in small-molecule QSAR but less in protein QSAR. In this study, a Markov model is used to calculate, for the first time, average electrostatic potentials ξk for an indirect interaction between amino acids placed at topological distances k within a given protein backbone. The short-term average stochastic potential ξ1 for 53 Arc repressor mutants was used to model the effect of Alanine scanning on thermal stability. The Arc repressor is a model protein of relevance for biochemical studies on bioorganics and medicinal chemistry. A linear discriminant analysis model correctly classified 43 out of 53 (81.1%) proteins according to their thermal stability. More specifically, the model classified 20/28 (71.4%) proteins with near wild-type stability and 23/25 (92.0%) proteins with reduced stability. Moreover, predictability in cross-validation procedures was 81.0%. Expansion of the electrostatic potential in the series ξ0, ξ1, ξ2, and ξ3 justified the abrupt truncation approach: the overall accuracy was >70.0% for ξ0 and essentially equal for ξ1, ξ2, and ξ3. The ξ1 model compared favorably with others based on D-Fire potential, surface area, volume, partition coefficient, and molar refractivity, which reached less than 77.0% accuracy [Ramos de Armas, R.; González-Díaz, H.; Molina, R.; Uriarte, E. Protein Struct. Func. Bioinf. 2004, 56, 715]. The ξ1 model also has a more tractable interpretation than others based on Markovian negentropies and stochastic moments. Finally, the model is notably simpler than the two models based on quadratic and linear indices, both reported by Marrero-Ponce et al., which use four to five times more descriptors. Introduction of average stochastic potentials may be useful for QSAR applications, since ξk has an amenable physical interpretation and is very effective.
Convolutional neural network architectures for predicting DNA–protein binding
Zeng, Haoyang; Edwards, Matthew D.; Liu, Ge; Gifford, David K.
2016-01-01
Motivation: Convolutional neural networks (CNN) have outperformed conventional methods in modeling the sequence specificity of DNA–protein binding. Yet inappropriate CNN architectures can yield poorer performance than simpler models. Thus an in-depth understanding of how to match CNN architecture to a given task is needed to fully harness the power of CNNs for computational biology applications. Results: We present a systematic exploration of CNN architectures for predicting DNA sequence binding using a large compendium of transcription factor datasets. We identify the best-performing architectures by varying CNN width, depth and pooling designs. We find that adding convolutional kernels to a network is important for motif-based tasks. We show the benefits of CNNs in learning rich higher-order sequence features, such as secondary motifs and local sequence context, by comparing network performance on multiple modeling tasks ranging in difficulty. We also demonstrate how careful construction of sequence benchmark datasets, using approaches that control potentially confounding effects like positional or motif strength bias, is critical in making fair comparisons between competing methods. We explore how to establish the sufficiency of training data for these learning tasks, and we have created a flexible cloud-based framework that permits the rapid exploration of alternative neural network architectures for problems in computational biology. Availability and Implementation: All the models analyzed are available at http://cnn.csail.mit.edu. Contact: gifford@mit.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307608
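As a minimal illustration only (not the authors' code or any of the architectures benchmarked at cnn.csail.mit.edu), a single-layer convolutional network over one-hot DNA of the kind explored above can be sketched as follows; all layer sizes are arbitrary placeholders.

```python
# Hedged sketch of a one-layer CNN for DNA-protein binding prediction.
import torch
import torch.nn as nn

class MotifCNN(nn.Module):
    def __init__(self, n_kernels=16, kernel_len=24):
        super().__init__()
        self.conv = nn.Conv1d(4, n_kernels, kernel_len)  # 4 input channels: A, C, G, T
        self.pool = nn.AdaptiveMaxPool1d(1)              # global max pool per kernel (motif score)
        self.fc = nn.Linear(n_kernels, 1)

    def forward(self, x):                                # x: (batch, 4, seq_len)
        h = torch.relu(self.conv(x))
        h = self.pool(h).squeeze(-1)                     # (batch, n_kernels)
        return torch.sigmoid(self.fc(h))                 # binding probability

# Toy usage: random 0/1 tensors stand in for one-hot encoded 101-bp sequences.
x = torch.randint(0, 2, (8, 4, 101)).float()
model = MotifCNN()
print(model(x).shape)  # torch.Size([8, 1])
```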
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
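A hedged sketch of the general idea behind perturbation-based sensitivity analysis follows; the paper's specific technique for numeric and nominal inputs is not reproduced, and the model and candidate values below are placeholders.

```python
# Rank input variables by how much forcing them to candidate values changes predictions.
import numpy as np

def sensitivity(predict, X, col, values):
    """Average absolute change in prediction when column `col` is forced to each value."""
    base = predict(X)
    shifts = []
    for v in values:
        Xv = X.copy()
        Xv[:, col] = v
        shifts.append(np.mean(np.abs(predict(Xv) - base)))
    return float(np.mean(shifts))

# Toy usage with a logistic "model" standing in for the trained neural network.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
predict = lambda X: 1 / (1 + np.exp(-(0.8 * X[:, 0] + 0.1 * X[:, 2])))
for col in range(3):
    print(col, round(sensitivity(predict, X, col, np.linspace(-2, 2, 5)), 3))
```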
NASREN: Standard reference model for telerobot control
NASA Technical Reports Server (NTRS)
Albus, J. S.; Lumia, R.; Mccain, H.
1987-01-01
A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.
Janssen, Stefan; Schudoma, Christian; Steger, Gerhard; Giegerich, Robert
2011-11-03
Many bioinformatics tools for RNA secondary structure analysis are based on a thermodynamic model of RNA folding. They predict a single, "optimal" structure by free energy minimization, they enumerate near-optimal structures, they compute base pair probabilities and dot plots, representative structures of different abstract shapes, or Boltzmann probabilities of structures and shapes. Although all programs refer to the same physical model, they implement it with considerable variation for different tasks, and little is known about the effects of heuristic assumptions and model simplifications used by the programs on the outcome of the analysis. We extract four different models of the thermodynamic folding space which underlie the programs RNAFOLD, RNASHAPES, and RNASUBOPT. Their differences lie within the details of the energy model and the granularity of the folding space. We implement probabilistic shape analysis for all models, and introduce the shape probability shift as a robust measure of model similarity. Using four data sets derived from experimentally solved structures, we provide a quantitative evaluation of the model differences. We find that search space granularity affects the computed shape probabilities less than the over- or underapproximation of free energy by a simplified energy model. Still, the approximations perform similarly enough to implementations of the full model to justify their continued use in settings where computational constraints call for simpler algorithms. On the side, we observe that the rarely used level 2 shapes, which predict the complete arrangement of helices, multiloops, internal loops and bulges, include the "true" shape in a rather small number of predicted high probability shapes. This calls for an investigation of new strategies to extract high probability members from the (very large) level 2 shape space of an RNA sequence. We provide implementations of all four models, written in a declarative style that makes them easy to modify. Based on our empirical data, future work on thermodynamic RNA folding can make an informed choice of model and can take our implementations as a starting point for further program development.
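One plausible reading of the shape probability shift is sketched below as half the total variation distance between the shape distributions produced by two models for the same sequence; the exact definition should be taken from the paper, and the probabilities here are toy values.

```python
# Hedged sketch: compare shape probability distributions from two folding-space models.
def shape_probability_shift(p_model_a, p_model_b):
    shapes = set(p_model_a) | set(p_model_b)
    return 0.5 * sum(abs(p_model_a.get(s, 0.0) - p_model_b.get(s, 0.0)) for s in shapes)

p_a = {"[[][]]": 0.62, "[[]]": 0.30, "[]": 0.08}
p_b = {"[[][]]": 0.55, "[[]]": 0.40, "[]": 0.05}
print(shape_probability_shift(p_a, p_b))  # 0.10 for these toy numbers
```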
NASA Technical Reports Server (NTRS)
Rubesin, M. W.; Rose, W. C.
1973-01-01
The time-dependent, turbulent mean-flow, Reynolds stress, and heat flux equations in mass-averaged dependent variables are presented. These equations are given in conservative form for both generalized orthogonal and axisymmetric coordinates. For the case of small viscosity and thermal conductivity fluctuations, these equations are considerably simpler than the general Reynolds system of dependent variables for a compressible fluid and permit a more direct extension of low speed turbulence modeling to computer codes describing high speed turbulence fields.
Reverse engineering and analysis of large genome-scale gene networks
Aluru, Maneesha; Zola, Jaroslaw; Nettleton, Dan; Aluru, Srinivas
2013-01-01
Reverse engineering the whole-genome networks of complex multicellular organisms continues to remain a challenge. While simpler models easily scale to large number of genes and gene expression datasets, more accurate models are compute intensive limiting their scale of applicability. To enable fast and accurate reconstruction of large networks, we developed Tool for Inferring Network of Genes (TINGe), a parallel mutual information (MI)-based program. The novel features of our approach include: (i) B-spline-based formulation for linear-time computation of MI, (ii) a novel algorithm for direct permutation testing and (iii) development of parallel algorithms to reduce run-time and facilitate construction of large networks. We assess the quality of our method by comparison with ARACNe (Algorithm for the Reconstruction of Accurate Cellular Networks) and GeneNet and demonstrate its unique capability by reverse engineering the whole-genome network of Arabidopsis thaliana from 3137 Affymetrix ATH1 GeneChips in just 9 min on a 1024-core cluster. We further report on the development of a new software Gene Network Analyzer (GeNA) for extracting context-specific subnetworks from a given set of seed genes. Using TINGe and GeNA, we performed analysis of 241 Arabidopsis AraCyc 8.0 pathways, and the results are made available through the web. PMID:23042249
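For orientation only, a histogram-based mutual information estimate with a direct permutation test can be sketched as below; TINGe itself uses a B-spline MI formulation and parallel algorithms, which are not shown.

```python
# Hedged sketch of MI-based edge scoring between two gene-expression profiles.
import numpy as np

def mutual_info(x, y, bins=10):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

def permutation_pvalue(x, y, n_perm=500, seed=0):
    rng = np.random.default_rng(seed)
    observed = mutual_info(x, y)
    null = [mutual_info(x, rng.permutation(y)) for _ in range(n_perm)]
    return float(np.mean([m >= observed for m in null]))

rng = np.random.default_rng(1)
g1 = rng.normal(size=300)
g2 = 0.7 * g1 + 0.3 * rng.normal(size=300)  # correlated "expression" profiles
print(mutual_info(g1, g2), permutation_pvalue(g1, g2))
```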
Experiences of building a medical data acquisition system based on two-level modeling.
Li, Bei; Li, Jianbin; Lan, Xiaoyun; An, Ying; Gao, Wuqiang; Jiang, Yuqiao
2018-04-01
Compared to traditional software development strategies, the two-level modeling approach is more flexible and applicable to build an information system in the medical domain. However, the standards of two-level modeling such as openEHR appear complex to medical professionals. This study aims to investigate, implement, and improve the two-level modeling approach, and discusses the experience of building a unified data acquisition system for four affiliated university hospitals based on this approach. After the investigation, we simplified the approach of archetype modeling and developed a medical data acquisition system where medical experts can define the metadata for their own specialties by using a visual easy-to-use tool. The medical data acquisition system for multiple centers, clinical specialties, and diseases has been developed, and integrates the functions of metadata modeling, form design, and data acquisition. To date, 93,353 data items and 6,017 categories for 285 specific diseases have been created by medical experts, and over 25,000 patients' information has been collected. OpenEHR is an advanced two-level modeling method for medical data, but its idea to separate domain knowledge and technical concern is not easy to realize. Moreover, it is difficult to reach an agreement on archetype definition. Therefore, we adopted simpler metadata modeling, and employed What-You-See-Is-What-You-Get (WYSIWYG) tools to further improve the usability of the system. Compared with the archetype definition, our approach lowers the difficulty. Nevertheless, to build such a system, every participant should have some knowledge in both medicine and information technology domains, as these interdisciplinary talents are necessary. Copyright © 2018 Elsevier B.V. All rights reserved.
Barimani, Shirin; Kleinebudde, Peter
2017-10-01
A multivariate analysis method, Science-Based Calibration (SBC), was used for the first time for endpoint determination of a tablet coating process using Raman data. Two types of tablet cores, placebo and caffeine cores, received a coating suspension comprising a polyvinyl alcohol-polyethylene glycol graft-copolymer and titanium dioxide to a maximum coating thickness of 80 µm. Raman spectroscopy was used as in-line PAT tool. The spectra were acquired every minute and correlated to the amount of applied aqueous coating suspension. SBC was compared to another well-known multivariate analysis method, Partial Least Squares regression (PLS), and a simpler approach, Univariate Data Analysis (UVDA). All developed calibration models had coefficient of determination values (R²) higher than 0.99. The coating endpoints could be predicted with root mean square errors of prediction (RMSEP) below 3.1% of the applied coating suspensions. Compared to PLS and UVDA, SBC proved to be an alternative multivariate calibration method with high predictive power. Copyright © 2017 Elsevier B.V. All rights reserved.
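As a rough illustration of the PLS/RMSEP part of such a comparison (SBC itself is not sketched), the snippet below fits a PLS calibration to synthetic "spectra" and reports the prediction error; all data are placeholders.

```python
# Hedged sketch: PLS calibration of applied coating amount from synthetic spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
coating = rng.uniform(0, 100, size=120)                                  # applied suspension (%)
loading = rng.normal(size=200)                                           # fixed spectral signature
spectra = np.outer(coating, loading) * 0.01 + rng.normal(size=(120, 200))  # signal + noise

X_tr, X_te, y_tr, y_te = train_test_split(spectra, coating, random_state=0)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
print(f"PLS RMSEP: {rmsep:.2f}% of applied suspension")
```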
High-Resolution Rotational Spectroscopy of a Molecular Rotary Motor
NASA Astrophysics Data System (ADS)
Domingos, Sergio R.; Cnossen, Arjen; Perez, Cristobal; Buma, Wybren Jan; Browne, Wesley R.; Feringa, Ben L.; Schnell, Melanie
2017-06-01
To develop synthetic molecular motors and machinery that can mimic their biological counterparts has become a stimulating quest in modern synthetic chemistry. Gas phase studies of these simpler synthetic model systems provide the necessary isolated conditions that facilitate the elucidation of their structural intricacies. We report the first high-resolution rotational study of a synthetic molecular rotary motor based on chiral overcrowded alkenes using chirp-pulsed Fourier transform microwave spectroscopy. Rotational constants and quartic centrifugal distortion constants were determined based on a fit using more than two hundred rotational transitions spanning 5≤J≤21 in the 2-4 GHz frequency range. Despite the lack of polar groups, the rotor's asymmetry produces strong a- and b-type rotational transitions arising from a single predominant conformer. Evidence for fragmentation of the rotor allows for unambiguous identification of the isolated rotor components. The experimental spectroscopic parameters of the rotor are compared and discussed against current high-level ab initio and density functional theory methods. Vicario et al., Chem. Commun., 5910-5912 (2005); Brown et al., Rev. Sci. Instrum., 79, 053103 (2008).
Ghosal, Sayan; Gannepalli, Anil; Salapaka, Murti
2017-08-11
In this article, we explore methods that enable estimation of material properties with dynamic mode atomic force microscopy suitable for soft matter investigation. The article presents the viewpoint of casting the system, comprising a flexure probe interacting with the sample, as an equivalent cantilever system and compares a steady-state analysis based method with a recursive estimation technique for determining the parameters of the equivalent cantilever system in real time. The steady-state analysis of the equivalent cantilever model, which has been implicitly assumed in studies on material property determination, is validated analytically and experimentally. We show that the steady-state based technique yields results that quantitatively agree with the recursive method in the domain of its validity. The steady-state technique is considerably simpler to implement but slower than the recursive technique. The parameters of the equivalent system are utilized to interpret storage and dissipative properties of the sample. Finally, the article identifies key pitfalls that need to be avoided toward the quantitative estimation of material properties.
NASA Astrophysics Data System (ADS)
Moura, R. C.; Mengaldo, G.; Peiró, J.; Sherwin, S. J.
2017-02-01
We present estimates of spectral resolution power for under-resolved turbulent Euler flows obtained with high-order discontinuous Galerkin (DG) methods. The '1% rule' based on linear dispersion-diffusion analysis introduced by Moura et al. (2015) [10] is here adapted for 3D energy spectra and validated through the inviscid Taylor-Green vortex problem. The 1% rule estimates the wavenumber beyond which numerical diffusion induces an artificial dissipation range on measured energy spectra. As the original rule relies on standard upwinding, different Riemann solvers are tested. Very good agreement is found for solvers which treat the different physical waves in a consistent manner. Relatively good agreement is still found for simpler solvers. The latter however displayed spurious features attributed to the inconsistent treatment of different physical waves. It is argued that, in the limit of vanishing viscosity, such features might have a significant impact on robustness and solution quality. The estimates proposed are regarded as useful guidelines for no-model DG-based simulations of free turbulence at very high Reynolds numbers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poludniowski, Gavin G.; Evans, Philip M.
2013-04-15
Purpose: Monte Carlo methods based on the Boltzmann transport equation (BTE) have previously been used to model light transport in powdered-phosphor scintillator screens. Physically motivated guesses or, alternatively, the complexities of Mie theory have been used by some authors to provide the necessary inputs of transport parameters. The purpose of Part II of this work is to: (i) validate predictions of modulation transfer function (MTF) using the BTE and calculated values of transport parameters, against experimental data published for two Gd₂O₂S:Tb screens; (ii) investigate the impact of size-distribution and emission spectrum on Mie predictions of transport parameters; (iii) suggest simpler and novel geometrical optics-based models for these parameters and compare to the predictions of Mie theory. A computer code package called phsphr is made available that allows the MTF predictions for the screens modeled to be reproduced and novel screens to be simulated. Methods: The transport parameters of interest are the scattering efficiency (Q_sct), absorption efficiency (Q_abs), and the scatter anisotropy (g). Calculations of these parameters are made using the analytic method of Mie theory, for spherical grains of radii 0.1-5.0 µm. The sensitivity of the transport parameters to emission wavelength is investigated using an emission spectrum representative of that of Gd₂O₂S:Tb. The impact of a grain-size distribution in the screen on the parameters is investigated using a Gaussian size-distribution (σ = 1%, 5%, or 10% of mean radius). Two simple and novel alternative models to Mie theory are suggested: a geometrical optics and diffraction model (GODM) and an extension of this (GODM+). Comparisons to measured MTF are made for two commercial screens: Lanex Fast Back and Lanex Fast Front (Eastman Kodak Company, Inc.). Results: The Mie theory predictions of transport parameters were shown to be highly sensitive to both grain size and emission wavelength. For a phosphor screen structure with a distribution in grain sizes and a spectrum of emission, only the average trend of Mie theory is likely to be important. This average behavior is well predicted by the more sophisticated of the geometrical optics models (GODM+) and in approximate agreement for the simplest (GODM). The root-mean-square differences obtained between predicted MTF and experimental measurements, using all three models (GODM, GODM+, Mie), were within 0.03 for both Lanex screens in all cases. This is excellent agreement in view of the uncertainties in screen composition and optical properties. Conclusions: If Mie theory is used for calculating transport parameters for light scattering and absorption in powdered-phosphor screens, care should be taken to average out the fine-structure in the parameter predictions. However, for visible emission wavelengths (λ < 1.0 µm) and grain radii (a > 0.5 µm), geometrical optics models for transport parameters are an alternative to Mie theory. These geometrical optics models are simpler and lead to no substantial loss in accuracy.
Fernández, E N; Legarra, A; Martínez, R; Sánchez, J P; Baselga, M
2017-06-01
Inbreeding generates covariances between additive and dominance effects (breeding values and dominance deviations). In this work, we developed and applied models for estimation of dominance and additive genetic variances and their covariance, a model that we call "full dominance," from pedigree and phenotypic data. Estimates with this model such as presented here are very scarce both in livestock and in wild genetics. First, we estimated pedigree-based condensed probabilities of identity using recursion. Second, we developed an equivalent linear model in which variance components can be estimated using closed-form algorithms such as REML or Gibbs sampling and existing software. Third, we present a new method to refer the estimated variance components to meaningful parameters in a particular population, i.e., final partially inbred generations as opposed to outbred base populations. We applied these developments to three closed rabbit lines (A, V and H) selected for number of weaned at the Polytechnic University of Valencia. Pedigree and phenotypes are complete and span 43, 39 and 14 generations, respectively. Estimates of broad-sense heritability are 0.07, 0.07 and 0.05 at the base versus 0.07, 0.07 and 0.09 in the final generations. Narrow-sense heritability estimates are 0.06, 0.06 and 0.02 at the base versus 0.04, 0.04 and 0.01 at the final generations. There is also a reduction in the genotypic variance due to the negative additive-dominance correlation. Thus, the contribution of dominance variation is fairly large and increases with inbreeding and (over)compensates for the loss in additive variation. In addition, estimates of the additive-dominance correlation are -0.37, -0.31 and 0.00, in agreement with the few published estimates and theoretical considerations. © 2017 Blackwell Verlag GmbH.
Simplified subsurface modelling: data assimilation and violated model assumptions
NASA Astrophysics Data System (ADS)
Erdal, Daniel; Lange, Natascha; Neuweiler, Insa
2017-04-01
Integrated models are gaining more and more attention in hydrological modelling as they can better represent the interaction between different compartments. Naturally, these models come along with larger numbers of unknowns and requirements on computational resources compared to stand-alone models. If large model domains are to be represented, e.g. on catchment scale, the resolution of the numerical grid needs to be reduced or the model itself needs to be simplified. Both approaches lead to a reduced ability to reproduce the present processes. This lack of model accuracy may be compensated by using data assimilation methods. In these methods observations are used to update the model states, and optionally model parameters as well, in order to reduce the model error induced by the imposed simplifications. What is unclear is whether these methods combined with strongly simplified models result in completely data-driven models or if they can even be used to make adequate predictions of the model state for times when no observations are available. In the current work we consider the combined groundwater and unsaturated zone, which can be modelled in a physically consistent way using 3D-models solving the Richards equation. For use in simple predictions, however, simpler approaches may be considered. The question investigated here is whether a simpler model, in which the groundwater is modelled as a horizontal 2D-model and the unsaturated zones as a few sparse 1D-columns, can be used within an Ensemble Kalman filter to give predictions of groundwater levels and unsaturated fluxes. This is tested under conditions where the feedback between the two model compartments is large (e.g. shallow groundwater table) and the simplification assumptions are clearly violated. Such a case may be a steep hill-slope or pumping wells, creating lateral fluxes in the unsaturated zone, or strong heterogeneous structures creating unaccounted flows in both the saturated and unsaturated compartments. Under such circumstances, direct modelling using a simplified model will not provide good results. However, a more data-driven (e.g. grey box) approach, driven by the filter, may still provide an improved understanding of the system. Comparisons between full 3D simulations and simplified filter-driven models will be shown and the resulting benefits and drawbacks will be discussed.
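A minimal sketch of the stochastic Ensemble Kalman filter analysis step that such a filter-driven simplified model relies on is given below; the state vector of groundwater heads and the observation setup are purely illustrative.

```python
# Hedged sketch of one EnKF analysis step with perturbed observations.
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, rng):
    """ensemble: (n_state, n_members); H: (n_obs, n_state) observation operator."""
    n_obs, n_members = H.shape[0], ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (n_members - 1)                        # ensemble covariance
    R = np.eye(n_obs) * obs_err_std**2
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
    perturbed_obs = obs[:, None] + rng.normal(0, obs_err_std, (n_obs, n_members))
    return ensemble + K @ (perturbed_obs - H @ ensemble)

rng = np.random.default_rng(0)
ens = rng.normal(10.0, 1.0, size=(5, 50))                # 5 heads, 50 ensemble members
H = np.zeros((2, 5)); H[0, 1] = H[1, 3] = 1.0            # observe states 1 and 3
print(enkf_update(ens, np.array([9.2, 10.8]), H, 0.1, rng).mean(axis=1))
```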
Confirmatory factor analysis of the Child Oral Health Impact Profile (Korean version).
Cho, Young Il; Lee, Soonmook; Patton, Lauren L; Kim, Hae-Young
2016-04-01
Empirical support for the factor structure of the Child Oral Health Impact Profile (COHIP) has not been fully established. The purposes of this study were to evaluate the factor structure of the Korean version of the COHIP (COHIP-K) empirically using confirmatory factor analysis (CFA) based on the theoretical framework and then to assess whether any of the factors in the structure could be grouped into a simpler single second-order factor. Data were collected through self-reported COHIP-K responses from a representative community sample of 2,236 Korean children, 8-15 yr of age. Because a large inter-factor correlation of 0.92 was estimated in the original five-factor structure, the two strongly correlated factors were combined into one factor, resulting in a four-factor structure. The revised four-factor model showed a reasonable fit with appropriate inter-factor correlations. Additionally, the second-order model with four sub-factors was reasonable with sufficient fit and showed equal fit to the revised four-factor model. A cross-validation procedure confirmed the appropriateness of the findings. Our analysis empirically supported a four-factor structure of COHIP-K, a summarized second-order model, and the use of an integrated summary COHIP score. © 2016 Eur J Oral Sci.
Learn the Lagrangian: A Vector-Valued RKHS Approach to Identifying Lagrangian Systems.
Cheng, Ching-An; Huang, Han-Pang
2016-12-01
We study the modeling of Lagrangian systems with multiple degrees of freedom. Based on system dynamics, canonical parametric models require ad hoc derivations and sometimes simplification for a computable solution; on the other hand, due to the lack of prior knowledge of the system's structure, modern nonparametric models in machine learning face the curse of dimensionality, especially in learning large systems. In this paper, we bridge this gap by unifying the theories of Lagrangian systems and vector-valued reproducing kernel Hilbert space. We reformulate Lagrangian systems with kernels that embed the governing Euler-Lagrange equation--the Lagrangian kernels--and show that these kernels span a subspace capturing the Lagrangian's projection as inverse dynamics. By such property, our model uses only inputs and outputs as in machine learning and inherits the structured form as in system dynamics, thereby removing the need for the mundane derivations for new systems as well as the generalization problem in learning from scratch. In effect, it learns the system's Lagrangian, a simpler task than directly learning the dynamics. To demonstrate, we applied the proposed kernel to identify the robot inverse dynamics in simulations and experiments. Our results present a competitive novel approach to identifying Lagrangian systems, despite using only inputs and outputs.
Feature Selection Methods for Zero-Shot Learning of Neural Activity.
Caceres, Carlos A; Roos, Matthew J; Rupp, Kyle M; Milsap, Griffin; Crone, Nathan E; Wolmetz, Michael E; Ratto, Christopher R
2017-01-01
Dimensionality poses a serious challenge when making predictions from human neuroimaging data. Across imaging modalities, large pools of potential neural features (e.g., responses from particular voxels, electrodes, and temporal windows) have to be related to typically limited sets of stimuli and samples. In recent years, zero-shot prediction models have been introduced for mapping between neural signals and semantic attributes, which allows for classification of stimulus classes not explicitly included in the training set. While choices about feature selection can have a substantial impact when closed-set accuracy, open-set robustness, and runtime are competing design objectives, no systematic study of feature selection for these models has been reported. Instead, a relatively straightforward feature stability approach has been adopted and successfully applied across models and imaging modalities. To characterize the tradeoffs in feature selection for zero-shot learning, we compared correlation-based stability to several other feature selection techniques on comparable data sets from two distinct imaging modalities: functional Magnetic Resonance Imaging and Electrocorticography. While most of the feature selection methods resulted in similar zero-shot prediction accuracies and spatial/spectral patterns of selected features, there was one exception: a novel feature/attribute correlation approach was able to achieve those accuracies with far fewer features, suggesting the potential for simpler prediction models that yield high zero-shot classification accuracy.
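A hedged sketch of correlation-based feature stability, the baseline approach referred to above, is given below: features are ranked by how consistently they respond across repeated presentations of the same stimuli, then the top k are kept. Synthetic data stand in for fMRI/ECoG responses.

```python
# Hedged sketch: per-feature stability as the Pearson correlation across two repetitions.
import numpy as np

def stability_scores(rep1, rep2):
    """rep1, rep2: (n_stimuli, n_features) responses from two repetitions."""
    r1 = rep1 - rep1.mean(axis=0)
    r2 = rep2 - rep2.mean(axis=0)
    num = (r1 * r2).sum(axis=0)
    den = np.sqrt((r1**2).sum(axis=0) * (r2**2).sum(axis=0))
    return num / den

rng = np.random.default_rng(0)
signal = rng.normal(size=(60, 100))
rep1 = signal + 0.5 * rng.normal(size=signal.shape)
rep2 = signal + 0.5 * rng.normal(size=signal.shape)
top_k = np.argsort(stability_scores(rep1, rep2))[::-1][:20]  # 20 most stable features
print(top_k[:5])
```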
Improving Accuracy in Arrhenius Models of Cell Death: Adding a Temperature-Dependent Time Delay.
Pearce, John A
2015-12-01
The Arrhenius formulation for single-step irreversible unimolecular reactions has been used for many decades to describe the thermal damage and cell death processes. Arrhenius predictions are acceptably accurate for structural proteins, for some cell death assays, and for cell death at higher temperatures in most cell lines, above about 55 °C. However, in many cases--and particularly at hyperthermic temperatures, between about 43 and 55 °C--the particular intrinsic cell death or damage process under study exhibits a significant "shoulder" region that constant-rate Arrhenius models are unable to represent with acceptable accuracy. The primary limitation is that Arrhenius calculations always overestimate the cell death fraction, which leads to severely overoptimistic predictions of heating effectiveness in tumor treatment. Several more sophisticated mathematical model approaches have been suggested and show much-improved performance. But simpler models that have adequate accuracy would provide useful and practical alternatives to intricate biochemical analyses. Typical transient intrinsic cell death processes at hyperthermic temperatures consist of a slowly developing shoulder region followed by an essentially constant-rate region. The shoulder regions have been demonstrated to arise chiefly from complex functional protein signaling cascades that generate delays in the onset of the constant-rate region, but may involve heat shock protein activity as well. This paper shows that acceptably accurate and much-improved predictions in the simpler Arrhenius models can be obtained by adding a temperature-dependent time delay. Kinetic coefficients and the appropriate time delay are obtained from the constant-rate regions of the measured survival curves. The resulting predictions are seen to provide acceptably accurate results while not overestimating cell death. The method can be relatively easily incorporated into numerical models. Additionally, evidence is presented to support the application of compensation law behavior to the cell death processes--that is, the strong correlation between the kinetic coefficients, ln(A) and Ea, is confirmed.
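A minimal sketch of the proposed modification is given below, assuming illustrative kinetic coefficients and an assumed exponential form for the temperature-dependent delay; the paper derives both from measured survival curves rather than from these placeholder values.

```python
# Hedged sketch: Arrhenius kinetics applied only after a temperature-dependent delay t_d(T).
import numpy as np

R = 8.314     # J/(mol K)
A = 1.0e98    # 1/s, frequency factor (illustrative, not fitted)
Ea = 6.3e5    # J/mol, activation energy (illustrative, not fitted)

def t_delay(T_celsius):
    # Assumed form: the shoulder (delay) shrinks roughly exponentially with temperature.
    return 3000.0 * np.exp(-0.35 * (T_celsius - 43.0))

def surviving_fraction(t_seconds, T_celsius):
    T = T_celsius + 273.15
    k = A * np.exp(-Ea / (R * T))                     # constant-rate region slope
    t_eff = np.maximum(t_seconds - t_delay(T_celsius), 0.0)
    return np.exp(-k * t_eff)                         # no killing predicted during the shoulder

for T in (43.0, 50.0, 55.0):
    print(T, surviving_fraction(1800.0, T))
```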
A CNN Regression Approach for Real-Time 2D/3D Registration.
Shun Miao; Wang, Z Jane; Rui Liao
2016-05-01
In this paper, we present a Convolutional Neural Network (CNN) regression approach to address the two major limitations of existing intensity-based 2-D/3-D registration technology: 1) slow computation and 2) small capture range. Different from optimization-based methods, which iteratively optimize the transformation parameters over a scalar-valued metric function representing the quality of the registration, the proposed method exploits the information embedded in the appearances of the digitally reconstructed radiograph and X-ray images, and employs CNN regressors to directly estimate the transformation parameters. An automatic feature extraction step is introduced to calculate 3-D pose-indexed features that are sensitive to the variables to be regressed while robust to other factors. The CNN regressors are then trained for local zones and applied in a hierarchical manner to break down the complex regression task into multiple simpler sub-tasks that can be learned separately. Weight sharing is furthermore employed in the CNN regression model to reduce the memory footprint. The proposed approach has been quantitatively evaluated on 3 potential clinical applications, demonstrating its significant advantage in providing highly accurate real-time 2-D/3-D registration with a significantly enlarged capture range when compared to intensity-based methods.
NASA Astrophysics Data System (ADS)
Verma, Manish K.
Terrestrial gross primary productivity (GPP) is the largest and most variable component of the carbon cycle and is strongly influenced by phenology. Realistic characterization of spatio-temporal variation in GPP and phenology is therefore crucial for understanding dynamics in the global carbon cycle. In the last two decades, remote sensing has become a widely-used tool for this purpose. However, no study has comprehensively examined how well remote sensing models capture spatiotemporal patterns in GPP, and validation of remote sensing-based phenology models is limited. Using in-situ data from 144 eddy covariance towers located in all major biomes, I assessed the ability of 10 remote sensing-based methods to capture spatio-temporal variation in GPP at annual and seasonal scales. The models are based on different hypotheses regarding ecophysiological controls on GPP and span a range of structural and computational complexity. The results lead to four main conclusions: (i) at annual time scale, models were more successful capturing spatial variability than temporal variability; (ii) at seasonal scale, models were more successful in capturing average seasonal variability than interannual variability; (iii) simpler models performed as well or better than complex models; and (iv) models that were best at explaining seasonal variability in GPP were different from those that were best able to explain variability in annual scale GPP. Seasonal phenology of vegetation follows bounded growth and decay, and is widely modeled using growth functions. However, the specific form of the growth function affects how phenological dynamics are represented in ecosystem and remote sensing-based models. To examine this, four different growth functions (the logistic, Gompertz, Mirror-Gompertz and Richards function) were assessed using remotely sensed and in-situ data collected at several deciduous forest sites. All of the growth functions provided good statistical representation of in-situ and remote sensing time series. However, the Richards function captured observed asymmetric dynamics that were not captured by the other functions. The timing of key phenophase transitions derived using the Richards function therefore agreed best with observations. This suggests that ecosystem models and remote-sensing algorithms would benefit from using the Richards function to represent phenological dynamics.
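For reference, the four growth functions compared can be written in one common parameterization (assumed here for illustration; not necessarily the exact forms fitted in the study):

```python
# Hedged sketch of the logistic, Gompertz, Mirror-Gompertz and Richards growth functions.
import numpy as np

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

def gompertz(t, K, r, t0):
    # Inflection below K/2; asymmetric rise.
    return K * np.exp(-np.exp(-r * (t - t0)))

def mirror_gompertz(t, K, r, t0):
    # Mirrored asymmetry: inflection above K/2.
    return K * (1.0 - np.exp(-np.exp(r * (t - t0))))

def richards(t, K, r, t0, nu):
    # nu controls asymmetry; nu = 1 recovers the logistic curve.
    return K / (1.0 + nu * np.exp(-r * (t - t0))) ** (1.0 / nu)

t = np.linspace(0, 100, 5)
print(richards(t, K=1.0, r=0.15, t0=50.0, nu=0.3))
```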
Modes of interconnected lattice trusses using continuum models, part 1
NASA Technical Reports Server (NTRS)
Balakrishnan, A. V.
1991-01-01
This represents a continuing systematic attempt to explore the use of continuum models--in contrast to the Finite Element Models currently universally in use--to develop feedback control laws for stability enhancement of structures, particularly large structures, for deployment in space. We shall show that for the control objective, continuum models do offer unique advantages. It must be admitted of course that developing continuum models for arbitrary structures is no easy task. In this paper we take advantage of the special nature of current Large Space Structures--typified by the NASA-LaRC Evolutionary Model which will be our main concern--which consists of interconnected orthogonal lattice trusses each with identical bays. Using an equivalent one-dimensional Timoshenko beam model, we develop an almost complete continuum model for the evolutionary structure. We do this in stages, beginning only with the main bus as flexible and then going on to make all the appendages also flexible-except for the antenna structure. Based on these models we proceed to develop formulas for mode frequencies and shapes. These are shown to be the roots of the determinant of a matrix of small dimension compared with mode calculations using Finite Element Models, even though the matrix involves transcendental functions. The formulas allow us to study asymptotic properties of the modes and how they evolve as we increase the number of bodies which are treated as flexible. The asymptotics, in fact, become simpler.
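To illustrate the "roots of a small transcendental determinant" idea in the simplest possible setting, the sketch below finds mode roots of the clamped-free Euler-Bernoulli characteristic equation; this is only a stand-in, since the Evolutionary Model work uses a Timoshenko beam formulation and a (still small) matrix determinant rather than this scalar equation.

```python
# Hedged sketch: mode frequencies as roots of a transcendental characteristic equation.
import numpy as np
from scipy.optimize import brentq

def char_eq(bL):
    # Clamped-free Euler-Bernoulli beam: cos(bL)*cosh(bL) + 1 = 0.
    return np.cos(bL) * np.cosh(bL) + 1.0

roots = []
grid = np.linspace(0.1, 15.0, 3000)
for a, b in zip(grid[:-1], grid[1:]):
    if char_eq(a) * char_eq(b) < 0:      # bracket a sign change, then refine
        roots.append(brentq(char_eq, a, b))

# Natural frequencies then follow from omega_n = (bL)^2 * sqrt(E*I / (rho*A*L^4)).
print([round(r, 4) for r in roots])      # ~[1.8751, 4.6941, 7.8548, 10.9955, 14.1372]
```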
Hydrograph separation for karst watersheds using a two-domain rainfall-discharge model
Long, Andrew J.
2009-01-01
Highly parameterized, physically based models may be no more effective at simulating the relations between rainfall and outflow from karst watersheds than are simpler models. Here an antecedent rainfall and convolution model was used to separate a karst watershed hydrograph into two outflow components: one originating from focused recharge in conduits and one originating from slow flow in a porous annex system. In convolution, parameters of a complex system are lumped together in the impulse-response function (IRF), which describes the response of the system to an impulse of effective precipitation. Two parametric functions in superposition approximate the two-domain IRF. The outflow hydrograph can be separated into flow components by forward modeling with isolated IRF components, which provides an objective criterion for separation. As an example, the model was applied to a karst watershed in the Madison aquifer, South Dakota, USA. Simulation results indicate that this watershed is characterized by a flashy response to storms, with a peak response time of 1 day, but that 89% of the flow results from the slow-flow domain, with a peak response time of more than 1 year. This long response time may be the result of perched areas that store water above the main water table. Simulation results indicated that some aspects of the system are stationary but that nonlinearities also exist.
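A hedged sketch of the two-domain convolution idea follows, with gamma-shaped impulse-response functions standing in for the parametric IRFs actually fitted; the parameters and the 11%/89% split are illustrative and the IRFs are truncated at two years for brevity.

```python
# Hedged sketch: separate fast ("conduit") and slow ("annex") flow by superposed IRFs.
import numpy as np
from scipy.stats import gamma

days = np.arange(0, 730)
irf_fast = gamma.pdf(days, a=2.0, scale=1.0)      # peak response ~1 day
irf_slow = gamma.pdf(days, a=3.0, scale=200.0)    # peak response ~400 days (>1 year)

rng = np.random.default_rng(0)
rain = rng.exponential(1.0, size=730) * (rng.random(730) < 0.2)   # effective rainfall

q_fast = np.convolve(rain, 0.11 * irf_fast)[:730]  # fraction routed through conduits
q_slow = np.convolve(rain, 0.89 * irf_slow)[:730]  # fraction routed through the slow domain
print(q_fast.sum(), q_slow.sum(), (q_fast + q_slow).sum())
```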
Modeling the degradation kinetics of ascorbic acid.
Peleg, Micha; Normand, Mark D; Dixon, William R; Goulette, Timothy R
2018-06-13
Most published reports on ascorbic acid (AA) degradation during food storage and heat preservation suggest that it follows first-order kinetics. Deviations from this pattern include Weibullian decay, and exponential drop approaching finite nonzero retention. Almost invariably, the degradation rate constant's temperature-dependence followed the Arrhenius equation, and hence the simpler exponential model too. A formula and freely downloadable interactive Wolfram Demonstration to convert the Arrhenius model's energy of activation, Ea, to the exponential model's c parameter, or vice versa, are provided. The AA's isothermal and non-isothermal degradation can be simulated with freely downloadable interactive Wolfram Demonstrations in which the model's parameters can be entered and modified by moving sliders on the screen. Where the degradation is known a priori to follow first or other fixed order kinetics, one can use the endpoints method, and in principle the successive points method too, to estimate the reaction's kinetic parameters from considerably fewer AA concentration determinations than in the traditional manner. Freeware to do the calculations by either method has been recently made available on the Internet. Once obtained in this way, the kinetic parameters can be used to reconstruct the entire degradation curves and predict those at different temperature profiles, isothermal or dynamic. Comparison of the predicted concentration ratios with experimental ones offers a way to validate or refute the kinetic model and the assumptions on which it is based.
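The conversion referred to can be sketched by matching the local temperature sensitivity of ln k at a reference temperature, which gives c ≈ Ea/(R·Tref²); this first-order equivalence is assumed here to be the relation implemented in the cited Demonstration and should be checked against it before reuse.

```python
# Hedged sketch: convert between Arrhenius Ea and the exponential model's c parameter.
R = 8.314  # J/(mol K)

def ea_to_c(Ea_kJ_per_mol, T_ref_celsius):
    T_ref = T_ref_celsius + 273.15
    return (Ea_kJ_per_mol * 1000.0) / (R * T_ref**2)   # units 1/degC (== 1/K)

def c_to_ea(c_per_degC, T_ref_celsius):
    T_ref = T_ref_celsius + 273.15
    return c_per_degC * R * T_ref**2 / 1000.0          # kJ/mol

print(ea_to_c(80.0, 100.0))    # ~0.069 1/degC for Ea = 80 kJ/mol at 100 degC
print(c_to_ea(0.069, 100.0))   # back to ~80 kJ/mol
```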
Periodic matrix population models: growth rate, basic reproduction number, and entropy.
Bacaër, Nicolas
2009-10-01
This article considers three different aspects of periodic matrix population models. First, a formula for the sensitivity analysis of the growth rate lambda is obtained that is simpler than the one obtained by Caswell and Trevisan. Secondly, the formula for the basic reproduction number R0 in a constant environment is generalized to the case of a periodic environment. Some inequalities between lambda and R0 proved by Cushing and Zhou are also generalized to the periodic case. Finally, we add some remarks on Demetrius' notion of evolutionary entropy H and its relationship to the growth rate lambda in the periodic case.
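As a small worked illustration, the periodic growth rate lambda is the dominant eigenvalue of the product of the seasonal projection matrices over one full period; the two-stage matrices below are toy values assumed for the example.

```python
# Hedged sketch: annual growth rate of a periodic matrix population model.
import numpy as np

B_spring = np.array([[0.2, 1.5],
                     [0.4, 0.6]])
B_autumn = np.array([[0.3, 0.8],
                     [0.5, 0.7]])

annual = B_autumn @ B_spring                       # one full period of projection
lam = np.max(np.abs(np.linalg.eigvals(annual)))    # dominant eigenvalue = growth rate
print(lam)
```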
Comparing the Performance of Two Dynamic Load Distribution Methods
NASA Technical Reports Server (NTRS)
Kale, L. V.
1987-01-01
Parallel processing of symbolic computations on a message-passing multi-processor presents one challenge: to effectively utilize the available processors, the load must be distributed uniformly to all the processors. However, the structure of these computations cannot be predicted in advance. So, static scheduling methods are not applicable. In this paper, we compare the performance of two dynamic, distributed load balancing methods with extensive simulation studies. The two schemes are: the Contracting Within a Neighborhood (CWN) scheme proposed by us, and the Gradient Model proposed by Lin and Keller. We conclude that although simpler, the CWN is significantly more effective at distributing the work than the Gradient Model.
Xie, Haiyi; Tao, Jill; McHugo, Gregory J; Drake, Robert E
2013-07-01
Count data with skewness and many zeros are common in substance abuse and addiction research. Zero-adjusting models, especially zero-inflated models, have become increasingly popular in analyzing this type of data. This paper reviews and compares five mixed-effects Poisson family models commonly used to analyze count data with a high proportion of zeros by analyzing a longitudinal outcome: number of smoking quit attempts from the New Hampshire Dual Disorders Study. The findings of our study indicated that count data with many zeros do not necessarily require zero-inflated or other zero-adjusting models. For rare event counts or count data with small means, a simpler model such as the negative binomial model may provide a better fit. Copyright © 2013 Elsevier Inc. All rights reserved.
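A minimal sketch of the kind of comparison described follows, fitting Poisson, negative binomial, and zero-inflated Poisson models to a synthetic over-dispersed count outcome and comparing AIC; the mixed-effects structure and the New Hampshire Dual Disorders data are not reproduced.

```python
# Hedged sketch: compare count models on synthetic over-dispersed counts with many zeros.
import numpy as np
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = sm.add_constant(x)
mu = np.exp(0.2 + 0.5 * x)
lam = mu * rng.gamma(shape=1.0, scale=1.0, size=n)   # gamma-Poisson mixture => negative binomial
y = rng.poisson(lam)

fits = {
    "Poisson": sm.Poisson(y, X).fit(disp=0),
    "NegBin": sm.NegativeBinomial(y, X).fit(disp=0),
    "ZIP": ZeroInflatedPoisson(y, X, exog_infl=np.ones((n, 1))).fit(disp=0),
}
for name, res in fits.items():
    print(name, round(res.aic, 1))
```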
NASA Technical Reports Server (NTRS)
Rajkumar, T.; Bardina, Jorge; Clancy, Daniel (Technical Monitor)
2002-01-01
Wind tunnels use scale models to characterize aerodynamic coefficients. Wind tunnel testing can be slow and costly due to high personnel overhead and intensive power utilization. Although manual curve fitting can be done, it is highly efficient to use a neural network to define the complex relationship between variables. Numerical simulation of complex vehicles on the wide range of conditions required for flight simulation requires static and dynamic data. Static data at low Mach numbers and angles of attack may be obtained with simpler Euler codes. Static data of stalled vehicles, where zones of flow separation are usually present at higher angles of attack, require Navier-Stokes simulations, which are costly due to the large processing time required to attain convergence. Preliminary dynamic data may be obtained with simpler methods based on correlations and vortex methods; however, accurate prediction of the dynamic coefficients requires complex and costly numerical simulations. A reliable and fast method of predicting complex aerodynamic coefficients for flight simulation is presented using a neural network. The training data for the neural network are derived from numerical simulations and wind-tunnel experiments. The aerodynamic coefficients are modeled as functions of the flow characteristics and the control surfaces of the vehicle. The basic coefficients of lift, drag and pitching moment are expressed as functions of angles of attack and Mach number. The modeled and training aerodynamic coefficients show good agreement. This method shows excellent potential for rapid development of aerodynamic models for flight simulation. Genetic Algorithms (GA) are used to optimize a previously built Artificial Neural Network (ANN) that reliably predicts aerodynamic coefficients. Results indicate that the GA provided an efficient method of optimizing the ANN model to predict aerodynamic coefficients. With the GA, the ANN predicted the aerodynamic coefficients to an accuracy of about ±10%. In our problem, we would like to get an optimized neural network architecture and minimum data set. This has been accomplished within 500 training cycles of a neural network. After removing training pairs (outliers), the GA has produced much better results. The neural network constructed is a feed-forward neural network with a back-propagation learning mechanism. The main goal has been to free the network design process from constraints of human biases, and to discover better forms of neural network architectures. The automation of the network architecture search by genetic algorithms seems to have been the best way to achieve this goal.
Integrating 3D geological information with a national physically-based hydrological modelling system
NASA Astrophysics Data System (ADS)
Lewis, Elizabeth; Parkin, Geoff; Kessler, Holger; Whiteman, Mark
2016-04-01
Robust numerical models are an essential tool for informing flood and water management and policy around the world. Physically-based hydrological models have traditionally not been used for such applications due to prohibitively large data, time and computational resource requirements. Given recent advances in computing power and data availability, a robust, physically-based hydrological modelling system for Great Britain using the SHETRAN model and national datasets has been created. Such a model has several advantages over less complex systems. Firstly, compared with conceptual models, a national physically-based model is more readily applicable to ungauged catchments, in which hydrological predictions are also required. Secondly, the results of a physically-based system may be more robust under changing conditions such as climate and land cover, as physical processes and relationships are explicitly accounted for. Finally, a fully integrated surface and subsurface model such as SHETRAN offers a wider range of applications compared with simpler schemes, such as assessments of groundwater resources, sediment and nutrient transport and flooding from multiple sources. As such, SHETRAN provides a robust means of simulating numerous terrestrial system processes, which will add physical realism when coupled to the JULES land surface model. A total of 306 catchments spanning Great Britain have been modelled using this system. The standard configuration of this system performs satisfactorily (NSE > 0.5) for 72% of catchments and well (NSE > 0.7) for 48%. Many of the remaining 28% of catchments that performed relatively poorly (NSE < 0.5) are located on the Chalk in the south-east of England. As such, the British Geological Survey 3D geology model for Great Britain (GB3D) has been incorporated, for the first time in any hydrological model, to pave the way for improvements to be made to simulations of catchments with important groundwater regimes. This coupling has involved development of software to allow for easy incorporation of geological information into SHETRAN for any model setup. The addition of more realistic subsurface representation following this approach is shown to greatly improve model performance in areas dominated by groundwater processes. The resulting modelling system has great potential to be used as a resource at national, regional and local scales in an array of different applications, including climate change impact assessments, land cover change studies and integrated assessments of groundwater and surface water resources.
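The performance thresholds quoted above are Nash-Sutcliffe efficiency (NSE) values; the following minimal sketch shows how NSE is typically computed from observed and simulated flow series. The array names and values are placeholders, not SHETRAN output.

# Minimal sketch: Nash-Sutcliffe efficiency for one catchment.
# NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2); 1 is a perfect fit.
import numpy as np

def nse(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Placeholder flow series (m^3/s); real use would load gauged and modelled flows.
obs = np.array([1.2, 3.4, 2.8, 5.1, 4.0])
sim = np.array([1.0, 3.0, 3.1, 4.8, 4.3])
print(f"NSE = {nse(obs, sim):.2f}")   # > 0.5 "satisfactory", > 0.7 "well" in the text above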
NASA Astrophysics Data System (ADS)
Everett, R. A.; Packer, A. M.; Kuang, Y.
2014-04-01
Androgen deprivation therapy is a common treatment for advanced or metastatic prostate cancer. Like the normal prostate, most tumors depend on androgens for proliferation and survival but often develop treatment resistance. Hormonal treatment causes many undesirable side effects which significantly decrease patients' quality of life. Intermittently applying androgen deprivation in cycles reduces the total time patients experience these negative effects and may reduce the selective pressure for resistance. We extend an existing model which used measurements of patient testosterone levels to accurately fit measured serum prostate-specific antigen (PSA) levels. We test the model's predictive accuracy, using only a subset of the data to find parameter values. The results are compared with those of an existing piecewise linear model which does not use testosterone as an input. Since the actual treatment protocol is to re-apply therapy when PSA levels recover beyond some threshold value, we develop a second method for predicting the PSA levels. Based on a small set of data from seven patients, our results showed that the piecewise linear model produced slightly more accurate results, while the two predictive methods were comparable. This suggests that a simpler model may be more useful for prediction than a more biologically detailed model, although further research is needed before mathematical models are implemented as a predictive tool in a clinical setting. Nevertheless, both models are an important step in this direction.
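As a rough illustration of the piecewise linear alternative mentioned above (not the authors' model), the sketch below fits separate linear PSA trends to on-treatment and off-treatment intervals and extrapolates from the current regime; the time points, PSA values, treatment flags, and threshold are all invented.

# Illustrative sketch: separate linear PSA trends for on- and off-treatment
# phases of intermittent androgen deprivation therapy. All data are invented.
import numpy as np

t = np.array([0, 2, 4, 6, 8, 10, 12, 14], dtype=float)        # months
psa = np.array([8.0, 5.5, 3.0, 1.2, 2.0, 3.5, 5.2, 7.1])       # ng/mL
on_treatment = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=bool)

# One slope/intercept per regime: PSA falls during therapy, rises off therapy.
slope_on, intercept_on = np.polyfit(t[on_treatment], psa[on_treatment], 1)
slope_off, intercept_off = np.polyfit(t[~on_treatment], psa[~on_treatment], 1)

def predict_psa(time, currently_on):
    """Extrapolate PSA using the linear trend of the current treatment regime."""
    if currently_on:
        return slope_on * time + intercept_on
    return slope_off * time + intercept_off

# For example, estimate when off-treatment PSA crosses a hypothetical 10 ng/mL threshold.
threshold = 10.0
t_cross = (threshold - intercept_off) / slope_off
print(f"off-treatment PSA reaches {threshold} ng/mL near month {t_cross:.1f}")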
Improving Flood Predictions in Data-Scarce Basins
NASA Astrophysics Data System (ADS)
Vimal, Solomon; Zanardo, Stefano; Rafique, Farhat; Hilberts, Arno
2017-04-01
Flood modeling methodology at Risk Management Solutions Ltd. has evolved over several years with the development of continental-scale flood risk models spanning most of Europe, the United States and Japan. Pluvial (rain-fed) and fluvial (river-fed) flood maps represent the basis for the assessment of regional flood risk. These maps are derived by solving the 1D energy balance equation for river routing and the 2D shallow water equations (SWE) for overland flow. The models are run on high-performance computing resources with GPU-based solvers, since simulation times in such continental-scale modeling are large. The results are validated against data from authorities and business partners, and have been used in the insurance industry for many years. While this methodology has proven extremely effective in regions where the quality and availability of data are high, its application is very challenging in other regions where data are scarce. This is generally the case for low- and middle-income countries, where simpler approaches are needed for flood risk modeling and assessment. In this study we explore new methods to make use of modeling results obtained in data-rich contexts to improve predictive ability in data-scarce contexts. As an example, based on our modeled flood maps in data-rich countries, we identify statistical relationships between flood characteristics and topographic and climatic indicators, and test their generalization across physical domains. Moreover, we apply the Height Above Nearest Drainage (HAND) approach to estimate "probable" saturated areas for different return-period flood events as functions of basin characteristics. This work falls into the well-established research field of Predictions in Ungauged Basins.
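A toy sketch of the HAND idea referenced above: each cell's elevation minus the elevation of its nearest drainage cell. For brevity the "nearest" drainage cell is found with a Euclidean distance transform rather than by tracing flow paths, so this is only a rough approximation; the small grid and drainage line are invented.

# Toy sketch of Height Above Nearest Drainage (HAND) on a tiny grid.
# Real HAND follows flow directions to the drainage network; a Euclidean
# nearest-drainage lookup is used here only to keep the example short.
import numpy as np
from scipy.ndimage import distance_transform_edt

dem = np.array([[12.0, 11.0, 10.0, 11.5],
                [11.0,  9.0,  8.0, 10.0],
                [10.5,  8.5,  7.0,  9.0],
                [10.0,  8.0,  6.5,  8.5]])   # invented elevations (m)
drainage = np.zeros_like(dem, dtype=bool)
drainage[:, 2] = True                        # assume the river runs down column 2

# Indices of the nearest drainage cell for every grid cell.
dist, indices = distance_transform_edt(~drainage, return_indices=True)
row_idx, col_idx = indices
hand = dem - dem[row_idx, col_idx]           # height above nearest drainage
print(hand)
# Cells with small HAND values would be flagged as "probably saturated"
# for a given return-period flood event, per the approach described above.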
Mechanical alloying of a hydrogenation catalyst used for the remediation of contaminated compounds
NASA Technical Reports Server (NTRS)
Quinn, Jacqueline W. (Inventor); Geiger, Cherie L. (Inventor); Aitken, Brian S. (Inventor); Clausen, Christian A. (Inventor)
2012-01-01
A hydrogenation catalyst including a base material coated with a catalytic metal is made using mechanical milling techniques. The hydrogenation catalyst is an excellent catalyst for the dehalogenation of contaminated compounds and the remediation of other industrial compounds. Preferably, the hydrogenation catalyst is a bimetallic particle comprising zero-valent metal particles coated with a catalytic material. The mechanical milling technique is simpler and cheaper than previously used methods for producing hydrogenation catalysts.
Mechanical alloying of a hydrogenation catalyst used for the remediation of contaminated compounds
NASA Technical Reports Server (NTRS)
Quinn, Jacqueline W. (Inventor); Aitken, Brian S. (Inventor); Clausen, Christian A. (Inventor); Geiger, Cherie L. (Inventor)
2010-01-01
A hydrogenation catalyst including a base material coated with a catalytic metal is made using mechanical milling techniques. The hydrogenation catalyst is an excellent catalyst for the dehalogenation of contaminated compounds and the remediation of other industrial compounds. Preferably, the hydrogenation catalyst is a bimetallic particle comprising zero-valent metal particles coated with a catalytic material. The mechanical milling technique is simpler and cheaper than previously used methods for producing hydrogenation catalysts.
Crystallization of bovine insulin on a flow-free droplet-based platform
NASA Astrophysics Data System (ADS)
Chen, Fengjuan; Du, Guanru; Yin, Di; Yin, Ruixue; Zhang, Hongbo; Zhang, Wenjun; Yang, Shih-Mo
2017-03-01
Crystallization is an important process in the pharmaceutical manufacturing industry. In this work, we report a study creating zinc-free crystals of bovine insulin on a flow-free droplet-based platform that we previously developed. The benefit of this platform is its promise of producing a single crystal type in a simpler and more stable environment, with high throughput. The experimental results show that the bovine insulin forms a rhombic dodecahedral shape and that the coefficient of variation (CV) in crystal size is less than 5%. These results are very promising for insulin production.
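The size-uniformity claim above is a coefficient of variation; a minimal sketch of how such a CV would be computed from measured crystal sizes follows. The measurements are placeholders, not the reported data.

# Minimal sketch: coefficient of variation (CV) of measured crystal sizes.
import numpy as np

sizes_um = np.array([48.0, 50.5, 49.2, 51.0, 50.1, 49.8])   # hypothetical sizes (micrometres)
cv = sizes_um.std(ddof=1) / sizes_um.mean() * 100            # sample std / mean, in percent
print(f"CV = {cv:.1f}%   (the study reports CV < 5%)")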
Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel; Wang, Z. J.
2004-01-01
A new, high-order, conservative, and efficient method for conservation laws on unstructured grids is developed. The concept of discontinuous, high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and Spectral Volume (SV) methods, but while those methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. A discussion of the Discontinuous Spectral Difference (SD) method, the locations of the unknowns and flux points, and numerical results are also presented.
Recent New Ideas and Directions for Space-Based Nulling Interferometry
NASA Technical Reports Server (NTRS)
Serabyn, Eugene (Gene)
2004-01-01
This document is composed of two viewgraph presentations. The first is entitled "Recent New Ideas and Directions for Space-Based Nulling Interferometry." It reviews our understanding of interferometry compared to a year or so ago: (1) simpler options have been identified; (2) a degree of flexibility is possible, allowing switching (or degradation) between some options; (3) it is not necessary to define every component to the exclusion of all other possibilities; and (4) MIR fibers are becoming a reality. The second, entitled "The Fiber Nuller," reviews the idea of combining beams in a fiber instead of at a beamsplitter.
The Plasma Simulation Code: A modern particle-in-cell code with patch-based load-balancing
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Fox, William; Abbott, Stephen; Ahmadi, Narges; Maynard, Kristofor; Wang, Liang; Ruhl, Hartmut; Bhattacharjee, Amitava
2016-08-01
This work describes the Plasma Simulation Code (PSC), an explicit, electromagnetic particle-in-cell code with support for different-order particle shape functions. We review the basic components of the particle-in-cell method as well as the computational architecture of the PSC code that allows support for modular algorithms and data structures. We then describe and analyze in detail a distinguishing feature of PSC: patch-based load balancing using space-filling curves, which is shown to lead to major efficiency gains over unbalanced methods and over a previously used, simpler balancing method.
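As a rough sketch of the load-balancing idea described above (not PSC's actual implementation), the following orders a grid of 2D patches along a Morton (Z-order) space-filling curve and then splits the ordered list into contiguous chunks of roughly equal total particle count; the grid size, per-patch work, and rank count are invented.

# Illustrative sketch: patch-based load balancing along a space-filling curve.
# Patches are ordered by Morton (Z-order) index, then the ordered list is cut
# into contiguous chunks of roughly equal total work (particle count).
import numpy as np

def morton_index(ix, iy, bits=16):
    """Interleave the bits of (ix, iy) to obtain a Z-order curve index."""
    z = 0
    for b in range(bits):
        z |= ((ix >> b) & 1) << (2 * b) | ((iy >> b) & 1) << (2 * b + 1)
    return z

rng = np.random.default_rng(2)
nx = ny = 8                                               # 8 x 8 grid of patches
patches = [(ix, iy) for ix in range(nx) for iy in range(ny)]
work = {p: int(rng.integers(100, 10_000)) for p in patches}   # particles per patch

ordered = sorted(patches, key=lambda p: morton_index(*p))     # space-filling order
n_ranks = 4
target = sum(work.values()) / n_ranks

# Greedy split of the Morton-ordered patch list into contiguous chunks.
assignment, rank, load = {}, 0, 0
for p in ordered:
    if load >= target and rank < n_ranks - 1:
        rank, load = rank + 1, 0
    assignment[p] = rank
    load += work[p]

for r in range(n_ranks):
    total = sum(work[p] for p, a in assignment.items() if a == r)
    print(f"rank {r}: {total} particles")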
Precision Laser Development for Interferometric Space Missions NGO, SGO, and GRACE Follow-On
NASA Technical Reports Server (NTRS)
Numata, Kenji; Camp, Jordan
2011-01-01
Optical fiber and semiconductor laser technologies have evolved dramatically over the last decade due to the increased demands of optical communications. We are developing a laser (master oscillator) and optical amplifier based on those technologies for interferometric space missions, including the gravitational-wave missions NGO/SGO (formerly LISA) and the climate-monitoring mission GRACE Follow-On, by fully utilizing these mature wave-guided optics technologies. In space, where a simpler and more reliable system is preferred, wave-guided components are advantageous over bulk, crystal-based, free-space lasers such as the NPRO (Nonplanar Ring Oscillator) and bulk-crystal amplifiers.