NASA Astrophysics Data System (ADS)
Zhai, Yu; Li, Hui; Le Roy, Robert J.
2018-04-01
Spectroscopically accurate potential energy surfaces (PESs) are fundamental for explaining and predicting the infrared and microwave spectra of van der Waals (vdW) complexes, and the model used for the potential energy function is critically important for providing accurate, robust and portable analytical PESs. The Morse/Long-Range (MLR) model has proved to be one of the most general, flexible and accurate one-dimensional (1D) model potentials: it has physically meaningful parameters, is smooth and differentiable everywhere to all orders, and extrapolates sensibly at both long and short range. The Multi-Dimensional Morse/Long-Range (mdMLR) potential energy model described herein is based on that 1D MLR model and has proved effective and accurate in the potentiology of various types of vdW complexes. In this paper, we review the current status of development of the mdMLR model and its application to vdW complexes. The future of the mdMLR model is also discussed. This review can serve as a tutorial for the construction of an mdMLR PES.
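As background for readers unfamiliar with the 1D model, the MLR potential is commonly written as below (a sketch following the MLR literature: D_e is the well depth, r_e the equilibrium distance, and u_LR the theoretically known long-range tail; the precise constraints on the exponent polynomial β(r) vary between applications):

```latex
V_{\mathrm{MLR}}(r) = D_e\left[1-\frac{u_{\mathrm{LR}}(r)}{u_{\mathrm{LR}}(r_e)}\,
  e^{-\beta(r)\,y_p(r)}\right]^2,
\qquad
y_p(r)=\frac{r^p-r_e^p}{r^p+r_e^p},
\qquad
u_{\mathrm{LR}}(r)=\sum_n \frac{C_n}{r^n},
```

so that with the standard constraint β(∞) = ln[2D_e/u_LR(r_e)], the potential approaches D_e − u_LR(r) at long range, which is what gives the model its sensible extrapolation behavior.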
Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network
NASA Astrophysics Data System (ADS)
Yang, Bin
2017-07-01
Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time-scale network traffic forecasting, and an artificial bee colony (ABC) algorithm is proposed to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time-scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time-scale network traffic measurement data very accurately.
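To make the core idea concrete, here is a minimal, hypothetical Python sketch of a complex-valued neuron layer with a split (real/imaginary) activation, one common CVNN construction; the layer sizes, the split-tanh choice, and all names are our illustrative assumptions, not the authors' architecture, and the ABC parameter search is not shown:

```python
import numpy as np

def split_tanh(z: np.ndarray) -> np.ndarray:
    """Split activation: apply tanh to real and imaginary parts separately."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def cvnn_forward(x: np.ndarray, W1: np.ndarray, W2: np.ndarray) -> complex:
    """One hidden layer with complex weights; returns a complex scalar prediction."""
    h = split_tanh(W1 @ x)      # hidden layer with split activation
    return (W2 @ h).item()      # linear complex read-out

# Toy usage: predict the next traffic sample from a window of 4 past samples.
rng = np.random.default_rng(0)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
W1 = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))
W2 = rng.standard_normal((1, 8)) + 1j * rng.standard_normal((1, 8))
print(cvnn_forward(x, W1, W2))
```

In a real forecaster, the weights would be tuned by the ABC search (or gradient descent) against a windowed traffic series rather than drawn at random.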
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
Large eddy simulation modeling of particle-laden flows in complex terrain
NASA Astrophysics Data System (ADS)
Salesky, S.; Giometto, M. G.; Chamecki, M.; Lehning, M.; Parlange, M. B.
2017-12-01
The transport, deposition, and erosion of heavy particles over complex terrain in the atmospheric boundary layer is an important process for hydrology, air quality forecasting, biology, and geomorphology. However, in situ observations can be challenging in complex terrain due to spatial heterogeneity. Furthermore, there is a need to develop numerical tools that can accurately represent the physics of these multiphase flows over complex surfaces. We present a new numerical approach to accurately model the transport and deposition of heavy particles in complex terrain using large eddy simulation (LES). Particle transport is represented through solution of the advection-diffusion equation including terms that represent gravitational settling and inertia. The particle conservation equation is discretized in a cut-cell finite volume framework in order to accurately enforce mass conservation. Simulation results will be validated with experimental data, and numerical considerations required to enforce boundary conditions at the surface will be discussed. Applications will be presented in the context of snow deposition and transport, as well as urban dispersion.
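One common form of the filtered particle conservation equation solved in such LES frameworks is shown below (C is particle concentration, ũ the resolved velocity, w_s the settling velocity along the vertical coordinate x₃, and κ_sgs the subgrid diffusivity; the inertial correction terms and the exact closure used by the authors may differ):

```latex
\frac{\partial C}{\partial t}
+ \nabla \cdot (\tilde{\mathbf{u}}\, C)
- \frac{\partial (w_s C)}{\partial x_3}
= \nabla \cdot \left( \kappa_{\mathrm{sgs}} \nabla C \right).
```

The cut-cell finite-volume discretization mentioned in the abstract is what lets this budget close exactly on grid cells intersected by the terrain surface.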
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhleh, Luay
I proposed to develop computationally efficient tools for accurate detection and reconstruction of microbes' complex evolutionary mechanisms, thus enabling rapid and accurate annotation, analysis and understanding of their genomes. To achieve this goal, I proposed to address three aspects. (1) Mathematical modeling. A major challenge facing the accurate detection of HGT is distinguishing between such events on the one hand and other events that have similar "effects" on the other. I proposed to develop a novel mathematical approach for distinguishing among these events. Further, I proposed to develop a set of novel optimization criteria for the evolutionary analysis of microbial genomes in the presence of these complex evolutionary events. (2) Algorithm design. In this aspect of the project, I proposed to develop an array of efficient and accurate algorithms for analyzing microbial genomes based on the formulated optimization criteria. Further, I proposed to test the viability of the criteria and the accuracy of the algorithms in an experimental setting using both synthetic and biological data. (3) Software development. I proposed that the final outcome be a suite of software tools implementing the mathematical models as well as the algorithms developed.
Comparison of alternative designs for reducing complex neurons to equivalent cables.
Burke, R E
2000-01-01
Reduction of the morphological complexity of actual neurons into accurate, computationally efficient surrogate models is an important problem in computational neuroscience. The present work explores the use of two morphoelectrotonic transformations, somatofugal voltage attenuation (AT cables) and signal propagation delay (DL cables), as bases for construction of electrotonically equivalent cable models of neurons. In theory, the AT and DL cables should provide more accurate lumping of membrane regions that have the same transmembrane potential than the familiar equivalent cables that are based only on somatofugal electrotonic distance (LM cables). In practice, AT and DL cables indeed provided more accurate simulations of the somatic transient responses produced by fully branched neuron models than LM cables. This was the case in the presence of a somatic shunt as well as when membrane resistivity was uniform.
A pairwise maximum entropy model accurately describes resting-state human brain networks
Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki
2013-01-01
The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
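For concreteness, the pairwise maximum entropy (Ising-type) distribution over binarized regional activity patterns σ = (σ₁, …, σ_N), σᵢ ∈ {−1, +1}, takes the standard form shown below, where hᵢ encodes region-specific activity rates and Jᵢⱼ the pairwise interactions fitted to the fMRI data:

```latex
P(\sigma) = \frac{1}{Z} \exp\!\left( \sum_i h_i \sigma_i
  + \frac{1}{2} \sum_{i \neq j} J_{ij}\, \sigma_i \sigma_j \right),
\qquad
Z = \sum_{\sigma} \exp\!\left( \sum_i h_i \sigma_i
  + \frac{1}{2} \sum_{i \neq j} J_{ij}\, \sigma_i \sigma_j \right).
```

The fitted Jᵢⱼ are the "functional interactions" that the study compares against anatomical connectivity.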
Fürnstahl, Philipp; Vlachopoulos, Lazaros; Schweizer, Andreas; Fucentese, Sandro F; Koch, Peter P
2015-08-01
The accurate reduction of tibial plateau malunions can be challenging without guidance. In this work, we report on a novel technique that combines 3-dimensional computer-assisted planning with patient-specific surgical guides for improving the reliability and accuracy of complex intraarticular corrective osteotomies. Preoperative planning based on 3-dimensional bone models was performed to simulate fragment mobilization and reduction in 3 cases. Surgical implementation of the preoperative plan using patient-specific cutting and reduction guides was evaluated; benefits and limitations of the approach were identified and discussed. The preliminary results are encouraging and show that complex, intraarticular corrective osteotomies can be accurately performed with this technique. For selected patients with complex malunions around the tibial plateau, this method might be an attractive option, with the potential to facilitate achieving the most accurate correction possible.
NASA Astrophysics Data System (ADS)
Wray, Timothy J.
Computational fluid dynamics (CFD) is routinely used in performance prediction and design of aircraft, turbomachinery, automobiles, and in many other industrial applications. Despite its wide range of use, deficiencies in its prediction accuracy still exist. One critical weakness is the accurate simulation of complex turbulent flows using the Reynolds-Averaged Navier-Stokes equations in conjunction with a turbulence model. The goal of this research has been to develop an eddy viscosity type turbulence model to increase the accuracy of flow simulations for mildly separated flows, flows with rotation and curvature effects, and flows with surface roughness. It is accomplished by developing a new zonal one-equation turbulence model which relies heavily on the flow physics; it is now known in the literature as the Wray-Agarwal one-equation turbulence model. The effectiveness of the new model is demonstrated by comparing its results with those obtained by the industry-standard one-equation Spalart-Allmaras model, the two-equation Shear-Stress-Transport (SST) k-ω model, and experimental data. Results for subsonic, transonic, and supersonic flows in and about complex geometries are presented. It is demonstrated that the Wray-Agarwal model can provide the industry and CFD researchers an accurate, efficient, and reliable turbulence model for the computation of a large class of complex turbulent flows.
Sun, Lifan; Ji, Baofeng; Lan, Jian; He, Zishu; Pu, Jiexin
2017-01-01
The key to successful maneuvering complex extended object tracking (MCEOT) using range extent measurements provided by high resolution sensors lies in accurate and effective modeling of both the extension dynamics and the centroid kinematics. During object maneuvers, the extension dynamics of an object with a complex shape is highly coupled with the centroid kinematics. However, this difficult but important problem is rarely considered and solved explicitly. In view of this, this paper proposes a general approach to modeling a maneuvering complex extended object based on Minkowski sum, so that the coupled turn maneuvers in both the centroid states and extensions can be described accurately. The new model has a concise and unified form, in which the complex extension dynamics can be simply and jointly characterized by multiple simple sub-objects’ extension dynamics based on Minkowski sum. The proposed maneuvering model fits range extent measurements very well due to its favorable properties. Based on this model, an MCEOT algorithm dealing with motion and extension maneuvers is also derived. Two different cases of the turn maneuvers with known/unknown turn rates are specifically considered. The proposed algorithm which jointly estimates the kinematic state and the object extension can also be easily implemented. Simulation results demonstrate the effectiveness of the proposed modeling and tracking approaches. PMID:28937629
Calibration of 3D ALE finite element model from experiments on friction stir welding of lap joints
NASA Astrophysics Data System (ADS)
Fourment, Lionel; Gastebois, Sabrina; Dubourg, Laurent
2016-10-01
In order to support the design of a process as complex as Friction Stir Welding (FSW) for the aeronautic industry, numerical simulation software requires (1) an efficient and accurate Finite Element (F.E.) formulation that allows predicting welding defects, (2) proper modeling of the thermo-mechanical complexity of the FSW process, and (3) calibration of the F.E. model against accurate measurements from FSW experiments. This work uses a parallel ALE formulation developed in the Forge® F.E. code to model the different possible defects (flashes and wormholes), while pin and shoulder threads are modeled by a new friction law at the tool/material interface. The FSW experiments require a complex tool with a scrolled shoulder, which is instrumented to provide sensitive thermal data close to the joint. Calibration of the unknown material thermal coefficients, constitutive equation parameters, and friction model from measured forces, torques and temperatures is carried out using two F.E. models, Eulerian and ALE, to reach a satisfactory agreement assessed by the proper sensitivity of the simulation to process parameters.
Predicting Deforestation Patterns in Loreto, Peru from 2000-2010 Using a Nested GLM Approach
NASA Astrophysics Data System (ADS)
Vijay, V.; Jenkins, C.; Finer, M.; Pimm, S.
2013-12-01
Loreto is the largest province in Peru, covering about 370,000 km². Because of its remote location in the Amazonian rainforest, it is also one of the most sparsely populated. Though a majority of the region remains covered by forest, deforestation is being driven by human encroachment through industrial activities and the spread of colonization and agriculture. The importance of accurate predictive modeling of deforestation has spawned an extensive body of literature on the topic. We present a nested GLM approach based on predictions of deforestation from 2000 to 2010, using variables representing the expected drivers of deforestation. Models were constructed using changes from 2000 to 2005 and tested against data for 2005 to 2010. The most complex model, which included transportation variables (roads and navigable rivers), spatial contagion processes, population centers, and industrial activities, performed better in predicting the 2005 to 2010 changes (75.8% accurate) than did a simpler model using only transportation variables (69.2% accurate). Finally, we contrast the GLM approach with a more complex spatially articulated model.
Cross hole GPR traveltime inversion using a fast and accurate neural network as a forward model
NASA Astrophysics Data System (ADS)
Mejer Hansen, Thomas
2017-04-01
Probabilistically formulated inverse problems can be solved using Monte Carlo based sampling methods. In principle, both advanced prior information, such as that based on geostatistics, and complex non-linear forward physical models can be considered. In practice, however, these methods can be associated with huge computational costs that limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This introduces a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival travel time inversion of cross hole ground-penetrating radar (GPR) data. An accurate forward model, based on 2D full-waveform modeling followed by automatic travel time picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the full forward model, and considerably faster, and more accurate, than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of the types of inverse problems that can be solved using non-linear Monte Carlo sampling techniques.
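The workflow lends itself to a compact sketch. Below is a minimal, hypothetical Python illustration (our own, not the authors' code) of Metropolis sampling in which a fast surrogate stands in for the expensive forward solver and its modeling error is folded into the likelihood as an additional variance term; the function names and the independent-Gaussian error assumption are ours:

```python
import numpy as np

def metropolis_with_surrogate(d_obs, surrogate, sample_prior, perturb,
                              sigma_data, sigma_model, n_iter=10000):
    """Metropolis sampling where the forward response comes from a fast surrogate.

    The surrogate's modeling error is treated as independent Gaussian noise and
    simply added to the data-error variance (a simplifying assumption).
    m is a NumPy array of model parameters; sample_prior/perturb supply and
    perturb such arrays.
    """
    sigma2 = sigma_data**2 + sigma_model**2   # combined data + modeling error
    m = sample_prior()
    logL = -0.5 * np.sum((d_obs - surrogate(m))**2) / sigma2
    samples = []
    for _ in range(n_iter):
        m_new = perturb(m)
        logL_new = -0.5 * np.sum((d_obs - surrogate(m_new))**2) / sigma2
        if np.log(np.random.rand()) < logL_new - logL:   # accept/reject step
            m, logL = m_new, logL_new
        samples.append(m.copy())
    return np.array(samples)
```

The three-orders-of-magnitude speed-up reported in the abstract comes from `surrogate(m)` costing microseconds where a full-waveform solve costs seconds.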
An improved switching converter model using discrete and average techniques
NASA Technical Reports Server (NTRS)
Shortt, D. J.; Lee, F. C.
1982-01-01
The nonlinear modeling and analysis of dc-dc converters has been done by averaging and discrete-sampling techniques. The averaging technique is simple, but inaccurate as the modulation frequencies approach the theoretical limit of one-half the switching frequency. The discrete technique is accurate even at high frequencies, but is very complex and cumbersome. An improved model is developed by combining the aforementioned techniques. This new model is easy to implement in circuit and state variable forms and is accurate to the theoretical limit.
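For reference, the averaging step referred to here is the standard state-space average over the two switch topologies (A₁, B₁) and (A₂, B₂), weighted by the duty ratio d:

```latex
\dot{\mathbf{x}} = \big[\, d\,A_1 + (1-d)\,A_2 \,\big]\,\mathbf{x}
                 + \big[\, d\,B_1 + (1-d)\,B_2 \,\big]\,\mathbf{u}.
```

This average is accurate at low modulation frequencies but degrades as the modulation frequency approaches half the switching frequency, which is exactly the regime the combined discrete/average model is designed to handle.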
A novel model for estimating organic chemical bioconcentration in agricultural plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hung, H.; Mackay, D.; Di Guardo, A.
1995-12-31
There is increasing recognition that much human and wildlife exposure to organic contaminants can be traced through the food chain to bioconcentration in vegetation. For risk assessment, there is a need for an accurate model to predict organic chemical concentrations in plants. Existing models range from relatively simple correlations of concentrations using octanol-water or octanol-air partition coefficients, to complex models involving extensive physiological data. To satisfy the need for a relatively accurate model of intermediate complexity, a novel approach has been devised to predict organic chemical concentrations in agricultural plants as a function of soil and air concentrations, without the need for extensive plant physiological data. The plant is treated as three compartments, namely leaves, roots, and stems (including fruit and seeds). Required inputs are data readily available from the literature: chemical properties; the volume, density and composition of each compartment; the metabolic and growth rates of the plant; and readily obtainable environmental conditions at the site. Results calculated from the model are compared with observed and experimentally determined concentrations. It is suggested that the model, which includes a physiological database for agricultural plants, gives acceptably accurate predictions of chemical partitioning between plants, air and soil.
Performance characterization of complex fuel port geometries for hybrid rocket fuel grains
NASA Astrophysics Data System (ADS)
Bath, Andrew
This research investigated the 3D printing and burning of fuel grains with complex geometry and the development of software capable of modeling and predicting the regression of a cross-section of these complex fuel grains. The software predicted the geometry to a fair degree of accuracy, especially when enhanced corner rounding was enabled. The model does have some drawbacks: it is relatively slow and does not perfectly predict the regression. With corner rounding turned off, however, the model becomes much faster; although less accurate, it still predicts a reasonably accurate resulting burn geometry and is fast enough to be used for performance tuning or genetic algorithms. In addition to the modeling method, preliminary investigations into the burning behavior of fuel grains with a helical flow path were performed. The helix fuel grains have a regression rate nearly 3 times that of any other fuel grain geometry, primarily due to the enhancement of the friction coefficient between the flow and the flow path.
On the dimension of complex responses in nonlinear structural vibrations
NASA Astrophysics Data System (ADS)
Wiebe, R.; Spottswood, S. M.
2016-07-01
The ability to accurately model engineering systems under extreme dynamic loads would prove a major breakthrough in many aspects of aerospace, mechanical, and civil engineering. Extreme loads frequently induce both nonlinearities and coupling, which increase the complexity of the response and the computational cost of finite element models. Dimension reduction has recently gained traction and promises the ability to distill dynamic responses down to a minimal dimension without sacrificing accuracy. In this context, the dimensionality of a response is related to the number of modes needed in a reduced-order model to accurately simulate the response. Thus, an important step is characterizing the dimensionality of complex nonlinear responses of structures. In this work, the dimensionality of the nonlinear response of a post-buckled beam is investigated. Significant detail is dedicated to carefully introducing the experiment, the verification of a finite element model, and the dimensionality estimation algorithm, as it is hoped that this system may serve as a benchmark test case. It is shown that, with minor modifications, the method of false nearest neighbors can quantitatively distinguish between the response dimensions of various snap-through, non-snap-through, random, and deterministic loads. The state-space dimension of the nonlinear system in question increased from 2 to 10 as the system response moved from simple, low-level harmonic motion to chaotic snap-through. Beyond the problem studied herein, the techniques developed will serve as a prescriptive guide for developing fast and accurate dimensionally reduced models of nonlinear systems, and eventually as a tool for adaptive dimension reduction in numerical modeling. The results are especially relevant in the aerospace industry for the design of thin structures such as beams, panels, and shells, which are all capable of spatio-temporally complex dynamic responses that are difficult and computationally expensive to model.
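The method of false nearest neighbors is a standard algorithm, so a compact sketch is easy to give. The Python illustration below is our own simplification to the classic Kennel-style distance-ratio criterion with a single tolerance, not the modified method of the paper: a neighbor in an m-dimensional delay embedding is "false" if adding the (m+1)-th coordinate stretches the neighbor distance by more than a tolerance factor.

```python
import numpy as np
from scipy.spatial import cKDTree

def fnn_fraction(x, m, tau=1, rtol=15.0):
    """Fraction of false nearest neighbors at embedding dimension m.

    Builds the delay embedding, finds each point's nearest neighbor, and flags
    pairs whose separation in the extra (m+1)-th coordinate exceeds rtol times
    their m-dimensional distance.
    """
    n = len(x) - m * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    dist, idx = cKDTree(emb).query(emb, k=2)   # k=2: first hit is the point itself
    d_m, j = dist[:, 1], idx[:, 1]
    extra = np.abs(x[m * tau : m * tau + n] - x[j + m * tau])
    valid = d_m > 0
    return np.mean(extra[valid] / d_m[valid] > rtol)

# Usage sketch: the dimension at which the FNN fraction drops to ~0 estimates
# the state-space dimension of the response.
t = np.arange(5000) * 0.05
x = np.sin(t) + 0.5 * np.sin(2.2 * t)
for m in range(1, 6):
    print(m, fnn_fraction(x, m))
```

For the quasi-periodic toy signal above, the fraction collapses once m exceeds the underlying dimension, mirroring how the paper distinguishes low-dimensional harmonic responses from high-dimensional chaotic snap-through.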
Extending the diffuse layer model of surface acidity behavior: I. Model development
Considerable disenchantment exists within the environmental research community concerning our current ability to accurately model surface-complexation-mediated low-porewater-concentration ionic contaminant partitioning with natural surfaces. Several authors attribute this unaccep...
A 3D modeling approach to complex faults with multi-source data
NASA Astrophysics Data System (ADS)
Wu, Qiang; Xu, Hua; Zou, Xukai; Lei, Hongzhuan
2015-04-01
Fault modeling is a very important step in making an accurate and reliable 3D geological model. Typical existing methods demand enough fault data to construct complex fault models; however, it is well known that the available fault data are generally sparse and undersampled. In this paper, we propose a fault-modeling workflow that can integrate multi-source data to construct fault models. For the faults that are not modeled with these data, especially small-scale faults or faults approximately parallel to the sections, we propose a fault deduction method to infer the hanging wall and footwall lines after displacement calculation. Moreover, a fault-cutting algorithm can supplement the available fault points at locations where faults cut each other. Adding fault points in poorly sampled areas not only makes fault model construction more efficient but also reduces manual intervention. By using fault-based interpolation and remeshing of the horizons, an accurate 3D geological model can be constructed. The method can naturally simulate geological structures whether or not the available geological data are sufficient. A concrete example of using the method in Tangshan, China, shows that it can be applied to broad and complex geological areas.
Dissecting innate immune responses with the tools of systems biology.
Smith, Kelly D; Bolouri, Hamid
2005-02-01
Systems biology strives to derive accurate predictive descriptions of complex systems such as innate immunity. The innate immune system is essential for host defense, yet the resulting inflammatory response must be tightly regulated. Current understanding indicates that this system is controlled by complex regulatory networks, which maintain homoeostasis while accurately distinguishing pathogenic infections from harmless exposures. Recent studies have used high throughput technologies and computational techniques that presage predictive models and will be the foundation of a systems level understanding of innate immunity.
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems.
White, Andrew; Tolman, Malachi; Thames, Howard D; Withers, Hubert Rodney; Mason, Kathy A; Transtrum, Mark K
2016-12-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes, it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model.
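Sloppiness is conventionally diagnosed through the eigenvalue spectrum of the Fisher information (or JᵀJ) matrix, whose eigenvalues in sloppy models span many decades roughly uniformly in log. Below is a minimal illustration on a classic toy sloppy model, a sum of two decaying exponentials; this is our own example, not the EGFR or DNA-repair models of the paper:

```python
import numpy as np

def model(params, t):
    """Toy sloppy model: sum of two decaying exponentials."""
    a1, a2, k1, k2 = params
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

def fim_eigenvalues(params, t, eps=1e-6):
    """Eigenvalues of J^T J, with J a central-difference sensitivity matrix."""
    p = np.asarray(params, float)
    J = np.empty((len(t), len(p)))
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps * max(1.0, abs(p[i]))
        J[:, i] = (model(p + dp, t) - model(p - dp, t)) / (2 * dp[i])
    return np.linalg.eigvalsh(J.T @ J)[::-1]   # descending order

t = np.linspace(0, 5, 50)
print(fim_eigenvalues([1.0, 1.0, 1.0, 1.1], t))  # eigenvalues span many decades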
The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems
Tolman, Malachi; Thames, Howard D.; Mason, Kathy A.
2016-01-01
We explore the relationship among experimental design, parameter estimation, and systematic error in sloppy models. We show that the approximate nature of mathematical models poses challenges for experimental design in sloppy models. In many models of complex biological processes, it is unknown which physical mechanisms must be included to explain system behaviors. As a consequence, models are often overly complex, with many practically unidentifiable parameters. Furthermore, which mechanisms are relevant or irrelevant varies among experiments. By selecting complementary experiments, experimental design may inadvertently make details that were omitted from the model become relevant. When this occurs, the model will have a large systematic error and fail to give a good fit to the data. We use a simple hyper-model of model error to quantify a model's discrepancy and apply it to two models of complex biological processes (EGFR signaling and DNA repair) with optimally selected experiments. We find that although parameters may be accurately estimated, the discrepancy in the model renders it less predictive than it was in the sloppy regime where systematic error is small. We introduce the concept of a sloppy system: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. We explore the limits of accurate parameter estimation in sloppy systems and argue that identifying the underlying mechanisms controlling system behavior is better approached by considering a hierarchy of models of varying detail rather than by focusing on parameter estimation in a single model. PMID:27923060
Nunes, Matheus Henrique
2016-01-01
Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects. PMID:27187074
Nunes, Matheus Henrique; Görgens, Eric Bastos
2016-01-01
Tree stem form in native tropical forests is very irregular, posing a challenge to establishing taper equations that can accurately predict the diameter at any height along the stem and subsequently merchantable volume. Artificial intelligence approaches can be useful techniques in minimizing estimation errors within complex variations of vegetation. We evaluated the performance of Random Forest® regression tree and Artificial Neural Network procedures in modelling stem taper. Diameters and volume outside bark were compared to a traditional taper-based equation across a tropical Brazilian savanna, a seasonal semi-deciduous forest and a rainforest. Neural network models were found to be more accurate than the traditional taper equation. Random forest showed trends in the residuals from the diameter prediction and provided the least precise and accurate estimations for all forest types. This study provides insights into the superiority of a neural network, which provided advantages regarding the handling of local effects.
Lattice Boltzmann simulations of immiscible displacement process with large viscosity ratios
NASA Astrophysics Data System (ADS)
Rao, Parthib; Schaefer, Laura
2017-11-01
Immiscible displacement is a key physical mechanism involved in enhanced oil recovery and carbon sequestration processes. This multiphase flow phenomenon involves a complex interplay of viscous, capillary, inertial and wettability effects. The lattice Boltzmann (LB) method is an accurate and efficient technique for modeling and simulating multiphase/multicomponent flows, especially in complex flow configurations and media. In this presentation, we show numerical simulation results of the displacement process in thin, long channels. The results are based on a new pseudo-potential multicomponent LB model with a multiple-relaxation-time (MRT) collision model and an explicit forcing scheme. We demonstrate that the proposed model is capable of accurately simulating displacement processes involving fluids with a wide range of viscosity ratios (>100), and that it also leads to viscosity-independent interfacial tension and the reduction of some important numerical artifacts.
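For context, pseudo-potential (Shan-Chen-type) LB models compute an interparticle force at each lattice node from a density-dependent potential ψ; a commonly used discrete form is shown below (G sets the interaction strength, w_i and e_i are the lattice weights and discrete velocities; the multicomponent MRT variant in this work generalizes this basic single-component form):

```latex
\mathbf{F}(\mathbf{x}) = -G\,\psi(\mathbf{x})
  \sum_i w_i\, \psi\!\left(\mathbf{x} + \mathbf{e}_i\,\Delta t\right)\,\mathbf{e}_i .
```

It is the way this force is injected into the collision step (the explicit forcing scheme) that governs how far the achievable viscosity ratio can be pushed before spurious currents dominate.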
Automated adaptive inference of phenomenological dynamical models.
Daniels, Bryan C; Nemenman, Ilya
2015-08-21
Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved.
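As a much simpler illustration of the general principle at work here, letting the data determine model complexity, the hypothetical sketch below selects a polynomial order by the Bayesian information criterion; the paper's actual approach builds coarse-grained dynamical models, not polynomial fits:

```python
import numpy as np

def select_model_order(x, y, max_order=8):
    """Pick a polynomial order by BIC: complexity grows only if the data warrant it."""
    n, best = len(x), (np.inf, None)
    for k in range(1, max_order + 1):
        coef = np.polyfit(x, y, k)
        rss = np.sum((np.polyval(coef, x) - y) ** 2)
        bic = n * np.log(rss / n) + (k + 1) * np.log(n)   # fit quality + complexity penalty
        if bic < best[0]:
            best = (bic, k)
    return best[1]

rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 40)
y = x**3 - x + 0.05 * rng.standard_normal(40)
print(select_model_order(x, y))   # small, noisy samples favor low-order models
```

With few or noisy data points the penalty term dominates and a coarse model wins; with more data, added complexity pays for itself, which is the adaptive behavior the abstract describes.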
Automated adaptive inference of phenomenological dynamical models
Daniels, Bryan C.; Nemenman, Ilya
2015-01-01
Dynamics of complex systems is often driven by large and intricate networks of microscopic interactions, whose sheer size obfuscates understanding. With limited experimental data, many parameters of such dynamics are unknown, and thus detailed, mechanistic models risk overfitting and making faulty predictions. At the other extreme, simple ad hoc models often miss defining features of the underlying systems. Here we develop an approach that instead constructs phenomenological, coarse-grained models of network dynamics that automatically adapt their complexity to the available data. Such adaptive models produce accurate predictions even when microscopic details are unknown. The approach is computationally tractable, even for a relatively large number of dynamical variables. Using simulated data, it correctly infers the phase space structure for planetary motion, avoids overfitting in a biological signalling system and produces accurate predictions for yeast glycolysis with tens of data points and over half of the interacting species unobserved. PMID:26293508
Evaluating the effectiveness of the MASW technique in a geologically complex terrain
NASA Astrophysics Data System (ADS)
Anukwu, G. C.; Khalil, A. E.; Abdullah, K. B.
2018-04-01
MASW surveys carried out at a number of sites in Pulau Pinang, Malaysia, showed complicated dispersion curves, which consequently made the inversion into a soil shear-velocity model ambiguous. This research work details efforts to identify the source of these complicated dispersion curves. As a starting point, the complexity of the phase velocity spectrum is assumed to be due to either the surveying parameters or the elastic properties of the soil structures. To test the former, the surveying was carried out using different parameters. The complexities persisted for the different surveying parameters, an indication that the elastic properties of the soil structure could be the reason. To exploit this assumption, a synthetic modelling approach was adopted using information from boreholes, the literature, and geologically plausible models. Results suggest that the presence of irregular variation in the stiffness of the soil layers, high stiffness contrasts, and relatively shallow bedrock results in a quite complex f-v spectrum, especially at frequencies lower than 20 Hz, making it difficult to accurately extract the dispersion curve below this frequency. As such, for the MASW technique, especially in complex geological situations as demonstrated, great care should be taken during data processing and inversion to obtain a model that accurately depicts the subsurface.
NASA Astrophysics Data System (ADS)
Hansen, T. M.; Cordua, K. S.
2017-12-01
Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerically complex evaluation of the forward problem with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first-arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution) than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
Turbulence spectra in the noise source regions of the flow around complex surfaces
NASA Technical Reports Server (NTRS)
Olsen, W. A.; Boldman, D. R.
1983-01-01
The complex turbulent flow around three complex surfaces was measured in detail with a hot wire. The measured data include extensive spatial surveys of the mean velocity and turbulence intensity, and measurements of the turbulence spectra and scale length at many locations. The publication of the turbulence data is completed here by a summary of the turbulence spectra measured within the noise source regions of the flow. The results suggest some useful simplifications in modeling the very complex turbulent flow around complex surfaces for aeroacoustic predictive models. The turbulence spectra also show that noise data from scale models of moderate size can be accurately scaled up to full size.
Efficient embedding of complex networks to hyperbolic space via their Laplacian
Alanis-Lobato, Gregorio; Mier, Pablo; Andrade-Navarro, Miguel A.
2016-01-01
The different factors involved in the growth process of complex networks imprint valuable information in their observable topologies. How to exploit this information to accurately predict structural network changes is the subject of active research. A recent model of network growth sustains that the emergence of properties common to most complex systems is the result of certain trade-offs between node birth-time and similarity. This model has a geometric interpretation in hyperbolic space, where distances between nodes abstract this optimisation process. Current methods for network hyperbolic embedding search for node coordinates that maximise the likelihood that the network was produced by the aforementioned model. Here, a different strategy is followed in the form of the Laplacian-based Network Embedding, a simple yet accurate, efficient and data-driven manifold learning approach, which allows for the quick geometric analysis of big networks. Comparisons against existing embedding and prediction techniques highlight its applicability to network evolution and link prediction. PMID:27445157
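Below is a minimal sketch of the generic spectral step behind a Laplacian-based embedding; it is our own simplification (the published method additionally maps these spectral coordinates to hyperbolic radial and angular coordinates using the birth-time/similarity trade-off of the growth model):

```python
import numpy as np

def laplacian_embedding(A, dim=2):
    """Coordinates from the first nontrivial eigenvectors of the normalized
    Laplacian L = I - D^{-1/2} A D^{-1/2} (a generic spectral embedding step)."""
    deg = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg, 1e-12))
    L = np.eye(len(A)) - (d_inv_sqrt[:, None] * A) * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(L)               # ascending eigenvalues
    coords = vecs[:, 1:dim + 1]                  # skip the trivial eigenvector
    theta = np.arctan2(coords[:, 1], coords[:, 0])  # angular coordinate on the disk
    return coords, theta

# Toy usage on a small ring graph (adjacency of an 8-node cycle).
A = np.roll(np.eye(8), 1, axis=1) + np.roll(np.eye(8), -1, axis=1)
coords, theta = laplacian_embedding(A)
print(theta)
```

Because this reduces to one symmetric eigendecomposition, it scales to large networks far more cheaply than likelihood-maximisation embeddings, which is the efficiency argument made in the abstract.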
NASA Astrophysics Data System (ADS)
Bartlett, P. L.; Stelbovics, A. T.; Rescigno, T. N.; McCurdy, C. W.
2007-11-01
Calculations are reported for four-body electron-helium collisions and positron-hydrogen collisions, in the S-wave model, using the time-independent propagating exterior complex scaling (PECS) method. The PECS S-wave calculations for three-body processes in electron-helium collisions compare favourably with previous convergent close-coupling (CCC) and time-dependent exterior complex scaling (ECS) calculations, and exhibit smooth cross section profiles. The PECS four-body double-excitation cross sections are significantly different from CCC calculations and highlight the need for an accurate representation of the resonant helium final-state wave functions when undertaking these calculations. Results are also presented for positron-hydrogen collisions in an S-wave model using an electron-positron potential of V12 = −(8 + (r1 − r2)²)^(−1/2). This model is representative of the full problem, and the results demonstrate that ECS-based methods can accurately calculate scattering, ionization and positronium formation cross sections in this three-body rearrangement collision.
Efficient embedding of complex networks to hyperbolic space via their Laplacian
NASA Astrophysics Data System (ADS)
Alanis-Lobato, Gregorio; Mier, Pablo; Andrade-Navarro, Miguel A.
2016-07-01
The different factors involved in the growth process of complex networks imprint valuable information in their observable topologies. How to exploit this information to accurately predict structural network changes is the subject of active research. A recent model of network growth sustains that the emergence of properties common to most complex systems is the result of certain trade-offs between node birth-time and similarity. This model has a geometric interpretation in hyperbolic space, where distances between nodes abstract this optimisation process. Current methods for network hyperbolic embedding search for node coordinates that maximise the likelihood that the network was produced by the aforementioned model. Here, a different strategy is followed in the form of the Laplacian-based Network Embedding, a simple yet accurate, efficient and data-driven manifold learning approach, which allows for the quick geometric analysis of big networks. Comparisons against existing embedding and prediction techniques highlight its applicability to network evolution and link prediction.
USDA-ARS?s Scientific Manuscript database
Accurate stream topography measurement is important for many ecological applications such as hydraulic modeling and habitat characterization. Habitat complexity measures are often made using total station surveying or visual approximation, which can be subjective and have spatial resolution limitati...
Terrestrial Solar Spectral Modeling Tools and Applications for Photovoltaic Devices: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, D. R.; Emery, K. E.; Gueymard, C.
2002-05-01
This conference paper describes how variations in the terrestrial spectral irradiance incident on photovoltaic devices can be an important consideration in photovoltaic device design and performance. The paper describes three available atmospheric transmission models: MODTRAN, SMARTS2, and SPCTRAL2. We describe the basics of their operation and performance, and their applications in the photovoltaic community. Examples of model input and output data, and comparisons between the model results for each under similar conditions, are presented. The SMARTS2 model is shown to be much easier to use than the complex MODTRAN model while being just as accurate, and more accurate than the historical NREL SPCTRAL2 model.
Developments in deep brain stimulation using time dependent magnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowther, L.J.; Nlebedim, I.C.; Jiles, D.C.
2012-03-07
The effect of head model complexity upon the strength of field in different brain regions for transcranial magnetic stimulation (TMS) has been investigated. Experimental measurements were used to verify the validity of magnetic field calculations and induced electric field calculations for three 3D human head models of varying complexity. Results show the inability for simplified head models to accurately determine the site of high fields that lead to neuronal stimulation and highlight the necessity for realistic head modeling for TMS applications.
Developments in deep brain stimulation using time dependent magnetic fields
NASA Astrophysics Data System (ADS)
Crowther, L. J.; Nlebedim, I. C.; Jiles, D. C.
2012-04-01
The effect of head model complexity upon the strength of field in different brain regions for transcranial magnetic stimulation (TMS) has been investigated. Experimental measurements were used to verify the validity of magnetic field calculations and induced electric field calculations for three 3D human head models of varying complexity. Results show the inability for simplified head models to accurately determine the site of high fields that lead to neuronal stimulation and highlight the necessity for realistic head modeling for TMS applications.
Jihong, Qu
2014-01-01
Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is inherently nonlinear; therefore, achieving an accurate solution to such a complex problem is a very difficult task. This paper presents an interval programming model with a two-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision. PMID:24895663
Ren, Kun; Jihong, Qu
2014-01-01
Wind-hydrothermal power system dispatching has received intensive attention in recent years because it can help develop various reasonable plans to schedule power generation efficiently. However, future data such as wind power output and power load cannot be accurately predicted, and the complex multiobjective scheduling model is inherently nonlinear; therefore, achieving an accurate solution to such a complex problem is a very difficult task. This paper presents an interval programming model with a two-step optimization algorithm to solve the multiobjective dispatching problem. Initially, we represented the future data as interval numbers and simplified the objective function to a linear programming problem to search for feasible, preliminary solutions with which to construct the Pareto set. Then the simulated annealing method was used to search for the optimal solution of the initial model. Thorough experimental results suggest that the proposed method performed reasonably well in terms of both operating efficiency and precision.
NASA Astrophysics Data System (ADS)
Afan, Haitham Abdulmohsin; El-shafie, Ahmed; Mohtar, Wan Hanna Melini Wan; Yaseen, Zaher Mundher
2016-10-01
An accurate model for sediment prediction is a priority for all hydrological researchers. Many conventional methods have shown an inability to achieve an accurate prediction of suspended sediment. These methods are unable to capture the behaviour of sediment transport in rivers due to the complexity, noise, non-stationarity, and dynamism of the sediment pattern. In the past two decades, Artificial Intelligence (AI) and computational approaches have become a remarkable tool for developing accurate models. These approaches are considered a powerful tool for solving any non-linear model, as they can deal easily with large amounts of data and sophisticated models. This paper is a review of all AI approaches that have been applied in sediment modelling. The current research focuses on the development of AI applications in sediment transport. In addition, the review identifies major challenges and opportunities for prospective research. Throughout the literature, complementary models have proved superior to classical modelling.
Phase reconstruction using compressive two-step parallel phase-shifting digital holography
NASA Astrophysics Data System (ADS)
Ramachandran, Prakash; Alex, Zachariah C.; Nelleri, Anith
2018-04-01
The linear relationship between the sample complex object wave and its approximated complex Fresnel field, obtained using single-shot parallel phase-shifting digital holography (PPSDH), is used in a compressive sensing framework, and accurate phase reconstruction is demonstrated. It is shown that the phase reconstruction accuracy of this method is better than that of the compressive-sensing-adapted single-exposure in-line holography (SEOL) method. It is derived that the measurement model of the PPSDH method retains both the real and imaginary parts of the Fresnel field, albeit with an approximation noise, whereas the measurement model of SEOL retains only the real part of the complex Fresnel field, its imaginary part being entirely unavailable. Numerical simulations are performed for CS-adapted PPSDH and CS-adapted SEOL, and it is demonstrated that the phase reconstruction is accurate for CS-adapted PPSDH, which can therefore be used for single-shot digital holographic reconstruction.
System Models and Aging: A Driving Example.
ERIC Educational Resources Information Center
Melichar, Joseph F.
Chronological age is a marker in time but it fails to measure accurately the performance or behavioral characteristics of individuals. This paper models the complexity of aging by using a system model and a human function paradigm. These models help facilitate representation of older adults, integrate research agendas, and enhance remediative…
NASA Astrophysics Data System (ADS)
Hill, James C.; Liu, Zhenping; Fox, Rodney O.; Passalacqua, Alberto; Olsen, Michael G.
2015-11-01
The multi-inlet vortex reactor (MIVR) has been developed to provide a platform for rapid mixing in the application of flash nanoprecipitation (FNP) for manufacturing functional nanoparticles. Unfortunately, commonly used RANS methods are unable to accurately model this complex swirling flow. Large eddy simulations have also been problematic, as they require expensive fine grids to accurately model the flow. These dilemmas led to the strategy of applying a Delayed Detached Eddy Simulation (DDES) method to the vortex reactor. In the current work, the turbulent swirling flow inside a scaled-up MIVR has been investigated using a dynamic DDES model. In the DDES model, the eddy viscosity has a form similar to the Smagorinsky sub-grid viscosity in LES and allows the implementation of a dynamic procedure to determine its coefficient. The complex recirculating back flow near the reactor center has been successfully captured using this dynamic DDES model. Moreover, the simulation results are found to agree with experimental data for mean velocity and Reynolds stresses.
A hand tracking algorithm with particle filter and improved GVF snake model
NASA Astrophysics Data System (ADS)
Sun, Yi-qi; Wu, Ai-guo; Dong, Na; Shao, Yi-zhe
2017-07-01
To solve the problem that accurate information about the hand cannot be obtained by a particle filter alone, a hand tracking algorithm based on a particle filter combined with a skin-color-adaptive gradient vector flow (GVF) snake model is proposed. An adaptive GVF and a skin-color-adaptive external guidance force are introduced into the traditional GVF snake model, guiding the curve to converge quickly to the deep concave regions of the hand contour and capturing the complex hand contour accurately. The algorithm performs a real-time correction of the particle filter parameters, avoiding the particle drift phenomenon. Experimental results show that the proposed algorithm can reduce the root mean square error of hand tracking by 53% and improve the accuracy of hand tracking in the case of complex and moving backgrounds, even with a large range of occlusion.
Simulating immersed particle collisions: the Devil's in the details
NASA Astrophysics Data System (ADS)
Biegert, Edward; Vowinckel, Bernhard; Meiburg, Eckart
2015-11-01
Simulating densely-packed particle-laden flows with any degree of confidence requires accurate modeling of particle-particle collisions. To this end, we investigate a few collision models from the fluids and granular flow communities using sphere-wall collisions, which have been studied by a number of experimental groups. These collisions involve enough complexities--gravity, particle-wall lubrication forces, particle-wall contact stresses, particle-wake interactions--to challenge any collision model. Evaluating the successes and shortcomings of the collision models, we seek improvements in order to obtain more consistent results. We will highlight several implementation details that are crucial for obtaining accurate results.
Efficient computation of the joint sample frequency spectra for multiple populations.
Kamm, John A; Terhorst, Jonathan; Song, Yun S
2017-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity.
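As a point of reference for what "expected SFS" means here: for a single panmictic population of constant size under the standard coalescent, the expected number of sites with i mutant copies in a sample of n sequences reduces to the classical closed form (Fu 1995),

```latex
\mathbb{E}[\xi_i] = \frac{\theta}{i}, \qquad i = 1, \dots, n-1,
```

with θ = 4Nμ the population-scaled mutation rate. The contribution of momi is to make the analogous expectation tractable for complex multi-population histories, where no such simple formula exists.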
Efficient computation of the joint sample frequency spectra for multiple populations
Kamm, John A.; Terhorst, Jonathan; Song, Yun S.
2016-01-01
A wide range of studies in population genetics have employed the sample frequency spectrum (SFS), a summary statistic which describes the distribution of mutant alleles at a polymorphic site in a sample of DNA sequences and provides a highly efficient dimensional reduction of large-scale population genomic variation data. Recently, there has been much interest in analyzing the joint SFS data from multiple populations to infer parameters of complex demographic histories, including variable population sizes, population split times, migration rates, admixture proportions, and so on. SFS-based inference methods require accurate computation of the expected SFS under a given demographic model. Although much methodological progress has been made, existing methods suffer from numerical instability and high computational complexity when multiple populations are involved and the sample size is large. In this paper, we present new analytic formulas and algorithms that enable accurate, efficient computation of the expected joint SFS for thousands of individuals sampled from hundreds of populations related by a complex demographic model with arbitrary population size histories (including piecewise-exponential growth). Our results are implemented in a new software package called momi (MOran Models for Inference). Through an empirical study we demonstrate our improvements to numerical stability and computational complexity. PMID:28239248
A novel medical information management and decision model for uncertain demand optimization.
Bi, Ya
2015-01-01
Accurately planning the procurement volume is an effective measure for controlling medicine inventory cost, but uncertain demand makes it difficult to make accurate decisions on procurement volume. For biomedicines whose demand is sensitive to time and season, fitting the uncertain demand with fuzzy mathematics is clearly better than using general random distribution functions. The objective of this work is to establish a novel medical information management and decision model for uncertain-demand optimization. A novel optimal management and decision model under uncertain demand is presented, based on fuzzy mathematics and a new comprehensively improved particle swarm algorithm. The optimal management and decision model can effectively reduce medicine inventory cost. The proposed improved particle swarm optimization is a simple and effective algorithm that improves the fuzzy inference and hence effectively reduces the computational complexity of the optimal management and decision model. The new model can therefore be used for accurate decisions on procurement volume under uncertain demand.
Multi-scale modeling of tsunami flows and tsunami-induced forces
NASA Astrophysics Data System (ADS)
Qin, X.; Motley, M. R.; LeVeque, R. J.; Gonzalez, F. I.
2016-12-01
The modeling of tsunami flows and tsunami-induced forces in coastal communities, with the constructed environment incorporated, is challenging for many numerical modelers because of the scale and complexity of the physical problem. A two-dimensional (2D) depth-averaged model can be efficient for modeling waves offshore but may not be accurate enough to predict the complex flow, with its transient variation in the vertical direction, around constructed environments on land. On the other hand, a more complex three-dimensional model is much more computationally expensive and can become impractical due to the size of the problem and the meshing requirements near the built environment. In this study, a 2D depth-integrated model and a 3D Reynolds-Averaged Navier-Stokes (RANS) model are built to model a 1:50 model-scale, idealized community, representative of Seaside, OR, USA, for which existing experimental data are available for comparison. Numerical results from the two models are compared with each other as well as with the experimental measurements. Both models predict the flow parameters (water level, velocity, and momentum flux in the vicinity of the buildings) accurately in general, except for the period near the initial impact, where the depth-averaged model can fail to capture the complexities in the flow. Forces predicted using direct integration of the predicted pressure on structural surfaces from the 3D model are compared with forces predicted using momentum flux from the 2D model with the constructed environment, indicating that force prediction from the 2D model is not always reliable in such a complicated case. Force predictions from integration of the pressure are also compared with forces predicted from bare-earth momentum flux calculations to reveal the importance of incorporating the constructed environment in force prediction models.
Accurate complex scaling of three dimensional numerical potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan
2013-05-28
The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can be performed efficiently and accurately. By carrying out an illustrative resonant-state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.
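The similarity-transform approach the authors describe can be illustrated on an ordinary finite-difference grid (the paper itself works in a Daubechies wavelet basis). A minimal sketch, with an assumed scaling angle and a hypothetical model potential:

```python
# Minimal sketch of uniform complex scaling x -> x*exp(i*theta) for a 1-D
# model potential on a finite-difference grid (for illustration only; the
# paper uses a Daubechies wavelet basis).  Potential and theta are assumed.
import numpy as np

theta = 0.3                                  # scaling angle (assumed)
N, L = 400, 40.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

V = lambda z: 0.5*z**2*np.exp(-0.1*z**2)     # hypothetical model potential

# Kinetic term -(1/2) e^{-2 i theta} d^2/dx^2 via 3-point finite differences
D2 = (np.diag(-2*np.ones(N)) + np.diag(np.ones(N-1), 1)
      + np.diag(np.ones(N-1), -1)) / dx**2
H = -0.5*np.exp(-2j*theta)*D2 + np.diag(V(x*np.exp(1j*theta)))

# Resonances appear as theta-stable complex eigenvalues E - i*Gamma/2
eigs = np.linalg.eigvals(H)
print(sorted(eigs, key=lambda e: abs(e.imag))[:5])
```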
A Simple Model of Hox Genes: Bone Morphology Demonstration
ERIC Educational Resources Information Center
Shmaefsky, Brian
2008-01-01
Visual demonstrations of abstract scientific concepts are effective strategies for enhancing content retention (Shmaefsky 2004). The concepts associated with gene regulation of growth and development are particularly complex and are well suited for teaching with visual models. This demonstration provides a simple and accurate model of Hox gene…
Evidence for complex contagion models of social contagion from observational data
Sprague, Daniel A.
2017-01-01
Social influence can lead to behavioural ‘fads’ that are briefly popular and quickly die out. Various models have been proposed for these phenomena, but empirical evidence of their accuracy as real-world predictive tools has so far been absent. Here we find that a ‘complex contagion’ model accurately describes the spread of behaviours driven by online sharing. We found that standard, ‘simple’, contagion often fails to capture both the rapid spread and the long tails of popularity seen in real fads, where our complex contagion model succeeds. Complex contagion also has predictive power: it successfully predicted the peak time and duration of the ALS Icebucket Challenge. The fast spread and longer duration of fads driven by complex contagion has important implications for activities such as publicity campaigns and charity drives. PMID:28686719
Data Mining for Efficient and Accurate Large Scale Retrieval of Geophysical Parameters
NASA Astrophysics Data System (ADS)
Obradovic, Z.; Vucetic, S.; Peng, K.; Han, B.
2004-12-01
Our effort is devoted to developing data mining technology for improving the efficiency and accuracy of geophysical parameter retrievals by learning a mapping from observation attributes to the corresponding parameters within the framework of classification and regression. We will describe a method for efficient learning of neural network-based classification and regression models from high-volume data streams. The proposed procedure automatically learns a series of neural networks of different complexities on smaller data stream chunks and then properly combines them into an ensemble predictor through averaging. Based on the idea of progressive sampling, the proposed approach starts with a very simple network trained on a very small chunk and then gradually increases the model complexity and the chunk size until the learning performance no longer improves. Our empirical study on aerosol retrievals from data obtained with the MISR instrument mounted on the Terra satellite suggests that the proposed method is successful in learning complex concepts from large data streams with near-optimal computational effort. We will also report on a method that complements deterministic retrievals by constructing accurate predictive algorithms and applying them to appropriately selected subsets of observed data. The method is based on developing more accurate predictors designed to capture the global and local properties synthesized in a region. The procedure starts by learning the global properties of data sampled over the entire space, and continues by constructing specialized models on selected localized regions. The global and local models are integrated through an automated procedure that determines the optimal trade-off between the two components with the objective of minimizing the overall mean square error over a specific region. Our experimental results on MISR data showed that the combined model can increase the retrieval accuracy significantly. The preliminary results on various large heterogeneous spatial-temporal datasets provide evidence that the benefits of the proposed methodology for efficient and accurate learning extend beyond the area of geophysical parameter retrieval.
A practical model for pressure probe system response estimation (with review of existing models)
NASA Astrophysics Data System (ADS)
Hall, B. F.; Povey, T.
2018-04-01
The accurate estimation of the unsteady response (bandwidth) of pneumatic pressure probe systems (probe, line and transducer volume) is a common practical problem encountered in the design of aerodynamic experiments. Understanding the bandwidth of the probe system is necessary to capture unsteady flow features accurately. Where traversing probes are used, the desired traverse speed and spatial gradients in the flow dictate the minimum probe system bandwidth required to resolve the flow. Existing approaches for bandwidth estimation are either complex or inaccurate in implementation, so probes are often designed based on experience. Where probe system bandwidth is characterized, it is often done experimentally, requiring careful experimental set-up and analysis. There is a need for a relatively simple but accurate model for estimation of probe system bandwidth. A new model is presented for the accurate estimation of pressure probe bandwidth for simple probes commonly used in wind tunnel environments; experimental validation is provided. An additional, simple graphical method for air is included for convenience.
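For readers wanting a starting point before consulting the paper's model: the crudest standard estimate of a probe-line-transducer system's natural frequency is the Helmholtz-resonator approximation. The sketch below uses that textbook formula with assumed example dimensions; it is not the model proposed in the paper.

```python
# Rough first estimate of a pressure probe system's natural frequency via
# the classic Helmholtz-resonator approximation (tube + transducer cavity).
# This is NOT the paper's model, just the common back-of-envelope check;
# all dimensions below are assumed example values.
import math

c = 343.0          # speed of sound in air, m/s
d = 0.5e-3         # tube bore diameter, m
L = 0.30           # tube length, m
V = 20e-9          # transducer cavity volume, m^3 (20 mm^3)

A = math.pi*(d/2)**2
L_eff = L + 0.85*d                              # approximate end correction
f_n = (c/(2*math.pi))*math.sqrt(A/(V*L_eff))    # Helmholtz natural frequency
print(f"approximate natural frequency: {f_n:.0f} Hz")
```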
Lee, Mi Kyung; Coker, David F
2016-08-18
An accurate approach for computing intermolecular and intrachromophore contributions to spectral densities to describe the electronic-nuclear interactions relevant for modeling excitation energy transfer processes in light harvesting systems is presented. The approach is based on molecular dynamics (MD) calculations of classical correlation functions of long-range contributions to excitation energy fluctuations and a separate harmonic analysis and single-point gradient quantum calculations for electron-intrachromophore vibrational couplings. A simple model is also presented that enables detailed analysis of the shortcomings of standard MD-based excitation energy fluctuation correlation function approaches. The method introduced here avoids these problems, and its reliability is demonstrated in accurate predictions for bacteriochlorophyll molecules in the Fenna-Matthews-Olson pigment-protein complex, where excellent agreement with experimental spectral densities is found. This efficient approach can provide instantaneous spectral densities for treating the influence of fluctuations in environmental dissipation on fast electronic relaxation.
Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter
2015-01-01
Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning, the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227
Real-time, haptics-enabled simulator for probing ex vivo liver tissue.
Lister, Kevin; Gao, Zhan; Desai, Jaydev P
2009-01-01
The advent of complex surgical procedures has driven the need for realistic surgical training simulators. Comprehensive simulators that provide realistic visual and haptic feedback during surgical tasks are required to familiarize surgeons with the procedures they are to perform. Complex organ geometry inherent to biological tissues and intricate material properties drive the need for finite element methods to assure accurate tissue displacement and force calculations. Advances in real-time finite element methods have not reached the state where they are applicable to soft tissue surgical simulation. Therefore, a real-time, haptics-enabled simulator for probing soft tissue has been developed that utilizes preprocessed finite element data (derived from an accurate constitutive model of the soft tissue, obtained from carefully collected experimental data) to accurately replicate the probing task in real time.
Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach
NASA Technical Reports Server (NTRS)
Aguilo, Miguel A.; Warner, James E.
2017-01-01
This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
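A minimal sketch of the SROM idea, under assumed target statistics and a placeholder one-parameter model: approximate the uncertain input by a few weighted samples whose probabilities are optimized to match the target's moments and CDF, then make one deterministic model call per sample.

```python
# Minimal SROM sketch: approximate a random input by a small set of samples
# with optimised probabilities, then propagate uncertainty with one
# deterministic model run per sample.  The target distribution and the
# "model" are placeholder choices, not those of the paper.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

target = stats.lognorm(s=0.25, scale=1.0)        # uncertain input
xs = target.ppf(np.linspace(0.05, 0.95, 7))      # fixed support points

def srom_objective(p):
    # match mean, second moment and CDF values of the target distribution
    err_mean = (np.dot(p, xs) - target.mean())**2
    err_m2 = (np.dot(p, xs**2) - target.moment(2))**2
    err_cdf = np.sum((np.cumsum(p) - target.cdf(xs))**2)
    return err_mean + err_m2 + err_cdf

cons = ({'type': 'eq', 'fun': lambda p: p.sum() - 1.0},)
res = minimize(srom_objective, np.full(7, 1/7), bounds=[(0, 1)]*7,
               constraints=cons)
p = res.x

model = lambda x: x**2 + 0.1*x                   # deterministic model call
outputs = model(xs)                              # 7 runs instead of Monte Carlo
print("output mean:", np.dot(p, outputs))
```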
Søreide, K; Thorsen, K; Søreide, J A
2015-02-01
Mortality prediction models for patients with perforated peptic ulcer (PPU) have not yielded consistent or highly accurate results. Given the complex nature of this disease, which has many non-linear associations with outcomes, we explored artificial neural networks (ANNs) to predict the complex interactions between the risk factors of PPU and death among patients with this condition. ANN modelling using a standard feed-forward, back-propagation neural network with three layers (i.e., an input layer, a hidden layer and an output layer) was used to predict the 30-day mortality of consecutive patients from a population-based cohort undergoing surgery for PPU. A receiver-operating characteristic (ROC) analysis was used to assess model accuracy. Of the 172 patients, 168 had their data included in the model; the data of 117 (70%) were used for the training set, and the data of 51 (30%) were used for the test set. The accuracy, as evaluated by the area under the ROC curve (AUC), was best for an inclusive, multifactorial ANN model (AUC 0.90, 95% CI 0.85-0.95; p < 0.001). This model outperformed standard predictive scores, including Boey and PULP. The importance of each variable decreased as the number of factors included in the ANN model increased. The prediction of death was most accurate when using an ANN model with several univariate influences on the outcome. This finding demonstrates that PPU is a highly complex disease for which clinical prognoses are likely difficult. The incorporation of computerised learning systems might enhance clinical judgments to improve decision making and outcome prediction.
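A schematic reconstruction of this modelling setup with scikit-learn (synthetic placeholder data, not the study's cohort):

```python
# Schematic reconstruction of the setup described above: a feed-forward
# network with one hidden layer, evaluated by ROC AUC.  Features and
# outcomes are random placeholders, not the study's dataset.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(168, 8))          # 8 hypothetical risk factors
y = rng.integers(0, 2, size=168)       # 30-day mortality (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.7,
                                          random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, ann.predict_proba(X_te)[:, 1]))
```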
Application of 3D Laser Scanning Technology in Complex Rock Foundation Design
NASA Astrophysics Data System (ADS)
Junjie, Ma; Dan, Lu; Zhilong, Liu
2017-12-01
Taking the complex landform of the Tanxi Mountain Landscape Bridge as an example, the application of 3D laser scanning technology to the mapping of complex rock foundations is studied in this paper. A set of 3D laser scanning techniques is established and several key engineering problems are solved. The first is 3D laser scanning of complex landforms: 3D laser scanning is used to obtain a complete 3D point cloud data model of the complex landform, and the detailed and accurate surveying and mapping results reduce both the measurement time and the number of supplementary surveys. The second is 3D collaborative modeling of the complex landform: a 3D model of the complex landform is established based on the 3D point cloud data model, and the superstructure foundation model is introduced for 3D collaborative design, so that the optimal design plan is selected and construction progress is accelerated. The last is finite-element analysis of the complex landform foundation: the 3D model of the complex landform is imported into ANSYS to build a finite element model for calculating the anti-slide stability of the rock, providing a basis for the foundation design and construction.
The Lyα forest and the Cosmic Web
NASA Astrophysics Data System (ADS)
Meiksin, Avery
2016-10-01
The accurate description of the properties of the Lyman-α forest is a spectacular success of the Cold Dark Matter theory of cosmological structure formation. After a brief review of early models, it is shown how numerical simulations have demonstrated the Lyman-α forest emerges from the cosmic web in the quasi-linear regime of overdensity. The quasi-linear nature of the structures allows accurate modeling, providing constraints on cosmological models over a unique range of scales and enabling the Lyman-α forest to serve as a bridge to the more complex problem of galaxy formation.
Machine learning for predicting soil classes in three semi-arid landscapes
Brungard, Colby W.; Boettinger, Janis L.; Duniway, Michael C.; Wills, Skye A.; Edwards, Thomas C.
2015-01-01
Mapping the spatial distribution of soil taxonomic classes is important for informing soil use and management decisions. Digital soil mapping (DSM) can quantitatively predict the spatial distribution of soil taxonomic classes. Key components of DSM are the method and the set of environmental covariates used to predict soil classes. Machine learning is a general term for a broad set of statistical modeling techniques. Many different machine learning models have been applied in the literature and there are different approaches for selecting covariates for DSM. However, there is little guidance as to which, if any, machine learning model and covariate set might be optimal for predicting soil classes across different landscapes. Our objective was to compare multiple machine learning models and covariate sets for predicting soil taxonomic classes at three geographically distinct areas in the semi-arid western United States of America (southern New Mexico, southwestern Utah, and northeastern Wyoming). All three areas were the focus of digital soil mapping studies. Sampling sites at each study area were selected using conditioned Latin hypercube sampling (cLHS). We compared models that had been used in other DSM studies, including clustering algorithms, discriminant analysis, multinomial logistic regression, neural networks, tree based methods, and support vector machine classifiers. Tested machine learning models were divided into three groups based on model complexity: simple, moderate, and complex. We also compared environmental covariates derived from digital elevation models and Landsat imagery that were divided into three different sets: 1) covariates selected a priori by soil scientists familiar with each area and used as input into cLHS, 2) the covariates in set 1 plus 113 additional covariates, and 3) covariates selected using recursive feature elimination. Overall, complex models were consistently more accurate than simple or moderately complex models. Random forests (RF) using covariates selected via recursive feature elimination was consistently the most accurate, or was among the most accurate, classifiers between study areas and between covariate sets within each study area. We recommend that for soil taxonomic class prediction, complex models and covariates selected by recursive feature elimination be used. Overall classification accuracy in each study area was largely dependent upon the number of soil taxonomic classes and the frequency distribution of pedon observations between taxonomic classes. Individual subgroup class accuracy was generally dependent upon the number of soil pedon observations in each taxonomic class. The number of soil classes is related to the inherent variability of a given area. The imbalance of soil pedon observations between classes is likely related to cLHS. Imbalanced frequency distributions of soil pedon observations between classes must be addressed to improve model accuracy. Solutions include increasing the number of soil pedon observations in classes with few observations or decreasing the number of classes. Spatial predictions using the most accurate models generally agree with expected soil–landscape relationships. Spatial prediction uncertainty was lowest in areas of relatively low relief for each study area.
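The covariate-selection and classification combination the study recommends (recursive feature elimination wrapped around a random forest) can be sketched with scikit-learn; covariates and class labels below are synthetic placeholders.

```python
# Sketch of the pipeline the study found most accurate: recursive feature
# elimination (RFE) wrapped around a random forest classifier.
# Covariates and soil-class labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 50))           # 50 terrain/spectral covariates
y = rng.integers(0, 6, size=300)         # 6 hypothetical soil classes

rf = RandomForestClassifier(n_estimators=500, random_state=1)
selector = RFE(rf, n_features_to_select=15, step=5).fit(X, y)
X_sel = X[:, selector.support_]          # reduced covariate set
print("kept covariates:", np.where(selector.support_)[0])
```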
An anisotropic thermal-stress model for through-silicon via
NASA Astrophysics Data System (ADS)
Liu, Song; Shan, Guangbao
2018-02-01
A two-dimensional thermal-stress model of through-silicon via (TSV) is proposed that accounts for the anisotropic elastic property of the silicon substrate. By using the complex variable approach, the distribution of thermal stress in the substrate can be characterized more accurately. TCAD 3-D simulations are used to verify the model accuracy and agree well with the analytical results (within ±5%). The proposed thermal-stress model can be integrated into a stress-driven design flow for 3-D ICs, leading to more accurate timing analysis that accounts for the thermal-stress effect. Project supported by the Aerospace Advanced Manufacturing Technology Research Joint Fund (No. U1537208).
Fast and Robust Stem Reconstruction in Complex Environments Using Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Wang, D.; Hollaus, M.; Puttonen, E.; Pfeifer, N.
2016-06-01
Terrestrial Laser Scanning (TLS) is an effective tool in forest research and management. However, accurate estimation of tree parameters still remains challenging in complex forests. In this paper, we present a novel algorithm for stem modeling in complex environments. This method does not require accurate delineation of stem points from the original point cloud. The stem reconstruction features a self-adaptive cylinder growing scheme. This algorithm is tested for a landslide region in the federal state of Vorarlberg, Austria. The algorithm results are compared with field reference data, which show that our algorithm is able to accurately retrieve the diameter at breast height (DBH) with a root mean square error (RMSE) of ~1.9 cm. This algorithm is further facilitated by applying an advanced sampling technique. Different sampling rates are applied and tested. It is found that a sampling rate of 7.5% is already able to retain the stem fitting quality and simultaneously reduce the computation time significantly by ~88%.
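One ingredient of stem modelling can be illustrated compactly: estimating DBH by least-squares circle fitting on a thin slice of stem points at breast height. The paper's algorithm additionally grows self-adaptive cylinders; the sketch below shows only the slice-fit idea (Kasa algebraic fit).

```python
# Estimating DBH by algebraic (Kasa) least-squares circle fitting on a thin
# horizontal slice of stem points.  The paper's self-adaptive cylinder
# growing is not reproduced; this shows only the slice-fit idea.
import numpy as np

def fit_circle(x, y):
    """Kasa least-squares circle fit; returns (xc, yc, r)."""
    A = np.column_stack([2*x, 2*y, np.ones_like(x)])
    b = x**2 + y**2
    (xc, yc, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return xc, yc, np.sqrt(c + xc**2 + yc**2)

# synthetic slice of stem points, true diameter 0.30 m
t = np.linspace(0, 2*np.pi, 200)
x = 0.15*np.cos(t) + np.random.normal(0, 0.005, t.size)
y = 0.15*np.sin(t) + np.random.normal(0, 0.005, t.size)
xc, yc, r = fit_circle(x, y)
print(f"estimated DBH: {2*r:.3f} m")
```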
Tools and techniques for developing policies for complex and uncertain systems.
Bankes, Steven C
2002-05-14
Agent-based models (ABM) are examples of complex adaptive systems, which can be characterized as those systems for which no model less complex than the system itself can accurately predict in detail how the system will behave at future times. Consequently, the standard tools of policy analysis, based as they are on devising policies that perform well on some best-estimate model of the system, cannot be reliably used for ABM. This paper argues that policy analysis using ABM requires an alternative approach to decision theory. The general characteristics of such an approach are described, and examples are provided of its application to policy analysis.
NASA Astrophysics Data System (ADS)
Guo, L.; Yin, Y.; Deng, M.; Guo, L.; Yan, J.
2017-12-01
At present, most magnetotelluric (MT) forward modelling and inversion codes are based on the finite difference method, but its structured mesh gridding is not well suited to conditions with arbitrary topography or complex tectonic structures. By contrast, the finite element method is more accurate for calculating complex and irregular 3-D regions and places lower requirements on function smoothness. However, the complexity of mesh gridding and the limitations of computer capacity have restricted its application. COMSOL Multiphysics is a cross-platform finite element analysis, solver and multiphysics full-coupling simulation software package. It achieves highly accurate numerical simulations with high computational performance and outstanding multi-field bi-directional coupling analysis capability. In addition, its AC/DC and RF modules can be used to easily calculate the electromagnetic responses of complex geological structures. With an adaptive unstructured grid, the calculation is much faster. To improve the discretization of the computational domain, we combine Matlab and COMSOL Multiphysics to establish a general procedure for calculating the MT responses of arbitrary resistivity models. The calculated responses include the surface electric and magnetic field components, impedance components, magnetic transfer functions and phase tensors. The reliability of this procedure is then verified by 1-D, 2-D, 3-D and anisotropic forward modeling tests. Finally, we establish a 3-D lithospheric resistivity model for the Proterozoic Wutai-Hengshan Mts. within the North China Craton by fitting the real MT data collected there. The reliability of the model is also verified by induction vectors and phase tensors. Our model shows more details and better resolution compared with the previously published 3-D model based on the finite difference method. In conclusion, the COMSOL Multiphysics package is suitable for modeling 3-D lithospheric resistivity structures under complex tectonic deformation backgrounds and could be a good complement to existing finite-difference inversion algorithms.
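The 1-D forward-modeling test mentioned above has a standard analytic benchmark: the layered-earth impedance recursion. A sketch, with illustrative layer values:

```python
# 1-D layered-earth magnetotelluric forward response via the standard
# impedance recursion, of the kind used to benchmark 3-D codes such as the
# COMSOL procedure above.  Layer resistivities/thicknesses are illustrative.
import numpy as np

mu0 = 4e-7*np.pi

def mt1d(rho, h, freqs):
    """Apparent resistivity and phase over a layered halfspace.
    rho: layer resistivities (last entry = halfspace); h: thicknesses."""
    rho_a, phase = [], []
    for f in freqs:
        w = 2*np.pi*f
        k = np.sqrt(1j*w*mu0/np.asarray(rho))    # propagation constants
        Z = 1j*w*mu0/k[-1]                       # bottom halfspace impedance
        for j in range(len(h)-1, -1, -1):        # recurse upward
            Z0 = 1j*w*mu0/k[j]
            t = np.tanh(k[j]*h[j])
            Z = Z0*(Z + Z0*t)/(Z0 + Z*t)
        rho_a.append(abs(Z)**2/(w*mu0))
        phase.append(np.degrees(np.angle(Z)))
    return np.array(rho_a), np.array(phase)

ra, ph = mt1d([100, 10, 1000], [500, 1000], np.logspace(-3, 3, 7))
print(ra.round(1), ph.round(1))
```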
NASA Technical Reports Server (NTRS)
Schmidt, Gene I.; Rossow, Vernon J.; Vanaken, Johannes M.; Parrish, Cynthia L.
1987-01-01
The features of a 1/50-scale model of the National Full-Scale Aerodynamics Complex are first described. An overview is then given of some results from the various tests conducted with the model to aid in the design of the full-scale facility. It was found that the model tunnel simulated accurately many of the operational characteristics of the full-scale circuits. Some characteristics predicted by the model were, however, noted to differ from previous full-scale results by about 10%.
NASA Technical Reports Server (NTRS)
Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.
2014-01-01
Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443 and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This optimized composite set of SeaUV/SeaUVc algorithms will provide the optical community with an improved ability to quantify the role of solar UV radiation in photochemical and photobiological processes in the ocean.
Tangible Models and Haptic Representations Aid Learning of Molecular Biology Concepts
ERIC Educational Resources Information Center
Johannes, Kristen; Powers, Jacklyn; Couper, Lisa; Silberglitt, Matt; Davenport, Jodi
2016-01-01
Can novel 3D models help students develop a deeper understanding of core concepts in molecular biology? We adapted 3D molecular models, developed by scientists, for use in high school science classrooms. The models accurately represent the structural and functional properties of complex DNA and virus molecules, and provide visual and haptic…
Dudley, Peter N; Bonazza, Riccardo; Jones, T Todd; Wyneken, Jeanette; Porter, Warren P
2014-01-01
As global temperatures increase throughout the coming decades, species ranges will shift. New combinations of abiotic conditions will make predicting these range shifts difficult. Biophysical mechanistic niche modeling places bounds on an animal's niche by analyzing the animal's physical interactions with the environment, and it is flexible enough to accommodate these new combinations of abiotic conditions. However, this approach is difficult to implement for aquatic species because of complex interactions among thrust, metabolic rate and heat transfer. We use contemporary computational fluid dynamic techniques to overcome these difficulties. We model the complex 3D motion of swimming neonate and juvenile leatherback sea turtles to find power and heat transfer rates during the stroke. We combine the results from these simulations with a numerical model to accurately predict the core temperature of a swimming leatherback. These results are the first steps in developing a highly accurate mechanistic niche model, which can assist paleontologists in understanding biogeographic shifts as well as inform contemporary species managers about potential range shifts over the coming decades.
NASA Technical Reports Server (NTRS)
Leitold, Veronika; Keller, Michael; Morton, Douglas C.; Cook, Bruce D.; Shimabukuro, Yosio E.
2015-01-01
Background: Carbon stocks and fluxes in tropical forests remain large sources of uncertainty in the global carbon budget. Airborne lidar remote sensing is a powerful tool for estimating aboveground biomass, provided that lidar measurements penetrate dense forest vegetation to generate accurate estimates of surface topography and canopy heights. Tropical forest areas with complex topography present a challenge for lidar remote sensing. Results: We compared digital terrain models (DTM) derived from airborne lidar data from a mountainous region of the Atlantic Forest in Brazil to 35 ground control points measured with survey grade GNSS receivers. The terrain model generated from full-density (approx. 20 returns/sq m) data was highly accurate (mean signed error of 0.19 +/-0.97 m), while those derived from reduced-density datasets (8/sq m, 4/sq m, 2/sq m and 1/sq m) were increasingly less accurate. Canopy heights calculated from reduced-density lidar data declined as data density decreased due to the inability to accurately model the terrain surface. For lidar return densities below 4/sq m, the bias in height estimates translated into errors of 80-125 Mg/ha in predicted aboveground biomass. Conclusions: Given the growing emphasis on the use of airborne lidar for forest management, carbon monitoring, and conservation efforts, the results of this study highlight the importance of careful survey planning and consistent sampling for accurate quantification of aboveground biomass stocks and dynamics. Approaches that rely primarily on canopy height to estimate aboveground biomass are sensitive to DTM errors from variability in lidar sampling density.
Leitold, Veronika; Keller, Michael; Morton, Douglas C; Cook, Bruce D; Shimabukuro, Yosio E
2015-12-01
Carbon stocks and fluxes in tropical forests remain large sources of uncertainty in the global carbon budget. Airborne lidar remote sensing is a powerful tool for estimating aboveground biomass, provided that lidar measurements penetrate dense forest vegetation to generate accurate estimates of surface topography and canopy heights. Tropical forest areas with complex topography present a challenge for lidar remote sensing. We compared digital terrain models (DTM) derived from airborne lidar data from a mountainous region of the Atlantic Forest in Brazil to 35 ground control points measured with survey-grade GNSS receivers. The terrain model generated from full-density (~20 returns m⁻²) data was highly accurate (mean signed error of 0.19 ± 0.97 m), while those derived from reduced-density datasets (8 m⁻², 4 m⁻², 2 m⁻² and 1 m⁻²) were increasingly less accurate. Canopy heights calculated from reduced-density lidar data declined as data density decreased due to the inability to accurately model the terrain surface. For lidar return densities below 4 m⁻², the bias in height estimates translated into errors of 80-125 Mg ha⁻¹ in predicted aboveground biomass. Given the growing emphasis on the use of airborne lidar for forest management, carbon monitoring, and conservation efforts, the results of this study highlight the importance of careful survey planning and consistent sampling for accurate quantification of aboveground biomass stocks and dynamics. Approaches that rely primarily on canopy height to estimate aboveground biomass are sensitive to DTM errors from variability in lidar sampling density.
The practical use of simplicity in developing ground water models
Hill, M.C.
2006-01-01
The advantages of starting with simple models and building complexity slowly can be significant in the development of ground water models. In many circumstances, simpler models are characterized by fewer defined parameters and shorter execution times. In this work, the number of parameters is used as the primary measure of simplicity and complexity; the advantages of shorter execution times also are considered. The ideas are presented in the context of constructing ground water models but are applicable to many fields. Simplicity first is put in perspective as part of the entire modeling process using 14 guidelines for effective model calibration. It is noted that neither very simple nor very complex models generally produce the most accurate predictions and that determining the appropriate level of complexity is an ill-defined process. It is suggested that a thorough evaluation of observation errors is essential to model development. Finally, specific ways are discussed to design useful ground water models that have fewer parameters and shorter execution times.
Characterization of known protein complexes using k-connectivity and other topological measures
Gallagher, Suzanne R; Goldberg, Debra S
2015-01-01
Many protein complexes are densely packed, so proteins within complexes often interact with several other proteins in the complex. Steric constraints prevent most proteins from simultaneously binding more than a handful of other proteins, regardless of the number of proteins in the complex. Because of this, as complex size increases, several measures of the complex decrease within protein-protein interaction networks. However, k-connectivity, the number of vertices or edges that need to be removed in order to disconnect a graph, may be consistently high for protein complexes. The property of k-connectivity has been little used previously in the investigation of protein-protein interactions. To understand the discriminative power of k-connectivity and other topological measures for identifying unknown protein complexes, we characterized these properties in known Saccharomyces cerevisiae protein complexes in networks generated both from highly accurate X-ray crystallography experiments which give an accurate model of each complex, and also as the complexes appear in high-throughput yeast 2-hybrid studies in which new complexes may be discovered. We also computed these properties for appropriate random subgraphs. We found that clustering coefficient, mutual clustering coefficient, and k-connectivity are better indicators of known protein complexes than edge density, degree, or betweenness. This suggests new directions for future protein complex-finding algorithms. PMID:26913183
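The graph measures discussed here are directly computable with networkx; the toy graph below stands in for a complex's induced subgraph in an interaction network.

```python
# Computing the topological measures discussed above for a small toy graph
# standing in for a protein complex's induced subgraph.
import networkx as nx

G = nx.Graph([(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4), (2, 4)])

print("vertex k-connectivity:", nx.node_connectivity(G))
print("edge k-connectivity:  ", nx.edge_connectivity(G))
print("clustering coeffs:    ", nx.clustering(G))
print("edge density:         ", nx.density(G))
```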
Modeling ultrasound propagation through material of increasing geometrical complexity.
Odabaee, Maryam; Odabaee, Mostafa; Pelekanos, Matthew; Leinenga, Gerhard; Götz, Jürgen
2018-06-01
Ultrasound is increasingly being recognized as a neuromodulatory and therapeutic tool, inducing a broad range of bio-effects in the tissue of experimental animals and humans. To achieve these effects in a predictable manner in the human brain, the thick cancellous skull presents a problem, causing attenuation. In order to overcome this challenge, as a first step, the acoustic properties of a set of simple bone-modeling resin samples that displayed an increasing geometrical complexity (increasing step sizes) were analyzed. Using two Non-Destructive Testing (NDT) transducers, we found that Wiener deconvolution predicted the Ultrasound Acoustic Response (UAR) and attenuation caused by the samples. However, whereas the UAR of samples with step sizes larger than the wavelength could be accurately estimated, the prediction was not accurate when the sample had a smaller step size. Furthermore, a Finite Element Analysis (FEA) performed in ANSYS determined that the scattering and refraction of sound waves was significantly higher in complex samples with smaller step sizes compared to simple samples with a larger step size. Together, this reveals an interaction of frequency and geometrical complexity in predicting the UAR and attenuation. These findings could in future be applied to poro-visco-elastic materials that better model the human skull.
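Wiener deconvolution itself, the operation the study uses to predict the UAR through a sample, is a one-liner in the frequency domain. A sketch with placeholder signals and an assumed regularisation constant:

```python
# Frequency-domain Wiener deconvolution sketch.  Signals and the
# regularisation constant are placeholders, not the study's data.
import numpy as np

def wiener_deconvolve(measured, reference, lam=1e-2):
    """Estimate sample response h from measured = h * reference (convolution)."""
    M = np.fft.rfft(measured)
    R = np.fft.rfft(reference)
    H = M*np.conj(R)/(np.abs(R)**2 + lam)   # Wiener-regularised division
    return np.fft.irfft(H, n=len(measured))

t = np.linspace(0, 1e-5, 512)
ref = np.exp(-((t - 2e-6)/5e-7)**2)*np.sin(2*np.pi*5e6*t)  # input pulse
meas = 0.4*np.roll(ref, 30)                                # delayed, attenuated
h = wiener_deconvolve(meas, ref)
print("peak response at sample index:", int(np.argmax(h)))
```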
Hughes, Timothy J; Kandathil, Shaun M; Popelier, Paul L A
2015-02-05
As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecupole moment, mapped onto the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G(**), B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol⁻¹, decreasing to 60-70% of test cases for the larger base-pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol⁻¹.
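The kriging step can be sketched with a Gaussian process regressor standing in for the authors' machine-learning models; real models map nuclear geometries to multipole moments up to the hexadecupole, whereas the placeholder below maps six coordinate features to a synthetic scalar energy.

```python
# Kriging (Gaussian process regression) sketch: learn a mapping from
# nuclear coordinates to an energy-like scalar.  Features and energies are
# synthetic placeholders, not quantum-chemical data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 6))                  # internal coordinates
y = np.sin(X).sum(axis=1) + 0.01*rng.normal(size=200)  # stand-in energies

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(1e-4),
    normalize_y=True).fit(X[:150], y[:150])
pred, sd = gp.predict(X[150:], return_std=True)
print("mean unsigned error:", np.abs(pred - y[150:]).mean())
```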
Mathematical and Numerical Techniques in Energy and Environmental Modeling
NASA Astrophysics Data System (ADS)
Chen, Z.; Ewing, R. E.
Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, J.; Lacava, W.; Austin, J.
2015-02-01
This work investigates the minimum level of fidelity required to accurately simulate wind turbine gearboxes using state-of-the-art design tools. Excessive model fidelity, including drivetrain complexity, gearbox complexity, excitation sources, and imperfections, significantly increases computational time but may not provide a commensurate increase in the value of the results. Essential design parameters are evaluated, including the planetary load-sharing factor, gear tooth load distribution, and sun orbit motion. Based on the sensitivity study results, recommendations for the minimum model fidelities are provided.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, Yoshito, E-mail: yoshito@sat.t.u-tokyo.ac.jp; Aihara, Kazuyuki; Suzuki, Hideyuki
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
Approximating high-dimensional dynamics by barycentric coordinates with linear programming.
Hirata, Yoshito; Shiro, Masanori; Takahashi, Nozomu; Aihara, Kazuyuki; Suzuki, Hideyuki; Mas, Paloma
2015-01-01
The increasing development of novel methods and techniques facilitates the measurement of high-dimensional time series but challenges our ability for accurate modeling and predictions. The use of a general mathematical model requires the inclusion of many parameters, which are difficult to fit for the relatively short high-dimensional time series observed. Here, we propose a novel method to accurately model a high-dimensional time series. Our method extends the barycentric coordinates to high-dimensional phase space by employing linear programming, allowing the approximation errors explicitly. The extension helps to produce free-running time-series predictions that preserve typical topological, dynamical, and/or geometric characteristics of the underlying attractors more accurately than the widely used radial basis function model. The method can be broadly applied, from helping to improve weather forecasting, to creating electronic instruments that sound more natural, and to comprehensively understanding complex biological data.
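The core step, expressing the current state as a convex (barycentric) combination of previously observed states with the approximation error handled explicitly, maps naturally onto a linear program. A sketch using an L1 error, which is one plausible reading of the error term; prediction would then carry the weights forward one time step.

```python
# Barycentric weights by linear programming: find w >= 0, sum(w) = 1 that
# reconstructs a query state from library states with minimal L1 error.
# The L1 choice and the toy data are assumptions, not the paper's exact setup.
import numpy as np
from scipy.optimize import linprog

def barycentric_weights(library, query):
    """library: (d, m) past states as columns; query: (d,) current state."""
    d, m = library.shape
    c = np.concatenate([np.zeros(m), np.ones(d)])   # minimise sum of errors e
    A_ub = np.block([[library, -np.eye(d)],          #  Xw - e <= q
                     [-library, -np.eye(d)]])        # -Xw - e <= -q
    b_ub = np.concatenate([query, -query])
    A_eq = np.concatenate([np.ones(m), np.zeros(d)])[None, :]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=(0, None), method="highs")
    return res.x[:m]

rng = np.random.default_rng(3)
lib = rng.normal(size=(5, 40))                       # 40 past states, dim 5
q = lib @ np.array([0.3]*3 + [0.0]*37) + 0.01*rng.normal(size=5)
w = barycentric_weights(lib, q)
print("reconstruction error:", np.abs(lib @ w - q).sum())
```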
Water Planetary and Cometary Atmospheres: H2O/HDO Transmittance and Fluorescence Models
NASA Technical Reports Server (NTRS)
Villanueva, G. L.; Mumma, M. J.; Bonev, B. P.; Novak, R. E.; Barber, R. J.; DiSanti, M. A.
2012-01-01
We developed a modern methodology to retrieve water (H2O) and deuterated water (HDO) in planetary and cometary atmospheres, and constructed an accurate spectral database that combines theoretical and empirical results. Based on a greatly expanded set of spectroscopic parameters, we built a full non-resonance cascade fluorescence model and computed fluorescence efficiencies for H2O (500 million lines) and HDO (700 million lines). The new line list was also integrated into an advanced terrestrial radiative transfer code (LBLRTM) and adapted to the CO2-rich atmosphere of Mars, for which we adopted the complex Robert-Bonamy formalism for line shapes. We then retrieved water and D/H in the atmospheres of Mars, comet C/2007 W1, and Earth by applying the new formalism to spectra obtained with the high-resolution spectrograph NIRSPEC/Keck II atop Mauna Kea (Hawaii). The new model accurately describes the complex morphology of the water bands and greatly increases the accuracy of the retrieved abundances (and the D/H ratio in water) with respect to previously available models. The new model provides improved agreement of predicted and measured intensities for many H2O lines already identified in comets, and it identifies several unassigned cometary emission lines as new emission lines of H2O. The improved spectral accuracy permits retrieval of more accurate rotational temperatures and production rates for cometary water.
Accurate modeling and evaluation of microstructures in complex materials
NASA Astrophysics Data System (ADS)
Tahmasebi, Pejman
2018-02-01
Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always plausible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on a successive calculating of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. A distance transform function for the Is with highly connected microstructure and long-range features is considered which results in a new I that is more informative. Reproduction of the I is also considered through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is and the similarities are quantified using various correlation functions.
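One standard quantitative check when comparing reconstructed microstructures against the input image is the two-point correlation function; for a binary image it reduces to an FFT autocorrelation. The random image below is a placeholder for a real micrograph.

```python
# Two-point correlation function S2 of a binary 2-D microstructure image,
# computed via FFT autocorrelation (Wiener-Khinchin).  The random image is
# a placeholder for a real micrograph.
import numpy as np

def two_point_correlation(img):
    """S2: probability that two points separated by a lag vector both lie
    in phase 1, for a binary image (periodic boundaries assumed)."""
    F = np.fft.fft2(img.astype(float))
    s2 = np.fft.ifft2(F*np.conj(F)).real/img.size
    return np.fft.fftshift(s2)

img = (np.random.default_rng(4).random((128, 128)) < 0.3).astype(int)
s2 = two_point_correlation(img)
print("volume fraction:", img.mean(), "~ S2 at zero lag:", s2.max())
```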
NASA Technical Reports Server (NTRS)
Seltzer, S. M.; Patel, J. S.; Justice, D. W.; Schweitzer, G. E.
1972-01-01
The results are presented of a study of the dynamics of a spinning Skylab space station. The stability of motion of several simplified models with flexible appendages was investigated. A digital simulation model that more accurately portrays the complex Skylab vehicle is described, and simulation results are compared with analytically derived results.
USDA-ARS?s Scientific Manuscript database
Accurate electromagnetic sensing of soil water contents (') under field conditions is complicated by the dependence of permittivity on specific surface area, temperature, and apparent electrical conductivity, all which may vary across space or time. We present a physically-based mixing model to pred...
Background/Question/Methods Solar radiation is a significant environmental driver that impacts the quality and resilience of terrestrial and aquatic habitats, yet its spatiotemporal variations are complicated to model accurately at high resolution over large, complex watersheds. ...
USDA-ARS?s Scientific Manuscript database
Accurate prediction of pesticide volatilization is important for the protection of human and environmental health. Due to the complexity of the volatilization process, sophisticated predictive models are needed, especially for dry soil conditions. A mathematical model was developed to allow simulati...
Probability based remaining capacity estimation using data-driven and neural network model
NASA Astrophysics Data System (ADS)
Wang, Yujie; Yang, Duo; Zhang, Xu; Chen, Zonghai
2016-05-01
Since large numbers of lithium-ion batteries are assembled into packs and the batteries are complex electrochemical devices, their monitoring and safety are key issues for the application of battery technology. Accurate estimation of battery remaining capacity is crucial for optimizing vehicle control, preventing the battery from over-charging and over-discharging, and ensuring safety during its service life. The remaining capacity estimation of a battery includes the estimation of state-of-charge (SOC) and state-of-energy (SOE). In this work, a probability-based adaptive estimator is presented to obtain accurate and reliable estimation results for both SOC and SOE. For the SOC estimation, an nth-order RC equivalent circuit model is employed in combination with an electrochemical model to obtain more accurate voltage prediction results. For the SOE estimation, a sliding-window neural network model is proposed to investigate the relationship between the terminal voltage and the model inputs. To verify the accuracy and robustness of the proposed model and estimation algorithm, experiments under different dynamic operation current profiles are performed on commercial 1665130-type lithium-ion batteries. The results illustrate that accurate and robust estimation can be obtained by the proposed method.
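The voltage side of such an estimator can be illustrated with a first-order RC equivalent-circuit model (the paper combines an nth-order model with an electrochemical model); parameter values and the OCV curve below are illustrative placeholders.

```python
# First-order RC equivalent-circuit battery model: coulomb-counting SOC
# plus one polarisation branch.  All parameter values and the OCV curve
# are illustrative placeholders, not fitted to any real cell.
import numpy as np

def simulate_terminal_voltage(i_load, dt, soc0, q_ah,
                              r0=0.05, r_p=0.03, tau=60.0):
    ocv = lambda s: 3.0 + 1.2*s            # placeholder OCV(SOC) curve
    soc, u_p, v = soc0, 0.0, []
    a = np.exp(-dt/tau)
    for i in i_load:                       # discharge current positive
        soc -= i*dt/(q_ah*3600.0)          # coulomb counting
        u_p = a*u_p + r_p*(1 - a)*i        # RC polarisation voltage
        v.append(ocv(soc) - r0*i - u_p)    # terminal voltage
    return np.array(v), soc

i = np.full(600, 2.0)                      # 2 A discharge for 600 s
v, soc = simulate_terminal_voltage(i, dt=1.0, soc0=0.9, q_ah=2.0)
print(f"final SOC {soc:.3f}, final voltage {v[-1]:.3f} V")
```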
A bio-optical model for integration into ecosystem models for the Ligurian Sea
NASA Astrophysics Data System (ADS)
Bengil, Fethi; McKee, David; Beşiktepe, Sükrü T.; Sanjuan Calzado, Violeta; Trees, Charles
2016-12-01
A bio-optical model has been developed for the Ligurian Sea which encompasses both deep, oceanic Case 1 waters and shallow, coastal Case 2 waters. The model builds on earlier Case 1 models for the region and uses field data collected on the BP09 research cruise to establish new relationships for non-biogenic particles and CDOM. The bio-optical model reproduces in situ IOPs accurately and is used to parameterize radiative transfer simulations which demonstrate its utility for modeling underwater light levels and above surface remote sensing reflectance. Prediction of euphotic depth is found to be accurate to within ∼3.2 m (RMSE). Previously published light field models work well for deep oceanic parts of the Ligurian Sea that fit the Case 1 classification. However, they are found to significantly over-estimate euphotic depth in optically complex coastal waters where the influence of non-biogenic materials is strongest. For these coastal waters, the combination of the bio-optical model proposed here and full radiative transfer simulations provides significantly more accurate predictions of euphotic depth.
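The link between a diffuse attenuation coefficient and euphotic depth follows from the usual exponential decay assumption E(z) = E(0)exp(-Kd z); the Kd value below is hypothetical.

```python
# Euphotic depth (depth of 1% surface irradiance) from a diffuse
# attenuation coefficient, assuming E(z) = E(0)*exp(-Kd*z).
# The Kd value is a hypothetical example, not from the paper.
import numpy as np

kd_par = 0.12                      # m^-1, hypothetical coastal-water value
z_eu = np.log(100.0)/kd_par        # depth where E(z)/E(0) = 1%
print(f"euphotic depth: {z_eu:.1f} m")
```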
NASA Astrophysics Data System (ADS)
Owers, Christopher J.; Rogers, Kerrylee; Woodroffe, Colin D.
2018-05-01
Above-ground biomass represents a small yet significant contributor to carbon storage in coastal wetlands. Despite this, above-ground biomass is often poorly quantified, particularly in areas where vegetation structure is complex. Traditional methods for providing accurate estimates involve harvesting vegetation to develop mangrove allometric equations and to quantify saltmarsh biomass in quadrats. However, broad-scale application of these methods may not capture structural variability in vegetation, resulting in a loss of detail and estimates with considerable uncertainty. Terrestrial laser scanning (TLS) collects high-resolution three-dimensional point clouds capable of providing detailed structural morphology of vegetation. This study demonstrates that TLS is a suitable non-destructive method for estimating biomass of structurally complex coastal wetland vegetation. We compare volumetric modelling techniques (3-D surface reconstruction and rasterised volume) with point cloud elevation histogram modelling to estimate biomass. Our results show that current volumetric modelling approaches for estimating TLS-derived biomass are comparable to traditional mangrove allometrics and saltmarsh harvesting. However, volumetric modelling approaches oversimplify vegetation structure by under-utilising the large amount of structural information provided by the point cloud. The point cloud elevation histogram model presented in this study, as an alternative to volumetric modelling, utilises all of the information within the point cloud, as opposed to sub-sampling based on specific criteria. This method is simple but highly effective for both mangrove (r² = 0.95) and saltmarsh (r² > 0.92) vegetation. Our results provide evidence that application of TLS in coastal wetlands is an effective non-destructive method to accurately quantify biomass for structurally complex vegetation.
Inverse and Predictive Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syracuse, Ellen Marie
The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. Team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, but the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.
Unger, Bertram J; Kraut, Jay; Rhodes, Charlotte; Hochman, Jordan
2014-01-01
Physical models of complex bony structures can be used for surgical skills training. Current models focus on surface rendering but suffer from a lack of internal accuracy due to limitations in the manufacturing process. We describe a technique for generating internally accurate rapid-prototyped anatomical models with solid and hollow structures from clinical and microCT data using a 3D printer. In a face validation experiment, otolaryngology residents drilled a cadaveric bone and its corresponding printed model. The printed bone models were deemed highly realistic representations across all measured parameters and the educational value of the models was strongly appreciated.
Zeinali-Rafsanjani, B; Mosleh-Shirazi, M A; Faghihi, R; Karbasi, S; Mosalaei, A
2015-01-01
To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, the half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8 and 10.3 mm Al and 3.8 and 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation using kilovoltage beams.
The effects of strain and stress state in hot forming of mg AZ31 sheet
NASA Astrophysics Data System (ADS)
Sherek, Paul A.; Carpenter, Alexander J.; Hector, Louis G.; Krajewski, Paul E.; Carter, Jon T.; Lasceski, Joshua; Taleff, Eric M.
Wrought magnesium alloys, such as AZ31 sheet, are of considerable interest for light-weighting of vehicle structural components. The poor room-temperature ductility of AZ31 sheet has been a hindrance to forming the complex part shapes necessary for practical applications. However, the outstanding formability of AZ31 sheet at elevated temperature provides an opportunity to overcome that problem. Complex demonstration components have already been produced at 450°C using gas-pressure forming. Accurate simulations of such hot, gas-pressure forming will be required for the design and optimization exercises necessary if this technology is to be implemented commercially. We report on experiments and simulations used to construct the accurate material constitutive models necessary for finite-element-method simulations. In particular, the effects of strain and stress state on plastic deformation of AZ31 sheet at 450°C are considered in material constitutive model development. Material models are validated against data from simple forming experiments.
A comparative study of turbulence models for overset grids
NASA Technical Reports Server (NTRS)
Renze, Kevin J.; Buning, Pieter G.; Rajagopalan, R. G.
1992-01-01
The implementation of two different types of turbulence models for a flow solver using the Chimera overset grid method is examined. Various turbulence model characteristics, such as length scale determination and transition modeling, are found to have a significant impact on the computed pressure distribution for a multielement airfoil case. No inherent problem is found with using either algebraic or one-equation turbulence models with an overset grid scheme, but simulation of turbulence for multiple-body or complex geometry flows is very difficult regardless of the gridding method. For complex geometry flowfields, modification of the Baldwin-Lomax turbulence model is necessary to select the appropriate length scale in wall-bounded regions. The overset grid approach presents no obstacle to use of a one- or two-equation turbulence model. Both Baldwin-Lomax and Baldwin-Barth models have problems providing accurate eddy viscosity levels for complex multiple-body flowfields such as those involving the Space Shuttle.
NASA Astrophysics Data System (ADS)
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; Zhang, Guannan; Ye, Ming; Wu, Jianfeng; Wu, Jichun
2017-12-01
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations on a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency in the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions with different dimensionality and complexity in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method in two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
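A minimal sketch of the adaptive loop described above is given below, assuming an RBF surrogate and a candidate-pool search; the hybrid score here simply blends normalized distance-to-data (exploration) with the disagreement between the surrogate and its first-order Taylor expansion around the nearest sample (exploitation). The weights, pool sizes, and fixed iteration count are placeholders rather than the TEAD settings, and the stopping criterion is omitted.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def surrogate_grad(surr, x, eps=1e-4):
    """Central-difference gradient of the surrogate at x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (surr([x + e])[0] - surr([x - e])[0]) / (2 * eps)
    return g

def tead_like_design(f, lb, ub, n_init=6, n_add=20, n_cand=2000, w=0.5):
    rng = np.random.default_rng(0)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = rng.uniform(lb, ub, (n_init, lb.size))
    y = np.array([f(x) for x in X])
    for _ in range(n_add):
        surr = RBFInterpolator(X, y)
        grads = np.array([surrogate_grad(surr, x) for x in X])
        C = rng.uniform(lb, ub, (n_cand, lb.size))       # candidate pool
        dists = np.linalg.norm(C[:, None] - X[None], axis=2)
        near = dists.argmin(axis=1)
        d = dists[np.arange(n_cand), near]               # exploration term
        # first-order Taylor prediction from each candidate's nearest sample
        taylor = y[near] + np.einsum('ij,ij->i', grads[near], C - X[near])
        resid = np.abs(np.ravel(surr(C)) - taylor)       # exploitation term
        score = w * d / d.max() + (1 - w) * resid / max(resid.max(), 1e-12)
        x_new = C[score.argmax()]
        X = np.vstack([X, x_new]); y = np.append(y, f(x_new))
    return X, y
```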
The Role of Mental Models in Dynamic Decision-Making
2009-03-01
...simulate the processes that people use to manage complex systems. These analogies, moreover, represent one way to help people to form more accurate...make complex decisions. Control theory's primary emphasis is on the role of feedback while managing a complex system. What is common to all of these...
Eves, E Eugene; Murphy, Ethan K; Yakovlev, Vadim V
2007-01-01
The paper discusses characteristics of a new modeling-based technique for determining the dielectric properties of materials. Complex permittivity is found with an optimization algorithm designed to match complex S-parameters obtained from measurements and from 3D FDTD simulation. The method is developed on a two-port (waveguide-type) fixture and deals with complex reflection and transmission characteristics at the frequency of interest. A computational part is constructed as an inverse-RBF-network-based procedure that reconstructs the dielectric constant and the loss factor of the sample from the FDTD modeling data sets and the measured reflection and transmission coefficients. As such, it is applicable to samples and cavities of arbitrary configurations provided that the geometry of the experimental setup is adequately represented by the FDTD model. The practical implementation of the method considered in this paper is a section of a WR975 waveguide containing a sample of a liquid in a cylindrical cutout of a rectangular Teflon cup. The method is run in two stages and employs two databases: the first, built for a sparse grid on the complex permittivity plane, locates a domain containing the anticipated solution, and the second, a denser grid covering the determined domain, finds the exact location of the complex permittivity point. Numerical tests demonstrate that the computational part of the method is highly accurate even when the modeling data is represented by relatively small data sets. When working with reflection and transmission coefficients measured in an actual experimental fixture and reconstructing a low dielectric constant and loss factor, the technique may be less accurate. It is shown that the employed neural network is capable of finding the complex permittivity of the sample when experimental data on the reflection and transmission coefficients are numerically dispersive (noise-contaminated). A special modeling test is proposed for validating the results; it confirms that the values of complex permittivity for several liquids (including salt water, acetone, and three types of alcohol) at 915 MHz are reconstructed with satisfactory accuracy.
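The two-stage database search lends itself to a compact illustration. In the sketch below, `simulate(er, ei)` is a stand-in for the FDTD solver returning [S11, S21], and nearest-neighbor matching replaces the paper's inverse RBF network; the grid bounds and densities are arbitrary.

```python
import numpy as np

def two_stage_lookup(measured, simulate, eps_r=(1.0, 80.0), eps_i=(0.0, 30.0)):
    """Two-stage search over the complex-permittivity plane."""
    measured = np.asarray(measured)
    def best(grid_r, grid_i):
        err, arg = np.inf, None
        for er in grid_r:
            for ei in grid_i:
                s = np.asarray(simulate(er, ei))     # simulated [S11, S21]
                e = np.sum(np.abs(s - measured) ** 2)
                if e < err:
                    err, arg = e, (er, ei)
        return arg
    # stage 1: sparse grid locates the domain with the anticipated solution
    er0, ei0 = best(np.linspace(*eps_r, 9), np.linspace(*eps_i, 9))
    # stage 2: dense local grid pins down the complex-permittivity point
    return best(np.linspace(er0 - 5, er0 + 5, 21),
                np.linspace(max(ei0 - 2, 0), ei0 + 2, 21))
```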
Hao, Yong; Sun, Xu-Dong; Yang, Qiang
2012-12-01
A variable selection strategy combined with local linear embedding (LLE) was introduced for the analysis of complex samples using near infrared spectroscopy (NIRS). Three methods, Monte Carlo uninformative variable elimination (MCUVE), the successive projections algorithm (SPA), and MCUVE combined with SPA, were used for eliminating redundant spectral variables. Partial least squares regression (PLSR) and LLE-PLSR were used for modeling the complex samples. The results show that MCUVE can both extract effective informative variables and improve the precision of models. Compared with PLSR models, LLE-PLSR models achieve more accurate analysis results. MCUVE combined with LLE-PLSR is an effective modeling method for NIRS quantitative analysis.
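A hedged sketch of the MCUVE step is shown below: repeatedly refit a PLS model on random sample subsets and rank wavelengths by the stability (mean over standard deviation) of their regression coefficients. The run counts, subset fraction, and component number are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def mcuve_select(X, y, n_runs=200, frac=0.8, n_keep=50, n_comp=5):
    """Monte Carlo UVE: keep the variables with the most stable coefficients."""
    rng = np.random.default_rng(1)
    coefs = []
    for _ in range(n_runs):
        idx = rng.choice(X.shape[0], int(frac * X.shape[0]), replace=False)
        pls = PLSRegression(n_components=n_comp).fit(X[idx], y[idx])
        coefs.append(pls.coef_.ravel())
    coefs = np.array(coefs)
    stability = np.abs(coefs.mean(0)) / (coefs.std(0) + 1e-12)
    return np.argsort(stability)[-n_keep:]

# selected = mcuve_select(spectra, reference)
# model = PLSRegression(n_components=5).fit(spectra[:, selected], reference)
```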
Huang, Wei; Ravikumar, Krishnakumar M; Parisien, Marc; Yang, Sichun
2016-12-01
Structural determination of protein-protein complexes such as multidomain nuclear receptors has been challenging for high-resolution structural techniques. Here, we present a combined use of multiple biophysical methods, termed iSPOT, an integration of shape information from small-angle X-ray scattering (SAXS), protection factors probed by hydroxyl radical footprinting, and a large series of computationally docked conformations from rigid-body or molecular dynamics (MD) simulations. Specifically tested on two model systems, the power of iSPOT is demonstrated to accurately predict the structures of a large protein-protein complex (TGFβ-FKBP12) and a multidomain nuclear receptor homodimer (HNF-4α), based on the structures of individual components of the complexes. Although neither SAXS nor footprinting alone can yield an unambiguous picture for each complex, the combination of both, seamlessly integrated in iSPOT, narrows down the best-fit structures that are about 3.2Å and 4.2Å in RMSD from their corresponding crystal structures, respectively. Furthermore, this proof-of-principle study based on the data synthetically derived from available crystal structures shows that the iSPOT-using either rigid-body or MD-based flexible docking-is capable of overcoming the shortcomings of standalone computational methods, especially for HNF-4α. By taking advantage of the integration of SAXS-based shape information and footprinting-based protection/accessibility as well as computational docking, this iSPOT platform is set to be a powerful approach towards accurate integrated modeling of many challenging multiprotein complexes. Copyright © 2016 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Chow, Chuen-Yen; Ryan, James S.
1987-01-01
While the zonal grid system of Transonic Navier-Stokes (TNS) provides excellent modeling of complex geometries, improved shock capturing and a higher Mach number range will be required if flows about hypersonic aircraft are to be modeled accurately. A computational fluid dynamics (CFD) code, the Compressible Navier-Stokes (CNS), is under development to combine the required high Mach number capability with the existing TNS geometry capability. One of several candidate flow solvers for inclusion in the CNS is that of F3D. This upwind flow solver promises improved shock capturing and more accurate hypersonic solutions overall, compared to the solver currently used in TNS.
NASA Astrophysics Data System (ADS)
Brown, Alexander; Eviston, Connor
2017-02-01
Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Realistic simulations of eddy current inspections are required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations, including varying probe dimensions and orientations along with complex probe geometries. This will also enable the creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models were able to correctly model the probe and conductor interactions and accurately calculate the change in impedance of several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library to give experimenters easy access to powerful parameter-based eddy current models for alternate project applications.
NASA Astrophysics Data System (ADS)
Cai, Lei; Wang, Lin; Li, Bo; Zhang, Libao; Lv, Wen
2017-06-01
Vehicle tracking technology is currently one of the most active research topics in machine vision and is an important part of intelligent transportation systems. However, in both theory and technology, it still faces many challenges, including real-time operation and robustness. In video surveillance, targets must be detected in real time and their positions calculated accurately in order to judge their motion. The contents of video sequence images and the target motion are complex, so the objects cannot be expressed by a single unified mathematical model. Object tracking is defined as locating the moving target of interest in each frame of a video. Current tracking technology can achieve reliable results in simple environments for targets with easily identified characteristics. However, in more complex environments, it is easy to lose the target because of the mismatch between the target's appearance and its dynamic model. Moreover, the target usually has a complex shape, but traditional target-tracking algorithms usually represent the tracking result by a simple geometric shape such as a rectangle or circle, which cannot provide accurate information for subsequent higher-level applications. This paper combines a traditional object-tracking technique, the Mean-Shift algorithm, with an image segmentation algorithm, the Active-Contour model, to obtain object outlines during tracking and automatically handle topology changes. The outline information is in turn used to aid the tracking algorithm and improve its performance.
Sampling and modeling riparian forest structure and riparian microclimate
Bianca N.I. Eskelson; Paul D. Anderson; Hailemariam Temesgen
2013-01-01
Riparian areas are extremely variable and dynamic, and represent some of the most complex terrestrial ecosystems in the world. The high variability within and among riparian areas poses challenges in developing efficient sampling and modeling approaches that accurately quantify riparian forest structure and riparian microclimate. Data from eight stream reaches that are...
Structural model of control system for hydraulic stepper motor complex
NASA Astrophysics Data System (ADS)
Obukhov, A. D.; Dedov, D. L.; Kolodin, A. N.
2018-03-01
The article considers the problem of developing a structural model of the control system for a hydraulic stepper drive complex. A comparative analysis of stepper drives and an assessment of the applicability of hydraulic stepper motors for solving problems requiring accurate displacement in space with subsequent positioning of an object are carried out. The presented structural model of the automated control system of the multi-spindle complex of hydraulic stepper drives reflects the main components of the system, as well as its control process, based on the transfer of control signals from the controller to the solenoid valves. The models and methods described in the article can be used to formalize the control process in technical systems based on hydraulic stepper drives and allow switching from mechanical to automated control.
NASA Astrophysics Data System (ADS)
Hoopes, P. J.; Petryk, Alicia A.; Misra, Adwiteeya; Kastner, Elliot J.; Pearce, John A.; Ryan, Thomas P.
2015-03-01
For more than 50 years, hyperthermia-based cancer researchers have utilized mathematical models, cell culture studies and animal models to better understand, develop and validate potential new treatments. It has been, and remains, unclear how and to what degree these research techniques depend on, complement and, ultimately, translate accurately to a successful clinical treatment. In the past, when mathematical models have not proven accurate in a clinical treatment situation, the initiating quantitative scientists (engineers, mathematicians and physicists) have tended to believe the biomedical parameters provided to them were inaccurately determined or reported. In a similar manner, experienced biomedical scientists often tend to question the value of mathematical models and cell culture results since those data typically lack the level of biologic and medical variability and complexity that are essential to accurately study and predict complex diseases and subsequent treatments. Such quantitative and biomedical interdependence, variability, diversity and promise have never been greater than they are within magnetic nanoparticle hyperthermia cancer treatment. The use of hyperthermia to treat cancer is well studied and has utilized numerous delivery techniques, including microwaves, radio frequency, focused ultrasound, induction heating, infrared radiation, warmed perfusion liquids (combined with chemotherapy), and, recently, metallic nanoparticles (NP) activated by near infrared radiation (NIR) and alternating magnetic field (AMF) based platforms. The goal of this paper is to use proven concepts and current research to address the potential pathobiology, modeling and quantification of the effects of treatment as pertaining to the similarities and differences in energy delivered by known external delivery techniques and iron oxide nanoparticles.
Advanced EUV mask and imaging modeling
NASA Astrophysics Data System (ADS)
Evanschitzky, Peter; Erdmann, Andreas
2017-10-01
The exploration and optimization of image formation in partially coherent EUV projection systems with complex source shapes requires flexible, accurate, and efficient simulation models. This paper reviews advanced mask diffraction and imaging models for the highly accurate and fast simulation of EUV lithography systems, addressing important aspects of the current technical developments. The simulation of light diffraction from the mask employs an extended rigorous coupled wave analysis (RCWA) approach, which is optimized for EUV applications. In order to be able to deal with current EUV simulation requirements, several additional models are included in the extended RCWA approach: a field decomposition and a field stitching technique enable the simulation of larger complex structured mask areas. An EUV multilayer defect model including a database approach makes the fast and fully rigorous defect simulation and defect repair simulation possible. A hybrid mask simulation approach combining real and ideal mask parts allows the detailed investigation of the origin of different mask 3-D effects. The image computation is done with a fully vectorial Abbe-based approach. Arbitrary illumination and polarization schemes and adapted rigorous mask simulations guarantee a high accuracy. A fully vectorial sampling-free description of the pupil with Zernikes and Jones pupils and an optimized representation of the diffraction spectrum enable the computation of high-resolution images with high accuracy and short simulation times. A new pellicle model supports the simulation of arbitrary membrane stacks, pellicle distortions, and particles/defects on top of the pellicle. Finally, an extension for highly accurate anamorphic imaging simulations is included. The application of the models is demonstrated by typical use cases.
System-level simulation of liquid filling in microfluidic chips.
Song, Hongjun; Wang, Yi; Pant, Kapil
2011-06-01
Liquid filling in microfluidic channels is a complex process that depends on a variety of geometric, operating, and material parameters such as microchannel geometry, flow velocity/pressure, liquid surface tension, and contact angle of channel surface. Accurate analysis of the filling process can provide key insights into the filling time, air bubble trapping, and dead zone formation, and help evaluate trade-offs among the various design parameters and lead to optimal chip design. However, efficient modeling of liquid filling in complex microfluidic networks continues to be a significant challenge. High-fidelity computational methods, such as the volume of fluid method, are prohibitively expensive from a computational standpoint. Analytical models, on the other hand, are primarily applicable to idealized geometries and, hence, are unable to accurately capture chip level behavior of complex microfluidic systems. This paper presents a parametrized dynamic model for the system-level analysis of liquid filling in three-dimensional (3D) microfluidic networks. In our approach, a complex microfluidic network is deconstructed into a set of commonly used components, such as reservoirs, microchannels, and junctions. The components are then assembled according to their spatial layout and operating rationale to achieve a rapid system-level model. A dynamic model based on the transient momentum equation is developed to track the liquid front in the microchannels. The principle of mass conservation at the junction is used to link the fluidic parameters in the microchannels emanating from the junction. Assembly of these component models yields a set of differential and algebraic equations, which upon integration provides temporal information of the liquid filling process, particularly liquid front propagation (i.e., the arrival time). The models are used to simulate the transient liquid filling process in a variety of microfluidic constructs and in a multiplexer, representing a complex microfluidic network. The accuracy (relative error less than 7%) and orders-of-magnitude speedup (30,000× to 4,000,000×) of our system-level models are verified by comparison against 3D high-fidelity numerical studies. Our findings clearly establish the utility of our models and simulation methodology for fast, reliable analysis of liquid filling to guide the design optimization of complex microfluidic networks.
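To make the front-tracking idea concrete, here is a minimal lumped model for a single channel, assuming a Poiseuille-type viscous term, a constant applied inlet pressure, and a Young-Laplace capillary pressure. All parameter values and names are illustrative, not taken from the paper, and junction coupling is omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

rho, mu = 1000.0, 1e-3             # density (kg/m^3), viscosity (Pa s)
w, h, L = 100e-6, 50e-6, 10e-3     # channel width, height, length (m)
sigma, theta = 0.072, np.deg2rad(40)            # surface tension, contact angle
dp_cap = 2 * sigma * np.cos(theta) * (1/w + 1/h)  # capillary pressure (Pa)
dp_app = 1000.0                    # applied inlet pressure (Pa)

def rhs(t, s):
    """Transient momentum balance for the filled slug: rho*d(x*v)/dt = sum dp."""
    x, v = s
    x = max(x, 1e-6)                            # avoid the x -> 0 singularity
    f_visc = 12 * mu * x / h**2 * v             # lumped viscous resistance
    a = (dp_app + dp_cap - f_visc) / (rho * x) - v**2 / x
    return [v, a]

sol = solve_ivp(rhs, [0, 0.1], [1e-5, 0.0], max_step=1e-5)
filled = sol.y[0].max() >= L
arrival = sol.t[np.searchsorted(sol.y[0], L)] if filled else None
print("front arrival time (s):", arrival)
```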
Phenomenological model to fit complex permittivity data of water from radio to optical frequencies.
Shubitidze, Fridon; Osterberg, Ulf
2007-04-01
A general factorized form of the dielectric function together with a fractional model-based parameter estimation method is used to provide an accurate analytical formula for the complex refractive index in water for the frequency range 10^8-10^16 Hz. The analytical formula is derived using a combination of a microscopic frequency-dependent rational function for adjusting zeros and poles of the dielectric dispersion together with the macroscopic statistical Fermi-Dirac distribution to provide a description of both the real and imaginary parts of the complex permittivity for water. The Fermi-Dirac distribution allows us to model the dramatic reduction in the imaginary part of the permittivity in the visible window of the water spectrum.
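A toy version of the idea is sketched below: a single Debye relaxation term whose loss is gated by a Fermi-Dirac window so that Im(ε) drops in the visible band. This is our own much-simplified stand-in for the paper's factorized multi-pole formula, and every constant here is illustrative rather than a fitted value.

```python
import numpy as np
from scipy.special import expit   # numerically stable Fermi-Dirac factor

def eps_water_toy(freq_hz, eps_s=78.36, eps_inf=1.8, tau=8.3e-12,
                  f_edge=2e14, width=4e13):
    """Debye relaxation with a Fermi-Dirac gate on the loss term."""
    f = np.asarray(freq_hz, float)
    w = 2 * np.pi * f
    eps = eps_inf + (eps_s - eps_inf) / (1 + 1j * w * tau)
    gate = expit(-(f - f_edge) / width)   # ~1 below f_edge, ~0 above it
    return eps.real + 1j * eps.imag * gate

f = np.logspace(8, 16, 200)
n_complex = np.sqrt(eps_water_toy(f))     # complex refractive index
```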
Transforming Multidisciplinary Customer Requirements to Product Design Specifications
NASA Astrophysics Data System (ADS)
Ma, Xiao-Jie; Ding, Guo-Fu; Qin, Sheng-Feng; Li, Rong; Yan, Kai-Yin; Xiao, Shou-Ne; Yang, Guang-Wu
2017-09-01
With the increasing complexity of mechatronic products, it is necessary to involve multidisciplinary design teams; thus, traditional customer requirements modeling for a single-discipline team becomes difficult to apply in a multidisciplinary team and project, since team members with various disciplinary backgrounds may have different interpretations of the customers' requirements. A new synthesized multidisciplinary customer requirements modeling method is provided for obtaining and describing a common understanding of customer requirements (CRs) and, more importantly, transforming them into detailed and accurate product design specifications (PDS) with which different team members can interact effectively. A case study of designing a high-speed train verifies the rationality and feasibility of the proposed multidisciplinary requirement modeling method for complex mechatronic product development. The proposed research offers guidance for realizing customer-driven personalized customization of complex mechatronic products.
STEPS: efficient simulation of stochastic reaction-diffusion models in realistic morphologies.
Hepburn, Iain; Chen, Weiliang; Wils, Stefan; De Schutter, Erik
2012-05-10
Models of cellular molecular systems are built from components such as biochemical reactions (including interactions between ligands and membrane-bound proteins), conformational changes and active and passive transport. A discrete, stochastic description of the kinetics is often essential to capture the behavior of the system accurately. Where spatial effects play a prominent role the complex morphology of cells may have to be represented, along with aspects such as chemical localization and diffusion. This high level of detail makes efficiency a particularly important consideration for software that is designed to simulate such systems. We describe STEPS, a stochastic reaction-diffusion simulator developed with an emphasis on simulating biochemical signaling pathways accurately and efficiently. STEPS supports all the above-mentioned features, and well-validated support for SBML allows many existing biochemical models to be imported reliably. Complex boundaries can be represented accurately in externally generated 3D tetrahedral meshes imported by STEPS. The powerful Python interface facilitates model construction and simulation control. STEPS implements the composition and rejection method, a variation of the Gillespie SSA, supporting diffusion between tetrahedral elements within an efficient search and update engine. Additional support for well-mixed conditions and for deterministic model solution is implemented. Solver accuracy is confirmed with an original and extensive validation set consisting of isolated reaction, diffusion and reaction-diffusion systems. Accuracy imposes upper and lower limits on tetrahedron sizes, which are described in detail. By comparing to Smoldyn, we show how the voxel-based approach in STEPS is often faster than particle-based methods, with increasing advantage in larger systems, and by comparing to MesoRD we show the efficiency of the STEPS implementation. STEPS simulates models of cellular reaction-diffusion systems with complex boundaries with high accuracy and high performance in C/C++, controlled by a powerful and user-friendly Python interface. STEPS is free for use and is available at http://steps.sourceforge.net/
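As a toy illustration of the stochastic kinetics STEPS solves, the following sketch implements the direct Gillespie SSA for a reversible binding reaction. STEPS itself uses the composition-and-rejection variant of this algorithm and adds diffusion between tetrahedral mesh elements, which this well-mixed example omits.

```python
import numpy as np

def gillespie(x0, stoich, rates, t_end, seed=2):
    """Direct-method stochastic simulation algorithm (SSA)."""
    rng = np.random.default_rng(seed)
    t, x, traj = 0.0, np.array(x0, float), [(0.0, list(x0))]
    while t < t_end:
        a = np.array([r(x) for r in rates])   # reaction propensities
        a0 = a.sum()
        if a0 <= 0:                           # no reaction can fire
            break
        t += rng.exponential(1 / a0)          # waiting time to next event
        j = rng.choice(len(a), p=a / a0)      # which reaction fires
        x += stoich[j]
        traj.append((t, x.tolist()))
    return traj

# A + B <-> C with mass-action kinetics (illustrative rate constants)
stoich = np.array([[-1, -1, 1], [1, 1, -1]])
rates = [lambda x: 1e-3 * x[0] * x[1], lambda x: 0.1 * x[2]]
print(gillespie([100, 80, 0], stoich, rates, t_end=10.0)[-1])
```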
Unstructured mesh algorithms for aerodynamic calculations
NASA Technical Reports Server (NTRS)
Mavriplis, D. J.
1992-01-01
The use of unstructured mesh techniques for solving complex aerodynamic flows is discussed. The principal advantages of unstructured mesh strategies, as they relate to complex geometries, adaptive meshing capabilities, and parallel processing, are emphasized. The various aspects required for the efficient and accurate solution of aerodynamic flows are addressed. These include mesh generation, mesh adaptivity, solution algorithms, convergence acceleration, and turbulence modeling. Computations of viscous turbulent two-dimensional flows and inviscid three-dimensional flows about complex configurations are demonstrated. Remaining obstacles and directions for future research are also outlined.
Reynolds, Andrew M.; Lihoreau, Mathieu; Chittka, Lars
2013-01-01
Pollinating bees develop foraging circuits (traplines) to visit multiple flowers in a manner that minimizes overall travel distance, a task analogous to the travelling salesman problem. We report on an in-depth exploration of an iterative improvement heuristic model of bumblebee traplining previously found to accurately replicate the establishment of stable routes by bees between flowers distributed over several hectares. The critical test for a model is its predictive power for empirical data for which the model has not been specifically developed, and here the model is shown to be consistent with observations from different research groups made at several spatial scales and using multiple configurations of flowers. We refine the model to account for the spatial search strategy of bees exploring their environment, and test several previously unexplored predictions. We find that the model accurately predicts: 1) the increasing propensity of bees to optimize their foraging routes with increasing spatial scale; 2) that bees cannot establish stable optimal traplines for all spatial configurations of rewarding flowers; 3) the observed trade-off between travel distance and prioritization of high-reward sites (with a slight modification of the model); 4) the temporal pattern with which bees acquire approximate solutions to travelling salesman-like problems over several dozen foraging bouts; 5) the instability of visitation schedules in some spatial configurations of flowers; 6) the observation that in some flower arrays, bees' visitation schedules differ markedly among individuals; 7) the searching behaviour that leads to efficient location of flowers and routes between them. Our model constitutes a robust theoretical platform to generate novel hypotheses and refine our understanding of how small-brained insects develop a representation of space and use it to navigate in complex and dynamic environments.
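The core of an iterative-improvement heuristic of this kind is easy to sketch: each foraging bout proposes a mutated visit order and retains it only if the full circuit gets shorter. The reversal move, bout count, and flower layout below are our own illustrative choices, not the published model's parameters.

```python
import numpy as np

def trapline(flowers, nest, n_bouts=60, seed=3):
    """Keep a candidate route only if the nest->flowers->nest circuit shrinks."""
    rng = np.random.default_rng(seed)
    pts = np.vstack([nest, flowers])
    def length(order):
        tour = [0] + [i + 1 for i in order] + [0]
        return sum(np.linalg.norm(pts[a] - pts[b])
                   for a, b in zip(tour, tour[1:]))
    order = list(rng.permutation(len(flowers)))
    best = length(order)
    for _ in range(n_bouts):
        i, j = sorted(rng.choice(len(order), 2, replace=False))
        cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # segment reversal
        if length(cand) < best:
            order, best = cand, length(cand)
    return order, best

flowers = np.random.default_rng(0).uniform(0, 100, (6, 2))
print(trapline(flowers, nest=np.zeros(2)))
```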
Seismic modeling of complex stratified reservoirs
NASA Astrophysics Data System (ADS)
Lai, Hung-Liang
Turbidite reservoirs in deep-water depositional systems, such as the oil fields in the offshore Gulf of Mexico and North Sea, are becoming an important exploration target in the petroleum industry. Accurate seismic reservoir characterization, however, is complicated by the heterogeneity of the sand and shale distribution and by the lack of resolution when imaging thin channel deposits. Amplitude variation with offset (AVO) is a very important technique that is widely applied to locate hydrocarbons. Because of these problems in application to turbidite reservoirs, inaccurate estimates of seismic reflection amplitudes may result in misleading interpretations. Therefore, an efficient, accurate, and robust method of modeling seismic responses for such complex reservoirs is crucial and necessary to reduce exploration risk. A fast and accurate approach to generating synthetic seismograms for such reservoir models combines wavefront construction ray tracing with composite reflection coefficients in a hybrid modeling algorithm. The wavefront construction approach is a modern, fast implementation of ray tracing that I have extended to model quasi-shear wave propagation in anisotropic media. Composite reflection coefficients, which are computed using propagator matrix methods, provide the exact seismic reflection amplitude for a stratified reservoir model. This is a distinct improvement over conventional AVO analysis based on a model with only two homogeneous half spaces. I combine the two methods to compute synthetic seismograms for test models of turbidite reservoirs in the Ursa field, Gulf of Mexico, validating the new results against exact calculations using the discrete wavenumber method. The new method, however, can also be used to generate synthetic seismograms for laterally heterogeneous, complex stratified reservoir models. The results show important frequency dependence that may be useful for exploration. Because turbidite channel systems often display complex vertical and lateral heterogeneity that is difficult to measure directly, stochastic modeling is often used to predict the range of possible seismic responses. Though binary models containing mixtures of sands and shales have been proposed in previous work, log measurements show that these are not good representations of real seismic properties. Therefore, I develop a new approach for generating stochastic turbidite models (STM) from a combination of geological interpretation and well log measurements that is more realistic. Calculations of the composite reflection coefficient and synthetic seismograms predict direct hydrocarbon indicators associated with such turbidite sequences. The STMs provide important insights for predicting the seismic responses of complex turbidite reservoirs. Results of AVO responses predict the presence of gas saturation in the sand beds. For example, as the source frequency increases, the uncertainty in AVO responses for brine and gas sands predicts the possibility of false interpretation in AVO analysis.
On Connectivity of Wireless Sensor Networks with Directional Antennas
Wang, Qiu; Dai, Hong-Ning; Zheng, Zibin; Imran, Muhammad; Vasilakos, Athanasios V.
2017-01-01
In this paper, we investigate the network connectivity of wireless sensor networks with directional antennas. In particular, we establish a general framework to analyze the network connectivity while considering various antenna models and the channel randomness. Since existing directional antenna models have their pros and cons in the accuracy of reflecting realistic antennas and the computational complexity, we propose a new analytical directional antenna model called the iris model to balance the accuracy against the complexity. We conduct extensive simulations to evaluate the analytical framework. Our results show that our proposed analytical model on the network connectivity is accurate, and our iris antenna model can provide a better approximation to realistic directional antennas than other existing antenna models.
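A Monte Carlo estimate of connectivity under a directional antenna model is straightforward to sketch. The version below uses a plain sector (beamwidth) antenna rather than the paper's iris model, with bidirectional links and arbitrary deployment parameters of our choosing.

```python
import numpy as np

def connected(pos, ori, r, beam):
    """Nodes link if each lies within the other's beam and within range r."""
    n = len(pos)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            d = pos[j] - pos[i]
            if np.linalg.norm(d) > r:
                continue
            a_ij = np.arctan2(d[1], d[0])          # bearing i -> j
            a_ji = np.arctan2(-d[1], -d[0])        # bearing j -> i
            off_i = abs((a_ij - ori[i] + np.pi) % (2 * np.pi) - np.pi)
            off_j = abs((a_ji - ori[j] + np.pi) % (2 * np.pi) - np.pi)
            if off_i < beam / 2 and off_j < beam / 2:
                adj[i].append(j); adj[j].append(i)
    seen, stack = {0}, [0]                         # BFS/DFS from node 0
    while stack:
        for k in adj[stack.pop()]:
            if k not in seen:
                seen.add(k); stack.append(k)
    return len(seen) == n

rng = np.random.default_rng(4)
trials = [connected(rng.uniform(0, 1, (50, 2)), rng.uniform(0, 2 * np.pi, 50),
                    r=0.25, beam=np.deg2rad(90)) for _ in range(200)]
print("P(connected) ~", np.mean(trials))
```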
Bim Automation: Advanced Modeling Generative Process for Complex Structures
NASA Astrophysics Data System (ADS)
Banfi, F.; Fai, S.; Brumana, R.
2017-08-01
The new paradigm of complexity in modern and historic structures, which are characterised by complex forms and morphological and typological variables, is one of the greatest challenges for building information modelling (BIM). Generation of complex parametric models needs new scientific knowledge concerning new digital technologies. These elements are helpful to store a vast quantity of information during the life cycle of buildings (LCB). The latest developments of parametric applications do not provide advanced tools, resulting in time-consuming work for the generation of models. This paper presents a method capable of processing and creating complex parametric Building Information Models (BIM) with Non-Uniform Rational B-Splines (NURBS) with multiple levels of detail (Mixed and Reverse LoD) based on accurate 3D photogrammetric and laser scanning surveys. Complex 3D elements are converted into parametric BIM software and finite element applications (BIM to FEA) using specific exchange formats and new modelling tools. The proposed approach has been applied to different case studies: the BIM of the modern structure for the courtyard of West Block on Parliament Hill in Ottawa (Ontario) and the BIM of Masegra Castel in Sondrio (Italy), encouraging the dissemination and interaction of scientific results without losing information during the generative process.
Evaluation of a Linear Cumulative Damage Failure Model for Epoxy Adhesive
NASA Technical Reports Server (NTRS)
Richardson, David E.; Batista-Rodriquez, Alicia; Macon, David; Totman, Peter; McCool, Alex (Technical Monitor)
2001-01-01
Recently, a significant amount of work has been conducted to provide more complex and accurate material models for use in the evaluation of adhesive bondlines. Some of this has been prompted by recent studies into the effects of residual stresses on the integrity of bondlines. Several techniques have been developed for the analysis of bondline residual stresses. Key to these analyses is the criterion used for predicting failure. Residual stress loading of an adhesive bondline can occur over the life of the component; for many bonded systems, this can be several years. It is impractical to directly characterize failure of adhesive bondlines under a constant load for several years. Therefore, alternative approaches for predicting bondline failures are required. In the past, cumulative damage failure models have been developed, ranging from very simple to very complex. This paper documents the generation and evaluation of some of the simplest linear damage accumulation tensile failure models for an epoxy adhesive. It shows how several variations on the failure model were generated and presents an evaluation of the accuracy of these failure models in predicting creep failure of the adhesive. The paper shows that a simple failure model can be generated from short-term failure data for accurate predictions of long-term adhesive performance.
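The linear damage accumulation idea is essentially Miner's rule applied in time: each interval at stress σ consumes Δt/t_f(σ) of the life, and failure is predicted when the accumulated fraction reaches one. A minimal sketch with made-up short-term data:

```python
import numpy as np

# Illustrative short-term data: stress (MPa) vs. time-to-failure (s).
stress = np.array([20.0, 15.0, 10.0, 7.0])
t_fail = np.array([3e2, 4e3, 9e4, 2e6])

# Power-law fit t_f(sigma) = A * sigma**b from the short-term tests.
b, logA = np.polyfit(np.log(stress), np.log(t_fail), 1)
t_f = lambda s: np.exp(logA) * s ** b

def miner_damage(history):
    """history: (stress MPa, duration s) steps; failure predicted at sum >= 1."""
    return sum(dt / t_f(s) for s, dt in history)

print(miner_damage([(12.0, 5e3), (8.0, 5e5)]))  # damage < 1 -> predicted survival
```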
Sensitivity analysis of dynamic biological systems with time-delays.
Wu, Wu Hsiung; Wang, Feng Sheng; Chang, Maw Shang
2010-10-15
Mathematical modeling has long been applied to the study and analysis of complex biological systems. Some processes in biological systems, such as gene expression and feedback control in signal transduction networks, involve a time delay. These systems are represented as delay differential equation (DDE) models. Numerical sensitivity analysis of a DDE model by the direct method requires the solutions of the model and sensitivity equations with time delays. The major effort is the computation of the Jacobian matrix when computing the solution of the sensitivity equations. The computation of partial derivatives of complex equations, either by the analytic method or by symbolic manipulation, is time consuming, inconvenient, and prone to human error. To address this problem, an automatic approach to obtain the derivatives of complex functions efficiently and accurately is necessary. We have proposed an efficient algorithm with adaptive step size control to compute the solution and dynamic sensitivities of biological systems described by ordinary differential equations (ODEs). The adaptive direct-decoupled algorithm is extended here to solve for the solution and dynamic sensitivities of time-delay systems described by DDEs. To save human effort and avoid human error in the computation of partial derivatives, an automatic differentiation technique is embedded in the extended algorithm to evaluate the Jacobian matrix. The extended algorithm is implemented and applied to two realistic models with time delays: the cardiovascular control system and the TNF-α signal transduction network. The results show that the extended algorithm is a good tool for dynamic sensitivity analysis of DDE models with less user intervention. Compared with direct-coupled methods in theory, the extended algorithm is efficient, accurate, and easy to use for end users without a programming background to perform dynamic sensitivity analysis on complex biological systems with time delays.
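The direct method augments the state with its parameter sensitivities and integrates both together. The sketch below does this for a plain ODE (a logistic model) to show the structure; the DDE case adds delayed terms to both equations, and the hand-written partial derivatives here are exactly what the paper's embedded automatic differentiation replaces.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Logistic growth dx/dt = p*x*(1 - x/K); sensitivity s = dx/dp.
p, K = 0.5, 10.0

def augmented(t, z):
    x, s = z
    f = p * x * (1 - x / K)
    dfdx = p * (1 - 2 * x / K)   # Jacobian entry (hand-derived here)
    dfdp = x * (1 - x / K)       # partial derivative w.r.t. the parameter
    return [f, dfdx * s + dfdp]  # sensitivity equation: ds/dt = J*s + df/dp

sol = solve_ivp(augmented, [0, 20], [0.1, 0.0])
print("x(20) =", sol.y[0, -1], " dx/dp(20) =", sol.y[1, -1])
```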
NASA Technical Reports Server (NTRS)
Smith, C. B.
1982-01-01
The Fymat analytic inversion method for retrieving a particle-area distribution function from anomalous diffraction multispectral extinction data and total area is generalized to the case of a variable complex refractive index m(lambda) near unity, depending on the spectral wavelength lambda. Inversion tests are presented for a water-haze aerosol model. An upper phase-shift limit of 5π/2 yielded accurate retrieval of the peak of the area-distribution profile. Analytical corrections using both the total number and area improved the inversion.
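For reference, the anomalous diffraction approximation underlying this inversion gives the extinction efficiency of a sphere in closed form (van de Hulst). The sketch below evaluates it for a complex index m = n - ik near unity; that sign convention is our assumption, and the example values are arbitrary.

```python
import numpy as np

def q_ext_adt(x, m):
    """Anomalous-diffraction extinction efficiency for size parameter x
    and complex refractive index m near 1 (convention m = n - i*k)."""
    w = 2j * x * (m - 1.0)   # complex phase-shift parameter
    return np.real(2 + 4 * np.exp(-w) / w + 4 * (np.exp(-w) - 1) / w**2)

# Water-haze-like droplet with slight absorption
print(q_ext_adt(x=10.0, m=1.33 - 0.005j))
```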
NASA Astrophysics Data System (ADS)
Mitilineos, Stelios A.; Argyreas, Nick D.; Thomopoulos, Stelios C. A.
2009-05-01
A fusion-based localization technique for location-based services in indoor environments is introduced herein, based on ultrasound time-of-arrival measurements from multiple off-the-shelf range-estimating sensors used in a market-available localization system. In-situ field measurement results indicated that the off-the-shelf system was unable to estimate position in most cases, while the underlying sensors are of low quality and yield highly inaccurate range and position estimates. An extensive analysis is performed and a model of the sensor performance characteristics is established. A low-complexity but accurate sensor fusion and localization technique is then developed, which consists of evaluating multiple sensor measurements and selecting the one considered most accurate based on the underlying sensor model. Optimality, in the sense of a genie selecting the optimum sensor, is subsequently evaluated and compared to the proposed technique. The experimental results indicate that the proposed fusion method exhibits near-optimal performance and, albeit theoretically suboptimal, largely overcomes most flaws of the underlying single-sensor system, resulting in a localization system of increased accuracy, robustness and availability.
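The selection rule itself is tiny: score each reading with the per-sensor error model and keep the best one. The model below (an error floor from calibration plus a range-proportional term) is a hypothetical stand-in for the characterization established in the paper.

```python
import numpy as np

def select_measurement(readings, error_model):
    """Pick the range reading the sensor model judges most accurate."""
    scores = [error_model(s_id, r) for s_id, r in readings]
    return readings[int(np.argmin(scores))]

# Hypothetical model: predicted error std = per-sensor floor + range term.
floors = {0: 0.05, 1: 0.12, 2: 0.08}            # metres, from calibration
error_model = lambda s_id, r: floors[s_id] + 0.02 * r

readings = [(0, 4.1), (1, 3.9), (2, 4.4)]       # (sensor id, range in m)
print(select_measurement(readings, error_model))
```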
Fitting neuron models to spike trains.
Rossant, Cyrille; Goodman, Dan F M; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
Wang, Lin; Tian, Dan; Sun, Xiumei; Xiao, Yanju; Chen, Li; Wu, Guomin
2017-08-01
Facial asymmetry is very common in maxillofacial deformities, and accurate reconstruction is difficult to achieve. With the help of 3D-printed models and surgical templates, the osteotomy line and the amount of bone grinding can be determined accurately. In addition, by means of a precise repositioning instrument, the repositioning in genioplasty can be accurate and quick. In this study, we present a three-dimensional printing technique and a precise repositioning instrument to guide the osteotomy and repositioning, and we illustrate their feasibility and validity. Eight patients with complex facial asymmetries were studied. A precise 3D-printed model was obtained, and the preoperative design and surgical templates were made according to it. The surgical templates and precise repositioning instrument were used to obtain an accurate osteotomy and repositioning during the operation. Postoperative measurements were made based on computed tomographic data, including chin point deviation as well as the symmetry of the mandible evaluated by 3D curve functions. All patients obtained satisfactory esthetic results, and no recurrences occurred during follow-up. The results showed clinically acceptable precision for the mandible and chin. The mean and SD of the ICC between R-Post and L-Post were 0.973 ± 0.007. The mean and SD of chin point deviation 6 months after the operation were 0.63 ± 0.19 mm. The results of this study suggest that the three-dimensional printing technique and the precise repositioning instrument can aid in making better operation designs and more accurate manipulations in orthognathic surgery for complex facial asymmetry.
Modeling Interfacial Glass-Water Reactions: Recent Advances and Current Limitations
Pierce, Eric M.; Frugier, Pierre; Criscenti, Louise J.; ...
2014-07-12
Describing the reactions that occur at the glass-water interface and control the development of the altered layer constitutes one of the main scientific challenges impeding existing models from providing accurate radionuclide release estimates. Radionuclide release estimates are a critical component of the safety basis for geologic repositories. The altered layer (i.e., amorphous hydrated surface layer and crystalline reaction products) represents a complex region, both physically and chemically, sandwiched between two distinct boundaries: the pristine glass surface at the innermost interface and the aqueous solution at the outermost interface. Computational models, spanning different length and time scales, are currently being developed to improve our understanding of this complex and dynamic process, with the goal of accurately describing the pore-scale changes that occur as the system evolves. These modeling approaches include geochemical simulations [i.e., classical reaction path simulations and glass reactivity in allowance for alteration layer (GRAAL) simulations], Monte Carlo simulations, and molecular dynamics methods. Finally, in this manuscript, we discuss the advances and limitations of each modeling approach placed in the context of the glass-water reaction and how, collectively, these approaches provide insights into the mechanisms that control the formation and evolution of altered layers.
Howell, Bryan; McIntyre, Cameron C
2016-06-01
Deep brain stimulation (DBS) is an adjunctive therapy that is effective in treating movement disorders and shows promise for treating psychiatric disorders. Computational models of DBS have begun to be utilized as tools to optimize the therapy. Despite advancements in the anatomical accuracy of these models, there is still uncertainty as to what level of electrical complexity is adequate for modeling the electric field in the brain and the subsequent neural response to the stimulation. We used magnetic resonance images to create an image-based computational model of subthalamic DBS. The complexity of the volume conductor model was increased by incrementally including heterogeneity, anisotropy, and dielectric dispersion in the electrical properties of the brain. We quantified changes in the load of the electrode, the electric potential distribution, and stimulation thresholds of descending corticofugal (DCF) axon models. Incorporation of heterogeneity altered the electric potentials and subsequent stimulation thresholds, but to a lesser degree than incorporation of anisotropy. Additionally, the results were sensitive to the choice of method for defining anisotropy, with stimulation thresholds of DCF axons changing by as much as 190%. Typical approaches for defining anisotropy underestimate the expected load of the stimulation electrode, which led to underestimation of the extent of stimulation. More accurate predictions of the electrode load were achieved with alternative approaches for defining anisotropy. The effects of dielectric dispersion were small compared to the effects of heterogeneity and anisotropy. The results of this study help delineate the level of detail that is required to accurately model electric fields generated by DBS electrodes.
NASA Astrophysics Data System (ADS)
Gurrala, Praveen; Downs, Andrew; Chen, Kun; Song, Jiming; Roberts, Ron
2018-04-01
Full wave scattering models for ultrasonic waves are necessary for the accurate prediction of voltage signals received from complex defects/flaws in practical nondestructive evaluation (NDE) measurements. We propose the high-order Nyström method accelerated by the multilevel fast multipole algorithm (MLFMA) as an improvement to the state-of-the-art full-wave scattering models that are based on boundary integral equations. We present numerical results demonstrating improvements in simulation time and memory requirements. In particular, we demonstrate the need for higher-order geometry and field approximation in modeling NDE measurements. We also illustrate the importance of full-wave scattering models using experimental pulse-echo data from a spherical inclusion in a solid, which cannot be modeled accurately by approximation-based scattering models such as the Kirchhoff approximation.
Castellazzi, Giovanni; D'Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-07-28
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry with a reduced amount of time and ready to be used with structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made by voxel elements. A comparison analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation.
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1999-01-01
The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. The most prominent approximations made in the analysis were addressed and fully investigated for their accuracy by using the three-dimensional electromagnetic simulation code MAFIA (Solution of Maxwell's Equations by the Finite Integration Algorithm) (refs. 3 and 4). We found that several approximations introduced significant error (ref. 5).
NASA Astrophysics Data System (ADS)
Khalili, Ashkan; Jha, Ratneshwar; Samaratunga, Dulip
2016-11-01
Wave propagation analysis in 2-D composite structures is performed efficiently and accurately through the formulation of a User-Defined Element (UEL) based on the wavelet spectral finite element (WSFE) method. The WSFE method is based on the first-order shear deformation theory which yields accurate results for wave motion at high frequencies. The 2-D WSFE model is highly efficient computationally and provides a direct relationship between system input and output in the frequency domain. The UEL is formulated and implemented in Abaqus (commercial finite element software) for wave propagation analysis in 2-D composite structures with complexities. Frequency domain formulation of WSFE leads to complex valued parameters, which are decoupled into real and imaginary parts and presented to Abaqus as real values. The final solution is obtained by forming a complex value using the real number solutions given by Abaqus. Five numerical examples are presented in this article, namely undamaged plate, impacted plate, plate with ply drop, folded plate and plate with stiffener. Wave motions predicted by the developed UEL correlate very well with Abaqus simulations. The results also show that the UEL largely retains computational efficiency of the WSFE method and extends its ability to model complex features.
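The decoupling trick mentioned above, passing a complex frequency-domain system through a real-valued solver and recombining afterwards, can be written as an equivalent all-real block system. A minimal sketch (our own illustration, not the UEL code):

```python
import numpy as np

def complex_to_real_block(K, f):
    """Recast complex K u = f as the real block system
    [[Re K, -Im K], [Im K, Re K]] [Re u; Im u] = [Re f; Im f]."""
    Kr, Ki = K.real, K.imag
    A = np.block([[Kr, -Ki], [Ki, Kr]])
    b = np.concatenate([f.real, f.imag])
    s = np.linalg.solve(A, b)          # a purely real solve
    n = len(f)
    return s[:n] + 1j * s[n:]          # recombine the complex solution

K = np.array([[2 + 1j, -1j], [-1j, 3 + 2j]])
f = np.array([1 + 0j, 2 - 1j])
print(np.allclose(complex_to_real_block(K, f), np.linalg.solve(K, f)))
```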
Mo, Shaoxing; Lu, Dan; Shi, Xiaoqing; ...
2017-12-27
Global sensitivity analysis (GSA) and uncertainty quantification (UQ) for groundwater modeling are challenging because of the model complexity and significant computational requirements. To reduce the massive computational cost, a cheap-to-evaluate surrogate model is usually constructed to approximate and replace the expensive groundwater models in the GSA and UQ. Constructing an accurate surrogate requires actual model simulations at a number of parameter samples. Thus, a robust experimental design strategy is desired to locate informative samples so as to reduce the computational cost in surrogate construction and consequently to improve the efficiency of the GSA and UQ. In this study, we develop a Taylor expansion-based adaptive design (TEAD) that aims to build an accurate global surrogate model with a small training sample size. TEAD defines a novel hybrid score function to search for informative samples, and a robust stopping criterion to terminate the sample search that guarantees the resulting approximation errors satisfy the desired accuracy. The good performance of TEAD in building global surrogate models is demonstrated on seven analytical functions of varying dimensionality and complexity, in comparison to two widely used experimental design methods. The application of the TEAD-based surrogate method to two groundwater models shows that the TEAD design can effectively improve the computational efficiency of GSA and UQ for groundwater modeling.
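For readers unfamiliar with adaptive experimental design, the sketch below shows the general flavor of such a loop on a toy 1-D function; the hybrid score used here (distance to the nearest sample times a first-order Taylor residual) is an illustrative stand-in, not the published TEAD score function:

```python
import numpy as np

def f(x):                                    # toy "expensive" model (stand-in)
    return np.sin(3 * x) + 0.5 * x**2

# Greedily add the candidate that maximizes a hybrid score combining
# exploration (distance to nearest sample) and local nonlinearity
# (mismatch between the surrogate and a first-order Taylor prediction).
X = list(np.linspace(0, 2, 3))               # initial design
cand = np.linspace(0, 2, 201)
for _ in range(10):
    Xa = np.array(sorted(X))
    ya = f(Xa)
    grad = np.gradient(ya, Xa)               # cheap gradient estimate
    idx = np.abs(cand[:, None] - Xa[None, :]).argmin(axis=1)
    nearest = Xa[idx]
    dist = np.abs(cand - nearest)
    taylor = ya[idx] + grad[idx] * (cand - nearest)
    interp = np.interp(cand, Xa, ya)         # current (linear) surrogate
    score = dist * np.abs(interp - taylor)   # exploration x nonlinearity
    X.append(cand[np.argmax(score)])
print(f"{len(X)} samples selected: {np.round(sorted(X), 3)}")
```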
Kumar, B Shiva; Venkateswarlu, Ch
2014-08-01
The complex nature of biological reactions in biofilm reactors often poses difficulties in analyzing such reactors experimentally. Mathematical models could be very useful for their design and analysis. However, application of biofilm reactor models to practical problems proves somewhat ineffective due to the lack of knowledge of accurate kinetic models and uncertainty in model parameters. In this work, we propose an inverse modeling approach based on tabu search (TS) to estimate the parameters of kinetic and film thickness models. TS is used to estimate these parameters as a consequence of the validation of the mathematical models of the process with the aid of measured data obtained from an experimental fixed-bed anaerobic biofilm reactor treating pharmaceutical industry wastewater. The results evaluated for different modeling configurations of varying degrees of complexity illustrate the effectiveness of TS for accurate estimation of kinetic and film thickness model parameters of the biofilm process. The results show that the two-dimensional mathematical model with Edwards kinetics (with optimum parameters μ_max ρ_s/Y = 24.57, K_s = 1.352 and K_i = 102.36) and a three-parameter film thickness expression (with estimated parameters a = 0.289 × 10^-5, b = 1.55 × 10^-4 and c = 15.2 × 10^-6) best describes the biofilm reactor treating the industrial wastewater.
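A bare-bones illustration of tabu search for parameter estimation is sketched below; the one-parameter decay model, neighborhood moves, and tabu-list length are all invented for illustration and are far simpler than the biofilm models fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def model(k, t):                        # toy first-order decay (stand-in kinetics)
    return np.exp(-k * t)

t = np.linspace(0, 5, 20)
data = model(0.8, t) + 0.01 * rng.standard_normal(t.size)

def sse(k):                             # sum-of-squares misfit to "measurements"
    return np.sum((model(k, t) - data) ** 2)

# Tabu search core: always move to the best non-tabu neighbor, even if it is
# worse than the current point, and keep recently visited values on a tabu list.
k, best_k = 2.0, 2.0
tabu = [round(k, 2)]
for _ in range(100):
    neighbors = [k + d for d in (-0.1, -0.05, 0.05, 0.1)
                 if round(k + d, 2) not in tabu and k + d > 0]
    if not neighbors:
        break
    k = min(neighbors, key=sse)
    tabu = (tabu + [round(k, 2)])[-10:]  # fixed-length tabu memory
    if sse(k) < sse(best_k):
        best_k = k
print(f"estimated k = {best_k:.2f}")     # recovers ~0.8
```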
Deformable complex network for refining low-resolution X-ray structures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Chong; Wang, Qinghua; Ma, Jianpeng, E-mail: jpma@bcm.edu
2015-10-27
A new refinement algorithm called the deformable complex network, which combines a novel angular network-based restraint with a deformable elastic network model in the target function, has been developed to aid structural refinement in macromolecular X-ray crystallography. In macromolecular X-ray crystallography, building more accurate atomic models from lower resolution experimental diffraction data remains a great challenge. Previous studies have used a deformable elastic network (DEN) model to aid low-resolution structural refinement. In this study, the development of a new refinement algorithm called the deformable complex network (DCN) is reported that combines a novel angular network-based restraint with the DEN model in the target function. Testing of DCN on a wide range of low-resolution structures demonstrated that it consistently leads to significantly improved structural models as judged by multiple refinement criteria, thus representing a new effective refinement tool for low-resolution structure determination.
Kurata, Hiroyuki; Sugimoto, Yurie
2018-02-01
Many kinetic models of Escherichia coli central metabolism have been built, but few accurately reproduce the dynamic behaviors of the wild type and multiple genetic mutants. In 2016, our latest kinetic model addressed problems of existing models in reproducing the cell growth and glucose uptake of the wild type, ΔpykA:pykF and Δpgi in batch culture, but it overestimated the glucose uptake and cell growth rates of Δppc and hardly captured the typical characteristics of the glyoxylate and TCA cycle fluxes for Δpgi and Δppc. Such discrepancies between the simulated and experimental data suggested biological complexity. In this study, we overcame these problems by assuming critical mechanisms regarding the OAA-regulated isocitrate dehydrogenase activity, aceBAK gene regulation and growth suppression. The present model accurately predicts the extracellular and intracellular dynamics of the wild type and many gene knockout mutants in batch and continuous cultures. It is now the most accurate, detailed kinetic model of E. coli central carbon metabolism and will contribute to advances in mathematical modeling of cell factories. Copyright © 2017 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
Jaffa, Miran A; Gebregziabher, Mulugeta; Jaffa, Ayad A
2015-06-14
Renal transplant patients are mandated to have continuous assessment of their kidney function over time to monitor disease progression determined by changes in blood urea nitrogen (BUN), serum creatinine (Cr), and estimated glomerular filtration rate (eGFR). Multivariate analysis of these outcomes that aims at identifying the differential factors that affect disease progression is of great clinical significance. Thus our study aims at demonstrating the application of different joint modeling approaches with random coefficients on a cohort of renal transplant patients and presenting a comparison of their performance through a pseudo-simulation study. The objective of this comparison is to identify the model with the best performance and to determine whether accuracy compensates for complexity in the different multivariate joint models. We propose a novel application of multivariate Generalized Linear Mixed Models (mGLMM) to analyze multiple longitudinal kidney function outcomes collected over 3 years on a cohort of 110 renal transplantation patients. The correlated outcomes BUN, Cr, and eGFR and the effect of various covariates such as patient gender, age and race on these markers were determined holistically using different mGLMMs. The performance of the various mGLMMs that encompass shared random intercept (SHRI), shared random intercept and slope (SHRIS), separate random intercept (SPRI) and separate random intercept and slope (SPRIS) was assessed to identify the one that has the best fit and most accurate estimates. A bootstrap pseudo-simulation study was conducted to gauge the tradeoff between the complexity and accuracy of the models. Accuracy was determined using two measures: the mean of the differences between the estimates of the bootstrapped datasets and the true beta obtained from the application of each model on the renal dataset, and the mean of the square of these differences. The results showed that SPRI provided the most accurate estimates and did not exhibit any computational or convergence problem. Higher accuracy was demonstrated when the level of complexity increased from shared random coefficient models to the separate random coefficient alternatives, with SPRI showing the best fit and most accurate estimates.
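The two bootstrap accuracy measures described above are straightforward to compute; a minimal sketch with synthetic numbers (the true coefficient and the spread of the bootstrap estimates are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
true_beta = 1.5                                           # "true" coefficient (illustrative)
boot_betas = true_beta + 0.2 * rng.standard_normal(500)   # bootstrap estimates (synthetic)

diffs = boot_betas - true_beta
mean_bias = diffs.mean()         # mean of the differences (accuracy measure 1)
mean_sq = (diffs ** 2).mean()    # mean of the squared differences (measure 2)
print(f"bias = {mean_bias:.4f}, mean squared difference = {mean_sq:.4f}")
```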
NASA Technical Reports Server (NTRS)
Mei, Chuh; Pates, Carl S., III
1994-01-01
A coupled boundary element (BEM)-finite element (FEM) approach is presented to accurately model structure-acoustic interaction systems. The boundary element method is first applied to interior two- and three-dimensional acoustic domains with complex geometry configurations. Boundary element results are very accurate when compared with the limited exact solutions available. Structure-acoustic interaction problems are then analyzed with the coupled FEM-BEM method, where the finite element method models the structure and the boundary element method models the interior acoustic domain. The coupled analysis is compared with exact and experimental results for a simplified model. Composite panels are analyzed and compared with isotropic results. The coupled method is then extended for random excitation. Random excitation results are compared with uncoupled results for isotropic and composite panels.
NASA Technical Reports Server (NTRS)
Daigle, Matthew; Kulkarni, Chetan S.
2016-01-01
As batteries become increasingly prevalent in complex systems such as aircraft and electric cars, monitoring and predicting battery state of charge and state of health becomes critical. In order to accurately predict the remaining battery power to support system operations for informed operational decision-making, age-dependent changes in dynamics must be accounted for. Using an electrochemistry-based model, we investigate how key parameters of the battery change as aging occurs, and develop models to describe aging through these key parameters. Using these models, we demonstrate how we can (i) accurately predict end-of-discharge for aged batteries, and (ii) predict the end-of-life of a battery as a function of anticipated usage. The approach is validated through an experimental set of randomized discharge profiles.
NASA Astrophysics Data System (ADS)
Weitzner, Stephen E.; Dabo, Ismaila
2017-11-01
The detailed atomistic modeling of electrochemically deposited metal monolayers is challenging due to the complex structure of the metal-solution interface and the critical effects of surface electrification during electrode polarization. Accurate models of interfacial electrochemical equilibria are further challenged by the need to include entropic effects to obtain accurate surface chemical potentials. We present an embedded quantum-continuum model of the interfacial environment that addresses each of these challenges and study the underpotential deposition of silver on the gold (100) surface. We leverage these results to parametrize a cluster expansion of the electrified interface and show through grand canonical Monte Carlo calculations the crucial need to account for variations in the interfacial dipole when modeling electrodeposited metals under finite-temperature electrochemical conditions.
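The grand canonical Monte Carlo step underlying such calculations accepts insertions and deletions with probability min(1, exp(-β(ΔE - μΔN))); a toy lattice-gas sketch of that move (the Hamiltonian and all parameters below are illustrative, not the paper's parametrized cluster expansion):

```python
import numpy as np

rng = np.random.default_rng(3)
L = 20          # lattice side
beta = 2.0      # 1/kT
mu = -0.1       # chemical potential
eps = -0.25     # nearest-neighbor interaction (attractive); all values are toy
occ = np.zeros((L, L), dtype=int)

def nn_energy(i, j):
    """Interaction energy of a particle at (i, j) with its occupied neighbors."""
    nn = occ[(i+1) % L, j] + occ[(i-1) % L, j] + occ[i, (j+1) % L] + occ[i, (j-1) % L]
    return eps * nn

# Grand canonical moves: try to flip the occupancy of a random site and
# accept with the Metropolis rule min(1, exp(-beta*(dE - mu*dN))).
for _ in range(200_000):
    i, j = rng.integers(L, size=2)
    dN = 1 - 2 * occ[i, j]          # +1 insertion, -1 deletion
    dE = dN * nn_energy(i, j)
    if rng.random() < np.exp(-beta * (dE - mu * dN)):
        occ[i, j] += dN

print(f"mean coverage = {occ.mean():.3f}")
```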
A drone detection with aircraft classification based on a camera array
NASA Astrophysics Data System (ADS)
Liu, Hao; Qu, Fangchao; Liu, Yingjian; Zhao, Wei; Chen, Yitong
2018-03-01
In recent years, the rapid popularity of drones has led many people to operate them, bringing a range of security issues to sensitive areas such as airports and military sites. Realizing fine-grained classification that provides fast and accurate detection of different drone models is one important way to address these problems. The main challenges of fine-grained classification are that: (1) there are many types of drones, and their models are complex and diverse; (2) recognition must be fast and accurate, and existing methods are not efficient. In this paper, we propose a fine-grained drone detection system based on a high-resolution camera array. The system can quickly and accurately recognize fine-grained drone models based on the HD camera array.
Experimental validation of numerical simulations on a cerebral aneurysm phantom model
Seshadhri, Santhosh; Janiga, Gábor; Skalej, Martin; Thévenin, Dominique
2012-01-01
The treatment of cerebral aneurysms, found in roughly 5% of the population and, in case of rupture, associated with a high mortality rate, is a major challenge for neurosurgery and neuroradiology due to the complexity of the intervention and to the resulting high hazard ratio. Improvements are possible but require a better understanding of the associated unsteady blood flow patterns in complex 3D geometries. It would be very useful to carry out such studies using suitable numerical models, if it is proven that they reproduce the real conditions accurately enough. This validation step is classically based on comparisons with measured data. Since in vivo measurements are extremely difficult and therefore of limited accuracy, complementary model-based investigations considering realistic configurations are essential. In the present study, simulations based on computational fluid dynamics (CFD) have been compared with in situ laser-Doppler velocimetry (LDV) measurements in the phantom model of a cerebral aneurysm. The employed 1:1 model is made from transparent silicone. A liquid mixture composed of water, glycerin, xanthan gum and sodium chloride has been specifically adapted for the present investigation. It shows physical flow properties similar to real blood and leads to a refraction index perfectly matched to that of the silicone model, allowing accurate optical measurements of the flow velocity. For both experiments and simulations, complex pulsatile flow waveforms and flow rates were accounted for. This finally allows a direct, quantitative comparison between measurements and simulations. In this manner, the accuracy of the employed computational model can be checked. PMID:24265876
A multiscale red blood cell model with accurate mechanics, rheology, and dynamics.
Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em
2010-05-19
Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Marsh, C.; Pomeroy, J. W.; Wheater, H. S.
2017-12-01
Accurate management of water resources is necessary for social, economic, and environmental sustainability worldwide. In locations with seasonal snowcovers, the accurate prediction of these water resources is further complicated by frozen soils, solid-phase precipitation, blowing snow transport, and snowcover-vegetation-atmosphere interactions. Complex process interactions and feedbacks are a key feature of hydrological systems and may result in emergent phenomena, i.e., the arising of novel and unexpected properties within a complex system. One example is the feedback associated with blowing snow redistribution, which can lead to drifts that cause locally increased soil moisture, thus increasing plant growth that in turn impacts snow redistribution, creating larger drifts. Attempting to simulate these emergent behaviours is a significant challenge, however, and there is concern that process conceptualizations within current models are too incomplete to represent the needed interactions. An improved understanding of the role of emergence in hydrological systems often requires high-resolution distributed numerical hydrological models that incorporate the relevant process dynamics. The Canadian Hydrological Model (CHM) provides a novel tool for examining cold-region hydrological systems. Key features include efficient terrain representation, allowing simulations at various spatial scales, reduced computational overhead, and a modular process representation allowing for an alternative-hypothesis framework. Using both physics-based and conceptual process representations sourced from long-term process studies and the current cold-regions literature allows for comparison of process representations and, importantly, of their ability to produce emergent behaviours. Examining the system in a holistic, process-based manner can hopefully yield important insights and aid in the development of improved process representations.
Adly, Amr A.; Abd-El-Hafiz, Salwa K.
2012-01-01
Incorporation of hysteresis models in electromagnetic analysis approaches is indispensable to accurate field computation in complex magnetic media. Throughout those computations, vector nature and computational efficiency of such models become especially crucial when sophisticated geometries requiring massive sub-region discretization are involved. Recently, an efficient vector Preisach-type hysteresis model constructed from only two scalar models having orthogonally coupled elementary operators has been proposed. This paper presents a novel Hopfield neural network approach for the implementation of Stoner–Wohlfarth-like operators that could lead to a significant enhancement in the computational efficiency of the aforementioned model. Advantages of this approach stem from the non-rectangular nature of these operators that substantially minimizes the number of operators needed to achieve an accurate vector hysteresis model. Details of the proposed approach, its identification and experimental testing are presented in the paper. PMID:25685446
Quantitative computational models of molecular self-assembly in systems biology
Thomas, Marcus; Schwartz, Russell
2017-01-01
Molecular self-assembly is the dominant form of chemical reaction in living systems, yet efforts at systems biology modeling are only beginning to appreciate the need for and challenges to accurate quantitative modeling of self-assembly. Self-assembly reactions are essential to nearly every important process in cell and molecular biology and handling them is thus a necessary step in building comprehensive models of complex cellular systems. They present exceptional challenges, however, to standard methods for simulating complex systems. While the general systems biology world is just beginning to deal with these challenges, there is an extensive literature dealing with them for more specialized self-assembly modeling. This review will examine the challenges of self-assembly modeling, nascent efforts to deal with these challenges in the systems modeling community, and some of the solutions offered in prior work on self-assembly specifically. The review concludes with some consideration of the likely role of self-assembly in the future of complex biological system models more generally. PMID:28535149
Bioaccumulation of methylmercury in exposed fish communities is primarily mediated via dietary uptake rather than direct gill uptake from the ambient water. Consequently, accurate prediction of fish methylmercury concentrations demands reasonably realistic representations of a com...
COSP - A computer model of cyclic oxidation
NASA Technical Reports Server (NTRS)
Lowell, Carl E.; Barrett, Charles A.; Palmer, Raymond W.; Auping, Judith V.; Probst, Hubert B.
1991-01-01
A computer model useful in predicting the cyclic oxidation behavior of alloys is presented. The model considers the oxygen uptake due to scale formation during the heating cycle and the loss of oxide due to spalling during the cooling cycle. The balance between scale formation and scale loss is modeled and used to predict weight change and metal loss kinetics. A simple uniform spalling model is compared to a more complex random spall site model. In nearly all cases, the simpler uniform spall model gave predictions as accurate as the more complex model. The model has been applied to several nickel-base alloys which, depending upon composition, form Al2O3 or Cr2O3 during oxidation. The model has been validated by several experimental approaches. Versions of the model that run on a personal computer are available.
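The balance the abstract describes, scale growth on heating versus spalling on cooling, can be sketched in a few lines; the constants below are invented for illustration and are not COSP's calibrated values:

```python
# Toy cyclic-oxidation balance in the spirit of a uniform-spall model:
# parabolic scale growth while hot, a fixed fraction of oxide lost on each
# cooldown. All constants are illustrative, not COSP's calibrated values.
kp = 0.01          # parabolic rate constant (mg^2 cm^-4 h^-1)
q = 0.05           # fraction of retained oxide spalled per cooling cycle
f_ox = 0.47        # oxygen mass fraction of the oxide (Al2O3-like)
dt = 1.0           # hot time per cycle (h)

oxide, spalled = 0.0, 0.0      # retained and cumulative spalled oxide (mg/cm^2)
for cycle in range(500):
    oxide = (oxide**2 + kp * dt) ** 0.5    # parabolic growth during heating
    loss = q * oxide                       # uniform spall during cooling
    oxide -= loss
    spalled += loss

# Specimen weight change = oxygen gained in the retained scale minus the
# metal carried away in spalled oxide.
dW = f_ox * oxide - (1 - f_ox) * spalled
print(f"after 500 cycles: oxide {oxide:.3f}, net weight change {dW:.3f} mg/cm^2")
```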
Optimisation of a Generic Ionic Model of Cardiac Myocyte Electrical Activity
Guo, Tianruo; Al Abed, Amr; Lovell, Nigel H.; Dokos, Socrates
2013-01-01
A generic cardiomyocyte ionic model, whose complexity lies between a simple phenomenological formulation and a biophysically detailed ionic membrane current description, is presented. The model provides a user-defined number of ionic currents, employing two-gate Hodgkin-Huxley type kinetics. Its generic nature allows accurate reconstruction of action potential waveforms recorded experimentally from a range of cardiac myocytes. Using a multiobjective optimisation approach, the generic ionic model was optimised to accurately reproduce multiple action potential waveforms recorded from central and peripheral sinoatrial nodes and right atrial and left atrial myocytes from rabbit cardiac tissue preparations, under different electrical stimulus protocols and pharmacological conditions. When fitted simultaneously to multiple datasets, the time course of several physiologically realistic ionic currents could be reconstructed. Model behaviours tend to be well identified when extra experimental information is incorporated into the optimisation. PMID:23710254
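A single generic current with two-gate Hodgkin-Huxley-type kinetics, the building block described above, can be sketched as follows; all rate parameters below are invented for illustration:

```python
import numpy as np

# One generic ionic current with two Hodgkin-Huxley-type gates (activation m,
# inactivation h): I = g*m*h*(V - E). Gate kinetics: dx/dt = (x_inf - x)/tau_x.
g, E = 1.0, -85.0                       # conductance (mS/cm^2), reversal (mV)

def x_inf(V, Vh, k):                    # Boltzmann steady-state curve
    return 1.0 / (1.0 + np.exp((Vh - V) / k))

def step_gates(V, m, h, dt):
    m += dt * (x_inf(V, -30.0, 5.0) - m) / 2.0     # tau_m = 2 ms
    h += dt * (x_inf(V, -60.0, -7.0) - h) / 50.0   # tau_h = 50 ms
    return m, h

V, m, h, dt = -80.0, 0.0, 1.0, 0.01     # crude voltage-clamp protocol
for n in range(int(300 / dt)):          # 300 ms, step to -20 mV at t = 50 ms
    V = -20.0 if n * dt > 50 else -80.0
    m, h = step_gates(V, m, h, dt)
I = g * m * h * (V - E)
print(f"current at end of step: {I:.3f} uA/cm^2")
```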
Real-time tumor motion estimation using respiratory surrogate via memory-based learning
NASA Astrophysics Data System (ADS)
Li, Ruijiang; Lewis, John H.; Berbeco, Ross I.; Xing, Lei
2012-08-01
Respiratory tumor motion is a major challenge in radiation therapy for thoracic and abdominal cancers. Effective motion management requires an accurate knowledge of the real-time tumor motion. External respiration monitoring devices (optical, etc) provide a noninvasive, non-ionizing, low-cost and practical approach to obtain the respiratory signal. Due to the highly complex and nonlinear relations between tumor and surrogate motion, its ultimate success hinges on the ability to accurately infer the tumor motion from respiratory surrogates. Given their widespread use in the clinic, such a method is critically needed. We propose to use a powerful memory-based learning method to find the complex relations between tumor motion and respiratory surrogates. The method first stores the training data in memory and then finds relevant data to answer a particular query. Nearby data points are assigned high relevance (or weights) and conversely distant data are assigned low relevance. By fitting relatively simple models to local patches instead of fitting one single global model, it is able to capture highly nonlinear and complex relations between the internal tumor motion and external surrogates accurately. Due to the local nature of weighting functions, the method is inherently robust to outliers in the training data. Moreover, both training and adapting to new data are performed almost instantaneously with memory-based learning, making it suitable for dynamically following variable internal/external relations. We evaluated the method using respiratory motion data from 11 patients. The data set consists of simultaneous measurement of 3D tumor motion and 1D abdominal surface (used as the surrogate signal in this study). There are a total of 171 respiratory traces, with an average peak-to-peak amplitude of ∼15 mm and average duration of ∼115 s per trace. Given only 5 s (roughly one breath) pretreatment training data, the method achieved an average 3D error of 1.5 mm and 95th percentile error of 3.4 mm on unseen test data. The average 3D error was further reduced to 1.4 mm when the model was tuned to its optimal setting for each respiratory trace. In one trace where a few outliers are present in the training data, the proposed method achieved an error reduction of as much as ∼50% compared with the best linear model (1.0 mm versus 2.1 mm). The memory-based learning technique is able to accurately capture the highly complex and nonlinear relations between tumor and surrogate motion in an efficient manner (a few milliseconds per estimate). Furthermore, the algorithm is particularly suitable to handle situations where the training data are contaminated by large errors or outliers. These desirable properties make it an ideal candidate for accurate and robust tumor gating/tracking using respiratory surrogates.
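The local fitting idea described above, high weights for nearby stored points and low weights for distant ones, is captured by locally weighted linear regression; a minimal sketch on synthetic surrogate/tumor data (the kernel width and data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Training "memory": surrogate signal s -> tumor position y (synthetic here).
s_train = np.linspace(0, 2 * np.pi, 200)
y_train = 10 * np.sin(s_train) + 0.3 * rng.standard_normal(200)

def predict(s_query, h=0.3):
    """Locally weighted linear regression: fit a line to the stored points,
    weighting by a Gaussian kernel so nearby data get high relevance."""
    w = np.exp(-0.5 * ((s_train - s_query) / h) ** 2)
    X = np.column_stack([np.ones_like(s_train), s_train])
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, WX.T @ y_train)   # weighted least squares
    return beta[0] + beta[1] * s_query

print(f"estimate at s=1.0: {predict(1.0):.2f} mm (true {10*np.sin(1.0):.2f})")
```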
Harikrishnan, A R; Dhar, Purbarun; Gedupudi, Sateesh; Das, Sarit K
2018-04-12
We propose a comprehensive analysis and a quasi-analytical mathematical formalism to predict the surface tension and contact angles of complex surfactant-infused nanocolloids. The model rests on the foundations of the interaction potentials for the interfacial adsorption-desorption dynamics in complex multicomponent colloids. Surfactant-infused nanoparticle-laden interface problems are difficult to deal with because of the many-body interactions and interfaces involved at the meso-nanoscales. The model is based on the governing role of thermodynamic and chemical equilibrium parameters in modulating the interfacial energies. The influence of parameters such as the presence of surfactants, nanoparticles, and surfactant-capped nanoparticles on interfacial dynamics is revealed by the analysis. Solely based on the knowledge of interfacial properties of independent surfactant solutions and nanocolloids, the same can be deduced for complex surfactant-based nanocolloids through the proposed approach. The model accurately predicts the equilibrium surface tension and contact angle of complex nanocolloids available in the existing literature and present experimental findings.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sentis, Manuel Lorenzo; Gable, Carl W.
There are many applications in science and engineering modeling where an accurate representation of a complex model geometry in the form of a mesh is important. In applications of flow and transport in subsurface porous media, this is manifest in models that must capture complex geologic stratigraphy, structure (faults, folds, erosion, deposition) and infrastructure (tunnels, boreholes, excavations). Model setup, defined as the activities of geometry definition, mesh generation (creation, optimization, modification, refinement, de-refinement, smoothing), and assignment of material properties, initial conditions and boundary conditions, requires specialized software tools to automate and streamline the process. In addition, some model setup tools will provide more utility if they are designed to interface with and meet the needs of a particular flow and transport software suite. A control volume discretization that uses a two-point flux approximation is, for example, most accurate when the underlying control volumes are 2D or 3D Voronoi tessellations. In this paper we present the coupling of LaGriT, a mesh generation and model setup software suite, and TOUGH2 to model subsurface flow problems, and we show an example of how LaGriT can be used as a model setup tool for the generation of a Voronoi mesh for the simulation program TOUGH2. To generate the MESH file for TOUGH2 from the LaGriT output, a standalone module Lagrit2Tough2 was developed, which is presented here and will be included in a future release of LaGriT. An alternative method to generate a Voronoi mesh for TOUGH2 with LaGriT is thus presented, and thanks to the modular and command-based structure of LaGriT this method is well suited to generating meshes for complex models.
Li, Shan; Dong, Xia; Su, Zhengchang
2013-07-30
Although prokaryotic gene transcription has been studied for decades, many aspects of the process remain poorly understood. In particular, recent studies have revealed that transcriptomes in many prokaryotes are far more complex than previously thought. Genes in an operon are often alternatively and dynamically transcribed under different conditions, and a large portion of genes and intergenic regions have antisense RNA (asRNA) and non-coding RNA (ncRNA) transcripts, respectively. Ironically, similar studies have not been conducted in the model bacterium E. coli K12, so it is unknown whether or not the bacterium possesses similarly complex transcriptomes. Furthermore, although RNA-seq has become the major method for analyzing the complexity of prokaryotic transcriptomes, it is still a challenging task to accurately assemble full-length transcripts from short RNA-seq reads. To fill these gaps, we have profiled the transcriptomes of E. coli K12 under different culture conditions and growth phases using a highly specific directional RNA-seq technique that can capture various types of transcripts in the bacterial cells, combined with a highly accurate and robust algorithm and tool, TruHMM (http://bioinfolab.uncc.edu/TruHmm_package/), for assembling full-length transcripts. We found that 46.9 ~ 63.4% of expressed operons were utilized in their putative alternative forms, 72.23 ~ 89.54% of genes had putative asRNA transcripts and 51.37 ~ 72.74% of intergenic regions had putative ncRNA transcripts under different culture conditions and growth phases. As has been demonstrated in many other prokaryotes, E. coli K12 also has highly complex and dynamic transcriptomes under different culture conditions and growth phases. Such complex and dynamic transcriptomes might play important roles in the physiology of the bacterium. TruHMM is a highly accurate and robust algorithm for assembling full-length transcripts in prokaryotes from directional RNA-seq short reads.
Effect of shoulder model complexity in upper-body kinematics analysis of the golf swing.
Bourgain, M; Hybois, S; Thoreux, P; Rouillon, O; Rouch, P; Sauret, C
2018-06-25
The golf swing is a complex full body movement during which the spine and shoulders are highly involved. In order to determine shoulder kinematics during this movement, multibody kinematics optimization (MKO) can be recommended to limit the effect of the soft tissue artifact and to avoid joint dislocations or bone penetration in reconstructed kinematics. Classically, in golf biomechanics research, the shoulder is represented by a 3 degrees-of-freedom model representing the glenohumeral joint. More complex and physiological models are already provided in the scientific literature. Particularly, the model used in this study was a full body model and also described motions of clavicles and scapulae. This study aimed at quantifying the effect of utilizing a more complex and physiological shoulder model when studying the golf swing. Results obtained on 20 golfers showed that a more complex and physiologically-accurate model can more efficiently track experimental markers, which resulted in differences in joint kinematics. Hence, the model with 3 degrees-of-freedom between the humerus and the thorax may be inadequate when combined with MKO and a more physiological model would be beneficial. Finally, results would also be improved through a subject-specific approach for the determination of the segment lengths. Copyright © 2018 Elsevier Ltd. All rights reserved.
An Efficient Model-based Diagnosis Engine for Hybrid Systems Using Structural Model Decomposition
NASA Technical Reports Server (NTRS)
Bregon, Anibal; Narasimhan, Sriram; Roychoudhury, Indranil; Daigle, Matthew; Pulido, Belarmino
2013-01-01
Complex hybrid systems are present in a large range of engineering applications, like mechanical systems, electrical circuits, or embedded computation systems. The behavior of these systems is made up of continuous and discrete event dynamics that increase the difficulties for accurate and timely online fault diagnosis. The Hybrid Diagnosis Engine (HyDE) offers flexibility to the diagnosis application designer to choose the modeling paradigm and the reasoning algorithms. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. However, HyDE faces some problems regarding performance in terms of complexity and time. Our focus in this paper is on developing efficient model-based methodologies for online fault diagnosis in complex hybrid systems. To do this, we propose a diagnosis framework where structural model decomposition is integrated within the HyDE diagnosis framework to reduce the computational complexity associated with the fault diagnosis of hybrid systems. As a case study, we apply our approach to a diagnostic testbed, the Advanced Diagnostics and Prognostics Testbed (ADAPT), using real data.
Rill erosion in natural and disturbed forests: 2. Modeling approaches
J. W. Wagenbrenner; P. R. Robichaud; W. J. Elliot
2010-01-01
As forest management scenarios become more complex, the ability to more accurately predict erosion from those scenarios becomes more important. In this second part of a two-part study we report model parameters based on 66 simulated runoff experiments in two disturbed forests in the northwestern U.S. The 5 disturbance classes were natural, 10-month old and 2-week old...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Gregory; Mistrick, Ph.D., Richard; Lee, Eleanor
2011-01-21
We describe two methods which rely on bidirectional scattering distribution functions (BSDFs) to model the daylighting performance of complex fenestration systems (CFS), enabling greater flexibility and accuracy in evaluating arbitrary assemblies of glazing, shading, and other optically-complex coplanar window systems. Two tools within Radiance enable a) efficient annual performance evaluations of CFS, and b) accurate renderings of CFS despite the loss of spatial resolution associated with low-resolution BSDF datasets for inhomogeneous systems. Validation, accuracy, and limitations of the methods are discussed.
Modeling the binding of fulvic acid by goethite: the speciation of adsorbed FA molecules
NASA Astrophysics Data System (ADS)
Filius, Jeroen D.; Meeussen, Johannes C. L.; Lumsdon, David G.; Hiemstra, Tjisse; van Riemsdijk, Willem H.
2003-04-01
Under natural conditions, the adsorption of ions at the solid-water interface may be strongly influenced by the adsorption of organic matter. In this paper, we describe the adsorption of fulvic acid (FA) by metal(hydr)oxide surfaces with a heterogeneous surface complexation model, the ligand and charge distribution (LCD) model. The model is a self-consistent combination of the nonideal competitive adsorption (NICA) equation and the CD-MUSIC model. The LCD model can describe simultaneously the concentration, pH, and salt dependency of the adsorption with a minimum of only three adjustable parameters. Furthermore, the model predicts the coadsorption of protons accurately for an extended range of conditions. Surface speciation calculations show that almost all hydroxyl groups of the adsorbed FA molecules are involved in outer sphere complexation reactions. The carboxylic groups of the adsorbed FA molecule form inner and outer sphere complexes. Furthermore, part of the carboxylate groups remain noncoordinated and deprotonated.
Model improvements to simulate charging in SEM
NASA Astrophysics Data System (ADS)
Arat, K. T.; Klimpel, T.; Hagen, C. W.
2018-03-01
Charging of insulators is a complex phenomenon to simulate since the accuracy of the simulations is very sensitive to the interaction of electrons with matter and electric fields. In this study, we report model improvements for a previously developed Monte-Carlo simulator to more accurately simulate samples that charge. The improvements include both modelling of low energy electron scattering and charging of insulators. The new first-principle scattering models provide a more realistic charge distribution cloud in the material, and a better match between non-charging simulations and experimental results. Improvements on charging models mainly focus on redistribution of the charge carriers in the material with an induced conductivity (EBIC) and a breakdown model, leading to a smoother distribution of the charges. Combined with a more accurate tracing of low energy electrons in the electric field, we managed to reproduce the dynamically changing charging contrast due to an induced positive surface potential.
New Equation of State Models for Hydrodynamic Applications
NASA Astrophysics Data System (ADS)
Young, David A.; Barbee, Troy W., III; Rogers, Forrest J.
1997-07-01
Accurate models of the equation of state of matter at high pressures and temperatures are increasingly required for hydrodynamic simulations. We have developed two new approaches to accurate EOS modeling: 1) ab initio phonons from electron band structure theory for condensed matter and 2) the ACTEX dense plasma model for ultrahigh pressure shocks. We have studied the diamond and high pressure phases of carbon with the ab initio model and find good agreement between theory and experiment for shock Hugoniots, isotherms, and isobars. The theory also predicts a comprehensive phase diagram for carbon. For ultrahigh pressure shock states, we have studied the comparison of ACTEX theory with experiments for deuterium, beryllium, polystyrene, water, aluminum, and silicon dioxide. The agreement is good, showing that complex multispecies plasmas are treated adequately by the theory. These models will be useful in improving the numerical EOS tables used by hydrodynamic codes.
Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.
Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E
2016-01-01
Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.
EXTRAPOLATION MODELING: ADVANCEMENTS AND RESEARCH ISSUES IN LUNG DOSIMETRY
Many of the environmental pollutants to which humans are exposed are increasing rapidly, in terms of number, complexity, and concentration. One of the great challenges in environmental medicine is to define more accurately the adverse health effects likely to be encountered by exp...
Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.
2014-01-01
Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the watertable to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
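The core use of a TTD is to convolve it with the source-area input history to predict concentrations at the well; a minimal sketch assuming an exponential (well-mixed) TTD and a synthetic NO3- input ramp, both of which are illustrative choices only:

```python
import numpy as np

# Predict well concentration by convolving an input history with a TTD:
# C_out(t) = sum_tau f(tau) * C_in(t - tau) * dtau.
dtau = 0.5                                  # years
tau = np.arange(0, 100, dtau)
mean_age = 20.0
f = np.exp(-tau / mean_age) / mean_age      # exponential (well-mixed) TTD

years = np.arange(1950, 2021)
c_in = np.clip(0.1 * (years - 1950), 0, 5)  # synthetic NO3 input ramp, mg/L

c_out = []
for t in years:
    src_years = t - tau                     # recharge year of each water parcel
    c_src = np.interp(src_years, years, c_in, left=0.0)
    c_out.append(np.sum(f * c_src) * dtau)
print(f"simulated 2020 concentration: {c_out[-1]:.2f} mg/L")
```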
NASA Astrophysics Data System (ADS)
Monnier, J.; Couderc, F.; Dartus, D.; Larnier, K.; Madec, R.; Vila, J.-P.
2016-11-01
The 2D shallow water equations adequately model some geophysical flows with wet-dry fronts (e.g. flood plain or tidal flows); nevertheless, deriving accurate, robust and conservative numerical schemes for dynamic wet-dry fronts over complex topographies remains a challenge. Furthermore, for these flows, data are generally complex, multi-scale and uncertain. Robust variational inverse algorithms, providing sensitivity maps and data assimilation processes, may contribute to breakthroughs in modelling wet-dry front dynamics in shallow flows. The present study aims at deriving an accurate, positive and stable finite volume scheme in the presence of dynamic wet-dry fronts, together with corresponding inverse computational algorithms (variational approach). The schemes and algorithms are assessed on classical and original benchmarks plus a real flood plain test case (Lèze river, France). Original sensitivity maps with respect to the (friction, topography) pair are produced and discussed. The identification of inflow discharges (time series) and friction coefficients (spatially distributed parameters) demonstrates the algorithms' efficiency.
Efficient and accurate approach to modeling the microstructure and defect properties of LaCoO3
NASA Astrophysics Data System (ADS)
Buckeridge, J.; Taylor, F. H.; Catlow, C. R. A.
2016-04-01
Complex perovskite oxides are promising materials for cathode layers in solid oxide fuel cells. Such materials have intricate electronic, magnetic, and crystalline structures that prove challenging to model accurately. We analyze a wide range of standard density functional theory approaches to modeling a highly promising system, the perovskite LaCoO3, focusing on optimizing the Hubbard U parameter to treat the self-interaction of the B-site cation's d states, in order to determine the most appropriate method to study defect formation and the effect of spin on local structure. By calculating structural and electronic properties for different magnetic states we determine that U =4 eV for Co in LaCoO3 agrees best with available experiments. We demonstrate that the generalized gradient approximation (PBEsol +U ) is most appropriate for studying structure versus spin state, while the local density approximation (LDA +U ) is most appropriate for determining accurate energetics for defect properties.
Semiclassical description of resonance-assisted tunneling in one-dimensional integrable models
NASA Astrophysics Data System (ADS)
Le Deunff, Jérémy; Mouchet, Amaury; Schlagheck, Peter
2013-10-01
Resonance-assisted tunneling is investigated within the framework of one-dimensional integrable systems. We present a systematic recipe, based on Hamiltonian normal forms, to construct one-dimensional integrable models that exhibit resonance island chain structures with accurately controlled sizes and positions of the islands. Using complex classical trajectories that evolve along suitably defined paths in the complex time domain, we construct a semiclassical theory of the resonance-assisted tunneling process. This semiclassical approach yields a compact analytical expression for tunnelling-induced level splittings which is found to be in very good agreement with the exact splittings obtained through numerical diagonalization.
NASA Technical Reports Server (NTRS)
Pandya, Shishir; Chaderjian, Neal; Ahmad, Jasim; Kwak, Dochan (Technical Monitor)
2001-01-01
Flow simulations using the time-dependent Navier-Stokes equations remain a challenge for several reasons. Principal among them are the difficulty of accurately modeling complex flows and the time needed to perform the computations. A parametric study of such complex problems is not considered practical due to the large cost associated with computing many time-dependent solutions. The computation time for each solution must be reduced in order to make a parametric study possible. With a successful reduction of computation time, the issues of accuracy and the appropriateness of turbulence models will become more tractable.
NASA Astrophysics Data System (ADS)
Chicea, Anca-Lucia
2015-09-01
The paper presents the process of building geometric and kinematic models of technological equipment used in the manufacturing of devices. First, the process of building the model of a six-axis industrial robot is presented. In the second part of the paper, the process of building the model of a five-axis CNC milling machining center is also shown. Both models can be used for accurate simulation of cutting processes for complex parts, such as prosthetic devices.
A spectral dynamic stiffness method for free vibration analysis of plane elastodynamic problems
NASA Astrophysics Data System (ADS)
Liu, X.; Banerjee, J. R.
2017-03-01
A highly efficient and accurate analytical spectral dynamic stiffness (SDS) method for modal analysis of plane elastodynamic problems based on both plane stress and plane strain assumptions is presented in this paper. First, the general solution satisfying the governing differential equation exactly is derived by applying two types of one-dimensional modified Fourier series. Then the SDS matrix for an element is formulated symbolically using the general solution. The SDS matrices are assembled directly in a similar way to that of the finite element method, demonstrating the method's capability to model complex structures. Any arbitrary boundary conditions are represented accurately in the form of the modified Fourier series. The Wittrick-Williams algorithm is then used as the solution technique where the mode count problem (J0) of a fully-clamped element is resolved. The proposed method gives highly accurate solutions with remarkable computational efficiency, covering low, medium and high frequency ranges. The method is applied to both plane stress and plane strain problems with simple as well as complex geometries. All results from the theory in this paper are accurate up to the last figures quoted to serve as benchmarks.
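For a discrete stand-in, the Wittrick-Williams count reduces to counting negative eigenvalues of the dynamic stiffness matrix K - ω²M (with J0 = 0 for the toy chain below); a bisection sketch on a clamped spring-mass chain, used here as an illustrative substitute for the SDS matrix itself:

```python
import numpy as np

# Wittrick-Williams-style counting on a discrete system: the number of
# natural frequencies below w equals the number of negative eigenvalues of
# K - w^2 M (J0 = 0 for this clamped unit spring-mass chain).
n = 10
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # clamped chain stiffness
M = np.eye(n)                                           # unit masses

def count_below(w):
    return int(np.sum(np.linalg.eigvalsh(K - w**2 * M) < 0))

# Bisect for the 3rd natural frequency: smallest w with count >= 3.
lo, hi, target = 0.0, 10.0, 3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if count_below(mid) >= target else (mid, hi)
print(f"omega_3 ~ {hi:.6f}")

exact = 2 * np.sin(3 * np.pi / (2 * (n + 1)))           # analytic chain result
print(f"exact    {exact:.6f}")
```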
Testing MODFLOW-LGR for simulating flow around buried Quaternary valleys - synthetic test cases
NASA Astrophysics Data System (ADS)
Vilhelmsen, T. N.; Christensen, S.
2009-12-01
In this study the Local Grid Refinement (LGR) method developed for MODFLOW-2005 (Mehl and Hill, 2005) is utilized to describe groundwater flow in areas containing buried Quaternary valley structures. The tests are conducted as comparative analyses between simulations run with a globally refined model, a locally refined model, and a globally coarse model, respectively. The models vary from simple one-layer models to more complex ones with up to 25 model layers. The comparisons of accuracy are conducted within the locally refined area and focus on water budgets, simulated heads, and simulated particle traces. Simulations made with the globally refined model are used as reference (regarded as "true" values). As expected, for all test cases the application of local grid refinement resulted in more accurate results than when using the globally coarse model. A significant advantage of utilizing MODFLOW-LGR was that it allows increased numbers of model layers to better resolve complex geology within local areas. This resulted in more accurate simulations than when using either a globally coarse model grid or a locally refined model with lower geological resolution. Improved accuracy in the latter case could not be expected beforehand because the difference in geological resolution between the coarse parent model and the refined child model contradicts the assumptions of the Darcy-weighted interpolation used in MODFLOW-LGR. With respect to model runtimes, it was sometimes found that the runtime for the locally refined model is much longer than for the globally refined model. This was the case even when the closure criteria were relaxed compared to the globally refined model. These results contradict those presented by Mehl and Hill (2005). Furthermore, in the complex cases it took some testing (model runs) to identify the closure criteria and the damping factor that secured convergence, accurate solutions, and reasonable runtimes. For our cases this is judged to be a serious disadvantage of applying MODFLOW-LGR. Another disadvantage in the studied cases was that the MODFLOW-LGR results proved to be somewhat dependent on the correction method used at the parent-child model interface. This indicates that when applying MODFLOW-LGR there is a need for thorough and case-specific consideration regarding the choice of correction method. Reference: Mehl, S. and Hill, M. C. (2005). MODFLOW-2005, the U.S. Geological Survey modular ground-water model - documentation of shared node Local Grid Refinement (LGR) and the Boundary Flow and Head (BFH) Package. U.S. Geological Survey Techniques and Methods 6-A12.
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2017-07-01
Complex models of biochemical reaction systems have become increasingly common in the systems biology literature. The complexity of such models can present a number of obstacles for their practical use, often making problems difficult to intuit or computationally intractable. Methods of model reduction can be employed to alleviate the issue of complexity by seeking to eliminate those portions of a reaction network that have little or no effect upon the outcomes of interest, hence yielding simplified systems that retain an accurate predictive capacity. This review paper seeks to provide a brief overview of a range of such methods and their application in the context of biochemical reaction network models. To achieve this, we provide a brief mathematical account of the main methods including timescale exploitation approaches, reduction via sensitivity analysis, optimisation methods, lumping, and singular value decomposition-based approaches. Methods are reviewed in the context of large-scale systems biology type models, and future areas of research are briefly discussed.
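To make the timescale-exploitation class of methods reviewed above concrete, the hedged Python sketch below reduces the full mass-action enzyme system E + S <-> C -> E + P to a single Michaelis-Menten equation via the quasi-steady-state assumption and compares the two numerically. Rate constants and concentrations are illustrative, not drawn from the review.

```python
# Minimal sketch of timescale-based model reduction: the fast-equilibrating
# complex C is eliminated, leaving one Michaelis-Menten equation for S.
import numpy as np
from scipy.integrate import solve_ivp

k1, km1, k2 = 10.0, 1.0, 1.0      # binding, unbinding, catalysis (assumed)
E0, S0 = 1.0, 10.0                # total enzyme and initial substrate

def full(t, y):
    s, c = y                      # substrate and complex; free enzyme E0 - c
    e = E0 - c
    return [-k1*e*s + km1*c, k1*e*s - (km1 + k2)*c]

def reduced(t, y):
    Km = (km1 + k2) / k1          # Michaelis constant from the QSSA
    return [-k2 * E0 * y[0] / (Km + y[0])]

t_eval = np.linspace(0, 20, 200)
yf = solve_ivp(full, (0, 20), [S0, 0.0], t_eval=t_eval).y[0]
yr = solve_ivp(reduced, (0, 20), [S0], t_eval=t_eval).y[0]
print("max |S_full - S_reduced| =", np.abs(yf - yr).max())
```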
3D printing the pterygopalatine fossa: a negative space model of a complex structure.
Bannon, Ross; Parihar, Shivani; Skarparis, Yiannis; Varsou, Ourania; Cezayirli, Enis
2018-02-01
The pterygopalatine fossa is one of the most complex anatomical regions to understand. It is poorly visualized in cadaveric dissection and most textbooks rely on schematic depictions. We describe our approach to creating a low-cost, 3D model of the pterygopalatine fossa, including its associated canals and foramina, using an affordable "desktop" 3D printer. We used open source software to create a volume render of the pterygopalatine fossa from axial slices of a head computerised tomography scan. These data were then exported to a 3D printer to produce an anatomically accurate model. The resulting 'negative space' model of the pterygopalatine fossa provides a useful and innovative aid for understanding the complex anatomical relationships of the pterygopalatine fossa. This model was designed primarily for medical students; however, it will also be of interest to postgraduates in ENT, ophthalmology, neurosurgery, and radiology. The technical process described may be replicated by other departments wishing to develop their own anatomical models whilst incurring minimal costs.
Tao, Jianmin; Rappe, Andrew M.
2016-01-20
Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. As a result, inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
Brain white matter fiber estimation and tractography using Q-ball imaging and a Bayesian model.
Lu, Meng
2015-01-01
Diffusion tensor imaging (DTI) allows for the non-invasive in vivo mapping of brain tractography. However, fiber bundles have complex structures such as fiber crossings, fiber branchings and fibers with large curvatures that DTI cannot accurately handle. This study presents a novel brain white matter tractography method using Q-ball imaging (QBI) as the data source instead of DTI, because QBI can provide accurate information about multiple fiber crossings and branchings in a single voxel using an orientation distribution function (ODF). The presented method also uses graph theory to construct a Bayesian model-based graph, so that fiber tracking between two voxels can be represented as the shortest path in the graph. Our experiments showed that the new method can accurately handle brain white matter fiber crossings and branchings, and reconstruct brain tractography in both phantom data and real brain data.
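The shortest-path formulation can be sketched in a few lines: voxels become graph nodes and edge costs penalise steps that deviate from the local fiber orientation. The uniform "fiber field" below is a toy stand-in for ODF peaks, not the authors' Bayesian construction.

```python
# Toy 2D tractography-as-shortest-path: Dijkstra over a voxel grid where
# an edge is cheap when the step aligns with the local fiber direction.
import heapq
import numpy as np

n = 20
theta = np.full((n, n), 0.0)           # toy fiber field: horizontal fibers

def edge_cost(p, q):
    step = np.array(q, float) - np.array(p, float)
    step /= np.linalg.norm(step)
    fiber = np.array([np.cos(theta[p]), np.sin(theta[p])])
    align = abs(float(step @ fiber))   # 1 = parallel, 0 = perpendicular
    return 1.0 / (align + 1e-3)        # misaligned steps are expensive

def dijkstra(src, dst):
    dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, np.inf):
            continue
        x, y = u
        for v in [(x+1, y), (x-1, y), (x, y+1), (x, y-1)]:
            if 0 <= v[0] < n and 0 <= v[1] < n:
                nd = d + edge_cost(u, v)
                if nd < dist.get(v, np.inf):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    path, u = [dst], dst
    while u != src:                    # walk the predecessor chain back
        u = prev[u]
        path.append(u)
    return path[::-1]

print(dijkstra((0, 10), (19, 10))[:5])  # tracks along the fiber direction
```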
Human eyeball model reconstruction and quantitative analysis.
Xing, Qi; Wei, Qi
2014-01-01
Determining the shape of the eyeball is important for diagnosing eyeball diseases such as myopia. In this paper, we present an automatic approach to precisely reconstruct the three-dimensional geometric shape of the eyeball from MR images. The model development pipeline involved image segmentation, registration, B-Spline surface fitting and subdivision surface fitting, none of which required manual interaction. From the resultant high-resolution models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used by existing studies, we proposed two novel metrics, Gaussian Curvature Analysis and Sphere Distance Deviation, to quantify the cornea shape and the whole eyeball surface, respectively. The experimental results showed that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball across different subjects, and can potentially be used for eye disease diagnosis.
Grid Resolution Effects on LES of a Piloted Methane-Air Flame
2009-05-20
respectively. In the LES momentum equation, Eq. (3), the Smagorinsky model is used to obtain the deviatoric part of the unclosed SGS stress τij... accurately predicted from integration of their LES evolution equations; and (ii), the flamelet parametrization should adequately approximate the... effect of the complex small-scale turbulence/chemistry interactions is modeled in an affordable way by a combustion model. A question of how a particular
An ocean scatter propagation model for aeronautical satellite communication applications
NASA Technical Reports Server (NTRS)
Moreland, K. W.
1990-01-01
In this paper an ocean scattering propagation model, developed for aircraft-to-satellite (aeronautical) applications, is described. The purpose of the propagation model is to characterize the behavior of sea reflected multipath as a function of physical propagation path parameters. An accurate validation against the theoretical far field solution for a perfectly conducting sinusoidal surface is provided. Simulation results for typical L band aeronautical applications with low complexity antennas are presented.
Casey, F P; Baird, D; Feng, Q; Gutenkunst, R N; Waterfall, J J; Myers, C R; Brown, K S; Cerione, R A; Sethna, J P
2007-05-01
We apply the methods of optimal experimental design to a differential equation model for epidermal growth factor receptor signalling, trafficking and down-regulation. The model incorporates the role of a recently discovered protein complex made up of the E3 ubiquitin ligase, Cbl, the guanine exchange factor (GEF), Cool-1 (β-Pix) and the Rho family G protein Cdc42. The complex has been suggested to be important in disrupting receptor down-regulation. We demonstrate that the model interactions can accurately reproduce the experimental observations, that they can be used to make predictions with accompanying uncertainties, and that we can apply ideas of optimal experimental design to suggest new experiments that reduce the uncertainty on unmeasurable components of the system.
Wu, Jianlan; Tang, Zhoufei; Gong, Zhihao; Cao, Jianshu; Mukamel, Shaul
2015-04-02
The energy absorbed in a light-harvesting protein complex is often transferred collectively through aggregated chromophore clusters. For population evolution of chromophores, the time-integrated effective rate matrix allows us to construct quantum kinetic clusters quantitatively and determine the reduced cluster-cluster transfer rates systematically, thus defining a minimal model of energy-transfer kinetics. For Fenna-Matthews-Olson (FMO) and light-harvesting complex II (LHCII) monomers, quantum Markovian kinetics of clusters can accurately reproduce the overall energy-transfer process on long time scales. The dominant energy-transfer pathways are identified in the picture of aggregated clusters. The chromophores distributed extensively in various clusters can assist a fast and long-range energy transfer.
Castellazzi, Giovanni; D’Altri, Antonio Maria; Bitelli, Gabriele; Selvaggi, Ilenia; Lambertini, Alessandro
2015-01-01
In this paper, a new semi-automatic procedure to transform three-dimensional point clouds of complex objects to three-dimensional finite element models is presented and validated. The procedure conceives of the point cloud as a stacking of point sections. The complexity of the clouds is arbitrary, since the procedure is designed for terrestrial laser scanner surveys applied to buildings with irregular geometry, such as historical buildings. The procedure aims at solving the problems connected to the generation of finite element models of these complex structures by constructing a fine discretized geometry in a reduced amount of time, ready to be used for structural analysis. If the starting clouds represent the inner and outer surfaces of the structure, the resulting finite element model will accurately capture the whole three-dimensional structure, producing a complex solid made of voxel elements. A comparative analysis with a CAD-based model is carried out on a historical building damaged by a seismic event. The results indicate that the proposed procedure is effective and obtains comparable models in a shorter time, with an increased level of automation. PMID:26225978
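A minimal Python sketch of the voxel-based idea, assuming a synthetic random cloud in place of registered scanner data: points are binned into a regular grid and each occupied cell becomes a hexahedral (voxel) element.

```python
# Bin a point cloud into a regular grid; every occupied cell is one voxel
# element of the structural mesh. The cloud below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
cloud = rng.uniform(0, 10, size=(100_000, 3))      # toy point cloud (metres)
voxel = 0.25                                       # element edge length

idx = np.floor(cloud / voxel).astype(int)          # grid cell of each point
occupied = np.unique(idx, axis=0)                  # one voxel per occupied cell
print(f"{len(occupied)} voxel elements from {len(cloud)} points")

# Each occupied cell maps to an 8-node hexahedral element; its corner
# coordinates follow from the integer index:
offsets = np.array([[i, j, k] for i in (0, 1) for j in (0, 1) for k in (0, 1)])
corners = (occupied[0] + offsets) * voxel
```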
Complex optimization for big computational and experimental neutron datasets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Feng; Oak Ridge National Lab.; Archibald, Richard
Here, we present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. Finally, we use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first-principles calculations to better describe the experimental data.
NASA Astrophysics Data System (ADS)
McDowell, Sean A. C.
2017-04-01
An MP2 computational study of model hydrogen-bonded pyrrole⋯YZ (YZ = NH3, NCH, BF, CO, N2, OC, FB) complexes was undertaken in order to examine the variation of the N–H bond length change and its associated vibrational frequency shift. The chemical hardness of Y, as well as the YZ dipole moment, were found to be important parameters in modifying the bond length change/frequency shift. The basis set effect on the computed properties was also assessed. A perturbative model, which accurately reproduced the ab initio N–H bond length changes and frequency shifts, was useful in rationalizing the observed trends.
NASA Astrophysics Data System (ADS)
Xu, Kaixuan; Wang, Jun
2017-02-01
In this paper, the recently introduced permutation entropy and sample entropy are extended to the fractional cases: weighted fractional permutation entropy (WFPE) and fractional sample entropy (FSE). The fractional-order generalization of information entropy is utilized in these two complexity approaches to detect the statistical characteristics of fractional-order information in complex systems. Effectiveness analysis of the proposed methods on synthetic and real-world data reveals that tuning the fractional order allows a high sensitivity and a more accurate characterization of the signal evolution, which is useful in describing the dynamics of complex systems. Moreover, nonlinear complexity behaviors are compared between the return series of the Potts financial model and actual stock markets, and the empirical results confirm the feasibility of the proposed model.
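For readers unfamiliar with the underlying quantity, the sketch below implements the standard weighted permutation entropy on which WFPE builds; the fractional variant replaces the Shannon functional with its fractional-order analogue, and that substitution (not shown) is the part specific to this paper.

```python
# Weighted permutation entropy: ordinal patterns of embedded vectors are
# counted with weights equal to each vector's variance, then normalized.
import math
from itertools import permutations
import numpy as np

def weighted_permutation_entropy(x, m=3, tau=1):
    patterns = {p: 0.0 for p in permutations(range(m))}
    for i in range(len(x) - (m - 1) * tau):
        v = x[i:i + m * tau:tau]
        patterns[tuple(np.argsort(v))] += np.var(v)   # variance weight
    p = np.array([c for c in patterns.values() if c > 0])
    p = p / p.sum()
    return float(-np.sum(p * np.log(p)) / math.log(math.factorial(m)))

rng = np.random.default_rng(1)
print(weighted_permutation_entropy(rng.normal(size=5000)))      # near 1 (noise)
print(weighted_permutation_entropy(np.sin(np.arange(5000) / 5)))  # much lower
```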
Conditional dissipation of scalars in homogeneous turbulence: Closure for MMC modelling
NASA Astrophysics Data System (ADS)
Wandel, Andrew P.
2013-08-01
While the mean and unconditional variance should be predicted well by any reasonable turbulent combustion model, they are generally not sufficient for the accurate modelling of complex phenomena such as extinction/reignition. An additional criterion has recently been introduced: accurate modelling of the dissipation timescales associated with fluctuations of scalars about their conditional mean (conditional dissipation timescales). Analysis of Direct Numerical Simulation (DNS) results for a passive scalar shows that the conditional dissipation timescale is of the order of the integral timescale and smaller than the unconditional dissipation timescale. A model is proposed: the conditional dissipation timescale is proportional to the integral timescale. This model is used in Multiple Mapping Conditioning (MMC) modelling for a passive scalar case and a reactive scalar case, with comparison to DNS results for both. The results show that this model improves the accuracy of MMC predictions so as to match the DNS results more closely, using a relatively coarse spatial resolution compared to other turbulent combustion models.
Fitting Neuron Models to Spike Trains
Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925
Vaquerizo, Beatriz; Theriault-Lauzier, Pascal; Piazza, Nicolo
2015-12-01
Mitral regurgitation is the most prevalent valvular heart disease worldwide. Despite the widespread availability of curative surgical intervention, a considerable proportion of patients with severe mitral regurgitation are not referred for treatment, largely due to the presence of left ventricular dysfunction, advanced age, and comorbid illnesses. Transcatheter mitral valve replacement is a promising therapeutic alternative to traditional surgical valve replacement. The complex anatomical and pathophysiological nature of the mitral valvular complex, however, presents significant challenges to the successful design and implementation of novel transcatheter mitral replacement devices. Patient-specific 3-dimensional computer-based models enable accurate assessment of the mitral valve anatomy and preprocedural simulations for transcatheter therapies. Such information may help refine the design features of novel transcatheter mitral devices and enhance procedural planning. Herein, we describe a novel medical image-based processing tool that facilitates accurate, noninvasive assessment of the mitral valvular complex, by creating precise three-dimensional heart models. The 3-dimensional computer reconstructions are then converted to a physical model using 3-dimensional printing technology, thereby enabling patient-specific assessment of the interaction between device and patient. It may provide new opportunities for a better understanding of the mitral anatomy-pathophysiology-device interaction, which is of critical importance for the advancement of transcatheter mitral valve replacement. Copyright © 2015 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
Characterization of structural connections using free and forced response test data
NASA Technical Reports Server (NTRS)
Lawrence, Charles; Huckelbridge, Arthur A.
1989-01-01
The accurate prediction of system dynamic response often has been limited by deficiencies in existing capabilities to characterize connections adequately. Connections between structural components are often mechanically complex and difficult to model accurately by analytical means. Improved analytical models for connections are needed to improve system dynamic predictions. A procedure for identifying physical connection properties from free and forced response test data is developed, then verified utilizing a system having both a linear and nonlinear connection. Connection properties are computed in terms of physical parameters so that the physical characteristics of the connections can better be understood, in addition to providing improved input for the system model. The identification procedure is applicable to multi-degree of freedom systems, and does not require that the test data be measured directly at the connection locations.
Parameters estimation for reactive transport: A way to test the validity of a reactive model
NASA Astrophysics Data System (ADS)
Aggarwal, Mohit; Cheikh Anta Ndiaye, Mame; Carrayrou, Jérôme
The chemical parameters used in reactive transport models are not known accurately due to the complexity and heterogeneous conditions of a real domain. We present an efficient algorithm to estimate the chemical parameters using a Monte-Carlo method. Monte-Carlo methods are very robust for the optimisation of the highly non-linear mathematical models describing reactive transport. Reactive transport of tributyltin (TBT) through natural quartz sand at seven different pHs is taken as the test case. Our algorithm is used to estimate the chemical parameters of the sorption of TBT onto the natural quartz sand. By testing and comparing three models of surface complexation, we show that the proposed adsorption model cannot explain the experimental data.
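The estimation loop the authors describe can be sketched as a plain Monte-Carlo search; the Langmuir isotherm, synthetic data and parameter bounds below are illustrative stand-ins for the paper's TBT surface-complexation models.

```python
# Plain Monte-Carlo parameter search: random candidates are scored against
# observations and the best fit is kept. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)
C = np.linspace(0.1, 5.0, 12)                    # aqueous concentration
q_obs = 2.0 * C / (0.5 + C) + rng.normal(0, 0.02, C.size)  # synthetic data

def model(C, qmax, K):                           # Langmuir sorption isotherm
    return qmax * C / (K + C)

best, best_err = None, np.inf
for _ in range(100_000):
    qmax = rng.uniform(0.1, 10.0)                # assumed prior bounds
    K = rng.uniform(0.01, 5.0)
    err = np.sum((model(C, qmax, K) - q_obs) ** 2)
    if err < best_err:
        best, best_err = (qmax, K), err

print("estimated (qmax, K):", best, "SSE:", best_err)
```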
NASA Astrophysics Data System (ADS)
Soltanzadeh, Iman; Bonnardot, Valérie; Sturman, Andrew; Quénol, Hervé; Zawar-Reza, Peyman
2017-08-01
Global warming has implications for thermal stress for grapevines during ripening, so that wine producers need to adapt their viticultural practices to ensure optimum physiological response to environmental conditions in order to maintain wine quality. The aim of this paper is to assess the ability of the Weather Research and Forecasting (WRF) model to accurately represent atmospheric processes at high resolution (500 m) during two events in the grapevine ripening period in the Stellenbosch Wine of Origin district of South Africa. Two case studies were selected to identify areas of potentially high daytime heat stress when grapevine photosynthesis and grape composition were expected to be affected. The results of high-resolution atmospheric model simulations were compared to observations obtained from an automatic weather station (AWS) network in the vineyard region. Statistical analysis was performed to assess the ability of the WRF model to reproduce spatial and temporal variations of meteorological parameters at 500-m resolution. The model represented the spatial and temporal variation of meteorological variables very well, with an average model air temperature bias of 0.1 °C, while that for relative humidity was -5.0 % and that for wind speed 0.6 m s-1. Model performance varied between AWSs and with time of day, as WRF was not always able to accurately represent the effects of nocturnal cooling within the complex terrain. Variations in performance between the two case studies resulted from effects of atmospheric boundary layer processes in complex terrain under the influence of the different synoptic conditions prevailing during the two periods.
Bishop, Chris; Paul, Gunther; Thewlis, Dominic
2013-04-01
Kinematic models are commonly used to quantify foot and ankle kinematics, yet no marker sets or models have been proven reliable or accurate when shoes are worn. Further, the minimal detectable difference of a developed model is often not reported. We present a kinematic model that is reliable, accurate and sensitive enough to describe the kinematics of the foot-shoe complex and lower leg during walking gait. In order to achieve this, a new marker set was established, consisting of 25 markers applied on the shoe and skin surface, which informed a four-segment kinematic model of the foot-shoe complex and lower leg. Three independent experiments were conducted to determine the reliability, accuracy and minimal detectable difference of the marker set and model. Inter-rater reliability of marker placement on the shoe was proven to be good to excellent (ICC=0.75-0.98), indicating that markers could be applied reliably between raters. Intra-rater reliability was better for the experienced rater (ICC=0.68-0.99) than the inexperienced rater (ICC=0.38-0.97). The accuracy of marker placement along each axis was <6.7 mm for all markers studied. Minimal detectable difference (MDD90) thresholds were defined for each joint: tibiocalcaneal joint, MDD90=2.17-9.36°; tarsometatarsal joint, MDD90=1.03-9.29°; and the metatarsophalangeal joint, MDD90=1.75-9.12°. The proposed thresholds are specific to the description of shod motion and can be used in future research aimed at comparing different footwear. Copyright © 2012 Elsevier B.V. All rights reserved.
Aliabadi, Mohsen; Golmohammadi, Rostam; Khotanlou, Hassan; Mansoorizadeh, Muharram; Salarpour, Amir
2014-01-01
Noise prediction is considered to be the best method for evaluating cost-preventative noise controls in industrial workrooms. One of the most important issues is the development of accurate models for analysis of the complex relationships among acoustic features affecting noise level in workrooms. In this study, advanced fuzzy approaches were employed to develop relatively accurate models for predicting noise in noisy industrial workrooms. The data were collected from 60 industrial embroidery workrooms in the Khorasan Province, East of Iran. The main acoustic and embroidery process features that influence the noise were used to develop prediction models using MATLAB software. A multiple regression technique was also employed and its results were compared with those of the fuzzy approaches. Prediction errors of all models based on fuzzy approaches were within the acceptable level (lower than one dB). However, the neuro-fuzzy model (RMSE=0.53 dB and R2=0.88) could slightly improve the accuracy of noise prediction compared with the generated fuzzy model. Moreover, fuzzy approaches provided more accurate predictions than did the regression technique. The developed fuzzy models are useful prediction tools that give professionals the opportunity to make an optimal decision about the effectiveness of acoustic treatment scenarios in embroidery workrooms.
Modeling of protein binary complexes using structural mass spectrometry data
Kamal, J.K. Amisha; Chance, Mark R.
2008-01-01
In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints—positive and/or negative—in the docking step and are also used to decide the type of energy filter—electrostatics or desolvation—in the successive energy-filtering step. By using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as top scoring. We demonstrate this method with the actin/gelsolin segment-1 complex. Using the same approach, we also provide a structural model for the actin/cofilin complex, for which no crystal or NMR structure is available. PMID:18042684
Characterizing fuels in the 21st century.
David Sandberg; Roger D. Ottmar; Geoffrey H. Cushon
2001-01-01
The ongoing development of sophisticated fire behavior and effects models has demonstrated the need for a comprehensive system of fuel classification that more accurately captures the structural complexity and geographic diversity of fuelbeds. The Fire and Environmental Research Applications Team (FERA) of the USDA Forest Service, Pacific Northwest Research Station, is...
NASA Astrophysics Data System (ADS)
Walker, Ernest; Chen, Xinjia; Cooper, Reginald L.
2010-04-01
An arbitrarily accurate approach is used to determine the bit-error rate (BER) performance for generalized asynchronous DS-CDMA systems, in Gaussian noise with Rayleigh fading. In this paper, and the sequel, new theoretical work has been contributed which substantially enhances existing performance analysis formulations. Major contributions include: substantial computational complexity reduction, including a priori BER accuracy bounding; an analytical approach that facilitates performance evaluation for systems with arbitrary spectral spreading distributions, with non-uniform transmission delay distributions. Using prior results, augmented by these enhancements, a generalized DS-CDMA system model is constructed and used to evaluate the BER performance in a variety of scenarios. In this paper, the generalized system modeling was used to evaluate the performance of both Walsh-Hadamard (WH) and Walsh-Hadamard-seeded zero-correlation-zone (WH-ZCZ) coding. The selection of these codes was informed by the observation that WH codes contain N spectral spreading values (0 to N - 1), one for each code sequence, while WH-ZCZ codes contain only two spectral spreading values (N/2 - 1, N/2), where N is the sequence length in chips. Since these codes span the spectral spreading range for DS-CDMA coding, by invoking an induction argument, the generalization of the system model is sufficiently supported. The results in this paper, and the sequel, support the claim that an arbitrarily accurate performance analysis for DS-CDMA systems can be evaluated over the full range of binary coding, with minimal computational complexity.
NASA Technical Reports Server (NTRS)
Wallace, Dolores R.
2003-01-01
In FY01 we learned that hardware reliability models need substantial changes to account for differences in software, thus making software reliability measurements more effective, accurate, and easier to apply. These reliability models are generally based on familiar distributions or parametric methods. An obvious question is "What new statistical and probability models can be developed using non-parametric and distribution-free methods instead of the traditional parametric method?" Two approaches to software reliability engineering appear somewhat promising. The first study, begun in FY01, is based in hardware reliability, a very well established science that has many aspects that can be applied to software. This research effort has investigated mathematical aspects of hardware reliability and has identified those applicable to software. Currently the research effort is applying and testing these approaches to software reliability measurement. These parametric models require much project data that may be difficult to apply and interpret. Projects at GSFC are often complex in both technology and schedules. Assessing and estimating reliability of the final system is extremely difficult when various subsystems are tested and completed long before others. Parametric and distribution-free techniques may offer a new and accurate way of modeling failure time and other project data to provide earlier and more accurate estimates of system reliability.
How rare is complex life in the Milky Way?
Bounama, Christine; von Bloh, Werner; Franck, Siegfried
2007-10-01
An integrated Earth system model was applied to calculate the number of habitable Earth-analog planets that are likely to have developed primitive (unicellular) and complex (multicellular) life in extrasolar planetary systems. The model is based on the global carbon cycle mediated by life and driven by increasing stellar luminosity and plate tectonics. We assumed that the hypothetical primitive and complex life forms differed in their temperature limits and CO2 tolerances. Though complex life would be more vulnerable to environmental stress, its presence would amplify weathering processes on a terrestrial planet. The model allowed us to calculate the average number of Earth-analog planets that may harbor such life by using the formation rate of Earth-like planets in the Milky Way as well as the size of a habitable zone that could support primitive and complex life forms. The number of planets predicted to bear complex life was found to be approximately 2 orders of magnitude lower than the number predicted for primitive life forms. Our model predicted a maximum abundance of such planets around 1.8 Ga ago and allowed us to calculate the average distance between potentially habitable planets in the Milky Way. If the model predictions are accurate, the future missions DARWIN (up to a probability of 65%) and TPF (up to 20%) are likely to detect at least one planet with a biosphere composed of complex life.
Modeling of Complex Coupled Fluid-Structure Interaction Systems in Arbitrary Water Depth
2009-01-01
basin. For the particle finite-element method (PFEM) near-field fluid model we completed: (4) the development of a fully-coupled fluid/flexible... method (PFEM) based framework for the ALE-RANS solver [1]. We presented the theory of ALE-RANS with a k- turbulence closure model and several numerical... implemented by PFEM (Task (4)). In this work a universal wall function (UWF) is introduced and implemented to more accurately predict the boundary
Mathematics as a Conduit for Translational Research in Post-Traumatic Osteoarthritis
Ayati, Bruce P.; Kapitanov, Georgi I.; Coleman, Mitchell C.; Anderson, Donald D.; Martin, James A.
2016-01-01
Biomathematical models offer a powerful method of clarifying complex temporal interactions and the relationships among multiple variables in a system. We present a coupled in silico biomathematical model of articular cartilage degeneration in response to impact and/or aberrant loading such as would be associated with injury to an articular joint. The model incorporates fundamental biological and mechanical information obtained from explant and small animal studies to predict post-traumatic osteoarthritis (PTOA) progression, with an eye toward eventual application in human patients. In this sense, we refer to the mathematics as a “conduit of translation”. The new in silico framework presented in this paper involves a biomathematical model for the cellular and biochemical response to strains computed using finite element analysis. The model predicts qualitative responses presently, utilizing system parameter values largely taken from the literature. To contribute to accurate predictions, models need to be accurately parameterized with values that are based on solid science. We discuss a parameter identification protocol that will enable us to make increasingly accurate predictions of PTOA progression using additional data from smaller scale explant and small animal assays as they become available. By distilling the data from the explant and animal assays into parameters for biomathematical models, mathematics can translate experimental data to clinically relevant knowledge. PMID:27653021
A novel phenomenological multi-physics model of Li-ion battery cells
NASA Astrophysics Data System (ADS)
Oh, Ki-Yong; Samad, Nassim A.; Kim, Youngki; Siegel, Jason B.; Stefanopoulou, Anna G.; Epureanu, Bogdan I.
2016-09-01
A novel phenomenological multi-physics model of Lithium-ion battery cells is developed for control and state estimation purposes. The model can capture electrical, thermal, and mechanical behaviors of battery cells under constrained conditions, e.g., battery pack conditions. Specifically, the proposed model predicts the core and surface temperatures and the reaction force induced by the volume change of battery cells because of electrochemically- and thermally-induced swelling. Moreover, the model incorporates the influences of changes in preload and ambient temperature on the force, considering the severe environmental conditions that electrified vehicles face. Intensive experimental validation demonstrates that the proposed multi-physics model accurately predicts the surface temperature and reaction force for a wide operational range of preload and ambient temperature. This high-fidelity model can be useful for more accurate and robust state-of-charge estimation considering the complex dynamic behaviors of the battery cell. Furthermore, the inherent simplicity of the mechanical measurements offers distinct advantages for improving existing power and thermal management strategies for battery management.
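A hedged sketch of the thermal core of such a phenomenological model: a two-state lumped network with ohmic heating, plus a force term taken proportional to thermal expansion. All parameter values below are assumptions for illustration, not the authors' identified ones.

```python
# Two-state lumped thermal model: core and surface temperatures driven by
# ohmic heating, core-surface conduction and surface-ambient convection.
Cc, Cs = 60.0, 4.0        # core / surface heat capacities (J/K), assumed
Rc, Ru = 2.0, 3.0         # core-surface / surface-ambient resistances (K/W)
R0, k_f = 0.01, 50.0      # cell resistance (ohm), force per kelvin (N/K)
T_amb, dt = 25.0, 1.0

Tc = Ts = T_amb
for _ in range(3600):                       # one hour at 10 A constant current
    I = 10.0
    Q = I**2 * R0                           # ohmic heat generation (W)
    dTc = (Q - (Tc - Ts) / Rc) / Cc
    dTs = ((Tc - Ts) / Rc - (Ts - T_amb) / Ru) / Cs
    Tc, Ts = Tc + dt * dTc, Ts + dt * dTs   # forward-Euler update

force = k_f * (Tc - T_amb)                  # crude swelling-induced preload rise
print(f"core {Tc:.1f} C, surface {Ts:.1f} C, extra force {force:.0f} N")
```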
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hobbs, Michael L.
We previously developed a PETN thermal decomposition model that accurately predicts thermal ignition and detonator failure [1]. This model was originally developed for CALORE [2] and required several complex user subroutines. Recently, a simplified version of the PETN decomposition model was implemented into ARIA [3] using a general chemistry framework without the need for user subroutines. Detonator failure was also predicted with this new model using ENCORE. The model was simplified by 1) basing the model on moles rather than mass, 2) simplifying the thermal conductivity model, and 3) implementing ARIA’s new phase change model. This memo briefly describes the model, implementation, and validation.
Stiffness distribution in insect cuticle: a continuous or a discontinuous profile?
Rajabi, H; Jafarpour, M; Darvizeh, A; Dirks, J-H; Gorb, S N
2017-07-01
Insect cuticle is a biological composite with a high degree of complexity in terms of both architecture and material composition. Given the complex morphology of many insect body parts, finite-element (FE) models play an important role in the analysis and interpretation of biomechanical measurements, taken by either macroscopic or nanoscopic techniques. Many previous studies show that the interpretation of nanoindentation measurements of this layered composite material is very challenging. To develop accurate FE models, it is of particular interest to understand more about the variations in the stiffness through the thickness of the cuticle. Considering the difficulties of making direct measurements, in this study, we use the FE method to analyse previously published data and address this issue numerically. For this purpose, sets of continuous or discontinuous stiffness profiles through the thickness of the cuticle were mathematically described. The obtained profiles were assigned to models developed based on the cuticle of three insect species with different geometries and layer configurations. The models were then used to simulate the mechanical behaviour of insect cuticles subjected to nanoindentation experiments. Our results show that FE models with discontinuous exponential stiffness gradients along their thickness were able to predict the stress and deformation states in insect cuticle very well. Our results further suggest that, for more accurate measurements and interpretation of nanoindentation test data, the ratio of the indentation depth to cuticle thickness should be limited to 7% rather than the traditional '10% rule'. The results of this study thus might be useful to provide a deeper insight into the biomechanical consequences of the distinct material distribution in insect cuticle and also to form a basis for more realistic modelling of this complex natural composite. © 2017 The Author(s).
Optimizing complex phenotypes through model-guided multiplex genome engineering
Kuznetsov, Gleb; Goodman, Daniel B.; Filsinger, Gabriel T.; ...
2017-05-25
Here, we present a method for identifying genomic modifications that optimize a complex phenotype through multiplex genome engineering and predictive modeling. We apply our method to identify six single nucleotide mutations that recover 59% of the fitness defect exhibited by the 63-codon E. coli strain C321.ΔA. By introducing targeted combinations of changes in multiplex we generate rich genotypic and phenotypic diversity and characterize clones using whole-genome sequencing and doubling time measurements. Regularized multivariate linear regression accurately quantifies individual allelic effects and overcomes bias from hitchhiking mutations and context-dependence of genome editing efficiency that would confound other strategies.
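The regression step lends itself to a short sketch: with genotypes encoded as a binary clone-by-mutation design matrix, ridge-regularized least squares attributes the measured fitness change to individual alleles. The data below are synthetic and the penalty value is an arbitrary assumption.

```python
# Ridge regression of fitness on a binary genotype design matrix.
import numpy as np

rng = np.random.default_rng(7)
n_clones, n_alleles = 200, 12
X = rng.integers(0, 2, size=(n_clones, n_alleles)).astype(float)
true_effect = rng.normal(0, 0.05, n_alleles)         # per-allele effect
y = X @ true_effect + rng.normal(0, 0.01, n_clones)  # doubling-time change

lam = 1.0                                             # assumed penalty
beta = np.linalg.solve(X.T @ X + lam * np.eye(n_alleles), X.T @ y)
print(np.round(np.corrcoef(beta, true_effect)[0, 1], 3))  # recovery quality
```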
Smith, Robert W; van Rosmalen, Rik P; Martins Dos Santos, Vitor A P; Fleck, Christian
2018-06-19
Models of metabolism are often used in biotechnology and pharmaceutical research to identify drug targets or increase the direct production of valuable compounds. Due to the complexity of large metabolic systems, a number of conclusions have been drawn using mathematical methods with simplifying assumptions. For example, constraint-based models exploit the fact that changes in internal concentrations occur much more quickly than alterations in cell physiology. Thus, metabolite concentrations and reaction fluxes are fixed to constant values. This greatly reduces the mathematical complexity, while providing a reasonably good description of the system in steady state. However, without a large number of constraints, many different flux sets can describe the optimal model, and we obtain no information on how metabolite levels dynamically change. Thus, to accurately determine what is taking place within the cell, higher-quality data and more detailed models need to be constructed. In this paper we present a computational framework, DMPy, that uses a network scheme as input to automatically search for kinetic rates and produce a mathematical model that describes temporal changes of metabolite fluxes. The parameter search utilises several online databases to find measured reaction parameters. From this, we take advantage of previous modelling efforts, such as Parameter Balancing, to produce an initial mathematical model of a metabolic pathway. We analyse the effect of parameter uncertainty on model dynamics and test how recent flux-based model reduction techniques alter system properties. To our knowledge, this is the first time such analysis has been performed on large models of metabolism. Our results highlight that good estimates of at least 80% of the reaction rates are required to accurately model metabolic systems. Furthermore, reducing the size of the model by grouping reactions together based on fluxes alters the resulting system dynamics. The presented pipeline automates the modelling process for large metabolic networks. From this, users can simulate their pathway of interest and obtain a better understanding of how altering conditions influences cellular dynamics. By testing the effects of different parameterisations we are also able to provide suggestions to help construct more accurate models of complete metabolic systems in the future.
Range 7 Scanner Integration with PaR Robot Scanning System
NASA Technical Reports Server (NTRS)
Schuler, Jason; Burns, Bradley; Carlson, Jeffrey; Minich, Mark
2011-01-01
An interface bracket and coordinate transformation matrices were designed to allow the Range 7 scanner to be mounted on the PaR Robot detector arm for scanning the heat shield or other objects placed in the test cell. A process was designed for using Rapid Form XOR to stitch data from multiple scans together to provide an accurate 3D model of the scanned object. An accurate model was required for the design and verification of an existing heat shield. The large physical size and complex shape of the heat shield do not allow for direct measurement of certain features in relation to other features. Any imaging device capable of capturing the heat shield in its entirety suffers reduced resolution and cannot image sections that are blocked from view. Prior methods involved tools such as commercial measurement arms, taking images with cameras, then performing manual measurements. These prior methods were tedious, could not provide a 3D model of the object being scanned, and were typically limited to a few tens of measurement points at prominent locations. Integration of the scanner with the robot allows large, complex objects to be scanned at high resolution, and 3D Computer Aided Design (CAD) models to be generated for verification of items against the original design and for generating models of previously undocumented items. The main components are the mounting bracket for the scanner to the robot and the coordinate transformation matrices used for stitching the scanner data into a 3D model. The steps involve mounting the interface bracket to the robot's detector arm, mounting the scanner to the bracket, and then scanning sections of the object and recording the location of the tool tip (in this case the center of the scanner's focal point). A novel feature is the ability to stitch images together by coordinates instead of requiring each scan data set to have overlapping identifiable features. This setup allows models of complex objects to be developed even if the object is large and featureless, or has sections that lack visibility to other parts of the object for use as a reference. In addition, millions of points can be used for creation of an accurate model [i.e. within 0.03 in. (=0.8 mm) over a span of 250 in. (=6350 mm)].
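The coordinate-based stitching amounts to applying the robot-reported tool-tip pose, as a 4x4 homogeneous transform, to each scan before merging. A minimal sketch with toy poses (the random cloud and pose values are illustrative, not robot data):

```python
# Merge scans into a common frame using recorded poses, with no need for
# overlapping identifiable features between scans.
import numpy as np

def transform(points, T):
    """Apply a 4x4 homogeneous transform to an (N, 3) point array."""
    homog = np.hstack([points, np.ones((len(points), 1))])
    return (homog @ T.T)[:, :3]

def pose(R, t):
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

scan = np.random.default_rng(0).uniform(size=(1000, 3))     # toy scan data
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])  # 90 deg about z
T_a = pose(np.eye(3), np.array([0.0, 0.0, 0.5]))            # recorded pose A
T_b = pose(Rz, np.array([1.0, 0.0, 0.5]))                   # recorded pose B

merged = np.vstack([transform(scan, T_a), transform(scan, T_b)])
print(merged.shape)
```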
Time domain simulation of novel photovoltaic materials
NASA Astrophysics Data System (ADS)
Chung, Haejun
Thin-film silicon-based solar cells have operated far from the Shockley-Queisser limit in all experiments to date. Novel light-trapping structures, however, may help address this limitation. Finite-difference time-domain simulation methods offer the potential to accurately determine the light-trapping potential of arbitrary dielectric structures, but suffer from materials modeling problems. In this thesis, existing dispersion models for novel photovoltaic materials will be reviewed, and a novel dispersion model, known as the quadratic complex rational function (QCRF), will be proposed. It has the advantage of accurately fitting experimental semiconductor dielectric values over a wide bandwidth in a numerically stable fashion. Applying the proposed dispersion model, a statistically correlated surface texturing method will be suggested, and its light absorption rates will be explained. In future work, these designs will be combined with other structures and optimized to help guide future experiments.
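A hedged sketch of the QCRF idea as described: the permittivity is represented as a ratio of quadratics in s = jω and fitted by linearized least squares. The "measured" curve below is synthetic and the frequency normalization is an added assumption for conditioning; real use would fit tabulated semiconductor data.

```python
# Fit eps(s) = (A0 + A1 s + A2 s^2) / (1 + B1 s + B2 s^2) to complex data by
# rearranging into a linear problem and stacking real/imaginary parts.
import numpy as np

omega = 2 * np.pi * np.linspace(300e12, 1000e12, 60)   # optical band (rad/s)
w0 = 1e15                                              # scale for conditioning
sn = 1j * omega / w0
eps_meas = 12.0 + 3.0 / (1 + sn / 5.0)                 # synthetic dispersion

# eps * (1 + B1 s + B2 s^2) = A0 + A1 s + A2 s^2  ->  linear in coefficients
A = np.column_stack([np.ones_like(sn), sn, sn**2,
                     -eps_meas * sn, -eps_meas * sn**2])
rows = np.vstack([A.real, A.imag])
rhs = np.concatenate([eps_meas.real, eps_meas.imag])
A0, A1, A2, B1, B2 = np.linalg.lstsq(rows, rhs, rcond=None)[0]

eps_fit = (A0 + A1 * sn + A2 * sn**2) / (1 + B1 * sn + B2 * sn**2)
print("max abs fit error:", np.abs(eps_fit - eps_meas).max())
```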
Spreading dynamics on complex networks: a general stochastic approach.
Noël, Pierre-André; Allard, Antoine; Hébert-Dufresne, Laurent; Marceau, Vincent; Dubé, Louis J
2014-12-01
Dynamics on networks is considered from the perspective of Markov stochastic processes. We partially describe the state of the system through network motifs and infer any missing data using the available information. This versatile approach is especially well adapted for modelling spreading processes and/or population dynamics. In particular, the generality of our framework and the fact that its assumptions are explicitly stated suggest that it could be used as a common ground for comparing existing epidemic models too complex for direct comparison, such as agent-based computer simulations. We provide many examples for the special cases of susceptible-infectious-susceptible and susceptible-infectious-removed dynamics (e.g., epidemic propagation) and we observe multiple situations where accurate results may be obtained at low computational cost. Our perspective reveals a subtle balance between the complex requirements of a realistic model and its basic assumptions.
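For contrast with the motif-based Markov description, the toy discrete-time SIS simulation below is the kind of agent-level process such frameworks aim to approximate at far lower cost; all parameters are illustrative.

```python
# Discrete-time SIS epidemic on an Erdos-Renyi random graph.
import numpy as np

rng = np.random.default_rng(3)
N, p_edge = 500, 0.02
adj = rng.random((N, N)) < p_edge
adj = np.triu(adj, 1)
adj = (adj | adj.T).astype(float)                 # undirected, no self-loops

beta, mu = 0.06, 0.1                              # infection / recovery rates
infected = rng.random(N) < 0.05                   # initial seeds

for _ in range(200):
    pressure = adj @ infected                     # infected-neighbour count
    p_inf = 1 - (1 - beta) ** pressure            # per-step infection prob.
    new_inf = (~infected) & (rng.random(N) < p_inf)
    recover = infected & (rng.random(N) < mu)
    infected = (infected | new_inf) & ~recover

print("endemic prevalence ~", infected.mean())
```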
Ringer, Ashley L.; Senenko, Anastasia; Sherrill, C. David
2007-01-01
S/π interactions are prevalent in biochemistry and play an important role in protein folding and stabilization. Geometries of cysteine/aromatic interactions found in crystal structures from the Brookhaven Protein Data Bank (PDB) are analyzed and compared with the equilibrium configurations predicted by high-level quantum mechanical results for the H2S–benzene complex. A correlation is observed between the energetically favorable configurations on the quantum mechanical potential energy surface of the H2S–benzene model and the cysteine/aromatic configurations most frequently found in crystal structures of the PDB. In contrast to some previous PDB analyses, configurations with the sulfur over the aromatic ring are found to be the most important. Our results suggest that accurate quantum computations on models of noncovalent interactions may be helpful in understanding the structures of proteins and other complex systems. PMID:17766371
Modeling of Fuel Film Cooling on Chamber Hot Wall
2014-07-01
downstream, when the film has been depleted of its cooling and coking capacities, a second slot is needed to inject fresh cool fuel. All of these... pyrolysis and oxidation. 7. As discussed in the introductory section, sooting and coking are notoriously complex topics. Well-validated global... accurate models for soot formation and deposition. Instead, the potential impact of the coke layer is evaluated parametrically by representing the
NASA Astrophysics Data System (ADS)
Simmons, Daniel; Cools, Kristof; Sewell, Phillip
2016-11-01
Time domain electromagnetic simulation tools have the ability to model transient, wide-band applications, and non-linear problems. The Boundary Element Method (BEM) and the Transmission Line Modeling (TLM) method are both well established numerical techniques for simulating time-varying electromagnetic fields. The former, surface-based method can accurately describe outwardly radiating fields from piecewise uniform objects and efficiently deals with large domains filled with homogeneous media. The latter, volume-based method can describe inhomogeneous and non-linear media and has been proven to be unconditionally stable. Furthermore, the Unstructured TLM (UTLM) enables modelling of geometrically complex objects by using triangular meshes which removes staircasing and unnecessary extensions of the simulation domain. The hybridization of BEM and UTLM which is described in this paper is named the Boundary Element Unstructured Transmission-line (BEUT) method. It incorporates the advantages of both methods. The theory and derivation of the 2D BEUT method is described in this paper, along with any relevant implementation details. The method is corroborated by studying its correctness and efficiency compared to the traditional UTLM method when applied to complex problems such as the transmission through a system of Luneburg lenses and the modelling of antenna radomes for use in wireless communications.
The geometric nature of weights in real complex networks
NASA Astrophysics Data System (ADS)
Allard, Antoine; Serrano, M. Ángeles; García-Pérez, Guillermo; Boguñá, Marián
2017-01-01
The topology of many real complex networks has been conjectured to be embedded in hidden metric spaces, where distances between nodes encode their likelihood of being connected. Besides providing a natural geometrical interpretation of complex topologies, this hypothesis yields a recipe for sustainable Internet routing protocols, sheds light on the hierarchical organization of biochemical pathways in cells, and allows for a rich characterization of the evolution of international trade. Here we present empirical evidence that this geometric interpretation also applies to the weighted organization of real complex networks. We introduce a very general and versatile model and use it to quantify the level of coupling between their topology, their weights and an underlying metric space. Our model accurately reproduces both their topology and their weights, and our results suggest that the formation of connections and the assignment of their magnitude are ruled by different processes.
Intermittent dynamics in complex systems driven to depletion.
Escobar, Juan V; Pérez Castillo, Isaac
2018-03-19
When complex systems are driven to depletion by some external factor, their non-stationary dynamics can present intermittent behaviour, alternating between relative tranquility and bursts of activity whose consequences are often catastrophic. To understand and ultimately be able to predict such dynamics, we propose an underlying mechanism based on sharp thresholds of a local generalized energy density that naturally leads to negative feedback. We find a transition from a continuous regime to an intermittent one, in which avalanches can be predicted despite the stochastic nature of the process. This model may have applications in many natural and social complex systems where a rapid depletion of resources or generalized energy drives the dynamics. In particular, we show how this model accurately describes the time evolution and avalanches present in a real social system.
Multiscale Modelling of the 2011 Tohoku Tsunami with Fluidity: Coastal Inundation and Run-up.
NASA Astrophysics Data System (ADS)
Hill, J.; Martin-Short, R.; Piggott, M. D.; Candy, A. S.
2014-12-01
Tsunami-induced flooding represents one of the most dangerous natural hazards to coastal communities around the world, as exemplified by the Tohoku tsunami of March 2011. In order to further understand this hazard and to design appropriate mitigation it is necessary to develop versatile, accurate software capable of simulating large-scale tsunami propagation and interaction with coastal geomorphology on a local scale. One such software package is Fluidity, an open-source, finite-element, multiscale code that is capable of solving the fully three-dimensional Navier-Stokes equations on unstructured meshes. Such meshes are significantly better at representing complex coastline shapes than structured meshes and have the advantage of allowing variation in element size across a domain. Furthermore, Fluidity incorporates a novel wetting and drying algorithm, which enables accurate, efficient simulation of tsunami run-up over complex, multiscale topography. Fluidity has previously been demonstrated to accurately simulate the 2011 Tohoku tsunami (Oishi et al. 2013), but its wetting and drying facility has not yet been tested on a geographical scale. This study makes use of Fluidity to simulate the 2011 Tohoku tsunami and its interaction with Japan's eastern shoreline, including coastal flooding. The results are validated against observations made by survey teams, aerial photographs and previous modelling efforts in order to evaluate Fluidity's current capabilities and suggest methods of future improvement. The code is shown to perform well at simulating flooding along the topographically complex Tohoku coast of Japan, with major deviations between model and observation arising mainly due to limitations imposed by bathymetry resolution, which could be improved in future. In theory, Fluidity is capable of full multiscale tsunami modelling, thus enabling researchers to understand both wave propagation across ocean basins and flooding of coastal landscapes down to interaction with individual defence structures. This makes the code an exciting candidate for use in future studies aiming to investigate tsunami risk elsewhere in the world. Oishi, Y. et al. Three-dimensional tsunami propagation simulations using an unstructured mesh finite element model. J. Geophys. Res. Solid Earth 118, 2998-3018 (2013).
Voulgarelis, Dimitrios; Velayudhan, Ajoy; Smith, Frank
2017-01-01
Agent-based models are a formidable tool for exploring the complex and emergent behaviour of biological systems and can deliver accurate results, but at the cost of substantial computational power and long run times for subsequent analysis. Equation-based models, on the other hand, can more easily be used for complex analysis on a much shorter timescale. This paper formulates an ordinary differential equation (ODE) and a stochastic differential equation (SDE) model to capture the behaviour of an existing agent-based model of tumour cell reprogramming, and applies them to the optimization of possible treatment as well as dosage sensitivity analysis. For certain regions of the parameter space, a close match between the equation-based and agent-based models is achieved. The need for a division of labour between the two approaches is explored.
Heh, Ding Yu; Tan, Eng Leong
2011-04-12
This paper presents the modeling of hemoglobin at optical frequencies (250 nm - 1000 nm) using the unconditionally stable fundamental alternating-direction-implicit finite-difference time-domain (FADI-FDTD) method. An accurate model based on complex conjugate pole-residue pairs is proposed to model the complex permittivity of hemoglobin at optical frequencies. Two hemoglobin concentrations, 15 g/dL and 33 g/dL, are considered. The model is then incorporated into the FADI-FDTD method for solving electromagnetic problems involving the interaction of light with hemoglobin. The computation of transmission and reflection coefficients of a half-space hemoglobin medium using the FADI-FDTD method validates the accuracy of our model and method. The specific absorption rate (SAR) distribution of a human capillary at optical frequencies is also shown. While maintaining accuracy, the unconditionally stable FADI-FDTD method exhibits high efficiency in modeling hemoglobin.
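Since the abstract does not give the fitted parameters, the following Python sketch only illustrates how a complex-conjugate pole-residue permittivity model of this form is evaluated; the pole and residue values below are hypothetical placeholders, not the hemoglobin data from the paper.

```python
import numpy as np

def permittivity_ccpr(omega, eps_inf, poles, residues):
    """Relative permittivity from complex-conjugate pole-residue pairs:
    eps(w) = eps_inf + sum_p [ c_p/(j*w - a_p) + conj(c_p)/(j*w - conj(a_p)) ]."""
    jw = 1j * np.asarray(omega, dtype=float)
    eps = np.full(jw.shape, eps_inf, dtype=complex)
    for a, c in zip(poles, residues):
        eps += c / (jw - a) + np.conj(c) / (jw - np.conj(a))
    return eps

# One hypothetical pair evaluated across the 250-1000 nm band of the paper.
wavelengths = np.linspace(250e-9, 1000e-9, 5)
omega = 2.0 * np.pi * 3.0e8 / wavelengths        # angular frequency, rad/s
print(permittivity_ccpr(omega, eps_inf=1.8,
                        poles=[-1.0e15 + 4.0e15j], residues=[2.0e15 + 1.0e15j]))
```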
NASA Astrophysics Data System (ADS)
Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise
2017-11-01
The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. It also poses a challenge, however, for fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes for training and validating empirical models do not appear to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship and by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which uses artificially generated spectra to quantify relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it with traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, including partial least squares regression, support vector machines, artificial neural networks and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which achieve accuracies similar to those of the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models whose apparently accurate predictions are valid only for the data used, and which are too complex to support inferences about the underlying process.
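As context for the overfitting problem described above, the sketch below contrasts training-set R² with cross-validated R² as the number of partial least squares components grows, using synthetic spectra-like data; it illustrates the cross-validation baseline that NOIS is compared against, not the NOIS procedure itself, and the data generation is invented for the example.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_bands = 60, 200                 # few samples, many correlated bands
X = rng.normal(size=(n_samples, n_bands)).cumsum(axis=1)  # smooth, spectrum-like rows
y = 0.5 * X[:, 50] + rng.normal(scale=0.5, size=n_samples)

for n_comp in (1, 2, 5, 10, 20):
    pls = PLSRegression(n_components=n_comp)
    cv_r2 = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()
    fit_r2 = pls.fit(X, y).score(X, y)
    print(f"{n_comp:2d} components: fit R2 = {fit_r2:.3f}, CV R2 = {cv_r2:.3f}")
# A widening gap between fit R2 and CV R2 signals overfitting.
```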
Cluster-Expansion Model for Complex Quinary Alloys: Application to Alnico Permanent Magnets
NASA Astrophysics Data System (ADS)
Nguyen, Manh Cuong; Zhou, Lin; Tang, Wei; Kramer, Matthew J.; Anderson, Iver E.; Wang, Cai-Zhuang; Ho, Kai-Ming
2017-11-01
An accurate and transferable cluster-expansion model for complex quinary alloys is developed. Lattice Monte Carlo simulation enabled by this cluster-expansion model is used to investigate the temperature-dependent atomic structure of alnico alloys, which are considered promising high-performance non-rare-earth permanent-magnet materials for high-temperature applications. The results of the Monte Carlo simulations are consistent with available experimental data and provide useful insights into phase decomposition, selection, and chemical ordering in alnico. The simulations also reveal a previously unrecognized D0₃ alloy phase. This phase is very rich in Ni and exhibits very weak magnetization. Manipulating the size and location of this phase provides a possible route to improving the magnetic properties of alnico, especially coercivity.
Effects of additional data on Bayesian clustering.
Yamazaki, Keisuke
2017-10-01
Hierarchical probabilistic models, such as mixture models, are used for cluster analysis. These models have two types of variables: observable and latent. In cluster analysis, the latent variable is estimated, and it is expected that additional information will improve the accuracy of this estimation. Many proposed learning methods are able to use additional data; these include semi-supervised learning and transfer learning. However, from a statistical point of view, a complex probabilistic model that encompasses both the initial and the additional data might be less accurate due to its higher-dimensional parameter. The present paper provides a theoretical analysis of the accuracy of such a model and clarifies which factors have the greatest effect on its accuracy, the advantages of obtaining additional data, and the disadvantages of increasing the complexity.
Application of tire dynamics to aircraft landing gear design analysis
NASA Technical Reports Server (NTRS)
Black, R. J.
1983-01-01
The tire plays a key role in many analyses used in the design of aircraft landing gear. Examples include structural design of wheels, landing gear shimmy, brake whirl, chatter and squeal, the complex combination of chatter and shimmy on main landing gear (MLG) systems, anti-skid performance, gear walk, and rough-terrain loads and performance. The tire parameters needed in the various analyses are discussed. Two tire models are discussed for shimmy analysis: the modified Moreland approach and the von Schlippe-Dietrich approach. It is shown that the Moreland model can be derived from the von Schlippe-Dietrich model through certain approximations. The remaining analysis areas are discussed in general terms and the tire parameters needed for each are identified. Accurate tire data allows more accurate design analysis and correct prediction of the dynamic performance of aircraft landing gear.
NASA Astrophysics Data System (ADS)
Sentís, Manuel Lorenzo; Gable, Carl W.
2017-11-01
There are many applications in science and engineering modeling where an accurate representation of a complex model geometry in the form of a mesh is important. In applications of flow and transport in subsurface porous media, this is manifest in models that must capture complex geologic stratigraphy, structure (faults, folds, erosion, deposition) and infrastructure (tunnels, boreholes, excavations). Model setup, defined as the activities of geometry definition, mesh generation (creation, optimization, modification, refinement, de-refinement, smoothing), and assignment of material properties, initial conditions and boundary conditions, requires specialized software tools to automate and streamline the process. In addition, some model setup tools provide more utility if they are designed to interface with and meet the needs of a particular flow and transport software suite. A control volume discretization that uses a two-point flux approximation, for example, is most accurate when the underlying control volumes are 2D or 3D Voronoi tessellations. In this paper we present the coupling of LaGriT, a mesh generation and model setup software suite, with TOUGH2 (Pruess et al., 1999) to model subsurface flow problems, and we show an example of how LaGriT can be used as a model setup tool for the generation of a Voronoi mesh for the simulation program TOUGH2. To generate the MESH file for TOUGH2 from the LaGriT output, a standalone module, Lagrit2Tough2, was developed; it is presented here and will be included in a future release of LaGriT. In this paper an alternative method to generate a Voronoi mesh for TOUGH2 with LaGriT is presented; thanks to the modular and command-based structure of LaGriT, this method is well suited to generating meshes for complex models.
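To illustrate why Voronoi control volumes suit a two-point flux approximation (TPFA), the Python sketch below builds a small 2D Voronoi tessellation with SciPy and computes the geometric part of each face transmissibility. It is a conceptual illustration only, not part of the LaGriT/Lagrit2Tough2 workflow; the point set is random.

```python
import numpy as np
from scipy.spatial import Voronoi

# Hypothetical 2D cell centers; in a real setup these would be mesh points.
points = np.random.default_rng(1).uniform(0.0, 100.0, size=(50, 2))
vor = Voronoi(points)

# Each Voronoi ridge (shared face) is orthogonal to the segment joining the two
# cell centers that generate it, which is what makes the TPFA consistent.
trans = {}
for (i, j), verts in zip(vor.ridge_points, vor.ridge_vertices):
    if -1 in verts:
        continue                                    # skip unbounded hull ridges
    face = vor.vertices[verts]
    area = np.linalg.norm(face[1] - face[0])        # face "area" (a length in 2D)
    dist = np.linalg.norm(points[j] - points[i])    # center-to-center distance
    trans[(i, j)] = area / dist                     # geometric TPFA transmissibility

print(len(trans), "interior connections; sample:", list(trans.items())[:3])
```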
Simulation of Hydraulic and Natural Fracture Interaction Using a Coupled DFN-DEM Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou, J.; Huang, H.; Deo, M.
2016-03-01
The presence of natural fractures usually results in a complex fracture network due to the interactions between hydraulic and natural fractures. The reactivation of natural fractures can provide additional flow paths from formation to wellbore, which play a crucial role in improving hydrocarbon recovery in these ultra-low-permeability reservoirs. Thus, an accurate description of the geometry of discrete fractures and bedding is highly desired for accurate flow and production predictions. Compared to conventional continuum models that represent discrete features implicitly, Discrete Fracture Network (DFN) models can realistically model the connectivity of discontinuities at both the reservoir scale and the well scale. In this work, a new hybrid numerical model that couples a Discrete Fracture Network (DFN) with a Dual-Lattice Discrete Element Method (DL-DEM) is proposed to investigate the interaction between hydraulic and natural fractures. Based on the proposed model, the effects of natural fracture orientation, density and injection properties on hydraulic-natural fracture interaction are investigated.
NASA Astrophysics Data System (ADS)
Mashayekhi, Somayeh; Miles, Paul; Hussaini, M. Yousuff; Oates, William S.
2018-02-01
In this paper, fractional and non-fractional viscoelastic models for elastomeric materials are derived and analyzed in comparison to experimental results. The viscoelastic models are derived by expanding thermodynamic balance equations for both fractal and non-fractal media. The order of the fractional time derivative is shown to strongly affect the accuracy of the viscoelastic constitutive predictions. Model validation uses experimental data describing the viscoelasticity of the dielectric elastomer Very High Bond (VHB) 4910. Since these materials are known for their broad applications in smart structures, it is important to characterize and accurately predict their behavior across a large range of time scales. Whereas integer-order viscoelastic models can yield reasonable agreement with data, their parameters often lack robustness in prediction at different deformation rates. Fractional-order models of viscoelasticity provide an alternative framework to more accurately quantify complex rate-dependent behavior. Prior research that has considered fractional-order viscoelasticity lacks experimental validation and contains limited links between viscoelastic theory and fractional-order derivatives. To address these issues, we use fractional-order operators to experimentally validate fractional and non-fractional viscoelastic models in elastomeric solids using Bayesian uncertainty quantification. The fractional-order model is found to be advantageous, as its predictions are significantly more accurate than those of integer-order viscoelastic models for deformation rates spanning four orders of magnitude.
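As a concrete illustration of a fractional time derivative, here is a short Python sketch using the Grünwald-Letnikov discretization, one common numerical scheme (the paper does not state which scheme it uses, so this is a generic example, checked against the closed form for f(t) = t).

```python
import numpy as np
from math import gamma

def gl_fractional_derivative(f, alpha, h):
    """Grunwald-Letnikov approximation of the order-alpha derivative of
    uniformly sampled f (grid spacing h), using the recursive binomial weights
    w_0 = 1, w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    n = len(f)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    return np.array([np.dot(w[:m + 1], f[m::-1]) for m in range(n)]) / h**alpha

# Check: the order-1/2 derivative of f(t) = t is t^(1/2) / Gamma(3/2).
t = np.linspace(0.0, 1.0, 201)
approx = gl_fractional_derivative(t, 0.5, t[1] - t[0])
exact = np.sqrt(t) / gamma(1.5)
print(np.max(np.abs(approx - exact)))   # small, first-order discretization error
```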
Accurate Behavioral Simulator of All-Digital Time-Domain Smart Temperature Sensors by Using SIMULINK
Chen, Chun-Chi; Chen, Chao-Lieh; Lin, You-Ting
2016-01-01
This study proposes a new behavioral simulator that uses SIMULINK for all-digital CMOS time-domain smart temperature sensors (TDSTSs), enabling rapid and accurate simulations. Inverter-based TDSTSs, which offer the benefits of low cost and a simple structure for temperature-to-digital conversion, have been developed previously. Typically, electronic design automation tools, such as HSPICE, are used to simulate TDSTSs for performance evaluation. However, such tools require extremely long simulation times and complex procedures to analyze the results and generate figures. In this paper, we organize simple but accurate equations into a temperature-dependent model (TDM) with which the TDSTSs evaluate temperature behavior. Furthermore, temperature-sensing models of a single CMOS NOT gate were devised using HSPICE simulations. Using the TDM and these temperature-sensing models, a novel simulator in the SIMULINK environment was developed to substantially accelerate simulation and simplify the evaluation procedures. Experiments demonstrated that the results of the proposed simulator agree favorably with those obtained from HSPICE simulations, showing that the proposed simulator functions successfully. This is the first behavioral simulator to address the rapid simulation of TDSTSs.
Kossert, K; Cassette, Ph; Grau Carles, A; Jörg, G; Lierse von Gostomski, Christoph; Nähle, O; Wolf, Ch
2014-05-01
The triple-to-double coincidence ratio (TDCR) method is frequently used to measure the activity of radionuclides decaying by pure β emission or electron capture (EC). Some radionuclides with more complex decays have also been studied, but accurate calculations for decay branches that are accompanied by many coincident γ transitions have not previously been investigated. This paper describes recent extensions of the model that make efficiency computations possible for more complex decay schemes. In particular, the MICELLE2 program, which applies a stochastic approach to the free-parameter model, was extended. With the improved code, efficiencies for β-, β+ and EC branches with up to seven coincident γ transitions can be calculated. Moreover, a new parametrization for the computation of electron stopping powers has been implemented to compute the ionization quenching function of 10 commercial scintillation cocktails. To demonstrate the capabilities of the TDCR method, the following radionuclides are discussed: 166mHo (complex β-/γ), 59Fe (complex β-/γ), 64Cu (β-, β+, EC and EC/γ) and 229Th in equilibrium with its progeny (a decay chain with many α, β and complex β-/γ transitions).
Global horizontal irradiance clear sky models : implementation and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stein, Joshua S.; Hansen, Clifford W.; Reno, Matthew J.
2012-03-01
Clear-sky models estimate the terrestrial solar radiation under a cloudless sky as a function of the solar elevation angle, site altitude, aerosol concentration, water vapor, and various atmospheric conditions. This report provides an overview of a number of global horizontal irradiance (GHI) clear-sky models, from very simple to complex. Validation of clear-sky models requires comparison of model results to measured irradiance during clear-sky periods. To facilitate validation, we present a new algorithm for automatically identifying clear-sky periods in a time series of GHI measurements. We evaluate the performance of selected clear-sky models using measured data from 30 different sites, totaling about 300 site-years of data, and analyze the variation of the errors across time and location. In terms of error averaged over all locations and times, we found that complex models that correctly account for all the atmospheric parameters are slightly more accurate than other models, but comparable accuracy can be obtained from some simpler models, primarily at low solar elevations. However, simpler models often exhibit errors that vary with time of day and season, whereas the errors of complex models vary less over time.
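Among the very simple models surveyed in reports of this kind is the Haurwitz model, which needs only the solar zenith angle. The sketch below implements it in Python; the coefficient values are those commonly quoted in the clear-sky literature and should be treated as assumptions here, since different sources tabulate slightly different constants.

```python
import numpy as np

def haurwitz_ghi(zenith_deg):
    """Haurwitz clear-sky GHI (W/m^2) as a function of solar zenith angle only.
    Coefficients (1098, 0.059) follow common implementations; sources vary."""
    cosz = np.cos(np.radians(np.asarray(zenith_deg, dtype=float)))
    cosz_safe = np.maximum(cosz, 1e-6)              # avoid divide-by-zero at night
    ghi = 1098.0 * cosz_safe * np.exp(-0.059 / cosz_safe)
    return np.where(cosz > 0.0, ghi, 0.0)

print(haurwitz_ghi([0.0, 30.0, 60.0, 85.0]))        # sun overhead -> near horizon
```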
Multi-Purpose Enrollment Projections: A Comparative Analysis of Four Approaches
ERIC Educational Resources Information Center
Allen, Debra Mary
2013-01-01
Providing support for institutional planning is central to the function of institutional research. Necessary for the planning process are accurate enrollment projections. The purpose of the present study was to develop a short-term enrollment model simple enough to be understood by those who rely on it, yet sufficiently complex to serve varying…
Toward New Data and Information Management Solutions for Data-Intensive Ecological Research
ERIC Educational Resources Information Center
Laney, Christine Marie
2013-01-01
Ecosystem health is deteriorating in many parts of the world due to direct and indirect anthropogenic pressures. Generating accurate, useful, and impactful models of past, current, and future states of ecosystem structure and function is a complex endeavor that often requires vast amounts of data from multiple sources and knowledge from…
Model of anisotropic nonlinearity in self-defocusing photorefractive media.
Barsi, C; Fleischer, J W
2015-09-21
We develop a phenomenological model of anisotropy in self-defocusing photorefractive crystals. In addition to an independent term due to nonlinear susceptibility, we introduce a nonlinear, non-separable correction to the spectral diffraction operator. The model successfully describes the crossover between photovoltaic and photorefractive responses and the spatially dispersive shock wave behavior of a nonlinearly spreading Gaussian input beam. It should prove useful for characterizing internal charge dynamics in complex materials and for accurate image reconstruction through nonlinear media.
1989-03-15
... 3. F2(g)-Li(l) 4. SF6(g)-Li(l) ... Several different modeling techniques are used to accurately estimate the activity coefficients of the ... electrolytes with molecular species. The gas phase of the electrolytic solution is modeled using a pressure-explicit second-order virial equation. The pure ... calculated using the van Laar model. This research was sponsored by the Office of Naval Research, Contract No. N00014-85-k
A cellular automata approach for modeling surface water runoff
NASA Astrophysics Data System (ADS)
Jozefik, Zoltan; Nanu Frechen, Tobias; Hinz, Christoph; Schmidt, Heiko
2015-04-01
This abstract reports the development and application of a two-dimensional cellular-automata-based model which couples the dynamics of overland flow, infiltration processes and surface evolution through sediment transport. Natural hillslopes are represented by their topographic elevation and by spatially varying soil properties: infiltration rates and surface roughness coefficients. This model allows the simulation of Hortonian overland flow and infiltration during complex rainfall events. An advantage of the cellular automata approach over the kinematic wave equations is that the wet/dry interfaces that often appear in rainfall-driven overland flow can be captured accurately and are not a source of numerical instability. An adaptive explicit time-stepping scheme allows rainfall events to be adequately resolved in time, while large time steps are taken during dry periods for run-time efficiency. The time step is constrained by the CFL condition and by mass conservation considerations. The spatial discretization is shown to be first-order accurate. For validation purposes, hydrographs for non-infiltrating and infiltrating planes are compared to the kinematic wave analytic solutions and to data taken from the literature [1,2]. Results show that our cellular automata model accurately reproduces hydrograph patterns. However, recent work has shown that even though the hydrograph is satisfactorily reproduced, the flow field within the plot might be inaccurate [3]. For a more stringent validation, we compare steady-state velocity, water flux, and water depth fields to rainfall simulation experiments conducted in Thies, Senegal [3]. Comparisons show that our model is able to accurately capture these flow properties. Currently, a sediment transport and deposition module is being implemented and tested. [1] M. Rousseau, O. Cerdan, O. Delestre, F. Dupros, F. James, S. Cordier. Overland flow modeling with the Shallow Water Equation using a well balanced numerical scheme: Adding efficiency or some more complexity? 2012.
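The sketch below shows the general shape of such an explicit, mass-conserving cellular-automata update in Python: each cell routes water to lower neighbors in proportion to the free-surface head drop, with rainfall and infiltration as source and sink terms. It is a minimal illustrative rule (with periodic boundaries for brevity), not the authors' scheme, and all coefficients are invented.

```python
import numpy as np

def ca_step(h, z, rain, infil, dt, k=0.5):
    """One explicit CA update of water depth h (m) on elevation z (m).
    Each cell sends a fraction k*dt of every positive head drop to that
    neighbor, capped at h/4 per direction so the cell cannot go negative."""
    eta = z + h                                       # free-surface elevation
    dh = np.zeros_like(h)
    for axis in (0, 1):
        for shift in (1, -1):
            drop = np.maximum(eta - np.roll(eta, shift, axis=axis), 0.0)
            q = np.minimum(k * dt * drop, h / 4.0)    # outflow toward the neighbor
            dh -= q
            dh += np.roll(q, -shift, axis=axis)       # inflow at the receiving cell
    return np.maximum(h + dh + (rain - infil) * dt, 0.0)

rng = np.random.default_rng(0)
z = rng.uniform(0.0, 0.05, (50, 50)) + np.linspace(1.0, 0.0, 50)[:, None]  # rough slope
h = np.zeros_like(z)
for _ in range(500):                                  # constant-rate storm
    h = ca_step(h, z, rain=1e-5, infil=5e-6, dt=0.1)
print(f"max depth after storm: {h.max():.4f} m")
```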
Calibration of an Unsteady Groundwater Flow Model for a Complex, Strongly Heterogeneous Aquifer
NASA Astrophysics Data System (ADS)
Curtis, Z. K.; Liao, H.; Li, S. G.; Phanikumar, M. S.; Lusch, D.
2016-12-01
Modeling of groundwater systems characterized by complex three-dimensional structure and heterogeneity remains a significant challenge. Most of today's groundwater models are developed from relatively simple conceptual representations chosen to keep calibration tractable. As more complexity is modeled, e.g., by adding more layers and/or zones or by introducing transient processes, more parameters have to be estimated, and issues related to ill-posed groundwater problems and non-unique calibration arise. Here, we explore an alternative conceptual representation for groundwater modeling that is fully three-dimensional and can capture complex 3D heterogeneity (both systematic and "random") without over-parameterizing the aquifer system. In particular, we apply Transition Probability (TP) geostatistics to high-resolution borehole data from a water well database to characterize the complex 3D geology. Different aquifer material classes, e.g., 'AQ' (aquifer material), 'MAQ' (marginal aquifer material), 'PCM' (partially confining material), and 'CM' (confining material), are simulated, with the hydraulic properties of each material type used as tuning parameters during calibration. The TP-based approach is applied to simulate unsteady groundwater flow in a large, complex, and strongly heterogeneous glacial aquifer system in Michigan across multiple spatial and temporal scales. The resulting model is calibrated to observed static water level data over a time span of 50 years. The results show that the TP-based conceptualization enables much more accurate and robust calibration and simulation than conventional deterministic layer/zone-based conceptual representations.
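At its core, transition-probability simulation draws categorical sequences from transition matrices. The Python sketch below is a deliberately stripped-down, one-dimensional Markov-chain version using the four material classes named above; the matrix entries are invented, and real TP geostatistics additionally conditions on spatial lags and borehole observations.

```python
import numpy as np

# Hypothetical one-step vertical transition probabilities among the four
# material classes; rows are the current class, columns the next class down.
classes = ["AQ", "MAQ", "PCM", "CM"]
T = np.array([[0.70, 0.15, 0.10, 0.05],
              [0.20, 0.60, 0.15, 0.05],
              [0.10, 0.15, 0.60, 0.15],
              [0.05, 0.05, 0.20, 0.70]])

rng = np.random.default_rng(42)
state, column = 0, []                    # start in 'AQ'
for _ in range(30):                      # simulate a 30-cell borehole column
    column.append(classes[state])
    state = rng.choice(4, p=T[state])
print(" ".join(column))
```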
NASA Technical Reports Server (NTRS)
Guruswamy, Guru P.; MacMurdy, Dale E.; Kapania, Rakesh K.
1994-01-01
Strong interactions between flow about an aircraft wing and the wing structure can result in aeroelastic phenomena which significantly impact aircraft performance. Time-accurate methods for solving the unsteady Navier-Stokes equations have matured to the point where reliable results can be obtained with reasonable computational costs for complex non-linear flows with shock waves, vortices and separations. The ability to combine such a flow solver with a general finite element structural model is key to an aeroelastic analysis in these flows. Earlier work involved time-accurate integration of modal structural models based on plate elements. A finite element model was developed to handle three-dimensional wing boxes, and incorporated into the flow solver without the need for modal analysis. Static condensation is performed on the structural model to reduce the structural degrees of freedom for the aeroelastic analysis. Direct incorporation of the finite element wing-box structural model with the flow solver requires finding adequate methods for transferring aerodynamic pressures to the structural grid and returning deflections to the aerodynamic grid. Several schemes were explored for handling the grid-to-grid transfer of information. The complex, built-up nature of the wing-box complicated this transfer. Aeroelastic calculations for a sample wing in transonic flow comparing various simple transfer schemes are presented and discussed.
NASA Astrophysics Data System (ADS)
Bellemans, Aurélie; Parente, Alessandro; Magin, Thierry
2018-04-01
The present work introduces a novel approach for obtaining reduced chemistry representations of large kinetic mechanisms in strong non-equilibrium conditions. The need for accurate reduced-order models arises from the compression of large ab initio quantum chemistry databases for their use in fluid codes. The method presented in this paper builds on existing physics-based strategies and proposes a new approach based on the combination of a simple coarse-grain model with Principal Component Analysis (PCA). The internal energy levels of the chemical species are regrouped into distinct energy groups with a uniform lumping technique. Following the philosophy of machine learning, PCA is applied to the training data provided by the coarse-grain model to find an optimally reduced representation of the full kinetic mechanism. Compared to recently published complex lumping strategies, no expert judgment is required before the application of PCA. In this work, we demonstrate the benefits of the combined approach, stressing its simplicity, reliability, and accuracy. The technique is demonstrated by reducing the complex quantum N2(1Σg+)-N(4Su) database for studying molecular dissociation and excitation in strong non-equilibrium. Starting from detailed kinetics, an accurate reduced model is developed and used to study non-equilibrium properties of the N2(1Σg+)-N(4Su) system in shock relaxation simulations.
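The PCA step of the combined approach amounts to compressing the coarse-grain training data and reconstructing it from a few principal components. The Python sketch below demonstrates this compression and reconstruction on synthetic stand-in data (random low-rank "group populations"), not the actual N2-N database.

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in training data: 500 thermodynamic samples of 40 energy-group
# populations driven by 3 hidden modes plus noise (synthetic, illustrative).
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 40))
X += 0.01 * rng.normal(size=X.shape)

pca = PCA(n_components=0.999)            # keep 99.9% of the variance
Z = pca.fit_transform(X)                 # reduced-order representation
X_rec = pca.inverse_transform(Z)         # back to the full group space
print(pca.n_components_, "components; max reconstruction error:",
      np.abs(X - X_rec).max())
```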
Atmospheric Carbon Dioxide and the Global Carbon Cycle: The Key Uncertainties
DOE R&D Accomplishments Database
Peng, T. H.; Post, W. M.; DeAngelis, D. L.; Dale, V. H.; Farrell, M. P.
1987-12-01
The biogeochemical cycling of carbon between its sources and sinks determines the rate of increase in atmospheric CO2 concentrations. The observed increase in atmospheric CO2 content is less than the estimated release from fossil fuel consumption and deforestation. This discrepancy can be explained by interactions between the atmosphere and other global carbon reservoirs such as the oceans and the terrestrial biosphere, including soils. Undoubtedly, the oceans have been the most important sinks for CO2 produced by man, but the physical, chemical, and biological processes of the oceans are complex, and credible estimates of CO2 uptake can therefore probably only come from mathematical models. Unfortunately, one- and two-dimensional ocean models do not allow for enough CO2 uptake to accurately account for known releases; thus, they produce higher concentrations of atmospheric CO2 than was historically the case. More complex three-dimensional models, currently under development, may make better use of existing tracer data than one- and two-dimensional models do, and will also incorporate climate feedback effects to provide a more realistic view of ocean dynamics and CO2 fluxes. The inability of current models to accurately estimate oceanic uptake of CO2 creates one of the key uncertainties in predictions of atmospheric CO2 increases and climate responses over the next 100 to 200 years.
Adaptive System Modeling for Spacecraft Simulation
NASA Technical Reports Server (NTRS)
Thomas, Justin
2011-01-01
This invention introduces a methodology and associated software tools for automatically learning spacecraft system models without any assumptions regarding system behavior. Data stream mining techniques were used to learn models for critical portions of the International Space Station (ISS) Electrical Power System (EPS). Evaluation on historical ISS telemetry data shows that adaptive system modeling reduces simulation error by anywhere from 50 to 90 percent over existing approaches. The purpose of the methodology is to outline how accurate system models can be created from sensor (telemetry) data. The purpose of the software is to support the methodology: it provides analysis tools to design the adaptive models, as well as the algorithms to initially build system models and continuously update them from the latest streaming sensor data. The main strengths are as follows: it creates accurate spacecraft system models without in-depth system knowledge or any assumptions about system behavior; it automatically updates/calibrates system models using the latest streaming sensor data; it creates device-specific models that capture the exact behavior of devices of the same type; it adapts to evolving systems; and it can reduce computational complexity (faster simulations).
NASA Astrophysics Data System (ADS)
Yang, Liu; Xiao-Jing, Yu; Jian-Ming, Ma; Yi-Wen, Guan; Jiang, Li; Qiang, Li; Sa, Yang
2017-06-01
A volumetric ablation model for EPDM (ethylene-propylene-diene monomer) is established in this paper. The model considers the complex physicochemical processes in the porous structure of a char layer. An ablation physics model based on the porous structure of the char layer is then built, together with a model of heterogeneous volumetric ablation within the char layer. In the model, porosity is used to describe the porous structure of the char layer, and gas diffusion and chemical reactions are introduced throughout the porous structure. Detailed formation analysis introduces the causes of compact or loose structure in the char layer and the chemical vapor deposition (CVD) reaction between the pyrolysis gas and the char-layer skeleton. The Arrhenius formula is adopted to calculate the carbon deposition rate, the consumption rate caused by thermochemical reactions in the char layer, and the evolution of porosity. A critical porosity value is used as the criterion for failure of the char-layer porous structure under gas flow and particle erosion; this critical value is obtained by fitting experimental parameters and the surface porosity of the char layer. Linear ablation and mass ablation rates are determined using the critical porosity value. The calculated linear and mass ablation rates generally coincide with experimental results, suggesting that the ablation analysis proposed in this paper accurately reflects practical situations and that the physical and mathematical models built are accurate and reasonable.
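The Arrhenius-rate-plus-critical-porosity logic lends itself to a compact numerical sketch. The Python fragment below advances char-layer porosity with an Arrhenius consumption rate until the critical value is exceeded; the pre-exponential factor, activation energy, and critical porosity are invented placeholders, not the fitted values from the paper.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def arrhenius_rate(T, A, Ea):
    """Arrhenius form k = A * exp(-Ea / (R*T)); A and Ea are hypothetical."""
    return A * np.exp(-Ea / (R * T))

def porosity_step(phi, T, dt, A=1.0e3, Ea=1.5e5, phi_crit=0.9):
    """Advance char-layer porosity by one explicit step: thermochemical carbon
    consumption opens pores at the Arrhenius rate; flag failure when the
    critical porosity is exceeded. All coefficients are illustrative only."""
    phi_new = min(phi + arrhenius_rate(T, A, Ea) * (1.0 - phi) * dt, 1.0)
    return phi_new, phi_new >= phi_crit

phi, failed = 0.3, False
while not failed:                         # hold at a constant char temperature
    phi, failed = porosity_step(phi, T=2500.0, dt=1e-3)
print(f"porosity at structural failure: {phi:.3f}")
```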
Modeling central metabolism and energy biosynthesis across microbial life.
Edirisinghe, Janaka N; Weisenhorn, Pamela; Conrad, Neal; Xia, Fangfang; Overbeek, Ross; Stevens, Rick L; Henry, Christopher S
2016-08-08
Automatically generated bacterial metabolic models, and even some curated models, lack accuracy in predicting energy yields because of poor representation of key pathways in energy biosynthesis and the electron transport chain (ETC). Further compounding the problem, complex interlinking pathways in genome-scale metabolic models, and the need for extensive gapfilling to support complex biomass reactions, often result in predictions of unrealistic yields or unrealistic physiological flux profiles. To overcome this challenge, we developed methods and tools ( http://coremodels.mcs.anl.gov ) to build high-quality core metabolic models (CMMs) representing accurate energy biosynthesis, based on a well-studied, phylogenetically diverse set of model organisms. We compare these models to explore the variability of core pathways across all microbial life, and by analyzing the ability of our core models to synthesize ATP and essential biomass precursors, we evaluate the extent to which the core metabolic pathways and functional ETCs are known for all microbes. 6,600 (80%) of our models were found to have some type of aerobic ETC, whereas 5,100 (62%) have an anaerobic ETC, and 1,279 (15%) do not have any ETC. Using our manually curated ETC and energy biosynthesis pathways with no gapfilling at all, we predict accurate ATP yields for 5,586 (70%) of the models under aerobic and anaerobic growth conditions. This study revealed gaps in our knowledge of the central pathways that leave 2,495 (30%) of the CMMs unable to produce ATP under any of the tested conditions. We then established a methodology for the systematic identification and correction of inconsistent annotations using core metabolic models coupled with phylogenetic analysis. We predict accurate energy yields based on our improved annotations of energy biosynthesis pathways and the implementation of diverse ETC reactions across the microbial tree of life. We highlight missing annotations that were essential to energy biosynthesis in our models, examine the diversity of these pathways across all microbial life, and enable the scientific community to explore the analyses generated from this large-scale study of over 8,000 microbial genomes.
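ATP-yield predictions of this kind are typically obtained by flux balance analysis: maximize an ATP-demand flux subject to steady-state mass balance. The Python sketch below solves a toy three-reaction network with SciPy's linear-programming routine; the network and its stoichiometry are invented for illustration and bear no relation to the curated CMMs.

```python
import numpy as np
from scipy.optimize import linprog

# Toy 2-metabolite, 3-reaction network (illustrative only, not a curated CMM):
#   upt:         -> glc       (substrate uptake, at most 10 units)
#   gly:   glc   -> 2 atp     (lumped glycolysis)
#   drain: atp   ->           (ATP demand; this flux is maximized)
S = np.array([[1, -1,  0],    # glc mass balance
              [0,  2, -1]])   # atp mass balance
c = [0.0, 0.0, -1.0]          # linprog minimizes, so negate the ATP drain
res = linprog(c, A_eq=S, b_eq=[0.0, 0.0],
              bounds=[(0, 10), (0, None), (0, None)])
print("ATP yield per unit substrate:", res.x[2] / res.x[0])   # -> 2.0
```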
Glaholt, Stephen P; Chen, Celia Y; Demidenko, Eugene; Bugge, Deenie M; Folt, Carol L; Shaw, Joseph R
2012-08-15
The study of stressor interactions by eco-toxicologists using nonlinear response variables is limited by the amount of a priori knowledge required, the complexity of experimental designs, the use of linear models, and the lack of optimal designs for nonlinear models to characterize complex interactions. We therefore developed AID, an adaptive-iterative design that enables eco-toxicologists to examine complex multiple-stressor interactions more accurately and efficiently. AID combines the power of the general linear model and the A-optimality criterion with an iterative process that: 1) minimizes the required amount of a priori knowledge, 2) simplifies the experimental design, and 3) quantifies both individual and interactive effects. Once a stable model is determined, the best-fit model is identified and the direction and magnitude of the stressors, individually and in all combinations (including complex interactions), are quantified. To validate AID, we selected five commonly co-occurring components of polluted aquatic systems, three metal stressors (Cd, Zn, As) and two water chemistry parameters (pH, hardness), to be tested using standard acute toxicity tests in which Daphnia mortality is the (nonlinear) response variable. We found that after the initial input of experimental data (literature values, e.g. EC values, may also be used) and only two iterations of AID, our dose-response model was stable. The model ln(Cd)*ln(Zn) was determined to be the best predictor of the Daphnia mortality response to the combined effects of Cd, Zn, As, pH, and hardness. This model was then used to accurately identify and quantify the strength of both greater-than-additive (e.g. As*Cd) and less-than-additive interactions (e.g. Cd*Zn). Interestingly, our study found only binary interactions to be significant, not higher-order interactions. We conclude that AID is more efficient and effective at assessing multiple-stressor interactions than current methods. Other applications, including the life-history endpoints commonly used by regulators, could benefit from AID's efficiency in assessing water quality criteria.
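The selected ln(Cd)*ln(Zn) form is the kind of interaction model a general linear model framework fits directly. Below is a hedged Python sketch using statsmodels with synthetic bioassay data (all numbers invented); it shows the main-effects-plus-interaction structure only, not the adaptive-iterative loop of AID.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic two-metal bioassay: exposure levels and simulated binary mortality.
rng = np.random.default_rng(7)
df = pd.DataFrame({"cd": rng.uniform(0.5, 50.0, 200),
                   "zn": rng.uniform(0.5, 500.0, 200)})
lin = (-4.0 + 0.8 * np.log(df["cd"]) + 0.5 * np.log(df["zn"])
       + 0.2 * np.log(df["cd"]) * np.log(df["zn"]))
df["dead"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-lin)))

# Binomial GLM with both main effects and the Cd x Zn interaction on log scale.
fit = smf.glm("dead ~ np.log(cd) * np.log(zn)", data=df,
              family=sm.families.Binomial()).fit()
print(fit.params)      # the interaction coefficient quantifies non-additivity
```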
3D Numerical simulation of bed morphological responses to complex in-streamstructures
NASA Astrophysics Data System (ADS)
Xu, Y.; Liu, X.
2017-12-01
In-stream structures are widely used in stream restoration for both hydraulic and ecological purposes. The geometries of the structures are usually designed to be extremely complex and irregular, so as to provide nature-like physical habitat. The aim of this study is to develop a numerical model to accurately predict the bed-load transport and the morphological changes caused by complex in-stream structures. The model is developed on the OpenFOAM platform. In the hydrodynamics part, it utilizes different turbulence models to capture detailed turbulence information near the in-stream structures. The immersed boundary method (IBM) is efficiently implemented in the model to describe the movable bed and the rigid solid bodies of the in-stream structures. With IBM, the difficulty of mesh generation for complex geometries is greatly alleviated, and the bed-surface deformation can be coupled into the flow system. This morphodynamic model is first validated on simple structures, such as the morphology of the scour around a log-vane structure. It is then applied to a more complex structure, an engineered log jam (ELJ), which consists of multiple logs piled together. The numerical results, including turbulent flow information and bed morphological responses, are evaluated against experimental measurements under exactly the same flow conditions.
NASA Astrophysics Data System (ADS)
Zerkle, Ronald D.; Prakash, Chander
1995-03-01
This viewgraph presentation summarizes some CFD experience at GE Aircraft Engines for flows in the primary gaspath of a gas turbine engine and in turbine blade cooling passages. It is concluded that application of the standard k-epsilon turbulence model with wall functions is not adequate for accurate CFD simulation of aerodynamic performance and heat transfer in the primary gas path of a gas turbine engine. New models are required in the near-wall region which include more physics than wall functions. The two-layer modeling approach appears attractive because of its modest computational complexity. In addition, improved CFD simulation of film cooling and of turbine blade internal cooling passages will require anisotropic turbulence models. New turbulence models must be practical in order to have a significant impact on the engine design process. A coordinated turbulence-modeling effort between NASA centers would be beneficial to the gas turbine industry.
Keane, Robert E.; Burgan, Robert E.; Van Wagtendonk, Jan W.
2001-01-01
Fuel maps are essential for computing spatial fire hazard and risk and simulating fire growth and intensity across a landscape. However, fuel mapping is an extremely difficult and complex process requiring expertise in remotely sensed image classification, fire behavior, fuels modeling, ecology, and geographical information systems (GIS). This paper first presents the challenges of mapping fuels: canopy concealment, fuelbed complexity, fuel type diversity, fuel variability, and fuel model generalization. Then, four approaches to mapping fuels are discussed with examples provided from the literature: (1) field reconnaissance; (2) direct mapping methods; (3) indirect mapping methods; and (4) gradient modeling. A fuel mapping method is proposed that uses current remote sensing and image processing technology. Future fuel mapping needs are also discussed which include better field data and fuel models, accurate GIS reference layers, improved satellite imagery, and comprehensive ecosystem models.
Accurate modelling of unsteady flows in collapsible tubes.
Marchandise, Emilie; Flaud, Patrice
2010-01-01
The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in understanding physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped-parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference from cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method, equipped with an approximate Roe Riemann solver and a slope-limiting procedure, allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is modelled more accurately than with traditional methods such as finite differences or finite volumes. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process of a calf vein squeezed by contracting skeletal muscle in a normal and in a pathological subject. We compare our results with experimental measurements and discuss the sensitivity of our model to its parameters.
NASA Astrophysics Data System (ADS)
Sizov, Gennadi Y.
In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.
Real-time monitoring of high-gravity corn mash fermentation using in situ raman spectroscopy.
Gray, Steven R; Peretti, Steven W; Lamb, H Henry
2013-06-01
In situ Raman spectroscopy was employed for real-time monitoring of simultaneous saccharification and fermentation (SSF) of corn mash by an industrial strain of Saccharomyces cerevisiae. An accurate univariate calibration model for ethanol was developed based on the very strong 883 cm⁻¹ C-C stretching band. Multivariate partial least squares (PLS) calibration models for total starch, dextrins, maltotriose, maltose, glucose, and ethanol were developed using data from eight batch fermentations and validated using predictions for a separate batch. The starch, ethanol, and dextrins models showed significantly improved predictions when the calibration data were divided into separate high- and low-concentration sets. Collinearity between the ethanol and starch models was avoided by excluding regions containing strong ethanol peaks from the starch model and, conversely, excluding regions containing strong saccharide peaks from the ethanol model. The two-set calibration models for starch (R² = 0.998, percent error = 2.5%) and ethanol (R² = 0.999, percent error = 2.1%) provide more accurate predictions than any previously published spectroscopic models. Glucose, maltose, and maltotriose are modeled with accuracy comparable to previous work on less complex fermentation processes. Our results demonstrate that Raman spectroscopy is capable of real-time in situ monitoring of a complex industrial biomass fermentation. To our knowledge, this is the first PLS-based chemometric modeling of corn mash fermentation under typical industrial conditions, and the first Raman-based monitoring of a fermentation process with glucose, oligosaccharides and polysaccharides present.
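A univariate band-intensity calibration like the ethanol model described here reduces to a linear least-squares fit. The Python sketch below shows the pattern with invented intensity/concentration pairs; the 883 cm⁻¹ band assignment comes from the abstract, but all numbers are placeholders.

```python
import numpy as np

# Hypothetical calibration pairs: baseline-corrected 883 cm^-1 band intensity
# (arbitrary units) vs. offline ethanol reference measurements (g/L).
intensity = np.array([0.02, 0.41, 0.83, 1.20, 1.64, 2.05])
ethanol_gL = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])

slope, intercept = np.polyfit(intensity, ethanol_gL, 1)   # univariate linear fit
r2 = np.corrcoef(intensity, ethanol_gL)[0, 1] ** 2
print(f"ethanol = {slope:.1f} * I + {intercept:.1f}  (R^2 = {r2:.4f})")
print("prediction at I = 1.00:", slope * 1.0 + intercept, "g/L")
```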
A Combined Experimental and Computational Approach to Subject-Specific Analysis of Knee Joint Laxity
Harris, Michael D.; Cyr, Adam J.; Ali, Azhar A.; Fitzpatrick, Clare K.; Rullkoetter, Paul J.; Maletsky, Lorin P.; Shelburne, Kevin B.
2016-01-01
Modeling complex knee biomechanics is a continual challenge, which has resulted in many models of varying quality, complexity, and validation. Beyond modeling healthy knees, accurately mimicking pathologic knee mechanics, such as after cruciate rupture or meniscectomy, is difficult. Experimental tests of knee laxity can provide important information about ligament engagement and overall contributions to knee stability for the development of subject-specific models that accurately simulate knee motion and loading. Our objective was to provide combined experimental tests and finite-element (FE) models of natural knee laxity that are subject-specific, have one-to-one experiment-to-model calibration, simulate ligament engagement in agreement with the literature, and are adaptable for a variety of biomechanical investigations (e.g., cartilage contact, ligament strain, in vivo kinematics). Calibration involved perturbing ligament stiffness, initial ligament strain, and attachment location until model-predicted kinematics and ligament engagement matched experimental reports. Errors between model-predicted and experimental kinematics averaged <2 deg during varus-valgus (VV) rotations, <6 deg during internal-external (IE) rotations, and <3 mm of translation during anterior-posterior (AP) displacements. Engagement of the individual ligaments agreed with literature descriptions. These results demonstrate the ability of our constraint models to be customized for multiple individuals while simultaneously calling attention to the need to verify that ligament engagement is in good general agreement with the literature. To facilitate further investigations of subject-specific or population-based knee joint biomechanics, the data collected during the experimental and modeling phases of this study are available for download by the research community.
Nonlinear finite element analysis of liquid sloshing in complex vehicle motion scenarios
NASA Astrophysics Data System (ADS)
Nicolsen, Brynne; Wang, Liang; Shabana, Ahmed
2017-09-01
The objective of this investigation is to develop a new total Lagrangian continuum-based liquid sloshing model that can be systematically integrated with multibody system (MBS) algorithms in order to allow for studying complex motion scenarios. The new approach allows for accurately capturing the effect of the sloshing forces during curve negotiation, rapid lane change, and accelerating and braking scenarios. In these motion scenarios, the liquid experiences large displacements and significant changes in shape that can be captured effectively using the finite element (FE) absolute nodal coordinate formulation (ANCF). ANCF elements are used in this investigation to describe complex mesh geometries, to capture the change in inertia due to the change in the fluid shape, and to accurately calculate the centrifugal forces, which for flexible bodies do not take the simple form used in rigid body dynamics. A penalty formulation is used to define the contact between the rigid tank walls and the fluid. A fully nonlinear MBS truck model that includes a suspension system and Pacejka's brush tire model is developed. Specified motion trajectories are used to examine the vehicle dynamics in three different scenarios - deceleration during straight-line motion, rapid lane change, and curve negotiation. It is demonstrated that the liquid sloshing changes the contact forces between the tires and the ground - increasing the forces on certain wheels and decreasing the forces on other wheels. In cases of extreme sloshing, this dynamic behavior can negatively impact the vehicle stability by increasing the possibility of wheel lift and vehicle rollover.
Modeling and simulation of high dimensional stochastic multiscale PDE systems at the exascale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zabaras, Nicolas J.
2016-11-08
Predictive modeling of multiscale and multiphysics systems requires accurate data-driven characterization of the input uncertainties and an understanding of how they propagate across scales and alter the final solution. This project develops a rigorous mathematical framework and scalable uncertainty quantification algorithms to efficiently construct realistic low-dimensional input models and surrogate low-complexity systems for the analysis, design, and control of physical systems represented by multiscale stochastic PDEs. The work can be applied to many areas, including physical and biological processes, from climate modeling to systems biology.
2006-09-01
[Figure 17: station line center of Magnus force vs. Mach number for a spin-stabilized projectile.] ... forces and moments on the projectile. It is also relatively easy to change the wind tunnel model to allow detailed parametric effects to be ... such as pitch and roll damping, as well as Magnus force and moment coefficients, are difficult to obtain in a wind tunnel and require a complex
On the Helix Propensity in Generalized Born Solvent Descriptions of Modeling the Dark Proteome
2017-01-10
... benchmarks of conformational sampling methods and their all-atom force fields plus solvent descriptions to accurately model structural transitions on a ... atom simulations of proteins is the replacement of explicit water interactions with a continuum description treating implicitly the bulk physical ... structure was reported by Amarasinghe and coworkers (Leung et al., 2015) of the Ebola nucleoprotein NP in complex with a 28-residue peptide extracted
Nguyen, Hai; Pérez, Alberto; Bermeo, Sherry; Simmerling, Carlos
2016-01-01
The Generalized Born (GB) implicit solvent model has undergone significant improvements in accuracy for the modeling of proteins and small molecules. However, GB remains a less widely explored option for nucleic acid simulations, in part because fast GB models are often unable to maintain stable nucleic acid structures, or they introduce structural bias in proteins, making it difficult to apply GB models in simulations of protein-nucleic acid complexes. Recently, GB-neck2 was developed to improve the behavior of protein simulations. In an effort to create a more accurate model for nucleic acids, a procedure similar to the development of GB-neck2 is described here for nucleic acids. The resulting parameter set significantly reduces the absolute and relative energy error relative to Poisson-Boltzmann for both nucleic acids and nucleic acid-protein complexes, compared to its predecessor, the GB-neck model. This improvement in solvation energy calculation translates into increased structural stability in simulations of DNA and RNA duplexes, quadruplexes, and protein-nucleic acid complexes. The GB-neck2 model also enables the successful folding of small DNA and RNA hairpins to near-native structures, as determined by comparison with experiment. The functional form and all required parameters are provided here and are also implemented in the AMBER software.
Modeling central metabolism and energy biosynthesis across microbial life
Edirisinghe, Janaka N.; Weisenhorn, Pamela; Conrad, Neal; ...
2016-08-08
Here, automatically generated bacterial metabolic models, and even some curated models, lack accuracy in predicting energy yields due to poor representation of key pathways in energy biosynthesis and the electron transport chain (ETC). Further compounding the problem, complex interlinking pathways in genome-scale metabolic models, and the need for extensive gapfilling to support complex biomass reactions, often result in unrealistic yields or unrealistic physiological flux profiles. To overcome this challenge, we developed methods and tools to build high-quality core metabolic models (CMM) representing accurate energy biosynthesis based on a well studied, phylogenetically diverse set of model organisms. We compare these models to explore the variability of core pathways across all microbial life, and by analyzing the ability of our core models to synthesize ATP and essential biomass precursors, we evaluate the extent to which the core metabolic pathways and functional ETCs are known for all microbes. 6,600 (80%) of our models were found to have some type of aerobic ETC, whereas 5,100 (62%) have an anaerobic ETC, and 1,279 (15%) do not have any ETC. Using our manually curated ETC and energy biosynthesis pathways with no gapfilling at all, we predict accurate ATP yields for nearly 5,586 (70%) of the models under aerobic and anaerobic growth conditions. This study revealed gaps in our knowledge of the central pathways that result in 2,495 (30%) CMMs being unable to produce ATP under any of the tested conditions. We then established a methodology for the systematic identification and correction of inconsistent annotations using core metabolic models coupled with phylogenetic analysis. In conclusion, we predict accurate energy yields based on our improved annotations in energy biosynthesis pathways and the implementation of diverse ETC reactions across the microbial tree of life. We highlighted missing annotations that were essential to energy biosynthesis in our models. We examine the diversity of these pathways across all microbial life and enable the scientific community to explore the analyses generated from this large-scale analysis of over 8,000 microbial genomes.
Simple to complex modeling of breathing volume using a motion sensor.
John, Dinesh; Staudenmayer, John; Freedson, Patty
2013-06-01
To compare simple and complex modeling techniques for estimating categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Vertical-axis ActiGraph™ GT1M activity counts, oxygen consumption and VE were measured during treadmill walking and running, sports, household chores and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min) and high (>35.4 l/min) VE were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs and vigorous >6.0 METs). We examined the accuracy of two simple modeling techniques (multiple regression and activity-count cut-point analyses) and one complex modeling technique (random forest) in predicting VE from activity counts. Prediction accuracy of the complex random forest technique was marginally better than that of the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). ActiGraph™ cut-points for low, medium and high VE were <1381, 1381 to 3660 and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and the random forest technique. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease.
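As an illustration of the two modeling routes compared above, the sketch below applies the reported cut-points (<1381, 1381 to 3660, >3660 cpm) as a simple classifier and trains a random forest on synthetic counts labeled by those same cut-points. The data are fabricated for demonstration only, not the study's measurements.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ve_category_cutpoint(cpm):
    """Classify ventilation from activity counts using the reported cut-points."""
    if cpm < 1381:
        return "low"
    elif cpm <= 3660:
        return "medium"
    return "high"

# Hypothetical training data: counts (cpm) labeled by the cut-point rule.
rng = np.random.default_rng(0)
X = rng.uniform(0, 6000, size=(200, 1))
y = np.array([ve_category_cutpoint(c) for c in X[:, 0]])

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(rf.predict([[900], [2500], [4200]]))   # expect low, medium, high
```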
Actinic imaging and evaluation of phase structures on EUV lithography masks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mochi, Iacopo; Goldberg, Kenneth; Huh, Sungmin
2010-09-28
The authors describe the implementation of a phase-retrieval algorithm to reconstruct phase and complex amplitude of structures on EUV lithography masks. Many native defects commonly found on EUV reticles are difficult to detect and review accurately because they have a strong phase component. Understanding the complex amplitude of mask features is essential for predictive modeling of defect printability and defect repair. Besides printing in a stepper, the most accurate way to characterize such defects is with actinic inspection, performed at the design, EUV wavelength. Phase defects and phase structures show a distinct through-focus behavior that enables qualitative evaluation of the object phase from two or more high-resolution intensity measurements. For the first time, the phase of structures and defects on EUV masks was quantitatively reconstructed based on aerial image measurements, using a modified version of a phase-retrieval algorithm developed to test optical phase-shifting reticles.
Sediment calibration strategies of Phase 5 Chesapeake Bay watershed model
Wu, J.; Shenk, G.W.; Raffensperger, Jeff P.; Moyer, D.; Linker, L.C.
2005-01-01
Sediment is a primary constituent of concern for Chesapeake Bay due to its effect on water clarity. Accurate representation of sediment processes and behavior in the Chesapeake Bay watershed model is critical for developing sound load reduction strategies. Sediment calibration remains one of the most difficult components of watershed-scale assessment. This is especially true for the Chesapeake Bay watershed model, given the size of the watershed being modeled and the complexity involved in land and stream simulation processes. To obtain the best calibration, the Chesapeake Bay program has developed four different strategies for sediment calibration of the Phase 5 watershed model: 1) comparing observed and simulated sediment rating curves for different parts of the hydrograph; 2) analyzing change of bed depth over time; 3) relating deposition/scour to total annual sediment loads; and 4) calculating "goodness-of-fit" statistics. These strategies allow a more accurate sediment calibration, and also provide insightful information on sediment processes and behavior in the Chesapeake Bay watershed.
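The abstract lists "goodness-of-fit" statistics without naming them; the Nash-Sutcliffe efficiency is one statistic commonly used in watershed calibration, sketched here with hypothetical load values.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, <= 0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

obs = [120.0, 340.0, 95.0, 410.0, 60.0]   # observed sediment loads (hypothetical)
sim = [110.0, 360.0, 80.0, 430.0, 75.0]   # simulated loads
print(f"NSE = {nash_sutcliffe(obs, sim):.3f}")
```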
NASA Technical Reports Server (NTRS)
Sances, Dillon J.; Gangadharan, Sathya N.; Sudermann, James E.; Marsell, Brandon
2010-01-01
Liquid sloshing within spacecraft propellant tanks causes rapid energy dissipation at resonant modes, which can result in attitude destabilization of the vehicle. Identifying resonant slosh modes currently requires experimental testing and mechanical pendulum analogs to characterize the slosh dynamics. Computational Fluid Dynamics (CFD) techniques have recently been validated as an effective tool for simulating fuel slosh within free-surface propellant tanks. Propellant tanks often incorporate an internal flexible diaphragm to separate ullage and propellant which increases modeling complexity. A coupled fluid-structure CFD model is required to capture the damping effects of a flexible diaphragm on the propellant. ANSYS multidisciplinary engineering software employs a coupled solver for analyzing two-way Fluid Structure Interaction (FSI) cases such as the diaphragm propellant tank system. Slosh models generated by ANSYS software are validated by experimental lateral slosh test results. Accurate data correlation would produce an innovative technique for modeling fuel slosh within diaphragm tanks and provide an accurate and efficient tool for identifying resonant modes and the slosh dynamic response.
A Stratified Acoustic Model Accounting for Phase Shifts for Underwater Acoustic Networks
Wang, Ping; Zhang, Lin; Li, Victor O. K.
2013-01-01
Accurate acoustic channel models are critical for the study of underwater acoustic networks. Existing models include physics-based models and empirical approximation models. The former enjoy good accuracy, but incur heavy computational load, rendering them impractical in large networks. On the other hand, the latter are computationally inexpensive but inaccurate since they do not account for the complex effects of boundary reflection losses, the multi-path phenomenon and ray bending in the stratified ocean medium. In this paper, we propose a Stratified Acoustic Model (SAM) based on frequency-independent geometrical ray tracing, accounting for each ray's phase shift during the propagation. It is a feasible channel model for large scale underwater acoustic network simulation, allowing us to predict the transmission loss with much lower computational complexity than the traditional physics-based models. The accuracy of the model is validated via comparisons with the experimental measurements in two different oceans. Satisfactory agreements with the measurements and with other computationally intensive classical physics-based models are demonstrated.
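A minimal sketch of the coherent ray-summation idea underlying this kind of model (not the SAM implementation itself): each eigenray contributes an amplitude attenuated by boundary losses and spreading, together with its accumulated phase shift, and transmission loss follows from the coherent sum. All inputs below are hypothetical.

```python
import numpy as np

def transmission_loss(amplitudes, phases, spreading):
    """Coherent transmission loss from per-ray amplitude factors (boundary
    losses), accumulated phase shifts (radians), and spreading distances (m)."""
    field = np.sum(np.asarray(amplitudes) / np.asarray(spreading)
                   * np.exp(1j * np.asarray(phases)))
    return -20.0 * np.log10(np.abs(field))

# three eigenray arrivals: direct, surface-reflected, bottom-reflected
print(transmission_loss([1.0, 0.8, 0.5], [0.0, np.pi, 2.1],
                        [1000.0, 1050.0, 1130.0]))
```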
Towards Assessing the Human Trajectory Planning Horizon.
Carton, Daniel; Nitsch, Verena; Meinzer, Dominik; Wollherr, Dirk
2016-01-01
Mobile robots are envisioned to cooperate closely with humans and to integrate seamlessly into a shared environment. For locomotion, these environments resemble traversable areas which are shared between multiple agents like humans and robots. The seamless integration of mobile robots into these environments requires accurate predictions of human locomotion. This work considers optimal control and model predictive control approaches for accurate trajectory prediction and proposes to integrate aspects of human behavior to improve their performance. Recently developed models are not able to accurately reproduce trajectories that result from sudden avoidance maneuvers. In particular, human locomotion behavior when handling disturbances from other agents poses a problem. The goal of this work is to investigate whether humans alter their trajectory planning horizon in order to resolve abruptly emerging collision situations. By modeling humans as model predictive controllers, the influence of the planning horizon is investigated in simulations. Based on these results, an experiment is designed to identify whether humans initiate a change in their locomotion planning behavior while moving in a complex environment. The results support the hypothesis that humans employ a shorter planning horizon to avoid collisions that are triggered by unexpected disturbances. Observations presented in this work are expected to further improve the generalizability and accuracy of prediction methods based on dynamic models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nitta, Yohei; Sugie, Atsushi
Precisely controlled axon guidance for complex neuronal wiring is essential for appropriate neuronal function. c-Jun N-terminal kinase (JNK) has recently been found to play a role in axon guidance, as well as in cell proliferation, protection and apoptosis. In spite of many genetic and molecular studies on these biological processes regulated by JNK, how JNK accurately regulates axon guidance has not been fully explained thus far. To address this question, we use the Drosophila mushroom body (MB) as a model, since the α/β axons project in two distinct directions. Here we show that DISCO interacting protein 2 (DIP2) is required for the accurate direction of axonal guidance. DIP2 expression is under the regulation of Basket (Bsk), the Drosophila homologue of JNK. We additionally found that the Bsk/DIP2 pathway is independent of the AP-1 transcription factor complex pathway, which is directly activated by Bsk. In conclusion, our findings reveal DIP2 as a novel effector downstream of Bsk modulating the direction of axon projection. - Highlights: • DIP2 is required for the accurate direction of axon guidance in the Drosophila mushroom body. • DIP2 is downstream of JNK in the axon guidance of Drosophila mushroom body neurons. • The JNK/DIP2 pathway is independent of the JNK/AP-1 transcription factor complex pathway.
NASA Astrophysics Data System (ADS)
Kozak, J.; Gulbinowicz, D.; Gulbinowicz, Z.
2009-05-01
The need for complex and accurate three-dimensional (3-D) microcomponents is increasing rapidly for many industrial and consumer products. The electrochemical machining process (ECM) has the potential of generating desired crack-free and stress-free surfaces of microcomponents. This paper reports a study of pulse electrochemical micromachining (PECMM) using ultrashort (nanosecond) pulses for generating complex 3-D microstructures of high accuracy. A mathematical model of the microshaping process, taking into consideration unsteady phenomena in the electrical double layer, has been developed. Software for computer simulation of PECMM has been developed, and the effects of machining parameters on anodic localization and the final shape of the machined surface are presented.
A general mechanism for competitor-induced dissociation of molecular complexes
Paramanathan, Thayaparan; Reeves, Daniel; Friedman, Larry J.; Kondev, Jane; Gelles, Jeff
2014-01-01
The kinetic stability of non-covalent macromolecular complexes controls many biological phenomena. Here we find that physical models of complex dissociation predict that competitor molecules will in general accelerate the breakdown of isolated bimolecular complexes by occluding rapid rebinding of the two binding partners. This prediction is largely independent of molecular details. We confirm the prediction with single-molecule fluorescence experiments on a well-characterized DNA strand dissociation reaction. Contrary to common assumptions, competitor-induced acceleration of dissociation can occur in biologically relevant competitor concentration ranges and does not necessarily imply ternary association of competitor with the bimolecular complex. Thus, occlusion of complex rebinding may play a significant role in a variety of biomolecular processes. The results also show that single-molecule colocalization experiments can accurately measure dissociation rates despite their limited spatiotemporal resolution.
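A back-of-the-envelope sketch of the occlusion mechanism, assuming a hypothetical three-state scheme (bound complex, transient contact pair, fully separated): the competitor captures a partner in the contact pair and blocks rebinding, raising the effective dissociation rate. Rate constants here are illustrative, not the paper's measurements.

```python
def effective_koff(k_open, k_rebind, k_escape, k_comp, conc_c):
    """Effective dissociation rate when a competitor at concentration conc_c
    occludes rapid rebinding of a transient contact pair.
    Scheme (assumed): complex <-> contact pair -> separated."""
    block = k_comp * conc_c                  # competitor capture of the pair
    # fraction of openings that end in full separation, times opening rate
    return k_open * (k_escape + block) / (k_rebind + k_escape + block)

for c in [0.0, 1e-6, 1e-4]:                  # competitor concentration (M)
    print(c, effective_koff(k_open=0.1, k_rebind=1e3,
                            k_escape=10.0, k_comp=1e8, conc_c=c))
```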
Non-stationary noise estimation using dictionary learning and Gaussian mixture models
NASA Astrophysics Data System (ADS)
Hughes, James M.; Rockmore, Daniel N.; Wang, Yang
2014-02-01
Stationarity of the noise distribution is a common assumption in image processing. This assumption greatly simplifies denoising estimators and other model parameters and consequently assuming stationarity is often a matter of convenience rather than an accurate model of noise characteristics. The problematic nature of this assumption is exacerbated in real-world contexts, where noise is often highly non-stationary and can possess time- and space-varying characteristics. Regardless of model complexity, estimating the parameters of noise distributions in digital images is a difficult task, and estimates are often based on heuristic assumptions. Recently, sparse Bayesian dictionary learning methods were shown to produce accurate estimates of the level of additive white Gaussian noise in images with minimal assumptions. We show that a similar model is capable of accurately modeling certain kinds of non-stationary noise processes, allowing for space-varying noise in images to be estimated, detected, and removed. We apply this modeling concept to several types of non-stationary noise and demonstrate the model's effectiveness on real-world problems, including denoising and segmentation of images according to noise characteristics, which has applications in image forensics.
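As a rough illustration of space-varying noise estimation (using a simple Laplacian/MAD patch estimator rather than the paper's dictionary-learning approach), the sketch below clusters per-patch noise levels with a Gaussian mixture model; the image and noise levels are synthetic.

```python
import numpy as np
from scipy.ndimage import laplace
from sklearn.mixture import GaussianMixture

def local_noise_levels(img, patch=16):
    """Per-patch noise estimate: MAD of a Laplacian residual, rescaled by the
    kernel gain sqrt(20) (a common heuristic, not the paper's estimator)."""
    resid = laplace(img.astype(float)) / np.sqrt(20.0)
    sigmas = []
    h, w = img.shape
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            block = resid[i:i + patch, j:j + patch]
            sigmas.append(1.4826 * np.median(np.abs(block - np.median(block))))
    return np.array(sigmas).reshape(-1, 1)

rng = np.random.default_rng(1)
img = np.hstack([rng.normal(0, 5, (128, 64)),    # low-noise region
                 rng.normal(0, 25, (128, 64))])  # high-noise region
gmm = GaussianMixture(n_components=2, random_state=0).fit(local_noise_levels(img))
print(np.sort(gmm.means_.ravel()))               # roughly the two noise levels
```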
CAD-based Automatic Modeling Method for Geant4 geometry model Through MCAM
NASA Astrophysics Data System (ADS)
Wang, Dong; Nie, Fanzhi; Wang, Guozhong; Long, Pengcheng; LV, Zhongliang
2014-06-01
Geant4 is a widely used Monte Carlo transport simulation package. Before calculating with Geant4, a calculation model needs to be established, which can be described using the Geometry Description Markup Language (GDML) or C++. However, it is time-consuming and error-prone to describe models manually in GDML. Automatic modeling methods have been developed recently, but problems exist in most present modeling programs; in particular, some are not accurate or are tied to a specific CAD format. To convert complex CAD geometry models into GDML geometry models accurately, a Computer Aided Design (CAD) based modeling method for Geant4 was developed. The essence of this method is the translation between CAD models represented with boundary representation (B-REP) and GDML models represented with constructive solid geometry (CSG). First, the CAD model is decomposed into several simple solids, each having only one closed shell. Each simple solid is then decomposed into a set of convex shells. Corresponding GDML convex basic solids are generated from the boundary surfaces obtained from the topological characteristics of each convex shell. After the generation of these solids, the GDML model is completed with a series of Boolean operations. This method was adopted in the CAD/Image-based Automatic Modeling Program for Neutronics & Radiation Transport (MCAM), and tested with several models, including the examples in the Geant4 installation package. The results showed that this method can convert standard CAD models accurately, and can be used for Geant4 automatic modeling.
Oscillation mechanics of the respiratory system.
Bates, Jason H T; Irvin, Charles G; Farré, Ramon; Hantos, Zoltán
2011-07-01
The mechanical impedance of the respiratory system defines the pressure profile required to drive a unit of oscillatory flow into the lungs. Impedance is a function of oscillation frequency, and is measured using the forced oscillation technique. Digital signal processing methods, most notably the Fourier transform, are used to calculate impedance from measured oscillatory pressures and flows. Impedance is a complex function of frequency, having both real and imaginary parts that vary with frequency in ways that can be used empirically to distinguish normal lung function from a variety of different pathologies. The most useful diagnostic information is gained when anatomically based mathematical models are fit to measurements of impedance. The simplest such model consists of a single flow-resistive conduit connecting to a single elastic compartment. Models of greater complexity may have two or more compartments, and provide more accurate fits to impedance measurements over a variety of different frequency ranges. The model that currently enjoys the widest application in studies of animal models of lung disease consists of a single airway serving an alveolar compartment comprising tissue with a constant-phase impedance. This model has been shown to fit very accurately to a wide range of impedance data, yet contains only four free parameters, and as such is highly parsimonious. The measurement of impedance in human patients is also now rapidly gaining acceptance, and promises to provide a more comprehensible assessment of lung function than parameters derived from conventional spirometry.
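The constant-phase model mentioned above has a standard four-parameter form, Z(f) = R + jwI + (G - jH)/w^alpha with w = 2*pi*f and alpha = (2/pi)*arctan(H/G). A minimal fitting sketch on synthetic impedance data (parameter values are hypothetical):

```python
import numpy as np
from scipy.optimize import least_squares

def constant_phase_Z(f, R, I, G, H):
    """Constant-phase model: Z = R + j*w*I + (G - j*H)/w**alpha."""
    w = 2 * np.pi * np.asarray(f)
    alpha = (2 / np.pi) * np.arctan2(H, G)
    return R + 1j * w * I + (G - 1j * H) / w ** alpha

f = np.linspace(0.5, 20, 40)                                # frequencies (Hz)
Z_meas = constant_phase_Z(f, R=0.3, I=0.01, G=0.9, H=3.5)   # synthetic "data"

def residual(p):
    d = constant_phase_Z(f, *p) - Z_meas
    return np.concatenate([d.real, d.imag])   # fit real and imaginary parts

fit = least_squares(residual, x0=[0.1, 0.005, 0.5, 2.0])
print(fit.x)   # recovers the four free parameters R, I, G, H
```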
Electrical receptive fields of retinal ganglion cells: Influence of presynaptic neurons
Apollo, Nicholas V.; Garrett, David J.
2018-01-01
Implantable retinal stimulators activate surviving neurons to restore a sense of vision in people who have lost their photoreceptors through degenerative diseases. Complex spatial and temporal interactions occur in the retina during multi-electrode stimulation. Due to these complexities, most existing implants activate only a few electrodes at a time, limiting the repertoire of available stimulation patterns. Measuring the spatiotemporal interactions between electrodes and retinal cells, and incorporating them into a model may lead to improved stimulation algorithms that exploit the interactions. Here, we present a computational model that accurately predicts both the spatial and temporal nonlinear interactions of multi-electrode stimulation of rat retinal ganglion cells (RGCs). The model was verified using in vitro recordings of ON, OFF, and ON-OFF RGCs in response to subretinal multi-electrode stimulation with biphasic pulses at three stimulation frequencies (10, 20, 30 Hz). The model gives an estimate of each cell’s spatiotemporal electrical receptive fields (ERFs); i.e., the pattern of stimulation leading to excitation or suppression in the neuron. All cells had excitatory ERFs and many also had suppressive sub-regions of their ERFs. We show that the nonlinearities in observed responses arise largely from activation of presynaptic interneurons. When synaptic transmission was blocked, the number of sub-regions of the ERF was reduced, usually to a single excitatory ERF. This suggests that direct cell activation can be modeled accurately by a one-dimensional model with linear interactions between electrodes, whereas indirect stimulation due to summated presynaptic responses is nonlinear.
Bhatla, Puneet; Tretter, Justin T; Ludomirsky, Achi; Argilla, Michael; Latson, Larry A; Chakravarti, Sujata; Barker, Piers C; Yoo, Shi-Joon; McElhinney, Doff B; Wake, Nicole; Mosca, Ralph S
2017-01-01
Rapid prototyping facilitates comprehension of complex cardiac anatomy. However, determining when this additional information proves instrumental in patient management remains a challenge. We describe our experience with patient-specific anatomic models created using rapid prototyping from various imaging modalities, suggesting their utility in surgical and interventional planning in congenital heart disease (CHD). Virtual and physical 3-dimensional (3D) models were generated from CT or MRI data, using commercially available software for patients with complex muscular ventricular septal defects (CMVSD) and double-outlet right ventricle (DORV). Six patients with complex anatomy and uncertainty of the optimal management strategy were included in this study. The models were subsequently used to guide management decisions, and the outcomes reviewed. 3D models clearly demonstrated the complex intra-cardiac anatomy in all six patients and were utilized to guide management decisions. In the three patients with CMVSD, one underwent successful endovascular device closure following a prior failed attempt at transcatheter closure, and the other two underwent successful primary surgical closure with the aid of 3D models. In all three cases of DORV, the models provided better anatomic delineation and additional information that altered or confirmed the surgical plan. Patient-specific 3D heart models show promise in accurately defining intra-cardiac anatomy in CHD, specifically CMVSD and DORV. We believe these models improve understanding of the complex anatomical spatial relationships in these defects and provide additional insight for pre/intra-interventional management and surgical planning.
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
Williams, Alex H.; O'Donnell, Cian; Sejnowski, Terrence J.; ...
2016-12-30
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the ‘sushi-belt model’. Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi-belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology-dependent tradeoff between speed, precision and metabolic efficiency of cargo transport. With experimental estimates of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. In conclusion, these findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Xiangjian; Zhang, Zhaojun
2016-03-14
Understanding the role of reactant ro-vibrational degrees of freedom (DOFs) in the reaction dynamics of polyatomic molecular dissociation on metal surfaces is of great importance for exploring complex chemical reaction mechanisms. Here, we present an extensive quantum dynamics study of the dissociative chemisorption of CH4 on a rigid Ni(111) surface, developing an accurate nine-dimensional quantum dynamical model that includes the azimuthal DOF. Based on a highly accurate fifteen-dimensional potential energy surface built from first principles, our simulations elucidate that the dissociation probability of CH4 has a strong dependence on the azimuth and the surface impact site. Some improvements are suggested to obtain accurate dissociation probabilities from quantum dynamics simulations.
Freire, Ricardo O; Rocha, Gerd B; Simas, Alfredo M
2006-03-01
Modeling lanthanide coordination compounds efficiently and accurately is central for the design of new ligands capable of forming stable and highly luminescent complexes. Accordingly, we present in this paper a report on the capability of various ab initio effective core potential calculations in reproducing the coordination polyhedron geometries of lanthanide complexes. Starting with all combinations of HF, B3LYP and MP2(Full) with STO-3G, 3-21G, 6-31G, 6-31G* and 6-31+G basis sets for [Eu(H2O)9]3+, and closing with more manageable calculations for the larger complexes, we computed fully predicted ab initio geometries for a total of 80 calculations on 52 complexes of Sm(III), Eu(III), Gd(III), Tb(III), Dy(III), Ho(III), Er(III) and Tm(III), the largest containing 164 atoms. Our results indicate that RHF/STO-3G/ECP appears to be the most efficient model chemistry in terms of coordination polyhedron crystallographic geometry predictions from isolated lanthanide complex ion calculations. Moreover, both augmenting the basis set and/or including electron correlation generally enlarged the deviations and degraded the quality of the predicted coordination polyhedron crystallographic geometry. Our results further indicate that Cosentino et al.'s suggestion of using RHF/3-21G/ECP geometries appears to be a more robust, but not necessarily more accurate, recommendation for the general lanthanide complex case.
Model-order reduction of lumped parameter systems via fractional calculus
NASA Astrophysics Data System (ADS)
Hollkamp, John P.; Sen, Mihir; Semperlotti, Fabio
2018-04-01
This study investigates the use of fractional order differential models to simulate the dynamic response of non-homogeneous discrete systems and to achieve efficient and accurate model order reduction. The traditional integer order approach to the simulation of non-homogeneous systems dictates the use of numerical solutions and often imposes stringent compromises between accuracy and computational performance. Fractional calculus provides an alternative approach where complex dynamical systems can be modeled with compact fractional equations that not only can still guarantee analytical solutions, but can also enable high levels of order reduction without compromising on accuracy. Different approaches are explored in order to transform the integer order model into a reduced order fractional model able to match the dynamic response of the initial system. Analytical and numerical results show that, under certain conditions, an exact match is possible and the resulting fractional differential models have both a complex and frequency-dependent order of the differential operator. The implications of this type of approach for both model order reduction and model synthesis are discussed.
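As a small illustration of fractional-order simulation (not the paper's reduction procedure), the sketch below integrates a fractional relaxation equation D^alpha x = -lam*x with the Grünwald-Letnikov scheme; note the memory term running over the full history, which is what lets a single compact fractional equation stand in for many integer-order states.

```python
import numpy as np

def gl_fractional_relaxation(alpha, lam, x0=1.0, h=0.01, n=500):
    """Simulate D^alpha x = -lam * x with the Grunwald-Letnikov scheme.
    A sketch of fractional-order simulation; initialization of the history
    is handled in the simplest possible way."""
    coef = np.zeros(n + 1)
    coef[0] = 1.0
    for j in range(1, n + 1):                 # GL binomial coefficients
        coef[j] = coef[j - 1] * (1.0 - (alpha + 1.0) / j)
    x = np.zeros(n + 1)
    x[0] = x0
    for k in range(1, n + 1):
        hist = np.dot(coef[1:k + 1], x[k - 1::-1])   # memory over full history
        x[k] = -hist / (1.0 + lam * h ** alpha)
    return x

print(gl_fractional_relaxation(alpha=0.8, lam=2.0)[:5])
```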
Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2002-01-01
NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref.1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC--which allows the incorporation of complex local inelastic constitutive models--MAC/GMC finds its most important application in metal matrix composites, like the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can and have been built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled the accurate modeling of the deformation, failure, and life of titanium matrix composites.
Yang Li; Wei Liang; Yinlong Zhang; Haibo An; Jindong Tan
2016-08-01
Automatic and accurate lumbar vertebrae detection is an essential step of image-guided minimally invasive spine surgery (IG-MISS). However, traditional methods still require human intervention due to the similarity of vertebrae, abnormal pathological conditions and uncertain imaging angles. In this paper, we present a novel convolutional neural network (CNN) model to automatically detect lumbar vertebrae in C-arm X-ray images. Training data are augmented by DRR, and automatic segmentation of the ROI reduces the computational complexity. Furthermore, a feature fusion deep learning (FFDL) model is introduced to combine two types of features of lumbar vertebrae X-ray images, using a Sobel kernel and a Gabor kernel to obtain the contour and texture of lumbar vertebrae, respectively. Comprehensive qualitative and quantitative experiments demonstrate that our proposed model performs more accurately in abnormal cases with pathologies and surgical implants in multi-angle views.
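The Sobel/Gabor feature-fusion front end described above can be sketched as follows; this is illustrative preprocessing only, the FFDL network itself is not reproduced, and the filter settings are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import gabor

def fused_features(img):
    """Stack contour (Sobel) and texture (Gabor) maps as a 2-channel input,
    mirroring the feature-fusion idea; frequency=0.2 is an assumed setting."""
    sx = ndimage.sobel(img, axis=0, mode="reflect")
    sy = ndimage.sobel(img, axis=1, mode="reflect")
    contour = np.hypot(sx, sy)                   # gradient-magnitude contours
    texture, _ = gabor(img, frequency=0.2)       # real part of Gabor response
    return np.stack([contour, texture], axis=0)  # (channels, H, W) for a CNN

img = np.random.default_rng(0).random((64, 64))
print(fused_features(img).shape)                 # (2, 64, 64)
```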
NASA Technical Reports Server (NTRS)
MacCormack, R. W.
1978-01-01
The calculation of flow fields past aircraft configurations at flight Reynolds numbers is considered. Progress in devising accurate and efficient numerical methods, in understanding and modeling the physics of turbulence, and in developing reliable and powerful computer hardware is discussed. Emphasis is placed on efficient solutions to the Navier-Stokes equations.
All-Possible-Subsets for MANOVA and Factorial MANOVAs: Less than a Weekend Project
ERIC Educational Resources Information Center
Nimon, Kim; Zientek, Linda Reichwein; Kraha, Amanda
2016-01-01
Multivariate techniques are increasingly popular as researchers attempt to accurately model a complex world. MANOVA is a multivariate technique used to investigate the dimensions along which groups differ, and how these dimensions may be used to predict group membership. A concern in a MANOVA analysis is to determine if a smaller subset of…
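Where the abstract is truncated, the "all-possible-subsets" idea itself is straightforward to sketch: fit a MANOVA for every subset of outcome variables and compare a multivariate statistic such as Wilks' lambda across subsets. The data and variable names below are fabricated for illustration.

```python
from itertools import combinations
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "group": np.repeat(["a", "b", "c"], 30),
    "y1": rng.normal(size=90), "y2": rng.normal(size=90), "y3": rng.normal(size=90),
})
df.loc[df.group == "b", "y1"] += 1.0      # one group separates on y1 only

dvs = ["y1", "y2", "y3"]
for k in range(2, len(dvs) + 1):          # every subset of 2+ outcome variables
    for sub in combinations(dvs, k):
        m = MANOVA.from_formula(" + ".join(sub) + " ~ group", data=df)
        wilks = m.mv_test().results["group"]["stat"].loc["Wilks' lambda", "Value"]
        print(sub, round(float(wilks), 4))
```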
Todd A. Schroeder; Robbie Hember; Nicholas C. Coops; Shunlin Liang
2009-01-01
The magnitude and distribution of incoming shortwave solar radiation (SW) has significant influence on the productive capacity of forest vegetation. Models that estimate forest productivity require accurate and spatially explicit radiation surfaces that resolve both long- and short-term temporal climatic patterns and that account for topographic variability of the land...
Fluency Heuristic: A Model of How the Mind Exploits a By-Product of Information Retrieval
ERIC Educational Resources Information Center
Hertwig, Ralph; Herzog, Stefan M.; Schooler, Lael J.; Reimer, Torsten
2008-01-01
Boundedly rational heuristics for inference can be surprisingly accurate and frugal for several reasons. They can exploit environmental structures, co-opt complex capacities, and elude effortful search by exploiting information that automatically arrives on the mental stage. The fluency heuristic is a prime example of a heuristic that makes the…
Hybrid experimental/analytical models of structural dynamics - Creation and use for predictions
NASA Technical Reports Server (NTRS)
Balmes, Etienne
1993-01-01
An original complete methodology for the construction of predictive models of damped structural vibrations is introduced. A consistent definition of normal and complex modes is given which leads to an original method to accurately identify non-proportionally damped normal mode models. A new method to create predictive hybrid experimental/analytical models of damped structures is introduced, and the ability of hybrid models to predict the response to system configuration changes is discussed. Finally a critical review of the overall methodology is made by application to the case of the MIT/SERC interferometer testbed.
NASA Technical Reports Server (NTRS)
Leser, Patrick E.; Hochhalter, Jacob D.; Newman, John A.; Leser, William P.; Warner, James E.; Wawrzynek, Paul A.; Yuan, Fuh-Gwo
2015-01-01
Utilizing inverse uncertainty quantification techniques, structural health monitoring can be integrated with damage progression models to form probabilistic predictions of a structure's remaining useful life. However, damage evolution in realistic structures is physically complex. Accurately representing this behavior requires high-fidelity models which are typically computationally prohibitive. In the present work, a high-fidelity finite element model is represented by a surrogate model, reducing computation times. The new approach is used with damage diagnosis data to form a probabilistic prediction of remaining useful life for a test specimen under mixed-mode conditions.
Computational Fluid Dynamics of Whole-Body Aircraft
NASA Astrophysics Data System (ADS)
Agarwal, Ramesh
1999-01-01
The current state of the art in computational aerodynamics for whole-body aircraft flowfield simulations is described. Recent advances in geometry modeling, surface and volume grid generation, and flow simulation algorithms have led to accurate flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics has emerged as a crucial enabling technology for the design and development of flight vehicles. Examples illustrating the current capability for the prediction of transport and fighter aircraft flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.
Ab initio theory and modeling of water.
Chen, Mohan; Ko, Hsin-Yu; Remsing, Richard C; Calegari Andrade, Marcos F; Santra, Biswajit; Sun, Zhaoru; Selloni, Annabella; Car, Roberto; Klein, Michael L; Perdew, John P; Wu, Xifan
2017-10-10
Water is of the utmost importance for life and technology. However, a genuinely predictive ab initio model of water has eluded scientists. We demonstrate that a fully ab initio approach, relying on the strongly constrained and appropriately normed (SCAN) density functional, provides such a description of water. SCAN accurately describes the balance among covalent bonds, hydrogen bonds, and van der Waals interactions that dictates the structure and dynamics of liquid water. Notably, SCAN captures the density difference between water and ice Ih at ambient conditions, as well as many important structural, electronic, and dynamic properties of liquid water. These successful predictions of the versatile SCAN functional open the gates to study complex processes in aqueous phase chemistry and the interactions of water with other materials in an efficient, accurate, and predictive, ab initio manner.
Topography Modeling in Atmospheric Flows Using the Immersed Boundary Method
NASA Technical Reports Server (NTRS)
Ackerman, A. S.; Senocak, I.; Mansour, N. N.; Stevens, D. E.
2004-01-01
Numerical simulation of flow over complex geometry requires accurate and efficient computational methods. Different techniques are available to handle complex geometry. The unstructured grid and multi-block body-fitted grid techniques have been widely adopted for complex geometry in engineering applications. In atmospheric applications, terrain-fitted single grid techniques have found common use. Although these are very effective techniques, their implementation, coupling with the flow algorithm, and efficient parallelization of the complete method are more involved than for a Cartesian grid method. Grid generation can be tedious, and one needs to pay special attention to the numerics to handle skewed cells for conservation purposes. Researchers have long sought alternative methods to ease the effort involved in simulating flow over complex geometry.
On accuracy, privacy, and complexity in the identification problem
NASA Astrophysics Data System (ADS)
Beekhof, F.; Voloshynovskiy, S.; Koval, O.; Holotyak, T.
2010-02-01
This paper presents recent advances in the identification problem, taking into account the accuracy, complexity and privacy leak of different decoding algorithms. Using a model of different actors from the literature, we show that it is possible to use more accurate decoding algorithms based on reliability information without increasing the privacy leak relative to algorithms that use only binary information. Existing algorithms from the literature have been modified to take advantage of reliability information, and we show that a proposed branch-and-bound algorithm can outperform existing work, including the enhanced variants.
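A toy sketch of the hard-decision versus reliability-based (soft) identification contrast discussed above, with a fabricated template database; the paper's specific decoders are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 1000, 64
database = rng.choice([-1.0, 1.0], size=(N, L))   # enrolled binary templates

probe_id = 123
observation = database[probe_id] + rng.normal(0, 1.2, L)  # noisy real probe

# hard decoder: binarize the observation, minimize Hamming distance
hard = np.argmin(np.sum(np.sign(observation) != database, axis=1))
# soft decoder: keep the real values as reliabilities, maximize correlation
soft = np.argmax(database @ observation)
print(hard, soft)   # the reliability-based decoder is typically more robust
```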
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background: Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction, since the association of a few proteins can give rise to an enormous number of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results: ALC (Automated Layer Construction) is a computer program that greatly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion: ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files.
Endoscopic skull base training using 3D printed models with pre-existing pathology.
Narayanan, Vairavan; Narayanan, Prepageran; Rajagopalan, Raman; Karuppiah, Ravindran; Rahman, Zainal Ariff Abdul; Wormald, Peter-John; Van Hasselt, Charles Andrew; Waran, Vicknes
2015-03-01
Endoscopic base of skull surgery has been growing in acceptance in the recent past due to improvements in visualisation and micro instrumentation, as well as the surgical maturing of early endoscopic skull base practitioners. Unfortunately, these demanding procedures have a steep learning curve. A physical simulation that is able to reproduce the complex anatomy of the anterior skull base provides a very useful means of learning the necessary skills in a safe and effective environment. This paper aims to assess the ease of learning endoscopic skull base exposure and drilling techniques using an anatomically accurate physical model with a pre-existing pathology (i.e., basilar invagination) created from actual patient data. Five models of a patient with platybasia and basilar invagination were created from the original MRI and CT imaging data of the patient. The models were used as part of a training workshop for ENT surgeons with varying degrees of experience in endoscopic base of skull surgery, from trainees to experienced consultants. The surgeons were given a list of key steps to achieve in exposing and drilling the skull base using the simulation model. They were then asked to list the level of difficulty of learning these steps using the model. The participants found the models suitable for learning registration, navigation and skull base drilling techniques. All participants also found the deep structures to be accurately represented spatially, as confirmed by the navigation system. These models allow structured simulation to be conducted in a workshop environment where surgeons and trainees can practice performing complex procedures in a controlled fashion under the supervision of experts.
NASA Astrophysics Data System (ADS)
Fan, Qiang; Huang, Zhenyu; Zhang, Bing; Chen, Dayue
2013-02-01
Properties of discontinuities, such as bolt joints and cracks in waveguide structures, are difficult to evaluate by either analytical or numerical methods due to the complexity and uncertainty of the discontinuities. In this paper, the discontinuity in a Timoshenko beam is modeled with high-order parameters, and these parameters are then identified using the reflection coefficients at the discontinuity. The high-order model is composed of several one-order sub-models in series, and each sub-model consists of inertia, stiffness and damping components in parallel. The order of the discontinuity model is determined based on the characteristics of the reflection coefficient curve and the accuracy requirement of the dynamic modeling. The model parameters are identified through a least-squares fitting iteration method, in which the undetermined model parameters are updated iteratively to fit the dynamic reflection coefficient curve to the wave-based one. By using the spectral super-element method (SSEM), simulation cases, including one-order discontinuities on infinite and finite beams and a two-order discontinuity on an infinite beam, were employed to evaluate both the accuracy of the discontinuity model and the effectiveness of the identification method. For practical considerations, the effects of measurement noise on the discontinuity parameter identification are investigated by adding different levels of noise to the simulated data. The simulation results were then validated by the corresponding experiments. Both the simulation and experimental results show that (1) the one-order discontinuities can be identified accurately, with maximum errors of 6.8% and 8.7%, respectively; (2) the high-order discontinuities can be identified with maximum errors of 15.8% and 16.2%, respectively; and (3) the high-order model can predict complex discontinuities much more accurately than the one-order discontinuity model.
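The least-squares identification loop can be sketched generically: propose a parametric sub-model, compute its reflection coefficient over frequency, and update the inertia/stiffness/damping parameters to fit the measured complex curve. The model_R form below is a placeholder for illustration, not the paper's Timoshenko-beam expression.

```python
import numpy as np
from scipy.optimize import least_squares

def model_R(omega, m, c, k):
    """Placeholder one-order discontinuity response: an inertia/damping/
    stiffness branch mapped to a normalized reflection-like ratio."""
    Z = 1j * omega * m + c + k / (1j * omega)
    return Z / (Z + 1.0)

omega = np.linspace(10, 1000, 60)
R_meas = model_R(omega, 0.02, 0.5, 2.0e3)     # synthetic "measured" curve

def resid(p):
    d = model_R(omega, *p) - R_meas
    return np.concatenate([d.real, d.imag])   # fit real and imaginary parts

fit = least_squares(resid, x0=[0.01, 0.1, 1.0e3], bounds=(0, np.inf))
print(fit.x)   # recovered inertia, damping, stiffness
```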
Observing Consistency in Online Communication Patterns for User Re-Identification.
Adeyemi, Ikuesan Richard; Razak, Shukor Abd; Salleh, Mazleena; Venter, Hein S
2016-01-01
Comprehension of the statistical and structural mechanisms governing human dynamics in online interaction plays a pivotal role in online user identification, online profile development, and recommender systems. However, building a characteristic model of human dynamics on the Internet involves a complete analysis of the variations in human activity patterns, which is a complex process. This complexity is inherent in human dynamics and has not been extensively studied to reveal the structural composition of human behavior. A typical way to anatomize such a complex system is to examine all of the independent interconnections that constitute the complexity. An examination of the various dimensions of human communication patterns in online interactions is presented in this paper. The study employed reliable server-side web data from 31 known users to explore characteristics of human-driven communications. Various machine-learning techniques were explored. The results revealed that each individual exhibited a relatively consistent, unique behavioral signature and that the logistic regression model and model tree can be used to accurately distinguish online users. These results are applicable to one-to-one online user identification processes, insider misuse investigation processes, and online profiling in various areas.
NASA Astrophysics Data System (ADS)
Su, Wei; Lindsay, Scott; Liu, Haihu; Wu, Lei
2017-08-01
Rooted in gas kinetics, the lattice Boltzmann method (LBM) is a powerful tool in modeling hydrodynamics. In the past decade, it has been extended to simulate rarefied gas flows beyond the Navier-Stokes level, either by using the high-order Gauss-Hermite quadrature, or by introducing the relaxation time that is a function of the gas-wall distance. While the former method, with a limited number of discrete velocities (e.g., D2Q36), is accurate up to the early transition flow regime, the latter method (especially the multiple relaxation time (MRT) LBM), with the same discrete velocities as those used in simulating hydrodynamics (i.e., D2Q9), is accurate up to the free-molecular flow regime in the planar Poiseuille flow. This is quite astonishing in the sense that fewer discrete velocities are more accurate. In this paper, by solving the Bhatnagar-Gross-Krook kinetic equation accurately via the discrete velocity method, we find that the high-order Gauss-Hermite quadrature cannot describe the large variation in the velocity distribution function when the rarefaction effect is strong, but the MRT-LBM can capture the flow velocity well because it is equivalent to solving the Navier-Stokes equations with an effective shear viscosity. Since the MRT-LBM has only been validated in simple channel flows, and for complex geometries it is difficult to find the effective viscosity, it is necessary to assess its performance for the simulation of rarefied gas flows. Our numerical simulations based on the accurate discrete velocity method suggest that the accuracy of the MRT-LBM is reduced significantly in the simulation of rarefied gas flows through the rough surface and porous media. Our simulation results could serve as benchmarking cases for future development of the LBM for modeling and simulation of rarefied gas flows in complex geometries.
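For orientation, here is a minimal single-relaxation-time (BGK) D2Q9 lattice Boltzmann sketch of the body-force-driven Poiseuille flow discussed above; the MRT variant in the paper replaces the scalar 1/tau with per-moment relaxation rates. Grid size, tau, and the forcing value are arbitrary demonstration choices.

```python
import numpy as np

# D2Q9 velocities, weights, and opposite directions for bounce-back walls
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

ny, nx, tau, g = 21, 8, 0.8, 1e-6          # rows 0 and ny-1 act as solid walls
f = w[:, None, None] * np.ones((9, ny, nx))

def feq(rho, ux, uy):
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

for step in range(20000):
    rho = f.sum(axis=0)
    ux = np.einsum('i,ijk->jk', c[:, 0], f) / rho
    uy = np.einsum('i,ijk->jk', c[:, 1], f) / rho
    fc = f - (f - feq(rho, ux, uy)) / tau \
         + 3 * w[:, None, None] * c[:, 0, None, None] * g   # simple forcing
    fc[:, 0, :] = f[opp, 0, :]      # full-way bounce-back: reverse, no collision
    fc[:, -1, :] = f[opp, -1, :]
    for i in range(9):              # streaming step, periodic in x
        fc[i] = np.roll(fc[i], (c[i, 1], c[i, 0]), axis=(0, 1))
    f = fc

print(ux[1:-1, 0])                  # parabolic (Poiseuille) velocity profile
```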
Computational structure analysis of biomacromolecule complexes by interface geometry.
Mahdavi, Sedigheh; Salehzadeh-Yazdi, Ali; Mohades, Ali; Masoudi-Nejad, Ali
2013-12-01
The ability to analyze and compare protein-nucleic acid and protein-protein interaction interfaces is critically important for understanding biological function and essential processes occurring in cells. As high-resolution three-dimensional (3D) structures of biomacromolecule complexes have become available, computational characterization of interface geometry has become an important research topic in molecular biology. In this study, the interfaces of a set of 180 protein-nucleic acid and protein-protein complexes are computed to understand the principles of their interactions. The weighted Voronoi diagram of the atoms and the alpha complex provide an accurate description of the interface atoms. Our method is implemented both in the presence and in the absence of water molecules. A comparison among the three types of interaction interfaces shows that RNA-protein complexes have the largest interfaces. The results show a high correlation coefficient between our method and the PISA server, in both the presence and absence of water molecules, for the Voronoi model and the traditional model based on solvent accessibility, as well as strong validation parameters in comparison to the classical model.
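The weighted Voronoi/alpha-complex machinery the authors use is involved; as a rough, unweighted stand-in, Delaunay adjacency (the dual of the ordinary Voronoi diagram) can flag candidate interface atoms between two chains. The sketch below uses SciPy; the function name and the neglect of atomic radii (weights) are our simplifications, not the paper's method.

```python
import numpy as np
from scipy.spatial import Delaunay

def interface_atoms(coords_a, coords_b):
    """Approximate interface detection: atoms of two chains are interface
    atoms if they share an edge in the 3D Delaunay triangulation (the dual
    of the unweighted Voronoi diagram)."""
    pts = np.vstack([coords_a, coords_b])
    n_a = len(coords_a)
    tri = Delaunay(pts)
    iface_a, iface_b = set(), set()
    # indptr/indices encode the Delaunay vertex adjacency lists
    indptr, indices = tri.vertex_neighbor_vertices
    for i in range(n_a):
        for j in indices[indptr[i]:indptr[i + 1]]:
            if j >= n_a:                  # neighbor belongs to chain B
                iface_a.add(i)
                iface_b.add(j - n_a)
    return sorted(iface_a), sorted(iface_b)
```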
Ali, Syed Mashhood; Shamim, Shazia
2015-07-01
Complexation of racemic citalopram with β-cyclodextrin (β-CD) in aqueous medium was investigated to determine the atom-accurate structure of the inclusion complexes. ¹H-NMR chemical shift change data of β-CD cavity protons in the presence of citalopram confirmed the formation of 1 : 1 inclusion complexes. The ROESY spectrum confirmed the presence of an aromatic ring in the β-CD cavity, but whether one or both rings were included was not clear. Molecular mechanics and molecular dynamics calculations showed entry of the fluoro-ring from the wider side of the β-CD cavity as the most favored mode of inclusion. Minimum-energy computational models were analyzed for the accuracy of their atomic coordinates by comparing calculated and experimental intermolecular ROESY peak intensities, which were not found to be in agreement. Several least-energy computational models were refined and analyzed until calculated and experimental intensities were compatible. The results demonstrate that computational models of CD complexes need to be analyzed for atom accuracy and that quantitative ROESY analysis is a promising method for this purpose. Moreover, the study also validates that the quantitative use of ROESY is feasible even with longer mixing times if peak intensity ratios are used instead of absolute intensities.
Contributions of the ARM Program to Radiative Transfer Modeling for Climate and Weather Applications
NASA Technical Reports Server (NTRS)
Mlawer, Eli J.; Iacono, Michael J.; Pincus, Robert; Barker, Howard W.; Oreopoulos, Lazaros; Mitchell, David L.
2016-01-01
Accurate climate and weather simulations must account for all relevant physical processes and their complex interactions. Each of these atmospheric, ocean, and land processes must be considered on an appropriate spatial and temporal scale, which makes these simulations computationally demanding. One especially critical physical process is the flow of solar and thermal radiant energy through the atmosphere, which controls planetary heating and cooling and drives the large-scale dynamics that moves energy from the tropics toward the poles. Radiation calculations are therefore essential for climate and weather simulations, but are themselves quite complex even without considering the effects of variable and inhomogeneous clouds. Clear-sky radiative transfer calculations have to account for thousands of absorption lines due to water vapor, carbon dioxide, and other gases, which are irregularly distributed across the spectrum and have shapes dependent on pressure and temperature. The line-by-line (LBL) codes that treat these details have a far greater computational cost than global models can afford. Therefore, the crucial requirement for accurate radiation calculations in climate and weather prediction models must be satisfied by fast solar and thermal radiation parameterizations whose high level of accuracy has been demonstrated through extensive comparisons with LBL codes.
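To make the cost argument concrete, a toy line-by-line calculation is sketched below: transmittance follows from summing line-shape contributions and applying Beer-Lambert. The line parameters and column amount are made up for illustration; real LBL codes sum thousands of lines with pressure- and temperature-dependent strengths and widths, which is exactly the expense the fast parameterizations avoid.

```python
import numpy as np

def lorentz(nu, nu0, gamma):
    """Pressure-broadened (Lorentz) line shape, units of cm."""
    return (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)

def transmittance(nu, lines, column):
    """Toy line-by-line step: sum absorption over (center, strength,
    half-width) lines, then apply Beer-Lambert along the column."""
    k = sum(S * lorentz(nu, nu0, g) for nu0, S, g in lines)
    return np.exp(-k * column)

nu = np.linspace(1000, 1010, 2000)                       # wavenumbers (cm^-1)
lines = [(1002.0, 1e-20, 0.07), (1006.5, 5e-21, 0.07)]   # made-up lines
T = transmittance(nu, lines, 1e21)                       # assumed column (molec/cm^2)
```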
A Possible Role for End-Stopped V1 Neurons in the Perception of Motion: A Computational Model
Zarei Eskikand, Parvin; Kameneva, Tatiana; Ibbotson, Michael R.; Burkitt, Anthony N.; Grayden, David B.
2016-01-01
We present a model of the early stages of processing in the visual cortex, in particular V1 and MT, to investigate the potential role of end-stopped V1 neurons in solving the aperture problem. A hierarchical network is used in which the incoming motion signals provided by complex V1 neurons and end-stopped V1 neurons proceed to MT neurons at the next stage. MT neurons are categorized into two types based on their function: integration and segmentation. The role of integration neurons is to propagate unambiguous motion signals arriving from those V1 neurons that emphasize object terminators (e.g. corners). Segmentation neurons detect the discontinuities in the input stimulus to control the activity of integration neurons. Although the activity of the complex V1 neurons at the terminators of the object accurately represents the direction of the motion, their level of activity is less than the activity of the neurons along the edges. Therefore, a model incorporating end-stopped neurons is essential to suppress ambiguous motion signals along the edges of the stimulus. It is shown that the unambiguous motion signals at terminators propagate over the rest of the object to achieve an accurate representation of motion. PMID:27741307
Sato, Masanao; Tsuda, Kenichi; Wang, Lin; Coller, John; Watanabe, Yuichiro; Glazebrook, Jane; Katagiri, Fumiaki
2010-01-01
Biological signaling processes may be mediated by complex networks in which network components and network sectors interact with each other in complex ways. Studies of complex networks benefit from approaches in which the roles of individual components are considered in the context of the network. The plant immune signaling network, which controls inducible responses to pathogen attack, is such a complex network. We studied the Arabidopsis immune signaling network upon challenge with a strain of the bacterial pathogen Pseudomonas syringae expressing the effector protein AvrRpt2 (Pto DC3000 AvrRpt2). This bacterial strain feeds multiple inputs into the signaling network, allowing many parts of the network to be activated at once. mRNA profiles for 571 immune response genes of 22 Arabidopsis immunity mutants and wild type were collected 6 hours after inoculation with Pto DC3000 AvrRpt2. The mRNA profiles were analyzed as detailed descriptions of changes in the network state resulting from the genetic perturbations. Regulatory relationships among the genes corresponding to the mutations were inferred by recursively applying a non-linear dimensionality reduction procedure to the mRNA profile data. The resulting static network model accurately predicted 23 of 25 regulatory relationships reported in the literature, suggesting that predictions of novel regulatory relationships are also accurate. The network model revealed two striking features: (i) the components of the network are highly interconnected; and (ii) negative regulatory relationships are common between signaling sectors. Complex regulatory relationships, including a novel negative regulatory relationship between the early microbe-associated molecular pattern-triggered signaling sectors and the salicylic acid sector, were further validated. We propose that prevalent negative regulatory relationships among the signaling sectors make the plant immune signaling network a “sector-switching” network, which effectively balances two apparently conflicting demands, robustness against pathogenic perturbations and moderation of negative impacts of immune responses on plant fitness. PMID:20661428
Modified energy cascade model adapted for a multicrop Lunar greenhouse prototype
NASA Astrophysics Data System (ADS)
Boscheri, G.; Kacira, M.; Patterson, L.; Giacomelli, G.; Sadler, P.; Furfaro, R.; Lobascio, C.; Lamantea, M.; Grizzaffi, L.
2012-10-01
Models are required to accurately predict mass and energy balances in a bioregenerative life support system. A modified energy cascade model was used to predict outputs of a multi-crop (tomatoes, potatoes, lettuce and strawberries) Lunar greenhouse prototype. The model performance was evaluated against measured data obtained from several system closure experiments. The model predictions corresponded well to the experimental measurements for the overall system closure test period (five months), especially for biomass produced (0.7% underestimated), water consumption (0.3% overestimated) and condensate production (0.5% overestimated). However, the model was less accurate when the results were compared with data obtained from a shorter experimental time period, with 31%, 48% and 51% error for biomass production, water consumption, and condensate production, respectively, which were obtained under more complex crop production patterns (e.g. tall tomato plants covering part of the lettuce production zones). These results, together with a model sensitivity analysis, highlighted the necessity of periodic characterization of the environmental parameters (e.g. light levels, air leakage) in the Lunar greenhouse.
Development of a CFD code for casting simulation
NASA Technical Reports Server (NTRS)
Murph, Jesse E.
1993-01-01
Because of high rejection rates for large structural castings (e.g., the Space Shuttle Main Engine Alternate Turbopump Design Program), a reliable casting simulation computer code is very desirable. This code would reduce both the development time and life cycle costs by allowing accurate modeling of the entire casting process. While this code could be used for other types of castings, the most significant reductions of time and cost would probably be realized in complex investment castings, where any reduction in the number of development castings would be of significant benefit. The casting process is conveniently divided into three distinct phases: (1) mold filling, where the melt is poured or forced into the mold cavity; (2) solidification, where the melt undergoes a phase change to the solid state; and (3) cool down, where the solidified part continues to cool to ambient conditions. While these phases may appear to be separate and distinct, temporal overlaps do exist between phases (e.g., local solidification occurring during mold filling), and some phenomenological events are affected by others (e.g., residual stresses depend on solidification and cooling rates). Therefore, a reliable code must accurately model all three phases and the interactions between each. While many codes have been developed (to various stages of complexity) to model the solidification and cool down phases, only a few codes have been developed to model mold filling.
Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Sun, Jianwei
2015-01-13
A correct description of the anion-π interaction is essential for the design of selective anion receptors and channels and important for advances in the field of supramolecular chemistry. However, it is challenging to do accurate, precise, and efficient calculations of this interaction, which are lacking in the literature. In this article, by testing sets of 20 binary anion-π complexes of fluoride, chloride, bromide, nitrate, or carbonate ions with hexafluorobenzene, 1,3,5-trifluorobenzene, 2,4,6-trifluoro-1,3,5-triazine, or 1,3,5-triazine and 30 ternary π-anion-π' sandwich complexes composed from the same monomers, we suggest domain-based local-pair natural orbital coupled cluster energies extrapolated to the complete basis-set limit as reference values. We give a detailed explanation of the origin of anion-π interactions, using the permanent quadrupole moments, static dipole polarizabilities, and electrostatic potential maps. We use symmetry-adapted perturbation theory (SAPT) to calculate the components of the anion-π interaction energies. We examine the performance of the direct random phase approximation (dRPA), the second-order screened exchange (SOSEX), local-pair natural-orbital (LPNO) coupled electron pair approximation (CEPA), and several dispersion-corrected density functionals (including generalized gradient approximation (GGA), meta-GGA, and double hybrid density functional). The LPNO-CEPA/1 results show the best agreement with the reference results. The dRPA method is only slightly less accurate and precise than the LPNO-CEPA/1, but it is considerably more efficient (6-17 times faster) for the binary complexes studied in this paper. For 30 ternary π-anion-π' sandwich complexes, we give dRPA interaction energies as reference values. The double hybrid functionals are much more efficient but less accurate and precise than dRPA. The dispersion-corrected double hybrid PWPB95-D3(BJ) and B2PLYP-D3(BJ) functionals perform better than the GGA and meta-GGA functionals for the present test set.
O’Hagan, Rónán C.; Heyer, Joerg
2011-01-01
KRAS is a potent oncogene and is mutated in about 30% of all human cancers. However, the biological context of KRAS-dependent oncogenesis is poorly understood. Genetically engineered mouse models of cancer provide invaluable tools to study the oncogenic process, and insights from KRAS-driven models have significantly increased our understanding of the genetic, cellular, and tissue contexts in which KRAS is competent for oncogenesis. Moreover, variation among tumors arising in mouse models can provide insight into the mechanisms underlying response or resistance to therapy in KRAS-dependent cancers. Hence, it is essential that models of KRAS-driven cancers accurately reflect the genetics of human tumors and recapitulate the complex tumor-stromal intercommunication that is manifest in human cancers. Here, we highlight the progress made in modeling KRAS-dependent cancers and the impact that these models have had on our understanding of cancer biology. In particular, the development of models that recapitulate the complex biology of human cancers enables translational insights into mechanisms of therapeutic intervention in KRAS-dependent cancers. PMID:21779503
NASA Technical Reports Server (NTRS)
Yliniemi, Logan; Agogino, Adrian K.; Tumer, Kagan
2014-01-01
Accurate simulation of the effects of integrating new technologies into a complex system is critical to the modernization of our antiquated air traffic system, where there exist many layers of interacting procedures, controls, and automation all designed to cooperate with human operators. Additions of even simple new technologies may result in unexpected emergent behavior due to complex human/machine interactions. One approach is to create high-fidelity human models, drawn from the field of human factors, that can simulate a rich set of behaviors. However, such models are difficult to produce, especially for showing unexpected emergent behavior arising from many human operators interacting simultaneously within a complex system. Instead of engineering complex human models, we directly model the emergent behavior by evolving goal-directed agents representing human users. Using evolution, we can predict how the agent representing the human user reacts given his/her goals. In this paradigm, each autonomous agent in a system pursues individual goals, and the behavior of the system emerges from the interactions, foreseen or unforeseen, between the agents/actors. We show that this method reflects the integration of new technologies in a historical case, and we apply the same methodology to a possible future technology.
Alaa, Nour Eddine; Lefraich, Hamid; El Malki, Imane
2014-10-21
Cardiac arrhythmias are becoming one of the major health care problems in the world, causing numerous serious conditions including stroke and sudden cardiac death. Furthermore, cardiac arrhythmias are intimately related to the signaling ability of cardiac cells and are caused by signaling defects. Consequently, modeling the electrical activity of the heart, and the complex signaling models that subtend dangerous arrhythmias such as tachycardia and fibrillation, necessitates a quantitative model of action potential (AP) propagation. Many electrophysiological models that accurately reproduce the dynamical characteristics of the action potential in cells have been introduced. However, these models are very complex and computationally very time-consuming. Consequently, a large amount of research is devoted to designing models with less computational complexity. This paper presents a new model for analyzing the propagation of ionic concentrations and electrical potential in space and time. In this model, the transport of ions is governed by the Nernst-Planck flux equation (NP), and the electrical interaction of the species is described by a new cable equation. These equations form a system of coupled nonlinear partial differential equations that is solved numerically. We first describe the mathematical model. To realize the numerical simulation of our model, we proceed by a finite element discretization and then choose an appropriate resolution algorithm. We give numerical simulations obtained for different input scenarios in the case of a suicide substrate reaction, which were compared to those obtained in the literature. These input scenarios have been chosen so as to provide an intuitive understanding of the dynamics of the model. By accessing the time and space domains, it is shown that interpreting the electrical potential of the cell membrane at steady state is incorrect. This model is general and applies to ions of any charge in the space and time domains. The results obtained show complete agreement with literature findings and with the physical interpretation of the phenomenon. Furthermore, various numerical experiments are presented to confirm the accuracy, efficiency and stability of the proposed method. In particular, we show that the scheme is second-order accurate in space.
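To make the Nernst-Planck flux term concrete, here is a deliberately simplified 1D explicit finite-difference sketch with a prescribed potential profile; the paper instead couples the potential to a new cable equation and uses a finite element discretization. All parameter values below are illustrative assumptions.

```python
import numpy as np

# 1D Nernst-Planck: dc/dt = -dJ/dx, with flux
# J = -D (dc/dx + (zF/RT) c dphi/dx); phi(x) is prescribed here.
D, z = 1e-9, 1              # diffusivity (m^2/s), ion valence (assumed)
F, R, T = 96485.0, 8.314, 310.0
nx, dx, dt = 200, 1e-6, 1e-6

x = np.arange(nx) * dx
c = np.ones(nx); c[:nx // 2] = 2.0        # initial concentration step
phi = -0.05 * x / x[-1]                   # assumed linear potential drop (V)

beta = z * F / (R * T)
for _ in range(5000):
    dcdx = np.gradient(c, dx)
    dphidx = np.gradient(phi, dx)
    flux = -D * (dcdx + beta * c * dphidx)   # electrodiffusive flux
    c -= dt * np.gradient(flux, dx)          # conservation: dc/dt = -dJ/dx
```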
Ge, Liang; Sotiropoulos, Fotis
2007-08-01
A novel numerical method is developed that integrates boundary-conforming grids with a sharp-interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g., the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh, while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [1]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully curvilinear staggered grid discretization approach, which requires neither the explicit evaluation of the Christoffel symbols nor the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries, we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.
A finite element model of rigid body structures actuated by dielectric elastomer actuators
NASA Astrophysics Data System (ADS)
Simone, F.; Linnebach, P.; Rizzello, G.; Seelecke, S.
2018-06-01
This paper presents finite element (FE) modeling and simulation of dielectric elastomer actuators (DEAs) coupled with articulated structures. DEAs have proven to be an effective transduction technology for the realization of large-deformation, low-power-consuming, and fast mechatronic actuators. However, the complex dynamic behavior of the material, characterized by nonlinearities and rate-dependent phenomena, makes it difficult to accurately model and design DEA systems. The problem is further complicated when the DEA is used to activate articulated structures, which increase both system complexity and the implementation effort of numerical simulation models. In this paper, we present a model-based tool that allows complex articulated systems actuated by DEAs to be implemented and simulated effectively. A first prototype of a compact switch actuated by DEA membranes is chosen as a reference study to introduce the methodology. The commercially available FE software COMSOL is used to implement a physics-based dynamic model of the DEA and couple it with the external structure, i.e., the switch. The model is then experimentally calibrated and validated in both quasi-static and dynamic loading conditions. Finally, preliminary results on how to use the simulation tool to optimize the design are presented.
AST: Activity-Security-Trust driven modeling of time varying networks.
Wang, Jian; Xu, Jiake; Liu, Yanheng; Deng, Weiwen
2016-02-18
Network modeling is a flexible mathematical structure that enables the identification of statistical regularities and structural principles hidden in complex systems. Most recent driving forces in the modeling of complex networks originate from activity, in which a time-invariant activity potential is introduced to identify agents' interactions and to construct an activity-driven model. However, newly emerging network evolution is deeply coupled not only with explicit factors (e.g. activity) but also with implicit considerations (e.g. security and trust), so more intrinsic driving forces should be integrated into the modeling of time-varying networks. Agents undoubtedly seek a time-dependent trade-off among activity, security, and trust when generating a new connection to another. We therefore propose the Activity-Security-Trust (AST) driven model, which synthetically considers the explicit and implicit driving forces (activity, security, and trust) underlying the decision process. The AST-driven model captures highly dynamical network behaviors more accurately and elucidates the complex evolution process, allowing a profound understanding of the effects of security and trust in driving network evolution and reducing the biases induced by involving only activity representations in analyzing the dynamical processes.
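A minimal sketch of the kind of generative rule the AST model describes is given below: at each step, active nodes choose partners by a weighted trade-off of activity, security, and trust. The weighting scheme, score form, and distributions are our illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
activity = rng.pareto(2.5, N) + 1e-3      # heavy-tailed activity potentials
activity /= activity.max()
security = rng.random(N)                  # assumed per-node security scores
trust = rng.random((N, N))                # assumed pairwise trust matrix

def ast_step(adj, m=3, w=(1/3, 1/3, 1/3)):
    """One step of an AST-style evolution: each active node links to the m
    targets with the best weighted activity/security/trust score."""
    active = rng.random(N) < activity
    for i in np.where(active)[0]:
        score = w[0] * activity + w[1] * security + w[2] * trust[i]
        score[i] = 0                       # no self-links
        targets = np.argsort(score)[-m:]
        adj[i, targets] = adj[targets, i] = 1
    return adj

adj = ast_step(np.zeros((N, N), dtype=int))
```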
Accurate in situ measurement of complex refractive index and particle size in intralipid emulsions
NASA Astrophysics Data System (ADS)
Dong, Miao L.; Goyal, Kashika G.; Worth, Bradley W.; Makkar, Sorab S.; Calhoun, William R.; Bali, Lalit M.; Bali, Samir
2013-08-01
A first accurate measurement of the complex refractive index in an intralipid emulsion is demonstrated, and thereby the average scatterer particle size using standard Mie scattering calculations is extracted. Our method is based on measurement and modeling of the reflectance of a divergent laser beam from the sample surface. In the absence of any definitive reference data for the complex refractive index or particle size in highly turbid intralipid emulsions, we base our claim of accuracy on the fact that our work offers several critically important advantages over previously reported attempts. First, our measurements are in situ in the sense that they do not require any sample dilution, thus eliminating dilution errors. Second, our theoretical model does not employ any fitting parameters other than the two quantities we seek to determine, i.e., the real and imaginary parts of the refractive index, thus eliminating ambiguities arising from multiple extraneous fitting parameters. Third, we fit the entire reflectance-versus-incident-angle data curve instead of focusing on only the critical angle region, which is just a small subset of the data. Finally, despite our use of highly scattering opaque samples, our experiment uniquely satisfies a key assumption behind the Mie scattering formalism, namely, no multiple scattering occurs. Further proof of our method's validity is given by the fact that our measured particle size finds good agreement with the value obtained by dynamic light scattering.
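The core of the fitting idea can be sketched with a plane-wave Fresnel reflectance and a two-parameter least-squares fit over the whole reflectance-versus-angle curve; the actual method models a divergent beam, so the following is only a simplified illustration of the two-parameter principle, with assumed starting values and bounds.

```python
import numpy as np
from scipy.optimize import least_squares

def reflectance(theta, n_re, n_im, n0=1.0):
    """TE (s-polarized) Fresnel reflectance at a boundary with a complex
    refractive index n1 = n_re + i*n_im; theta in radians."""
    n1 = n_re + 1j * n_im
    cos_i = np.cos(theta)
    cos_t = np.sqrt(1 - (n0 * np.sin(theta) / n1) ** 2)  # complex Snell law
    r = (n0 * cos_i - n1 * cos_t) / (n0 * cos_i + n1 * cos_t)
    return np.abs(r) ** 2

def fit_index(theta, R_meas):
    """Fit only Re(n) and Im(n) to the whole angle curve, mirroring the
    paper's two-parameter approach (no extraneous fitting parameters)."""
    res = least_squares(lambda p: reflectance(theta, *p) - R_meas,
                        x0=[1.35, 1e-3], bounds=([1.0, 0.0], [2.0, 1.0]))
    return res.x
```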
Multiscale entropy-based methods for heart rate variability complexity analysis
NASA Astrophysics Data System (ADS)
Silva, Luiz Eduardo Virgilio; Cabella, Brenno Caetano Troca; Neves, Ubiraci Pereira da Costa; Murta Junior, Luiz Otavio
2015-03-01
Physiologic complexity is an important concept for characterizing time series from biological systems, and combined with multiscale analysis it can contribute to the comprehension of many complex phenomena. Although multiscale entropy has been applied to physiological time series, it measures irregularity, rather than complexity, as a function of scale. In this study we propose and evaluate a set of three complexity metrics as functions of time scale. The complexity metrics are derived from nonadditive entropy supported by generation of surrogate data, namely SDiff_qmax, q_max and q_zero. In order to assess the accuracy of the proposed complexity metrics, receiver operating characteristic (ROC) curves were built and the areas under the curves were computed for three physiological situations. Heart rate variability (HRV) time series in normal sinus rhythm, atrial fibrillation, and congestive heart failure data sets were analyzed. Results show that the proposed complexity metrics are accurate and robust when compared to classic entropic irregularity metrics. Furthermore, SDiff_qmax is the most accurate for lower scales, whereas q_max and q_zero are the most accurate when higher time scales are considered. The multiscale complexity analysis described here shows potential to assess complex physiological time series and deserves further investigation in a wider context.
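For orientation, classic multiscale entropy couples a coarse-graining step with an entropy estimate per scale, as sketched below; the paper's SDiff_qmax, q_max and q_zero metrics replace the sample entropy shown here with nonadditive-entropy quantities supported by surrogate data.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale` (multiscale step)."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """Classic sample entropy (tolerance r as a fraction of the std)."""
    r = r * x.std()
    def count(mm):
        emb = np.array([x[i:i + mm] for i in range(len(x) - mm + 1)])
        d = np.abs(emb[:, None] - emb[None]).max(axis=2)  # Chebyshev distance
        return (d <= r).sum() - len(emb)                  # exclude self-matches
    return -np.log(count(m + 1) / count(m))

def mse(x, scales=range(1, 21)):
    """Entropy as a function of time scale (the multiscale curve)."""
    return [sample_entropy(coarse_grain(x, s)) for s in scales]
```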
Dunham, Kylee; Grand, James B.
2016-01-01
We examined the effects of complexity and priors on the accuracy of models used to estimate ecological and observational processes, and to make predictions regarding population size and structure. State-space models are useful for estimating complex, unobservable population processes and making predictions about future populations based on limited data. To better understand the utility of state space models in evaluating population dynamics, we used them in a Bayesian framework and compared the accuracy of models with differing complexity, with and without informative priors using sequential importance sampling/resampling (SISR). Count data were simulated for 25 years using known parameters and observation process for each model. We used kernel smoothing to reduce the effect of particle depletion, which is common when estimating both states and parameters with SISR. Models using informative priors estimated parameter values and population size with greater accuracy than their non-informative counterparts. While the estimates of population size and trend did not suffer greatly in models using non-informative priors, the algorithm was unable to accurately estimate demographic parameters. This model framework provides reasonable estimates of population size when little to no information is available; however, when information on some vital rates is available, SISR can be used to obtain more precise estimates of population size and process. Incorporating model complexity such as that required by structured populations with stage-specific vital rates affects precision and accuracy when estimating latent population variables and predicting population dynamics. These results are important to consider when designing monitoring programs and conservation efforts requiring management of specific population segments.
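A minimal SISR filter for a stochastic log-growth state-space model with Poisson counts is sketched below; the jitter applied to parameter particles is a crude stand-in for the kernel smoothing the authors use against particle depletion, and the model, priors, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sisr(counts, n_part=5000, prior=(1.0, 0.2)):
    """Sequential importance sampling/resampling for the state-space model
    N_t = N_{t-1} * exp(r + noise), counts ~ Poisson(N_t).
    `prior` is an assumed (mean, sd) normal prior on the growth rate r."""
    r = rng.normal(*prior, n_part)                    # parameter particles
    N = rng.uniform(0.5, 2.0, n_part) * counts[0]     # initial state particles
    for y in counts[1:]:
        N = N * np.exp(r + rng.normal(0, 0.1, n_part))  # process model
        logw = y * np.log(N) - N                        # Poisson log-lik (unnorm.)
        w = np.exp(logw - logw.max()); w /= w.sum()
        idx = rng.choice(n_part, n_part, p=w)           # resample
        N, r = N[idx], r[idx]
        r += rng.normal(0, 0.01, n_part)                # jitter vs. depletion
    return N.mean(), r.mean()
```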
Assessment of applications of transport models on regional scale solute transport
NASA Astrophysics Data System (ADS)
Guo, Z.; Fogg, G. E.; Henri, C.; Pauloo, R.
2017-12-01
Regional-scale transport models are needed to support the long-term evaluation of groundwater quality and to develop management strategies aimed at preventing serious groundwater degradation. The purpose of this study is to evaluate the capacity of previously developed upscaling approaches to accurately describe the main solute transport processes, including the capture of late-time tails under changing boundary conditions. Advective-dispersive contaminant transport in a 3D heterogeneous domain was simulated and used as a reference solution. Equivalent transport under homogeneous flow conditions was then evaluated by applying the Multi-Rate Mass Transfer (MRMT) model. The random walk particle tracking method was used for both heterogeneous and homogeneous-MRMT scenarios under steady-state and transient conditions. The results indicate that the MRMT model can capture the tails satisfactorily for a plume transported in an ambient steady-state flow field. However, when boundary conditions change, the mass transfer model calibrated for transport under steady-state conditions cannot accurately reproduce the tailing effect observed in the heterogeneous scenario. The deteriorating impact of transient boundary conditions on the upscaled model is more significant in regions where flow fields are dramatically affected, highlighting the poor applicability of the MRMT approach to complex field settings. Accurately simulating mass in both mobile and immobile zones is critical to representing the transport process under transient flow conditions and will be the future focus of our study.
NASA Astrophysics Data System (ADS)
Le Maire, P.; Munschy, M.
2017-12-01
Interpretation of marine magnetic anomalies enables accurate global kinematic models to be constructed. Several methods have been proposed to compute the paleo-latitude of the oceanic crust at its formation. A model of the Earth's magnetic field is used to determine a relationship between the apparent inclination of the magnetization and the paleo-latitude. Usually, the apparent inclination is estimated qualitatively by fitting forward models to the magnetic data. We propose to apply a new method using complex algebra to obtain the apparent inclination of the magnetization of the oceanic crust. For two-dimensional bodies, we rewrite Talwani's equations using complex algebra; the corresponding complex function of the complex variable, called CMA (complex magnetic anomaly), is easier to use for forward modelling and inversion of the magnetic data. This complex equation allows the data to be visualized in the complex plane (Argand diagram) and offers a new way to interpret them: in the complex plane, the effect of the apparent inclination is simply to rotate the curves, whereas in the standard display of magnetic anomalies the shape of the anomaly evolves in a more complicated way. This innovative method gives the opportunity to study a set of magnetic profiles (provided by the Geological Survey of Norway) acquired in the Norwegian Sea, near the Jan Mayen fracture zone. In this area, the age of the oceanic crust ranges from 40 to 55 Ma, and the apparent inclination of the magnetization is computed.
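The rotation property that makes the Argand-diagram display attractive is easy to demonstrate. In the sketch below the complex profile is built generically as the analytic signal of the measured anomaly, which may differ from the authors' exact CMA construction; the point is only that a change of apparent inclination amounts to multiplication by a unit phase factor.

```python
import numpy as np
from scipy.signal import hilbert

def cma_profile(total_field):
    """Complex anomaly profile: the analytic signal pairs the measured
    anomaly (real part) with its Hilbert transform (imaginary part)."""
    return hilbert(total_field)

def rotate(cma, apparent_inclination_deg):
    """Changing the apparent inclination rotates the curve in the
    complex (Argand) plane, leaving its shape unchanged."""
    return cma * np.exp(1j * np.deg2rad(apparent_inclination_deg))
```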
Highly Physical Solar Radiation Pressure Modeling During Penumbra Transitions
NASA Astrophysics Data System (ADS)
Robertson, Robert V.
Solar radiation pressure (SRP) is one of the major non-gravitational forces acting on spacecraft. Acceleration by radiation pressure depends on the radiation flux; on spacecraft shape, attitude, and mass; and on the optical properties of the spacecraft surfaces. Precise modeling of SRP is needed for dynamic satellite orbit determination, space mission design and control, and processing of data from space-based science instruments. During Earth penumbra transitions, sunlight is passing through Earth's lower atmosphere and, in the process, its path, intensity, spectral composition, and shape are significantly affected. This dissertation presents a new method for highly physical SRP modeling in Earth's penumbra called Solar radiation pressure with Oblateness and Lower Atmospheric Absorption, Refraction, and Scattering (SOLAARS). The fundamental geometry and approach mirrors past work, where the solar radiation field is modeled using a number of light rays, rather than treating the Sun as a single point source. This dissertation aims to clarify this approach, simplify its implementation, and model previously overlooked factors. The complex geometries involved in modeling penumbra solar radiation fields are described in a more intuitive and complete way to simplify implementation. Atmospheric effects due to solar radiation passing through the troposphere and stratosphere are modeled, and the results are tabulated to significantly reduce computational cost. SOLAARS includes new, more efficient and accurate approaches to modeling atmospheric effects, which allow us to consider the spatial and temporal variability in lower atmospheric conditions. A new approach to modeling the influence of Earth's polar flattening draws on past work to provide a relatively simple but accurate method for this important effect. Previous penumbra SRP models tend to lie at two extremes of complexity and computational cost, and so the significant improvement in accuracy provided by the complex models has often been lost in the interest of convenience and efficiency. This dissertation presents a simple model which provides an accurate alternative to the full, high precision SOLAARS model with reduced complexity and computational cost. This simpler method is based on curve fitting to results of the full SOLAARS model and is called SOLAARS Curve Fit (SOLAARS-CF). Both the high precision SOLAARS model and the simpler SOLAARS-CF model are applied to the Gravity Recovery and Climate Experiment (GRACE) satellites. Modeling results are compared to the sub-nm/s2 precision GRACE accelerometer data and the results of a traditional penumbra SRP model. These comparisons illustrate the improved accuracy of the SOLAARS and SOLAARS-CF models. A sensitivity analysis for the GRACE orbit illustrates the influence of various input parameters and features of the SOLAARS model on the results. The SOLAARS-CF model is applied to a study of penumbra SRP and the Earth flyby anomaly. Beyond the value of its results to the scientific community, this study provides an application example where the computational efficiency of the simplified SOLAARS-CF model is necessary. The Earth flyby anomaly is an open question in orbit determination which has gone unsolved for over 20 years. This study quantifies the influence of penumbra SRP modeling errors on the observed anomalies from the Galileo, Cassini, and Rosetta Earth flybys. The results of this study prove that penumbra SRP is not an explanation for or significant contributor to the Earth flyby anomaly.
NASA Technical Reports Server (NTRS)
Schmidt, R. J.; Dodds, R. H., Jr.
1985-01-01
The dynamic analysis of complex structural systems using the finite element method and multilevel substructured models is presented. The fixed-interface method is selected for substructure reduction because of its efficiency, accuracy, and adaptability to restart and reanalysis. This method is extended to the reduction of substructures which are themselves composed of reduced substructures. The implementation and performance of the method in a general-purpose software system are emphasized. Solution algorithms consistent with the chosen data structures are presented. It is demonstrated that successful finite element software requires the use of software executives to supplement the algorithmic language. The complexity of the implementation of restart and reanalysis procedures illustrates the need for executive systems to support the noncomputational aspects of the software. It is shown that significant computational efficiencies can be achieved through proper use of substructuring and reduction techniques without sacrificing solution accuracy. The restart and reanalysis capabilities and the flexible procedures for multilevel substructured modeling give economical yet accurate analyses of complex structural systems.
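The fixed-interface method referred to here is commonly implemented as the Craig-Bampton transformation: boundary degrees of freedom are retained, and the interior is represented by static constraint modes plus a truncated set of fixed-interface normal modes. A dense-matrix sketch follows; the names and the dense solvers are ours, not the software system's.

```python
import numpy as np
from scipy.linalg import eigh

def craig_bampton(K, M, b, i, n_modes):
    """Fixed-interface (Craig-Bampton) reduction of stiffness K and mass M:
    keep boundary DOFs `b` plus `n_modes` clamped-interface normal modes
    of the interior DOFs `i`. Returns reduced matrices and the basis T."""
    Kii, Kib = K[np.ix_(i, i)], K[np.ix_(i, b)]
    # Constraint modes: static interior response to unit boundary motion
    psi = -np.linalg.solve(Kii, Kib)
    # Fixed-interface normal modes: generalized eigenproblem, interface clamped
    w2, phi = eigh(Kii, M[np.ix_(i, i)])
    phi = phi[:, :n_modes]                   # lowest-frequency modes
    # Reduction basis in (interior, boundary) ordering
    T = np.block([[psi, phi],
                  [np.eye(len(b)), np.zeros((len(b), n_modes))]])
    idx = np.concatenate([i, b])
    Kr = T.T @ K[np.ix_(idx, idx)] @ T
    Mr = T.T @ M[np.ix_(idx, idx)] @ T
    return Kr, Mr, T
```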
Xia, Bing; Mamonov, Artem; Leysen, Seppe; Allen, Karen N; Strelkov, Sergei V; Paschalidis, Ioannis Ch; Vajda, Sandor; Kozakov, Dima
2015-07-30
The protein-protein docking server ClusPro is used by thousands of laboratories, and models built by the server have been reported in over 300 publications. Although the structures generated by the docking include near-native ones for many proteins, selecting the best model is difficult due to the uncertainty in scoring. Small-angle X-ray scattering (SAXS) is an experimental technique for obtaining low-resolution structural information in solution. While not sufficient on its own to uniquely predict complex structures, accounting for SAXS data improves the ranking of models and facilitates the identification of the most accurate structure. Although SAXS profiles are currently available only for a small number of complexes, the method is becoming increasingly popular due to its simplicity. Since combining docking with SAXS experiments will provide a viable strategy for fairly high-throughput determination of protein complex structures, the option of using SAXS restraints has been added to the ClusPro server.
NASA Astrophysics Data System (ADS)
Ma, Huanfei; Leng, Siyang; Tao, Chenyang; Ying, Xiong; Kurths, Jürgen; Lai, Ying-Cheng; Lin, Wei
2017-07-01
Data-based and model-free accurate identification of intrinsic time delays and directional interactions is an extremely challenging problem in complex dynamical systems and the reconstruction of their networks. A model-free method with new scores is proposed that is generally capable of detecting single, multiple, and distributed time delays. The method is applicable not only to mutually interacting dynamical variables but also to self-interacting variables in a time-delayed feedback loop. Validation of the method is carried out using physical, biological, and ecological models and real data sets. In particular, applying the method to air pollution data and hospital admission records of cardiovascular diseases in Hong Kong reveals the major air pollutants as a cause of the diseases and, more importantly, uncovers a hidden time delay (about 30-40 days) in the causal influence that previous studies failed to detect. The proposed method is expected to be universally applicable to ascertaining and quantifying subtle interactions (e.g., causation) in complex systems arising from a broad range of disciplines.
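As a point of contrast, the simplest classical delay estimator is the lag that maximizes cross-correlation, sketched below. This is explicitly not the paper's model-free score (which also handles multiple and distributed delays); it is included only to make the inference task concrete.

```python
import numpy as np

def delay_by_xcorr(x, y, max_lag):
    """Baseline delay estimate: the lag k at which corr(x[t], y[t+k])
    is largest in magnitude. Assumes a single dominant delay."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(1, max_lag + 1)
    corr = [np.corrcoef(x[:-k], y[k:])[0, 1] for k in lags]
    return lags[int(np.argmax(np.abs(corr)))]
```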
Using machine learning tools to model complex toxic interactions with limited sampling regimes.
Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W
2013-03-19
A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by only one or a few stressors in natural systems. Laboratory experiments, which practical considerations limit to a few stressors at a few levels, are therefore hard to link to real-world conditions. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means of constructing mathematical models of these interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
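The two-step recipe can be sketched end to end: sample the stressor hyperspace uniformly at random, run the experiments, and fit a neural network to the responses. The bounds, sample size, and stand-in response function below are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Step 1: randomly sample the n-dimensional stressor hyperspace
# (bounds are illustrative; a real design would use measured ranges)
bounds = np.array([[0, 10], [0, 5], [0, 1], [20, 35]])   # 4 hypothetical stressors
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(60, len(bounds)))

# ...run the experiment at each sampled condition to obtain responses y...
y = np.sin(X[:, 0]) * X[:, 1] + X[:, 2] * X[:, 3]        # stand-in response

# Step 2: extract the interaction model with an artificial neural network
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000).fit(X, y)
```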
Method of and apparatus for modeling interactions
Budge, Kent G.
2004-01-13
A method and apparatus for modeling interactions can accurately model tribological and other properties and accommodate topological disruptions. Two portions of a problem space are represented, a first with a Lagrangian mesh and a second with an ALE mesh. The ALE and Lagrangian meshes are constructed so that each node on the surface of the Lagrangian mesh is in a known correspondence with adjacent nodes in the ALE mesh. The interaction can be predicted for a time interval. Material flow within the ALE mesh can accurately model complex interactions such as bifurcation. After prediction, nodes in the ALE mesh in correspondence with nodes on the surface of the Lagrangian mesh can be mapped so that they are once again adjacent to their corresponding Lagrangian mesh nodes. The ALE mesh can then be smoothed to reduce mesh distortion that might reduce the accuracy or efficiency of subsequent prediction steps. The process, from prediction through mapping and smoothing, can be repeated until a terminal condition is reached.
NASA Astrophysics Data System (ADS)
Liu, Dantong; Taylor, Jonathan W.; Young, Dominque E.; Flynn, Michael J.; Coe, Hugh; Allan, James D.
2015-01-01
Assessment of the impacts of brown carbon (BrC) requires accurate determination of its physical properties, but a model must be invoked to derive these from instrument data. Ambient measurements were made in London at a site influenced by traffic and solid fuel (principally wood) burning, apportioned using single particle soot photometer data, with optical properties measured using multiwavelength photoacoustic spectroscopy. Two models were applied: a commonly used Mie model treating the particles as coated single spheres, and a Rayleigh-Debye-Gans approximation treating them as aggregates of smaller coated monomers. The derived solid-fuel BrC parameters at 405 nm were found to be highly sensitive to the model treatment, with a mass absorption cross section ranging from 0.47 to 1.81 m2/g and an imaginary refractive index from 0.013 to 0.062. This demonstrates that detailed knowledge of particle morphology must be obtained and invoked to accurately parameterize BrC properties based on aerosol-phase measurements.
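For scale, the Rayleigh (small-particle) limit links a complex refractive index to a mass absorption cross section in closed form; the study's Mie and aggregate (Rayleigh-Debye-Gans) treatments deviate from this limit, which is precisely why the retrieved parameters are model-sensitive. The index and density in the example are assumed values for illustration.

```python
import numpy as np

def mac_rayleigh(wavelength, density, m):
    """Mass absorption cross-section in the Rayleigh limit:
    MAC = 6*pi*E(m) / (lambda*rho), with E(m) = Im[(m^2-1)/(m^2+2)].
    wavelength in m, density in g/m^3, so MAC comes out in m^2/g."""
    E = ((m ** 2 - 1) / (m ** 2 + 2)).imag
    return 6 * np.pi * E / (wavelength * density)

# e.g. a BrC-like index at 405 nm with an assumed imaginary part of 0.04
# and an assumed material density of 1.5 g/cm^3 (= 1.5e6 g/m^3)
print(mac_rayleigh(405e-9, 1.5e6, 1.55 + 0.04j))   # roughly 0.6 m^2/g
```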
Methods for compressible multiphase flows and their applications
NASA Astrophysics Data System (ADS)
Kim, H.; Choe, Y.; Kim, H.; Min, D.; Kim, C.
2018-06-01
This paper presents an efficient and robust numerical framework to deal with multiphase real-fluid flows and their broad spectrum of engineering applications. A homogeneous mixture model incorporated with a real-fluid equation of state and a phase change model is considered to calculate complex multiphase problems. As robust and accurate numerical methods to handle multiphase shocks and phase interfaces over a wide range of flow speeds, the AUSMPW+_N and RoeM_N schemes with a system preconditioning method are presented. These methods are assessed by extensive validation problems with various types of equation of state and phase change models. Representative realistic multiphase phenomena, including the flow inside a thermal vapor compressor, pressurization in a cryogenic tank, and unsteady cavitating flow around a wedge, are then investigated as application problems. With appropriate physical modeling followed by robust and accurate numerical treatments, compressible multiphase flow physics such as phase changes, shock discontinuities, and their interactions are well captured, confirming the suitability of the proposed numerical framework to wide engineering applications.
Readmission prediction via deep contextual embedding of clinical concepts.
Xiao, Cao; Ma, Tengfei; Dieng, Adji B; Blei, David M; Wang, Fei
2018-01-01
Hospital readmissions incur enormous costs every year. Many hospital readmissions are avoidable, and excessive hospital readmissions can also be harmful to patients. Accurate prediction of hospital readmission can effectively help reduce readmission risk. However, the complex relationship between readmission and potential risk factors makes readmission prediction a difficult task. The main goal of this paper is to explore deep learning models that distill such complex relationships and make accurate predictions. We propose CONTENT, a deep model that predicts hospital readmissions by learning interpretable patient representations, capturing both local and global contexts from patient Electronic Health Records (EHR) through a hybrid Topic Recurrent Neural Network (TopicRNN) model. The experiment was conducted using the EHR of a real-world Congestive Heart Failure (CHF) cohort of 5,393 patients. The proposed model outperforms state-of-the-art methods in readmission prediction (e.g. 0.6103 ± 0.0130 vs. the second best 0.5998 ± 0.0124 in terms of ROC-AUC). The derived patient representations were further utilized for patient phenotyping. The learned phenotypes provide a more precise understanding of readmission risks. Embedding both local and global context in the patient representation not only improves prediction performance but also yields interpretable insights for understanding readmission risks in heterogeneous chronic clinical conditions. This is the first model of its kind to integrate the power of both conventional deep neural networks and probabilistic generative models for highly interpretable deep patient representation learning. Experimental results and case studies demonstrate the improved performance and interpretability of the model.
Point process models for localization and interdependence of punctate cellular structures.
Li, Ying; Majarian, Timothy D; Naik, Armaghan W; Johnson, Gregory R; Murphy, Robert F
2016-07-01
Accurate representations of cellular organization for multiple eukaryotic cell types are required for creating predictive models of dynamic cellular function. To this end, we have previously developed the CellOrganizer platform, an open-source system for generative modeling of cellular components from microscopy images. CellOrganizer models capture the inherent heterogeneity in the spatial distribution, size, and quantity of different components among a cell population. Furthermore, CellOrganizer can generate quantitatively realistic synthetic images that reflect the underlying cell population. A current focus of the project is to model the complex, interdependent nature of organelle localization. We built upon previous work developing multiple non-parametric models of organelles or structures that show punctate patterns. The previous models described the relationships between the subcellular localization of puncta and the positions of cell and nuclear membranes and microtubules. We extend these models to consider the relationship to the endoplasmic reticulum (ER) and the relationship between the positions of different puncta of the same type. Our results do not suggest that the punctate patterns we examined depend on ER position or on inter- and intra-class proximity. With these results, we built classifiers to update previous assignments of proteins to one of 11 patterns in three distinct cell lines. Our generative models demonstrate the ability to construct statistically accurate representations of puncta localization from simple cellular markers in distinct cell types, capturing the complex phenomena of cellular structure interaction with little human input. This protocol represents a novel approach to vesicular protein annotation, a field that is often neglected in high-throughput microscopy. These results suggest that spatial point process models provide useful insight with respect to the spatial dependence between cellular structures.
Hansen, Andreas; Bannwarth, Christoph; Grimme, Stefan; Petrović, Predrag; Werlé, Christophe; Djukic, Jean-Pierre
2014-10-01
Reliable thermochemical measurements and theoretical predictions for reactions involving large transition metal complexes, in which long-range intramolecular London dispersion interactions contribute significantly to their stabilization, are still a challenge, particularly for reactions in solution. As an illustrative and chemically important example, two reactions are investigated in which a large dipalladium complex is quenched by bulky phosphane ligands (triphenylphosphane and tricyclohexylphosphane). Reaction enthalpies and Gibbs free energies were measured by isothermal titration calorimetry (ITC) and theoretically 'back-corrected' to yield 0 K gas-phase reaction energies (ΔE). It is shown that the Gibbs free solvation energy calculated with continuum models represents the largest source of error in theoretical thermochemistry protocols. The 'back-corrected' experimental reaction energies were used to benchmark (dispersion-corrected) density functional and wave function theory methods. In particular, we investigated whether the atom-pairwise D3 dispersion correction is also accurate for transition metal chemistry, and how accurately recently developed local coupled-cluster methods describe the important long-range electron correlation contributions. Both modern dispersion-corrected density functionals (e.g., PW6B95-D3(BJ) or B3LYP-NL) and the now-feasible DLPNO-CCSD(T) calculations agree with the 'experimental' gas-phase reference value. The remaining uncertainties of 2-3 kcal mol(-1) can essentially be attributed to the solvation models. Hence, the future for accurate theoretical thermochemistry of large transition metal reactions in solution is very promising.
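The functional form behind the D3(BJ) corrections benchmarked here is a damped pairwise sum; a sketch with Becke-Johnson damping follows. The C6 and R0 inputs and the a1/a2 values are placeholders rather than the published D3 parameters, and the real correction also includes C8 terms and coordination-number-dependent coefficients.

```python
import numpy as np

def e_disp_bj(coords, C6, R0, a1=0.4, a2=4.8):
    """Pairwise dispersion energy with Becke-Johnson damping:
    E = -sum_{i<j} C6_ij / (r_ij^6 + (a1*R0_ij + a2)^6).
    coords: (n, 3) positions; C6, R0: (n, n) coefficient matrices."""
    E = 0.0
    n = len(coords)
    for i in range(n):
        for j in range(i + 1, n):
            r6 = np.sum((coords[i] - coords[j]) ** 2) ** 3   # r^6
            damp = (a1 * R0[i, j] + a2) ** 6                 # BJ damping
            E -= C6[i, j] / (r6 + damp)
    return E
```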
Conformational Transitions upon Ligand Binding: Holo-Structure Prediction from Apo Conformations
Seeliger, Daniel; de Groot, Bert L.
2010-01-01
Biological function of proteins is frequently associated with the formation of complexes with small-molecule ligands. Experimental structure determination of such complexes at atomic resolution, however, can be time-consuming and costly. Computational methods for structure prediction of protein/ligand complexes, particularly docking, are as yet restricted by their limited consideration of receptor flexibility, rendering them inapplicable for predicting protein/ligand complexes when large conformational changes of the receptor upon ligand binding are involved. Accurate receptor models in the ligand-bound state (holo structures), however, are a prerequisite for successful structure-based drug design. Hence, if the only available structure is an unbound (apo) form distinct from the ligand-bound conformation, structure-based drug design is severely limited. We present a method to predict the structure of protein/ligand complexes based solely on the apo structure, the ligand, and the radius of gyration of the holo structure. The method is applied to ten cases in which proteins undergo structural rearrangements of up to 7.1 Å backbone RMSD upon ligand binding. In all cases, receptor models within 1.6 Å backbone RMSD of the target were predicted, and close-to-native ligand binding poses were obtained for 8 of 10 cases in the top-ranked complex models. A protocol is presented that is expected to enable structure modeling of protein/ligand complexes and structure-based drug design for cases where crystal structures of ligand-bound conformations are not available. PMID:20066034
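The backbone RMSD used throughout to grade models is the residual after optimal superposition, conventionally computed with the Kabsch algorithm; a compact sketch (ours, not the authors' tooling):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between point sets P and Q (n x 3) after optimal superposition
    via the Kabsch algorithm (SVD of the covariance matrix)."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))
    R = V @ np.diag([1, 1, d]) @ Wt        # proper rotation (no reflection)
    return np.sqrt(((P @ R - Q) ** 2).sum() / len(P))
```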
An Accurate and Stable FFT-based Method for Pricing Options under Exp-Lévy Processes
NASA Astrophysics Data System (ADS)
Ding, Deng; Chong U, Sio
2010-05-01
An accurate and stable method for pricing European options in exp-Lévy models is presented. The main idea of this new method is to combine the quadrature technique with the Carr-Madan Fast Fourier Transform method. Theoretical analysis shows that the overall complexity of the new method is still O(N log N) with N grid points, as for the fast Fourier transform methods. Numerical experiments for different exp-Lévy processes also show that the proposed numerical algorithm remains accurate and stable for small strike prices K, thereby developing and improving the Carr-Madan method.
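The Carr-Madan half of such a scheme is compact enough to sketch: damp the call price in log-strike, obtain its Fourier transform analytically from the characteristic function of ln S_T, and invert on a grid with a single FFT. The sketch below uses the Black-Scholes characteristic function as the simplest exp-Lévy example; the quadrature refinement responsible for the paper's improved stability at small strikes is not reproduced, and all parameter values are illustrative:

```python
import numpy as np

def carr_madan_call(S0, K, r, T, cf, alpha=1.5, N=2**12, eta=0.25):
    """Carr-Madan FFT pricing of European calls; cf(u) is the risk-neutral
    characteristic function of ln(S_T) for the chosen exp-Levy model."""
    lam = 2.0 * np.pi / (N * eta)              # log-strike grid spacing
    b = 0.5 * N * lam                          # log strikes cover [-b, b)
    v = np.arange(N) * eta                     # Fourier integration grid
    # Fourier transform of the alpha-damped call price
    psi = np.exp(-r * T) * cf(v - 1j * (alpha + 1.0)) / (
        alpha**2 + alpha - v**2 + 1j * (2.0 * alpha + 1.0) * v)
    w = (3.0 + (-1.0) ** np.arange(N)) / 3.0   # Simpson-type quadrature weights
    w[0] = 1.0 / 3.0
    fft_in = np.exp(1j * v * b) * psi * eta * w
    k = -b + lam * np.arange(N)                # log-strike grid
    calls = np.exp(-alpha * k) / np.pi * np.real(np.fft.fft(fft_in))
    return np.interp(np.log(K), k, calls)

# Black-Scholes characteristic function as the simplest exp-Levy example.
S0, r, sigma, T = 100.0, 0.05, 0.2, 1.0
cf = lambda u: np.exp(1j * u * (np.log(S0) + (r - 0.5 * sigma**2) * T)
                      - 0.5 * sigma**2 * u**2 * T)
print(carr_madan_call(S0, 100.0, r, T, cf))    # ~10.45, the Black-Scholes value
```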
A New Methodology for Turbulence Modelers Using DNS Database Analysis
NASA Technical Reports Server (NTRS)
Parneix, S.; Durbin, P.
1996-01-01
Many industrial applications in fields such as aeronautical, mechanical, thermal, and environmental engineering involve complex turbulent flows containing global separations and subsequent reattachment zones. Accurate prediction of these phenomena is very important because separations influence the whole fluid flow and may have an even bigger impact on surface heat transfer. In particular, reattaching flows are known to be responsible for large local variations of the wall heat transfer coefficient, as well as for modifying the overall heat transfer. For incompressible, non-buoyant situations, the fluid mechanics have to be accurately predicted in order to obtain a good resolution of the temperature field.
Modeling complexes of modeled proteins.
Anishchenko, Ivan; Kundrotas, Petras J; Vakser, Ilya A
2017-03-01
Structural characterization of proteins is essential for understanding life processes at the molecular level. However, only a fraction of known proteins have experimentally determined structures. This fraction is even smaller for protein-protein complexes. Thus, structural modeling of protein-protein interactions (docking) primarily has to rely on modeled structures of the individual proteins, which typically are less accurate than the experimentally determined ones. Such "double" modeling is the Grand Challenge of structural reconstruction of the interactome. Yet it remains so far largely untested in a systematic way. We present a comprehensive validation of template-based and free docking on a set of 165 complexes, where each protein model has six levels of structural accuracy, from 1 to 6 Å C α RMSD. Many template-based docking predictions fall into the acceptable quality category, according to the CAPRI criteria, even for highly inaccurate proteins (5-6 Å RMSD), although the number of such models (and, consequently, the docking success rate) drops significantly for models with RMSD > 4 Å. The results show that the existing docking methodologies can be successfully applied to protein models with a broad range of structural accuracy, and that template-based docking is much less sensitive to inaccuracies of protein models than free docking. Proteins 2017; 85:470-478. © 2016 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Lateh, Masitah Abdul; Kamilah Muda, Azah; Yusof, Zeratul Izzah Mohd; Azilah Muda, Noor; Sanusi Azmi, Mohd
2017-09-01
The emerging era of big data over the past few years has led to large and complex data sets that demand faster and better decision making. However, small-dataset problems still arise in certain areas, making analysis and decision making difficult. In order to build a prediction model, a large sample is required for training, and a small dataset is insufficient to produce an accurate prediction model. This paper reviews artificial data generation approaches as one solution to the small dataset problem.
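As a concrete (and deliberately naive) example of what 'artificial data generation' means in this setting, one simple family of methods bootstraps the small dataset and perturbs each resampled point with noise scaled to the observed feature spread. The sketch below shows only this naive variant, not the specific schemes the review surveys:

```python
import numpy as np

rng = np.random.default_rng(0)

def virtual_samples(X, n_new, noise_scale=0.1):
    """Naive virtual-sample generation: bootstrap rows of the small dataset
    and perturb each feature with noise scaled to its observed spread."""
    idx = rng.integers(0, len(X), size=n_new)
    spread = X.std(axis=0)
    return X[idx] + rng.normal(0.0, noise_scale * spread, size=(n_new, X.shape[1]))

X_small = rng.normal(size=(12, 3))                       # a small training set
X_train = np.vstack([X_small, virtual_samples(X_small, 100)])
```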
Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.
Wu, Tim; Hung, Alice; Mithraratne, Kumar
2014-11-01
This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh, are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely-isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, eyelids, and between superficial soft tissue continuum and deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.
Predicting intensity ranks of peptide fragment ions.
Frank, Ari M
2009-05-01
Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that are combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal multiple reaction monitoring (MRM) transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html.
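One common way to realize a ranking-based boosted model of this kind is a pairwise transform: train a boosted classifier to decide which of two fragments should rank higher, then sort fragments by the learned score. The sketch below is a generic stand-in with synthetic features and a hidden intensity score; it is not PepNovo+'s actual feature set or boosting variant:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Toy per-fragment features, standing in for simple sequence-based features
# (ion type, cleavage position, flanking residues, ...).
X = rng.normal(size=(300, 4))
hidden_intensity = X @ np.array([0.8, -0.5, 0.3, 1.1])   # synthetic ground truth

# Pairwise transform: learn which fragment of a random pair ranks higher.
i, j = rng.integers(0, len(X), size=(2, 4000))
keep = i != j
X_pair = X[i[keep]] - X[j[keep]]
y_pair = (hidden_intensity[i[keep]] > hidden_intensity[j[keep]]).astype(int)

ranker = GradientBoostingClassifier(n_estimators=100, max_depth=2)
ranker.fit(X_pair, y_pair)

# Predicted ranks follow from sorting fragments by their pairwise score
# relative to a reference (here, the mean feature vector).
scores = ranker.decision_function(X - X.mean(axis=0))
predicted_ranks = np.argsort(np.argsort(-scores))
```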
Predicting Intensity Ranks of Peptide Fragment Ions
Frank, Ari M.
2009-01-01
Accurate modeling of peptide fragmentation is necessary for the development of robust scoring functions for peptide-spectrum matches, which are the cornerstone of MS/MS-based identification algorithms. Unfortunately, peptide fragmentation is a complex process that can involve several competing chemical pathways, which makes it difficult to develop generative probabilistic models that describe it accurately. However, the vast amounts of MS/MS data being generated now make it possible to use data-driven machine learning methods to develop discriminative ranking-based models that predict the intensity ranks of a peptide's fragment ions. We use simple sequence-based features that are combined by a boosting algorithm into models that make peak rank predictions with high accuracy. In an accompanying manuscript, we demonstrate how these prediction models are used to significantly improve the performance of peptide identification algorithms. The models can also be useful in the design of optimal MRM transitions, in cases where there is insufficient experimental data to guide the peak selection process. The prediction algorithm can also be run independently through PepNovo+, which is available for download from http://bix.ucsd.edu/Software/PepNovo.html. PMID:19256476
Global Quantitative Modeling of Chromatin Factor Interactions
Zhou, Jian; Troyanskaya, Olga G.
2014-01-01
Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles — we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896
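For binary presence/absence profiles, a pairwise maximum-entropy model with regularization-based structure learning can be approximated by node-wise L1-regularized logistic regressions (the pseudolikelihood trick): each factor is regressed on all others, and surviving nonzero coefficients indicate direct dependencies rather than mere correlations. A toy sketch under that assumption, not the authors' actual estimator:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
M = rng.integers(0, 2, size=(5000, 10))   # toy binary presence/absence profiles

# Pseudolikelihood approximation of a pairwise maximum-entropy (Ising-like)
# model: regress each factor on all others with an L1 penalty; nonzero
# coefficients indicate direct (not merely correlated) dependencies.
couplings = np.zeros((10, 10))
for f in range(10):
    others = np.delete(np.arange(10), f)
    lr = LogisticRegression(penalty="l1", C=0.1, solver="liblinear")
    lr.fit(M[:, others], M[:, f])
    couplings[f, others] = lr.coef_[0]
```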
Double symbolic joint entropy in nonlinear dynamic complexity analysis
NASA Astrophysics Data System (ADS)
Yao, Wenpo; Wang, Jun
2017-07-01
Symbolizations, the basis of symbolic dynamic analysis, are classified into global static and local dynamic approaches, which are combined by joint entropy in our work for nonlinear dynamic complexity analysis. Two global static methods (the symbolic transformations of Wessel N. symbolic entropy and of base-scale entropy) and two local dynamic ones (the symbolizations of permutation and of differential entropy) constitute four double symbolic joint entropies, which detect complexity accurately in chaotic model series (logistic and Henon maps). In nonlinear dynamical analysis of different kinds of heart rate variability, heartbeats of the healthy young have higher complexity than those of the healthy elderly, and congestive heart failure (CHF) patients have the lowest joint entropy values. Each individual symbolic entropy is improved by the double symbolic joint entropy, among which the combination of base-scale and differential symbolizations gives the best complexity analysis. Test results prove that double symbolic joint entropy is feasible for nonlinear dynamic complexity analysis.
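The construction is straightforward to prototype: symbolize the same series once with a global static rule and once with a local dynamic rule, then compute the Shannon entropy of the joint symbol distribution. The sketch below pairs equal-width amplitude binning with ordinal (permutation) patterns on a synthetic series; the specific Wessel, base-scale, and differential transformations of the paper are not reproduced:

```python
import numpy as np
from collections import Counter

def static_symbols(x, n_bins=4):
    """Global static symbolization: equal-width amplitude binning."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)[1:-1]
    return np.digitize(x, edges)

def permutation_symbols(x, m=3):
    """Local dynamic symbolization: ordinal-pattern index of each m-window."""
    windows = np.lib.stride_tricks.sliding_window_view(x, m)
    order = np.argsort(windows, axis=1)
    # encode each ordinal pattern as a single integer symbol
    return (order * (m ** np.arange(m))).sum(axis=1)

def joint_entropy(a, b):
    """Shannon entropy (bits) of the joint symbol distribution."""
    n = min(len(a), len(b))
    p = np.array(list(Counter(zip(a[:n], b[:n])).values())) / n
    return -(p * np.log2(p)).sum()

x = np.random.default_rng(3).normal(size=2000)  # stand-in for an RR-interval series
print(joint_entropy(static_symbols(x), permutation_symbols(x)))
```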
Principles of assembly reveal a periodic table of protein complexes.
Ahnert, Sebastian E; Marsh, Joseph A; Hernández, Helena; Robinson, Carol V; Teichmann, Sarah A
2015-12-11
Structural insights into protein complexes have had a broad impact on our understanding of biological function and evolution. In this work, we sought a comprehensive understanding of the general principles underlying quaternary structure organization in protein complexes. We first examined the fundamental steps by which protein complexes can assemble, using experimental and structure-based characterization of assembly pathways. Most assembly transitions can be classified into three basic types, which can then be used to exhaustively enumerate a large set of possible quaternary structure topologies. These topologies, which include the vast majority of observed protein complex structures, enable a natural organization of protein complexes into a periodic table. On the basis of this table, we can accurately predict the expected frequencies of quaternary structure topologies, including those not yet observed. These results have important implications for quaternary structure prediction, modeling, and engineering. Copyright © 2015, American Association for the Advancement of Science.
Gardner, Jameson K.; Herbst-Kralovetz, Melissa M.
2016-01-01
The key to better understanding complex virus-host interactions is the utilization of robust three-dimensional (3D) human cell cultures that effectively recapitulate native tissue architecture and model the microenvironment. A lack of physiologically-relevant animal models for many viruses has limited the elucidation of factors that influence viral pathogenesis and of complex host immune mechanisms. Conventional monolayer cell cultures may support viral infection, but are unable to form the tissue structures and complex microenvironments that mimic host physiology, thereby limiting their translational utility. The rotating wall vessel (RWV) bioreactor was designed by the National Aeronautics and Space Administration (NASA) to model microgravity and was later found to more accurately reproduce features of human tissue in vivo. Cells grown in RWV bioreactors develop in a low fluid-shear environment, which enables cells to form complex 3D tissue-like aggregates. A wide variety of human tissues (from neuronal to vaginal tissue) have been grown in RWV bioreactors and have been shown to support productive viral infection and physiologically meaningful host responses. The in vivo-like characteristics and cellular features of the human 3D RWV-derived aggregates make them ideal model systems to effectively recapitulate pathophysiology and host responses necessary to conduct rigorous basic science, preclinical and translational studies. PMID:27834891
NASA Astrophysics Data System (ADS)
Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua
2014-11-01
Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate for better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection area into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than the LIM-based ART that uses the Siddon algorithm and DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.
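The expensive part of an AIM system matrix is precisely this beam-pixel intersection area. A brute-force but transparent way to compute the same quantities is generic polygon clipping; the sketch below does this with shapely, trading the paper's recursive boundary tracking and six-case analytic classification (and hence its speed) for clarity. The beam geometry and grid are illustrative:

```python
from shapely.geometry import Polygon, box

def beam_pixel_weights(beam_vertices, nx, ny):
    """Exact intersection areas between one narrow fan-beam (the polygon
    spanned by its two boundary lines) and every unit pixel it overlaps."""
    beam = Polygon(beam_vertices)
    xmin, ymin, xmax, ymax = beam.bounds
    weights = {}
    for i in range(max(0, int(xmin)), min(nx, int(xmax) + 1)):
        for j in range(max(0, int(ymin)), min(ny, int(ymax) + 1)):
            a = beam.intersection(box(i, j, i + 1, j + 1)).area
            if a > 0.0:
                weights[(i, j)] = a   # one system-matrix entry per pixel
    return weights

# A narrow triangular beam from a source at (-5, 8) crossing a 16x16 grid.
w = beam_pixel_weights([(-5.0, 8.0), (16.0, 6.5), (16.0, 7.5)], nx=16, ny=16)
```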
A Self-Folding Hydrogel In Vitro Model for Ductal Carcinoma
Kwag, Hye Rin; Serbo, Janna V.; Korangath, Preethi; Sukumar, Saraswati
2016-01-01
A significant challenge in oncology is the need to develop in vitro models that accurately mimic the complex microenvironment within and around normal and diseased tissues. Here, we describe a self-folding approach to create curved hydrogel microstructures that more accurately mimic the geometry of ducts and acini within the mammary glands, as compared to existing three-dimensional block-like models or flat dishes. The microstructures are composed of photopatterned bilayers of poly (ethylene glycol) diacrylate (PEGDA), a hydrogel widely used in tissue engineering. The PEGDA bilayers of dissimilar molecular weights spontaneously curve when released from the underlying substrate due to differential swelling ratios. The photopatterns can be altered via AutoCAD-designed photomasks so that a variety of ductal and acinar mimetic structures can be mass-produced. In addition, by co-polymerizing methacrylated gelatin (methagel) with PEGDA, microstructures with increased cell adherence are synthesized. Biocompatibility and versatility of our approach is highlighted by culturing either SUM159 cells, which were seeded postfabrication, or MDA-MB-231 cells, which were encapsulated in hydrogels; cell viability is verified over 9 and 15 days, respectively. We believe that self-folding processes and associated tubular, curved, and folded constructs like the ones demonstrated here can facilitate the design of more accurate in vitro models for investigating ductal carcinoma. PMID:26831041
A Self-Folding Hydrogel In Vitro Model for Ductal Carcinoma.
Kwag, Hye Rin; Serbo, Janna V; Korangath, Preethi; Sukumar, Saraswati; Romer, Lewis H; Gracias, David H
2016-04-01
A significant challenge in oncology is the need to develop in vitro models that accurately mimic the complex microenvironment within and around normal and diseased tissues. Here, we describe a self-folding approach to create curved hydrogel microstructures that more accurately mimic the geometry of ducts and acini within the mammary glands, as compared to existing three-dimensional block-like models or flat dishes. The microstructures are composed of photopatterned bilayers of poly (ethylene glycol) diacrylate (PEGDA), a hydrogel widely used in tissue engineering. The PEGDA bilayers of dissimilar molecular weights spontaneously curve when released from the underlying substrate due to differential swelling ratios. The photopatterns can be altered via AutoCAD-designed photomasks so that a variety of ductal and acinar mimetic structures can be mass-produced. In addition, by co-polymerizing methacrylated gelatin (methagel) with PEGDA, microstructures with increased cell adherence are synthesized. Biocompatibility and versatility of our approach is highlighted by culturing either SUM159 cells, which were seeded postfabrication, or MDA-MB-231 cells, which were encapsulated in hydrogels; cell viability is verified over 9 and 15 days, respectively. We believe that self-folding processes and associated tubular, curved, and folded constructs like the ones demonstrated here can facilitate the design of more accurate in vitro models for investigating ductal carcinoma.
Revisiting the structures of several antibiotics bound to the bacterial ribosome
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bulkley, David; Innis, C. Axel; Blaha, Gregor
2010-10-08
The increasing prevalence of antibiotic-resistant pathogens reinforces the need for structures of antibiotic-ribosome complexes that are accurate enough to enable the rational design of novel ribosome-targeting therapeutics. Structures of many antibiotics in complex with both archaeal and eubacterial ribosomes have been determined, yet discrepancies between several of these models have raised the question of whether these differences arise from species-specific variations or from experimental problems. Our structure of chloramphenicol in complex with the 70S ribosome from Thermus thermophilus suggests a model for chloramphenicol bound to the large subunit of the bacterial ribosome that is radically different from the prevailing model. Further, our structures of the macrolide antibiotics erythromycin and azithromycin in complex with a bacterial ribosome are indistinguishable from those determined of complexes with the 50S subunit of Haloarcula marismortui, but differ significantly from the models that have been published for 50S subunit complexes of the eubacterium Deinococcus radiodurans. Our structure of the antibiotic telithromycin bound to the T. thermophilus ribosome reveals a lactone ring with a conformation similar to that observed in the H. marismortui and D. radiodurans complexes. However, the alkyl-aryl moiety is oriented differently in all three organisms, and the contacts observed with the T. thermophilus ribosome are consistent with biochemical studies performed on the Escherichia coli ribosome. Thus, our results support a mode of macrolide binding that is largely conserved across species, suggesting that the quality and interpretation of electron density, rather than species specificity, may be responsible for many of the discrepancies between the models.
Revisiting the Structures of Several Antibiotics Bound to the Bacterial Ribosome
DOE Office of Scientific and Technical Information (OSTI.GOV)
D Bulkley; C Innis; G Blaha
2011-12-31
The increasing prevalence of antibiotic-resistant pathogens reinforces the need for structures of antibiotic-ribosome complexes that are accurate enough to enable the rational design of novel ribosome-targeting therapeutics. Structures of many antibiotics in complex with both archaeal and eubacterial ribosomes have been determined, yet discrepancies between several of these models have raised the question of whether these differences arise from species-specific variations or from experimental problems. Our structure of chloramphenicol in complex with the 70S ribosome from Thermus thermophilus suggests a model for chloramphenicol bound to the large subunit of the bacterial ribosome that is radically different from the prevailing model. Further, our structures of the macrolide antibiotics erythromycin and azithromycin in complex with a bacterial ribosome are indistinguishable from those determined of complexes with the 50S subunit of Haloarcula marismortui, but differ significantly from the models that have been published for 50S subunit complexes of the eubacterium Deinococcus radiodurans. Our structure of the antibiotic telithromycin bound to the T. thermophilus ribosome reveals a lactone ring with a conformation similar to that observed in the H. marismortui and D. radiodurans complexes. However, the alkyl-aryl moiety is oriented differently in all three organisms, and the contacts observed with the T. thermophilus ribosome are consistent with biochemical studies performed on the Escherichia coli ribosome. Thus, our results support a mode of macrolide binding that is largely conserved across species, suggesting that the quality and interpretation of electron density, rather than species specificity, may be responsible for many of the discrepancies between the models.
Complex Chemical Reaction Networks from Heuristics-Aided Quantum Chemistry.
Rappoport, Dmitrij; Galvin, Cooper J; Zubarev, Dmitry Yu; Aspuru-Guzik, Alán
2014-03-11
While structures and reactivities of many small molecules can be computed efficiently and accurately using quantum chemical methods, heuristic approaches remain essential for modeling complex structures and large-scale chemical systems. Here, we present a heuristics-aided quantum chemical methodology applicable to complex chemical reaction networks such as those arising in cell metabolism and prebiotic chemistry. Chemical heuristics offer an expedient way of traversing high-dimensional reactive potential energy surfaces and are combined here with quantum chemical structure optimizations, which yield the structures and energies of the reaction intermediates and products. Application of heuristics-aided quantum chemical methodology to the formose reaction reproduces the experimentally observed reaction products, major reaction pathways, and autocatalytic cycles.
Molecular modeling of biomolecules by paramagnetic NMR and computational hybrid methods.
Pilla, Kala Bharath; Gaalswyk, Kari; MacCallum, Justin L
2017-11-01
The 3D atomic structures of biomolecules and their complexes are key to our understanding of biomolecular function, recognition, and mechanism. However, it is often difficult to obtain structures, particularly for systems that are complex, dynamic, disordered, or exist in environments like cell membranes. In such cases sparse data from a variety of paramagnetic NMR experiments offers one possible source of structural information. These restraints can be incorporated in computer modeling algorithms that can accurately translate the sparse experimental data into full 3D atomic structures. In this review, we discuss various types of paramagnetic NMR/computational hybrid modeling techniques that can be applied to successful modeling of not only the atomic structure of proteins but also their interacting partners. This article is part of a Special Issue entitled: Biophysics in Canada, edited by Lewis Kay, John Baenziger, Albert Berghuis and Peter Tieleman. Copyright © 2017 Elsevier B.V. All rights reserved.
Stauffer, Reto; Mayr, Georg J; Messner, Jakob W; Umlauf, Nikolaus; Zeileis, Achim
2017-06-15
Flexible spatio-temporal models are widely used to create reliable and accurate estimates for precipitation climatologies. Most models are based on square-root-transformed monthly or annual means, for which a normal distribution seems to be appropriate. This assumption becomes invalid on a daily time scale, as the observations involve large fractions of zero observations and are limited to non-negative values. We develop a novel spatio-temporal model to estimate the full climatological distribution of precipitation on a daily time scale over complex terrain using a left-censored normal distribution. The results demonstrate that the new method is able to account for the non-normal distribution and the large fraction of zero observations. The new climatology provides the full climatological distribution on a very high spatial and temporal resolution, and is competitive with, or even outperforms, existing methods, even for arbitrary locations.
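The distributional core of such a model is easy to isolate: treat dry days as left-censored observations of a latent normal variable (on the square-root scale) and maximize the censored likelihood. A minimal fit on synthetic data is sketched below; the full climatology additionally regresses the distribution parameters on smooth spatio-temporal covariates, which is omitted here:

```python
import numpy as np
from scipy import optimize, stats

def censored_normal_nll(params, y, threshold=0.0):
    """Negative log-likelihood of a normal distribution left-censored at
    `threshold` (zeros record only that the latent value was <= threshold)."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    censored = y <= threshold
    ll = stats.norm.logcdf(threshold, mu, sigma) * censored.sum()
    ll += stats.norm.logpdf(y[~censored], mu, sigma).sum()
    return -ll

# Toy daily series: many dry days (zeros) plus sqrt-transformed wet amounts.
rng = np.random.default_rng(4)
latent = rng.normal(-0.5, 1.2, size=3650)
y = np.maximum(latent, 0.0)

res = optimize.minimize(censored_normal_nll, x0=[0.0, 0.0], args=(y,))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
```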
Computer modeling and simulation in inertial confinement fusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCrory, R.L.; Verdon, C.P.
1989-03-01
The complex hydrodynamic and transport processes associated with the implosion of an inertial confinement fusion (ICF) pellet place considerable demands on numerical simulation programs. Processes associated with implosion can usually be described using relatively simple models, but their complex interplay requires that programs model most of the relevant physical phenomena accurately. Most hydrodynamic codes used in ICF incorporate a one-fluid, two-temperature model. Electrons and ions are assumed to flow as one fluid (no charge separation). Due to the relatively weak coupling between the ions and electrons, each species is treated separately in terms of its temperature. In this paper we describe some of the major components associated with an ICF hydrodynamics simulation code. To serve as an example we draw heavily on a two-dimensional Lagrangian hydrodynamic code (ORCHID) written at the University of Rochester's Laboratory for Laser Energetics. 46 refs., 19 figs., 1 tab.
Complex Greenland outlet glacier flow captured
Aschwanden, Andy; Fahnestock, Mark A.; Truffer, Martin
2016-01-01
The Greenland Ice Sheet is losing mass at an accelerating rate due to increased surface melt and flow acceleration in outlet glaciers. Quantifying future dynamic contributions to sea level requires accurate portrayal of outlet glaciers in ice sheet simulations, but to date poor knowledge of subglacial topography and limited model resolution have prevented reproduction of complex spatial patterns of outlet flow. Here we combine a high-resolution ice-sheet model coupled to uniformly applied models of subglacial hydrology and basal sliding, and a new subglacial topography data set to simulate the flow of the Greenland Ice Sheet. Flow patterns of many outlet glaciers are well captured, illustrating fundamental commonalities in outlet glacier flow and highlighting the importance of efforts to map subglacial topography. Success in reproducing present day flow patterns shows the potential for prognostic modelling of ice sheets without the need for spatially varying parameters with uncertain time evolution. PMID:26830316
Complex phase error and motion estimation in synthetic aperture radar imaging
NASA Astrophysics Data System (ADS)
Soumekh, M.; Yang, H.
1991-06-01
Attention is given to a SAR wave equation-based system model that accurately represents the interaction of the impinging radar signal with the target to be imaged. The model is used to estimate the complex phase error across the synthesized aperture from the measured corrupted SAR data by combining the two wave equation models governing the collected SAR data at two temporal frequencies of the radar signal. The SAR system model shows that the motion of an object in a static scene results in coupled Doppler shifts in both the temporal frequency domain and the spatial frequency domain of the synthetic aperture. The velocity of the moving object is estimated through these two Doppler shifts. It is shown that once the dynamic target's velocity is known, its reconstruction can be formulated via a squint-mode SAR geometry with parameters that depend upon the dynamic target's velocity.
Modeling Complex Biological Flows in Multi-Scale Systems using the APDEC Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trebotich, D
We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA-laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.
Metallurgical Plant Optimization Through the use of Flowsheet Simulation Modelling
NASA Astrophysics Data System (ADS)
Kennedy, Mark William
Modern metallurgical plants typically have complex flowsheets and operate on a continuous basis. Real-time interactions within such processes can be complex, and the impacts of streams such as recycles on process efficiency and stability can be highly unexpected prior to actual operation. Current desktop computing power, combined with state-of-the-art flowsheet simulation software like Metsim, allows for thorough analysis of designs to explore the interaction between operating rate, heat and mass balances, and in particular the potential negative impact of recycles. Using plant information systems, it is possible to combine real plant data with simple steady-state models, using dynamic data exchange links to allow for near real-time de-bottlenecking of operations. Accurate analytical results can also be combined with detailed unit operations models to allow for feed-forward model-based control. This paper will explore some examples of the application of Metsim to real-world engineering and plant operational issues.
Modeling complex biological flows in multi-scale systems using the APDEC framework
NASA Astrophysics Data System (ADS)
Trebotich, David
2006-09-01
We have developed advanced numerical algorithms to model biological fluids in multiscale flow environments using the software framework developed under the SciDAC APDEC ISIC. The foundation of our computational effort is an approach for modeling DNA laden fluids as ''bead-rod'' polymers whose dynamics are fully coupled to an incompressible viscous solvent. The method is capable of modeling short range forces and interactions between particles using soft potentials and rigid constraints. Our methods are based on higher-order finite difference methods in complex geometry with adaptivity, leveraging algorithms and solvers in the APDEC Framework. Our Cartesian grid embedded boundary approach to incompressible viscous flow in irregular geometries has also been interfaced to a fast and accurate level-sets method within the APDEC Framework for extracting surfaces from volume renderings of medical image data and used to simulate cardio-vascular and pulmonary flows in critical anatomies.
Tests of high-resolution simulations over a region of complex terrain in Southeast coast of Brazil
NASA Astrophysics Data System (ADS)
Chou, Sin Chan; Luís Gomes, Jorge; Ristic, Ivan; Mesinger, Fedor; Sueiro, Gustavo; Andrade, Diego; Lima-e-Silva, Pedro Paulo
2013-04-01
The Eta Model has been used operationally by INPE at the Centre for Weather Forecasts and Climate Studies (CPTEC) to produce weather forecasts over South America since 1997, and has gone through upgrades over these years. In order to prepare the model for operational higher-resolution forecasts, it is configured and tested over a region of complex topography located near the coast of Southeast Brazil. The model domain includes the two Brazilian cities Rio de Janeiro and Sao Paulo, urban areas, preserved tropical forest, pasture fields, and complex terrain that rises from sea level up to about 1000 m. Accurate near-surface wind direction and magnitude are needed for the power plant emergency plan. Besides, the region suffers from frequent events of floods and landslides; therefore, accurate local forecasts are required for disaster warnings. The objective of this work is to carry out a series of numerical experiments to test and evaluate high-resolution simulations in this complex area. Verification of the model runs uses observations taken from the nuclear power plant and higher-resolution reanalysis data. The runs were tested in a period when the flow was predominantly forced by local conditions and in a period forced by a frontal passage. The Eta Model was configured initially with 2-km horizontal resolution and 50 layers. The Eta-2km run is a second nest: it is driven by the Eta-15km run, which in turn is driven by ERA-Interim reanalyses. The series of experiments consists of replacing the surface-layer stability function, adjusting cloud microphysics scheme parameters, and further increasing the vertical and horizontal resolutions. Replacing the stability function for stable conditions substantially increased the katabatic winds, which then verified better against the tower wind data. Precipitation produced by the model was excessive in the region, and increasing the vertical resolution to 60 layers caused a further increase in precipitation production. This excessive precipitation was reduced by adjusting some parameters in the cloud microphysics scheme; precipitation is still overestimated, however, and further tests are necessary. The increase of horizontal resolution to 1 km required adjusting the model diffusion parameters and refining the divergence calculations. The limited availability of observations in the region for a thorough evaluation remains a major constraint.
Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald
2011-06-01
Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems.
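The HC-PLSR recipe lends itself to a compact prototype: fit a global PLS model, cluster the observations in its score space, then fit one local PLS model per cluster. The sketch below substitutes crisp k-means for the paper's fuzzy C-means and uses a synthetic nonlinear input-output map, so it illustrates the architecture rather than reproducing the published method:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)
X = rng.uniform(-2.0, 2.0, size=(1000, 4))              # model parameters (inputs)
Y = np.sin(3 * X[:, :1]) * X[:, 1:2] + X[:, 2:3] ** 2   # nonlinear output features

# 1) Global PLS to get scores; 2) cluster in score space (KMeans as a crisp
# stand-in for fuzzy C-means); 3) one local PLS model per cluster.
global_pls = PLSRegression(n_components=2).fit(X, Y)
scores = global_pls.transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
local_models = {c: PLSRegression(n_components=2).fit(X[labels == c], Y[labels == c])
                for c in range(4)}
```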
2011-01-01
Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. Conclusions HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems. PMID:21627852
Jatobá, Alessandro; de Carvalho, Paulo Victor R; da Cunha, Amauri Marques
2012-01-01
Work in organizations requires a minimum level of consensus on the understanding of the practices performed. To adopt technological devices to support activities in environments where work is complex, characterized by interdependence among a large number of variables, understanding how work is done not only takes on an even greater importance, but also becomes a more difficult task. This study therefore presents a method for modeling work in complex systems, which improves knowledge about the way activities are performed where those activities do not simply happen by following procedures. Uniting techniques of Cognitive Task Analysis with the concept of Work Process, this work seeks to provide a method capable of giving a detailed and accurate picture of how people perform their tasks, in order to apply information systems for supporting work in organizations.
WScore: A Flexible and Accurate Treatment of Explicit Water Molecules in Ligand-Receptor Docking.
Murphy, Robert B; Repasky, Matthew P; Greenwood, Jeremy R; Tubert-Brohman, Ivan; Jerome, Steven; Annabhimoju, Ramakrishna; Boyles, Nicholas A; Schmitz, Christopher D; Abel, Robert; Farid, Ramy; Friesner, Richard A
2016-05-12
We have developed a new methodology for protein-ligand docking and scoring, WScore, incorporating a flexible description of explicit water molecules. The locations and thermodynamics of the waters are derived from a WaterMap molecular dynamics simulation. The water structure is employed to provide an atomic level description of ligand and protein desolvation. WScore also contains a detailed model for localized ligand and protein strain energy and integrates an MM-GBSA scoring component with these terms to assess delocalized strain of the complex. Ensemble docking is used to take into account induced fit effects on the receptor conformation, and protein reorganization free energies are assigned via fitting to experimental data. The performance of the method is evaluated for pose prediction, rank ordering of self-docked complexes, and enrichment in virtual screening, using a large data set of PDB complexes and compared with the Glide SP and Glide XP models; significant improvements are obtained.
NASA Technical Reports Server (NTRS)
Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)
2007-01-01
A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where observable data are expressible as low order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials; and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model.
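The multiplicative composition described above can be prototyped in a few lines: low-order shape functions over the simple parameters blend neural networks trained over the complex parameters at each vertex. The sketch below uses one simple parameter, two vertices with linear shape functions, and scikit-learn MLPs on synthetic data; it is a schematic of the composite-response-surface idea, not the patented method itself:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)

# One "simple" parameter s (shape-function direction) and two "complex"
# parameters c (neural-network direction); toy response y.
s = rng.uniform(0.0, 1.0, size=2000)
c = rng.uniform(-1.0, 1.0, size=(2000, 2))
y = (1 - s) * np.sin(3 * c[:, 0]) + s * c[:, 1] ** 2

# Linear shape functions over s: a first-order case with vertices s=0 and s=1.
nets = []
for v in (0.0, 1.0):
    near = np.abs(s - v) < 0.2                    # data near this vertex
    net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000, random_state=0)
    nets.append(net.fit(c[near], y[near]))

def composite_surface(s_new, c_new):
    """Multiplicative combination: shape functions blend the vertex networks."""
    return (1 - s_new) * nets[0].predict(c_new) + s_new * nets[1].predict(c_new)

print(composite_surface(np.array([0.5]), np.array([[0.3, -0.4]])))
```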
Lisa J. Bate; Michael J. Wisdom; Barbara C. Wales
2007-01-01
A key element of forest management is the maintenance of sufficient densities of snags (standing dead trees) to support associated wildlife. Management factors that influence snag densities, however, are numerous and complex. Consequently, accurate methods to estimate and model snag densities are needed. Using data collected in 2002 and Current Vegetation Survey (CVS)...
Finite element analyses of two dimensional, anisotropic heat transfer in wood
John F. Hunt; Hongmei Gu
2004-01-01
The anisotropy of wood creates a complex problem for solving heat and mass transfer problems that require analyses be based on fundamental material properties of the wood structure. Inputting basic orthogonal properties of the wood material alone are not sufficient for accurate modeling because wood is a combination of porous fiber cells that are aligned and mis-...
Fully Flexible Docking of Medium Sized Ligand Libraries with RosettaLigand
DeLuca, Samuel; Khar, Karen; Meiler, Jens
2015-01-01
RosettaLigand has been successfully used to predict binding poses in protein-small molecule complexes. However, the RosettaLigand docking protocol is comparatively slow in identifying an initial starting pose for the small molecule (ligand) making it unfeasible for use in virtual High Throughput Screening (vHTS). To overcome this limitation, we developed a new sampling approach for placing the ligand in the protein binding site during the initial ‘low-resolution’ docking step. It combines the translational and rotational adjustments to the ligand pose in a single transformation step. The new algorithm is both more accurate and more time-efficient. The docking success rate is improved by 10–15% in a benchmark set of 43 protein/ligand complexes, reducing the number of models that typically need to be generated from 1000 to 150. The average time to generate a model is reduced from 50 seconds to 10 seconds. As a result we observe an effective 30-fold speed increase, making RosettaLigand appropriate for docking medium sized ligand libraries. We demonstrate that this improved initial placement of the ligand is critical for successful prediction of an accurate binding position in the ‘high-resolution’ full atom refinement step. PMID:26207742
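The key move, applying the translational and rotational perturbations to the ligand pose as a single rigid-body transformation, is simple to express. Below is a generic sketch using scipy's rotation utilities on toy coordinates; the step sizes and acceptance logic of RosettaLigand's actual Transform mover are not reproduced:

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(7)

def transform_move(coords, max_trans=0.5, max_angle_deg=15.0):
    """Single low-resolution move: one combined random rotation about the
    ligand centroid plus one random translation."""
    center = coords.mean(axis=0)
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.radians(rng.uniform(-max_angle_deg, max_angle_deg))
    R = Rotation.from_rotvec(angle * axis).as_matrix()
    t = rng.uniform(-max_trans, max_trans, size=3)
    return (coords - center) @ R.T + center + t

ligand = rng.normal(scale=2.0, size=(24, 3))   # toy ligand coordinates (Å)
new_pose = transform_move(ligand)
```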
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, Stephen J.; Flint, Gregory Mark
Accurate knowledge of the thermophysical properties of concrete is considered extremely important for meaningful models to be developed of scenarios wherein the concrete is rapidly heated. Tests of solid propellant burns on samples of concrete from Launch Complex 17 at Cape Canaveral show spallation and fragmentation. In response to the need for accurate modeling of these observations, an experimental program to determine the permeability and thermal properties of the concrete was developed. Room-temperature gas permeability measurements of Launch Complex 17 concrete dried at 50°C yield permeability estimates of 0.07 mD (mean), and the thermal properties (thermal conductivity, diffusivity, and specific heat) were found to vary with temperature from room temperature to 300°C. Thermal conductivity ranges from 1.7-1.9 W/mK at 50°C to 1.0-1.15 W/mK at 300°C, thermal diffusivity ranges from 0.75-0.96 mm²/s at 50°C to 0.44-0.58 mm²/s at 300°C, and specific heat ranges from 1.76-2.32 MJ/m³K at 50°C to 2.00-2.50 MJ/m³K at 300°C.
Off-design Performance Analysis of Multi-Stage Transonic Axial Compressors
NASA Astrophysics Data System (ADS)
Du, W. H.; Wu, H.; Zhang, L.
Because of the complex flow fields and component interactions in modern gas turbine engines, extensive experiments are required to validate their performance and stability, and the experimental process can become expensive and complex. Modeling and simulation of gas turbine engines are a way to reduce experimental costs, provide fidelity, and enhance the quality of the essential experiments. The flow field of a transonic compressor contains all of the flow aspects that are difficult to predict: boundary layer transition and separation, shock-boundary layer interactions, and large flow unsteadiness. Accurate off-design performance prediction for transonic axial compressors is especially difficult, due in large part to three-dimensional blade design and the resulting flow field. Although recent advancements in computer capacity have brought computational fluid dynamics to the forefront of turbomachinery design and analysis, the grid and the turbulence model still limit Reynolds-averaged Navier-Stokes (RANS) approximations in the multi-stage transonic axial compressor flow field. Streamline curvature methods therefore remain the dominant numerical approach and an important tool for turbomachinery analysis and design, and it is generally accepted that streamline curvature solution techniques will provide satisfactory flow prediction as long as the losses, deviation, and blockage are accurately predicted.
NASA Astrophysics Data System (ADS)
Brunet, V.; Molton, P.; Bézard, H.; Deck, S.; Jacquin, L.
2012-01-01
This paper describes the results obtained during the European Union JEDI (JEt Development Investigations) project carried out in cooperation between ONERA and Airbus. The aim of these studies was first to acquire a complete database of a modern-type engine jet installation set under a wall-to-wall swept wing in various transonic flow conditions. Interactions between the engine jet, the pylon, and the wing were studied thanks to 'advanced' measurement techniques. In parallel, accurate Reynolds-averaged Navier-Stokes (RANS) simulations were carried out, from simple ones with the Spalart-Allmaras model to more complex ones like the DRSM-SSG (Differential Reynolds Stress Model of Speziale-Sarkar-Gatski) turbulence model. In the end, Zonal Detached Eddy Simulations (Z-DES) were also performed to compare different simulation techniques. All numerical results are accurately validated thanks to the experimental database acquired in parallel. This complete and complex study of a modern civil aircraft engine installation allowed many upgrades in understanding and in simulation methods to be obtained. Furthermore, a setup for engine jet installation studies has been validated for possible future work in the S3Ch transonic research wind tunnel. The main conclusions are summed up in this paper.
Air breathing engine/rocket trajectory optimization
NASA Technical Reports Server (NTRS)
Smith, V. K., III
1979-01-01
This research has focused on improving the mathematical models of the air-breathing propulsion systems, which can be mated with the rocket engine model and incorporated in trajectory optimization codes. Improved engine simulations provided accurate representation of the complex cycles proposed for advanced launch vehicles, thereby increasing the confidence in propellant use and payload calculations. The versatile QNEP (Quick Navy Engine Program) was modified to allow treatment of advanced turboaccelerator cycles using hydrogen or hydrocarbon fuels and operating in the vehicle flow field.
Ionospheric scintillation studies
NASA Technical Reports Server (NTRS)
Rino, C. L.; Freemouw, E. J.
1973-01-01
The diffracted field of a monochromatic plane wave was characterized by two complex correlation functions. For a Gaussian complex field, these quantities suffice to completely define the statistics of the field. Thus, one can in principle calculate the statistics of any measurable quantity in terms of the model parameters. The best data fits were achieved for intensity statistics derived under the Gaussian statistics hypothesis. The signal structure that achieved the best fit was nearly invariant with scintillation level and irregularity source (ionosphere or solar wind). It was characterized by the fact that more than 80% of the scattered signal power is in phase quadrature with the undeviated or coherent signal component. Thus, the Gaussian-statistics hypothesis is both convenient and accurate for channel modeling work.
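Under the Gaussian-statistics hypothesis, the field is fully specified by the coherent component plus a zero-mean complex Gaussian scattered component, so any measurable statistic can be obtained by direct simulation. The sketch below generates such a field with more than 80% of the scattered power in phase quadrature, matching the best-fit signal structure described above, and computes the S4 intensity scintillation index; the power levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(8)
n = 200_000

coherent = 1.0        # undeviated (coherent) signal component
scat_power = 0.2      # total scattered power (sets the scintillation level)
quad_frac = 0.85      # >80% of scattered power in phase quadrature

# Zero-mean complex Gaussian scatter with unequal in-phase/quadrature variances
in_phase   = rng.normal(0.0, np.sqrt(scat_power * (1 - quad_frac)), n)
quadrature = rng.normal(0.0, np.sqrt(scat_power * quad_frac), n)
field = coherent + in_phase + 1j * quadrature

I = np.abs(field) ** 2
S4 = np.sqrt(I.var()) / I.mean()   # intensity scintillation index
print(f"S4 = {S4:.3f}")
```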
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osterman, Gordon; Keating, Kristina; Binley, Andrew
Here, we estimate parameters for the Katz and Thompson permeability model using laboratory complex electrical conductivity (CC) and nuclear magnetic resonance (NMR) data to build permeability models parameterized with geophysical measurements. We use the Katz and Thompson model based on the characteristic hydraulic length scale, determined from mercury injection capillary pressure estimates of pore throat size, and the intrinsic formation factor, determined from multisalinity conductivity measurements, for this purpose. Two new permeability models are tested, one based on CC data and another that incorporates CC and NMR data. From measurements made on forty-five sandstone cores collected from fifteen different formations, we evaluate how well the CC relaxation time and the NMR transverse relaxation times compare to the characteristic hydraulic length scale and how well the formation factor estimated from CC parameters compares to the intrinsic formation factor. We find: (1) the NMR transverse relaxation time models the characteristic hydraulic length scale more accurately than the CC relaxation time (R² of 0.69 and 0.33, and normalized root mean square errors (NRMSE) of 0.16 and 0.21, respectively); (2) the CC-estimated formation factor is well correlated with the intrinsic formation factor (NRMSE = 0.23). We demonstrate that permeability estimates from the joint NMR-CC model (NRMSE = 0.13) compare favorably to estimates from the Katz and Thompson model (NRMSE = 0.074). Lastly, this model advances the capability of the Katz and Thompson model by employing parameters measurable in the field, giving it the potential to estimate permeability from geophysical measurements more accurately than is currently possible.
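For reference, the Katz and Thompson estimate itself is a one-line formula, k = c·l_c²/F with c ≈ 1/226, where l_c is the characteristic hydraulic length and F the intrinsic formation factor. A sketch with hypothetical inputs (l_c inferred from an NMR relaxation time via a calibrated surface relaxivity, F from the complex-conductivity estimate):

```python
def katz_thompson_permeability_mD(l_c_um, F, c=1.0 / 226.0):
    """Katz-Thompson estimate k = c * l_c**2 / F; with l_c in microns the
    result is in microns^2, converted here to millidarcy (1 D ~ 0.9869 um^2)."""
    k_um2 = c * l_c_um ** 2 / F
    return k_um2 / 0.9869e-3

# Hypothetical inputs: l_c from an NMR relaxation time via a calibrated
# surface relaxivity; F from the complex-conductivity estimate.
print(katz_thompson_permeability_mD(l_c_um=12.0, F=18.0))   # ~36 mD
```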
Osterman, Gordon; Keating, Kristina; Binley, Andrew; ...
2016-03-18
Here, we estimate parameters for the Katz and Thompson permeability model using laboratory complex electrical conductivity (CC) and nuclear magnetic resonance (NMR) data to build permeability models parameterized with geophysical measurements. We use the Katz and Thompson model based on the characteristic hydraulic length scale, determined from mercury injection capillary pressure estimates of pore throat size, and the intrinsic formation factor, determined from multisalinity conductivity measurements, for this purpose. Two new permeability models are tested, one based on CC data and another that incorporates CC and NMR data. From measurements made on forty-five sandstone cores collected from fifteen different formations, we evaluate how well the CC relaxation time and the NMR transverse relaxation times compare to the characteristic hydraulic length scale and how well the formation factor estimated from CC parameters compares to the intrinsic formation factor. We find: (1) the NMR transverse relaxation time models the characteristic hydraulic length scale more accurately than the CC relaxation time (R² of 0.69 and 0.33, and normalized root mean square errors (NRMSE) of 0.16 and 0.21, respectively); (2) the CC-estimated formation factor is well correlated with the intrinsic formation factor (NRMSE = 0.23). We demonstrate that permeability estimates from the joint NMR-CC model (NRMSE = 0.13) compare favorably to estimates from the Katz and Thompson model (NRMSE = 0.074). Lastly, this model advances the capability of the Katz and Thompson model by employing parameters measurable in the field, giving it the potential to estimate permeability from geophysical measurements more accurately than is currently possible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leja, Joel; Johnson, Benjamin D.; Conroy, Charlie
2017-03-10
Broadband photometry of galaxies measures an unresolved mix of complex stellar populations, gas, and dust. Interpreting these data is a challenge for models: many studies have shown that properties derived from modeling galaxy photometry are uncertain by a factor of two or more, and yet answering key questions in the field now requires higher accuracy than this. Here, we present a new model framework specifically designed for these complexities. Our model, Prospector-α, includes dust attenuation and re-radiation, a flexible attenuation curve, nebular emission, stellar metallicity, and a six-component nonparametric star formation history. The flexibility and range of the parameter space, coupled with Markov chain Monte Carlo sampling within the Prospector inference framework, are designed to provide unbiased parameters and realistic error bars. We assess the accuracy of the model with aperture-matched optical spectroscopy, which was excluded from the fits. We compare spectral features predicted solely from fits to the broadband photometry to the observed spectral features. Our model predicts Hα luminosities with a scatter of ∼0.18 dex and an offset of ∼0.1 dex across a wide range of morphological types and stellar masses. This agreement is remarkable, as the Hα luminosity is dependent on accurate star formation rates, dust attenuation, and stellar metallicities. The model also accurately predicts dust-sensitive Balmer decrements, spectroscopic stellar metallicities, polycyclic aromatic hydrocarbon mass fractions, and the age- and metallicity-sensitive features Dn4000 and Hδ. Although the model passes all these tests, we caution that we have not yet assessed its performance at higher redshift or the accuracy of recovered stellar masses.
Reliable low precision simulations in land surface models
NASA Astrophysics Data System (ADS)
Dawson, Andrew; Düben, Peter D.; MacLeod, David A.; Palmer, Tim N.
2017-12-01
Weather and climate models must continue to increase in both resolution and complexity in order that forecasts become more accurate and reliable. Moving to lower numerical precision may be an essential tool for coping with the demand for ever increasing model complexity in addition to increasing computing resources. However, there have been some concerns in the weather and climate modelling community over the suitability of lower precision for climate models, particularly for representing processes that change very slowly over long time-scales. These processes are difficult to represent using low precision due to time increments being systematically rounded to zero. Idealised simulations are used to demonstrate that a model of deep soil heat diffusion that fails when run in single precision can be modified to work correctly using low precision, by splitting up the model into a small higher precision part and a low precision part. This strategy retains the computational benefits of reduced precision whilst preserving accuracy. This same technique is also applied to a full complexity land surface model, resulting in rounding errors that are significantly smaller than initial condition and parameter uncertainties. Although lower precision will present some problems for the weather and climate modelling community, many of the problems can likely be overcome using a straightforward and physically motivated application of reduced precision.
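The splitting strategy is easy to demonstrate outside a full model: keep the state and the bulk arithmetic in low precision, but accumulate the slowly varying tendency in a small higher-precision buffer so that increments below the low-precision rounding level are not systematically lost. A toy explicit heat-diffusion sketch, using NumPy's float16/float32 pair as a stand-in for the precisions discussed above (all physical values illustrative):

```python
import numpy as np

def diffusion_step(T_lo, acc, dt=600.0, dx=0.5, kappa=1e-6):
    """One explicit heat-diffusion step. The tendency is computed in low
    precision, but accumulated in a higher-precision buffer so increments
    far below the rounding level of T_lo are not systematically lost."""
    lap = (np.roll(T_lo, 1) - 2 * T_lo + np.roll(T_lo, -1)) / np.float16(dx * dx)
    dT = np.float16(kappa * dt) * lap        # tiny low-precision increment
    acc += dT.astype(np.float32)             # high-precision accumulation
    return acc.astype(np.float16), acc       # low-precision state, hp buffer

T = np.linspace(270.0, 290.0, 64).astype(np.float16)   # soil temperatures (K)
acc = T.astype(np.float32)
for _ in range(10_000):
    T, acc = diffusion_step(T, acc)
# Accumulating directly in float16 would round most increments to zero and
# freeze the profile; the float32 buffer preserves the slow drift.
```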
A model for flexible tools used in minimally invasive medical virtual environments.
Soler, Francisco; Luzon, M Victoria; Pop, Serban R; Hughes, Chris J; John, Nigel W; Torres, Juan Carlos
2011-01-01
Within the limits of current technology, many applications of a virtual environment will trade off accuracy for speed. This is not an acceptable compromise in a medical training application where both are essential. Efficient algorithms must therefore be developed. The purpose of this project is the development and validation of a novel physics-based real-time tool manipulation model, which is easy to integrate into any medical virtual environment that requires support for the insertion of long flexible tools into complex geometries. This encompasses medical specialities such as vascular interventional radiology, endoscopy, and laparoscopy, where training, prototyping of new instruments/tools and mission rehearsal can all be facilitated by using an immersive medical virtual environment. Our model recognises and accurately uses patient-specific data and adapts to the geometrical complexity of the vessel in real time.
Comparison of Turbulence Models for Nozzle-Afterbody Flows with Propulsive Jets
NASA Technical Reports Server (NTRS)
Compton, William B., III
1996-01-01
A numerical investigation was conducted to assess the accuracy of two turbulence models when computing non-axisymmetric nozzle-afterbody flows with propulsive jets. Navier-Stokes solutions were obtained for a convergent-divergent non-axisymmetric nozzle-afterbody and its associated jet exhaust plume at free-stream Mach numbers of 0.600 and 0.938 at an angle of attack of 0 deg. The Reynolds number based on model length was approximately 20 × 10^6. Turbulent dissipation was modeled by the algebraic Baldwin-Lomax turbulence model with the Degani-Schiff modification and by the standard Jones-Launder κ-ε turbulence model. At flow conditions without strong shocks and with little or no separation, both turbulence models predicted the pressures on the surfaces of the nozzle very well. When strong shocks and massive separation existed, both turbulence models were unable to predict the flow accurately. Mixing of the jet exhaust plume and the external flow was underpredicted. The differences in drag coefficients for the two turbulence models illustrate that substantial development is still required for computing very complex flows before nozzle performance can be predicted accurately for all external flow conditions.
Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques
NASA Technical Reports Server (NTRS)
Lee, Hanbong; Malik, Waqar; Jung, Yoon C.
2016-01-01
Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we attempt to apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forest, and neural network models are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques can provide the most accurate prediction in terms of root-mean-square errors. We also discuss the operational complexity and uncertainties that make it difficult to predict the taxi times accurately.
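As a sketch of the workflow described above, the following compares two of the named regressors on synthetic data; the feature names and the toy taxi-time generator are our stand-ins for the surface surveillance records, not the study's dataset.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Synthetic stand-in for the surveillance-derived features (concourse,
# runway, weight class, ...); real work would use actual flight records.
rng = np.random.default_rng(0)
n = 2_000
df = pd.DataFrame({
    "concourse": rng.choice(list("ABCDE"), n),
    "runway": rng.choice(["18C", "18L", "36R"], n),
    "weight_class": rng.choice(["small", "large", "heavy"], n),
})
base = {"18C": 14.0, "18L": 11.0, "36R": 16.0}
df["taxi_out_minutes"] = (df["runway"].map(base)
                          + (df["weight_class"] == "heavy") * 2.0
                          + rng.normal(0, 2.0, n))

categorical = ["concourse", "runway", "weight_class"]
X, y = df[categorical], df["taxi_out_minutes"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LinearRegression(),
              RandomForestRegressor(n_estimators=200, random_state=0)):
    encode = ColumnTransformer(
        [("cat", OneHotEncoder(handle_unknown="ignore"), categorical)])
    pipe = make_pipeline(encode, model).fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, pipe.predict(X_te)) ** 0.5
    print(type(model).__name__, f"RMSE = {rmse:.2f} min")
```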
Kuo, Yin-Ming; Henry, Ryan A; Andrews, Andrew J
2016-01-01
Multiple substrate enzymes present a particular challenge when it comes to understanding their activity in a complex system. Although a single target may be easy to model, it does not always present an accurate representation of what that enzyme will do in the presence of multiple substrates simultaneously. Therefore, there is a need to find better ways both to study these enzymes in complicated systems and to describe the resulting interactions accurately through kinetic parameters. This review looks at different methods for studying multiple substrate enzymes, as well as explores options on how to most accurately describe an enzyme's activity within these multi-substrate systems. Identifying and defining this enzymatic activity should help clear the way to using in vitro systems to accurately predict the behavior of multi-substrate enzymes in vivo. This article is part of a Special Issue entitled: Physiological Enzymology and Protein Functions. Copyright © 2015. Published by Elsevier B.V.
Protein and gene model inference based on statistical modeling in k-partite graphs.
Gerster, Sarah; Qeli, Ermir; Ahrens, Christian H; Bühlmann, Peter
2010-07-06
One of the major goals of proteomics is the comprehensive and accurate description of a proteome. Shotgun proteomics, the method of choice for the analysis of complex protein mixtures, requires that experimentally observed peptides are mapped back to the proteins they were derived from. This process is also known as protein inference. We present Markovian Inference of Proteins and Gene Models (MIPGEM), a statistical model based on clearly stated assumptions to address the problem of protein and gene model inference for shotgun proteomics data. In particular, we are dealing with dependencies among peptides and proteins using a Markovian assumption on k-partite graphs. We are also addressing the problems of shared peptides and ambiguous proteins by scoring the encoding gene models. Empirical results on two control datasets with synthetic mixtures of proteins and on complex protein samples of Saccharomyces cerevisiae, Drosophila melanogaster, and Arabidopsis thaliana suggest that the results with MIPGEM are competitive with existing tools for protein inference.
Advanced Cell Culture Techniques for Cancer Drug Discovery
Lovitt, Carrie J.; Shelper, Todd B.; Avery, Vicky M.
2014-01-01
Human cancer cell lines are an integral part of drug discovery practices. However, modeling the complexity of cancer utilizing these cell lines on standard plastic substrata does not accurately represent the tumor microenvironment. Research into developing advanced tumor cell culture models in a three-dimensional (3D) architecture that more precisely characterizes the disease state has been undertaken by a number of laboratories around the world. These 3D cell culture models are particularly beneficial for investigating mechanistic processes and drug resistance in tumor cells. In addition, a range of molecular mechanisms deconstructed by studying cancer cells in 3D models suggest that tumor cells cultured in two-dimensional monolayer conditions do not respond to cancer therapeutics/compounds in a similar manner. Recent studies have demonstrated the potential of utilizing 3D cell culture models in drug discovery programs; however, it is evident that further research is required for the development of more complex models that incorporate the majority of the cellular and physical properties of a tumor. PMID:24887773
Advanced cell culture techniques for cancer drug discovery.
Lovitt, Carrie J; Shelper, Todd B; Avery, Vicky M
2014-05-30
Human cancer cell lines are an integral part of drug discovery practices. However, modeling the complexity of cancer utilizing these cell lines on standard plastic substrata does not accurately represent the tumor microenvironment. Research into developing advanced tumor cell culture models in a three-dimensional (3D) architecture that more precisely characterizes the disease state has been undertaken by a number of laboratories around the world. These 3D cell culture models are particularly beneficial for investigating mechanistic processes and drug resistance in tumor cells. In addition, a range of molecular mechanisms deconstructed by studying cancer cells in 3D models suggest that tumor cells cultured in two-dimensional monolayer conditions do not respond to cancer therapeutics/compounds in a similar manner. Recent studies have demonstrated the potential of utilizing 3D cell culture models in drug discovery programs; however, it is evident that further research is required for the development of more complex models that incorporate the majority of the cellular and physical properties of a tumor.
Kerl, Paul Y; Zhang, Wenxian; Moreno-Cruz, Juan B; Nenes, Athanasios; Realff, Matthew J; Russell, Armistead G; Sokol, Joel; Thomas, Valerie M
2015-09-01
Integrating accurate air quality modeling with decision making is hampered by complex atmospheric physics and chemistry and its coupling with atmospheric transport. Existing approaches to model the physics and chemistry accurately lead to significant computational burdens in computing the response of atmospheric concentrations to changes in emissions profiles. By integrating a reduced form of a fully coupled atmospheric model within a unit commitment optimization model, we allow, for the first time to our knowledge, a fully dynamical approach toward electricity planning that accurately and rapidly minimizes both cost and health impacts. The reduced-form model captures the response of spatially resolved air pollutant concentrations to changes in electricity-generating plant emissions on an hourly basis with accuracy comparable to a comprehensive air quality model. The integrated model allows for the inclusion of human health impacts into cost-based decisions for power plant operation. We use the new capability in a case study of the state of Georgia over the years of 2004-2011, and show that a shift in utilization among existing power plants during selected hourly periods could have provided a health cost savings of $175.9 million for an additional electricity generation cost of $83.6 million in 2007 US dollars (USD2007). The case study illustrates how air pollutant health impacts can be cost-effectively minimized by intelligently modulating power plant operations over multihour periods, without implementing additional emissions control technologies.
Kerl, Paul Y.; Zhang, Wenxian; Moreno-Cruz, Juan B.; Nenes, Athanasios; Realff, Matthew J.; Russell, Armistead G.; Sokol, Joel; Thomas, Valerie M.
2015-01-01
Integrating accurate air quality modeling with decision making is hampered by complex atmospheric physics and chemistry and its coupling with atmospheric transport. Existing approaches to model the physics and chemistry accurately lead to significant computational burdens in computing the response of atmospheric concentrations to changes in emissions profiles. By integrating a reduced form of a fully coupled atmospheric model within a unit commitment optimization model, we allow, for the first time to our knowledge, a fully dynamical approach toward electricity planning that accurately and rapidly minimizes both cost and health impacts. The reduced-form model captures the response of spatially resolved air pollutant concentrations to changes in electricity-generating plant emissions on an hourly basis with accuracy comparable to a comprehensive air quality model. The integrated model allows for the inclusion of human health impacts into cost-based decisions for power plant operation. We use the new capability in a case study of the state of Georgia over the years of 2004–2011, and show that a shift in utilization among existing power plants during selected hourly periods could have provided a health cost savings of $175.9 million for an additional electricity generation cost of $83.6 million in 2007 US dollars (USD2007). The case study illustrates how air pollutant health impacts can be cost-effectively minimized by intelligently modulating power plant operations over multihour periods, without implementing additional emissions control technologies. PMID:26283358
Mathematics as a conduit for translational research in post-traumatic osteoarthritis.
Ayati, Bruce P; Kapitanov, Georgi I; Coleman, Mitchell C; Anderson, Donald D; Martin, James A
2017-03-01
Biomathematical models offer a powerful method of clarifying complex temporal interactions and the relationships among multiple variables in a system. We present a coupled in silico biomathematical model of articular cartilage degeneration in response to impact and/or aberrant loading such as would be associated with injury to an articular joint. The model incorporates fundamental biological and mechanical information obtained from explant and small animal studies to predict post-traumatic osteoarthritis (PTOA) progression, with an eye toward eventual application in human patients. In this sense, we refer to the mathematics as a "conduit of translation." The new in silico framework presented in this paper involves a biomathematical model for the cellular and biochemical response to strains computed using finite element analysis. The model predicts qualitative responses presently, utilizing system parameter values largely taken from the literature. To contribute to accurate predictions, models need to be accurately parameterized with values that are based on solid science. We discuss a parameter identification protocol that will enable us to make increasingly accurate predictions of PTOA progression using additional data from smaller scale explant and small animal assays as they become available. By distilling the data from the explant and animal assays into parameters for biomathematical models, mathematics can translate experimental data to clinically relevant knowledge. © 2016 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:566-572, 2017.
Dimension reduction method for SPH equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tartakovsky, Alexandre M.; Scheibe, Timothy D.
2011-08-26
A Smoothed Particle Hydrodynamics (SPH) model of a complex multiscale process often results in a system of ODEs with an enormous number of unknowns. Furthermore, a time integration of the SPH equations usually requires time steps that are smaller than the observation time by many orders of magnitude. A direct solution of these ODEs can be extremely expensive. Here we propose a novel dimension reduction method that gives an approximate solution of the SPH ODEs and provides an accurate prediction of the average behavior of the modeled system. The method consists of two main elements. First, effective equations for the evolution of average variables (e.g. average velocity, concentration and mass of a mineral precipitate) are obtained by averaging the SPH ODEs over the entire computational domain. These effective ODEs contain non-local terms in the form of volume integrals of functions of the SPH variables. Second, a computational closure is used to close the system of the effective equations. The computational closure is achieved via short bursts of the SPH model. The dimension reduction model is used to simulate flow and transport with mixing-controlled reactions and mineral precipitation. An SPH model is used to model transport at the pore scale. Good agreement between direct solutions of the SPH equations and solutions obtained with the dimension reduction method for different boundary conditions confirms the accuracy and computational efficiency of the dimension reduction model. The method significantly accelerates SPH simulations, while providing accurate approximation of the solution and accurate prediction of the average behavior of the system.
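The closure-via-short-bursts idea is closely related to projective integration and can be sketched on a toy fine-scale model; the decay system below is our stand-in for the SPH ODEs, not the paper's solver.

```python
import numpy as np

# Closure via short bursts (projective integration sketch): the average of a
# fine-scale system is advanced with large steps, using short bursts of the
# fine model only to estimate the time derivative of the average.
rng = np.random.default_rng(0)
x = rng.normal(1.0, 0.2, size=10_000)   # fine-scale state (toy "particles")
dt_fine, n_burst, dt_proj = 1e-4, 20, 5e-2

def fine_step(x, dt, k=1.0):
    return x - dt * k * x               # each particle decays: dx/dt = -k x

for _ in range(10):
    means = [x.mean()]
    for _ in range(n_burst):            # short burst of the fine model
        x = fine_step(x, dt_fine)
        means.append(x.mean())
    slope = (means[-1] - means[0]) / (n_burst * dt_fine)
    x = x + slope * dt_proj             # project the average forward by a
                                        # large step (a real implementation
                                        # reconstructs a consistent fine state)

print(x.mean())  # tracks the exp(-k t) decay of the average at a fraction of the cost
```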
Lei, Huan; Yang, Xiu; Zheng, Bin; ...
2015-11-05
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational "active space" random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Finally, our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
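As a rough illustration of the sparse-surrogate construction, the sketch below fits an L1-penalized (compressive-sensing-style) Legendre polynomial expansion to a toy target; the dimensions, basis, and target function are ours, standing in for the conformational random variables and the biomolecular quantity of interest.

```python
from itertools import product

import numpy as np
from sklearn.linear_model import Lasso

# Sparse polynomial-chaos-style surrogate via L1-penalized regression.
rng = np.random.default_rng(1)
d, n_train, order = 5, 60, 3
xi = rng.uniform(-1, 1, size=(n_train, d))         # "conformational" inputs
y = xi[:, 0] ** 2 + 0.5 * xi[:, 1] * xi[:, 2]       # target, sparse in the basis

# Total-degree Legendre basis; only a few coefficients are truly nonzero.
multis = [m for m in product(range(order + 1), repeat=d) if sum(m) <= order]

def design(x):
    cols = [np.prod([np.polynomial.legendre.Legendre.basis(k)(x[:, j])
                     for j, k in enumerate(m)], axis=0) for m in multis]
    return np.column_stack(cols)

surrogate = Lasso(alpha=1e-3, max_iter=50_000).fit(design(xi), y)

x_test = rng.uniform(-1, 1, size=(1_000, d))
y_true = x_test[:, 0] ** 2 + 0.5 * x_test[:, 1] * x_test[:, 2]
rmse = np.sqrt(np.mean((surrogate.predict(design(x_test)) - y_true) ** 2))
print(f"{np.count_nonzero(surrogate.coef_)} active terms, test RMSE = {rmse:.3f}")
```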
A physical multifield model predicts the development of volume and structure in the human brain
NASA Astrophysics Data System (ADS)
Rooij, Rijk de; Kuhl, Ellen
2018-03-01
The prenatal development of the human brain is characterized by a rapid increase in brain volume and the development of a highly folded cortex. At the cellular level, these events are enabled by symmetric and asymmetric cell division in the ventricular regions of the brain followed by outward cell migration towards the peripheral regions. The role of mechanics during brain development has been suggested and acknowledged in past decades, but remains insufficiently understood. Here we propose a mechanistic model that couples cell division, cell migration, and brain volume growth to accurately model the developing brain between weeks 10 and 29 of gestation. Our model accurately predicts a 160-fold volume increase from 1.5 cm³ at week 10 to 235 cm³ at week 29 of gestation. In agreement with human brain development, the cortex begins to form around week 22 and accounts for about 30% of the total brain volume at week 29. Our results show that cell division and coupling between cell density and volume growth are essential to accurately model brain volume development, whereas cell migration and diffusion contribute mainly to the development of the cortex. We demonstrate that complex folding patterns, including sinusoidal folds and creases, emerge naturally as the cortex develops, even for low stiffness contrasts between the cortex and subcortex.
The Material Point Method and Simulation of Wave Propagation in Heterogeneous Media
NASA Astrophysics Data System (ADS)
Bardenhagen, S. G.; Greening, D. R.; Roessig, K. M.
2004-07-01
The mechanical response of polycrystalline materials, particularly under shock loading, is of significant interest in a variety of munitions and industrial applications. Homogeneous continuum models have been developed to describe material response, including Equation of State, strength, and reactive burn models. These models provide good estimates of bulk material response. However, there is little connection to underlying physics and, consequently, they cannot be applied far from their calibrated regime with confidence. Both explosives and metals have important structure at the (energetic or single crystal) grain scale. The anisotropic properties of the individual grains and the presence of interfaces result in the localization of energy during deformation. In explosives energy localization can lead to initiation under weak shock loading, and in metals to material ejecta under strong shock loading. To develop accurate, quantitative and predictive models it is imperative to develop a sound physical understanding of the grain-scale material response. Numerical simulations are performed to gain insight into grain-scale material response. The Generalized Interpolation Material Point Method family of numerical algorithms, selected for its robust treatment of large deformation problems and convenient framework for implementing material interface models, is reviewed. A three-dimensional simulation of wave propagation through a granular material indicates the scale and complexity of a representative grain-scale computation. Verification and validation calculations on model bimaterial systems indicate the minimum numerical algorithm complexity required for accurate simulation of wave propagation across material interfaces and demonstrate the importance of interfacial decohesion. Preliminary results are presented which predict energy localization at the grain boundary in a metallic bicrystal.
Bridging the gap between computation and clinical biology: validation of cable theory in humans
Finlay, Malcolm C.; Xu, Lei; Taggart, Peter; Hanson, Ben; Lambiase, Pier D.
2013-01-01
Introduction: Computerized simulations of cardiac activity have significantly contributed to our understanding of cardiac electrophysiology, but techniques of simulations based on patient-acquired data remain in their infancy. We sought to integrate data acquired from human electrophysiological studies into patient-specific models, and validated this approach by testing whether electrophysiological responses to sequential premature stimuli could be predicted in a quantitatively accurate manner. Methods: Eleven patients with structurally normal hearts underwent electrophysiological studies. Semi-automated analysis was used to reconstruct activation and repolarization dynamics for each electrode. These S2 extrastimulus data were used to inform individualized models of cardiac conduction, including a novel derivation of conduction velocity restitution. Activation dynamics of multiple premature extrastimuli were then predicted from this model and compared against measured patient data as well as data derived from the ten-Tusscher cell-ionic model. Results: Activation dynamics following a premature S3 were significantly different from those after an S2. Patient-specific models demonstrated accurate prediction of the S3 activation wave (Pearson's R² = 0.90, median error 4%). Examination of the modeled conduction dynamics allowed inferences into the spatial dispersion of activation delay. Further validation was performed against data from the ten-Tusscher cell-ionic model, with our model accurately recapitulating predictions of repolarization times (R² = 0.99). Conclusions: Simulations based on clinically acquired data can be used to successfully predict complex activation patterns following sequential extrastimuli. Such modeling techniques may be useful as a method of incorporation of clinical data into predictive models. PMID:24027527
How Monte Carlo heuristics aid to identify the physical processes of drug release kinetics.
Lecca, Paola
2018-01-01
We implement a Monte Carlo heuristic algorithm to model drug release from a solid dosage form. We show that with Monte Carlo simulations it is possible to identify and explain the causes of the unsatisfactory predictive power of current drug release models. It is well known that the power-law and exponential models, as well as those derived from or inspired by them, accurately reproduce only the first 60% of the release curve of a drug from a dosage form. In this study, by using Monte Carlo simulation approaches, we show that these models fit quite accurately almost the entire release profile when the release kinetics is not governed by the coexistence of different physico-chemical mechanisms. We show that the accuracy of the traditional models is comparable with that of Monte Carlo heuristics when these heuristics approximate and oversimplify the phenomenology of drug release. This observation suggests developing and using novel Monte Carlo simulation heuristics able to describe the complexity of the release kinetics, and consequently to generate data more similar to those observed in real experiments. Implementing Monte Carlo simulation heuristics of the drug release phenomenology may be much more straightforward and efficient than hypothesizing and implementing complex mathematical models of the physical processes involved in drug release from scratch. Identifying and understanding through simulation heuristics which processes of this phenomenology reproduce the observed data, and then formalizing them in mathematics, may help avoid time-consuming, trial-and-error based regression procedures. Three bullet points highlight the customization of the procedure:
• An efficient heuristic based on Monte Carlo methods for simulating drug release from a solid dosage form is presented. It specifies the model of the physical process in a simple but accurate way through the formula for the Monte Carlo Micro Step (MCS) time interval.
• Given the experimentally observed drug release curve, we point out how Monte Carlo heuristics can be integrated in an evolutionary algorithmic approach to infer the MCS model that best fits the observed data, and thus the observed release kinetics.
• The software implementing the method is written in R, the free language most widely used in the bioinformatics community.
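A minimal version of such a heuristic can be sketched in a few lines; the 1D lattice slab below is our toy, not the paper's R implementation, but it reproduces the diffusion-controlled power-law regime discussed above.

```python
import numpy as np

# Toy Monte Carlo release heuristic: molecules random-walk on a 1D lattice
# slab and count as released once they cross the open boundary at site 0.
rng = np.random.default_rng(42)
n_sites, n_mol, n_steps = 30, 5_000, 5_000
pos = rng.integers(0, n_sites, size=n_mol)     # uniform initial loading
alive = np.ones(n_mol, dtype=bool)
frac = np.zeros(n_steps)                       # cumulative released fraction

for t in range(n_steps):
    pos[alive] += rng.choice((-1, 1), size=alive.sum())
    pos[pos >= n_sites] = n_sites - 1          # closed far wall
    alive &= pos >= 0                          # pos < 0 means released
    frac[t] = 1.0 - alive.mean()

# Fit the classical power law f(t) = k * t^n over the first 60% of release.
t = np.arange(1, n_steps + 1)
mask = (frac > 0) & (frac < 0.6)
n_exp, log_k = np.polyfit(np.log(t[mask]), np.log(frac[mask]), 1)
print(f"fitted exponent n ≈ {n_exp:.2f} (diffusion from a slab gives ~0.5)")
```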
NASA Astrophysics Data System (ADS)
Saide, Pablo E.; Carmichael, Gregory R.; Spak, Scott N.; Gallardo, Laura; Osses, Axel E.; Mena-Carrasco, Marcelo A.; Pagowski, Mariusz
2011-05-01
This study presents a system to predict high pollution events that develop in connection with enhanced subsidence due to coastal lows, particularly in winter over Santiago de Chile. An accurate forecast of these episodes is of interest since the local government is entitled by law to take actions in advance to prevent public exposure to PM10 concentrations in excess of 150 μg m⁻³ (24 h running averages). The forecasting system is based on accurately simulating carbon monoxide (CO) as a PM10/PM2.5 surrogate, since during episodes and within the city there is a high correlation (over 0.95) among these pollutants. Thus, by accurately forecasting CO, which behaves nearly as a tracer on this scale, a PM estimate can be made without involving aerosol-chemistry modeling. Nevertheless, the very stable nocturnal conditions over steep topography associated with maxima in concentrations are hard to represent in models. Here we propose a forecast system based on the WRF-Chem model with optimum settings, determined through extensive testing, that best describe both the available meteorological and air quality measurements. Some of the important configuration choices involve the boundary layer (PBL) scheme, model grid resolution (both vertical and horizontal), meteorological initial and boundary conditions and spatial and temporal distribution of the emissions. A forecast for the 2008 winter is performed showing that this forecasting system is able to perform similarly to the authority's decision for PM10 and better than persistence when forecasting PM10 and PM2.5 high pollution episodes. Problems regarding false alarm predictions could be related to different uncertainties in the model such as day to day emission variability, inability of the model to completely resolve the complex topography and inaccuracy in meteorological initial and boundary conditions. Finally, according to our simulations, emissions from previous days dominate episode concentrations, which highlights the need for 48 h forecasts that can be achieved by the system presented here. This is in fact the largest advantage of the proposed system.
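The surrogate logic reduces to a regression plus a threshold test. The sketch below uses synthetic numbers (the 150 μg m⁻³ criterion is from the text; the CO values and regression are ours) to show the shape of such a pipeline:

```python
import numpy as np

# CO-as-surrogate sketch: calibrate a historical CO -> PM10 regression, then
# apply it to a forecast CO series and test the 24 h average against the
# legal threshold. All data here are synthetic placeholders.
rng = np.random.default_rng(3)
co_hist = rng.uniform(0.5, 4.0, 200)                 # historical CO (ppm)
pm10_hist = 40.0 * co_hist + rng.normal(0, 10, 200)  # strongly correlated PM10

a, b = np.polyfit(co_hist, pm10_hist, 1)             # calibrate the surrogate

co_forecast = np.full(24, 4.2)                       # hypothetical forecast CO
pm10_pred = a * co_forecast + b
if pm10_pred.mean() > 150.0:                         # 24 h running-average test
    print("episode forecast: preventive measures warranted")
```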
Accurate FRET Measurements within Single Diffusing Biomolecules Using Alternating-Laser Excitation
Lee, Nam Ki; Kapanidis, Achillefs N.; Wang, You; Michalet, Xavier; Mukhopadhyay, Jayanta; Ebright, Richard H.; Weiss, Shimon
2005-01-01
Fluorescence resonance energy transfer (FRET) between a donor (D) and an acceptor (A) at the single-molecule level currently provides qualitative information about distance, and quantitative information about kinetics of distance changes. Here, we used the sorting ability of confocal microscopy equipped with alternating-laser excitation (ALEX) to measure accurate FRET efficiencies and distances from single molecules, using corrections that account for cross-talk terms that contaminate the FRET-induced signal, and for differences in the detection efficiency and quantum yield of the probes. ALEX yields accurate FRET independent of instrumental factors, such as excitation intensity or detector alignment. Using DNA fragments, we showed that ALEX-based distances agree well with predictions from a cylindrical model of DNA; ALEX-based distances fit better to theory than distances obtained at the ensemble level. Distance measurements within transcription complexes agreed well with ensemble-FRET measurements, and with structural models based on ensemble-FRET and x-ray crystallography. ALEX can benefit structural analysis of biomolecules, especially when such molecules are inaccessible to conventional structural methods due to heterogeneity or transient nature. PMID:15653725
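The corrections have a compact algebraic form. The function below follows the general structure described above (donor leakage, acceptor direct excitation, and a γ detection factor), but the symbol names and example numbers are ours, not the paper's notation:

```python
def accurate_fret(f_dex_aem, f_dex_dem, f_aex_aem, leakage, direct_exc, gamma):
    """Cross-talk- and gamma-corrected FRET efficiency (sketch).

    f_dex_aem:  acceptor emission under donor excitation (raw FRET channel)
    f_dex_dem:  donor emission under donor excitation
    f_aex_aem:  acceptor emission under acceptor excitation (ALEX channel)
    leakage:    donor emission leaking into the acceptor channel
    direct_exc: acceptor directly excited by the donor laser
    gamma:      detection-efficiency / quantum-yield correction factor
    """
    f_fret = f_dex_aem - leakage * f_dex_dem - direct_exc * f_aex_aem
    return f_fret / (f_fret + gamma * f_dex_dem)

# Example burst: the corrected E differs from the raw proximity ratio.
counts = dict(f_dex_aem=520, f_dex_dem=480, f_aex_aem=900)
print(f"raw ratio = {520 / (520 + 480):.2f}, corrected E = "
      f"{accurate_fret(**counts, leakage=0.08, direct_exc=0.06, gamma=1.2):.2f}")
```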
NASA Astrophysics Data System (ADS)
Guan, Fengjiao; Zhang, Guanjun; Liu, Jie; Wang, Shujing; Luo, Xu; Zhu, Feng
2017-10-01
Accurate material parameters are critical to construct high-biofidelity finite element (FE) models. However, it is hard to obtain brain tissue parameters accurately because of the effects of irregular geometry and uncertain boundary conditions. Considering the complexity of material testing and the uncertainty of the friction coefficient, a computational inverse method for identifying the viscoelastic material parameters of brain tissue is presented based on the interval analysis method. Firstly, intervals are used to quantify the friction coefficient in the boundary condition. Then the inverse problem of material parameter identification under an uncertain friction coefficient is transformed into two types of deterministic inverse problems. Finally, an intelligent optimization algorithm is used to solve the two types of deterministic inverse problems quickly and accurately, and the range of material parameters can be easily acquired with no need for a variety of samples. The efficiency and convergence of this method are demonstrated by the material parameter identification of the thalamus. The proposed method provides a potentially effective tool for building high-biofidelity human finite element models in the study of traffic accident injury.
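The interval step can be sketched with a toy forward model: the identification is simply repeated at the two friction endpoints, bounding the material parameter. The forward response and numbers below are ours, standing in for the FE simulation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Interval-analysis sketch: with friction mu known only as an interval,
# solve one deterministic inverse problem at each endpoint.
def forward(E, mu):
    return E * (1.0 + 0.3 * mu)          # toy response: stiffer with friction

measured = forward(3.0, 0.15)            # synthetic "experiment" (E = 3)

def identify(mu):
    loss = lambda E: (forward(E, mu) - measured) ** 2
    return minimize_scalar(loss, bounds=(0.1, 10.0), method="bounded").x

mu_lo, mu_hi = 0.1, 0.2                  # uncertain friction interval
E_lo, E_hi = sorted([identify(mu_lo), identify(mu_hi)])
print(f"identified E interval: [{E_lo:.3f}, {E_hi:.3f}]")
```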
NASA Astrophysics Data System (ADS)
Rudzinski, Joseph F.
Atomically-detailed molecular dynamics simulations have emerged as one of the most powerful theoretic tools for studying complex, condensed-phase systems. Despite their ability to provide incredible molecular insight, these simulations are insufficient for investigating complex biological processes, e.g., protein folding or molecular aggregation, on relevant length and time scales. The increasing scope and sophistication of atomically-detailed models has motivated the development of "hierarchical" approaches, which parameterize a low resolution, coarse-grained (CG) model based on simulations of an atomically-detailed model. The utility of hierarchical CG models depends on their ability to accurately incorporate the correct physics of the underlying model. One approach for ensuring this "consistency" between the models is to parameterize the CG model to reproduce the structural ensemble generated by the high resolution model. The many-body potential of mean force is the proper CG energy function for reproducing all structural distributions of the atomically-detailed model, at the CG level of resolution. However, this CG potential is a configuration-dependent free energy function that is generally too complicated to represent or simulate. The multiscale coarse-graining (MS-CG) method employs a generalized Yvon-Born-Green (g-YBG) relation to directly determine a variationally optimal approximation to the many-body potential of mean force. The MS-CG/g-YBG method provides a convenient and transparent framework for investigating the equilibrium structure of the system, at the CG level of resolution. In this work, we investigate the fundamental limitations and approximations of the MS-CG/g-YBG method. Throughout the work, we propose several theoretic constructs to directly relate the MS-CG/g-YBG method to other popular structure-based CG approaches. We investigate the physical interpretation of the MS-CG/g-YBG correlation matrix, the quantity responsible for disentangling the various contributions to the average force on a CG site. We then employ an iterative extension of the MS-CG/g-YBG method that improves the accuracy of a particular set of low order correlation functions relative to the original MS-CG/g-YBG model. We demonstrate that this method provides a powerful framework for identifying the precise source of error in an MS-CG/g-YBG model. We then propose a method for identifying an optimal CG representation, prior to the development of the CG model. We employ these techniques together to demonstrate that in the cases where the MS-CG/g-YBG method fails to determine an accurate model, a fundamental problem likely exists with the chosen CG representation or interaction set. Additionally, we explicitly demonstrate that while the iterative model successfully improves the accuracy of the low order structure, it does so by distorting the higher order structural correlations relative to the underlying model. Finally, we apply these methods to investigate the utility of the MS-CG/g-YBG method for developing models for systems with complex intramolecular structure. Overall, our results demonstrate the power of the g-YBG framework for developing accurate CG models and for investigating the driving forces of equilibrium structures for complex condensed-phase systems. This work also explicitly motivates future development of bottom-up CG methods and highlights some outstanding problems in the field.
Analyzing a suitable elastic geomechanical model for Vaca Muerta Formation
NASA Astrophysics Data System (ADS)
Sosa Massaro, Agustin; Espinoza, D. Nicolas; Frydman, Marcelo; Barredo, Silvia; Cuervo, Sergio
2017-11-01
Accurate geomechanical evaluation of oil and gas reservoir rocks is important to provide design parameters for drilling and completion and to predict production rates. In particular, shale reservoir rocks are geologically complex and heterogeneous. Wells need to be hydraulically fractured for stimulation, and in complex tectonic environments rock fabric and in situ stress strongly influence fracture propagation geometry. This article presents a combined wellbore-laboratory characterization of the geomechanical properties of a well in El Trapial/Curamched Field, over the Vaca Muerta Formation, located in the Neuquén Basin in Argentina. The study shows the results of triaxial tests with acoustic measurements on rock plugs from outcrops and field cores, and corresponding dynamic-to-static correlations considering various elastic models. The models, in order of increasing complexity, are the Isotropic Elastic Model (IEM), the Anisotropic Elastic Model (AEM) and the Detailed Anisotropic Elastic Model (DAEM). Each model shows advantages over the others. An IEM offers a quick overview, being easy to run without much detailed data, even for heterogeneous and anisotropic rocks. The DAEM requires significant amounts of data, time and a multidisciplinary team to arrive at a detailed model. Finally, an AEM suits an anisotropic, realistic rock well without the need for massive amounts of data.
A methodology for adaptable and robust ecosystem services assessment.
Villa, Ferdinando; Bagstad, Kenneth J; Voigt, Brian; Johnson, Gary W; Portela, Rosimeiry; Honzák, Miroslav; Batker, David
2014-01-01
Ecosystem Services (ES) are an established conceptual framework for attributing value to the benefits that nature provides to humans. As the promise of robust ES-driven management is put to the test, shortcomings in our ability to accurately measure, map, and value ES have surfaced. On the research side, mainstream methods for ES assessment still fall short of addressing the complex, multi-scale biophysical and socioeconomic dynamics inherent in ES provision, flow, and use. On the practitioner side, application of methods remains onerous due to data and model parameterization requirements. Further, it is increasingly clear that the dominant "one model fits all" paradigm is often ill-suited to address the diversity of real-world management situations that exist across the broad spectrum of coupled human-natural systems. This article introduces an integrated ES modeling methodology, named ARIES (ARtificial Intelligence for Ecosystem Services), which aims to introduce improvements on these fronts. To improve conceptual detail and representation of ES dynamics, it adopts a uniform conceptualization of ES that gives equal emphasis to their production, flow and use by society, while keeping model complexity low enough to enable rapid and inexpensive assessment in many contexts and for multiple services. To improve fit to diverse application contexts, the methodology is assisted by model integration technologies that allow assembly of customized models from a growing model base. By using computer learning and reasoning, model structure may be specialized for each application context without requiring costly expertise. In this article we discuss the founding principles of ARIES - both its innovative aspects for ES science and as an example of a new strategy to support more accurate decision making in diverse application contexts.
A methodology for adaptable and robust ecosystem services assessment
Villa, Ferdinando; Bagstad, Kenneth J.; Voigt, Brian; Johnson, Gary W.; Portela, Rosimeiry; Honzák, Miroslav; Batker, David
2014-01-01
Ecosystem Services (ES) are an established conceptual framework for attributing value to the benefits that nature provides to humans. As the promise of robust ES-driven management is put to the test, shortcomings in our ability to accurately measure, map, and value ES have surfaced. On the research side, mainstream methods for ES assessment still fall short of addressing the complex, multi-scale biophysical and socioeconomic dynamics inherent in ES provision, flow, and use. On the practitioner side, application of methods remains onerous due to data and model parameterization requirements. Further, it is increasingly clear that the dominant “one model fits all” paradigm is often ill-suited to address the diversity of real-world management situations that exist across the broad spectrum of coupled human-natural systems. This article introduces an integrated ES modeling methodology, named ARIES (ARtificial Intelligence for Ecosystem Services), which aims to introduce improvements on these fronts. To improve conceptual detail and representation of ES dynamics, it adopts a uniform conceptualization of ES that gives equal emphasis to their production, flow and use by society, while keeping model complexity low enough to enable rapid and inexpensive assessment in many contexts and for multiple services. To improve fit to diverse application contexts, the methodology is assisted by model integration technologies that allow assembly of customized models from a growing model base. By using computer learning and reasoning, model structure may be specialized for each application context without requiring costly expertise. In this article we discuss the founding principles of ARIES - both its innovative aspects for ES science and as an example of a new strategy to support more accurate decision making in diverse application contexts.
A Methodology for Adaptable and Robust Ecosystem Services Assessment
Villa, Ferdinando; Bagstad, Kenneth J.; Voigt, Brian; Johnson, Gary W.; Portela, Rosimeiry; Honzák, Miroslav; Batker, David
2014-01-01
Ecosystem Services (ES) are an established conceptual framework for attributing value to the benefits that nature provides to humans. As the promise of robust ES-driven management is put to the test, shortcomings in our ability to accurately measure, map, and value ES have surfaced. On the research side, mainstream methods for ES assessment still fall short of addressing the complex, multi-scale biophysical and socioeconomic dynamics inherent in ES provision, flow, and use. On the practitioner side, application of methods remains onerous due to data and model parameterization requirements. Further, it is increasingly clear that the dominant “one model fits all” paradigm is often ill-suited to address the diversity of real-world management situations that exist across the broad spectrum of coupled human-natural systems. This article introduces an integrated ES modeling methodology, named ARIES (ARtificial Intelligence for Ecosystem Services), which aims to introduce improvements on these fronts. To improve conceptual detail and representation of ES dynamics, it adopts a uniform conceptualization of ES that gives equal emphasis to their production, flow and use by society, while keeping model complexity low enough to enable rapid and inexpensive assessment in many contexts and for multiple services. To improve fit to diverse application contexts, the methodology is assisted by model integration technologies that allow assembly of customized models from a growing model base. By using computer learning and reasoning, model structure may be specialized for each application context without requiring costly expertise. In this article we discuss the founding principles of ARIES - both its innovative aspects for ES science and as an example of a new strategy to support more accurate decision making in diverse application contexts. PMID:24625496
NASA Astrophysics Data System (ADS)
Ge, Liang; Sotiropoulos, Fotis
2007-08-01
A novel numerical method is developed that integrates boundary-conforming grids with a sharp interface, immersed boundary methodology. The method is intended for simulating internal flows containing complex, moving immersed boundaries such as those encountered in several cardiovascular applications. The background domain (e.g. the empty aorta) is discretized efficiently with a curvilinear boundary-fitted mesh while the complex moving immersed boundary (say a prosthetic heart valve) is treated with the sharp-interface, hybrid Cartesian/immersed-boundary approach of Gilmanov and Sotiropoulos [A. Gilmanov, F. Sotiropoulos, A hybrid cartesian/immersed boundary method for simulating flows with 3d, geometrically complex, moving bodies, Journal of Computational Physics 207 (2005) 457-492.]. To facilitate the implementation of this novel modeling paradigm in complex flow simulations, an accurate and efficient numerical method is developed for solving the unsteady, incompressible Navier-Stokes equations in generalized curvilinear coordinates. The method employs a novel, fully-curvilinear staggered grid discretization approach, which does not require either the explicit evaluation of the Christoffel symbols or the discretization of all three momentum equations at cell interfaces as done in previous formulations. The equations are integrated in time using an efficient, second-order accurate fractional step methodology coupled with a Jacobian-free, Newton-Krylov solver for the momentum equations and a GMRES solver enhanced with multigrid as preconditioner for the Poisson equation. Several numerical experiments are carried out on fine computational meshes to demonstrate the accuracy and efficiency of the proposed method for standard benchmark problems as well as for unsteady, pulsatile flow through a curved, pipe bend. To demonstrate the ability of the method to simulate flows with complex, moving immersed boundaries we apply it to calculate pulsatile, physiological flow through a mechanical, bileaflet heart valve mounted in a model straight aorta with an anatomical-like triple sinus.
Acoustic backscatter models of fish: Gradual or punctuated evolution
NASA Astrophysics Data System (ADS)
Horne, John K.
2004-05-01
Sound-scattering characteristics of aquatic organisms are routinely investigated using theoretical and numerical models. Development of the inverse approach by van Holliday and colleagues in the 1970s catalyzed the development and validation of backscatter models for fish and zooplankton. As the understanding of biological scattering properties increased, so did the number and computational sophistication of backscatter models. The complexity of data used to represent modeled organisms has also evolved in parallel to model development. Simple geometric shapes representing body components or the whole organism have been replaced by anatomically accurate representations derived from imaging sensors such as computer-aided tomography (CAT) scans. In contrast, Medwin and Clay (1998) recommend that fish and zooplankton should be described by simple theories and models, without acoustically superfluous extensions. Since van Holliday's early work, how have data and computational complexity influenced the accuracy and precision of model predictions? How has the understanding of aquatic organism scattering properties increased? Significant steps in the history of model development will be identified and changes in model results will be characterized and compared. [Work supported by ONR and the Alaska Fisheries Science Center.]
Reed, H; Leckey, Cara A C; Dick, A; Harvey, G; Dobson, J
2018-01-01
Ultrasonic damage detection and characterization is commonly used in nondestructive evaluation (NDE) of aerospace composite components. In recent years there has been an increased development of guided wave based methods. In real materials and structures, these dispersive waves result in complicated behavior in the presence of complex damage scenarios. Model-based characterization methods utilize accurate three dimensional finite element models (FEMs) of guided wave interaction with realistic damage scenarios to aid in defect identification and classification. This work describes an inverse solution for realistic composite damage characterization by comparing the wavenumber-frequency spectra of experimental and simulated ultrasonic inspections. The composite laminate material properties are first verified through a Bayesian solution (Markov chain Monte Carlo), enabling uncertainty quantification surrounding the characterization. A study is undertaken to assess the efficacy of the proposed damage model and comparative metrics between the experimental and simulated output. The FEM is then parameterized with a damage model capable of describing the typical complex damage created by impact events in composites. The damage is characterized through a transdimensional Markov chain Monte Carlo solution, enabling a flexible damage model capable of adapting to the complex damage geometry investigated here. The posterior probability distributions of the individual delamination petals as well as the overall envelope of the damage site are determined. Copyright © 2017 Elsevier B.V. All rights reserved.
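The wavenumber-frequency representation used for the comparison is a 2D Fourier transform of the space-time wavefield. The sketch below builds it for a synthetic non-dispersive wave; the grid, frequency, and phase speed are our placeholders, not inspection data.

```python
import numpy as np

# Map a space-time field to the wavenumber-frequency domain with a 2D FFT.
nx, nt, dx, dt = 256, 512, 1e-3, 1e-7         # 1 mm and 0.1 us sampling
x = np.arange(nx) * dx
t = np.arange(nt) * dt
f0, c = 200e3, 1500.0                          # hypothetical tone and speed
field = np.sin(2 * np.pi * f0 * (t[:, None] - x[None, :] / c))

spectrum = np.abs(np.fft.fftshift(np.fft.fft2(field)))
freqs = np.fft.fftshift(np.fft.fftfreq(nt, dt))        # Hz, axis 0
wavenumbers = np.fft.fftshift(np.fft.fftfreq(nx, dx))  # 1/m, axis 1

i_f, i_k = np.unravel_index(spectrum.argmax(), spectrum.shape)
print(f"peak near f = {abs(freqs[i_f]) / 1e3:.0f} kHz, "
      f"k = {abs(wavenumbers[i_k]):.0f} 1/m (expect f0 and f0/c)")
# Experimental and simulated spectra can then be compared over the (k, f)
# plane with a metric such as normalized cross-correlation.
```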
Bioprinting the Cancer Microenvironment.
Zhang, Yu Shrike; Duchamp, Margaux; Oklu, Rahmi; Ellisen, Leif W; Langer, Robert; Khademhosseini, Ali
2016-10-10
Cancer is intrinsically complex, comprising both heterogeneous cellular compositions and microenvironmental cues. During the various stages of cancer initiation, development, and metastasis, cell-cell interactions (involving vascular and immune cells besides cancerous cells) as well as cell-extracellular matrix (ECM) interactions (e.g., alteration in stiffness and composition of the surrounding matrix) play major roles. Conventional cancer models, both two- and three-dimensional (2D and 3D), present numerous limitations as they lack good vascularization and cannot mimic the complexity of tumors, thereby restricting their use as biomimetic models for applications such as drug screening and fundamental cancer biology studies. Bioprinting as an emerging biofabrication platform enables the creation of high-resolution 3D structures and has been extensively used in the past decade to model multiple organs and diseases. More recently, this versatile technique has further found its application in studying cancer genesis, growth, metastasis, and drug responses through creation of accurate models that recreate the complexity of the cancer microenvironment. In this review we will focus first on cancer biology and limitations with current cancer models. We then detail the current bioprinting strategies including the selection of bioinks for capturing the properties of the tumor matrices, after which we discuss bioprinting of vascular structures that are critical toward construction of complex 3D cancer organoids. We finally conclude with current literature on bioprinted cancer models and propose future perspectives.
Remontet, Laurent; Uhry, Zoé; Bossard, Nadine; Iwaz, Jean; Belot, Aurélien; Danieli, Coraline; Charvat, Hadrien; Roche, Laurent
2018-01-01
Cancer survival trend analyses are essential to describe accurately the way medical practices impact patients' survival according to the year of diagnosis. To this end, survival models should be able to account simultaneously for non-linear and non-proportional effects and for complex interactions between continuous variables. However, in the statistical literature, there is no consensus yet on how to build such models that should be flexible but still provide smooth estimates of survival. In this article, we tackle this challenge by smoothing the complex hypersurface (time since diagnosis, age at diagnosis, year of diagnosis, and mortality hazard) using a multidimensional penalized spline built from the tensor product of the marginal bases of time, age, and year. Considering this penalized survival model as a Poisson model, we assess the performance of this approach in estimating the net survival with a comprehensive simulation study that reflects simple and complex realistic survival trends. The bias was generally small and the root mean squared error was good and often similar to that of the true model that generated the data. This parametric approach offers many advantages and interesting prospects (such as forecasting) that make it an attractive and efficient tool for survival trend analyses.
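In generic notation (ours, not necessarily the authors'), the construction amounts to a tensor-product spline for the log-hazard fitted by penalized Poisson likelihood:

```latex
% Log mortality hazard as a tensor-product spline in time since diagnosis t,
% age at diagnosis a, and year of diagnosis y:
\log \lambda(t, a, y) \;=\; \sum_{i,j,k} \theta_{ijk}\, B_i(t)\, B_j(a)\, B_k(y)
% estimated by maximizing the penalized Poisson log-likelihood
\ell_p(\theta) \;=\; \ell(\theta) \;-\; \tfrac{1}{2} \sum_m \lambda_m\, \theta^{\top} S_m\, \theta
```

Here ℓ(θ) is the Poisson log-likelihood over person-time cells, the S_m encode marginal smoothness penalties for each dimension, and the λ_m are smoothing parameters.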
Tensorial Minkowski functionals of triply periodic minimal surfaces
Mickel, Walter; Schröder-Turk, Gerd E.; Mecke, Klaus
2012-01-01
A fundamental understanding of the formation and properties of a complex spatial structure relies on robust quantitative tools to characterize morphology. A systematic approach to the characterization of average properties of anisotropic complex interfacial geometries is provided by integral geometry which furnishes a family of morphological descriptors known as tensorial Minkowski functionals. These functionals are curvature-weighted integrals of tensor products of position vectors and surface normal vectors over the interfacial surface. We here demonstrate their use by application to non-cubic triply periodic minimal surface model geometries, whose Weierstrass parametrizations allow for accurate numerical computation of the Minkowski tensors. PMID:24098847
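Conventions for these functionals vary across the literature, but one common form (our notation) makes the curvature-weighted tensor integrals explicit:

```latex
% Surface Minkowski tensors: curvature-weighted integrals over the interface S
% of tensor products of position vectors r and unit normals n,
W^{r,s}_{\nu}(S) \;=\;
\int_S \underbrace{\mathbf{r} \otimes \cdots \otimes \mathbf{r}}_{r\ \text{factors}}
\otimes \underbrace{\mathbf{n} \otimes \cdots \otimes \mathbf{n}}_{s\ \text{factors}}
\; G_{\nu}\, \mathrm{d}A
% with curvature weights G_1 = 1, G_2 = H (mean curvature),
% G_3 = K (Gaussian curvature), up to normalization.
```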
An evolutionary firefly algorithm for the estimation of nonlinear biological model parameters.
Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N V
2013-01-01
The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test.
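The core firefly move has a standard form, sketched below on a one-parameter toy estimation problem; the paper's hybrid additionally applies differential-evolution operators, which we omit here.

```python
import numpy as np

# One standard form of the firefly move: each firefly drifts toward every
# brighter one, with attractiveness decaying with squared distance, plus a
# small random walk.
def firefly_step(X, cost, beta0=1.0, gamma=1.0, alpha=0.05, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    f = np.array([cost(x) for x in X])
    X = X.copy()
    for i in range(len(X)):
        for j in range(len(X)):
            if f[j] < f[i]:                    # j is brighter (lower cost)
                beta = beta0 * np.exp(-gamma * np.sum((X[i] - X[j]) ** 2))
                X[i] += beta * (X[j] - X[i]) + alpha * rng.normal(size=X.shape[1])
    return X

# Toy model-fitting problem: recover the rate k of y = exp(-k t) from data.
t = np.linspace(0.0, 5.0, 20)
y_obs = np.exp(-0.7 * t)
sse = lambda p: np.sum((np.exp(-p[0] * t) - y_obs) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(15, 1))        # initial swarm of candidate k
for _ in range(100):
    X = firefly_step(X, sse, rng=rng)
best = min(X, key=sse)
print(f"estimated k ≈ {best[0]:.2f} (true value 0.7)")
```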
An Evolutionary Firefly Algorithm for the Estimation of Nonlinear Biological Model Parameters
Abdullah, Afnizanfaizal; Deris, Safaai; Anwar, Sohail; Arjunan, Satya N. V.
2013-01-01
The development of accurate computational models of biological processes is fundamental to computational systems biology. These models are usually represented by mathematical expressions that rely heavily on the system parameters. The measurement of these parameters is often difficult. Therefore, they are commonly estimated by fitting the predicted model to the experimental data using optimization methods. The complexity and nonlinearity of the biological processes pose a significant challenge, however, to the development of accurate and fast optimization methods. We introduce a new hybrid optimization method incorporating the Firefly Algorithm and the evolutionary operation of the Differential Evolution method. The proposed method improves solutions by neighbourhood search using evolutionary procedures. Testing our method on models for the arginine catabolism and the negative feedback loop of the p53 signalling pathway, we found that it estimated the parameters with high accuracy and within a reasonable computation time compared to well-known approaches, including Particle Swarm Optimization, Nelder-Mead, and Firefly Algorithm. We have also verified the reliability of the parameters estimated by the method using an a posteriori practical identifiability test. PMID:23469172
NASA Astrophysics Data System (ADS)
Wei, Zhongbao; Meng, Shujuan; Tseng, King Jet; Lim, Tuti Mariana; Soong, Boon Hee; Skyllas-Kazacos, Maria
2017-03-01
An accurate battery model is the prerequisite for reliable state estimate of vanadium redox battery (VRB). As the battery model parameters are time varying with operating condition variation and battery aging, the common methods where model parameters are empirical or prescribed offline lacks accuracy and robustness. To address this issue, this paper proposes to use an online adaptive battery model to reproduce the VRB dynamics accurately. The model parameters are online identified with both the recursive least squares (RLS) and the extended Kalman filter (EKF). Performance comparison shows that the RLS is superior with respect to the modeling accuracy, convergence property, and computational complexity. Based on the online identified battery model, an adaptive peak power estimator which incorporates the constraints of voltage limit, SOC limit and design limit of current is proposed to fully exploit the potential of the VRB. Experiments are conducted on a lab-scale VRB system and the proposed peak power estimator is verified with a specifically designed "two-step verification" method. It is shown that different constraints dominate the allowable peak power at different stages of cycling. The influence of prediction time horizon selection on the peak power is also analyzed.
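Recursive least squares with a forgetting factor has a compact update. The sketch below tracks the two parameters of a first-order discrete model, a toy stand-in for the VRB equivalent-circuit dynamics; the numbers are ours.

```python
import numpy as np

# RLS with forgetting for a toy first-order model v_k = a*v_{k-1} + b*i_k.
rng = np.random.default_rng(0)
a_true, b_true, lam = 0.95, 0.4, 0.99
theta = np.zeros(2)                   # estimate of [a, b]
P = np.eye(2) * 1e3                   # estimate covariance

v_prev = 0.0
for _ in range(500):
    i_k = rng.uniform(-1.0, 1.0)                      # excitation current
    v_k = a_true * v_prev + b_true * i_k + rng.normal(0, 0.01)
    phi = np.array([v_prev, i_k])                     # regressor
    K = P @ phi / (lam + phi @ P @ phi)               # gain
    theta += K * (v_k - phi @ theta)                  # prediction-error update
    P = (P - np.outer(K, phi @ P)) / lam              # covariance update
    v_prev = v_k

print("estimated [a, b] =", np.round(theta, 3))       # ≈ [0.95, 0.4]
```

The forgetting factor λ < 1 discounts old data, which is what lets such an estimator track parameters that drift with operating condition and battery aging.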
Bayesian analysis of physiologically based toxicokinetic and toxicodynamic models.
Hack, C Eric
2006-04-17
Physiologically based toxicokinetic (PBTK) and toxicodynamic (TD) models of bromate in animals and humans would improve our ability to accurately estimate the toxic doses in humans based on available animal studies. These mathematical models are often highly parameterized and must be calibrated in order for the model predictions of internal dose to adequately fit the experimentally measured doses. Highly parameterized models are difficult to calibrate and it is difficult to obtain accurate estimates of uncertainty or variability in model parameters with commonly used frequentist calibration methods, such as maximum likelihood estimation (MLE) or least-squares approaches. The Bayesian approach called Markov chain Monte Carlo (MCMC) analysis can be used to successfully calibrate these complex models. Prior knowledge about the biological system and associated model parameters is easily incorporated in this approach in the form of prior parameter distributions, and the distributions are refined or updated using experimental data to generate posterior distributions of parameter estimates. The goal of this paper is to give the non-mathematician a brief description of the Bayesian approach and Markov chain Monte Carlo analysis, how this technique is used in risk assessment, and the issues associated with this approach.
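A random-walk Metropolis sampler, the simplest MCMC variant, shows the mechanics on a one-parameter toy calibration; the model, prior, and data below are our placeholders, not a bromate PBTK model.

```python
import numpy as np

# Random-walk Metropolis sketch: calibrate an elimination rate k against
# noisy "internal dose" measurements, with a lognormal-type prior.
rng = np.random.default_rng(7)
t = np.linspace(0.5, 8.0, 10)
model = lambda k: 5.0 * np.exp(-k * t)               # predicted internal dose
data = model(0.3) + rng.normal(0, 0.2, t.size)       # synthetic measurements

def log_post(k, sigma=0.2):
    if k <= 0:
        return -np.inf
    log_prior = -0.5 * (np.log(k) - np.log(0.5)) ** 2    # unnormalized prior
    log_like = -0.5 * np.sum((data - model(k)) ** 2) / sigma ** 2
    return log_prior + log_like

k, lp, chain = 0.5, log_post(0.5), []
for _ in range(20_000):
    k_new = k + rng.normal(0, 0.05)                  # random-walk proposal
    lp_new = log_post(k_new)
    if np.log(rng.uniform()) < lp_new - lp:          # Metropolis accept/reject
        k, lp = k_new, lp_new
    chain.append(k)

post = np.array(chain[5_000:])                       # discard burn-in
print(f"posterior k: mean {post.mean():.3f}, 95% interval "
      f"({np.quantile(post, 0.025):.3f}, {np.quantile(post, 0.975):.3f})")
```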
Enhanced visual performance in obsessive compulsive personality disorder.
Ansari, Zohreh; Fadardi, Javad Salehi
2016-12-01
Visual performance is considered a dominant modality in human perception. We tested whether people with obsessive-compulsive personality disorder (OCPD) perform differently on visual performance tasks than people without OCPD. One hundred ten students of Ferdowsi University of Mashhad and non-student participants were screened with the Structured Clinical Interview for DSM-IV Axis II Personality Disorders (SCID-II), among whom 18 (mean age = 29.55; SD = 5.26; 84% female) met the criteria for OCPD classification; controls were 20 persons (mean age = 27.85; SD = 5.26; 84% female) who did not meet the OCPD criteria. Both groups were tested on a modified Flicker task covering two dimensions of visual performance (i.e., visual acuity: detecting the location of change, complexity, and size; and visual contrast sensitivity). The OCPD group responded more accurately on pairs related to size, complexity, and contrast, but spent more time detecting a change on pairs related to complexity and contrast. OCPD individuals thus seem to have more accurate visual performance than non-OCPD controls. The findings support the relationship between personality characteristics and visual performance within the framework of a top-down processing model. © 2016 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Multi-level emulation of complex climate model responses to boundary forcing data
NASA Astrophysics Data System (ADS)
Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter
2018-04-01
Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or for assessing the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low-complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower-complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
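A common recipe for this kind of field emulation, and one plausible reading of the dimensionality-reduction step, is to project the output fields onto their leading principal components and fit an independent Gaussian-process regressor to each retained component. The sketch below uses a fixed RBF kernel with hand-picked hyperparameters and omits the multi-level part (regression on the fast model's own fields), so it is a simplified stand-in for the paper's method.

```python
import numpy as np

def rbf(X1, X2, ls=1.0, var=1.0):
    """Squared-exponential kernel between two 2-D input arrays."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls**2)

def fit_field_emulator(X, Y, n_modes=5, noise=1e-6):
    """Project output fields Y (n_runs x n_gridpoints) onto their leading
    principal components, then fit one GP per retained component as a
    function of the inputs X (n_runs x n_inputs)."""
    Y_mean = Y.mean(0)
    U, s, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
    scores = U[:, :n_modes] * s[:n_modes]        # per-run PC scores
    K = rbf(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, scores)           # GP weights, one column per mode

    def predict(X_new):
        K_star = rbf(X_new, X)
        return Y_mean + (K_star @ alpha) @ Vt[:n_modes]
    return predict
```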
NASA Astrophysics Data System (ADS)
Li, Mengtian; Zhang, Ruisheng; Hu, Rongjing; Yang, Fan; Yao, Yabing; Yuan, Yongna
2018-03-01
Identifying influential spreaders is a crucial problem that can help authorities to control the spreading process in complex networks. Based on the classical degree centrality (DC), several improved measures have been presented. However, these measures cannot rank spreaders accurately. In this paper, we first calculate the sum of the degrees of the nearest neighbors of a given node, and based on this sum, a novel centrality named clustered local-degree (CLD) is proposed, which combines the sum with the clustering coefficients of nodes to rank spreaders. By assuming that the spreading process in networks follows the susceptible-infectious-recovered (SIR) model, we perform extensive simulations on a series of real networks to compare the performance of the CLD centrality against six other measures. The results show that the CLD centrality has a competitive performance in distinguishing the spreading ability of nodes, and shows the best performance in identifying influential spreaders accurately.
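The abstract names the two ingredients (summed neighbor degree and clustering coefficient) but not their exact combination, so the sketch below is one plausible reading in which low clustering boosts a node's score; the (1 - C) weighting is an assumption, not the paper's formula.

```python
import networkx as nx

def clustered_local_degree(G):
    """Sketch of a CLD-style centrality: for each node, combine the sum
    of its neighbours' degrees with its clustering coefficient.  The
    (1 - C) weight (low clustering favours spreading) is an assumed
    functional form, not the one defined in the paper."""
    deg = dict(G.degree())
    clust = nx.clustering(G)
    return {v: (1.0 - clust[v]) * sum(deg[u] for u in G[v]) for v in G}

# Example: rank the nodes of a small benchmark graph
G = nx.karate_club_graph()
ranking = sorted(clustered_local_degree(G).items(), key=lambda kv: -kv[1])
print(ranking[:5])   # five highest-ranked candidate spreaders
```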
New strategy for protein interactions and application to structure-based drug design
NASA Astrophysics Data System (ADS)
Zou, Xiaoqin
One of the greatest challenges in computational biophysics is to predict interactions between biological molecules, which play critical roles in biological processes and in the rational design of therapeutic drugs. Biomolecular interactions involve a delicate interplay between multiple interactions, including electrostatic interactions, van der Waals interactions, solvent effects, and conformational entropic effects. Accurate determination of these complex and subtle interactions is challenging. Moreover, a biological molecule such as a protein usually consists of thousands of atoms, and thus occupies a huge conformational space. The large number of degrees of freedom poses further challenges for accurate prediction of biomolecular interactions. Here, I will present our development of physics-based theory and computational modeling of protein interactions with other molecules. The major strategy is to extract microscopic energetics from the information embedded in the experimentally determined structures of protein complexes. I will also present applications of the methods to structure-based therapeutic design. Supported by NSF CAREER Award DBI-0953839, NIH R01GM109980, and the American Heart Association (Midwest Affiliate) [13GRNT16990076].
Evaluation of indirect impedance for measuring microbial growth in complex food matrices.
Johnson, N; Chang, Z; Bravo Almeida, C; Michel, M; Iversen, C; Callanan, M
2014-09-01
The suitability of indirect impedance to accurately measure microbial growth in real food matrices was investigated. A variety of semi-solid and liquid food products were inoculated with Bacillus cereus, Listeria monocytogenes, Staphylococcus aureus, Lactobacillus plantarum, Pseudomonas aeruginosa, Escherichia coli, Salmonella enteritidis, Candida tropicalis or Zygosaccharomyces rouxii, and CO2 production was monitored using a conductimetric (Don Whitley R.A.B.I.T.) system. The majority (80%) of food and microbe combinations produced a detectable growth signal. The linearity of conductance responses in selected food products was investigated and a good correlation (R² ≥ 0.84) was observed between inoculum levels and times to detection. Specific growth rate estimations from the data were sufficiently accurate for predictive modeling in some cases. This initial evaluation of the suitability of indirect impedance to generate microbial growth data in complex food matrices indicates significant potential for the technology as an alternative to plating methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
Model identification of signal transduction networks from data using a state regulator problem.
Gadkar, K G; Varner, J; Doyle, F J
2005-03-01
Advances in molecular biology provide an opportunity to develop detailed models of biological processes that can be used to obtain an integrated understanding of the system. However, developing useful models from the available knowledge of the system and experimental observations remains a daunting task. In this work, a model identification strategy for complex biological networks is proposed. The approach includes a state regulator problem (SRP) that provides estimates of all the component concentrations and reaction rates of the network using the available measurements. The full set of estimates is utilised for model parameter identification for a network of known topology. An a priori model complexity test is developed that indicates whether the proposed algorithm can be expected to perform well. Fisher information matrix (FIM) theory is used to address model identifiability issues. Two signalling pathway case studies, the caspase function in apoptosis and the MAP kinase cascade system, are considered. The MAP kinase cascade, with measurements restricted to protein complex concentrations, fails the a priori test, and the SRP estimates are poor as expected. The apoptosis network structure used in this work has moderate complexity and is suitable for application of the proposed tools. Using a measurement set of seven protein concentrations, accurate estimates for all unknowns are obtained. Furthermore, the effects of measurement sampling frequency and of the quality of information in the measurement set on the performance of the identified model are described.
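Identifiability analyses of this kind usually reduce to inspecting the eigenvalue spectrum of the Fisher information matrix built from output sensitivities. The sketch below assumes additive Gaussian measurement noise and a sensitivity matrix computed elsewhere (e.g. by finite differences); it is a generic illustration, not the paper's specific a priori test.

```python
import numpy as np

def fisher_information(sensitivities, meas_var=1.0):
    """FIM for additive Gaussian measurement noise: F = S^T S / sigma^2,
    where S[i, j] = d(prediction_i)/d(theta_j), stacked over all measured
    time points.  Near-zero eigenvalues flag parameter directions that
    the chosen measurement set cannot identify."""
    S = np.asarray(sensitivities)
    F = S.T @ S / meas_var
    eigvals = np.linalg.eigvalsh(F)
    return F, eigvals

# Usage: build S by perturbing each parameter of the network model and
# recording the change in the measured outputs; a very large condition
# number (max/min eigenvalue) signals practical non-identifiability, as
# seen for the MAP kinase case discussed in the abstract.
```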
State-of-charge estimation in lithium-ion batteries: A particle filter approach
NASA Astrophysics Data System (ADS)
Tulsyan, Aditya; Tsai, Yiting; Gopaluni, R. Bhushan; Braatz, Richard D.
2016-11-01
The dynamics of lithium-ion batteries are complex and are often approximated by models consisting of partial differential equations (PDEs) relating the internal ionic concentrations and potentials. The pseudo-two-dimensional (P2D) model is one model that performs sufficiently accurately across various operating conditions and battery chemistries. Despite its widespread use for prediction, this model is too complex for standard estimation and control applications. This article presents an original algorithm for state-of-charge estimation using the P2D model. The partial differential equations are discretized using implicit stable algorithms and reformulated into a nonlinear state-space model. This discrete, high-dimensional model (consisting of tens to hundreds of states) contains implicit, nonlinear algebraic equations. The uncertainty in the model is characterized by additive Gaussian noise. By exploiting the special structure of the pseudo-two-dimensional model, a novel particle filter algorithm that sweeps in time and spatial coordinates independently is developed. This algorithm circumvents the degeneracy problems associated with high-dimensional state estimation and avoids the repetitive solution of implicit equations by defining a 'tether' particle. The approach is illustrated through extensive simulations.
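For orientation, the textbook bootstrap particle filter that the paper's tethered, coordinate-sweeping variant builds on looks like the sketch below. Here `propagate`, `likelihood`, and `x0_sampler` stand in for the discretized battery state model, the voltage-measurement model, and the initial state distribution, none of which are specified by the abstract.

```python
import numpy as np

def bootstrap_pf(y_seq, propagate, likelihood, x0_sampler, n_particles=500):
    """Generic bootstrap particle filter: `propagate` draws x_k from the
    (discretized, P2D-style) state model, `likelihood` evaluates
    p(y_k | x_k) for each particle, e.g. from the measured terminal
    voltage.  Returns the posterior-mean state trajectory."""
    rng = np.random.default_rng(2)
    x = x0_sampler(n_particles)                     # (N, n_states) particles
    estimates = []
    for y in y_seq:
        x = propagate(x, rng)                       # prediction step
        w = likelihood(y, x)                        # importance weights, (N,)
        w /= w.sum()
        idx = rng.choice(len(x), size=len(x), p=w)  # multinomial resampling
        x = x[idx]
        estimates.append(x.mean(axis=0))            # posterior-mean state
    return np.array(estimates)
```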
Robust encoding of stimulus identity and concentration in the accessory olfactory system.
Arnson, Hannah A; Holy, Timothy E
2013-08-14
Sensory systems represent stimulus identity and intensity, but in the neural periphery these two variables are typically intertwined. Moreover, stable detection may be complicated by environmental uncertainty; stimulus properties can differ over time and circumstance in ways that are not necessarily biologically relevant. We explored these issues in the context of the mouse accessory olfactory system, which specializes in detection of chemical social cues and infers myriad aspects of the identity and physiological state of conspecifics from complex mixtures, such as urine. Using mixtures of sulfated steroids, key constituents of urine, we found that spiking responses of individual vomeronasal sensory neurons encode both individual compounds and mixtures in a manner consistent with a simple model of receptor-ligand interactions. Although typical neurons did not accurately encode concentration over a large dynamic range, from population activity it was possible to reliably estimate the log-concentration of pure compounds over several orders of magnitude. For binary mixtures, simple models failed to accurately segment the individual components, largely because of the prevalence of neurons responsive to both components. By accounting for such overlaps during model tuning, we show that, from neuronal firing, one can accurately estimate log-concentration of both components, even when tested across widely varying concentrations. With this foundation, the difference of logarithms, log A - log B = log A/B, provides a natural mechanism to accurately estimate concentration ratios. Thus, we show that a biophysically plausible circuit model can reconstruct concentration ratios from observed neuronal firing, representing a powerful mechanism to separate stimulus identity from absolute concentration.
Improved Simulation of Electrodiffusion in the Node of Ranvier by Mesh Adaptation.
Dione, Ibrahima; Deteix, Jean; Briffard, Thomas; Chamberland, Eric; Doyon, Nicolas
2016-01-01
In neural structures with complex geometries, numerical resolution of the Poisson-Nernst-Planck (PNP) equations is necessary to accurately model electrodiffusion. This formalism allows one to describe ionic concentrations and the electric field (even away from the membrane) with arbitrary spatial and temporal resolution, which is impossible to achieve with models relying on cable theory. However, solving the PNP equations on complex geometries involves handling intricate numerical difficulties related to the spatial discretization, the temporal discretization, or the resolution of the linearized systems, often requiring large computational resources which have limited the use of this approach. In the present paper, we investigate the best ways to use the finite element method (FEM) to solve the PNP equations on domains with discontinuous properties (such as occur at the membrane-cytoplasm interface). (1) Using a simple 2D geometry to allow comparison with an analytical solution, we show that mesh adaptation is a very efficient (if not the most efficient) way to obtain accurate solutions while limiting the computational effort; (2) we use mesh adaptation in a 3D model of a node of Ranvier to reveal details of the solution which are nearly impossible to resolve with other modelling techniques. For instance, we exhibit a nonlinear distribution of the electric potential within the membrane due to the nonuniform width of the myelin and investigate its impact on the spatial profile of the electric field in the Debye layer.
NASA Technical Reports Server (NTRS)
Jones, Gregory S.; Yao, Chung-Sheng; Allan, Brian G.
2006-01-01
Recent efforts in extreme short takeoff and landing aircraft configurations have renewed the interest in circulation control wing design and optimization. The key to accurately designing and optimizing these configurations rests in the modeling of the complex physics of these flows. This paper will highlight the physics of the stagnation and separation regions on two typical circulation control airfoil sections.
Observing Consistency in Online Communication Patterns for User Re-Identification
Venter, Hein S.
2016-01-01
Comprehension of the statistical and structural mechanisms governing human dynamics in online interaction plays a pivotal role in online user identification, online profile development, and recommender systems. However, building a characteristic model of human dynamics on the Internet involves a complete analysis of the variations in human activity patterns, which is a complex process. This complexity is inherent in human dynamics and has not been extensively studied to reveal the structural composition of human behavior. A typical way to anatomize such a complex system is to examine all of the independent interconnectivities that constitute the complexity. An examination of the various dimensions of human communication patterns in online interactions is presented in this paper. The study employed reliable server-side web data from 31 known users to explore characteristics of human-driven communications. Various machine-learning techniques were explored. The results revealed that each individual exhibited a relatively consistent, unique behavioral signature and that the logistic regression model and model tree can be used to accurately distinguish online users. These results are applicable to one-to-one online user identification processes, insider misuse investigation processes, and online profiling in various areas. PMID:27918593
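A minimal baseline for the logistic-regression result described here is sketched below with scikit-learn. The per-session feature matrix (e.g. request-timing statistics or navigation-pattern counts) is an assumption, since the abstract does not list the features actually used.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def user_reid_accuracy(X, y):
    """Multinomial logistic-regression baseline for re-identifying users
    from behavioural features, scored by 5-fold cross-validation.
    X: (n_sessions, n_features) feature matrix (feature choice is an
    assumption); y: user-identity labels for the 31 known users."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5).mean()
```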
Integration of Infrared Thermography and Photogrammetric Surveying of Built Landscape
NASA Astrophysics Data System (ADS)
Scaioni, M.; Rosina, E.; L'Erario, A.; Dìaz-Vilariño, L.
2017-05-01
The thermal analysis of buildings represents a key step in the reduction of energy consumption, also in the case of Cultural Heritage, where the complexity of the constructions and the adopted materials might require special analysis and tailored solutions. Infrared Thermography (IRT) is an important non-destructive investigation technique that may aid in the thermal analysis of buildings. The paper reports the application of IRT to a listed building belonging to the Cultural Heritage and to a residential one, as a demonstration that IRT is a suitable and convenient tool for analysing existing buildings. The purposes of the analysis are the assessment of the damage and of the energy efficiency of the building envelope. Since in many cases the complex geometry of historic constructions may complicate the thermal analysis, the integration of IRT with accurate 3D models has been developed in recent years. Here the authors propose a solution based on up-to-date photogrammetric methods for purely image-based 3D modelling, including automatic image orientation/sensor calibration using Structure-from-Motion and dense matching. Thus, an almost fully automatic pipeline for the generation of accurate 3D models showing the temperatures on a building skin in a realistic manner is described, where the only manual task is the measurement of a few common points for co-registration of the RGB and IR photogrammetric projects.
Evaluation of an urban vegetative canopy scheme and impact on plume dispersion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Matthew A; Williams, Michael D; Zajic, Dragan
2009-01-01
The Quick Urban and Industrial Complex (QUIC) atmospheric dispersion modeling system attempts to fill an important gap between the fast but nonbuilding-aware Gaussian plume models and the building-aware but slow computational fluid dynamics (CFD) models. While Gaussian models have the ability to give answers quickly to emergency responders, they are unlikely to be able to adequately account for the effects of building-induced complex flow patterns on the near-source dispersion of contaminants. QUIC uses a diagnostic mass-consistent empirical wind model called QUIC-URB that is based on the methodology of Röckle (1990; see also Kaplan and Dinar 1996). In this approach, the recirculation zones that form around and between buildings are inserted into the flow using empirical parameterizations, and then the wind field is forced to be mass consistent. Although not as accurate as CFD codes, this approach is several orders of magnitude faster and accounts for the bulk effects of buildings.
Learning complex temporal patterns with resource-dependent spike timing-dependent plasticity.
Hunzinger, Jason F; Chan, Victor H; Froemke, Robert C
2012-07-01
Studies of spike timing-dependent plasticity (STDP) have revealed that long-term changes in the strength of a synapse may be modulated substantially by temporal relationships between multiple presynaptic and postsynaptic spikes. Whereas long-term potentiation (LTP) and long-term depression (LTD) of synaptic strength have been modeled as distinct or separate functional mechanisms, here, we propose a new shared resource model. A functional consequence of our model is fast, stable, and diverse unsupervised learning of temporal multispike patterns with a biologically consistent spiking neural network. Due to interdependencies between LTP and LTD, dendritic delays, and proactive homeostatic aspects of the model, neurons are equipped to learn to decode temporally coded information within spike bursts. Moreover, neurons learn spike timing with few exposures in substantial noise and jitter. Surprisingly, despite having only one parameter, the model also accurately predicts in vitro observations of STDP in more complex multispike trains, as well as rate-dependent effects. We discuss candidate commonalities in natural long-term plasticity mechanisms.
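To make the shared-resource idea concrete, the sketch below gates a standard pair-based STDP update by a resource variable that both LTP and LTD draw from. The exponential windows are the classic pair-based form; the depletion and recovery kinetics are illustrative assumptions rather than the paper's single-parameter model.

```python
import numpy as np

def stdp_update(w, r, dt, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0, tau_r=200.0):
    """One pair-based STDP update gated by a shared resource r in [0, 1].
    dt = t_post - t_pre (ms).  Coupling LTP and LTD through the same
    depleting resource is the idea described in the abstract; the exact
    depletion/recovery kinetics here are illustrative assumptions."""
    if dt >= 0:                                   # pre before post -> LTP
        dw = r * a_plus * np.exp(-dt / tau_plus)
    else:                                         # post before pre -> LTD
        dw = -r * a_minus * np.exp(dt / tau_minus)
    r_new = r - abs(dw) + (1.0 - r) / tau_r       # deplete, then recover
    return np.clip(w + dw, 0.0, 1.0), np.clip(r_new, 0.0, 1.0)
```

Because every potentiation transiently reduces the resource available for subsequent depression (and vice versa), successive spike pairs interact, which is one way such a model can reproduce multispike and rate-dependent effects.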
The robust nature of the biopsychosocial model challenge and threat: a reply to Wright and Kirby.
Blascovich, Jim; Mendes, Wendy Berry; Tomaka, Joe; Salomon, Kristen; Seery, Mark
2003-01-01
This article responds to Wright and Kirby's (this issue) critique of our biopsychosocial (BPS) analysis of challenge and threat motivation. We counter their arguments by reviewing the current state of our theory as well as supporting data, then turn to their specific criticisms. We believe that Wright and Kirby failed to accurately represent the corpus of our work, including both our theoretical model and its supporting data. They critiqued our model from a contextual, rational-economic perspective that ignores the complexity and subjectivity of person-person and person-environmental interactions as well as nonconscious influences. Finally, they provided criticisms regarding possible underspecificity of antecedent components of our model that do not so much indicate theoretical flaws as provide important and interesting questions for future research. We conclude by affirming that our BPS model of challenge and threat is an evolving, generative theory directed toward understanding the complexity of personality and social psychological factors underlying challenge and threat states.
Dendritic trafficking faces physiologically critical speed-precision tradeoffs
Williams, Alex H; O'Donnell, Cian; Sejnowski, Terrence J; O'Leary, Timothy
2016-01-01
Nervous system function requires intracellular transport of channels, receptors, mRNAs, and other cargo throughout complex neuronal morphologies. Local signals such as synaptic input can regulate cargo trafficking, motivating the leading conceptual model of neuron-wide transport, sometimes called the ‘sushi-belt model’ (Doyle and Kiebler, 2011). Current theories and experiments are based on this model, yet its predictions are not rigorously understood. We formalized the sushi-belt model mathematically, and show that it can achieve arbitrarily complex spatial distributions of cargo in reconstructed morphologies. However, the model also predicts an unavoidable, morphology-dependent tradeoff between the speed, precision and metabolic efficiency of cargo transport. With experimental estimates of trafficking kinetics, the model predicts delays of many hours or days for modestly accurate and efficient cargo delivery throughout a dendritic tree. These findings challenge current understanding of the efficacy of nucleus-to-synapse trafficking and may explain the prevalence of local biosynthesis in neurons. DOI: http://dx.doi.org/10.7554/eLife.20556.001 PMID:28034367
Söderlund, Johan; Lindskog, Maria
2018-04-23
The diagnosis of a mental disorder generally depends on clinical observations and phenomenological symptoms reported by the patient. The definition of a given diagnosis is criteria based and relies on the ability to accurately interpret subjective symptoms and complex behavior. This type of diagnosis comprises a challenge to translate to reliable animal models, and these translational uncertainties hamper the development of new treatments. In this review, we will discuss how depressive-like behavior can be induced in rodents, and the relationship between these models and depression in humans. Specifically, we suggest similarities between triggers of depressive-like behavior in animal models and human conditions known to increase the risk of depression, for example exhaustion and bullying. Although we acknowledge the potential problems in comparing animal findings to human conditions, such comparisons are useful for understanding the complexity of depression, and we highlight the need to develop clinical diagnoses and animal models in parallel to overcome translational uncertainties.
Predicting Human Preferences Using the Block Structure of Complex Social Networks
Guimerà, Roger; Llorente, Alejandro; Moro, Esteban; Sales-Pardo, Marta
2012-01-01
With ever-increasing available data, predicting individuals' preferences and helping them locate the most relevant information has become a pressing need. Understanding and predicting preferences is also important from a fundamental point of view, as part of what has been called a “new” computational social science. Here, we propose a novel approach based on stochastic block models, which have been developed by sociologists as plausible models of complex networks of social interactions. Our model is in the spirit of predicting individuals' preferences based on the preferences of others but, rather than fitting a particular model, we rely on a Bayesian approach that samples over the ensemble of all possible models. We show that our approach is considerably more accurate than leading recommender algorithms, with major relative improvements between 38% and 99% over industry-level algorithms. Besides, our approach sheds light on decision-making processes by identifying groups of individuals that have consistently similar preferences, and enabling the analysis of the characteristics of those groups. PMID:22984533
BEYOND ELLIPSE(S): ACCURATELY MODELING THE ISOPHOTAL STRUCTURE OF GALAXIES WITH ISOFIT AND CMODEL
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciambur, B. C., E-mail: bciambur@swin.edu.au
2015-09-10
This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen as representative case studies for several areas where this technique makes it possible to gain new scientific insight: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher-order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxiness/diskiness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of overlapping sources such as globular clusters and the optical counterparts of X-ray sources.
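The geometric idea can be sketched in a few lines: sample the base ellipse in the eccentric anomaly and superimpose radial Fourier harmonics for boxy, disky, or X-shaped deviations. This is a schematic of the parameterization only, not the IRAF Isofit implementation; the harmonic normalization and the fitting machinery are assumptions.

```python
import numpy as np

def isophote_xy(a, b, pa=0.0, harmonics=None, n=360):
    """Quasi-elliptical isophote sampled in the eccentric anomaly psi:
    the base ellipse is (a cos psi, b sin psi), and higher-order shape
    information (boxiness, diskiness, X/peanut structure) enters as
    radial Fourier perturbations A_m cos(m psi) + B_m sin(m psi)."""
    psi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dr = np.zeros_like(psi)
    for m, (A, B) in (harmonics or {}).items():
        dr += A * np.cos(m * psi) + B * np.sin(m * psi)
    x, y = a * np.cos(psi), b * np.sin(psi)
    rad = np.hypot(x, y)                       # base-ellipse radius at each psi
    x, y = x * (1 + dr / rad), y * (1 + dr / rad)
    cpa, spa = np.cos(pa), np.sin(pa)
    return x * cpa - y * spa, x * spa + y * cpa   # rotate by position angle

# e.g. a boxy isophote: negative fourth-order cosine term
x, y = isophote_xy(10.0, 6.0, harmonics={4: (-0.4, 0.0)})
```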
Determination of effective loss factors in reduced SEA models
NASA Astrophysics Data System (ADS)
Chimeno Manguán, M.; Fernández de las Heras, M. J.; Roibás Millán, E.; Simón Hidalgo, F.
2017-01-01
The definition of Statistical Energy Analysis (SEA) models for large complex structures is highly conditioned by the classification of the structure's elements into a set of coupled subsystems and the subsequent determination of the loss factors representing both the internal damping and the coupling between subsystems. An accurate definition of the complete system can lead to excessively large models as the size and complexity increase. This fact can also raise practical issues for the experimental determination of the loss factors. This work presents a formulation of reduced SEA models for incomplete systems defined by a set of effective loss factors. The reduced SEA model provides a feasible number of subsystems for the application of the Power Injection Method (PIM). For structures of high complexity, the accessibility of their components can be restricted, for instance internal equipment or panels. In such cases PIM cannot be used to carry out an experimental SEA analysis. New methods are presented for this situation in combination with the reduced SEA models. These methods allow determining some of the model loss factors that could not be obtained through PIM. The methods are validated with a numerical analysis case and are also applied to an actual spacecraft structure with accessibility restrictions: a solar wing in folded configuration.
AST: Activity-Security-Trust driven modeling of time varying networks
Wang, Jian; Xu, Jiake; Liu, Yanheng; Deng, Weiwen
2016-01-01
Network modeling is a flexible mathematical structure that makes it possible to identify statistical regularities and structural principles hidden in complex systems. The majority of recent driving forces in modeling complex networks originate from activity, in which an activity potential given by a time-invariant function is introduced to identify agents’ interactions and to construct an activity-driven model. However, newly emerging network evolutions are deeply coupled with not only explicit factors (e.g. activity) but also implicit considerations (e.g. security and trust), so more intrinsic driving forces should be integrated into the modeling of time-varying networks. Agents undoubtedly seek to build a time-dependent trade-off among activity, security, and trust when generating a new connection to another agent. Thus, we propose the Activity-Security-Trust (AST) driven model, which synthetically considers the explicit and implicit driving forces (activity, security, and trust) underlying the decision process. The AST-driven model makes it possible to capture highly dynamical network behaviors more accurately and to figure out the complex evolution process, allowing a profound understanding of the effects of security and trust in driving network evolution and reducing the biases induced by involving only activity representations in analyzing the dynamical processes. PMID:26888717
Validated Predictions of Metabolic Energy Consumption for Submaximal Effort Movement
Tsianos, George A.; MacFadden, Lisa N.
2016-01-01
Physical performance emerges from complex interactions among many physiological systems that are largely driven by the metabolic energy demanded. Quantifying metabolic demand is an essential step for revealing the many mechanisms of physical performance decrement, but accurate predictive models do not exist. The goal of this study was to investigate if a recently developed model of muscle energetics and force could be extended to reproduce the kinematics, kinetics, and metabolic demand of submaximal effort movement. Upright dynamic knee extension against various levels of ergometer load was simulated. Task energetics were estimated by combining the model of muscle contraction with validated models of lower limb musculotendon paths and segment dynamics. A genetic algorithm was used to compute the muscle excitations that reproduced the movement with the lowest energetic cost, which was determined to be an appropriate criterion for this task. Model predictions of oxygen uptake rate (VO2) were well within experimental variability for the range over which the model parameters were confidently known. The model's accurate estimates of metabolic demand make it useful for assessing the likelihood and severity of physical performance decrement for a given task as well as investigating underlying physiologic mechanisms. PMID:27248429
Assessing the fit of site-occupancy models
MacKenzie, D.I.; Bailey, L.L.
2004-01-01
Few species are likely to be so evident that they will always be detected at a site when present. Recently a model has been developed that enables estimation of the proportion of area occupied when the target species is not detected with certainty. Here we apply this modeling approach to data collected on terrestrial salamanders in the Plethodon glutinosus complex in the Great Smoky Mountains National Park, USA, and address the question 'how accurately does the fitted model represent the data?' The goodness-of-fit of the model needs to be assessed in order to make accurate inferences. This article presents a method where a simple Pearson chi-square statistic is calculated and a parametric bootstrap procedure is used to determine whether the observed statistic is unusually large. We found evidence that the most global model considered provides a poor fit to the data, and hence estimated an overdispersion factor to adjust model selection procedures and inflate standard errors. Two hypothetical datasets with known assumption violations are also analyzed, illustrating that the method may be used to guide researchers to appropriate inferences. The results of a simulation study are presented to provide a broader view of the method's properties.
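The bootstrap procedure described here maps directly onto a short loop: simulate detection histories from the fitted occupancy model, refit, recompute the Pearson statistic, and compare the observed value against the simulated distribution. In the sketch below, `simulate`, `fit`, and the `pearson_chi2` method are hypothetical placeholders for the model-specific machinery; the overdispersion estimate uses one common convention (observed statistic divided by the mean bootstrap statistic), which is an assumption.

```python
import numpy as np

def bootstrap_gof(chi2_obs, fitted_model, simulate, fit, n_boot=1000):
    """Parametric bootstrap for a Pearson chi-square GOF test: simulate
    data from the fitted occupancy model, refit, recompute the statistic,
    and report the proportion of simulated statistics exceeding the
    observed one (the bootstrap p-value)."""
    stats = []
    for _ in range(n_boot):
        data_b = simulate(fitted_model)      # simulated detection histories
        model_b = fit(data_b)                # refit to the simulated data
        stats.append(model_b.pearson_chi2()) # hypothetical model-object API
    stats = np.array(stats)
    p_value = np.mean(stats >= chi2_obs)
    c_hat = chi2_obs / stats.mean()          # overdispersion factor (assumed form)
    return p_value, c_hat
```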
Models to teach lung sonopathology and ultrasound-guided thoracentesis.
Wojtczak, Jacek A
2014-12-01
Lung sonography allows rapid diagnosis of lung emergencies such as pulmonary edema, hemothorax or pneumothorax. The ability to diagnose an intraoperative pneumothorax in a timely manner is an important skill for the anesthesiologist. However, lung ultrasound exams require the interpretation not only of real images but also of complex acoustic artifacts such as A-lines and B-lines. Therefore, appropriate training to gain proficiency is important. A simulated environment using ultrasound phantom models allows controlled, supervised learning. We have developed hybrid models that combine dry or wet polyurethane foams, porcine rib cages and a human hand simulating a rib cage. These models simulate pulmonary sonopathology fairly accurately and allow supervised teaching of lung sonography with immediate feedback. In vitro models can also facilitate the learning of procedural skills, improving transducer and needle positioning and movement, rapid recognition of thoracic anatomy, and hand-eye coordination skills. We also describe a new model to teach ultrasound-guided thoracentesis. This model consists of the experimenter's hand placed on top of a water-filled container with a wet foam. The metacarpal bones of the human hand simulate a rib cage and the wet foam simulates a diseased lung immersed in pleural fluid. Positive fluid flow offers users feedback when a simulated pleural effusion is accurately assessed.
Relationships between host viremia and vector susceptibility for arboviruses.
Lord, Cynthia C; Rutledge, C Roxanne; Tabachnick, Walter J
2006-05-01
Using a threshold model where a minimum level of host viremia is necessary to infect vectors affects our assessment of the relative importance of different host species in the transmission and spread of these pathogens. Other models may be more accurate descriptions of the relationship between host viremia and vector infection. Under the threshold model, the intensity and duration of the viremia above the threshold level is critical in determining the potential numbers of infected mosquitoes. A probabilistic model relating host viremia to the probability distribution of virions in the mosquito bloodmeal shows that the threshold model will underestimate the significance of hosts with low viremias. A probabilistic model that includes avian mortality shows that the maximum number of mosquitoes is infected by feeding on hosts whose viremia peaks just below the lethal level. The relationship between host viremia and vector infection is complex, and there is little experimental information to determine the most accurate model for different arthropod-vector-host systems. Until there is more information, the ability to distinguish the relative importance of different hosts in infecting vectors will remain problematic. Relying on assumptions with little support may result in erroneous conclusions about the importance of different hosts.
Self-learning Monte Carlo with deep neural networks
NASA Astrophysics Data System (ADS)
Shen, Huitao; Liu, Junwei; Fu, Liang
2018-05-01
The self-learning Monte Carlo (SLMC) method is a general algorithm to speed up MC simulations. Its efficiency has been demonstrated in various systems by introducing an effective model to propose global moves in the configuration space. In this paper, we show that deep neural networks can be naturally incorporated into SLMC, and without any prior knowledge can learn the original model accurately and efficiently. Demonstrated on quantum impurity models, we reduce the complexity of a local update from O(β²) in the Hirsch-Fye algorithm to O(β ln β), which is a significant speedup, especially for systems at low temperatures.
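The core of SLMC is a two-level Metropolis scheme: cheap global proposals are generated with the learned effective model (the role the deep network plays here) and then corrected against the original model so the chain remains exactly unbiased. The sketch below assumes energies are expressed in units of k_B T and leaves the effective-model proposal mechanism abstract.

```python
import numpy as np

def slmc_step(state, e_original, e_effective, propose_effective, rng):
    """One self-learning Monte Carlo move: a long global update is
    proposed with the cheap learned effective model, then accepted with
    a Metropolis ratio that corrects for the mismatch with the original
    model, preserving detailed balance w.r.t. the original model.
    Energies are assumed to be in units of k_B T."""
    new_state = propose_effective(state, rng)     # e.g. many effective-model updates
    d_orig = e_original(new_state) - e_original(state)
    d_eff = e_effective(new_state) - e_effective(state)
    if np.log(rng.random()) < -(d_orig - d_eff):  # correction acceptance test
        return new_state
    return state
```

The better the learned model matches the original one, the closer `d_orig - d_eff` stays to zero and the higher the acceptance rate of these long proposals.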
Accuracy of parameterized proton range models; A comparison
NASA Astrophysics Data System (ADS)
Pettersen, H. E. S.; Chaar, M.; Meric, I.; Odland, O. H.; Sølie, J. R.; Röhrich, D.
2018-03-01
An accurate calculation of proton ranges in phantoms or detector geometries is crucial for decision making in proton therapy and proton imaging. To this end, several parameterizations of the range-energy relationship exist, with different levels of complexity and accuracy. In this study we compare the accuracy of four parameterization models for proton range in water: two analytical models derived from the Bethe equation, and two different interpolation schemes applied to range-energy tables. In conclusion, a spline interpolation scheme yields the highest reproduction accuracy, while the shape of the energy-loss curve is best reproduced with the differentiated Bragg-Kleeman equation.
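The two families compared can be illustrated side by side: the Bragg-Kleeman power law R = αE^p (the analytical family) versus a cubic-spline interpolation of a range-energy table. The α and p values below are commonly quoted fit values for protons in water, and the table entries are placeholders, not the data used in the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Bragg-Kleeman power law R = alpha * E^p.  These coefficients are
# commonly quoted approximate fit values for protons in water and are
# assumptions, not the coefficients used in the paper.
alpha, p = 2.2e-3, 1.77            # R in cm, E in MeV

def range_bragg_kleeman(E):
    return alpha * E**p

# Spline interpolation of a tabulated range-energy relation; the table
# below is an illustrative placeholder for e.g. a PSTAR-style table.
E_tab = np.array([10.0, 50.0, 100.0, 150.0, 200.0, 250.0])
R_tab = range_bragg_kleeman(E_tab)   # stand-in for tabulated range values
range_spline = CubicSpline(E_tab, R_tab)

print(range_bragg_kleeman(160.0), float(range_spline(160.0)))
```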
Pearce, Timothy C.; Karout, Salah; Rácz, Zoltán; Capurro, Alberto; Gardner, Julian W.; Cole, Marina
2012-01-01
We present a biologically-constrained neuromorphic spiking model of the insect antennal lobe macroglomerular complex that encodes concentration ratios of chemical components existing within a blend, implemented using a set of programmable logic neuronal modeling cores. Depending upon the level of inhibition and symmetry in its inhibitory connections, the model exhibits two dynamical regimes: fixed point attractor (winner-takes-all type), and limit cycle attractor (winnerless competition type) dynamics. We show that, when driven by chemosensor input in real-time, the dynamical trajectories of the model's projection neuron population activity accurately encode the concentration ratios of binary odor mixtures in both dynamical regimes. By deploying spike timing-dependent plasticity in a subset of the synapses in the model, we demonstrate that a Hebbian-like associative learning rule is able to organize weights into a stable configuration after exposure to a randomized training set comprising a variety of input ratios. Examining the resulting local interneuron weights in the model shows that each inhibitory neuron competes to represent possible ratios across the population, forming a ratiometric representation via mutual inhibition. After training the resulting dynamical trajectories of the projection neuron population activity show amplification and better separation in their response to inputs of different ratios. Finally, we demonstrate that by using limit cycle attractor dynamics, it is possible to recover and classify blend ratio information from the early transient phases of chemosensor responses in real-time more rapidly and accurately compared to a nearest-neighbor classifier applied to the normalized chemosensor data. Our results demonstrate the potential of biologically-constrained neuromorphic spiking models in achieving rapid and efficient classification of early phase chemosensor array transients with execution times well beyond biological timescales. PMID:23874265
Postprocessing of docked protein-ligand complexes using implicit solvation models.
Lindström, Anton; Edvinsson, Lotta; Johansson, Andreas; Andersson, C David; Andersson, Ida E; Raubacher, Florian; Linusson, Anna
2011-02-28
Molecular docking plays an important role in drug discovery as a tool for the structure-based design of small organic ligands for macromolecules. Possible applications of docking are identification of the bioactive conformation of a protein-ligand complex and the ranking of different ligands with respect to their strength of binding to a particular target. We have investigated the effect of implicit water on the postprocessing of binding poses generated by molecular docking using MM-PB/GB-SA (molecular mechanics Poisson-Boltzmann and generalized Born surface area) methodology. The investigation was divided into three parts: geometry optimization, pose selection, and estimation of the relative binding energies of docked protein-ligand complexes. Appropriate geometry optimization afforded more accurate binding poses for 20% of the complexes investigated. The time required for this step was greatly reduced by minimizing the energy of the binding site using GB solvation models rather than minimizing the entire complex using the PB model. By optimizing the geometries of docking poses using the GB(HCT+SA) model then calculating their free energies of binding using the PB implicit solvent model, binding poses similar to those observed in crystal structures were obtained. Rescoring of these poses according to their calculated binding energies resulted in improved correlations with experimental binding data. These correlations could be further improved by applying the postprocessing to several of the most highly ranked poses rather than focusing exclusively on the top-scored pose. The postprocessing protocol was successfully applied to the analysis of a set of Factor Xa inhibitors and a set of glycopeptide ligands for the class II major histocompatibility complex (MHC) A(q) protein. These results indicate that the protocol for the postprocessing of docked protein-ligand complexes developed in this paper may be generally useful for structure-based design in drug discovery.
Fixation Probability in a Haploid-Diploid Population
Bessho, Kazuhiro; Otto, Sarah P.
2017-01-01
Classical population genetic theory generally assumes either a fully haploid or fully diploid life cycle. However, many organisms exhibit more complex life cycles, with both free-living haploid and diploid stages. Here we ask what the probability of fixation is for selected alleles in organisms with haploid-diploid life cycles. We develop a genetic model that considers the population dynamics using both the Moran model and Wright–Fisher model. Applying a branching process approximation, we obtain an accurate fixation probability assuming that the population is large and the net effect of the mutation is beneficial. We also find the diffusion approximation for the fixation probability, which is accurate even in small populations and for deleterious alleles, as long as selection is weak. These fixation probabilities from branching process and diffusion approximations are similar when selection is weak for beneficial mutations that are not fully recessive. In many cases, particularly when one phase predominates, the fixation probability differs substantially for haploid-diploid organisms compared to either fully haploid or diploid species. PMID:27866168
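The two approximations discussed can be written down compactly for the standard haploid Wright-Fisher case; the paper's haploid-diploid extension, which reweights selection and effective size by the time spent in each phase, is omitted here.

```python
import numpy as np

def fixation_prob_diffusion(s, N, p0=None):
    """Kimura-style diffusion approximation for the fixation probability
    of an allele with selection coefficient s in a haploid Wright-Fisher
    population of effective size N, starting from frequency p0 (default:
    a single copy).  Accurate even for small N and weak deleterious s."""
    p0 = 1.0 / N if p0 is None else p0
    if np.isclose(s, 0.0):
        return p0                      # neutral limit: fixation prob = p0
    return (1.0 - np.exp(-2 * N * s * p0)) / (1.0 - np.exp(-2 * N * s))

def fixation_prob_branching(s):
    """Branching-process approximation for a beneficial mutation in a
    large population: p ~ 1 - exp(-2s) ~ 2s for small s > 0."""
    return 1.0 - np.exp(-2.0 * s) if s > 0 else 0.0

print(fixation_prob_diffusion(0.01, 1000), fixation_prob_branching(0.01))
```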
Terahertz beam propagation measured through three-dimensional amplitude profile determination
NASA Astrophysics Data System (ADS)
Reiten, Matthew T.; Harmon, Stacee A.; Cheville, Richard Alan
2003-10-01
To determine the spatio-temporal field distribution of freely propagating terahertz bandwidth pulses, we measure the time-resolved electric field in two spatial dimensions with high resolution. The measured, phase-coherent electric-field distributions are compared with an analytic model in which the radiation from a dipole antenna near a dielectric interface is coupled to free space through a spherical lens. The field external to the lens is limited by reflection at the lens-air dielectric interface, which is minimized at Brewster's angle, leading to an annular field pattern. Field measurements compare favorably with theory. Propagation of terahertz beams is determined both by assuming a TEM0,0 Gaussian profile and by expanding the beam into a superposition of Laguerre-Gauss modes. The Laguerre-Gauss model more accurately describes the beam profile for free-space propagation and after propagation through a simple optical system. The accuracy of both models for predicting far-field beam patterns depends upon accurately measuring the complex field amplitudes of the terahertz beams.
Prediction of far-field wind turbine noise propagation with parabolic equation.
Lee, Seongkyu; Lee, Dongjai; Honhoff, Saskia
2016-08-01
Sound propagation from wind farms is typically simulated with engineering tools that neglect some atmospheric conditions and terrain effects. Wind and temperature profiles, however, can affect the propagation of sound and thus the perceived sound in the far field. A better understanding and application of those effects would allow more optimized farm operation towards meeting noise regulations and optimizing energy yield. This paper presents the development of a parabolic equation (PE) model for accurate wind turbine noise propagation. The model is validated against analytic solutions for a uniform sound speed profile, benchmark problems for nonuniform sound speed profiles, and field sound test data for real environmental acoustics. It is shown that PE provides good agreement with the measured data, except for upwind propagation cases in which turbulence scattering is important. Finally, the PE model uses computational fluid dynamics results as input to accurately predict sound propagation for complex flows such as wake flows. It is demonstrated that wake flows significantly modify the sound propagation characteristics.
NASA Astrophysics Data System (ADS)
Robinson, Mitchell; Butcher, Ryan; Coté, Gerard L.
2017-02-01
Monte Carlo modeling of photon propagation has been used to examine particular areas of the body and to further enhance the understanding of light propagation through tissue. This work seeks to improve upon established simulation methods through more accurate representations of the simulated tissues in the wrist as well as of the characteristics of the light source. The Monte Carlo simulation program was developed in MATLAB. The different tissue domains, such as muscle, vasculature, and bone, were generated in SolidWorks, where each domain was saved as a separate .stl file and read into the program. The light source was modified to account for both the viewing angle of the simulated LED and the nominal diameter of the source. It is believed that the use of these more accurate models generates results that more closely match those seen in vivo and can be used to better guide the design of optical wrist-worn measurement devices.
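A stripped-down version of the core photon random walk is sketched below: exponential free paths, absorption by weight reduction, and Henyey-Greenstein scattering. The boundary handling for the .stl tissue domains and the LED viewing-angle source model described above are omitted, and the scattering direction is drawn in the global frame rather than rotated about the current direction, so this illustrates the sampling steps rather than faithfully reimplementing the program.

```python
import numpy as np

def photon_walk_depths(mu_a, mu_s, g, n_photons=10_000, max_steps=1000):
    """Minimal weighted Monte Carlo random walk in an infinite homogeneous
    medium: exponential free paths (total attenuation mu_t = mu_a + mu_s),
    absorption via weight reduction, and Henyey-Greenstein scattering with
    anisotropy g.  Returns the terminal depth of each photon packet."""
    rng = np.random.default_rng(3)
    mu_t = mu_a + mu_s
    depths = []
    for _ in range(n_photons):
        pos, direc, w = np.zeros(3), np.array([0.0, 0.0, 1.0]), 1.0
        for _ in range(max_steps):
            pos = pos + direc * rng.exponential(1.0 / mu_t)  # free path
            w *= mu_s / mu_t                 # survival after partial absorption
            if w < 1e-4:                     # terminate negligible packets
                break
            if g != 0.0:                     # Henyey-Greenstein polar angle
                tmp = (1 - g * g) / (1 - g + 2 * g * rng.random())
                cos_t = (1 + g * g - tmp * tmp) / (2 * g)
            else:
                cos_t = 2 * rng.random() - 1
            phi = 2 * np.pi * rng.random()
            sin_t = np.sqrt(max(0.0, 1 - cos_t * cos_t))
            # simplified: new direction sampled in the global frame
            direc = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        depths.append(pos[2])
    return np.array(depths)
```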
Equivalent circuit-based analysis of CMUT cell dynamics in arrays.
Oguz, H K; Atalar, Abdullah; Köymen, Hayrettin
2013-05-01
Capacitive micromachined ultrasonic transducers (CMUTs) are usually composed of large arrays of closely packed cells. In this work, we use an equivalent circuit model to analyze CMUT arrays with multiple cells. We study the effects of mutual acoustic interactions through the immersion medium caused by the pressure field generated by each cell acting upon the others. To do this, all the cells in the array are coupled through a radiation impedance matrix at their acoustic terminals. An accurate approximation for the mutual radiation impedance is defined between two circular cells, which can be used in large arrays to reduce computational complexity. Hence, a performance analysis of CMUT arrays can be accurately done with a circuit simulator. By using the proposed model, one can very rapidly obtain the linear frequency and nonlinear transient responses of arrays with an arbitrary number of CMUT cells. We performed several finite element method (FEM) simulations for arrays with small numbers of cells and showed that the results are very similar to those obtained by the equivalent circuit model.
NASA Astrophysics Data System (ADS)
Welch, Dale; Font, Gabriel; Mitchell, Robert; Rose, David
2017-10-01
We report on particle-in-cell developments for the study of the Compact Fusion Reactor. Millisecond, two- and three-dimensional simulations (cubic-meter volume) of confinement and neutral beam heating of the magnetic confinement device require accurate representation of the complex orbits, near-perfect energy conservation, and significant computational power. In order to determine the initial plasma fill and neutral beam heating, these simulations include ionization, elastic, and charge-exchange hydrogen reactions. To this end, we are pursuing fast electromagnetic kinetic modeling algorithms, including two implicit techniques and a hybrid quasi-neutral algorithm with kinetic ions. The kinetic modeling includes use of the Poisson-corrected direct implicit, magnetic implicit, as well as second-order cloud-in-cell techniques. The hybrid algorithm, which ignores electron inertial effects, is two orders of magnitude faster than the kinetic algorithms but not as accurate with respect to confinement. The advantages and disadvantages of these techniques will be presented. Funded by Lockheed Martin.
Structural damage detection using deep learning of ultrasonic guided waves
NASA Astrophysics Data System (ADS)
Melville, Joseph; Alguri, K. Supreet; Deemer, Chris; Harley, Joel B.
2018-04-01
Structural health monitoring using ultrasonic guided waves relies on accurate interpretation of guided wave propagation to distinguish damage state indicators. However, traditional physics-based models do not provide an accurate representation, and classic data-driven techniques, such as a support vector machine, are too simplistic to capture the complex nature of ultrasonic guided waves. To address this challenge, this paper uses a deep learning interpretation of ultrasonic guided waves to achieve fast, accurate, and automated structural damage detection. To achieve this, full wavefield scans of thin metal plates are used, half from the undamaged state and half from the damaged state. These data are used to train our deep network to predict the damage state of a plate with 99.98% accuracy given signals from just 10 spatial locations on the plate, compared with the 62% accuracy achieved by a support vector machine (SVM).
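The abstract does not describe the network architecture, so the sketch below is a generic assumption of what a small 1-D convolutional classifier over signals from 10 spatial locations could look like, written with PyTorch; the depth, kernel sizes, and 1024-sample record length are all placeholders.

```python
import torch
import torch.nn as nn

class GuidedWaveNet(nn.Module):
    """Small 1-D CNN mapping a batch of guided-wave time series from a
    few sensing locations to a damaged/undamaged label.  The 10-channel
    input mirrors the 10 spatial locations mentioned above; all
    architectural choices here are assumptions."""
    def __init__(self, n_channels=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=15, stride=2, padding=7),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),      # global pooling over time
        )
        self.classifier = nn.Linear(64, 2)  # undamaged vs damaged

    def forward(self, x):                   # x: (batch, 10, n_samples)
        return self.classifier(self.features(x).squeeze(-1))

model = GuidedWaveNet()
logits = model(torch.randn(4, 10, 1024))    # 4 example wavefield records
```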
Development of Tripropellant CFD Design Code
NASA Technical Reports Server (NTRS)
Farmer, Richard C.; Cheng, Gary C.; Anderson, Peter G.
1998-01-01
A tripropellant, such as GO2/H2/RP-1, CFD design code has been developed to predict the local mixing of multiple propellant streams as they are injected into a rocket motor. The code utilizes real fluid properties to account for the mixing and finite-rate combustion processes which occur near an injector faceplate, thus the analysis serves as a multi-phase homogeneous spray combustion model. Proper accounting of the combustion allows accurate gas-side temperature predictions which are essential for accurate wall heating analyses. The complex secondary flows which are predicted to occur near a faceplate cannot be quantitatively predicted by less accurate methodology. Test cases have been simulated to describe an axisymmetric tripropellant coaxial injector and a 3-dimensional RP-1/LO2 impinger injector system. The analysis has been shown to realistically describe such injector combustion flowfields. The code is also valuable to design meaningful future experiments by determining the critical location and type of measurements needed.
NASA Astrophysics Data System (ADS)
Heberling, Brian
Computational fluid dynamics (CFD) simulations can offer a detailed view of the complex flow fields within an axial compressor and greatly aid the design process. However, the desire for quick turnaround times raises the question of how exact the model must be. At design conditions, steady CFD simulating an isolated blade row can accurately predict the performance of a rotor. However, as a compressor is throttled and the mass flow rate decreased, axial flow becomes weaker, making the capturing of unsteadiness, wakes, and other flow features more important. The unsteadiness of the tip clearance flow and the upstream blade wake can have a significant impact on a rotor. At off-design conditions, time-accurate simulations or modeling of multiple blade rows can become necessary to obtain accurate performance predictions. Unsteady and multi-bladerow simulations are computationally expensive, especially when used in conjunction. It is important to understand which features must be modeled in order to accurately capture a compressor's performance. CFD simulations of a transonic axial compressor throttling from the design point to stall are presented. The importance of capturing the unsteadiness of the rotor tip clearance flow versus capturing upstream blade-row interactions is examined through steady and unsteady, single- and multi-bladerow computations. It is shown that there are significant differences at near-stall conditions between the different types of simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun Wei; Huang, Guo H., E-mail: huang@iseis.org; Institute for Energy, Environment and Sustainable Communities, University of Regina, Regina, Saskatchewan, S4S 0A2
2012-06-15
Highlights: • Inexact piecewise-linearization-based fuzzy flexible programming is proposed. • It is the first application to waste management under multiple complexities. • It tackles nonlinear economies-of-scale effects in interval-parameter constraints. • It estimates costs more accurately than the linear-regression-based model. • Uncertainties are decreased and more satisfactory interval solutions are obtained. - Abstract: To tackle nonlinear economies-of-scale (EOS) effects in interval-parameter constraints for a representative waste management problem, an inexact piecewise-linearization-based fuzzy flexible programming (IPFP) model is developed. In IPFP, interval parameters for waste amounts and transportation/operation costs can be quantified; aspiration levels for net system costs, as well as tolerance intervals for both the capacities of waste treatment facilities and waste generation rates, can be reflected; and the nonlinear EOS effects transferred from the objective function to the constraints can be approximated. An interactive algorithm is proposed for solving the IPFP model, which in nature is an interval-parameter mixed-integer quadratically constrained programming model. To demonstrate the advantages of IPFP, two alternative models are developed for performance comparison. One is a conventional linear-regression-based inexact fuzzy programming model (IPFP2), and the other is an IPFP model with all right-hand sides of the fuzzy constraints set to the corresponding interval numbers (IPFP3). The comparison between IPFP and IPFP2 indicates that the optimized waste amounts would have similar patterns in both models. However, when dealing with EOS effects in constraints, IPFP2 may underestimate the net system costs while IPFP can estimate the costs more accurately. The comparison between IPFP and IPFP3 indicates that their solutions would be significantly different. The decreased system uncertainties in IPFP's solutions demonstrate its effectiveness in providing more satisfactory interval solutions than IPFP3. Following this first application to waste management, IPFP can potentially be applied to other environmental problems under multiple complexities.
Direct Numerical Simulation of Complex Turbulence
NASA Astrophysics Data System (ADS)
Hsieh, Alan
Direct numerical simulations (DNS) of spanwise-rotating turbulent channel flow were conducted. The database obtained from these simulations was used to investigate the turbulence generation cycle for simple and complex turbulence. For turbulent channel flow, three theoretical models concerning the formation and evolution of sublayer streaks, three-dimensional hairpin vortices and propagating plane waves were validated using visualizations from the present DNS data. The proper orthogonal decomposition (POD) method was used to verify the existence of the propagating plane waves; a new extension of the POD method was derived to demonstrate these plane waves in a spatial channel model. The analysis of coherent structures was extended to complex turbulence and used to determine the proper computational box size for a minimal flow unit (MFU) at Ro_b < 0.5. Proper realization of Taylor-Görtler vortices in the highly turbulent pressure region was shown to be necessary for acceptably accurate MFU turbulence statistics, which required a minimum spanwise domain length Lz = π. A dependence of MFU accuracy on Reynolds number was also discovered: MFU models required a larger domain to accurately approximate higher-Reynolds-number flows. In addition, the DNS results were used to evaluate several turbulence closure models for momentum and thermal transport in rotating turbulent channel flow. Four nonlinear eddy viscosity turbulence models were tested; among these, Explicit Algebraic Reynolds Stress Models (EARSM) produced the Reynolds stress distributions in best agreement with DNS data for rotational flows. The modeled pressure-strain functions of EARSM were shown to have a strong influence on the Reynolds stress distributions near the wall. Turbulent heat flux distributions obtained from two explicit algebraic heat flux models consistently displayed increasing disagreement with DNS data as the rotation rate increased. Results were also obtained on flow control of fully developed, spatially evolving turbulent channel flow using phononic subsurface structures. Fluid-structure interaction (FSI) simulations were conducted by attaching phononic structures to the bottom wall of a turbulent channel flow field, and a reduction of turbulent kinetic energy was observed for different phononic designs.
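For readers unfamiliar with the snapshot form of POD, it reduces to a singular value decomposition of the mean-subtracted data matrix; the synthetic traveling wave below (standing in for DNS snapshots) shows how a propagating structure emerges as a pair of roughly equal-energy modes. This is a generic sketch, not the paper's spatial-channel extension:

```python
import numpy as np

def pod_modes(snapshots, n_modes):
    """Snapshot POD via the SVD.

    snapshots: (n_points, n_snapshots) array; columns are time samples.
    Returns the leading spatial modes, their energy fractions, and the
    temporal coefficients.
    """
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    energy = s**2 / np.sum(s**2)        # fractional energy per mode
    return U[:, :n_modes], energy[:n_modes], Vt[:n_modes]

# Synthetic example: a traveling wave appears as a pair of POD modes.
x = np.linspace(0, 2 * np.pi, 256)
t = np.linspace(0, 10, 200)
field = np.sin(x[:, None] - 2.0 * t[None, :])

modes, energy, coeffs = pod_modes(field, 4)
print("energy fractions:", np.round(energy, 3))  # ~[0.5, 0.5, 0, 0]
```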
Application of Probabilistic Analysis to Aircraft Impact Dynamics
NASA Technical Reports Server (NTRS)
Lyle, Karen H.; Padula, Sharon L.; Stockwell, Alan E.
2003-01-01
Full-scale aircraft crash simulations performed with nonlinear, transient dynamic, finite element codes can incorporate structural complexities such as geometrically accurate models, human occupant models, and advanced material models that include nonlinear stress-strain behaviors, laminated composites, and material failure. Validation of these crash simulations is difficult due to a lack of sufficient information to adequately determine the uncertainty in the experimental data and the appropriateness of modeling assumptions. This paper evaluates probabilistic approaches to quantify the uncertainty in the simulated responses. Several criteria are used to determine that a response surface method is the most appropriate probabilistic approach. The work is extended to compare optimization results with and without probabilistic constraints.
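As a rough illustration of the response surface idea used here, the sketch below fits a quadratic surrogate to a small batch of expensive simulation runs and then propagates input uncertainty through the cheap surrogate by Monte Carlo. The two-input "simulator" and its coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an expensive crash simulation: a response (e.g. peak
# acceleration) as a function of two uncertain inputs. Purely illustrative.
def simulate(x1, x2):
    return 50 + 8 * x1 - 5 * x2 + 3 * x1 * x2 + 2 * x1**2

# 1) Run the expensive code at a small design of experiments.
X = rng.uniform(-1, 1, size=(30, 2))
y = simulate(X[:, 0], X[:, 1])

# 2) Fit a quadratic response surface by least squares.
def basis(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# 3) Cheap Monte Carlo on the surrogate to estimate an exceedance probability.
Xmc = rng.normal(0.0, 0.4, size=(100_000, 2))
ymc = basis(Xmc) @ coef
print("P(response > 55) ~", np.mean(ymc > 55))
```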
NASA Astrophysics Data System (ADS)
Edrisi, Siroos; Bidhendi, Norollah Kasiri; Haghighi, Maryam
2017-01-01
The effective thermal conductivity of porous media was modeled using a self-consistent method. The model accurately estimates the heat transfer between the insulator surface and air cavities. In this method, the pore size and shape, the temperature gradient and other thermodynamic properties of the fluid were taken into consideration. The results are validated by experimental data for fire bricks used in cracking furnaces at the olefin plant of the Maroon petrochemical complex, as well as data published for polyurethane foam (synthetic polymers) IPTM and IPM. The model predictions show good agreement with experimental data, with thermal conductivity deviations of less than 1%.
Design and modelling of a 3D compliant leg for Bioloid
NASA Astrophysics Data System (ADS)
Couto, Mafalda; Santos, Cristina; Machado, José
2012-09-01
In the growing field of rehabilitation robotics, the modelling of a real robot is a complex and fascinating challenge. At the crossing point of mechanics, physics and computer science, the development of a complete 3D model requires knowledge of the different physical properties for an accurate simulation. In this paper, we propose the design of an efficient three-dimensional model of the quadruped Bioloid robot fitted with segmented pantographic legs, which actively retract during locomotion and minimize large shock forces, so that the robot can safely and dynamically interact with the user or the environment.
Parameter Estimation for Viscoplastic Material Modeling
NASA Technical Reports Server (NTRS)
Saleeb, Atef F.; Gendy, Atef S.; Wilt, Thomas E.
1997-01-01
A key ingredient in the design of engineering components and structures under general thermomechanical loading is the use of mathematical constitutive models (e.g. in finite element analysis) capable of accurately representing short- and long-term stress/deformation responses. Recent viscoplastic models of this type are not only increasingly complex, but often require a large number of material constants to describe a host of (anticipated) physical phenomena and complicated deformation mechanisms. In turn, the experimental characterization of these material parameters constitutes the major factor in the successful and effective utilization of any given constitutive model; i.e., the problem of constitutive parameter estimation from experimental measurements.
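In practice, parameter estimation of this kind is usually posed as a nonlinear least-squares problem between model predictions and test data. A minimal sketch, assuming a one-dimensional power-law creep response with two constants rather than the many-constant viscoplastic models discussed in the paper:

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative 1D creep law with two material constants (A, n); real
# viscoplastic models involve many more parameters and coupled equations.
def creep_strain(t, A, n, stress=100.0):
    return A * stress**n * t

def residuals(p, t, eps_meas):
    A, n = p
    return creep_strain(t, A, n) - eps_meas

# Synthetic "measured" data with 5% multiplicative noise.
rng = np.random.default_rng(1)
t = np.linspace(0.1, 10, 40)
eps_meas = creep_strain(t, 1e-7, 3.0) * (1 + 0.05 * rng.normal(size=t.size))

fit = least_squares(residuals, x0=(1e-6, 2.0), args=(t, eps_meas),
                    bounds=([0.0, 1.0], [1e-3, 6.0]))
print("estimated (A, n):", fit.x)
```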
NASA Technical Reports Server (NTRS)
Taylor, B. K.; Casasent, D. P.
1989-01-01
The use of simplified error models to accurately simulate and evaluate the performance of an optical linear-algebra processor is described. The optical architecture used to perform banded matrix-vector products is reviewed, along with a linear dynamic finite-element case study. The laboratory hardware and ac-modulation technique used are presented. The individual processor error-source models and their simulator implementation are detailed. Several significant simplifications are introduced to ease the computational requirements and complexity of the simulations. The error models are verified with a laboratory implementation of the processor, and are used to evaluate its potential performance.
NASA Astrophysics Data System (ADS)
Davis, R.
2013-12-01
The purpose of this study is to test the conjecture that environmentally sustainable decisions and behaviors are related to individuals' conceptions of the natural world, in this case climate change; individuals' attitudes towards climate change; and the situations in which these decisions are made. The nature of mental models is an ongoing subject of disagreement. Some argue that mental models are coherent theories, much like scientific theories, that individuals employ systematically when reasoning about the world (Gopnik & Meltzoff, 1998). Others maintain that mental models are cobbled together from fragmented collections of ideas that are only loosely connected and context dependent (Disessa, 1988; Minstrell, 2000). It is likely that individuals sometimes reason about complex phenomena using systematic mental models and at other times reason using knowledge that is organized in fragmented pieces (Steedle & Shavelson, 2009). Thus, in measuring mental models of complex environmental systems, such as climate change, the assumption of systematicity may not be justified. Individuals may apply certain chains of reasoning in some contexts but not in others. The current study hypothesizes that an accurate mental model of climate change enables an individual to make effective evaluative judgments of environmental behavior options. The more an individual's mental model resembles that of an expert, the more consistent, accurate and automatic these judgments become. However, an accurate mental model is not sufficient to change environmental behavior. Real decisions and behaviors are products of a person-situation interaction: an interplay between psychosocial factors (such as knowledge and attitudes) and the situation in which the decision is made. This study investigates the relationship between both psychosocial and situational factors for climate change decisions. Data were collected from 436 adult participants through an online survey. The survey comprised demographic questions; three discrete instruments measuring (1) mental models of climate change, (2) attitudes and beliefs about climate change, and (3) self-reported behaviors; and an experimental intervention, followed by a behavioral intention question. Latent class analysis (LCA) and item response theory (IRT) will be employed to analyze multiple-choice responses to the mental model survey to create groupings of individuals assumed to hold similar mental models of climate change. A principal component analysis (PCA) using oblique rotation was employed to identify five scales (Cronbach's alpha > 0.80) within the attitude/belief instrument. Total and sub-scale scores were also calculated for self-reported behaviors. The relationships between mental models, attitudes and behaviors will be analyzed using multiple regression models. This work presents not only the development and validation of three novel instruments for accurately and efficiently measuring mental models, attitudes, and self-reported behaviors, but also provides insight into the types of mental models individuals hold. Understanding how climate change is conceptualized and how such knowledge influences attitudes and behaviors gives educators tools for guiding students towards more expert understandings while also enabling environmentalists to craft more effective messages.
van der Kruk, E; Veeger, H E J; van der Helm, F C T; Schwab, A L
2017-11-07
Advice about the optimal coordination pattern for an individual speed skater could be informed by simulation and optimization of a biomechanical speed skating model. But before getting to this optimization approach, one needs a model that can reasonably match observed behaviour. The objective of this study is therefore to present a verified three-dimensional inverse skater model of minimal complexity, which models the speed skating motion on the straights. The model simulates the transverse translation of the skater's upper body together with the forces exerted by the skates on the ice. The input of the model is the changing distance between the upper body and the skate, referred to as the leg extension (Euclidean distance in 3D space). Verification shows that the model mimics the observed forces and motions well. The model is most accurate for the position and velocity estimation (1.2% and 2.9% maximum residuals, respectively) and least accurate for the force estimations (underestimation of 4.5-10%). The model can be used to further investigate variables in the skating motion. For this, the input of the model, the leg extension, can be optimized to obtain a maximal forward velocity of the upper body. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Computational modelling of oxygenation processes in enzymes and biomimetic model complexes.
de Visser, Sam P; Quesne, Matthew G; Martin, Bodo; Comba, Peter; Ryde, Ulf
2014-01-11
With computational resources becoming more efficient, more powerful and at the same time cheaper, computational methods have become increasingly popular for studies on biochemical and biomimetic systems. Although large efforts from the scientific community have gone into exploring the possibilities of computational methods for studies on large biochemical systems, such studies are not without pitfalls and often cannot be done routinely, but require expert execution. In this review we summarize and highlight advances in computational methodology and its application to enzymatic and biomimetic model complexes. In particular, we emphasize topical and state-of-the-art methodologies that are able either to accurately reproduce experimental findings, e.g., spectroscopic parameters and rate constants, or to make predictions of short-lived intermediates and fast reaction processes in nature. Moreover, we give examples of processes where certain computational methods dramatically fail.
Modeling the magnetoelectric effect in laminated composites using Hamilton’s principle
NASA Astrophysics Data System (ADS)
Zhang, Shengyao; Zhang, Ru; Jiang, Jiqing
2018-01-01
Mathematical modeling of the magnetoelectric (ME) effect has been established for some rectangular and disk laminate structures. However, these methods are difficult to apply in other cases, particularly for complex structures. In this work, a new method for the analysis of the ME effect is proposed using a generalized Hamilton’s principle, which is conveniently applicable to various laminate structures. As an example, the performance of a rectangular ME laminated composite is analyzed, and the equivalent circuit model for the laminate is obtained directly from the analysis. Experimental data are also obtained to compare with the theoretical calculations and to validate the new method. Compared with Dong’s method, the new method is more accurate and convenient. In particular, the equivalent circuit for the rectangular laminated composite can be obtained more easily by the proposed method, as it does not require the complex treatment used in Dong’s method.
Improved Multi-Axial, Temperature and Time Dependent (MATT) Failure Model
NASA Technical Reports Server (NTRS)
Richardson, D. E.; Anderson, G. L.; Macon, D. J.
2002-01-01
An extensive effort has recently been completed by the Space Shuttle's Reusable Solid Rocket Motor (RSRM) nozzle program to completely characterize the effects of multi-axial loading, temperature and time on the failure characteristics of three filled epoxy adhesives (TIGA 321, EA913NA, EA946). As part of this effort, a single general failure criterion was developed that accounts for these effects simultaneously. This model was named the Multi-Axial, Temperature, and Time Dependent (MATT) failure criterion. Due to the intricate nature of the failure criterion, some parameters had to be calculated using complex equations or numerical methods. This paper documents some simple but accurate modifications to the failure criterion that allow failure conditions to be calculated without complex equations or numerical techniques.
Conformational Transitions and Convergence of Absolute Binding Free Energy Calculations
Lapelosa, Mauro; Gallicchio, Emilio; Levy, Ronald M.
2011-01-01
The Binding Energy Distribution Analysis Method (BEDAM) is employed to compute the standard binding free energies of a series of ligands to the FK506 binding protein (FKBP12) with implicit solvation. Binding free energy estimates are in reasonably good agreement with experimental affinities. The conformations of the complexes identified by the simulations are in good agreement with crystallographic data, which was not used to restrain ligand orientations. The BEDAM method is based on λ-hopping Hamiltonian parallel Replica Exchange (HREM) molecular dynamics conformational sampling, the OPLS-AA/AGBNP2 effective potential, and multi-state free energy estimators (MBAR). Achieving converged and accurate results depends on all of these elements of the calculation. Convergence of the binding free energy is tied to the level of convergence of binding energy distributions at critical intermediate states where bound and unbound states are at equilibrium, and where the rate of binding/unbinding conformational transitions is maximal. This finding mirrors similar observations in the context of order/disorder transitions, as for example in protein folding. Insights concerning the physical mechanism of ligand binding and unbinding are obtained. Convergence for the largest FK506 ligand is achieved only after imposing strict conformational restraints, which however require accurate prior knowledge of the structure of the complex. The analytical AGBNP2 model is found to underestimate the magnitude of the hydrophobic driving force towards binding in these systems characterized by loosely packed protein-ligand binding interfaces. Rescoring of the binding energies using a numerical surface area model corrects this deficiency. This study illustrates the complex interplay between energy models, exploration of conformational space, and free energy estimators needed to obtain robust estimates from binding free energy calculations. PMID:22368530
NASA Astrophysics Data System (ADS)
Hungerford, Aimee; Fontes, Christopher J.
2018-06-01
Gravitational wave observations benefit from accompanying electromagnetic signals in order to accurately determine the sky positions of the sources. The ejecta of neutron star mergers are expected to produce such electromagnetic transients, called macronovae (e.g. the recent and unprecedented observation of GW170817). Characteristics of the ejecta include large velocity gradients and the presence of heavy r-process elements, which pose significant challenges to the accurate calculation of radiative opacities and radiation transport. Opacities include a dense forest of bound-bound features arising from near-neutral lanthanide and actinide elements. Here we present an overview of current theoretical opacity determinations that are used by neutron star merger light curve modelers. We will touch on atomic physics and plasma modeling codes that are used to generate these opacities, as well as the limited body of laboratory experiments that may serve as points of validation for these complex atomic physics calculations.
Inferring mass in complex scenes by mental simulation.
Hamrick, Jessica B; Battaglia, Peter W; Griffiths, Thomas L; Tenenbaum, Joshua B
2016-12-01
After observing a collision between two boxes, you can immediately tell which is empty and which is full of books based on how the boxes moved. People form rich perceptions about the physical properties of objects from their interactions, an ability that plays a crucial role in learning about the physical world through our experiences. Here, we present three experiments that demonstrate people's capacity to reason about the relative masses of objects in naturalistic 3D scenes. We find that people make accurate inferences, and that they continue to fine-tune their beliefs over time. To explain our results, we propose a cognitive model that combines Bayesian inference with approximate knowledge of Newtonian physics by estimating probabilities from noisy physical simulations. We find that this model accurately predicts judgments from our experiments, suggesting that the same simulation mechanism underlies both people's predictions and inferences about the physical world around them. Copyright © 2016 Elsevier B.V. All rights reserved.
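The estimating-probabilities-from-noisy-simulations idea can be sketched for a one-dimensional elastic collision, where the observed outgoing speed of a struck object constrains the mass ratio of the two bodies. The collision law, noise level and flat prior below are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Idealized 1D elastic collision: object A (moving) strikes object B (at
# rest); B's outgoing speed depends only on the mass ratio r = mA / mB.
def simulate_vB(r, vA=1.0, noise=0.05, n=200):
    vB = 2 * r / (1 + r) * vA               # exact elastic-collision result
    return vB + noise * rng.normal(size=n)  # noisy "mental simulations"

observed_vB = 1.2                           # one observed collision outcome
ratios = np.linspace(0.1, 10, 300)          # candidate mass ratios

log_like = np.empty_like(ratios)
for i, r in enumerate(ratios):
    sims = simulate_vB(r)                   # sample of simulated outcomes
    mu, sd = sims.mean(), sims.std()
    log_like[i] = -0.5 * ((observed_vB - mu) / sd) ** 2 - np.log(sd)

post = np.exp(log_like - log_like.max())    # flat prior over the ratios
post /= np.trapz(post, ratios)
print("posterior mean mass ratio:", np.trapz(ratios * post, ratios))
```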
NASA Astrophysics Data System (ADS)
Duru, K.; Dunham, E. M.; Bydlon, S. A.; Radhakrishnan, H.
2014-12-01
Dynamic propagation of shear ruptures on a frictional interface is a useful idealization of a natural earthquake. The conditions relating slip rate and fault shear strength are often expressed as nonlinear friction laws. The corresponding initial boundary value problems are both numerically and computationally challenging. In addition, seismic waves generated by earthquake ruptures must be propagated, far away from fault zones, to seismic stations and remote areas. Therefore, reliable and efficient numerical simulations require both provably stable and high-order accurate numerical methods. We present a numerical method for: (a) enforcing nonlinear friction laws, in a consistent and provably stable manner, suitable for efficient explicit time integration; (b) dynamic propagation of earthquake ruptures along rough faults; (c) accurate propagation of seismic waves in heterogeneous media with free surface topography. We solve the first-order form of the 3D elastic wave equation on a boundary-conforming curvilinear mesh, in terms of particle velocities and stresses that are collocated in space and time, using summation-by-parts finite differences in space. The finite difference stencils are 6th-order accurate in the interior and 3rd-order accurate close to the boundaries. Boundary and interface conditions are imposed weakly using penalties. By deriving semi-discrete energy estimates analogous to the continuous energy estimates, we prove numerical stability. Time stepping is performed with a 4th-order accurate explicit low-storage Runge-Kutta scheme. We have performed extensive numerical experiments using a slip-weakening friction law on non-planar faults, including recent SCEC benchmark problems. We also show simulations on fractal faults revealing the complexity of rupture dynamics on rough faults. We are presently extending our method to rate-and-state friction laws and off-fault plasticity.
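The time integrator can be illustrated with the classic 4th-order Runge-Kutta step (the authors use a low-storage variant) applied to a toy semi-discrete wave problem; the periodic central-difference right-hand side below merely stands in for the curvilinear SBP elastic-wave operator:

```python
import numpy as np

def rk4_step(f, u, t, dt):
    """One classic 4th-order Runge-Kutta step for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + 0.5 * dt, u + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, u + 0.5 * dt * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Toy semi-discrete problem: 1D advection with a periodic central
# difference in space, standing in for the elastic-wave right-hand side.
n, c = 200, 1.0
x = np.linspace(0, 1, n, endpoint=False)
dx = x[1] - x[0]
rhs = lambda t, u: -c * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

u = np.exp(-200 * (x - 0.5) ** 2)   # Gaussian pulse initial condition
t, dt = 0.0, 0.4 * dx / c           # CFL-limited step
for _ in range(500):
    u = rk4_step(rhs, u, t, dt)
    t += dt
```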
A hierarchy of models for simulating experimental results from a 3D heterogeneous porous medium
NASA Astrophysics Data System (ADS)
Vogler, Daniel; Ostvar, Sassan; Paustian, Rebecca; Wood, Brian D.
2018-04-01
In this work we examine the dispersion of conservative tracers (bromide and fluorescein) in an experimentally-constructed three-dimensional dual-porosity porous medium. The medium is highly heterogeneous (σ_Y² = 5.7), and consists of spherical, low-hydraulic-conductivity inclusions embedded in a high-hydraulic-conductivity matrix. The bimodal medium was saturated with tracers, and then flushed with tracer-free fluid while the effluent breakthrough curves were measured. The focus of this work is to examine a hierarchy of four models (in the absence of adjustable parameters) with decreasing complexity to assess their ability to accurately represent the measured breakthrough curves. The most information-rich model was (1) a direct numerical simulation of the system in which the geometry, boundary and initial conditions, and medium properties were fully and independently characterized experimentally with high fidelity. The reduced-information models included: (2) a simplified numerical model identical to the fully-resolved direct numerical simulation (DNS) model, but using a domain that was one-tenth the size; (3) an upscaled mobile-immobile model that allowed for a time-dependent mass-transfer coefficient; and (4) an upscaled mobile-immobile model that assumed a space-time constant mass-transfer coefficient. The results illustrated that all four models provided accurate representations of the experimental breakthrough curves as measured by global RMS error. The primary component of error induced in the upscaled models appeared to arise from the neglect of convection within the inclusions. We discuss the necessity to assign value (via a utility function or other similar method) to outcomes if one is to further select from among model options. Interestingly, these results suggested that the conventional convection-dispersion equation, when applied in a way that resolves the heterogeneities, yields models with high fidelity without requiring the imposition of a more complex non-Fickian model.
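A zero-dimensional caricature of the constant-coefficient mobile-immobile model (4) helps to show where the long breakthrough tails come from: a flushed mobile zone exchanges solute with an immobile zone at a single first-order rate. All rate constants below are illustrative, not fitted to the experiment:

```python
from scipy.integrate import solve_ivp

# Sketch: a well-mixed mobile zone is flushed while solute transfers out
# of the immobile inclusions at one first-order rate. Illustrative values.
k_flush = 1.0   # flushing rate of the mobile zone (1/h)
alpha = 0.05    # first-order mass-transfer coefficient (1/h)
beta = 1.5      # immobile-to-mobile capacity ratio

def rhs(t, y):
    cm, cim = y
    exchange = alpha * (cim - cm)
    return [-k_flush * cm + beta * exchange,  # mobile zone
            -exchange]                        # immobile zone

sol = solve_ivp(rhs, (0, 200), [1.0, 1.0], max_step=0.5)
# sol.y[0] is the modeled breakthrough curve: a fast initial washout
# followed by a long tail released slowly from the immobile zone.
```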
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fernandez-Serra, Maria Victoria
2016-09-12
The research objective of this proposal is the computational modeling of the metal-electrolyte interface purely from first principles. The accurate calculation of the electrostatic potential at electrically biased metal-electrolyte interfaces is a current challenge for periodic “ab-initio” simulations. It is also an essential requisite for predicting the correspondence between the macroscopic voltage and the microscopic interfacial charge distribution in electrochemical fuel cells. This interfacial charge distribution is the result of the chemical bonding between solute and metal atoms, and therefore cannot be accurately calculated with the use of semi-empirical classical force fields. The project aims to study in detail the structure and dynamics of aqueous electrolytes at metallic interfaces, taking into account the effect of the electrode potential. Another side of the project is to produce an accurate method to simulate the water/metal interface. While both experimental and theoretical surface scientists have made a lot of progress on the understanding and characterization of atomistic structures and reactions at the solid/vacuum interface, the theoretical description of electrochemical interfaces still lags behind. A reason for this is that a complete and accurate first-principles description of both the liquid and the metal interfaces is still computationally too expensive and complex, since their characteristics are governed by the explicit atomic and electronic structure built at the interface as a response to environmental conditions. This project will characterize in detail how different levels of theory describe the metal/water interface. In particular, the role of van der Waals interactions will be carefully analyzed and prescriptions to perform accurate simulations will be produced.
Finite difference elastic wave modeling with an irregular free surface using ADER scheme
NASA Astrophysics Data System (ADS)
Almuhaidib, Abdulaziz M.; Nafi Toksöz, M.
2015-06-01
In numerical modeling of seismic wave propagation in the earth, we encounter two important issues: the free surface and the topography of the surface (i.e. irregularities). In this study, we develop a 2D finite difference solver for the elastic wave equation that combines a 4th-order ADER scheme (Arbitrary high-order accuracy using DERivatives), which is widely used in aeroacoustics, with the characteristic variable method at the free surface boundary. The idea is to treat the free surface boundary explicitly by using ghost values of the solution for points beyond the free surface to impose the physical boundary condition. The method is based on the velocity-stress formulation. The ultimate goal is to develop a numerical solver for the elastic wave equation that is stable, accurate and computationally efficient. The solver treats smooth, arbitrarily shaped boundaries as simple plane boundaries. The computational cost added by treating the topography is negligible compared to a flat free surface because only a small number of grid points near the boundary need to be computed. In the presence of topography, using 10 grid points per shortest shear wavelength, the solver yields accurate results. Benchmark numerical tests using several complex models that are solved by our method and other independent accurate methods show an excellent agreement, confirming the validity of the method for modeling elastic waves with an irregular free surface.
The Complex Refractive Index of Volcanic Ash Aerosol Retrieved From Spectral Mass Extinction
NASA Astrophysics Data System (ADS)
Reed, Benjamin E.; Peters, Daniel M.; McPheat, Robert; Grainger, R. G.
2018-01-01
The complex refractive indices of eight volcanic ash samples, chosen to have a representative range of SiO2 contents, were retrieved from simultaneous measurements of their spectral mass extinction coefficient and size distribution. The mass extinction coefficients, at 0.33-19 μm, were measured using two optical systems: a Fourier transform spectrometer in the infrared and two diffraction grating spectrometers covering visible and ultraviolet wavelengths. The particle size distribution was measured using a scanning mobility particle sizer and an optical particle counter; values for the effective radius of ash particles measured in this study varied from 0.574 to 1.16 μm. Verification retrievals on high-purity silica aerosol demonstrated that the Rayleigh continuous distribution of ellipsoids (CDEs) scattering model significantly outperformed Mie theory in retrieving the complex refractive index, when compared to literature values. Assuming the silica particles provided a good analogue of volcanic ash, the CDE scattering model was applied to retrieve the complex refractive index of the eight ash samples. The Lorentz formulation of the complex refractive index was used within the retrievals as a convenient way to ensure consistency with the Kramers-Kronig relation. The short-wavelength limit of the electric susceptibility was constrained by using independently measured reference values of the complex refractive index of the ash samples at a visible wavelength. The retrieved values of the complex refractive indices of the ash samples showed considerable variation, highlighting the importance of using accurate refractive index data in ash cloud radiative transfer models.
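The Lorentz formulation writes the relative permittivity as a sum of damped oscillators, so the retrieved n + ik automatically satisfies the Kramers-Kronig relation. A minimal sketch of evaluating such a model, with arbitrary oscillator parameters rather than the retrieved ash values:

```python
import numpy as np

def lorentz_refractive_index(wl_um, eps_inf, oscillators):
    """Complex refractive index from a sum of Lorentz oscillators.

    wl_um: wavelengths in microns.
    oscillators: list of (plasma_freq, resonance_freq, damping) in rad/s;
    the values used below are illustrative placeholders.
    """
    w = 2 * np.pi * 2.998e14 / wl_um  # angular frequency (c = 2.998e14 um/s)
    eps = eps_inf + sum(wp**2 / (w0**2 - w**2 - 1j * g * w)
                        for wp, w0, g in oscillators)
    return np.sqrt(eps)               # principal root gives n + ik

wl = np.linspace(1.0, 20.0, 400)      # wavelength grid in microns
m = lorentz_refractive_index(wl, 2.1, [(1.5e14, 2.0e14, 1.0e13)])
n, k = m.real, m.imag                 # real index and absorption index
```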
NASA Astrophysics Data System (ADS)
Sell, K.; Herbert, B.; Schielack, J.
2004-05-01
Students organize scientific knowledge and reason about environmental issues through the manipulation of mental models. The nature of the environmental sciences, which focus on the study of complex, dynamic systems, may present cognitive difficulties to students in their development of authentic, accurate mental models of environmental systems. The inquiry project seeks to develop and assess the coupling of information technology (IT)-based learning with physical models in order to foster rich mental model development of environmental systems in geoscience undergraduate students. The manipulation of multiple representations, the development and testing of conceptual models based on available evidence, and exposure to authentic, complex and ill-constrained problems were the components of investigation used to reach the learning goals. Upper-level undergraduate students enrolled in an environmental geology course at Texas A&M University participated in this research, which served as a pilot study. Data based on rubric evaluations interpreted by principal component analyses suggest that students' understanding of the nature of scientific inquiry is limited, and that the ability to cross scales and link systems proved problematic. Results were categorized into content knowledge and cognitive processes, where reasoning, critical thinking and cognitive load were the driving factors behind difficulties in student learning. Student mental model development revealed multiple misconceptions and lacked the complexity and completeness needed to represent the studied systems. Further, the positive learning impacts of the implemented modules favored the physical model over the IT-based learning projects, likely due to cognitive-load issues. This study illustrates the need to better understand student difficulties in solving complex problems when using IT, so that appropriate scaffolding can be implemented to enhance student learning of the earth system sciences.
Optimised analytical models of the dielectric properties of biological tissue.
Salahuddin, Saqib; Porter, Emily; Krewer, Finn; O'Halloran, Martin
2017-05-01
The interaction of electromagnetic fields with the human body is quantified by the dielectric properties of biological tissues. These properties are incorporated into complex numerical simulations using parametric models such as Debye and Cole-Cole, for the computational investigation of electromagnetic wave propagation within the body. The model parameters can be acquired through a variety of optimisation algorithms to achieve an accurate fit to measured data sets. A number of different optimisation techniques have been proposed, but these are often limited by the requirement for initial value estimates or by the large overall error (often up to several percentage points). In this work, a novel two-stage genetic algorithm proposed by the authors is applied to optimise the multi-pole Debye parameters for 54 types of human tissues. The performance of the two-stage genetic algorithm has been examined through a comparison with five other existing algorithms. The experimental results demonstrate that the two-stage genetic algorithm produces an accurate fit to a range of experimental data and efficiently outperforms all other optimisation algorithms under consideration. Accurate values of the three-pole Debye models for 54 types of human tissues, over 500 MHz to 20 GHz, are also presented for reference. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
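For reference, a multi-pole Debye model of complex relative permittivity has the closed form ε(ω) = ε∞ + Σ_k Δε_k/(1 + jωτ_k) + σ_s/(jωε₀), which is the function the genetic algorithm fits. The sketch below evaluates a three-pole model over the paper's frequency band with made-up parameters, not the published tissue values:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def debye_permittivity(f_hz, eps_inf, poles, sigma_s):
    """Multi-pole Debye model of complex relative permittivity.

    poles: list of (delta_eps, tau_seconds) pairs. The values below are
    illustrative placeholders, not fitted tissue parameters.
    """
    w = 2 * np.pi * np.asarray(f_hz)
    eps = eps_inf + sum(de / (1 + 1j * w * tau) for de, tau in poles)
    return eps + sigma_s / (1j * w * EPS0)   # static-conductivity term

f = np.logspace(np.log10(0.5e9), np.log10(20e9), 200)   # 0.5-20 GHz
eps = debye_permittivity(f, eps_inf=4.0,
                         poles=[(50.0, 8e-12), (15.0, 120e-12), (5.0, 1e-9)],
                         sigma_s=0.7)
```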
Computer Simulation of Microwave Devices
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1997-01-01
The accurate simulation of cold-test results including dispersion, on-axis beam interaction impedance, and attenuation of a helix traveling-wave tube (TWT) slow-wave circuit using the three-dimensional code MAFIA (Maxwell's Equations Solved by the Finite Integration Algorithm) was demonstrated for the first time. Obtaining these results is a critical step in the design of TWT's. A well-established procedure to acquire these parameters is to actually build and test a model or a scale model of the circuit. However, this procedure is time-consuming and expensive, and it limits freedom to examine new variations to the basic circuit. These limitations make the need for computational methods crucial since they can lower costs, reduce tube development time, and lessen limitations on novel designs. Computer simulation has been used to accurately obtain cold-test parameters for several slow-wave circuits. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. A new computer modeling technique developed at the NASA Lewis Research Center overcomes these difficulties. The MAFIA three-dimensional mesh for a C-band helix slow-wave circuit is shown.
Inter-model analysis of tsunami-induced coastal currents
NASA Astrophysics Data System (ADS)
Lynett, Patrick J.; Gately, Kara; Wilson, Rick; Montoya, Luis; Arcas, Diego; Aytore, Betul; Bai, Yefei; Bricker, Jeremy D.; Castro, Manuel J.; Cheung, Kwok Fai; David, C. Gabriel; Dogan, Gozde Guney; Escalante, Cipriano; González-Vida, José Manuel; Grilli, Stephan T.; Heitmann, Troy W.; Horrillo, Juan; Kânoğlu, Utku; Kian, Rozita; Kirby, James T.; Li, Wenwen; Macías, Jorge; Nicolsky, Dmitry J.; Ortega, Sergio; Pampell-Manis, Alyssa; Park, Yong Sung; Roeber, Volker; Sharghivand, Naeimeh; Shelby, Michael; Shi, Fengyan; Tehranirad, Babak; Tolkova, Elena; Thio, Hong Kie; Velioğlu, Deniz; Yalçıner, Ahmet Cevdet; Yamazaki, Yoshiki; Zaytsev, Andrey; Zhang, Y. J.
2017-06-01
To help produce accurate and consistent maritime hazard products, the National Tsunami Hazard Mitigation Program organized a benchmarking workshop to evaluate the numerical modeling of tsunami currents. Thirteen teams of international researchers, using a set of tsunami models currently utilized for hazard mitigation studies, presented results for a series of benchmarking problems; these results are summarized in this paper. Comparisons focus on physical situations where the currents are shear and separation driven, and are thus de-coupled from the incident tsunami waveform. In general, we find that models of increasing physical complexity provide better accuracy, and that low-order three-dimensional models are superior to high-order two-dimensional models. Inside separation zones and in areas strongly affected by eddies, the magnitude of both model-data errors and inter-model differences can be the same as the magnitude of the mean flow. Thus, we make arguments for the need of an ensemble modeling approach for areas affected by large-scale turbulent eddies, where deterministic simulation may be misleading. As a result of the analyses presented herein, we expect that tsunami modelers now have a better awareness of their ability to accurately capture the physics of tsunami currents, and therefore a better understanding of how to use these simulation tools for hazard assessment and mitigation efforts.
Model-based MPC enables curvilinear ILT using either VSB or multi-beam mask writers
NASA Astrophysics Data System (ADS)
Pang, Linyong; Takatsukasa, Yutetsu; Hara, Daisuke; Pomerantsev, Michael; Su, Bo; Fujimura, Aki
2017-07-01
Inverse Lithography Technology (ILT) is becoming the choice for Optical Proximity Correction (OPC) of advanced technology nodes in IC design and production. Multi-beam mask writers promise significant mask writing time reduction for complex ILT style masks. Before multi-beam mask writers become the mainstream working tools in mask production, VSB writers will continue to be the tool of choice to write both curvilinear ILT and Manhattanized ILT masks. To enable VSB mask writers for complex ILT style masks, model-based mask process correction (MB-MPC) is required to do the following: (1) make reasonable corrections for complex edges for those features that exhibit relatively large deviations from both curvilinear ILT and Manhattanized ILT designs; (2) control and manage both Edge Placement Errors (EPE) and shot count; and (3) assist in easing the migration to future multi-beam mask writers and serve as an effective backup solution during the transition. In this paper, a solution meeting all those requirements, MB-MPC with GPU acceleration, will be presented. One model calibration per process allows accurate correction regardless of the target mask writer.
Ultrasound breast imaging using frequency domain reverse time migration
NASA Astrophysics Data System (ADS)
Roy, O.; Zuberi, M. A. H.; Pratt, R. G.; Duric, N.
2016-04-01
Conventional ultrasonography reconstruction techniques, such as B-mode, are based on a simple wave propagation model derived from a high frequency approximation. Therefore, to minimize model mismatch, the central frequency of the input pulse is typically chosen between 3 and 15 megahertz. Despite the increase in theoretical resolution, operating at higher frequencies comes at the cost of lower signal-to-noise ratio. This ultimately degrades the image contrast and overall quality at higher imaging depths. To address this issue, we investigate a reflection imaging technique, known as reverse time migration, which uses a more accurate propagation model for reconstruction. We present preliminary simulation results as well as physical phantom image reconstructions obtained using data acquired with a breast imaging ultrasound tomography prototype. The original reconstructions are filtered to remove low-wavenumber artifacts that arise due to the inclusion of the direct arrivals. We demonstrate the advantage of using an accurate sound speed model in the reverse time migration process. We also explain how the increase in computational complexity can be mitigated using a frequency domain approach and a parallel computing platform.
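In frequency-domain reverse time migration, the image is commonly formed by zero-lag cross-correlation of the forward-propagated source field with the back-propagated receiver field, summed over frequencies; a Laplacian filter is one common remedy for the low-wavenumber artifacts mentioned above. A sketch with assumed array shapes (the paper's exact imaging and filtering choices may differ):

```python
import numpy as np

def rtm_image(src_field, rec_field):
    """Zero-lag cross-correlation imaging condition in the frequency domain.

    src_field, rec_field: complex arrays of shape (n_freq, nz, nx) holding
    the forward-modeled source wavefield and the back-propagated receiver
    wavefield at each frequency.
    """
    return np.real(np.sum(src_field * np.conj(rec_field), axis=0))

def laplacian_filter(image):
    """Simple 5-point Laplacian, one common way to suppress the smooth
    low-wavenumber backscatter artifacts mentioned in the text."""
    return (np.roll(image, 1, 0) + np.roll(image, -1, 0) +
            np.roll(image, 1, 1) + np.roll(image, -1, 1) - 4 * image)

# Shapes are illustrative: 64 frequencies on a 100 x 120 grid.
rng = np.random.default_rng(0)
S = rng.normal(size=(64, 100, 120)) + 1j * rng.normal(size=(64, 100, 120))
R = rng.normal(size=(64, 100, 120)) + 1j * rng.normal(size=(64, 100, 120))
img = laplacian_filter(rtm_image(S, R))
```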
Utilizing Direct Numerical Simulations of Transition and Turbulence in Design Optimization
NASA Technical Reports Server (NTRS)
Rai, Man M.
2015-01-01
Design optimization methods that use the Reynolds-averaged Navier-Stokes equations with the associated turbulence and transition models, or other model-based forms of the governing equations, may result in aerodynamic designs with actual performance levels that are noticeably different from the expected values because of the complexity of modeling turbulence/transition accurately in certain flows. Flow phenomena such as wake-blade interaction and trailing edge vortex shedding in turbines and compressors (examples of such flows) may require a computational approach that is free of transition/turbulence models, such as direct numerical simulations (DNS), for the underlying physics to be computed accurately. Here we explore the possibility of utilizing DNS data in designing a turbine blade section. The ultimate objective is to substantially reduce differences between predicted performance metrics and those obtained in reality. The redesign of a typical low-pressure turbine blade section with the goal of reducing total pressure loss in the row is provided as an example. The basic ideas presented here are of course just as applicable elsewhere in aerodynamic shape optimization as long as the computational costs are not excessive.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donkin, S.G.
1997-09-01
A new method of performing soil toxicity tests with free-living nematodes exposed to several metals and soil types has been adapted to the Langmuir sorption model in an attempt to bridge the gap between physico-chemical and biological data gathered in the complex soil matrix. Pseudo-Langmuir sorption isotherms have been developed using nematode toxic responses (lethality, in this case) in place of measured solvated metal, in order to more accurately model bioavailability. This method allows the graphical determination of Langmuir coefficients describing maximum sorption capacities and sorption affinities of various metal-soil combinations in the context of real biological responses of indigenous organisms. Results from nematode mortality tests with zinc, cadmium, copper, and lead in four soil types and water were used for isotherm construction. The level of agreement between these results and available literature data on metal sorption behavior in soils suggests that biologically relevant data may be successfully fitted to sorption models such as the Langmuir. This would allow for accurate prediction of soil contaminant concentrations which have minimal effect on indigenous invertebrates.
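The Langmuir isotherm itself, S = S_max·K·C/(1 + K·C), can also be fitted directly to response-versus-concentration data by nonlinear least squares; graphical determination from a linearized form, as used in the study, is the traditional alternative. The data points below are hypothetical, not the nematode results:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, s_max, k):
    """Langmuir isotherm: sorbed amount vs. solution concentration."""
    return s_max * k * c / (1 + k * c)

# Hypothetical metal-soil data (mg/kg sorbed vs. mg/L in solution).
c = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
s = np.array([8.0, 14.0, 23.0, 38.0, 48.0, 55.0, 60.0])

(s_max, k), _ = curve_fit(langmuir, c, s, p0=(60.0, 0.1))
print(f"max sorption capacity ~ {s_max:.1f}, sorption affinity ~ {k:.3f}")
```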
Liang, Liang; Liu, Minliang; Martin, Caitlin; Sun, Wei
2018-01-01
Structural finite-element analysis (FEA) has been widely used to study the biomechanics of human tissues and organs, as well as tissue-medical device interactions and treatment strategies. However, patient-specific FEA models usually require complex procedures to set up and long computing times to obtain final simulation results, preventing prompt feedback to clinicians in time-sensitive clinical applications. In this study, using machine learning techniques, we developed a deep learning (DL) model to directly estimate the stress distributions of the aorta. The DL model was designed and trained to take the FEA input and directly output the aortic wall stress distributions, bypassing the FEA calculation process. The trained DL model is capable of predicting the stress distributions with average errors of 0.492% and 0.891% in the von Mises stress distribution and peak von Mises stress, respectively. This study marks, to our knowledge, the first demonstration of the feasibility and great potential of using the DL technique as a fast and accurate surrogate of FEA for stress analysis. © 2018 The Author(s).
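A generic sketch of such a surrogate, assuming the FEA input is encoded as a flat feature vector and the output is a nodal stress field; the paper's actual architecture and encodings are not reproduced here:

```python
import torch
import torch.nn as nn

# Minimal surrogate: map an assumed 64-dimensional FEA input encoding
# (e.g. shape and loading parameters) to a 5000-node stress field.
class StressSurrogate(nn.Module):
    def __init__(self, n_in=64, n_nodes=5000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, n_nodes),
        )

    def forward(self, x):
        return self.net(x)

model = StressSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a dummy batch of (FEA input, FEA stress) pairs.
x = torch.randn(8, 64)
y = torch.randn(8, 5000)
opt.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
opt.step()
```

Once trained on enough precomputed FEA solutions, a forward pass through such a network takes milliseconds, which is the source of the speed-up reported in the abstract.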
NASA Astrophysics Data System (ADS)
Saleh, F.; Garambois, P. A.; Biancamaria, S.
2017-12-01
Floods are considered among the major natural threats to human societies across all continents. The consequences of floods in highly populated areas are more dramatic, with losses of human lives and substantial property damage. This risk is projected to increase with the effects of climate change, particularly sea-level rise, increasing storm frequencies and intensities, and growing population and economic assets in urban watersheds. Despite advances in computational resources and modeling techniques, significant gaps exist in predicting complex processes and accurately representing the initial state of the system. Improving flood prediction models and data assimilation chains through satellite observations has become a priority for producing accurate flood forecasts with sufficient lead times. The overarching goal of this work is to assess the benefits of Surface Water and Ocean Topography (SWOT) satellite data from a flood prediction perspective. The near-real-time methodology is based on combining data from a simulator that mimics future SWOT observations, numerical models, high-resolution elevation data and real-time local measurements in the New York/New Jersey area.
NASA Technical Reports Server (NTRS)
Ryan, Harry M.; Coote, David J.; Ahuja, Vineet; Hosangadi, Ashvin
2006-01-01
Accurate modeling of liquid rocket engine test processes involves assessing critical fluid mechanic and heat and mass transfer mechanisms within a cryogenic environment, and accurately modeling fluid properties such as vapor pressure and liquid and gas densities as a function of pressure and temperature. The Engineering and Science Directorate at the NASA John C. Stennis Space Center has developed and implemented such analytic models and analysis processes, which have been used over a broad range of thermodynamic systems and have resulted in substantial improvements in rocket propulsion testing services. In this paper, we offer an overview of the analysis techniques used to simulate pressurization and propellant fluid systems associated with the test stands at the NASA John C. Stennis Space Center. More specifically, examples of the global (one-dimensional) performance of a propellant system are provided as predicted using the Rocket Propulsion Test Analysis (RPTA) model. Computational fluid dynamic (CFD) analyses utilizing multi-element, unstructured, moving-grid capability of complex cryogenic feed ducts, transient valve operation, and pressurization and mixing in propellant tanks are provided as well.
Non-lambertian reflectance modeling and shape recovery of faces using tensor splines.
Kumar, Ritwik; Barmpoutis, Angelos; Banerjee, Arunava; Vemuri, Baba C
2011-03-01
Modeling illumination effects and pose variations of a face is of fundamental importance in the field of facial image analysis. Most of the conventional techniques that simultaneously address both of these problems work with the Lambertian assumption and thus fall short of accurately capturing the complex intensity variation that the facial images exhibit or recovering their 3D shape in the presence of specularities and cast shadows. In this paper, we present a novel Tensor-Spline-based framework for facial image analysis. We show that, using this framework, the facial apparent BRDF field can be accurately estimated while seamlessly accounting for cast shadows and specularities. Further, using local neighborhood information, the same framework can be exploited to recover the 3D shape of the face (to handle pose variation). We quantitatively validate the accuracy of the Tensor Spline model using a more general model based on the mixture of single-lobed spherical functions. We demonstrate the effectiveness of our technique by presenting extensive experimental results for face relighting, 3D shape recovery, and face recognition using the Extended Yale B and CMU PIE benchmark data sets.
Ruel, Jean; Lachance, Geneviève
2010-01-01
This paper presents an experimental study of three bioreactor configurations. The bioreactor is intended to be used for the development of tissue-engineered heart valve substitutes; therefore it must be able to reproduce physiological flow and pressure waveforms accurately. A detailed analysis of three bioreactor arrangements is presented using mathematical models based on the windkessel (WK) approach. First, a review of the many applications of this approach in medical studies highlights its fundamental nature and usefulness. Then the models are developed with reference to the actual components of the bioreactor. This study emphasizes the conflicting issues that arise in the design process of a bioreactor for biomedical purposes, where an optimization process is essential to reach a compromise satisfying all conditions. Two important aspects are the need for a simple system providing ease of use and long-term sterility, opposed to the need for an advanced (thus more complex) architecture capable of a more accurate reproduction of the physiological environment. Three classic WK architectures are analyzed, and experimental results highlight the advantages and limitations of each one. PMID:21977286
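The simplest member of the WK family, the two-element windkessel, already captures the capacitor-resistor interplay behind physiological pressure waveforms: compliance C charges during systolic inflow and discharges through peripheral resistance R in diastole. The pump waveform and parameter values below are illustrative, not the bioreactor's:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two-element windkessel: dP/dt = (Q(t) - P/R) / C.
R, C, HR = 1.0, 1.2, 72 / 60        # mmHg.s/mL, mL/mmHg, beats/s

def q_pump(t):
    """Half-sine systolic inflow over 35% of each cycle (illustrative)."""
    phase = (t * HR) % 1.0
    return 400.0 * np.sin(np.pi * phase / 0.35) if phase < 0.35 else 0.0

def dpdt(t, p):
    return (q_pump(t) - p / R) / C

sol = solve_ivp(dpdt, (0, 10), [80.0], max_step=1e-3)
print("pressure range (mmHg):",
      sol.y[0][-2000:].min(), sol.y[0][-2000:].max())
```

Adding a proximal resistance (three-element WK) or an inertance (four-element WK) gives the more advanced architectures the paper compares.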
Using LSTMs to learn physiological models of blood glucose behavior.
Mirshekarian, Sadegh; Bunescu, Razvan; Marling, Cindy; Schwartz, Frank
2017-07-01
For people with type 1 diabetes, good blood glucose control is essential to keeping serious disease complications at bay. This entails carefully monitoring blood glucose levels and taking corrective steps whenever they are too high or too low. If blood glucose levels could be accurately predicted, patients could take proactive steps to prevent blood glucose excursions from occurring. However, accurate predictions require complex physiological models of blood glucose behavior. Factors such as insulin boluses, carbohydrate intake, and exercise influence blood glucose in ways that are difficult to capture through manually engineered equations. In this paper, we describe a recurrent neural network (RNN) approach that uses long short-term memory (LSTM) units to learn a physiological model of blood glucose. When trained on raw data from real patients, the LSTM networks (LSTMs) obtain results that are competitive with a previous state-of-the-art model based on manually engineered physiological equations. The RNN approach can incorporate arbitrary physiological parameters without the need for sophisticated manual engineering, thus holding the promise of further improvements in prediction accuracy.
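A minimal sketch of this kind of forecaster, assuming each time step carries a small feature vector (e.g. glucose, insulin bolus, carbohydrates) and the target is the glucose level at the prediction horizon; the paper's exact architecture and feature set are not reproduced here:

```python
import torch
import torch.nn as nn

class GlucoseLSTM(nn.Module):
    """LSTM forecaster: a window of recent physiological features in,
    one glucose value at the prediction horizon out."""
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                  # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # predict from last hidden state

model = GlucoseLSTM()
x = torch.randn(16, 24, 3)                 # 16 windows of 24 time steps
y_hat = model(x)                           # (16, 1) predicted glucose
```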
Local spatio-temporal analysis in vision systems
NASA Astrophysics Data System (ADS)
Geisler, Wilson S.; Bovik, Alan; Cormack, Lawrence; Ghosh, Joydeep; Gildeen, David
1994-07-01
The aims of this project are the following: (1) develop a physiologically and psychophysically based model of low-level human visual processing (a key component of which are local frequency coding mechanisms); (2) develop image models and image-processing methods based upon local frequency coding; (3) develop algorithms for performing certain complex visual tasks based upon local frequency representations, (4) develop models of human performance in certain complex tasks based upon our understanding of low-level processing; and (5) develop a computational testbed for implementing, evaluating and visualizing the proposed models and algorithms, using a massively parallel computer. Progress has been substantial on all aims. The highlights include the following: (1) completion of a number of psychophysical and physiological experiments revealing new, systematic and exciting properties of the primate (human and monkey) visual system; (2) further development of image models that can accurately represent the local frequency structure in complex images; (3) near completion in the construction of the Texas Active Vision Testbed; (4) development and testing of several new computer vision algorithms dealing with shape-from-texture, shape-from-stereo, and depth-from-focus; (5) implementation and evaluation of several new models of human visual performance; and (6) evaluation, purchase and installation of a MasPar parallel computer.
Calibration of a γ-Re_θ transition model and its application in low-speed flows
NASA Astrophysics Data System (ADS)
Wang, YunTao; Zhang, YuLun; Meng, DeHong; Wang, GunXue; Li, Song
2014-12-01
The prediction of laminar-turbulent transition in the boundary layer is very important for obtaining accurate aerodynamic characteristics with computational fluid dynamics (CFD) tools, because laminar-turbulent transition is directly related to complex flow phenomena in the boundary layer and separated flow in space. Unfortunately, the transition effect is not included in today's major CFD tools because of the non-local calculations involved in transition modeling. In this paper, Menter's γ-Re_θ transition model is calibrated and incorporated into a Reynolds-Averaged Navier-Stokes (RANS) code, Trisonic Platform (TRIP), developed at the China Aerodynamic Research and Development Center (CARDC). Based on flat-plate experimental data from the literature, the empirical correlations involved in the transition model are modified and calibrated numerically. Numerical simulation of the low-speed flow over the Trapezoidal Wing (Trap Wing) is performed and compared with the corresponding experimental data. It is shown that the γ-Re_θ transition model can accurately predict the locations of separation-induced and natural transition in flow regions with moderate pressure gradient. The transition model effectively improves the simulation accuracy of the boundary layer and the aerodynamic characteristics.
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Determining the thawing times of frozen foods is a challenging problem, as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper, analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always practical, as the calculations take time and the specialized software and equipment can be costly. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or the development of new ones that will enable accurate determination of thawing time within a wide range of practical heat transfer conditions during processing. PMID:27904387
Multi-Node Thermal System Model for Lithium-Ion Battery Packs: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Ying; Smith, Kandler; Wood, Eric
Temperature is one of the main factors that controls degradation in lithium-ion batteries. Accurate knowledge and control of cell temperatures in a pack helps the battery management system (BMS) maximize cell utilization and ensure pack safety and service life. In a pack with arrays of cells, a cell's temperature is affected not only by its own thermal characteristics but also by its neighbors, the cooling system and the pack configuration, which increase the noise level and the complexity of cell temperature prediction. This work proposes to model the thermal behavior of lithium-ion packs using a multi-node thermal network model, which predicts cell temperatures by zone. The model was parametrized and validated using commercial lithium-ion battery packs.
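The idea of a multi-node thermal network is that each zone is a lumped heat capacity exchanging heat with its neighbors and the coolant through thermal resistances. A two-zone sketch with made-up parameters (not the commercial pack's) is shown below; the asymmetric cooling resistances produce the zone-to-zone temperature spread such a model is meant to predict:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Two cell zones coupled to each other and to the coolant.
C = np.array([800.0, 800.0])      # zone heat capacities (J/K)
R_cell = 2.0                      # zone-to-zone thermal resistance (K/W)
R_cool = np.array([1.5, 3.0])     # zone-to-coolant resistances (asymmetric)
T_cool = 25.0                     # coolant temperature (C)
Q = np.array([2.0, 2.0])          # heat generation per zone (W)

def rhs(t, T):
    q12 = (T[1] - T[0]) / R_cell                  # inter-zone heat flow
    return (Q + np.array([q12, -q12]) + (T_cool - T) / R_cool) / C

sol = solve_ivp(rhs, (0, 7200), [25.0, 25.0], max_step=10.0)
print("steady zone temperatures (C):", sol.y[:, -1])
```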
Multi-source micro-friction identification for a class of cable-driven robots with passive backbone
NASA Astrophysics Data System (ADS)
Tjahjowidodo, Tegoeh; Zhu, Ke; Dailey, Wayne; Burdet, Etienne; Campolo, Domenico
2016-12-01
This paper analyses the dynamics of cable-driven robots with a passive backbone and develops techniques for their dynamic identification, which are tested on the H-Man, a planar cabled differential transmission robot for haptic interaction. The mechanism is optimized for human-robot interaction by accounting for the cost-benefit ratio of the system, specifically by eliminating the need for an external force sensor to reduce the overall cost. As a consequence, accurate force-feedback applications require an effective dynamic model that includes the friction behavior of the system. We first consider the significance of friction in both the actuator and backbone spaces. Subsequently, we study the required complexity of the stiction model for the application. Models of different levels of complexity are investigated, ranging from the conventional Coulomb approach to an advanced model that includes hysteresis. The results demonstrate each model's ability to capture the dynamic behavior of the system. In general, it is concluded that there is a trade-off between model accuracy and model cost.
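The static friction maps at the simple end of that spectrum are easy to state explicitly; hysteresis models additionally carry internal state and are omitted from this sketch. Parameter values are placeholders:

```python
import numpy as np

# Friction force as a function of velocity v, in increasing complexity.
Fc, Fv, Fs, vs = 0.8, 0.5, 1.2, 0.01  # Coulomb, viscous, stiction, Stribeck

def coulomb(v):
    return Fc * np.sign(v)

def coulomb_viscous(v):
    return Fc * np.sign(v) + Fv * v

def stribeck(v):
    # Exponential Stribeck curve: smooth stiction-to-Coulomb transition.
    return (Fc + (Fs - Fc) * np.exp(-(v / vs) ** 2)) * np.sign(v) + Fv * v

v = np.linspace(-0.1, 0.1, 401)
for model in (coulomb, coulomb_viscous, stribeck):
    print(model.__name__, model(0.02))
```

Identification then reduces to fitting the chosen map's parameters to measured torque-velocity data, with the accuracy-versus-cost trade-off the paper describes.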
Ditlev, Jonathon A.; Mayer, Bruce J.; Loew, Leslie M.
2013-01-01
Mathematical modeling has established its value for investigating the interplay of biochemical and mechanical mechanisms underlying actin-based motility. Because of the complex nature of actin dynamics and its regulation, many of these models are phenomenological or conceptual, providing a general understanding of the physics at play. But the wealth of carefully measured kinetic data on the interactions of many of the players in actin biochemistry cries out for the creation of more detailed and accurate models that could permit investigators to dissect interdependent roles of individual molecular components. Moreover, no human mind can assimilate all of the mechanisms underlying complex protein networks; so an additional benefit of a detailed kinetic model is that the numerous binding proteins, signaling mechanisms, and biochemical reactions can be computationally organized in a fully explicit, accessible, visualizable, and reusable structure. In this review, we will focus on how comprehensive and adaptable modeling allows investigators to explain experimental observations and develop testable hypotheses on the intracellular dynamics of the actin cytoskeleton. PMID:23442903
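As a minimal illustration of the explicit kinetic bookkeeping such models rely on, the sketch below integrates mass-action barbed-end elongation of actin filaments with monomer depletion. The rate constants are typical literature-scale placeholders, and the scheme is far simpler than the comprehensive models the review describes.

```python
# Mass-action sketch: actin filament barbed-end elongation with monomer
# depletion. Rate constants are literature-scale placeholders.

k_on = 11.6   # barbed-end association rate [1/(uM s)]
k_off = 1.4   # barbed-end dissociation rate [1/s]

def elongation_rate(G, ends):
    """Net rate of subunit addition d[F]/dt for monomer concentration G [uM]
    and filament-end concentration ends [uM]."""
    return (k_on * G - k_off) * ends

G, F, ends, dt = 5.0, 0.0, 0.01, 0.001
for _ in range(int(60 / dt)):      # one minute of polymerization
    dF = elongation_rate(G, ends) * dt
    F += dF
    G -= dF                        # conservation of total actin
print(f"G-actin left: {G:.2f} uM; F-actin formed: {F:.2f} uM")
# The monomer pool relaxes toward the critical concentration k_off/k_on.
```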
NASA Technical Reports Server (NTRS)
Aldrin, John C.; Williams, Phillip A.; Wincheski, Russell (Buzz) A.
2008-01-01
A case study is presented on using models in eddy current NDE design for crack detection in Shuttle Reaction Control System thruster components. Numerical methods were used to address the complex geometry of the part and to perform parametric studies of potential transducer designs. Simulations showed agreement with experimental results. Accurate representation of the coherent noise associated with the measurement and part geometry was found to be critical for properly evaluating the best probe designs.
Parreiras, P M; Sirota, L A; Wagner, L D; Menzies, S L; Arciniega, J L
2009-07-16
Complexities of lethal challenge models have prompted the investigation of immunogenicity assays as potency tests of anthrax vaccines. An ELISA and a lethal toxin neutralization assay (TNA) were used to measure antibody response to Protective Antigen (PA) in mice immunized once with either a commercial or a recombinant PA (rPA) vaccine formulated in-house. Even though ELISA and TNA results showed correlation, ELISA results may not be able to accurately predict TNA results in this single immunization model.
Predictive Modeling for NASA Entry, Descent and Landing Missions
NASA Technical Reports Server (NTRS)
Wright, Michael
2016-01-01
Entry, Descent and Landing (EDL) Modeling and Simulation (M&S) is an enabling capability for complex NASA entry missions such as MSL and Orion. M&S is used in every mission phase to define mission concepts, select appropriate architectures, design EDL systems, quantify margin and risk, ensure correct system operation, and analyze data returned from the entry. In an environment where it is impossible to fully test EDL concepts on the ground prior to use, accurate M&S capability is required to extrapolate ground test results to expected flight performance.
NASA Technical Reports Server (NTRS)
Wang, Qun-Zhen; Massey, Steven J.; Abdol-Hamid, Khaled S.; Frink, Neal T.
1999-01-01
USM3D is a widely used unstructured flow solver for simulating inviscid and viscous flows over complex geometries. The current version (version 5.0) of USM3D, however, does not have advanced turbulence models for accurately simulating complicated flows. We have implemented two modified versions of the original Jones and Launder k-epsilon two-equation turbulence model and the Girimaji algebraic Reynolds stress model in USM3D. Tests have been conducted for two flat-plate boundary layer cases, the RAE2822 airfoil and the ONERA M6 wing. The results are compared with empirical formulae, theoretical results and those of the existing Spalart-Allmaras one-equation model.
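For reference, a standard high-Reynolds-number form of the k-epsilon transport equations is reproduced below; exact damping functions and constants differ between variants (including the modified Jones-Launder models mentioned above), so this is a generic form rather than the USM3D implementation.

```latex
\begin{align}
\frac{\partial (\rho k)}{\partial t}
  + \frac{\partial (\rho u_j k)}{\partial x_j}
  &= \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)
     \frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon, \\
\frac{\partial (\rho \varepsilon)}{\partial t}
  + \frac{\partial (\rho u_j \varepsilon)}{\partial x_j}
  &= \frac{\partial}{\partial x_j}\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)
     \frac{\partial \varepsilon}{\partial x_j}\right]
     + C_{\varepsilon 1}\,\frac{\varepsilon}{k}\,P_k
     - C_{\varepsilon 2}\,\rho\,\frac{\varepsilon^2}{k},
\end{align}
% with eddy viscosity and typical closure constants
\mu_t = \rho C_\mu \frac{k^2}{\varepsilon}, \qquad
C_\mu = 0.09,\; C_{\varepsilon 1} = 1.44,\; C_{\varepsilon 2} = 1.92,\;
\sigma_k = 1.0,\; \sigma_\varepsilon = 1.3.
```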
A photogrammetric technique for generation of an accurate multispectral optical flow dataset
NASA Astrophysics Data System (ADS)
Kniaz, V. V.
2017-06-01
The availability of an accurate dataset is a key requirement for the successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets have been developed in recent years and have given rise to many powerful algorithms. However, most of these datasets include only images captured in the visible spectrum. This paper is focused on the creation of a multispectral optical flow dataset with accurate ground truth. Generating accurate ground truth optical flow is a complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling or laser scanning. Such techniques either work only with synthetic optical flow or provide only sparse ground truth. In this paper, a new photogrammetric method for generating accurate ground truth optical flow is proposed. The method combines the accuracy and density of synthetic optical flow datasets with the flexibility of laser-scanning-based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.
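The core geometric idea behind a photogrammetric ground truth is sketched below under simplifying assumptions (a static scene, known intrinsics K, known relative pose R, t, and a dense depth map): back-project each pixel, transform it into the second camera, re-project, and take the pixel displacement as the flow. The function and all values are illustrative, not the paper's pipeline.

```python
import numpy as np

def flow_from_depth_and_pose(depth, K, R, t):
    """Return dense (u, v) ground-truth flow fields for a static scene."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1)
    # Back-project pixels to 3D points in the first camera's frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Transform into the second camera's frame and re-project.
    pts2 = R @ pts + t.reshape(3, 1)
    pix2 = K @ pts2
    pix2 = pix2[:2] / pix2[2:3]
    u = pix2[0].reshape(h, w) - xs
    v = pix2[1].reshape(h, w) - ys
    return u, v

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
depth = np.full((480, 640), 5.0)                 # flat scene 5 m away
R, t = np.eye(3), np.array([0.1, 0.0, 0.0])      # 10 cm lateral relative pose
u, v = flow_from_depth_and_pose(depth, K, R, t)
print(u.mean(), v.mean())  # ~10 px horizontal flow, ~0 vertical
```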
Karnon, Jonathan; Haji Ali Afzali, Hossein
2014-06-01
Modelling in economic evaluation is an unavoidable fact of life. Cohort-based state transition models are most common, though discrete event simulation (DES) is increasingly being used to implement more complex model structures. The benefits of DES relate to the greater flexibility around the implementation and population of complex models, which may provide more accurate or valid estimates of the incremental costs and benefits of alternative health technologies. The costs of DES relate to the time and expertise required to implement and review complex models, when perhaps a simpler model would suffice. The costs are not borne solely by the analyst, but also by reviewers. In particular, modelled economic evaluations are often submitted to support reimbursement decisions for new technologies, for which detailed model reviews are generally undertaken on behalf of the funding body. This paper reports the results from a review of published DES-based economic evaluations. Factors underlying the use of DES were defined, and the characteristics of applied models were considered, to inform options for assessing the potential benefits of DES in relation to each factor. Four broad factors underlying the use of DES were identified: baseline heterogeneity, continuous disease markers, time-varying event rates, and the influence of prior events on subsequent event rates. If relevant individual-level data are available, representation of the four factors is likely to improve model validity, and it is possible to assess the importance of their representation in individual cases. A thorough model performance evaluation is required to overcome the costs of DES from the users' perspective, but few of the reviewed DES models reported such a process. More generally, further direct, empirical comparisons of complex models with simpler models would better inform the benefits of using DES to implement more complex models, and the circumstances in which such benefits are most likely.
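For readers unfamiliar with the mechanics, the toy sketch below shows a DES in which simulated patients have heterogeneous baseline risk and prior events raise subsequent event rates, two of the four factors identified above. All rates, costs and distributions are placeholders, not drawn from any reviewed model.

```python
import heapq
import random

random.seed(1)

def simulate_patient(horizon_years=10.0, base_rate=0.1, post_event_mult=2.0):
    """Return (n_events, total_cost) for one simulated patient."""
    rate = base_rate * random.lognormvariate(0.0, 0.3)  # baseline heterogeneity
    events, cost = 0, 0.0
    queue = [(random.expovariate(rate), "event")]       # future-event list
    while queue:
        t, kind = heapq.heappop(queue)
        if t > horizon_years:
            break
        events += 1
        cost += 5000.0                                  # cost per event
        rate = min(rate * post_event_mult, 5.0)         # prior events raise
        heapq.heappush(queue, (t + random.expovariate(rate), "event"))
    return events, cost                                 # risk (capped)

results = [simulate_patient() for _ in range(10000)]
mean_cost = sum(c for _, c in results) / len(results)
print(f"Mean cost per patient: {mean_cost:.0f}")
```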
Meris, Ronald G; Barbera, Joseph A
2014-01-01
In a large-scale outdoor airborne hazardous materials (HAZMAT) incident, such as ruptured chlorine rail cars during a train derailment, local Incident Commanders and HAZMAT emergency responders must obtain accurate information quickly to assess the situation and act promptly and appropriately. HAZMAT responders must have a clear understanding of key information and how to integrate it into timely and effective decisions for action planning. This study examined the use of HAZMAT plume modeling as a decision support tool during incident action planning in this type of extreme HAZMAT incident. The concept of situation awareness, as presented by Endsley's dynamic situation awareness model, contains three levels: perception, comprehension, and projection. It was used to examine the actions of incident managers related to adequate data acquisition, current situational understanding, and accurate situation projection. Scientists and engineers have created software to simulate and predict HAZMAT plume behavior, the projected hazard impact areas, and the associated health effects. Incorporating HAZMAT plume projection modeling into an incident action plan, however, can be a complex process. The present analysis used a mixed qualitative and quantitative methodological approach to examine the use and limitations of a "HAZMAT Plume Modeling Cycle" process that can be integrated into the incident action planning cycle. HAZMAT response experts were interviewed using a computer-based simulation. One of the research conclusions was that the "HAZMAT Plume Modeling Cycle" is a critical function: an individual or team should be tasked with continually updating the hazard plume model with evolving data, promoting more accurate situation awareness.
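Many plume projection tools build on the classic Gaussian plume model; as a hedged illustration, the sketch below evaluates the standard steady-state formula for a continuous elevated point source with ground reflection. In practice the dispersion coefficients come from stability-class correlations (e.g. Pasquill-Gifford) at the downwind distance of interest; all numbers here are placeholders.

```python
import numpy as np

def gaussian_plume(Q, u, y, z, H, sigma_y, sigma_z):
    """Concentration [kg/m^3] at crosswind offset y and height z, for
    emission rate Q [kg/s], wind speed u [m/s], release height H [m],
    and dispersion coefficients sigma_y, sigma_z [m]."""
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Chlorine-like release: 10 kg/s in a 3 m/s wind; sigma values roughly
# correspond to ~1 km downwind in neutral conditions (placeholders).
c = gaussian_plume(Q=10.0, u=3.0, y=0.0, z=1.5, H=2.0,
                   sigma_y=70.0, sigma_z=35.0)
print(f"Centerline concentration near 1 km: {c*1e6:.0f} mg/m^3")
```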
Evidence of complex contagion of information in social media: An experiment using Twitter bots.
Mønsted, Bjarke; Sapieżyński, Piotr; Ferrara, Emilio; Lehmann, Sune
2017-01-01
It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using 'social bots' deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques.
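The competing hypotheses can be summarized with two simple dose-response curves, sketched below: under simple contagion, k independent exposures each with probability p yield adoption probability 1 - (1 - p)^k, whereas complex contagion makes adoption depend nonlinearly on the number of sources, here via a logistic threshold. These are illustrative forms, not the Bayesian models fitted in the paper.

```python
import numpy as np

def p_adopt_simple(k, p=0.05):
    """Simple contagion: k independent Bernoulli exposures."""
    return 1 - (1 - p) ** k

def p_adopt_complex(k, threshold=3.0, steepness=2.0):
    """Complex contagion: adoption rises sharply near an exposure threshold."""
    return 1 / (1 + np.exp(-steepness * (k - threshold)))

for k in range(1, 7):
    print(k, round(p_adopt_simple(k), 3), round(float(p_adopt_complex(k)), 3))
```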
NASA Astrophysics Data System (ADS)
Rutter, Nick; Sandells, Mel; Derksen, Chris; Toose, Peter; Royer, Alain; Montpetit, Benoit; Langlois, Alex; Lemmetyinen, Juha; Pulliainen, Jouni
2014-03-01
Two-dimensional measurements of snowpack properties (stratigraphic layering, density, grain size, and temperature) were used as inputs to the multilayer Helsinki University of Technology (HUT) microwave emission model at a centimeter-scale horizontal resolution, across a 4.5 m transect of ground-based passive microwave radiometer footprints near Churchill, Manitoba, Canada. Snowpack stratigraphy was complex (between six and eight layers) with only three layers extending continuously throughout the length of the transect. Distributions of one-dimensional simulations, accurately representing complex stratigraphic layering, were evaluated using measured brightness temperatures. Large biases (36 to 68 K) between simulated and measured brightness temperatures were minimized (-0.5 to 0.6 K), within measurement accuracy, through application of grain scaling factors (2.6 to 5.3) at different combinations of frequencies, polarizations, and model extinction coefficients. Grain scaling factors compensated for uncertainty relating optical specific surface area to HUT effective grain size inputs and quantified relative differences in scattering and absorption properties of various extinction coefficients. The HUT model required accurate representation of ice lenses, particularly at horizontal polarization, and large grain scaling factors highlighted the need to consider microstructure beyond the size of individual grains. As variability of extinction coefficients was strongly influenced by the proportion of large (hoar) grains in a vertical profile, it is important to consider simulations from distributions of one-dimensional profiles rather than single profiles, especially in sub-Arctic snowpacks where stratigraphic variability can be high. Model sensitivity experiments suggested that the level of error in field measurements and the new methodological framework used to apply them in a snow emission model were satisfactory. Layer amalgamation showed that a three-layer representation of snowpack stratigraphy reduced the bias of a one-layer representation by about 50%.
Michino, Mayako; Chen, Jianhan; Stevens, Raymond C; Brooks, Charles L
2010-08-01
Building reliable structural models of G protein-coupled receptors (GPCRs) is a difficult task because of the paucity of suitable templates, low sequence identity, and the wide variety of ligand specificities within the superfamily. Template-based modeling is known to be the most successful method for protein structure prediction. However, refinement of homology models to within 1-3 Å Cα RMSD of the native structure remains a major challenge. Here, we address this problem by developing a novel protocol (foldGPCR) for modeling the transmembrane (TM) region of GPCRs in complex with a ligand, aimed at accurately modeling the structural divergence between the template and target in the TM helices. The protocol is based on predicted conserved inter-residue contacts between the template and target, and exploits an all-atom implicit membrane force field. The placement of the ligand in the binding pocket is guided by biochemical data. The foldGPCR protocol is implemented by a stepwise hierarchical approach, in which the TM helical bundle and the ligand are assembled by simulated annealing trials in the first step, and the receptor-ligand complex is refined with replica exchange sampling in the second step. The protocol is applied to model the human β2-adrenergic receptor (β2AR) bound to carazolol, using contacts derived from the template structure of bovine rhodopsin. Comparison with the X-ray crystal structure of the β2AR shows that our protocol is particularly successful in accurately capturing helix backbone irregularities and helix-helix packing interactions that distinguish rhodopsin from β2AR.
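As a generic illustration of the first-step sampling strategy, the toy sketch below implements the Metropolis acceptance rule and geometric cooling typical of simulated annealing, applied to a one-dimensional rugged landscape; it is not the foldGPCR force field or protocol.

```python
import math
import random

random.seed(0)

def energy(x):
    """Toy rugged landscape with a global minimum near x = 2."""
    return (x - 2.0) ** 2 + 0.5 * math.sin(8.0 * x)

def anneal(x0, T0=5.0, cooling=0.999, steps=20000):
    x, T = x0, T0
    e = energy(x)
    for _ in range(steps):
        x_new = x + random.gauss(0.0, 0.2)
        e_new = energy(x_new)
        # Metropolis criterion: always accept downhill, sometimes uphill.
        if e_new < e or random.random() < math.exp(-(e_new - e) / T):
            x, e = x_new, e_new
        T *= cooling  # geometric cooling schedule
    return x, e

x, e = anneal(x0=-5.0)
print(f"x = {x:.3f}, E = {e:.3f}")
```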