Zeinali-Rafsanjani, B.; Mosleh-Shirazi, M. A.; Faghihi, R.; Karbasi, S.; Mosalaei, A.
2015-01-01
To accurately recompute dose distributions in chest-wall radiotherapy with 120 kVp kilovoltage X-rays, an MCNP4C Monte Carlo model is presented using a fast method that obviates the need to fully model the tube components. To validate the model, half-value layer (HVL), percentage depth doses (PDDs) and beam profiles were measured. Dose measurements were performed for a more complex situation using thermoluminescence dosimeters (TLDs) placed within a Rando phantom. The measured and computed first and second HVLs were 3.8, 10.3 mm Al and 3.8, 10.6 mm Al, respectively. The differences between measured and calculated PDDs and beam profiles in water were within 2 mm/2% for all data points. In the Rando phantom, differences for the majority of data points were within 2%. The proposed model offered an approximately 9500-fold reduced run time compared to the conventional full simulation. The acceptable agreement, based on international criteria, between the simulations and the measurements validates the accuracy of the model for its use in treatment planning and radiobiological modeling studies of superficial therapies, including chest-wall irradiation, using kilovoltage beams. PMID:26170553
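The first- and second-HVL comparison above can be reproduced from any measured transmission curve by log-linear interpolation. The sketch below is illustrative only: the attenuation coefficient and filter thicknesses are invented, not the paper's data.

```python
import numpy as np

def half_value_layers(thickness_mm, transmission):
    """Estimate first and second half-value layers (HVLs) from a
    transmission-vs-thickness curve by log-linear interpolation."""
    log_t = np.log(transmission)
    # np.interp needs increasing abscissae; transmission decreases with
    # thickness, so reverse both arrays before interpolating.
    t_half = np.interp(np.log(0.5), log_t[::-1], thickness_mm[::-1])
    t_quarter = np.interp(np.log(0.25), log_t[::-1], thickness_mm[::-1])
    return t_half, t_quarter - t_half  # first HVL, second HVL

# Hypothetical aluminium-filter data for an idealized monoenergetic beam
thickness = np.array([0.0, 2.0, 4.0, 6.0, 10.0, 16.0])
trans = np.exp(-0.18 * thickness)  # mu = 0.18 / mm, made up
hvl1, hvl2 = half_value_layers(thickness, trans)
```

For a monoenergetic beam the two HVLs coincide (ln 2 / mu); a hardening polyenergetic beam, as in the abstract, yields a second HVL larger than the first.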
Beckwith, Martha A; Ames, William; Vila, Fernando D; Krewald, Vera; Pantazis, Dimitrios A; Mantel, Claire; Pécaut, Jacques; Gennari, Marcello; Duboc, Carole; Collomb, Marie-Noëlle; Yano, Junko; Rehr, John J; Neese, Frank; DeBeer, Serena
2015-10-14
First-principles calculations of extended X-ray absorption fine structure (EXAFS) data have seen widespread use in bioinorganic chemistry, perhaps most notably for modeling the Mn4Ca site in the oxygen evolving complex (OEC) of photosystem II (PSII). The logic implied by the calculations rests on the assumption that it is possible to a priori predict an accurate EXAFS spectrum provided that the underlying geometric structure is correct. The present study investigates the extent to which this is possible using state-of-the-art EXAFS theory. The FEFF program is used to evaluate the ability of a multiple-scattering-based approach to directly calculate the EXAFS spectrum of crystallographically defined model complexes. The results of these parameter-free predictions are compared with the more traditional approach of fitting FEFF-calculated spectra to experimental data. A series of seven crystallographically characterized Mn monomers and dimers is used as a test set. The largest deviations between the FEFF-calculated EXAFS spectra and the experimental EXAFS spectra arise from the amplitudes. The amplitude errors result from a combination of errors in calculated S0^2 and Debye-Waller values as well as uncertainties in background subtraction. Additional errors may be attributed to structural parameters, particularly in cases where reliable high-resolution crystal structures are not available. Based on these investigations, the strengths and weaknesses of using first-principles EXAFS calculations as a predictive tool are discussed. We demonstrate that a range of DFT-optimized structures of the OEC may all be considered consistent with experimental EXAFS data and that caution must be exercised when using EXAFS data to obtain topological arrangements of complex clusters.
Pre-Modeling Ensures Accurate Solid Models
ERIC Educational Resources Information Center
Gow, George
2010-01-01
Successful solid modeling requires a well-organized design tree. The design tree is a list of all the object's features and the sequential order in which they are modeled. The solid-modeling process is faster and less prone to modeling errors when the design tree is a simple and geometrically logical definition of the modeled object. Few high…
New model accurately predicts reformate composition
Ancheyta-Juarez, J.; Aguilar-Rodriguez, E. )
1994-01-31
Although naphtha reforming is a well-known process, the evolution of catalyst formulation, as well as new trends in gasoline specifications, have led to rapid evolution of the process, including: reactor design, regeneration mode, and operating conditions. Mathematical modeling of the reforming process is an increasingly important tool. It is fundamental to the proper design of new reactors and revamp of existing ones. Modeling can be used to optimize operating conditions, analyze the effects of process variables, and enhance unit performance. Instituto Mexicano del Petroleo has developed a model of the catalytic reforming process that accurately predicts reformate composition at the higher-severity conditions at which new reformers are being designed. The new AA model is more accurate than previous proposals because it takes into account the effects of temperature and pressure on the rate constants of each chemical reaction.
An Accurate, Simplified Model of Intrabeam Scattering
Bane, Karl LF
2002-05-23
Beginning with the general Bjorken-Mtingwa solution for intrabeam scattering (IBS) we derive an accurate, greatly simplified model of IBS, valid for high energy beams in normal storage ring lattices. In addition, we show that, under the same conditions, a modified version of Piwinski's IBS formulation (where η_{x,y}²/β_{x,y} has been replaced by H_{x,y}) asymptotically approaches the result of Bjorken-Mtingwa.
NASA Astrophysics Data System (ADS)
Rosati, Dora P.; Molina, Chai; Earn, David J. D.
2015-12-01
Human behaviour and disease dynamics can greatly influence each other. In particular, people often engage in self-protective behaviours that affect epidemic patterns (e.g., vaccination, use of barrier precautions, isolation, etc.). Self-protective measures usually have a mitigating effect on an epidemic [16], but can in principle have negative impacts at the population level [12,15,18]. The structure of underlying social and biological contact networks can significantly influence the specific ways in which population-level effects are manifested. Using a different contact network in a disease dynamics model-keeping all else equal-can yield very different epidemic patterns. For example, it has been shown that when individuals imitate their neighbours' vaccination decisions with some probability, this can lead to herd immunity in some networks [9], yet for other networks it can preserve clusters of susceptible individuals that can drive further outbreaks of infectious disease [12].
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
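The two-step architecture described above, in which a cheap model prunes the design space before an expensive model ranks the survivors, can be sketched generically. The part library, both scoring functions, and the beam width below are hypothetical stand-ins, not the paper's circuit models.

```python
import itertools

# Hypothetical part library: each of 3 circuit slots picks one part.
library = {f"p{i}": float(i) for i in range(1, 7)}

def coarse_score(design):
    """Fast, low-complexity surrogate model (lower is better)."""
    return sum(library[p] for p in design)

def fine_score(design):
    """Stand-in for the expensive nonlinear circuit model."""
    return coarse_score(design) + 0.1 * len(set(design))

# Step 1: prune the full space with the cheap model, keeping 5 branches.
candidates = sorted(itertools.combinations(library, 3), key=coarse_score)[:5]
# Step 2: rank only the survivors with the expensive model.
best = min(candidates, key=fine_score)
```

The speed-up comes from calling `fine_score` only 5 times instead of on all 20 three-part combinations; a branch-and-bound variant would additionally discard branches whose coarse bound already exceeds the incumbent.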
Accurate modelling of unsteady flows in collapsible tubes.
Marchandise, Emilie; Flaud, Patrice
2010-01-01
The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers understand physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference with cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone unless a limiting procedure is introduced. We show that our second-order RK-DG method equipped with an approximate Roe's Riemann solver and a slope-limiting procedure allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference methods or finite volumes. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in normal and pathological subjects. We compare our results with experimental simulations and discuss the sensitivity to parameters of our model.
More-Accurate Model of Flows in Rocket Injectors
NASA Technical Reports Server (NTRS)
Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford
2011-01-01
An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.
A quick accurate model of nozzle backflow
NASA Technical Reports Server (NTRS)
Kuharski, R. A.
1991-01-01
Backflow from nozzles is a major source of contamination on spacecraft. If the craft contains any exposed high voltages, the neutral density produced by the nozzles in the vicinity of the craft needs to be known in order to assess the possibility of Paschen breakdown or the probability of sheath ionization around a region of the craft that collects electrons for the plasma. A model for backflow has been developed for incorporation into the Environment-Power System Analysis Tool (EPSAT) which quickly estimates both the magnitude of the backflow and the species makeup of the flow. By combining the backflow model with the Simons (1972) model for continuum flow it is possible to quickly estimate the density of each species from a nozzle at any position in space. The model requires only a few physical parameters of the nozzle and the gas as inputs and is therefore ideal for engineering applications.
Nakhleh, Luay
2014-03-12
I proposed to develop computationally efficient tools for accurate detection and reconstruction of microbes' complex evolutionary mechanisms, thus enabling rapid and accurate annotation, analysis and understanding of their genomes. To achieve this goal, I proposed to address three aspects. (1) Mathematical modeling. A major challenge facing the accurate detection of HGT is that of distinguishing between these two events on the one hand and other events that have similar "effects." I proposed to develop a novel mathematical approach for distinguishing among these events. Further, I proposed to develop a set of novel optimization criteria for the evolutionary analysis of microbial genomes in the presence of these complex evolutionary events. (2) Algorithm design. In this aspect of the project, I proposed to develop an array of efficient and accurate algorithms for analyzing microbial genomes based on the formulated optimization criteria. Further, I proposed to test the viability of the criteria and the accuracy of the algorithms in an experimental setting using both synthetic as well as biological data. (3) Software development. I proposed the final outcome to be a suite of software tools which implements the mathematical models as well as the algorithms developed.
Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics
Noecker, Cecilia; Schaefer, Krista; Zaccheo, Kelly; Yang, Yiding; Day, Judy; Ganusov, Vitaly V.
2015-01-01
Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV including vaccines and antiretroviral prophylaxis target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have been rarely compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral dose. These results
Accurate spectral modeling for infrared radiation
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Gupta, S. K.
1977-01-01
Direct line-by-line integration and quasi-random band model techniques are employed to calculate the spectral transmittance and total band absorptance of the 4.7 micron CO, 4.3 micron CO2, 15 micron CO2, and 5.35 micron NO bands. Results are obtained for different pressures, temperatures, and path lengths. These are compared with available theoretical and experimental investigations. For each gas, extensive tabulations of results are presented for comparative purposes. In almost all cases, line-by-line results are found to be in excellent agreement with the experimental values. The ranges of validity of the other models and correlations are discussed.
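A minimal line-by-line calculation in the spirit of this abstract sums individual line profiles into a spectral absorption coefficient and applies Beer's law. The Lorentz profile is a common choice at moderate pressures; the line positions, strengths, and half-widths below are invented for illustration, not taken from the CO band data.

```python
import numpy as np

def transmittance(nu, lines, path_cm):
    """Line-by-line spectral transmittance on a wavenumber grid:
    accumulate Lorentz profiles into an absorption coefficient k(nu),
    then apply Beer's law T = exp(-k * L)."""
    k = np.zeros_like(nu)
    for nu0, strength, gamma in lines:  # center (cm^-1), strength, half-width
        # Lorentz profile, normalized so it integrates to `strength`
        k += strength * (gamma / np.pi) / ((nu - nu0) ** 2 + gamma ** 2)
    return np.exp(-k * path_cm)

nu = np.linspace(2100.0, 2200.0, 2001)              # wavenumber grid, cm^-1
lines = [(2143.0, 1.2, 0.08), (2150.5, 0.6, 0.08)]  # hypothetical lines
tau = transmittance(nu, lines, path_cm=5.0)
```

Band absorptance then follows by integrating (1 − tau) over the band; a quasi-random band model replaces the explicit line sum with a statistical description of line spacing and strength.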
Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.
Wu, Tim; Hung, Alice; Mithraratne, Kumar
2014-11-01
This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial musculo-aponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, eyelids, and between the superficial soft tissue continuum and deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.
Accurate Wind Characterization in Complex Terrain Using the Immersed Boundary Method
Lundquist, K A; Chow, F K; Lundquist, J K; Kosovic, B
2009-09-30
This paper describes an immersed boundary method (IBM) that facilitates the explicit resolution of complex terrain within the Weather Research and Forecasting (WRF) model. Two different interpolation methods, trilinear and inverse distance weighting, are used at the core of the IBM algorithm. Functional aspects of the algorithm's implementation and the accuracy of results are considered. Simulations of flow over a three-dimensional hill with shallow terrain slopes are performed with WRF's native terrain-following coordinate and with both IB methods. Comparisons of flow fields from the three simulations show excellent agreement, indicating that both IB methods produce accurate results. However, when ease of implementation is considered, inverse distance weighting is superior. Furthermore, inverse distance weighting is shown to be more adept at handling highly complex urban terrain, where the trilinear interpolation algorithm breaks down. This capability is demonstrated by using the inverse distance weighting core of the IBM to model atmospheric flow in downtown Oklahoma City.
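The inverse distance weighting the authors favor is a simple scattered-data interpolant. The sketch below shows the generic scheme in two dimensions with made-up sample points; the WRF IBM applies the same idea in three dimensions when reconstructing values at ghost points near the terrain surface.

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation: each sample contributes
    with weight 1/d^power, where d is its distance to the query point."""
    d = np.linalg.norm(points - query, axis=1)
    if d.min() < eps:                 # query coincides with a sample point
        return float(values[int(d.argmin())])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
u = idw(pts, vals, np.array([0.5, 0.5]))  # equidistant query -> mean of values
```

Unlike trilinear interpolation, IDW needs no structured stencil of eight valid neighbors, which is why it degrades more gracefully when terrain cuts irregularly through the grid.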
Sporadic meteoroid complex: Modeling
NASA Astrophysics Data System (ADS)
Andreev, V.
2014-07-01
The distribution of the sporadic meteoroid flux density over the celestial sphere is the common form of representation of the meteoroid distribution in the vicinity of the Earth's orbit. The flux density of sporadic meteor bodies is determined as Q(V, e, f) = Q_0 P_e(V) P(e, f), where V is the meteoroid velocity, (e, f) are the radiant coordinates, Q_0 is the meteoroid flux over the whole celestial sphere, P_e(V) is the conditional velocity distribution and P(e, f) is the radiant distribution over the celestial sphere. The sporadic meteoroid complex model is analytical and based on heliocentric velocity and radiant distributions. The multi-mode character of the heliocentric velocity and radiant distributions follows from the analysis of meteor observational data. This fact points to a complicated structure of the sporadic meteoroid complex, a consequence of the plurality of parent bodies and of meteoroid origin mechanisms. For that reason, and with the goal of more accurate modelling of the velocity and radiant distributions, the meteoroid complex was divided into four groups. As the classifying parameters to determine the membership of a meteoroid in a group, we adopt the Tisserand invariant relative to Jupiter, T_J = a_J/a + 2 √[(a/a_J)(1 − e²)] cos i, and the meteoroid orbit inclination i. Two meteoroid groups relate to long-period and short-period comets. One meteoroid group is related to asteroids. The relationship of the last, fourth group is a problematic one. We then construct models of the radiant and velocity distributions for each group. The analytical model for the whole sporadic meteoroid complex is the sum of the models for the four groups.
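The Tisserand-based classification can be sketched directly. Note the hedges: the code below uses the conventional Tisserand form and the textbook threshold values (T_J > 3 asteroidal, 2 < T_J ≤ 3 Jupiter-family, T_J ≤ 2 long-period), which are not necessarily the exact group boundaries the author adopted, and here e denotes orbital eccentricity, not the radiant coordinate of the flux formula.

```python
import math

def tisserand(a, e, inc_deg, a_j=5.204):
    """Tisserand parameter with respect to Jupiter (a, a_j in AU)."""
    return a_j / a + 2.0 * math.cos(math.radians(inc_deg)) * math.sqrt(
        (a / a_j) * (1.0 - e**2))

def classify(a, e, inc_deg):
    """Crude population tag in the spirit of the paper's grouping,
    using the conventional (not the author's) T_J thresholds."""
    t = tisserand(a, e, inc_deg)
    if t > 3.0:
        return "asteroidal"
    if t > 2.0:
        return "short-period (Jupiter-family) cometary"
    return "long-period cometary"

kind = classify(a=3.1, e=0.82, inc_deg=12.0)
```

A fourth, problematic group, as in the abstract, would require additional discriminants beyond T_J and inclination alone.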
NASA Technical Reports Server (NTRS)
Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.
2014-01-01
Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (at 320, 340, 380, 412, 443 and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, the SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This
An Accurate and Dynamic Computer Graphics Muscle Model
NASA Technical Reports Server (NTRS)
Levine, David Asher
1997-01-01
A computer-based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.
Accurate in situ measurement of complex refractive index and particle size in intralipid emulsions.
Dong, Miao L; Goyal, Kashika G; Worth, Bradley W; Makkar, Sorab S; Calhoun, William R; Bali, Lalit M; Bali, Samir
2013-08-01
A first accurate measurement of the complex refractive index in an intralipid emulsion is demonstrated, and thereby the average scatterer particle size using standard Mie scattering calculations is extracted. Our method is based on measurement and modeling of the reflectance of a divergent laser beam from the sample surface. In the absence of any definitive reference data for the complex refractive index or particle size in highly turbid intralipid emulsions, we base our claim of accuracy on the fact that our work offers several critically important advantages over previously reported attempts. First, our measurements are in situ in the sense that they do not require any sample dilution, thus eliminating dilution errors. Second, our theoretical model does not employ any fitting parameters other than the two quantities we seek to determine, i.e., the real and imaginary parts of the refractive index, thus eliminating ambiguities arising from multiple extraneous fitting parameters. Third, we fit the entire reflectance-versus-incident-angle data curve instead of focusing on only the critical angle region, which is just a small subset of the data. Finally, despite our use of highly scattering opaque samples, our experiment uniquely satisfies a key assumption behind the Mie scattering formalism, namely, no multiple scattering occurs. Further proof of our method's validity is given by the fact that our measured particle size finds good agreement with the value obtained by dynamic light scattering.
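The reflectance-versus-incident-angle modeling described above rests on the Fresnel equations evaluated with a complex refractive index. The sketch below computes the s-polarization intensity reflectance at a glass/sample interface; the index values are hypothetical placeholders, not the paper's measured intralipid results, and the full method would fit such a curve over all angles to recover n and k.

```python
import cmath
import math

def fresnel_reflectance_s(theta_i_deg, n1, n2):
    """Fresnel intensity reflectance (s-polarization) at an interface
    between a transparent medium (n1, real) and an absorbing sample
    with complex index n2 = n + i*k."""
    th = math.radians(theta_i_deg)
    cos_i = math.cos(th)
    sin_i = math.sin(th)
    # Snell's law with a complex transmitted-angle cosine
    cos_t = cmath.sqrt(1.0 - (n1 * sin_i / n2) ** 2)
    r = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    return abs(r) ** 2

# Hypothetical values: glass against a weakly absorbing turbid emulsion,
# probed beyond the critical angle where absorption rounds off the curve
R = fresnel_reflectance_s(70.0, n1=1.515, n2=complex(1.36, 0.005))
```

Near and beyond the critical angle, the imaginary part k softens the otherwise sharp transition to total internal reflection, which is what makes the full-curve fit sensitive to both parts of the index.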
Local Debonding and Fiber Breakage in Composite Materials Modeled Accurately
NASA Technical Reports Server (NTRS)
Bednarcyk, Brett A.; Arnold, Steven M.
2001-01-01
A prerequisite for full utilization of composite materials in aerospace components is accurate design and life prediction tools that enable the assessment of component performance and reliability. Such tools assist both structural analysts, who design and optimize structures composed of composite materials, and materials scientists who design and optimize the composite materials themselves. NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) software package (http://www.grc.nasa.gov/WWW/LPB/mac) addresses this need for composite design and life prediction tools by providing a widely applicable and accurate approach to modeling composite materials. Furthermore, MAC/GMC serves as a platform for incorporating new local models and capabilities that are under development at NASA, thus enabling these new capabilities to progress rapidly to a stage in which they can be employed by the code's end users.
Accurate Mass Assignment of Native Protein Complexes Detected by Electrospray Mass Spectrometry
Liepold, Lars O.; Oltrogge, Luke M.; Suci, Peter; Douglas, Trevor; Young, Mark J.
2009-01-01
Correct charge state assignment is crucial to assigning an accurate mass to supramolecular complexes analyzed by electrospray mass spectrometry. Conventional charge state assignment techniques fall short of reliably and unambiguously predicting the correct charge state for many supramolecular complexes. We provide an explanation of the shortcomings of the conventional techniques and have developed a robust charge state assignment method that is applicable to all spectra. PMID:19103497
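For context on why charge state assignment matters, the textbook method infers the charge from two adjacent peaks of the same species: with peaks at m/z p1 (charge z) and p2 (charge z+1), z = (p2 − m_p)/(p1 − p2). The sketch below implements that conventional adjacent-peak method, not the authors' refined procedure, and the peak values are fabricated for a hypothetical ~100 kDa complex.

```python
def charge_and_mass(mz_low_charge, mz_high_charge, proton=1.00728):
    """Infer charge state and neutral mass from two adjacent ESI peaks
    of the same species (mz_low_charge is the peak at the LOWER charge,
    hence the higher m/z). Conventional adjacent-peak method."""
    z = round((mz_high_charge - proton) / (mz_low_charge - mz_high_charge))
    mass = z * (mz_low_charge - proton)
    return z, mass

# Fabricated peaks for a ~100 kDa complex observed at charges 24+ and 25+
z, M = charge_and_mass(4167.67, 4001.01)
```

The abstract's point is that for large native complexes the peaks are broad and sparse, so this simple arithmetic becomes ambiguous, motivating a more robust assignment method.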
An Accurate Temperature Correction Model for Thermocouple Hygrometers 1
Savage, Michael J.; Cass, Alfred; de Jager, James M.
1982-01-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38°C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25°C, if the calibration slopes are corrected for temperature. PMID:16662241
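The two-temperature calibration idea can be illustrated with a linear correction of the calibration slope between the two calibration temperatures. This is only a sketch of the general approach: the linear form, the sensitivity values, and the voltage are invented, not the paper's radius-based theoretical model.

```python
def corrected_slope(T, T1, s1, T2, s2):
    """Linearly interpolate/extrapolate a hygrometer calibration slope
    (sensitivity) from calibrations at two temperatures T1 and T2.
    An illustrative stand-in for the authors' theoretical correction."""
    return s1 + (s2 - s1) * (T - T1) / (T2 - T1)

def water_potential(voltage_uV, T, T1=15.0, s1=0.47, T2=35.0, s2=0.56):
    """Convert thermocouple output (microvolts) to water potential (MPa)
    using the temperature-corrected sensitivity; numbers are made up."""
    return voltage_uV / corrected_slope(T, T1, s1, T2, s2)

psi = water_potential(voltage_uV=-2.1, T=25.0)
```

The abstract's conclusion is that with such a temperature correction in hand, a single-temperature calibration (e.g., at 25°C) suffices for dewpoint hygrometers.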
An accurate temperature correction model for thermocouple hygrometers.
Savage, M J; Cass, A; de Jager, J M
1982-02-01
Numerous water relation studies have used thermocouple hygrometers routinely. However, the accurate temperature correction of hygrometer calibration curve slopes seems to have been largely neglected in both psychrometric and dewpoint techniques. In the case of thermocouple psychrometers, two temperature correction models are proposed, each based on measurement of the thermojunction radius and calculation of the theoretical voltage sensitivity to changes in water potential. The first model relies on calibration at a single temperature and the second at two temperatures. Both these models were more accurate than the temperature correction models currently in use for four psychrometers calibrated over a range of temperatures (15-38 degrees C). The model based on calibration at two temperatures is superior to that based on only one calibration. The model proposed for dewpoint hygrometers is similar to that for psychrometers. It is based on the theoretical voltage sensitivity to changes in water potential. Comparison with empirical data from three dewpoint hygrometers calibrated at four different temperatures indicates that these instruments need only be calibrated at, e.g. 25 degrees C, if the calibration slopes are corrected for temperature.
Accurate refinement of docked protein complexes using evolutionary information and deep learning.
Akbal-Delibas, Bahar; Farhoodi, Roshanak; Pomplun, Marc; Haspel, Nurit
2016-06-01
One of the major challenges for protein docking methods is to accurately discriminate native-like structures from false positives. Docking methods are often inaccurate and the results have to be refined and re-ranked to obtain native-like complexes and remove outliers. In a previous work, we introduced AccuRefiner, a machine learning based tool for refining protein-protein complexes. Given a docked complex, the refinement tool produces a small set of refined versions of the input complex, with lower root-mean-square deviation (RMSD) of atomic positions with respect to the native structure. The method employs a unique ranking tool that accurately predicts the RMSD of docked complexes with respect to the native structure. In this work, we use a deep learning network with a similar set of features and five layers. We show that a properly trained deep learning network can accurately predict the RMSD of a docked complex with a 1.40 Å error margin on average, by approximating the complex relationship between a wide set of scoring function terms and the RMSD of a docked structure. The network was trained on 35,000 unbound docking complexes generated by RosettaDock. We tested our method on 25 putative docked complexes, also produced by RosettaDock, for five proteins that were not included in the training data. The results demonstrate that the high accuracy of the ranking tool enables AccuRefiner to consistently choose refinement candidates with lower RMSD values than the coarsely docked input structures.
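The shape of such a predictor can be illustrated with a five-layer feed-forward regressor mapping scoring-function terms to a predicted RMSD. This is only a structural sketch with random (untrained) weights; the layer sizes and feature count are hypothetical, not the authors' architecture.

```python
import numpy as np

# Illustrative sketch (not the trained AccuRefiner network): a five-layer
# feed-forward regressor from scoring-function terms to a predicted RMSD.
rng = np.random.default_rng(0)

def init_layer(n_in, n_out):
    # He-style initialization for ReLU layers
    return rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out)), np.zeros(n_out)

# 20 hypothetical score terms in, one RMSD value out, five layers total.
layers = [init_layer(a, b) for a, b in [(20, 64), (64, 64), (64, 32), (32, 16), (16, 1)]]

def predict_rmsd(features):
    """Forward pass: ReLU hidden layers, linear output (predicted RMSD, A)."""
    x = features
    for W, b in layers[:-1]:
        x = np.maximum(0.0, x @ W + b)
    W, b = layers[-1]
    return (x @ W + b).ravel()

scores = rng.normal(size=(5, 20))   # 5 docked complexes, 20 score terms each
print(predict_rmsd(scores).shape)   # one RMSD prediction per complex
```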
Accurate pressure gradient calculations in hydrostatic atmospheric models
NASA Technical Reports Server (NTRS)
Carroll, John J.; Mendez-Nunez, Luis R.; Tanrikulu, Saffet
1987-01-01
A method for the accurate calculation of the horizontal pressure gradient acceleration in hydrostatic atmospheric models is presented which is especially useful in situations where the isothermal surfaces are not parallel to the vertical coordinate surfaces. The present method is shown to be exact if the potential temperature lapse rate is constant between the vertical pressure integration limits. The technique is applied to both the integration of the hydrostatic equation and the computation of the slope correction term in the horizontal pressure gradient. A fixed vertical grid and a dynamic grid defined by the significant levels in the vertical temperature distribution are employed.
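The role of the vertical integration can be illustrated with a toy hydrostatic calculation. Note this sketch assumes a constant temperature lapse rate (not the paper's constant potential-temperature lapse rate) purely so that a closed-form solution is available to check the numerical integration against.

```python
# Toy hydrostatic integration check (not the paper's scheme): for a constant
# temperature lapse rate gamma, dp/dz = -p*g/(R*T) with T(z) = T0 - gamma*z
# has the closed form p(z) = p0 * (T(z)/T0) ** (g/(R*gamma)).
g, R = 9.80665, 287.05                     # gravity (m/s^2), dry-air gas constant
p0, T0, gamma = 101325.0, 288.15, 0.0065   # surface pressure/temperature, lapse rate

def p_analytic(z):
    return p0 * ((T0 - gamma * z) / T0) ** (g / (R * gamma))

def p_numeric(z, n=20000):
    """Integrate dp/dz = -p*g/(R*T) upward with small Euler steps."""
    dz, p, zz = z / n, p0, 0.0
    for _ in range(n):
        p += -p * g / (R * (T0 - gamma * zz)) * dz
        zz += dz
    return p

rel_err = abs(p_numeric(5000.0) - p_analytic(5000.0)) / p_analytic(5000.0)
print(rel_err)  # small relative error for fine steps
```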
Mouse models of human AML accurately predict chemotherapy response
Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.
2009-01-01
The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to conventional therapy that mirror clinical experience. Specifically, murine leukemias expressing the AML1/ETO fusion oncoprotein, associated with a favorable prognosis in patients, show a dramatic response to induction chemotherapy owing to robust activation of the p53 tumor suppressor network. Conversely, murine leukemias expressing MLL fusion proteins, associated with a dismal prognosis in patients, are drug-resistant due to an attenuated p53 response. Our studies highlight the importance of genetic information in guiding the treatment of human AML, functionally establish the p53 network as a central determinant of chemotherapy response in AML, and demonstrate that genetically engineered mouse models of human cancer can accurately predict therapy response in patients. PMID:19339691
Accurate electronic-structure description of Mn complexes: a GGA+U approach
NASA Astrophysics Data System (ADS)
Li, Elise Y.; Kulik, Heather; Marzari, Nicola
2008-03-01
Conventional density-functional approaches often fail to offer an accurate description of the spin-resolved energetics in transition metal complexes. We will focus here on Mn complexes, where many aspects of the molecular structure and the reaction mechanisms are still unresolved - most notably in the oxygen-evolving complex (OEC) of photosystem II and the manganese catalase (MC). We apply a self-consistent GGA + U approach [1], originally designed within the DFT framework for the treatment of strongly correlated materials, to describe the geometric, electronic and magnetic properties of various manganese oxide complexes, finding very good agreement with higher-order ab-initio calculations. In particular, the different oxidation states of dinuclear systems containing the [Mn2O2]^n+ (n = 2, 3, 4) core are investigated, in order to mimic the basic face unit of the OEC complex. [1] H. J. Kulik, M. Cococcioni, D. A. Scherlis, N. Marzari, Phys. Rev. Lett., 2006, 97, 103001
Turbulence Models for Accurate Aerothermal Prediction in Hypersonic Flows
NASA Astrophysics Data System (ADS)
Zhang, Xiang-Hong; Wu, Yi-Zao; Wang, Jiang-Feng
Accurate description of the aerodynamic and aerothermal environment is crucial to the integrated design and optimization of high-performance hypersonic vehicles. In the simulation of the aerothermal environment, the effect of viscosity is crucial, and turbulence modeling remains a major source of uncertainty in the computational prediction of aerodynamic forces and heating. In this paper, three turbulence models were studied: the one-equation eddy viscosity transport model of Spalart-Allmaras, the Wilcox k-ω model and the Menter SST model. For the k-ω and SST models, the compressibility correction, pressure dilatation and low-Reynolds-number correction were considered, and the influence of these corrections on flow properties was assessed by comparison with results obtained without them. The emphasis is on evaluating the turbulence models' heat-transfer predictions across a range of hypersonic flows against experimental data, which will enable establishing a factor of safety for the design of thermal protection systems for hypersonic vehicles.
Technology Transfer Automated Retrieval System (TEKTRAN)
Adsorption-desorption reactions are important processes that affect the transport of contaminants in the environment. Surface complexation models are chemical models that can account for the effects of variable chemical conditions, such as pH, on adsorption reactions. These models define specific ...
The utility of accurate mass and LC elution time information in the analysis of complex proteomes
Norbeck, Angela D.; Monroe, Matthew E.; Adkins, Joshua N.; Anderson, Kevin K.; Daly, Don S.; Smith, Richard D.
2005-08-01
Theoretical tryptic digests of all predicted proteins from the genomes of three organisms of varying complexity were evaluated for specificity and possible utility of combined peptide accurate mass and predicted LC normalized elution time (NET) information. The uniqueness of each peptide was evaluated using its combined mass (+/- 5 ppm and 1 ppm) and NET value (no constraint, +/- 0.05 and 0.01 on a 0-1 NET scale). The set of peptides both underestimates actual biological complexity due to the lack of specific modifications, and overestimates the expected complexity since many proteins will not be present in the sample or observable on the mass spectrometer because of dynamic range limitations. Once a peptide is identified from an LC-MS/MS experiment, its mass and elution time constitute a unique fingerprint for that peptide. The uniqueness of that fingerprint in comparison to that for the other peptides present is indicative of the ability to confidently identify that peptide based on accurate mass and NET measurements. These measurements can be made using HPLC coupled with high resolution MS in a high-throughput manner. Results show that for organisms with comparatively small proteomes, such as Deinococcus radiodurans, modest mass and elution time accuracies are generally adequate for peptide identifications. For more complex proteomes, increasingly accurate measurements are required. However, the majority of proteins should be uniquely identifiable by using LC-MS with mass accuracies within +/- 1 ppm and elution time measurements within +/- 0.01 NET.
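The uniqueness test described above amounts to a tolerance-window check on each (mass, NET) pair. A minimal sketch, using the tightest tolerances evaluated in the study (the peptide values themselves are made up):

```python
# A peptide's (accurate mass, normalized elution time) pair is a usable
# fingerprint if no other peptide falls inside both tolerance windows.

def is_unique(peptide, others, ppm=1.0, net_tol=0.01):
    mass, net = peptide
    for m, n in others:
        if abs(m - mass) <= mass * ppm * 1e-6 and abs(n - net) <= net_tol:
            return False   # another peptide shares this fingerprint
    return True

# Hypothetical peptides: the third collides with the first in both mass and NET.
peptides = [(1479.7539, 0.42), (1479.7543, 0.61), (1479.7540, 0.421)]
print(is_unique(peptides[0], peptides[1:]))  # False
```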
Granata, Daniele; Carnevale, Vincenzo
2016-01-01
The collective behavior of a large number of degrees of freedom can often be described by a handful of variables. This observation justifies the use of dimensionality reduction approaches to model complex systems and motivates the search for a small set of relevant “collective” variables. Here, we analyze this issue by focusing on the optimal number of variables needed to capture the salient features of a generic dataset and develop a novel estimator for the intrinsic dimension (ID). By approximating geodesics with minimum distance paths on a graph, we analyze the distribution of pairwise distances around the maximum and exploit its dependency on the dimensionality to obtain an ID estimate. We show that the estimator does not depend on the shape of the intrinsic manifold and is highly accurate, even for exceedingly small sample sizes. We apply the method to several relevant datasets from image recognition databases and protein multiple sequence alignments and discuss possible interpretations for the estimated dimension in light of the correlations among input variables and of the information content of the dataset. PMID:27510265
Atmospheric modeling in complex terrain
Williams, M. D.; Streit, G. E.
1990-05-01
Los Alamos investigators have developed several models which are relevant to modeling Mexico City air quality. The collection of models includes: meteorological models, dispersion models, air chemistry models, and visibility models. The models have been applied in several different contexts and were developed primarily to address the complexities posed by complex terrain. HOTMAC is the meteorological model, which requires terrain and limited meteorological information; it incorporates a relatively complete description of atmospheric physics to give good descriptions of the wind, temperature, and turbulence fields. RAPTAD is a dispersion code which uses random particle transport and kernel representations to efficiently provide accurate pollutant concentration fields. RAPTAD provides a much better description of tracer dispersion than do Gaussian puff models, which fail to properly represent the effects of the wind profile near the surface. ATMOS and LAVM treat photochemistry and visibility, respectively. ATMOS has been used to describe wintertime chemistry of the Denver brown cloud, providing reasonable agreement with measurements for the high altitude of Denver. LAVM can provide both numerical indices and pictorial representations of the visibility effects of pollutants. 15 refs., 74 figs.
Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations
NASA Astrophysics Data System (ADS)
Bowman, J.; Jensen, S.; McDonald, Mark
2010-10-01
High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.
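A minimal version of such a time-sequence calculation might look like the following. The part-load efficiency curve, inverter rating, and power profile are all hypothetical numbers for illustration, not values from the paper.

```python
# Illustrative time-sequence energy sketch: AC energy is the sum over time
# steps of DC power times a load-dependent inverter efficiency, so both
# clipping and part-load efficiency shape the annual total.

def inverter_eff(load_frac):
    """Hypothetical part-load efficiency curve: poor at light load, ~95% near full."""
    if load_frac <= 0.0:
        return 0.0
    return 0.97 - 0.02 / (load_frac + 0.05)

def ac_energy(dc_powers_kw, rating_kw, dt_hours=1.0):
    total = 0.0
    for p in dc_powers_kw:
        p = min(p, rating_kw)                      # inverter clipping
        total += p * inverter_eff(p / rating_kw) * dt_hours
    return total                                   # kWh

profile = [0.0, 120.0, 450.0, 600.0, 520.0, 80.0]  # hourly DC power, kW
print(ac_energy(profile, rating_kw=500.0))
```

In a real design study this loop would run over a year of site irradiance and temperature data, with wiring losses, soiling, and stowing applied before the inverter stage.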
Accurate SHAPE-directed RNA secondary structure modeling, including pseudoknots
Hajdin, Christine E.; Bellaousov, Stanislav; Huggins, Wayne; Leonard, Christopher W.; Mathews, David H.; Weeks, Kevin M.
2013-01-01
A pseudoknot forms in an RNA when nucleotides in a loop pair with a region outside the helices that close the loop. Pseudoknots occur relatively rarely in RNA but are highly overrepresented in functionally critical motifs in large catalytic RNAs, in riboswitches, and in regulatory elements of viruses. Pseudoknots are usually excluded from RNA structure prediction algorithms. When included, these pairings are difficult to model accurately, especially in large RNAs, because allowing this structure dramatically increases the number of possible incorrect folds and because it is difficult to search the fold space for an optimal structure. We have developed a concise secondary structure modeling approach that combines SHAPE (selective 2′-hydroxyl acylation analyzed by primer extension) experimental chemical probing information and a simple, but robust, energy model for the entropic cost of single pseudoknot formation. Structures are predicted with iterative refinement, using a dynamic programming algorithm. This melded experimental and thermodynamic energy function predicted the secondary structures and the pseudoknots for a set of 21 challenging RNAs of known structure ranging in size from 34 to 530 nt. On average, 93% of known base pairs were predicted, and all pseudoknots in well-folded RNAs were identified. PMID:23503844
Towards Accurate Molecular Modeling of Plastic Bonded Explosives
NASA Astrophysics Data System (ADS)
Chantawansri, T. L.; Andzelm, J.; Taylor, D.; Byrd, E.; Rice, B.
2010-03-01
There is substantial interest in identifying the controlling factors that influence the susceptibility of polymer bonded explosives (PBXs) to accidental initiation. Numerous Molecular Dynamics (MD) simulations of PBXs using the COMPASS force field have been reported in recent years, where the validity of the force field in modeling the solid EM fill has been judged solely on its ability to reproduce lattice parameters, which is an insufficient metric. Performance of the COMPASS force field in modeling EMs and the polymeric binder has been assessed by calculating structural, thermal, and mechanical properties, where only fair agreement with experimental data is obtained. We performed MD simulations using the COMPASS force field for the polymer binder hydroxyl-terminated polybutadiene and five EMs: cyclotrimethylenetrinitramine, 1,3,5,7-tetranitro-1,3,5,7-tetra-azacyclo-octane, 2,4,6,8,10,12-hexanitrohexaazaisowurtzitane, 2,4,6-trinitro-1,3,5-benzenetriamine, and pentaerythritol tetranitrate. Predicted EM crystallographic and molecular structural parameters, as well as calculated properties for the binder, will be compared with experimental results for different simulation conditions. We also present novel simulation protocols, which improve agreement between experimental and computational results, thus leading to the accurate modeling of PBXs.
NASA Astrophysics Data System (ADS)
Cary, John R.; Abell, D.; Amundson, J.; Bruhwiler, D. L.; Busby, R.; Carlsson, J. A.; Dimitrov, D. A.; Kashdan, E.; Messmer, P.; Nieter, C.; Smithe, D. N.; Spentzouris, P.; Stoltz, P.; Trines, R. M.; Wang, H.; Werner, G. R.
2006-09-01
As the size and cost of particle accelerators escalate, high-performance computing plays an increasingly important role; optimization through accurate, detailed computer modeling increases performance and reduces costs. But consequently, computer simulations face enormous challenges. Early approximation methods, such as expansions in distance from the design orbit, were unable to supply detailed accurate results, such as in the computation of wake fields in complex cavities. Since the advent of message-passing supercomputers with thousands of processors, earlier approximations are no longer necessary, and it is now possible to compute wake fields, the effects of dampers, and self-consistent dynamics in cavities accurately. In this environment, the focus has shifted towards the development and implementation of algorithms that scale to large numbers of processors. So-called charge-conserving algorithms evolve the electromagnetic fields without the need for any global solves (which are difficult to scale up to many processors). Using cut-cell (or embedded) boundaries, these algorithms can simulate the fields in complex accelerator cavities with curved walls. New implicit algorithms, which are stable for any time-step, conserve charge as well, allowing faster simulation of structures with details small compared to the characteristic wavelength. These algorithmic and computational advances have been implemented in the VORPAL7 Framework, a flexible, object-oriented, massively parallel computational application that allows run-time assembly of algorithms and objects, thus composing an application on the fly.
2015-01-01
Complexity and animal models
...decrease W/Wmax, thereby maintaining the relationship between variability and W/Wmax. doi:10.1016/j.jcrc.2010.05.012 ...may not be possible during mass casualty and natural disaster situations or may need to be postponed during combat to avoid danger to the medic's life.
Personalized Orthodontic Accurate Tooth Arrangement System with Complete Teeth Model.
Cheng, Cheng; Cheng, Xiaosheng; Dai, Ning; Liu, Yi; Fan, Qilei; Hou, Yulin; Jiang, Xiaotong
2015-09-01
Accuracy, validity, and the lack of positional information relating dental root to jaw are key problems in tooth arrangement technology. This paper describes a newly developed virtual, personalized and accurate tooth arrangement system based on complete information about the dental root and skull. Firstly, a feature constraint database of a 3D teeth model is established. Secondly, for computed simulation of tooth movement, the reference planes and lines are defined by the anatomical reference points. The matching mathematical model of the teeth pattern and the principle of rigid-body pose transformation are fully utilized, and the positional relation between dental root and alveolar bone is considered during the design process. Finally, the relative pose relationships among the teeth are optimized using the object mover, and a personalized therapeutic schedule is formulated. Experimental results show that the virtual tooth arrangement system arranges abnormal teeth well and is sufficiently flexible, and the resulting positional relation between root and jaw is favorable. This newly developed system is characterized by high-speed processing and quantitative evaluation of the amount of 3D movement of an individual tooth.
Winters, Taylor M; Takahashi, Mitsuhiko; Lieber, Richard L; Ward, Samuel R
2011-01-04
An a priori model of the whole active muscle length-tension relationship was constructed utilizing only myofilament length and serial sarcomere number for rabbit tibialis anterior (TA), extensor digitorum longus (EDL), and extensor digitorum II (EDII) muscles. Passive tension was modeled with a two-element Hill-type model. Experimental length-tension relations were then measured for each of these muscles and compared to predictions. The model was able to accurately capture the active-tension characteristics of experimentally-measured data for all muscles (ICC=0.88 ± 0.03). Despite their varied architecture, no differences in predicted versus experimental correlations were observed among muscles. In addition, the model demonstrated that excursion, quantified by full-width-at-half-maximum (FWHM) of the active length-tension relationship, scaled linearly (slope=0.68) with normalized muscle fiber length. Experimental and theoretical FWHM values agreed well with an intraclass correlation coefficient of 0.99 (p<0.001). In contrast to active tension, the passive tension model deviated from experimentally-measured values and thus, was not an accurate predictor of passive tension (ICC=0.70 ± 0.07). These data demonstrate that modeling muscle as a scaled sarcomere provides accurate active functional but not passive functional predictions for rabbit TA, EDL, and EDII muscles and call into question the need for more complex modeling assumptions often proposed.
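The active part of such a "scaled sarcomere" model reduces to a piecewise-linear force-length curve whose breakpoints follow from myofilament lengths. The breakpoint values below are classic illustrative numbers, not the rabbit-specific ones used in the study:

```python
# Piecewise-linear active length-tension sketch: ascending limb, plateau,
# descending limb, with breakpoints (um) set by myofilament overlap geometry.

def active_tension(sl, breakpoints=(1.3, 2.0, 2.25, 3.65)):
    """Normalized active tension at sarcomere length sl (um)."""
    a, b, c, d = breakpoints
    if sl <= a or sl >= d:
        return 0.0
    if sl < b:
        return (sl - a) / (b - a)      # ascending limb
    if sl <= c:
        return 1.0                     # plateau (optimal filament overlap)
    return (d - sl) / (d - c)          # descending limb

# The FWHM of this curve scales with the breakpoint spacing, which is why
# whole-muscle excursion scales with normalized fiber length when sarcomeres
# are assembled in series.
print(active_tension(2.1))  # 1.0
```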
Kang, Dongwan D.; Froula, Jeff; Egan, Rob; Wang, Zhong
2015-01-01
Grouping large genomic fragments assembled from shotgun metagenomic sequences to deconvolute complex microbial communities, or metagenome binning, enables the study of individual organisms and their interactions. Because of the complex nature of these communities, existing metagenome binning methods often miss a large number of microbial species. In addition, most of the tools are not scalable to large datasets. Here we introduce automated software called MetaBAT that integrates empirical probabilistic distances of genome abundance and tetranucleotide frequency for accurate metagenome binning. MetaBAT outperforms alternative methods in accuracy and computational efficiency on both synthetic and real metagenome datasets. Lastly, it automatically forms hundreds of high-quality genome bins on a very large assembly consisting of millions of contigs in a matter of hours on a single node. MetaBAT is open source software and available at https://bitbucket.org/berkeleylab/metabat.
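One of the two signals MetaBAT combines, tetranucleotide frequency (TNF), can be sketched as follows. The plain Euclidean distance here is a simplification of MetaBAT's empirical probabilistic distance, and the sequences are toy examples:

```python
from itertools import product

# Tetranucleotide frequency (TNF) vector of a contig, compared between
# contigs with a simple Euclidean distance (illustrative simplification).
KMERS = ["".join(p) for p in product("ACGT", repeat=4)]
INDEX = {k: i for i, k in enumerate(KMERS)}

def tnf(seq):
    counts = [0] * 256
    for i in range(len(seq) - 3):
        idx = INDEX.get(seq[i:i + 4])
        if idx is not None:            # skip windows containing N etc.
            counts[idx] += 1
    total = sum(counts) or 1
    return [c / total for c in counts]

def tnf_distance(a, b):
    va, vb = tnf(a), tnf(b)
    return sum((x - y) ** 2 for x, y in zip(va, vb)) ** 0.5

print(tnf_distance("ACGTACGTACGT", "ACGTACGTACGT"))  # 0.0
```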
NASA Astrophysics Data System (ADS)
Eaton, M.; Pearson, M.; Lee, W.; Pullin, R.
2015-07-01
The ability to accurately locate damage in any given structure is a highly desirable attribute for an effective structural health monitoring system and could help to reduce operating costs and improve safety. This becomes a far greater challenge in complex geometries and materials, such as modern composite airframes, and the poor translation of promising laboratory-based SHM demonstrators to industrial environments forms a barrier to commercial uptake of the technology. The acoustic emission (AE) technique is a passive NDT method that detects elastic stress waves released by the growth of damage. It offers very sensitive damage detection, using a sparse array of sensors to detect and globally locate damage within a structure. However, its application to complex structures commonly yields poor accuracy due to anisotropic wave propagation and the interruption of wave propagation by structural features such as holes and thickness changes. This work adopts an empirical mapping technique for AE location, known as Delta T Mapping, which uses experimental training data to account for such structural complexities. The technique is applied to a complex-geometry composite aerospace structure undergoing certification testing. The component consists of a carbon fibre composite tube with varying wall thickness and multiple holes, loaded under bending. The damage location was validated using X-ray CT scanning, and the Delta T Mapping technique was shown to improve location accuracy when compared with commercial algorithms. The onset and progression of damage were monitored throughout the test and used to inform future design iterations.
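A stripped-down version of the Delta T Mapping lookup: arrival-time differences (delta-t) between sensor pairs are measured at known training points, and a new event is assigned to the training point whose stored delta-t signature best matches the observed one. The grid coordinates and delta-t values below are illustrative, and real implementations interpolate between training points rather than snapping to the nearest one.

```python
# Simplified Delta T Mapping: nearest-signature lookup over a training grid.

def locate(observed_dt, training_map):
    """training_map: {(x, y): [delta-t per sensor pair]}; returns best (x, y)."""
    def misfit(stored):
        return sum((o - s) ** 2 for o, s in zip(observed_dt, stored))
    return min(training_map, key=lambda xy: misfit(training_map[xy]))

# Hypothetical training data: three grid points, three sensor pairs (us).
training = {
    (0.0, 0.0): [0.0, 12.0, 15.0],
    (50.0, 0.0): [-8.0, 4.0, 9.0],
    (50.0, 50.0): [-11.0, -3.0, 2.0],
}
print(locate([-7.5, 4.2, 8.9], training))  # (50.0, 0.0)
```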
Predictive Surface Complexation Modeling
Sverjensky, Dimitri A.
2016-11-29
Surface complexation plays an important role in the equilibria and kinetics of processes controlling the compositions of soilwaters and groundwaters, the fate of contaminants in groundwaters, and the subsurface storage of CO2 and nuclear waste. Over the last several decades, many dozens of individual experimental studies have addressed aspects of surface complexation that have contributed to an increased understanding of its role in natural systems. However, there has been no previous attempt to develop a model of surface complexation that can be used to link all the experimental studies in order to place them on a predictive basis. Overall, my research has successfully integrated the results of the work of many experimentalists published over several decades. For the first time in studies of the geochemistry of the mineral-water interface, a practical predictive capability for modeling has become available. The predictive correlations developed in my research now enable extrapolations of experimental studies to provide estimates of surface chemistry for systems not yet studied experimentally and for natural and anthropogenically perturbed systems.
Ballester, Pedro J; Schreyer, Adrian; Blundell, Tom L
2014-03-24
Predicting the binding affinities of large sets of diverse molecules against a range of macromolecular targets is an extremely challenging task. The scoring functions that attempt such computational prediction are essential for exploiting and analyzing the outputs of docking, which is in turn an important tool in problems such as structure-based drug design. Classical scoring functions assume a predetermined theory-inspired functional form for the relationship between the variables that describe an experimentally determined or modeled structure of a protein-ligand complex and its binding affinity. The inherent problem of this approach is in the difficulty of explicitly modeling the various contributions of intermolecular interactions to binding affinity. New scoring functions based on machine-learning regression models, which are able to exploit effectively much larger amounts of experimental data and circumvent the need for a predetermined functional form, have already been shown to outperform a broad range of state-of-the-art scoring functions in a widely used benchmark. Here, we investigate the impact of the chemical description of the complex on the predictive power of the resulting scoring function using a systematic battery of numerical experiments. The latter resulted in the most accurate scoring function to date on the benchmark. Strikingly, we also found that a more precise chemical description of the protein-ligand complex does not generally lead to a more accurate prediction of binding affinity. We discuss four factors that may contribute to this result: modeling assumptions, codependence of representation and regression, data restricted to the bound state, and conformational heterogeneity in data.
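The "chemical description" being varied is, in the simplest machine-learning scoring functions of this kind, a vector of protein-ligand element-pair contact counts within a distance cutoff; a regression model is then trained on such vectors against measured affinities. A sketch of the feature extraction (atom coordinates hypothetical):

```python
# RF-Score-style feature sketch: count protein-ligand element pairs whose
# atoms lie within a distance cutoff of each other.

def contact_features(protein_atoms, ligand_atoms, cutoff=12.0):
    """Atoms are (element, x, y, z) tuples; returns {(lig_elem, prot_elem): count}."""
    feats = {}
    for le, lx, ly, lz in ligand_atoms:
        for pe, px, py, pz in protein_atoms:
            d2 = (lx - px) ** 2 + (ly - py) ** 2 + (lz - pz) ** 2
            if d2 <= cutoff ** 2:
                feats[(le, pe)] = feats.get((le, pe), 0) + 1
    return feats

# Toy complex: one ligand oxygen near a protein carbon and nitrogen;
# a distant protein oxygen falls outside the cutoff.
protein = [("C", 0.0, 0.0, 0.0), ("N", 3.0, 0.0, 0.0), ("O", 40.0, 0.0, 0.0)]
ligand = [("O", 1.0, 1.0, 0.0)]
print(contact_features(protein, ligand))  # {('O', 'C'): 1, ('O', 'N'): 1}
```

Making the description "more precise" (finer atom types, smaller cutoffs, distance binning) enlarges this vector, which, as the study found, does not automatically improve predictions.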
Debating complexity in modeling
Hunt, Randall J.; Zheng, Chunmiao
1999-01-01
As scientists trying to understand the natural world, how should our effort be apportioned? We know that the natural world is characterized by complex and interrelated processes. Yet do we need to explicitly incorporate these intricacies to perform the tasks we are charged with? In this era of expanding computer power and development of sophisticated preprocessors and postprocessors, are bigger machines making better models? Put another way, do we understand the natural world better now with all these advancements in our simulation ability? Today the public's patience for long-term projects producing indeterminate results is wearing thin. This increases pressure on the investigator to use the appropriate technology efficiently. On the other hand, bringing scientific results into the legal arena opens up a new dimension to the issue: to the layperson, a tool that includes more of the complexity known to exist in the real world is expected to provide the more scientifically valid answer.
NASA Astrophysics Data System (ADS)
Hughes, Timothy J.; Kandathil, Shaun M.; Popelier, Paul L. A.
2015-02-01
As intermolecular interactions such as the hydrogen bond are electrostatic in origin, rigorous treatment of this term within force field methodologies should be mandatory. We present a method capable of accurately reproducing such interactions for seven van der Waals complexes. It uses atomic multipole moments up to the hexadecupole moment, mapped to the positions of the nuclear coordinates by the machine learning method kriging. Models were built at three levels of theory: HF/6-31G**, B3LYP/aug-cc-pVDZ and M06-2X/aug-cc-pVDZ. The quality of the kriging models was measured by their ability to predict the electrostatic interaction energy between atoms in external test examples for which the true energies are known. At all levels of theory, >90% of test cases for small van der Waals complexes were predicted within 1 kJ mol-1, decreasing to 60-70% of test cases for larger base pair complexes. Models built on moments obtained at the B3LYP and M06-2X levels generally outperformed those at the HF level. For all systems the individual interactions were predicted with a mean unsigned error of less than 1 kJ mol-1.
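The kriging (Gaussian-process) predictor at the heart of such models can be sketched in a few lines. This toy version maps a single input to a smooth stand-in "energy" surface rather than multipole moments to interaction energies; the kernel, training grid, and error threshold are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, length=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2.0 * length ** 2))

# Training data: a smooth stand-in "energy" surface (not multipole moments).
x_train = np.linspace(0.0, 10.0, 40)
y_train = np.sin(x_train)

# External test examples with known true values, as in the abstract.
x_test = rng.uniform(0.0, 10.0, 100)
y_true = np.sin(x_test)

K = rbf_kernel(x_train, x_train) + 1e-8 * np.eye(x_train.size)  # nugget term
k_star = rbf_kernel(x_test, x_train)
y_pred = k_star @ np.linalg.solve(K, y_train)    # kriging mean predictor

# Score as in the abstract: fraction of test cases within a fixed error bound.
within = np.mean(np.abs(y_pred - y_true) < 0.05)
```

The "within" score mimics the abstract's quality metric: the fraction of external test cases predicted within a fixed energy error bound.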
Voskoboinik, Lev; Ayers, Sheri B; LeFebvre, Aaron K; Darvasi, Ariel
2015-05-01
Common forensic and mass disaster scenarios present DNA evidence that comprises a mixture of several contributors. Identifying the presence of an individual in such mixtures has proven difficult. In the current study, we evaluate the practical usefulness of currently available "off-the-shelf" SNP microarrays for such purposes. We found that a set of 3000 SNPs specifically selected for this purpose can accurately identify the presence of an individual in complex DNA mixtures of various compositions. For example, individuals contributing as little as 5% to a complex DNA mixture can be robustly identified even if the starting DNA amount was as little as 5.0 ng and had undergone whole-genome amplification (WGA) prior to SNP analysis. The work presented in this study represents proof-of-principle that our previously proposed approach can work with real "forensic-type" samples. Furthermore, in the absence of a low-density focused forensic SNP microarray, standard, currently available high-density SNP microarrays can be similarly used and can even increase statistical power due to the larger amount of available information.
Coarse-grained red blood cell model with accurate mechanical properties, rheology and dynamics.
Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George E
2009-01-01
We present a coarse-grained red blood cell (RBC) model with accurate and realistic mechanical properties, rheology and dynamics. The modeled membrane is represented by a triangular mesh which incorporates in-plane shear energy, bending energy, and area and volume conservation constraints. The macroscopic membrane elastic properties are imposed through semi-analytic theory, and are matched with those obtained in optical tweezers stretching experiments. Rheological measurements characterized by a time-dependent complex modulus are extracted from the membrane thermal fluctuations, and compared with those obtained from optical magnetic twisting cytometry. The results allow us to define a meaningful characteristic time of the membrane. The dynamics of RBCs observed in shear flow suggests that a purely elastic model for the RBC membrane is not appropriate, and therefore a viscoelastic model is required. The set of proposed analyses and numerical tests can be used as a complete model testbed in order to calibrate the modeled viscoelastic membranes to accurately represent RBCs in health and disease.
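The area and volume conservation constraints require bookkeeping over the triangular mesh. A minimal sketch, with a regular tetrahedron standing in for the membrane mesh and purely illustrative penalty coefficients (the actual model's energy terms are more elaborate):

```python
import numpy as np

# A regular tetrahedron stands in for the triangulated membrane; penalty
# coefficients are illustrative assumptions, not the paper's parameters.
verts = np.array([[1.0, 1.0, 1.0], [1.0, -1.0, -1.0],
                  [-1.0, 1.0, -1.0], [-1.0, -1.0, 1.0]])
faces = np.array([[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]])  # outward normals

def mesh_area_volume(verts, faces):
    """Total surface area and enclosed volume of an oriented triangle mesh."""
    a, b, c = verts[faces[:, 0]], verts[faces[:, 1]], verts[faces[:, 2]]
    cross = np.cross(b - a, c - a)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    volume = np.einsum('ij,ij->', a, cross) / 6.0    # divergence theorem
    return area, volume

def constraint_energy(area, volume, A0, V0, k_area=100.0, k_vol=100.0):
    """Harmonic penalties that hold area and volume near their targets."""
    return (0.5 * k_area * (area - A0) ** 2 / A0
            + 0.5 * k_vol * (volume - V0) ** 2 / V0)

area, volume = mesh_area_volume(verts, faces)
# Inflating the mesh by 1% is penalized relative to the reference state.
E = constraint_energy(*mesh_area_volume(1.01 * verts, faces), area, volume)
```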
5D model for accurate representation and visualization of dynamic cardiac structures
NASA Astrophysics Data System (ADS)
Lin, Wei-te; Robb, Richard A.
2000-05-01
Accurate cardiac modeling is challenging due to the intricate structure and complex contraction patterns of myocardial tissues. Fast imaging techniques can provide 4D structural information acquired as a sequence of 3D images throughout the cardiac cycle. To model the beating heart, we created a physics-based surface model that deforms between successive time points in the cardiac cycle. 3D images of canine hearts were acquired during one complete cardiac cycle using the DSR and the EBCT. The left ventricle at the first time point is reconstructed as a triangular mesh. A mass-spring physics-based deformable model, which can expand and shrink with local contraction and stretching forces distributed in an anatomically accurate simulation of cardiac motion, is applied to the initial mesh and allows it to deform to fit the left ventricle in successive time increments of the sequence. The resulting 4D model can be interactively transformed and displayed with associated regional electrical activity mapped onto anatomic surfaces, producing a 5D model, which faithfully exhibits regional cardiac contraction and relaxation patterns over the entire heart. The model faithfully represents structural changes throughout the cardiac cycle. Such models provide the framework for minimizing the number of time points required to usefully depict regional motion of myocardium and allow quantitative assessment of regional myocardial motion. The electrical activation mapping provides spatial and temporal correlation within the cardiac cycle. In procedures such as intra-cardiac catheter ablation, visualization of the dynamic model can be used to accurately localize the foci of myocardial arrhythmias and guide positioning of catheters for optimal ablation.
Fast and accurate analytical model to solve inverse problem in SHM using Lamb wave propagation
NASA Astrophysics Data System (ADS)
Poddar, Banibrata; Giurgiutiu, Victor
2016-04-01
Lamb wave propagation is at the center of attention of researchers for structural health monitoring of thin-walled structures. This is due to the fact that Lamb wave modes are natural modes of wave propagation in these structures, with long travel distances and without much attenuation. This brings the prospect of monitoring large structures with few sensors/actuators. However, the problem of damage detection and identification is an "inverse problem" in which we do not have the luxury of knowing the exact mathematical model of the system. The problem is made more challenging by the confounding factors of statistical variation of the material and geometric properties, and it may also be ill-posed. Due to all these complexities, direct solution of the problem of damage detection and identification in SHM is impossible. Therefore, an indirect method using the solution of the "forward problem" is popular for solving the "inverse problem". This requires a fast forward-problem solver. Due to the complexities involved in the forward problem of scattering of Lamb waves from damage, researchers rely primarily on numerical techniques such as FEM, BEM, etc., but these methods are too slow to be practical for structural health monitoring. We have developed a fast and accurate analytical forward-problem solver for this purpose. This solver, CMEP (complex modes expansion and vector projection), can simulate scattering of Lamb waves from all types of damage in thin-walled structures quickly and accurately to assist the inverse-problem solver.
Gutten, Ondrej; Beššeová, Ivana; Rulíšek, Lubomír
2011-10-20
To address fundamental questions in bioinorganic chemistry, such as metal ion selectivity, accurate computational protocols for both the gas-phase association of metal-ligand complexes and solvation/desolvation energies of the species involved are needed. In this work, we attempt to critically evaluate the performance of the ab initio and DFT electronic structure methods available and recent solvation models in calculations of the energetics associated with metal ion complexation. On the example of five model complexes ([M(II)(CH(3)S)(H(2)O)](+), [M(II)(H(2)O)(2)(H(2)S)(NH(3))](2+), [M(II)(CH(3)S)(NH(3))(H(2)O)(CH(3)COO)], [M(II)(H(2)O)(3)(SH)(CH(3)COO)(Im)], [M(II)(H(2)S)(H(2)O)(CH(3)COO)(PhOH)(Im)](+) in typical coordination geometries) and four metal ions (Fe(2+), Cu(2+), Zn(2+), and Cd(2+); representing open- and closed-shell and the first- and second-row transition metal elements), we provide reference values for the gas-phase complexation energies, as presumably obtained using the CCSD(T)/aug-cc-pVTZ method, and compare them with cheaper methods, such as DFT and RI-MP2, that can be used for large-scale calculations. We also discuss two possible definitions of interaction energies underlying the theoretically predicted metal-ion selectivity and the effect of geometry optimization on these values. Finally, popular solvation models, such as COSMO-RS and SMD, are used to demonstrate whether quantum chemical calculations can provide the overall free enthalpy (ΔG) changes in the range of the expected experimental values for the model complexes or match the experimental stability constants in the case of three complexes for which the experimental data exist. The data presented highlight several intricacies in the theoretical predictions of the experimental stability constants: the covalent character of some metal-ligand bonds (e.g., Cu(II)-thiolate) causing larger errors in the gas-phase complexation energies, inaccuracies in the treatment of solvation of the
Shu, Yu-Chen; Chern, I-Liang; Chang, Chien C.
2014-10-15
Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high-order interpolation. Such complications increase especially in three dimensions. Usually, the solvers are thus reduced to low-order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain the order of accuracy there, aiming at improving the previous coupling interface method [26]. The idea is also applicable to other interface solvers. The main idea is to have at least first-order approximations for second-order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second-order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by post-processing using nearby states and jump conditions. The choice of recipe is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computational domain. Numerical examples are provided to illustrate the second-order accuracy of the proposed method in approximating the gradients of the original states for some complex interfaces which we had previously tested in two and three dimensions, and for a real molecule (1D63), which is double-helix shaped and composed of hundreds of atoms.
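Recipe 1's core idea, approximating a second derivative to at least first order when a centered stencil is unavailable, can be illustrated in one dimension. The test function and step sizes here are illustrative assumptions, not the paper's interface problems.

```python
import numpy as np

f = np.sin
x0 = 1.0
exact = -np.sin(x0)                     # f''(x) = -sin(x)

def centered_d2(f, x, h):
    """Second-order accurate; needs interior neighbors on both sides."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h ** 2

def one_sided_d2(f, x, h):
    """First-order accurate; usable when one side is unavailable."""
    return (f(x) - 2.0 * f(x + h) + f(x + 2.0 * h)) / h ** 2

err_c = abs(centered_d2(f, x0, 1e-3) - exact)
err_h = abs(one_sided_d2(f, x0, 1e-3) - exact)
err_h2 = abs(one_sided_d2(f, x0, 5e-4) - exact)
```

Halving h roughly halves the one-sided error, confirming first-order accuracy, while the centered stencil is far more accurate wherever it can be applied.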
Clarifying types of uncertainty: when are models accurate, and uncertainties small?
Cox, Louis Anthony (Tony)
2011-10-01
Professor Aven has recently noted the importance of clarifying the meaning of terms such as "scientific uncertainty" for use in risk management and policy decisions, such as when to trigger application of the precautionary principle. This comment examines some fundamental conceptual challenges for efforts to define "accurate" models and "small" input uncertainties by showing that increasing uncertainty in model inputs may reduce uncertainty in model outputs; that even correct models with "small" input uncertainties need not yield accurate or useful predictions for quantities of interest in risk management (such as the duration of an epidemic); and that accurate predictive models need not be accurate causal models.
Accurate Modeling of Scaffold Hopping Transformations in Drug Discovery.
Wang, Lingle; Deng, Yuqing; Wu, Yujie; Kim, Byungchan; LeBard, David N; Wandschneider, Dan; Beachy, Mike; Friesner, Richard A; Abel, Robert
2017-01-10
The accurate prediction of protein-ligand binding free energies remains a significant challenge of central importance in computational biophysics and structure-based drug design. Multiple recent advances including the development of greatly improved protein and ligand molecular mechanics force fields, more efficient enhanced sampling methods, and low-cost powerful GPU computing clusters have enabled accurate and reliable predictions of relative protein-ligand binding free energies through the free energy perturbation (FEP) methods. However, the existing FEP methods can only be used to calculate the relative binding free energies for R-group modifications or single-atom modifications and cannot be used to efficiently evaluate scaffold hopping modifications to a lead molecule. Scaffold hopping or core hopping, a very common design strategy in drug discovery projects, is critical not only in the early stages of a discovery campaign where novel active matter must be identified but also in lead optimization where the resolution of a variety of ADME/Tox problems may require identification of a novel core structure. In this paper, we introduce a method that enables theoretically rigorous, yet computationally tractable, relative protein-ligand binding free energy calculations to be pursued for scaffold hopping modifications. We apply the method to six pharmaceutically interesting cases where diverse types of scaffold hopping modifications were required to identify the drug molecules ultimately sent into the clinic. For these six diverse cases, the predicted binding affinities were in close agreement with experiment, demonstrating the wide applicability and the significant impact Core Hopping FEP may provide in drug discovery projects.
Accurate determination of the complex refractive index of solid tissue-equivalent phantom
NASA Astrophysics Data System (ADS)
Wang, Jin; Ye, Qing; Deng, Zhichao; Zhou, Wenyuan; Zhang, Chunping; Tian, Jianguo
2012-06-01
Tissue-equivalent phantoms are becoming widespread as substitutes in the biological field to verify optical theories, test measuring systems and study tissue performance for varying boundary conditions, sample size and shape at a quantitative level. Compared with phantoms made with Intralipid solution, ink and other liquid substances, a phantom in solid state is stable over time, reproducible, easy to handle and has been shown to be a suitable optical simulator in the visible and near-infrared region. We present accurate determination of the complex refractive index (RI) of a solid tissue-equivalent phantom using the extended derivative total reflection method (EDTRM). Scattering phantoms in solid state were measured for p-polarized and s-polarized incident light, respectively. The reflectance curves of the sample as a function of incident angle were recorded. The real part of the RI is directly determined by the derivative of the reflectance curve, and the imaginary part is obtained from nonlinear fitting based on the Fresnel equations and the Nelder-Mead simplex method. The EDTRM method is applicable to RI measurement of highly scattering media such as biotissue, solid tissue-equivalent phantoms and bulk materials. The obtained RI information can be used in the study of tissue optics and the biomedical field.
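The nonlinear fit rests on the Fresnel equations for s- and p-polarized reflectance as a function of incident angle. A minimal sketch supporting a complex refractive index; the index values are illustrative assumptions, not phantom measurements:

```python
import numpy as np

def fresnel_R(n1, n2, theta_i):
    """Intensity reflectances (Rs, Rp) for light hitting a planar interface
    from medium n1 at angle theta_i (radians); n2 may be complex (absorbing)."""
    cos_i = np.cos(theta_i)
    sin_t = n1 * np.sin(theta_i) / n2              # Snell's law
    cos_t = np.sqrt(1.0 - sin_t ** 2 + 0j)         # complex branch for absorption
    rs = (n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)
    rp = (n2 * cos_i - n1 * cos_t) / (n2 * cos_i + n1 * cos_t)
    return abs(rs) ** 2, abs(rp) ** 2

# Hypothetical solid phantom probed from a denser coupling medium.
Rs, Rp = fresnel_R(1.5, 1.4 + 0.01j, np.deg2rad(60.0))
```

Sampling this function over incident angle reproduces the kind of reflectance curve the EDTRM analysis differentiates and fits.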
Dicer–TRBP complex formation ensures accurate mammalian microRNA biogenesis
Wilson, Ross C.; Tambe, Akshay; Kidwell, Mary Anne; Noland, Cameron L.; Schneider, Catherine P.; Doudna, Jennifer A.
2014-01-01
Summary RNA-mediated gene silencing in human cells requires the accurate generation of ∼22-nucleotide microRNAs (miRNAs) from double-stranded RNA substrates by the endonuclease Dicer. Although the phylogenetically conserved RNA-binding proteins TRBP and PACT are known to contribute to this process, their mode of Dicer binding and their genome-wide effects on miRNA processing have not been determined. We solved the crystal structure of a human Dicer–TRBP interaction complex comprising two domains of previously unknown structure. Interface residues conserved between TRBP and PACT show that the proteins bind to Dicer in a similar manner and by mutual exclusion. Based on the structure, a catalytically active Dicer that cannot bind TRBP or PACT was designed and introduced into Dicer-deficient mammalian cells, revealing selective defects in guide strand selection. These results demonstrate the role of Dicer-associated RNA binding proteins in maintenance of gene silencing fidelity. PMID:25557550
New process model proves accurate in tests on catalytic reformer
Aguilar-Rodriguez, E.; Ancheyta-Juarez, J.
1994-07-25
A mathematical model has been devised to represent the process that takes place in a fixed-bed, tubular, adiabatic catalytic reforming reactor. Since its development, the model has been applied to the simulation of a commercial semiregenerative reformer. The development of mass and energy balances for this reformer led to a model that predicts both concentration and temperature profiles along the reactor. A comparison of the model's results with experimental data illustrates its accuracy at predicting product profiles. Simple steps show how the model can be applied to simulate any fixed-bed catalytic reformer.
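The coupled mass and energy balances such a reactor model integrates along the bed can be caricatured as a one-dimensional plug-flow march. This sketch uses a single lumped first-order reaction with Arrhenius kinetics and an adiabatic (endothermic) temperature change; all numbers are illustrative assumptions, not the paper's kinetics.

```python
import math

def axial_profiles(n_steps=1000, length=1.0, c0=1.0, t_in=750.0,
                   k0=3.0e3, ea_over_r=6.0e3, dt_per_conversion=-60.0):
    """March concentration and temperature along the bed with explicit Euler."""
    dz = length / n_steps
    c, temp = c0, t_in
    for _ in range(n_steps):
        k = k0 * math.exp(-ea_over_r / temp)      # Arrhenius rate constant
        dc = -k * c * dz                          # mass balance step
        c += dc
        temp += dt_per_conversion * (-dc) / c0    # energy balance (endothermic)
    return c, temp

c_out, t_out = axial_profiles()
```

The two returned profiles' endpoints show the coupling the abstract describes: as the reactant is consumed, the adiabatic bed cools, which in turn slows the reaction.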
Etch modeling for accurate full-chip process proximity correction
NASA Astrophysics Data System (ADS)
Beale, Daniel F.; Shiely, James P.
2005-05-01
The challenges of the 65 nm node and beyond require new formulations of the compact convolution models used in OPC. In addition to simulating more optical and resist effects, these models must accommodate pattern distortions due to etch which can no longer be treated as small perturbations on photo-lithographic effects. (Methods for combining optical and process modules while optimizing the speed/accuracy tradeoff were described in "Advanced Model Formulations for Optical and Process Proximity Correction", D. Beale et al, SPIE 2004.) In this paper, we evaluate new physics-based etch model formulations that differ from the convolution-based process models used previously. The new models are expressed within the compact modeling framework described by J. Stirniman et al. in SPIE, vol. 3051, p469, 1997, and thus can be used for high-speed process simulation during full-chip OPC.
Accurate method for including solid-fluid boundary interactions in mesoscopic model fluids
Berkenbos, A. Lowe, C.P.
2008-04-20
Particle models are attractive methods for simulating the dynamics of complex mesoscopic fluids. Many practical applications of this methodology involve flow through a solid geometry. As the system is modeled using particles whose positions move continuously in space, one might expect that implementing the correct stick boundary condition exactly at the solid-fluid interface is straightforward. After all, unlike discrete methods there is no mapping onto a grid to contend with. In this article we describe a method that, for axisymmetric flows, imposes both the no-slip condition and continuity of stress at the interface. We show that the new method then accurately reproduces correct hydrodynamic behavior right up to the location of the interface. As such, computed flow profiles are correct even using a relatively small number of particles to model the fluid.
A multiscale red blood cell model with accurate mechanics, rheology, and dynamics.
Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em
2010-05-19
Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary.
When do perturbative approaches accurately capture the dynamics of complex quantum systems?
Fruchtman, Amir; Lambert, Neill; Gauger, Erik M.
2016-01-01
Understanding the dynamics of higher-dimensional quantum systems embedded in a complex environment remains a significant theoretical challenge. While several approaches yielding numerically converged solutions exist, these are computationally expensive and often provide only limited physical insight. Here we address the question: when do more intuitive and simpler-to-compute second-order perturbative approaches provide adequate accuracy? We develop a simple analytical criterion and verify its validity for the case of the much-studied FMO dynamics as well as the canonical spin-boson model. PMID:27335176
Towards an Accurate Performance Modeling of Parallel Sparse Factorization
Grigori, Laura; Li, Xiaoye S.
2006-05-26
We present a performance model to analyze a parallel sparse LU factorization algorithm on modern cache-based, high-end parallel architectures. Our model characterizes the algorithmic behavior by taking into account the underlying processor speed, memory system performance, as well as the interconnect speed. The model is validated using the SuperLU_DIST linear system solver, sparse matrices from real applications, and an IBM POWER3 parallel machine. Our modeling methodology can be easily adapted to study the performance of other types of sparse factorizations, such as Cholesky or QR.
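The structure of such a performance model, total time as a sum of compute, memory, and interconnect terms, can be sketched simply. All machine parameters and workload counts below are illustrative assumptions, not POWER3 or SuperLU_DIST figures.

```python
# Toy analogue of a sparse-factorization performance model: predicted time is
# the sum of compute, memory-traffic, and interconnect contributions.

def predicted_time(flops, mem_bytes, n_msgs, msg_bytes,
                   flop_rate, mem_bw, latency, net_bw):
    compute = flops / flop_rate            # arithmetic at peak processor speed
    memory = mem_bytes / mem_bw            # traffic through the memory system
    network = n_msgs * latency + msg_bytes / net_bw   # latency + bandwidth cost
    return compute + memory + network

t = predicted_time(flops=2e9, mem_bytes=8e8, n_msgs=1e4, msg_bytes=4e7,
                   flop_rate=1.5e9, mem_bw=1.0e9, latency=10e-6, net_bw=5e8)
```

Validating such a model then amounts to comparing the predicted time against measured factorization runs, term by term.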
How Accurate Is A Hydraulic Model?
Symposium paper. Network hydraulic models are widely used, but their overall accuracy is often unknown. Models are developed to give utilities better insight into system hydraulic behavior, and increasingly the ability to predict the fate and transport of chemicals. Without an accessible and consistent means of validating a given model against the system it is meant to represent, the value of those supposed benefits should be questioned. Supervisory Control And Data Acquisition (SCADA) databases, though ubiquitous, are underused data sources for this type of task. Integrating a network model with a measurement database would offer professionals the ability to assess the model's assumptions in an automated fashion by leveraging enormous amounts of data.
Felmy, Andrew R.; Mason, Marvin; Qafoku, Odeta; Xia, Yuanxian; Wang, Zheming; MacLean, Graham
2003-03-27
Developing accurate thermodynamic models for predicting the chemistry of the high-level waste tanks at Hanford is an extremely daunting challenge in electrolyte and radionuclide chemistry. These challenges stem from the extremely high ionic strength of the tank waste supernatants, presence of chelating agents in selected tanks, wide temperature range in processing conditions and the presence of important actinide species in multiple oxidation states. This presentation summarizes progress made to date in developing accurate models for these tank waste solutions, how these data are being used at Hanford and the important challenges that remain. New thermodynamic measurements on Sr and actinide complexation with specific chelating agents (EDTA, HEDTA and gluconate) will also be presented.
Modeling for accurate dimensional scanning electron microscope metrology: then and now.
Postek, Michael T; Vladár, András E
2011-01-01
A review of the evolution of modeling for accurate dimensional scanning electron microscopy is presented with an emphasis on developments in the Monte Carlo technique for modeling the generation of the electrons used for imaging and measurement. The progress of modeling for accurate metrology is discussed through a schematic technology timeline. In addition, a discussion of a future vision for accurate SEM dimensional metrology and the requirements to achieve it are presented.
NASA Astrophysics Data System (ADS)
Martin, Y. L.
The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (<30) obtained under satisfactory conditions of signal-to-noise ratio (>20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species
Accurate, multi-kb reads resolve complex populations and detect rare microorganisms
Sharon, Itai; Kertesz, Michael; Hug, Laura A.; Pushkarev, Dmitry; Blauwkamp, Timothy A.; Castelle, Cindy J.; Amirebrahimi, Mojgan; Thomas, Brian C.; Burstein, David; Tringe, Susannah G.; Williams, Kenneth H.
2015-01-01
Accurate evaluation of microbial communities is essential for understanding global biogeochemical processes and can guide bioremediation and medical treatments. Metagenomics is most commonly used to analyze microbial diversity and metabolic potential, but assemblies of the short reads generated by current sequencing platforms may fail to recover heterogeneous strain populations and rare organisms. Here we used short (150-bp) and long (multi-kb) synthetic reads to evaluate strain heterogeneity and study microorganisms at low abundance in complex microbial communities from terrestrial sediments. The long-read data revealed multiple (probably dozens of) closely related species and strains from previously undescribed Deltaproteobacteria and Aminicenantes (candidate phylum OP8). Notably, these are the most abundant organisms in the communities, yet short-read assemblies achieved only partial genome coverage, mostly in the form of short scaffolds (N50 = ∼2200 bp). Genome architecture and metabolic potential for these lineages were reconstructed using a new synteny-based method. Analysis of long-read data also revealed thousands of species whose abundances were <0.1% in all samples. Most of the organisms in this “long tail” of rare organisms belong to phyla that are also represented by abundant organisms. Genes encoding glycosyl hydrolases are significantly more abundant than expected in rare genomes, suggesting that rare species may augment the capability for carbon turnover and confer resilience to changing environmental conditions. Overall, the study showed that a diversity of closely related strains and rare organisms account for a major portion of the communities. These are probably common features of many microbial communities and can be effectively studied using a combination of long and short reads. PMID:25665577
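The N50 statistic used above to characterize scaffold contiguity (short-read scaffolds: N50 = ~2200 bp) is straightforward to compute. A minimal sketch with made-up scaffold lengths, not data from the study:

```python
def n50(lengths):
    """Largest L such that scaffolds of length >= L hold >= half the total bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length

scaffolds = [8, 8, 4, 3, 3, 2, 2, 2]   # hypothetical scaffold lengths (kb)
```

Many short scaffolds drag N50 down even when total assembled length is large, which is why the long-read assemblies compare so favorably here.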
ACCURATE LOW-MASS STELLAR MODELS OF KOI-126
Feiden, Gregory A.; Chaboyer, Brian; Dotter, Aaron
2011-10-10
The recent discovery of an eclipsing hierarchical triple system with two low-mass stars in a close orbit (KOI-126) by Carter et al. appeared to reinforce the evidence that theoretical stellar evolution models are not able to reproduce the observational mass-radius relation for low-mass stars. We present a set of stellar models for the three stars in the KOI-126 system that show excellent agreement with the observed radii. This agreement appears to be due to the equation of state implemented by our code. A significant dispersion in the observed mass-radius relation for fully convective stars is demonstrated, indicative of the influence of physics currently not incorporated in standard stellar evolution models. We also predict apsidal motion constants for the two M dwarf companions. These values should be observationally determined to within 1% by the end of the Kepler mission.
Accurate two-equation modelling of falling film flows
NASA Astrophysics Data System (ADS)
Ruyer-Quil, Christian
2015-11-01
The low-dimensional modelling of the wave dynamics of a falling liquid film on an inclined plane is revisited. The advantages and shortcomings of existing modelling approaches (the weighted-residual method, centre-manifold analysis, and the consistent Saint-Venant approach) are discussed and contrasted. A novel formulation of a two-equation consistent model is proposed. The proposed formulation cures the principal limitations of previous approaches: (i) apart from the surface tension terms, it admits a conservative form, which enables the use of efficient numerical schemes; (ii) it recovers, with less than 1% error, the asymptotic speed of solitary waves in the inertial regime found by DNS; (iii) it adequately captures the velocity field under the waves, and in particular the wall drag. Research supported by Institut Universitaire de France.
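The "two-equation" structure referred to above couples the film thickness h(x, t) to the depth-integrated flow rate q(x, t). Schematically (the flux F and source S are model-dependent and carry the paper's specific closure, which is not reproduced here):

```latex
% mass conservation (exact after depth-averaging):
\partial_t h + \partial_x q = 0
% averaged momentum balance (schematic; the surface-tension term is the
% one part that escapes the conservative form):
\partial_t q + \partial_x F(h, q) = S(h, q)
  + \underbrace{\frac{\sigma}{\rho}\, h\, \partial_x^3 h}_{\text{surface tension}}
```

The point of the proposed formulation is precisely that everything except the surface-tension term can be written in divergence form, so standard finite-volume schemes apply.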
Building accurate geometric models from abundant range imaging information
Diegert, C.; Sackos, J.; Nellums, R.
1997-05-01
The authors define two simple metrics for accuracy of models built from range imaging information. They apply the metric to a model built from a recent range image taken at the Laser Radar Development and Evaluation Facility (LDERF), Eglin AFB, using a Scannerless Range Imager (SRI) from Sandia National Laboratories. They also present graphical displays of the residual information produced as a byproduct of this measurement, and discuss mechanisms that these data suggest for further improvement in the performance of this already impressive SRI.
Beyond Ellipse(s): Accurately Modelling the Isophotal Structure of Galaxies with ISOFIT and CMODEL
NASA Astrophysics Data System (ADS)
Ciambur, B. C.
2015-09-01
This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit,” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen to be representative case-studies for several areas where this technique makes it possible to gain new scientific insight. Specifically: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, higher-order Fourier moments for modeling bars in disks, and complex isophote shapes. Higher order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxyness/diskyness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of over-lapping sources such as globular clusters and the optical counterparts of X-ray sources.
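A small sketch of why the eccentric anomaly ψ is the natural angular parameter: sampling uniformly in ψ gives an exact ellipse via x = a cos ψ, y = b sin ψ, and Fourier harmonics in ψ then describe deviations from ellipticity. The radial perturbation scheme below is a toy illustration, not ISOFIT's actual fitting machinery:

```python
import numpy as np

def isophote(a, b, harmonics=(), num=720):
    """Quasi-elliptical isophote parameterized by the eccentric anomaly psi.

    With no harmonics this is an exact ellipse (x = a cos psi, y = b sin psi).
    Each (n, A_n, B_n) entry adds a radial perturbation A_n sin(n psi) +
    B_n cos(n psi); e.g. B_4 > 0 gives a disky shape, B_4 < 0 a boxy one
    (toy perturbation scheme for illustration only).
    """
    psi = np.linspace(0.0, 2.0 * np.pi, num, endpoint=False)
    dr = np.zeros_like(psi)
    for n, A, B in harmonics:
        dr += A * np.sin(n * psi) + B * np.cos(n * psi)
    return (a + dr) * np.cos(psi), (b + dr) * np.sin(psi)
```

With zero harmonics the sampled points satisfy the ellipse equation exactly, which is what makes azimuthal averaging along these isophotes free of the cross-like residuals described above.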
Magnetic field models of nine CP stars from "accurate" measurements
NASA Astrophysics Data System (ADS)
Glagolevskij, Yu. V.
2013-01-01
The dipole models of magnetic fields in nine CP stars are constructed based on measurements of metal lines taken from the literature and performed by the LSD method with an accuracy of 10-80 G. The model parameters are compared with the parameters obtained for the same stars from hydrogen-line measurements. For six out of the nine stars the same type of structure was obtained. Some parameters, such as the field strength at the poles B_p and the average surface magnetic field B_s, differ considerably in some stars due to differences in the amplitudes of the phase dependences B_e(Φ) and B_s(Φ) obtained by different authors. It is noted that a significant increase in measurement accuracy has little effect on the modelling of the large-scale structures of the field. By contrast, it is more important to construct the shape of the phase dependence from a fairly large number of field measurements, evenly distributed over the rotation-period phases. It is concluded that the Zeeman-component measurement methods have a strong effect on the shape of the phase dependence, and that magnetic-field measurements based on hydrogen lines are preferable for modelling the large-scale structures of the field.
NASA Astrophysics Data System (ADS)
Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua
2014-11-01
Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate and yields better reconstruction quality than the line integral model (LIM). However, computing the system matrix for AIM is more complex and time-consuming than for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with the pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection areas into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computing the system matrix. For one iteration, the reconstruction speed of our AIM-based ART is also faster than that of LIM-based ART using the Siddon algorithm and of DDM-based ART. The fast reconstruction speed of our method was accomplished without compromising image quality.
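For context, the line-integral model that the Siddon algorithm implements computes, for each ray, its intersection length with every pixel; those lengths form one row of the system matrix. A simplified 2-D restatement using parametric boundary crossings (not Siddon's optimized incremental code, and not the paper's area-integral method):

```python
import numpy as np

def siddon_ray(p0, p1, nx, ny):
    """Intersection length of segment p0 -> p1 with each unit pixel of an
    nx-by-ny grid, where pixel (i, j) spans [i, i+1] x [j, j+1].

    Returns {(i, j): length}, i.e. one row of the LIM system matrix.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    alphas = {0.0, 1.0}
    for axis, n in ((0, nx), (1, ny)):            # pixel-boundary crossings
        if d[axis] != 0.0:
            for k in range(n + 1):
                a = (k - p0[axis]) / d[axis]      # parametric crossing value
                if 0.0 < a < 1.0:
                    alphas.add(a)
    alphas = sorted(alphas)
    length = np.hypot(*d)
    row = {}
    for a0, a1 in zip(alphas, alphas[1:]):
        mid = p0 + 0.5 * (a0 + a1) * d            # midpoint locates the pixel
        i, j = int(mid[0]), int(mid[1])
        if 0 <= i < nx and 0 <= j < ny:
            row[(i, j)] = row.get((i, j), 0.0) + (a1 - a0) * length
    return row
```

The AIM replaces these per-ray lengths with per-beam areas, which is where the six-type classification of boundary-line slopes comes in.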
NASA Astrophysics Data System (ADS)
Rumple, C.; Richter, J.; Craven, B. A.; Krane, M.
2012-11-01
A summary of the research being carried out by our multidisciplinary team to better understand the form and function of the nose in different mammalian species that include humans, carnivores, ungulates, rodents, and marine animals will be presented. The mammalian nose houses a convoluted airway labyrinth, where two hallmark features of mammals occur, endothermy and olfaction. Because of the complexity of the nasal cavity, the anatomy and function of these upper airways remain poorly understood in most mammals. However, recent advances in high-resolution medical imaging, computational modeling, and experimental flow measurement techniques are now permitting the study of airflow and respiratory and olfactory transport phenomena in anatomically-accurate reconstructions of the nasal cavity. Here, we focus on efforts to manufacture transparent, anatomically-accurate models for stereo particle image velocimetry (SPIV) measurements of nasal airflow. Challenges in the design and manufacture of index-matched anatomical models are addressed and preliminary SPIV measurements are presented. Such measurements will constitute a validation database for concurrent computational fluid dynamics (CFD) simulations of mammalian respiration and olfaction. Supported by the National Science Foundation.
Accurate first principles model potentials for intermolecular interactions.
Gordon, Mark S; Smith, Quentin A; Xu, Peng; Slipchenko, Lyudmila V
2013-01-01
The general effective fragment potential (EFP) method provides model potentials, derived from first principles with no empirically fitted parameters, for any molecule. The EFP method has been interfaced with most currently used ab initio single-reference and multireference quantum mechanics (QM) methods, ranging from Hartree-Fock and coupled cluster theory to multireference perturbation theory. The most recent innovations in the EFP model have been to make the computationally expensive charge-transfer term much more efficient and to interface the general EFP dispersion and exchange-repulsion interactions with QM methods. Following a summary of the method and its implementation in generally available computer programs, these most recent developments are discussed.
Accurate numerical solutions for elastic-plastic models. [LMFBR
Schreyer, H. L.; Kulak, R. F.; Kramer, J. M.
1980-03-01
The accuracy of two integration algorithms is studied for the common engineering condition of a von Mises, isotropic hardening model under plane stress. Errors in stress predictions for given total strain increments are expressed with contour plots of two parameters: an angle in the pi plane and the difference between the exact and computed yield-surface radii. The two methods are the tangent-predictor/radial-return approach and the elastic-predictor/radial-corrector algorithm originally developed by Mendelson. The accuracy of a combined tangent-predictor/radial-corrector algorithm is also investigated.
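The radial-return idea discussed above can be sketched for J2 (von Mises) plasticity with linear isotropic hardening. This is the standard 3-D small-strain elastic-predictor/radial-return step; the paper's plane-stress setting requires an additional correction not shown here, and all symbols are generic rather than the paper's notation:

```python
import numpy as np

def radial_return(stress_trial, sigma_y, mu, H=0.0, eps_p=0.0):
    """Elastic-predictor/radial-return step for J2 plasticity.

    stress_trial: 3x3 trial stress; sigma_y: initial yield stress;
    mu: shear modulus; H: isotropic hardening modulus; eps_p: accumulated
    plastic strain. Returns (corrected stress, updated eps_p).
    """
    p = np.trace(stress_trial) / 3.0
    s = stress_trial - p * np.eye(3)                   # deviatoric predictor
    s_norm = np.sqrt((s * s).sum())
    radius = np.sqrt(2.0 / 3.0) * (sigma_y + H * eps_p)  # yield-surface radius
    f = s_norm - radius                                # yield function
    if f <= 0.0:
        return stress_trial, eps_p                     # elastic: keep predictor
    dgamma = f / (2.0 * mu + 2.0 * H / 3.0)            # consistency parameter
    s_new = s - 2.0 * mu * dgamma * (s / s_norm)       # return along the radius
    return s_new + p * np.eye(3), eps_p + np.sqrt(2.0 / 3.0) * dgamma
```

After the return, the deviatoric stress sits exactly on the (updated) yield surface, which is the property the error contours in the paper quantify for finite strain increments.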
Accurate Force Field Development for Modeling Conjugated Polymers.
DuBay, Kateri H; Hall, Michelle Lynn; Hughes, Thomas F; Wu, Chuanjie; Reichman, David R; Friesner, Richard A
2012-11-13
The modeling of the conformational properties of conjugated polymers entails a unique challenge for classical force fields. Conjugation imposes strong constraints upon bond rotation. Planar configurations are favored, but the concomitantly shortened bond lengths result in moieties being brought into closer proximity than usual. The ensuing steric repulsions are particularly severe in the presence of side chains, straining angles and stretching bonds to a degree infrequently found in nonconjugated systems. We herein demonstrate the resulting inaccuracies by comparing the LMP2-calculated inter-ring torsion potentials for a series of substituted stilbenes and bithiophenes to those calculated using standard classical force fields. We then implement adjustments to the OPLS-2005 force field in order to improve its ability to model such systems. Finally, we show the impact of these changes on the dihedral angle distributions, persistence lengths, and conjugation length distributions observed during molecular dynamics simulations of poly[2-methoxy-5-(2'-ethylhexyloxy)-p-phenylene vinylene] (MEH-PPV) and poly(3-hexylthiophene) (P3HT), two of the most widely used conjugated polymers.
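The inter-ring torsion potentials being refit have the standard OPLS three-term Fourier form. A sketch with toy coefficients (a V2-dominated profile typical of conjugated linkages; these are not the paper's fitted values):

```python
import numpy as np

def opls_torsion(phi, V=(0.0, 7.0, 0.0)):
    """Standard OPLS torsional energy (three-term Fourier series):
        E(phi) = V1/2 (1 + cos phi) + V2/2 (1 - cos 2 phi)
               + V3/2 (1 + cos 3 phi).
    Coefficients here are illustrative placeholders, in kcal/mol.
    """
    V1, V2, V3 = V
    return (0.5 * V1 * (1.0 + np.cos(phi))
            + 0.5 * V2 * (1.0 - np.cos(2.0 * phi))
            + 0.5 * V3 * (1.0 + np.cos(3.0 * phi)))
```

A dominant V2 term puts minima at phi = 0 and pi (the planar cis/trans configurations favored by conjugation) with a barrier at phi = pi/2, which is the qualitative shape the LMP2 reference scans probe.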
Accurate structure prediction of peptide–MHC complexes for identifying highly immunogenic antigens
Park, Min-Sun; Park, Sung Yong; Miller, Keith R.; Collins, Edward J.; Lee, Ha Youn
2013-11-01
Designing an optimal HIV-1 vaccine faces the challenge of identifying antigens that induce a broad immune capacity. One factor that controls the breadth of T cell responses is the surface morphology of a peptide–MHC complex. Here, we present an in silico protocol for predicting peptide–MHC structure. A robust signature of a conformational transition was identified during all-atom molecular dynamics, which results in a model with high accuracy. A large test set was used in constructing our protocol, and we went a step further with a blind test on a wild-type peptide and two highly immunogenic mutants, which predicted substantial conformational changes in both mutants. The center residues at position five of the analogs were configured to be accessible to solvent, forming a prominent surface, while the corresponding residue of the wild-type peptide was predicted to point laterally toward the side of the binding cleft. We then experimentally determined the structures of the blind test set using high-resolution X-ray crystallography, which verified the predicted conformational changes. Our observation strongly supports a positive association between the surface morphology of a peptide–MHC complex and its immunogenicity. Our study offers the prospect of enhancing the immunogenicity of vaccines by identifying MHC-binding immunogens.
NASA Astrophysics Data System (ADS)
Mead, A. J.; Peacock, J. A.; Heymans, C.; Joudaki, S.; Heavens, A. F.
2015-12-01
We present an optimized variant of the halo model, designed to produce accurate matter power spectra well into the non-linear regime for a wide range of cosmological models. To do this, we introduce physically motivated free parameters into the halo-model formalism and fit these to data from high-resolution N-body simulations. For a variety of Λ cold dark matter (ΛCDM) and wCDM models, the halo-model power is accurate to ≃ 5 per cent for k ≤ 10 h Mpc^-1 and z ≤ 2. An advantage of our new halo model is that it can be adapted to account for the effects of baryonic feedback on the power spectrum. We demonstrate this by fitting the halo model to power spectra from the OWLS (OverWhelmingly Large Simulations) hydrodynamical simulation suite via parameters that govern halo internal structure. We are able to fit all feedback models investigated at the 5 per cent level using only two free parameters, and we place limits on the range of these halo parameters for feedback models investigated by the OWLS simulations. Accurate predictions to high k are vital for weak-lensing surveys, and these halo parameters could be considered nuisance parameters to marginalize over in future analyses to mitigate uncertainty regarding the details of feedback. Finally, we investigate how lensing observables predicted by our model compare to those from simulations and from HALOFIT for a range of k-cuts and feedback models and quantify the angular scales at which these effects become important. Code to calculate power spectra from the model presented in this paper can be found at https://github.com/alexander-mead/hmcode.
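To make the halo-model structure concrete, the small-scale ("one-halo") part of the power spectrum is an integral of squared halo profiles weighted by the halo mass function. The sketch below is purely schematic: it uses a toy power-law mass function, a toy mass-radius relation, and a Gaussian profile, whereas HMcode itself uses NFW profiles and simulation-calibrated parameters; every numerical value here is illustrative:

```python
import numpy as np

def one_halo_term(k, masses, dndm, rho_mean, radii):
    """Schematic one-halo power spectrum term
        P_1h(k) = integral dM  n(M) (M / rho_mean)^2 |u(k, M)|^2,
    with a toy Gaussian halo profile u(k, M) = exp(-(k R(M))^2 / 2)."""
    u = np.exp(-0.5 * np.outer(k, radii)**2)             # shape (nk, nM)
    integrand = dndm * (masses / rho_mean)**2 * u**2
    mid = 0.5 * (integrand[:, 1:] + integrand[:, :-1])   # trapezoid rule in M
    return (mid * np.diff(masses)).sum(axis=1)

# toy mass function with an exponential high-mass cutoff (all values made up)
masses = np.logspace(10, 15, 200)                        # Msun/h
dndm = masses**-1.9 * np.exp(-masses / 1e14)
radii = 1e-4 * masses**(1.0 / 3.0)                       # R ~ M^(1/3)
k = np.linspace(0.1, 10.0, 50)                           # h/Mpc
P1h = one_halo_term(k, masses, dndm, 8.5e10, radii)
```

The paper's "physically motivated free parameters" modify exactly these ingredients (profile concentration, mass-function weighting), which is why refitting two of them can absorb baryonic feedback.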
Ye, Jing; Wang, Jianling; Li, Qiwei; Dong, Xiawei; Ge, Wei; Chen, Yun; Jiang, Xuerui; Liu, Hongde; Jiang, Hui; Wang, Xuemei
2016-04-01
A new and facile method for rapidly and accurately achieving tumor-targeting fluorescent images has been explored using a specifically biosynthesized europium (Eu) complex in vivo and in vitro. The study demonstrates that a fluorescent Eu complex can be biosynthesized through a spontaneous molecular process in cancerous cells and tumors, but not in normal cells and tissues. In addition, proteomics analyses show that several metabolic pathways, especially NADPH production and glutamine metabolism, are markedly affected during the biosynthesis process, in which molecular precursors of europium ions are reduced to fluorescent europium complexes inside cancerous cells or tumor tissues. These results prove that the specific self-biosynthesis of a fluorescent Eu complex by cancer cells or tumor tissues can provide a new strategy for accurate diagnosis and treatment in the early stages of cancer, and is thus beneficial for realizing precise surgical intervention based on cheap and readily available agents.
Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm
NASA Astrophysics Data System (ADS)
Huang, Yanhua; Gu, Lizhi
2015-09-01
The main methods for multi-spiral-surface geometry modeling include spatial analytic geometry algorithms, the graphical method, and interpolation and approximation algorithms. However, these modeling methods have shortcomings, such as a large amount of calculation, complex processes and visible errors, which have considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with multi-spiral surfaces and of the spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral-surface body are determined using the extraction principle of the datum point cluster, the algorithm for the coupling point cluster with singular points removed, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and expressed digitally, and the surfaces with multiple coupling point clusters are coalesced under the Pro/E environment. Digitally accurate modeling of the spatially parallel coupling body with multi-spiral surfaces is thus realized. Smoothing and fairing are applied to the end-section area of a three-blade end-milling cutter by applying the principle of spatially parallel coupling with multi-spiral surfaces, and the resulting solid model is machined in a four-axis machining center. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral-surface products, as well as in solving the problems of considerable modeling errors in computer graphics and
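A datum point cluster of the kind described above is simply a structured grid of points sampled on a spiral surface. A toy sketch on a helicoid, the simplest spiral surface (a stand-in for the paper's cutter flutes; the function name and parameterization are illustrative, not from the paper):

```python
import numpy as np

def helicoid_cluster(radii, thetas, pitch):
    """Datum point cluster on a helicoid: (r cos t, r sin t, pitch * t / 2pi).

    radii, thetas: 1-D sample grids; returns an (nr, nt, 3) array of points
    on which a B-spline surface could subsequently be fit.
    """
    r, t = np.meshgrid(radii, thetas, indexing="ij")
    return np.stack([r * np.cos(t), r * np.sin(t),
                     pitch * t / (2.0 * np.pi)], axis=-1)

pts = helicoid_cluster(np.linspace(1.0, 2.0, 3),
                       np.linspace(0.0, 4.0 * np.pi, 9), pitch=8.0)
```

One full turn of the parameter advances the cluster by exactly the pitch in z, which is the defining property a fitted spiral surface must reproduce.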
Construction of feasible and accurate kinetic models of metabolism: A Bayesian approach
Saa, Pedro A.; Nielsen, Lars K.
2016-01-01
Kinetic models are essential to quantitatively understand and predict the behaviour of metabolic networks. Detailed and thermodynamically feasible kinetic models of metabolism are inherently difficult to formulate and fit. They have a large number of heterogeneous parameters, are non-linear and have complex interactions. Many powerful fitting strategies are ruled out by the intractability of the likelihood function. Here, we have developed a computational framework capable of fitting feasible and accurate kinetic models using Approximate Bayesian Computation. This framework readily supports advanced modelling features such as model selection and model-based experimental design. We illustrate this approach on the tightly-regulated mammalian methionine cycle. Sampling from the posterior distribution, the proposed framework generated thermodynamically feasible parameter samples that converged on the true values, and displayed remarkable prediction accuracy in several validation tests. Furthermore, a posteriori analysis of the parameter distributions enabled appraisal of the systems properties of the network (e.g., control structure) and key metabolic regulations. Finally, the framework was used to predict missing allosteric interactions. PMID:27417285
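The core of Approximate Bayesian Computation is easy to show in miniature: draw parameters from the prior, simulate, and keep only draws whose simulation lies close to the data. The sketch below fits the rate of a toy first-order decay reaction by ABC rejection; it illustrates the principle only, not the paper's framework (which adds thermodynamic feasibility constraints and sequential sampling):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy kinetic model: first-order decay A -> B with rate k (analytic solution)
t = np.linspace(0.0, 5.0, 20)
simulate = lambda k: np.exp(-k * t)

k_true = 0.8
data = simulate(k_true) + rng.normal(0.0, 0.01, t.size)  # noisy "measurements"

def abc_rejection(n_draws=20000, eps=0.1):
    """ABC by rejection: keep parameter draws whose simulated time course
    lies within distance eps of the observed data."""
    accepted = []
    for _ in range(n_draws):
        k = rng.uniform(0.0, 3.0)                        # prior on the rate
        if np.linalg.norm(simulate(k) - data) < eps:     # distance criterion
            accepted.append(k)
    return np.asarray(accepted)

posterior = abc_rejection()
```

The accepted draws approximate the posterior without ever evaluating a likelihood, which is exactly the property that makes ABC attractive when the likelihood of a large kinetic model is intractable.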
Modelling Canopy Flows over Complex Terrain
NASA Astrophysics Data System (ADS)
Grant, Eleanor R.; Ross, Andrew N.; Gardiner, Barry A.
2016-12-01
Recent studies of flow over forested hills have been motivated by a number of important applications including understanding CO_2 and other gaseous fluxes over forests in complex terrain, predicting wind damage to trees, and modelling wind energy potential at forested sites. Current modelling studies have focussed almost exclusively on highly idealized, and usually fully forested, hills. Here, we present model results for a site on the Isle of Arran, Scotland with complex terrain and heterogeneous forest canopy. The model uses an explicit representation of the canopy and a 1.5-order turbulence closure for flow within and above the canopy. The validity of the closure scheme is assessed using turbulence data from a field experiment before comparing predictions of the full model with field observations. For near-neutral stability, the results compare well with the observations, showing that such a relatively simple canopy model can accurately reproduce the flow patterns observed over complex terrain and realistic, variable forest cover, while at the same time remaining computationally feasible for real case studies. The model allows closer examination of the flow separation observed over complex forested terrain. Comparisons with model simulations using a roughness length parametrization show significant differences, particularly with respect to flow separation, highlighting the need to explicitly model the forest canopy if detailed predictions of near-surface flow around forests are required.
Seth, Ajay; Matias, Ricardo; Veloso, António P.; Delp, Scott L.
2016-01-01
The complexity of shoulder mechanics combined with the movement of skin relative to the scapula makes it difficult to measure shoulder kinematics with sufficient accuracy to distinguish between symptomatic and asymptomatic individuals. Multibody skeletal models can improve motion capture accuracy by reducing the space of possible joint movements, and models are used widely to improve measurement of lower limb kinematics. In this study, we developed a rigid-body model of a scapulothoracic joint to describe the kinematics of the scapula relative to the thorax. This model describes scapular kinematics with four degrees of freedom: 1) elevation and 2) abduction of the scapula on an ellipsoidal thoracic surface, 3) upward rotation of the scapula normal to the thoracic surface, and 4) internal rotation of the scapula to lift the medial border of the scapula off the surface of the thorax. The surface dimensions and joint axes can be customized to match an individual’s anthropometry. We compared the model to “gold standard” bone-pin kinematics collected during three shoulder tasks and found modeled scapular kinematics to be accurate to within 2mm root-mean-squared error for individual bone-pin markers across all markers and movement tasks. As an additional test, we added random and systematic noise to the bone-pin marker data and found that the model reduced kinematic variability due to noise by 65% compared to Euler angles computed without the model. Our scapulothoracic joint model can be used for inverse and forward dynamics analyses and to compute joint reaction loads. The computational performance of the scapulothoracic joint model is well suited for real-time applications; it is freely available for use with OpenSim 3.2, and is customizable and usable with other OpenSim models. PMID:26734761
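The first two degrees of freedom above constrain the scapula to an ellipsoidal thoracic surface. A toy parameterization of that constraint (the semi-axes and the spherical-angle mapping are made-up illustrative values; OpenSim's actual joint implementation differs):

```python
import numpy as np

def scapula_on_thorax(abduction, elevation, radii=(0.09, 0.12, 0.18)):
    """Point on an ellipsoidal thoracic surface reached by the abduction and
    elevation angles (radians). radii are hypothetical semi-axes in metres,
    which in the real model are customized to an individual's anthropometry."""
    rx, ry, rz = radii
    return np.array([rx * np.cos(elevation) * np.cos(abduction),
                     ry * np.cos(elevation) * np.sin(abduction),
                     rz * np.sin(elevation)])
```

Whatever angles are supplied, the returned point satisfies the ellipsoid equation exactly; this is how a joint model shrinks the space of admissible scapular motions and thereby filters skin-motion noise out of marker data.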
2011-01-01
Background: Data assimilation refers to methods for updating the state vector (initial condition) of a complex spatiotemporal model (such as a numerical weather model) by combining new observations with one or more prior forecasts. We consider the potential feasibility of this approach for making short-term (60-day) forecasts of the growth and spread of a malignant brain cancer (glioblastoma multiforme) in individual patient cases, where the observations are synthetic magnetic resonance images of a hypothetical tumor.

Results: We apply a modern state estimation algorithm (the Local Ensemble Transform Kalman Filter), previously developed for numerical weather prediction, to two different mathematical models of glioblastoma, taking into account likely errors in model parameters and measurement uncertainties in magnetic resonance imaging. The filter can accurately shadow the growth of a representative synthetic tumor for 360 days (six 60-day forecast/update cycles) in the presence of a moderate degree of systematic model error and measurement noise.

Conclusions: The mathematical methodology described here may prove useful for other modeling efforts in biology and oncology. An accurate forecast system for glioblastoma may prove useful in clinical settings for treatment planning and patient counseling.

Reviewers: This article was reviewed by Anthony Almudevar, Tomas Radivoyevitch, and Kristin Swanson (nominated by Georg Luebeck). PMID:22185645
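The LETKF used above is one member of the ensemble Kalman filter family. The analysis step is simplest to show in its stochastic (perturbed-observations) variant rather than the deterministic transform the LETKF uses; the toy scalar example is illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, obs, H, obs_var):
    """Stochastic ensemble Kalman filter analysis step.

    ensemble: (n_members, n_state) forecast ensemble;
    H: (n_obs, n_state) linear observation operator; obs: (n_obs,) observation.
    """
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)
    P = X.T @ X / (n - 1)                            # sample forecast covariance
    S = H @ P @ H.T + obs_var * np.eye(H.shape[0])   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), (n, H.shape[0]))
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

# toy scalar state: broad prior around 0, sharp observation at 5.0
prior = rng.normal(0.0, 2.0, size=(500, 1))
analysis = enkf_update(prior, np.array([5.0]), np.eye(1), obs_var=0.25)
```

The analysis ensemble is pulled toward the observation and its spread shrinks, mirroring how each 60-day MRI observation would tighten the tumor-state estimate in the forecast/update cycle.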
Optimal Cluster Mill Pass Scheduling With an Accurate and Rapid New Strip Crown Model
NASA Astrophysics Data System (ADS)
Malik, Arif S.; Grandhi, Ramana V.; Zipf, Mark E.
2007-05-01
Besides the requirement to roll coiled sheet at high levels of productivity, the optimal pass scheduling of cluster-type reversing cold mills presents the added challenge of assigning mill parameters that facilitate the best possible strip flatness. The pressures of intense global competition, and the requirements for increasingly thinner, higher quality specialty sheet products that are more difficult to roll, continue to force metal producers to commission innovative flatness-control technologies. This means that during the on-line computerized set-up of rolling mills, the mathematical model should not only determine the minimum total number of passes and maximum rolling speed, it should simultaneously optimize the pass-schedule so that desired flatness is assured, either by manual or automated means. In many cases today, however, on-line prediction of strip crown and corresponding flatness for the complex cluster-type rolling mills is typically addressed either by trial and error, by approximate deflection models for equivalent vertical roll-stacks, or by non-physical pattern recognition style models. The prevalence of the aforementioned methods is largely due to the complexity of cluster-type mill configurations and the lack of deflection models with sufficient accuracy and speed for on-line use. Without adequate assignment of the pass-schedule set-up parameters, it may be difficult or impossible to achieve the required strip flatness. In this paper, we demonstrate optimization of cluster mill pass-schedules using a new accurate and rapid strip crown model. This pass-schedule optimization includes computations of the predicted strip thickness profile to validate mathematical constraints. In contrast to many of the existing methods for on-line prediction of strip crown and flatness on cluster mills, the demonstrated method requires minimal prior tuning and no extensive training with collected mill data. To rapidly and accurately solve the multi-contact problem
Discrete state model and accurate estimation of loop entropy of RNA secondary structures.
Zhang, Jian; Lin, Ming; Chen, Rong; Wang, Wei; Liang, Jie
2008-03-28
Conformational entropy makes important contribution to the stability and folding of RNA molecule, but it is challenging to either measure or compute conformational entropy associated with long loops. We develop optimized discrete k-state models of RNA backbone based on known RNA structures for computing entropy of loops, which are modeled as self-avoiding walks. To estimate entropy of hairpin, bulge, internal loop, and multibranch loop of long length (up to 50), we develop an efficient sampling method based on the sequential Monte Carlo principle. Our method considers excluded volume effect. It is general and can be applied to calculating entropy of loops with longer length and arbitrary complexity. For loops of short length, our results are in good agreement with a recent theoretical model and experimental measurement. For long loops, our estimated entropy of hairpin loops is in excellent agreement with the Jacobson-Stockmayer extrapolation model. However, for bulge loops and more complex secondary structures such as internal and multibranch loops, we find that the Jacobson-Stockmayer extrapolation model has large errors. Based on estimated entropy, we have developed empirical formulae for accurate calculation of entropy of long loops in different secondary structures. Our study on the effect of asymmetric size of loops suggests that loop entropy of internal loops is largely determined by the total loop length, and is only marginally affected by the asymmetric size of the two loops. Our finding suggests that the significant asymmetric effects of loop length in internal loops measured by experiments are likely to be partially enthalpic. Our method can be applied to develop improved energy parameters important for studying RNA stability and folding, and for predicting RNA secondary and tertiary structures. The discrete model and the program used to calculate loop entropy can be downloaded at http://gila.bioengr.uic.edu/resources/RNA.html.
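The link between self-avoiding walks and loop entropy is that S = k_B ln Ω, with Ω the number of self-avoiding conformations. A toy version that counts walks exactly on the 2-D square lattice (the paper instead uses a discrete k-state backbone model in 3-D with sequential Monte Carlo sampling, since exact enumeration is infeasible for loops of length 50):

```python
from math import log

def count_saws(n, pos=(0, 0), visited=None):
    """Exact count of n-step self-avoiding walks on the square lattice,
    by depth-first enumeration with an excluded-volume check."""
    if visited is None:
        visited = {pos}
    if n == 0:
        return 1
    x, y = pos
    total = 0
    for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if nxt not in visited:                    # enforce excluded volume
            total += count_saws(n - 1, nxt, visited | {nxt})
    return total

def loop_entropy(n):
    """Conformational entropy in units of k_B: S = ln(number of walks)."""
    return log(count_saws(n))
```

The counts grow roughly exponentially (4, 12, 36, 100, ... for n = 1, 2, 3, 4), which is why sequential Monte Carlo with resampling, rather than enumeration, is needed for long loops.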
Validation of an Accurate Three-Dimensional Helical Slow-Wave Circuit Model
NASA Technical Reports Server (NTRS)
Kory, Carol L.
1997-01-01
The helical slow-wave circuit embodies a helical coil of rectangular tape supported in a metal barrel by dielectric support rods. Although the helix slow-wave circuit remains the mainstay of the traveling-wave tube (TWT) industry because of its exceptionally wide bandwidth, a full helical circuit, without significant dimensional approximations, has not been successfully modeled until now. Numerous attempts have been made to analyze the helical slow-wave circuit so that the performance could be accurately predicted without actually building it, but because of its complex geometry, many geometrical approximations became necessary rendering the previous models inaccurate. In the course of this research it has been demonstrated that using the simulation code, MAFIA, the helical structure can be modeled with actual tape width and thickness, dielectric support rod geometry and materials. To demonstrate the accuracy of the MAFIA model, the cold-test parameters including dispersion, on-axis interaction impedance and attenuation have been calculated for several helical TWT slow-wave circuits with a variety of support rod geometries including rectangular and T-shaped rods, as well as various support rod materials including isotropic, anisotropic and partially metal coated dielectrics. Compared with experimentally measured results, the agreement is excellent. With the accuracy of the MAFIA helical model validated, the code was used to investigate several conventional geometric approximations in an attempt to obtain the most computationally efficient model. Several simplifications were made to a standard model including replacing the helical tape with filaments, and replacing rectangular support rods with shapes conforming to the cylindrical coordinate system with effective permittivity. The approximate models are compared with the standard model in terms of cold-test characteristics and computational time. The model was also used to determine the sensitivity of various
Development and application of accurate analytical models for single active electron potentials
NASA Astrophysics Data System (ADS)
Miller, Michelle; Jaron-Becker, Agnieszka; Becker, Andreas
2015-05-01
The single active electron (SAE) approximation is a theoretical model frequently employed to study scenarios in which inner-shell electrons may productively be treated as frozen spectators to a physical process of interest, and accurate analytical approximations for these potentials are sought as a useful simulation tool. Density functional theory is often used to construct an SAE potential, requiring that a further approximation for the exchange-correlation functional be made. In this study, we employ the Krieger, Li, and Iafrate (KLI) modification to the optimized-effective-potential (OEP) method to reduce the complexity of the problem to the straightforward solution of a system of linear equations, through simple arguments regarding the behavior of the exchange-correlation potential in regions where a single orbital dominates. We apply this method to the solution of atomic and molecular potentials, and use the resulting curves to devise a systematic construction of highly accurate and useful analytical approximations for several systems. Supported by the U.S. Department of Energy (Grant No. DE-FG02-09ER16103), and the U.S. National Science Foundation (Graduate Research Fellowship, Grants No. PHY-1125844 and No. PHY-1068706).
Technology Transfer Automated Retrieval System (TEKTRAN)
The three evapotranspiration (ET) measurement/retrieval techniques used in this study (lysimeter, scintillometer, and remote sensing) vary in their level of complexity, accuracy, resolution, and applicability. The lysimeter, with its point measurement, is the most accurate and direct method to measure ET...
Dicer-TRBP complex formation ensures accurate mammalian microRNA biogenesis.
Wilson, Ross C; Tambe, Akshay; Kidwell, Mary Anne; Noland, Cameron L; Schneider, Catherine P; Doudna, Jennifer A
2015-02-05
RNA-mediated gene silencing in human cells requires the accurate generation of ∼22 nt microRNAs (miRNAs) from double-stranded RNA substrates by the endonuclease Dicer. Although the phylogenetically conserved RNA-binding proteins TRBP and PACT are known to contribute to this process, their mode of Dicer binding and their genome-wide effects on miRNA processing have not been determined. We solved the crystal structure of the human Dicer-TRBP interface, revealing the structural basis of the interaction. Interface residues conserved between TRBP and PACT show that the proteins bind to Dicer in a similar manner and by mutual exclusion. Based on the structure, a catalytically active Dicer that cannot bind TRBP or PACT was designed and introduced into Dicer-deficient mammalian cells, revealing selective defects in guide strand selection. These results demonstrate the role of Dicer-associated RNA binding proteins in maintenance of gene silencing fidelity.
Accurate Computed Enthalpies of Spin Crossover in Iron and Cobalt Complexes
NASA Astrophysics Data System (ADS)
Jensen, Kasper P.; Cirera, Jordi
2009-08-01
Despite their importance in many chemical processes, the relative energies of spin states of transition metal complexes have so far been haunted by large computational errors. By the use of six functionals, B3LYP, BP86, TPSS, TPSSh, M06, and M06L, this work studies nine complexes (seven with iron and two with cobalt) for which experimental enthalpies of spin crossover are available. It is shown that such enthalpies can be used as quantitative benchmarks of a functional's ability to balance electron correlation in both the involved states. TPSSh achieves an unprecedented mean absolute error of ˜11 kJ/mol in spin transition energies, with the local functional M06L a distant second (25 kJ/mol). Other tested functionals give mean absolute errors of 40 kJ/mol or more. This work confirms earlier suggestions that 10% exact exchange is near-optimal for describing the electron correlation effects of first-row transition metal systems. Furthermore, it is shown that given an experimental structure of an iron complex, TPSSh can predict the electronic state corresponding to that experimental structure. We recommend this functional as current state-of-the-art for studying spin crossover and relative energies of close-lying electronic configurations in first-row transition metal systems.
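The benchmark statistic used above, the mean absolute error of computed versus experimental spin-transition enthalpies, is simple to state in code; the values in the usage line below are invented for illustration and are not the paper's data:

```python
def mean_absolute_error(predicted, experimental):
    """Mean absolute error (e.g. in kJ/mol) between computed and
    experimental spin-crossover enthalpies for a set of complexes."""
    if len(predicted) != len(experimental):
        raise ValueError("need one prediction per experimental value")
    return sum(abs(p - e) for p, e in zip(predicted, experimental)) / len(predicted)

# Illustrative values only (not from the study):
mae = mean_absolute_error([10.0, 20.0, 35.0], [12.0, 17.0, 30.0])
```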
Creation of Anatomically Accurate Computer-Aided Design (CAD) Solid Models from Medical Images
NASA Technical Reports Server (NTRS)
Stewart, John E.; Graham, R. Scott; Samareh, Jamshid A.; Oberlander, Eric J.; Broaddus, William C.
1999-01-01
Most surgical instrumentation and implants used in the world today are designed with sophisticated Computer-Aided Design (CAD)/Computer-Aided Manufacturing (CAM) software. This software automates the mechanical development of a product from its conceptual design through manufacturing. CAD software also provides a means of manipulating solid models prior to Finite Element Modeling (FEM). Few surgical products are designed in conjunction with accurate CAD models of human anatomy because of the difficulty with which these models are created. We have developed a novel technique that creates anatomically accurate, patient specific CAD solids from medical images in a matter of minutes.
Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori
2015-05-07
The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is the preferable treatment of the HFE, the computational load has been prohibitive for large, complex solutes such as proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as the sum of 〈UUV〉/2 (where 〈UUV〉 is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and a water reorganization term mainly reflecting the excluded-volume effect. Since 〈UUV〉 can readily be computed through an MD simulation of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can be calculated quantitatively using the morphometric approach (MA), which expresses the term as a linear combination of four geometric measures of the solute, with the corresponding coefficients determined by the energy representation (ER) method. Since the MA completes the computation of the water reorganization term in less than 0.1 s once the coefficients are determined, its use enables an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method, with a substantial reduction of the computational load.
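The decomposition described above can be sketched in a few lines. All numbers below are invented for illustration; in the method itself the coefficients are determined once with the ER method, and the four geometric measures come from the solute geometry:

```python
def hydration_free_energy(mean_pair_energy, measures, coefficients):
    """HFE as the sum of <U_uv>/2 and the water reorganization term,
    the latter expressed (morphometric approach) as a linear combination
    of four geometric measures of the solute: excluded volume, surface
    area, integrated mean curvature, and integrated Gaussian curvature."""
    if len(measures) != 4 or len(coefficients) != 4:
        raise ValueError("the morphometric approach uses exactly four measures")
    reorganization = sum(c * m for c, m in zip(coefficients, measures))
    return 0.5 * mean_pair_energy + reorganization

# Illustrative placeholder values only:
hfe = hydration_free_energy(
    mean_pair_energy=-200.0,               # <U_uv> from an MD ensemble average
    measures=(1500.0, 700.0, 90.0, 12.6),  # V, A, C, X of the solute
    coefficients=(0.04, 0.1, -0.5, 1.0),   # fitted once with the ER method
)
```

Once the coefficients are in hand, evaluating the reorganization term is just this dot product, which is why the method is fast for large solutes.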
Accurate Hf isotope determinations of complex zircons using the "laser ablation split stream" method
NASA Astrophysics Data System (ADS)
Fisher, Christopher M.; Vervoort, Jeffery D.; DuFrane, S. Andrew
2014-01-01
The "laser ablation split stream" (LASS) technique is a powerful tool for mineral-scale isotope analyses and, in particular, for concurrent determination of the age and Hf isotope composition of zircon. Because LASS utilizes two independent mass spectrometers, a large range of masses can be measured during a single ablation, and thus, the same sample volume can be analyzed for multiple geochemical systems. This paper describes a simple analytical setup using a laser ablation system coupled to a single-collector (for U-Pb age determination) and a multicollector (for Hf isotope analyses) inductively coupled plasma mass spectrometer (MC-ICPMS). The ability of the concurrent Hf + age LASS technique to extract meaningful Hf isotope compositions from isotopically zoned zircon is demonstrated using zircons from two Proterozoic gneisses from northern Idaho, USA. These samples illustrate the potential problems associated with inadvertently sampling multiple age and Hf components in zircons, as well as the potential of LASS to recover meaningful Hf isotope compositions. We suggest that such inadvertent sampling of differing age and Hf components can be a significant cause of excess scatter in Hf isotope analyses and demonstrate that the LASS approach offers a robust solution to these issues. The veracity of the approach is demonstrated by accurate analyses of 10 reference zircons with well-characterized ages and Hf isotopic compositions, using laser spot diameters of 30 and 40 µm. In order to expand the database of high-precision Lu-Hf isotope analyses of reference zircons, we present 27 new isotope dilution-MC-ICPMS Lu-Hf isotope measurements of five U-Pb zircon standards: FC1, Temora, R33, QGNG, and 91500.
Fast and accurate calculation of dilute quantum gas using Uehling-Uhlenbeck model equation
NASA Astrophysics Data System (ADS)
Yano, Ryosuke
2017-02-01
The Uehling-Uhlenbeck (U-U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U-U model equation. DSMC analysis based on the U-U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U-U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green-Kubo expression and the shock layer of a dilute Bose gas around a cylinder.
An efficient and accurate model of the coax cable feeding structure for FEM simulations
NASA Technical Reports Server (NTRS)
Gong, Jian; Volakis, John L.
1995-01-01
An efficient and accurate coax cable feed model is proposed for microstrip or cavity-backed patch antennas in the context of a hybrid finite element method (FEM). A TEM mode at the cavity-cable junction is assumed for the FEM truncation and system excitation. Of importance in this implementation is that the cavity unknowns are related to the model fields by enforcing an equipotential condition rather than field continuity. This scheme proved quite accurate and may be applied to other decomposed systems as a connectivity constraint. Comparisons of our predictions with input impedance measurements are presented and demonstrate the substantially improved accuracy of the proposed model.
Accurate numerical forward model for optimal retracking of SIRAL2 SAR echoes over open ocean
NASA Astrophysics Data System (ADS)
Phalippou, L.; Demeestere, F.
2011-12-01
The SAR mode of SIRAL-2 on board Cryosat-2 has been designed to measure primarily sea-ice and continental ice (Wingham et al. 2005). In 2005, K. Raney (KR, 2005) pointed out the improvements brought by SAR altimetry for the open ocean. KR's results were mostly based on 'rule of thumb' considerations of speckle noise reduction due to the higher PRF and to speckle decorrelation after SAR processing. In 2007, Phalippou and Enjolras (PE, 2007) provided the theoretical background for optimal retracking of SAR echoes over the ocean, with a focus on the forward modelling of the power waveforms. The accuracies of geophysical parameters (range, significant wave height, and backscattering coefficient) retrieved from SAR altimeter data were derived, accounting for accurate modelling of the SAR echo shape and speckle noise. The step forward to optimal retracking using a numerical forward model (NFM) was also pointed out. An NFM of the power waveform avoids analytical approximation, a guarantee of minimising geophysically dependent biases in the retrieval. NFMs have been used for many years, in operational meteorology in particular, for retrieving temperature and humidity profiles from IR and microwave radiometers, as the radiative transfer function is complex (Eyre, 1989). So far this technique has not been used in the field of conventional ocean altimetry, as analytical models (e.g. Brown's model) were found to give sufficient accuracy. However, although an NFM seems desirable even for conventional nadir altimetry, it becomes inevitable if one wishes to process SAR altimeter data, as the transfer function is too complex to be approximated by a simple analytical function. This was clearly demonstrated in PE 2007. The paper describes the background to SAR data retracking over the open ocean. Since PE 2007, improvements have been brought to the forward model, and it is shown that the altimeter on-ground and in-flight characterisation (e.g. antenna pattern, range impulse response, azimuth impulse response
Bayesian parameter estimation of a k-ε model for accurate jet-in-crossflow simulations
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; Dechant, Lawrence
2016-05-31
Reynolds-averaged Navier–Stokes models are not very accurate for high-Reynolds-number compressible jet-in-crossflow interactions. The inaccuracy arises from the use of inappropriate model parameters and model-form errors in the Reynolds-averaged Navier–Stokes model. In this study, the hypothesis is pursued that Reynolds-averaged Navier–Stokes predictions can be significantly improved by using parameters inferred from experimental measurements of a supersonic jet interacting with a transonic crossflow.
Reduced-Complexity Models for Network Performance Prediction
2005-05-01
...interconnected in complex ways, with millions of users sending traffic over the network. To understand such a complex system, it is necessary to develop accurate, yet simple, models to describe the performance...
Improving light propagation Monte Carlo simulations with accurate 3D modeling of skin tissue
Paquit, Vincent C; Price, Jeffery R; Meriaudeau, Fabrice; Tobin Jr, Kenneth William
2008-01-01
In this paper, we present a 3D light propagation model to simulate multispectral reflectance images of large skin surface areas. In particular, we aim to simulate more accurately the effects of various physiological properties of the skin in the case of subcutaneous vein imaging compared to existing models. Our method combines a Monte Carlo light propagation model, a realistic three-dimensional model of the skin using parametric surfaces and a vision system for data acquisition. We describe our model in detail, present results from the Monte Carlo modeling and compare our results with those obtained with a well established Monte Carlo model and with real skin reflectance images.
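The elementary move in a Monte Carlo light-propagation model of this kind is sampling a photon's free path length from the Beer-Lambert distribution by inverting its cumulative distribution. A textbook sketch follows (not the authors' implementation; the coefficient value is an arbitrary example):

```python
import math
import random

def sample_free_path(mu_t, rng):
    """Sample a photon free-path length s from p(s) = mu_t * exp(-mu_t * s)
    by inverting the CDF; mu_t = mu_a + mu_s is the total interaction
    coefficient of the tissue (absorption plus scattering)."""
    return -math.log(1.0 - rng.random()) / mu_t

# The mean free path is 1/mu_t; check by sampling (mu_t = 10 is arbitrary):
rng = random.Random(0)
mean_path = sum(sample_free_path(10.0, rng) for _ in range(20000)) / 20000
```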
Accurate modeling of high-repetition rate ultrashort pulse amplification in optical fibers
Lindberg, Robert; Zeil, Peter; Malmström, Mikael; Laurell, Fredrik; Pasiskevicius, Valdas
2016-01-01
A numerical model for amplification of ultrashort pulses with high repetition rates in fiber amplifiers is presented. The pulse propagation is modeled by jointly solving the steady-state rate equations and the generalized nonlinear Schrödinger equation, which allows accurate treatment of nonlinear and dispersive effects whilst considering arbitrary spatial and spectral gain dependencies. Comparison of data acquired by using the developed model and experimental results prove to be in good agreement. PMID:27713496
Victora, Andrea; Möller, Heiko M.; Exner, Thomas E.
2014-01-01
NMR chemical shift predictions based on empirical methods are nowadays indispensable tools during resonance assignment and 3D structure calculation of proteins. However, owing to the very limited statistical data basis, such methods are still in their infancy in the field of nucleic acids, especially when non-canonical structures and nucleic acid complexes are considered. Here, we present an ab initio approach for predicting proton chemical shifts of arbitrary nucleic acid structures based on state-of-the-art fragment-based quantum chemical calculations. We tested our prediction method on a diverse set of nucleic acid structures including double-stranded DNA, hairpins, DNA/protein complexes, and chemically modified DNA. Overall, our quantum chemical calculations yield highly accurate predictions, with mean absolute deviations of 0.3–0.6 ppm and correlation coefficients (r2) usually above 0.9. This will allow for identifying misassignments and validating 3D structures. Furthermore, our calculations reveal that chemical shifts of protons involved in hydrogen bonding are predicted significantly less accurately. This is in part caused by insufficient inclusion of solvation effects, but it also points toward shortcomings of the current force fields used for structure determination of nucleic acids. Our quantum chemical calculations could therefore provide input for force field optimization. PMID:25404135
Accurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model
Li, Zhen; Zhang, Renyu
2017-01-01
Motivation Protein contacts contain key information for the understanding of protein structure and function and thus, contact prediction from sequence is an important problem. Recently exciting progress has been made on this problem, but the predicted contacts for proteins without many sequence homologs are still of low quality and not very useful for de novo structure prediction. Method This paper presents a new deep learning method that predicts contacts by integrating both evolutionary coupling (EC) and sequence conservation information through an ultra-deep neural network formed by two deep residual neural networks. The first residual network conducts a series of 1-dimensional convolutional transformations of sequential features; the second residual network conducts a series of 2-dimensional convolutional transformations of pairwise information, including the output of the first residual network, EC information, and pairwise potentials. By using very deep residual networks, we can accurately model contact occurrence patterns and the complex sequence-structure relationship and thus obtain higher-quality contact predictions regardless of how many sequence homologs are available for the proteins in question. Results Our method greatly outperforms existing methods and leads to much more accurate contact-assisted folding. Tested on 105 CASP11 targets, 76 past CAMEO hard targets, and 398 membrane proteins, the average top L long-range prediction accuracy obtained by our method, one representative EC method CCMpred, and the CASP11 winner MetaPSICOV is 0.47, 0.21 and 0.30, respectively; the average top L/10 long-range accuracy of our method, CCMpred, and MetaPSICOV is 0.77, 0.47 and 0.59, respectively. Ab initio folding using our predicted contacts as restraints but without any force fields can yield correct folds (i.e., TMscore>0.6) for 203 of the 579 test proteins, while that using MetaPSICOV- and CCMpred-predicted contacts can do so for only 79 and 62 of them, respectively. Our contact
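The top-L long-range accuracy metric used in the evaluation above can be sketched as follows. The pair-score data structure is our own choice, and the separation cutoff of 24 residues is the common convention for "long-range" contacts, assumed here rather than quoted from the paper:

```python
def top_l_accuracy(pair_scores, native_contacts, seq_len, frac=1.0, min_sep=24):
    """Fraction of the top (seq_len * frac) highest-scored long-range residue
    pairs that are contacts in the native structure. A pair (i, j) is
    long-range when its sequence separation |i - j| >= min_sep."""
    n_top = max(1, int(seq_len * frac))
    long_range = [(pair, score) for pair, score in pair_scores.items()
                  if abs(pair[0] - pair[1]) >= min_sep]
    ranked = sorted(long_range, key=lambda item: item[1], reverse=True)[:n_top]
    if not ranked:
        return 0.0
    return sum(1 for pair, _ in ranked if pair in native_contacts) / len(ranked)
```

With frac=0.1 this gives the top L/10 accuracy also reported above.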
Slip complexity in dynamic models of earthquake faults.
Langer, J S; Carlson, J M; Myers, C R; Shaw, B E
1996-01-01
We summarize recent evidence that models of earthquake faults with dynamically unstable friction laws but no externally imposed heterogeneities can exhibit slip complexity. Two models are described here. The first is a one-dimensional model with velocity-weakening stick-slip friction; the second is a two-dimensional elastodynamic model with slip-weakening friction. Both exhibit small-event complexity and chaotic sequences of large characteristic events. The large events in both models are composed of Heaton pulses. We argue that the key ingredients of these models are reasonably accurate representations of the properties of real faults. PMID:11607671
Complex Networks in Psychological Models
NASA Astrophysics Data System (ADS)
Wedemann, R. S.; Carvalho, L. S. A. V. D.; Donangelo, R.
We develop schematic, self-organizing, neural-network models to describe mechanisms associated with mental processes, by a neurocomputational substrate. These models are examples of real world complex networks with interesting general topological structures. Considering dopaminergic signal-to-noise neuronal modulation in the central nervous system, we propose neural network models to explain development of cortical map structure and dynamics of memory access, and unify different mental processes into a single neurocomputational substrate. Based on our neural network models, neurotic behavior may be understood as an associative memory process in the brain, and the linguistic, symbolic associative process involved in psychoanalytic working-through can be mapped onto a corresponding process of reconfiguration of the neural network. The models are illustrated through computer simulations, where we varied dopaminergic modulation and observed the self-organizing emergent patterns at the resulting semantic map, interpreting them as different manifestations of mental functioning, from psychotic through to normal and neurotic behavior, and creativity.
Accurate protein structure modeling using sparse NMR data and homologous structure information.
Thompson, James M; Sgourakis, Nikolaos G; Liu, Gaohua; Rossi, Paolo; Tang, Yuefeng; Mills, Jeffrey L; Szyperski, Thomas; Montelione, Gaetano T; Baker, David
2012-06-19
While information from homologous structures plays a central role in X-ray structure determination by molecular replacement, such information is rarely used in NMR structure determination because it can be incorrect, both locally and globally, when evolutionary relationships are inferred incorrectly or there has been considerable evolutionary structural divergence. Here we describe a method that allows robust modeling of protein structures of up to 225 residues by combining (1)H(N), (13)C, and (15)N backbone and (13)Cβ chemical shift data, distance restraints derived from homologous structures, and a physically realistic all-atom energy function. Accurate models are distinguished from inaccurate models generated using incorrect sequence alignments by requiring that (i) the all-atom energies of models generated using the restraints are lower than models generated in unrestrained calculations and (ii) the low-energy structures converge to within 2.0 Å backbone rmsd over 75% of the protein. Benchmark calculations on known structures and blind targets show that the method can accurately model protein structures, even with very remote homology information, to a backbone rmsd of 1.2-1.9 Å relative to the conventional determined NMR ensembles and of 0.9-1.6 Å relative to X-ray structures for well-defined regions of the protein structures. This approach facilitates the accurate modeling of protein structures using backbone chemical shift data without need for side-chain resonance assignments and extensive analysis of NOESY cross-peak assignments.
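The two acceptance tests described above (restrained energies must beat unrestrained ones, and low-energy structures must converge) can be sketched as a minimal check; the exact selection of "low-energy" models in the actual protocol may differ, and the function name is ours:

```python
def accept_models(restrained_energies, unrestrained_energies,
                  pairwise_rmsds, rmsd_cutoff=2.0):
    """Illustrative version of the two acceptance criteria:
    (i)  the best all-atom energy of restrained models must be lower than
         the best energy from unrestrained calculations;
    (ii) the low-energy structures must converge: pairwise backbone rmsd
         (computed over the well-ordered 75% of residues) below the cutoff."""
    energy_ok = min(restrained_energies) < min(unrestrained_energies)
    converged = all(r <= rmsd_cutoff for r in pairwise_rmsds)
    return energy_ok and converged
```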
Vargas, Alfredo; Krivokapic, Itana; Hauser, Andreas; Lawson Daku, Latévi Max
2013-03-21
We report a detailed DFT study of the energetic and structural properties of the spin-crossover Co(ii) complex [Co(tpy)(2)](2+) (tpy = 2,2':6',2''-terpyridine) in the low-spin (LS) and the high-spin (HS) states, using several generalized gradient approximation and hybrid functionals. In either spin-state, the results obtained with the functionals are consistent with one another and in good agreement with available experimental data. Although the different functionals correctly predict the LS state as the electronic ground state of [Co(tpy)(2)](2+), they give estimates of the HS-LS zero-point energy difference which strongly depend on the functional used. This dependency on the functional was also reported for the DFT estimates of the zero-point energy difference in the HS complex [Co(bpy)(3)](2+) (bpy = 2,2'-bipyridine) [A. Vargas, A. Hauser and L. M. Lawson Daku, J. Chem. Theory Comput., 2009, 5, 97]. The comparison of the and estimates showed that all functionals correctly predict an increase of the zero-point energy difference upon the bpy → tpy ligand substitution, which furthermore weakly depends on the functionals, amounting to . From these results and basic thermodynamic considerations, we establish that, despite their limitations, current DFT methods can be applied to the accurate determination of the spin-state energetics of complexes of a transition metal ion, or of these complexes in different environments, provided that the spin-state energetics is accurately known in one case. Thus, making use of the availability of a highly accurate ab initio estimate of the HS-LS energy difference in the complex [Co(NCH)(6)](2+) [L. M. Lawson Daku, F. Aquilante, T. W. Robinson and A. Hauser, J. Chem. Theory Comput., 2012, 8, 4216], we obtain for [Co(tpy)(2)](2+) and [Co(bpy)(3)](2+) best estimates of and , in good agreement with the known magnetic behaviour of the two complexes.
Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.
Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng
2015-06-10
In this paper, we develop a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigate the validity range of this generalized model and analytically describe the sufficient conditions for its validity. A practical example reveals the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.
Development of modified cable models to simulate accurate neuronal active behaviors
2014-01-01
In large network and single three-dimensional (3-D) neuron simulations, high computing speed dictates using reduced cable models to simulate neuronal firing behaviors. However, these models are unwarranted under active conditions and lack accurate representation of dendritic active conductances that greatly shape neuronal firing. Here, realistic 3-D (R3D) models (which contain full anatomical details of dendrites) of spinal motoneurons were systematically compared with their reduced single unbranched cable (SUC, which reduces the dendrites to a single electrically equivalent cable) counterpart under passive and active conditions. The SUC models matched the R3D model's passive properties but failed to match key active properties, especially active behaviors originating from dendrites. For instance, persistent inward currents (PIC) hysteresis, frequency-current (FI) relationship secondary range slope, firing hysteresis, plateau potential partial deactivation, staircase currents, synaptic current transfer ratio, and regional FI relationships were not accurately reproduced by the SUC models. The dendritic morphology oversimplification and lack of dendritic active conductances spatial segregation in the SUC models caused significant underestimation of those behaviors. Next, SUC models were modified by adding key branching features in an attempt to restore their active behaviors. The addition of primary dendritic branching only partially restored some active behaviors, whereas the addition of secondary dendritic branching restored most behaviors. Importantly, the proposed modified models successfully replicated the active properties without sacrificing model simplicity, making them attractive candidates for running R3D single neuron and network simulations with accurate firing behaviors. The present results indicate that using reduced models to examine PIC behaviors in spinal motoneurons is unwarranted. PMID:25277743
Accurate path integration in continuous attractor network models of grid cells.
Burak, Yoram; Fiete, Ila R
2009-02-01
Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
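The error-accumulation argument above is essentially a random walk: integrating a noisy velocity signal makes the root-mean-square position error grow like the square root of the number of steps. A minimal illustration of that scaling (not the authors' network model; noise level and step counts are arbitrary):

```python
import math
import random

def integration_error(n_steps, noise_sd, seed=0):
    """Absolute position error after integrating n_steps velocity estimates,
    each carrying independent Gaussian noise. Because the noise terms add
    incoherently, the RMS error grows like noise_sd * sqrt(n_steps)."""
    rng = random.Random(seed)
    return abs(sum(rng.gauss(0.0, noise_sd) for _ in range(n_steps)))

# RMS error over many runs approaches noise_sd * sqrt(n_steps) = 10 here:
errors = [integration_error(100, 1.0, seed=s) for s in range(2000)]
rms = math.sqrt(sum(e * e for e in errors) / len(errors))
```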
Can phenological models predict tree phenology accurately under climate change conditions?
NASA Astrophysics Data System (ADS)
Chuine, Isabelle; Bonhomme, Marc; Legave, Jean Michel; García de Cortázar-Atauri, Inaki; Charrier, Guillaume; Lacointe, André; Améglio, Thierry
2014-05-01
The onset of the growing season of trees has advanced globally by 2.3 days/decade during the last 50 years because of global warming, and this trend is predicted to continue according to climate forecasts. The effect of temperature on plant phenology is however not linear, because temperature has a dual effect on bud development. On the one hand, low temperatures are necessary to break bud dormancy; on the other hand, higher temperatures are necessary to promote bud cell growth afterwards. Increasing phenological changes in temperate woody species have strong impacts on forest tree distribution and productivity, as well as on crop cultivation areas. Accurate predictions of tree phenology are therefore a prerequisite to understand and foresee the impacts of climate change on forests and agrosystems. Different process-based models have been developed in the last two decades to predict the date of budburst or flowering of woody species. There are two main families: (1) one-phase models, which consider only the ecodormancy phase and make the assumption that endodormancy is always broken before adequate climatic conditions for cell growth occur; and (2) two-phase models, which consider both the endodormancy and ecodormancy phases and predict a date of dormancy break which varies from year to year. So far, one-phase models have been able to accurately predict tree bud break and flowering under historical climate. However, because they do not consider what happens prior to ecodormancy, and especially the possible negative effect of winter temperature warming on dormancy break, it seems unlikely that they can provide accurate predictions under future climate conditions. It is indeed well known that a lack of low temperatures results in abnormal patterns of bud break and development in temperate fruit trees. Accurate modelling of the dormancy break date has thus become a major issue in phenology modelling. Two-phase phenological models predict that global warming should delay
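A one-phase ("thermal time") model of the kind discussed above can be sketched in a few lines: forcing units accumulate above a base temperature, and budburst is predicted when the sum reaches a requirement. The base temperature and forcing requirement below are illustrative placeholders, not fitted values from any study:

```python
def predict_budburst(daily_temps, base_temp=5.0, forcing_requirement=150.0):
    """One-phase thermal-time budburst model: accumulate daily forcing
    above base_temp from a fixed start date; budburst is predicted on the
    first day the accumulated forcing reaches forcing_requirement.
    Returns the 0-based day index, or None if never reached."""
    forcing = 0.0
    for day, temp in enumerate(daily_temps):
        forcing += max(0.0, temp - base_temp)
        if forcing >= forcing_requirement:
            return day
    return None
```

A two-phase model would prepend an analogous chilling accumulation that must be satisfied before forcing begins, which is exactly what the one-phase family omits.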
Seth A Veitzer
2008-10-21
Stray electrons are a major factor limiting the performance of many accelerators. Because heavy-ion fusion (HIF) accelerators will operate in regimes of higher current and with walls much closer to the beam than today's accelerators, stray electrons might have a large, detrimental effect on the performance of an HIF accelerator. A primary source of stray electrons is electrons generated when halo ions strike the beam pipe walls. There is some research on these types of secondary electrons for the HIF community to draw upon, but this work is missing one crucial ingredient: the effect of grazing incidence. The overall goal of this project was to develop the numerical tools necessary to accurately model the effect of grazing incidence on the behavior of halo ions in an HIF accelerator and, further, to provide accurate models of heavy-ion stopping powers with applications to ICF, WDM, and HEDP experiments.
Flynn, Jullien M; Brown, Emily A; Chain, Frédéric J J; MacIsaac, Hugh J; Cristescu, Melania E
2015-01-01
Metabarcoding has the potential to become a rapid, sensitive, and effective approach for identifying species in complex environmental samples. Accurate molecular identification of species depends on the ability to generate operational taxonomic units (OTUs) that correspond to biological species. Due to the sometimes enormous estimates of biodiversity using this method, there is a great need to test the efficacy of data analysis methods used to derive OTUs. Here, we evaluate the performance of various methods for clustering length variable 18S amplicons from complex samples into OTUs using a mock community and a natural community of zooplankton species. We compare analytic procedures consisting of a combination of (1) stringent and relaxed data filtering, (2) singleton sequences included and removed, (3) three commonly used clustering algorithms (mothur, UCLUST, and UPARSE), and (4) three methods of treating alignment gaps when calculating sequence divergence. Depending on the combination of methods used, the number of OTUs varied by nearly two orders of magnitude for the mock community (60–5068 OTUs) and three orders of magnitude for the natural community (22–22191 OTUs). The use of relaxed filtering and the inclusion of singletons greatly inflated OTU numbers without increasing the ability to recover species. Our results also suggest that the method used to treat gaps when calculating sequence divergence can have a great impact on the number of OTUs. Our findings are particularly relevant to studies that cover taxonomically diverse species and employ markers such as rRNA genes in which length variation is extensive. PMID:26078860
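The sensitivity of divergence estimates to gap treatment can be sketched with a toy pairwise comparison. This is an illustration of the general idea only, not the mothur, UCLUST, or UPARSE implementations, and the two gap modes shown are assumed simplifications of the options compared in the study.

```python
# Sketch: how the treatment of alignment gaps changes pairwise divergence.
# Illustrative only; real pipelines use more elaborate distance definitions.

def divergence(seq_a, seq_b, gap_mode="as_difference"):
    """Fraction of differing columns between two equal-length aligned sequences.

    gap_mode:
      "as_difference" - a gap opposite a base counts as a mismatch
      "ignore"        - any column containing a gap is skipped entirely
    """
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    diffs = 0
    compared = 0
    for a, b in zip(seq_a, seq_b):
        has_gap = (a == "-") or (b == "-")
        if has_gap and gap_mode == "ignore":
            continue
        compared += 1
        if a != b:
            diffs += 1
    return diffs / compared if compared else 0.0
```

For length-variable markers such as 18S, gap columns are common, so the same pair of sequences can fall on opposite sides of a clustering threshold depending purely on this choice, which is one way OTU counts inflate.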
Accurate and efficient halo-based galaxy clustering modelling with simulations
NASA Astrophysics Data System (ADS)
Zheng, Zheng; Guo, Hong
2016-06-01
Small- and intermediate-scale galaxy clustering can be used to establish the galaxy-halo connection to study galaxy formation and evolution and to tighten constraints on cosmological parameters. With the increasing precision of galaxy clustering measurements from ongoing and forthcoming large galaxy surveys, accurate models are required to interpret the data and extract relevant information. We introduce a method based on high-resolution N-body simulations to accurately and efficiently model the galaxy two-point correlation functions (2PCFs) in projected and redshift spaces. The basic idea is to tabulate all information of haloes in the simulations necessary for computing the galaxy 2PCFs within the framework of halo occupation distribution or conditional luminosity function. It is equivalent to populating galaxies to dark matter haloes and using the mock 2PCF measurements as the model predictions. Besides the accurate 2PCF calculations, the method is also fast and therefore enables an efficient exploration of the parameter space. As an example of the method, we decompose the redshift-space galaxy 2PCF into different components based on the type of galaxy pairs and show the redshift-space distortion effect in each component. The generalizations and limitations of the method are discussed.
Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang
2015-10-29
Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
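For context, the beam-theory baseline that the plate-theory calibration above refines can be sketched with the classical Euler-Bernoulli formula for an end-loaded rectangular cantilever. The numeric example uses assumed, typical silicon-cantilever dimensions, not values from this study.

```python
# Classical beam-theory normal spring constant for a rectangular cantilever.
# The plate-theory modeling described in the text adds Poisson and
# three-dimensional corrections on top of this baseline.

def beam_spring_constant(E, w, t, L):
    """Euler-Bernoulli spring constant (N/m) for an end-loaded rectangular
    cantilever: k = E * w * t**3 / (4 * L**3), with Young's modulus E (Pa),
    width w, thickness t, and length L (all in meters)."""
    return E * w * t**3 / (4.0 * L**3)

# Illustrative silicon cantilever: E = 169 GPa, w = 30 um, t = 2 um, L = 200 um
k = beam_spring_constant(169e9, 30e-6, 2e-6, 200e-6)  # about 1.27 N/m
```

The strong t**3 and L**3 dependence is why small dimensional uncertainties dominate calibration error, and why refined (plate-theory or finite element) models matter for accurate work.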
Monte Carlo modeling provides accurate calibration factors for radionuclide activity meters.
Zagni, F; Cicoria, G; Lucconi, G; Infantino, A; Lodi, F; Marengo, M
2014-12-01
Accurate determination of calibration factors for radionuclide activity meters is crucial for quantitative studies and for the optimization step of radiation protection, as these detectors are widespread in radiopharmacy and nuclear medicine facilities. In this work we developed a Monte Carlo model of a widely used activity meter, using the Geant4 simulation toolkit; more precisely, the "PENELOPE" EM physics models were employed. The model was validated by means of several certified sources, traceable to primary activity standards, and other sources locally standardized with spectrometry measurements, plus other experimental tests. Great care was taken to accurately reproduce the geometrical details of the gas chamber and of the activity sources, each of which is different in shape and enclosed in a unique container. Both the relative calibration factors and the ionization currents obtained with simulations were compared against experimental measurements; further tests were carried out, such as comparing the relative response of the chamber for a source placed at different positions. The results showed a satisfactory level of accuracy in the energy range of interest, with discrepancies below 4% for all tested parameters. This shows that accurate Monte Carlo modeling of this type of detector is feasible using the low-energy physics models embedded in Geant4. The Monte Carlo model constitutes a powerful tool for the first-instance determination of new calibration factors for non-standard radionuclides or custom containers when a reference source is not available. Moreover, the model provides a setup for further research and optimization with regard to the materials and geometrical details of the measuring arrangement, such as the ionization chamber itself or the container configuration.
Using a highly accurate self-stop Cu-CMP model in the design flow
NASA Astrophysics Data System (ADS)
Izuha, Kyoko; Sakairi, Takashi; Shibuki, Shunichi; Bora, Monalisa; Hatem, Osama; Ghulghazaryan, Ruben; Strecker, Norbert; Wilson, Jeff; Takeshita, Noritsugu
2010-03-01
An accurate model for the self-stop copper chemical mechanical polishing (Cu-CMP) process has been developed using CMP modeling technology from Mentor Graphics. This technology was applied on data from Sony to create and optimize copper electroplating (ECD), Cu-CMP, and barrier metal polishing (BM-CMP) process models. These models take into account layout pattern dependency, long range diffusion and planarization effects, as well as microloading from local pattern density. The developed ECD model accurately predicted erosion and dishing over the entire range of width and space combinations present on the test chip. Then, the results of the ECD model were used as an initial structure to model the Cu-CMP step. Subsequently, the result of Cu-CMP was used for the BM-CMP model creation. The created model was successful in reproducing the measured data, including trends for a broad range of metal width and densities. Its robustness is demonstrated by the fact that it gives acceptable prediction of final copper thickness data although the calibration data included noise from line scan measurements. Accuracy of the Cu-CMP model has a great impact on the prediction results for BM-CMP. This is a critical feature for the modeling of high precision CMP such as self-stop Cu-CMP. Finally, the developed model could successfully extract planarity hotspots that helped identify potential problems in production chips before they were manufactured. The output thickness values of metal and dielectric can be used to drive layout enhancement tools and improve the accuracy of timing analysis.
Luijsterburg, Martijn S.; von Bornstaedt, Gesa; Gourdin, Audrey M.; Politi, Antonio Z.; Moné, Martijn J.; Warmerdam, Daniël O.; Goedhart, Joachim; Vermeulen, Wim
2010-01-01
To understand how multiprotein complexes assemble and function on chromatin, we combined quantitative analysis of the mammalian nucleotide excision DNA repair (NER) machinery in living cells with computational modeling. We found that individual NER components exchange within tens of seconds between the bound state in repair complexes and the diffusive state in the nucleoplasm, whereas their net accumulation at repair sites evolves over several hours. Based on these in vivo data, we developed a predictive kinetic model for the assembly and function of repair complexes. DNA repair is orchestrated by the interplay of reversible protein-binding events and progressive enzymatic modifications of the chromatin substrate. We demonstrate that faithful recognition of DNA lesions is time consuming, whereas, subsequently, repair complexes form rapidly through random and reversible assembly of NER proteins. Our kinetic analysis of the NER system reveals a fundamental conflict between the specificity and efficiency of chromatin-associated protein machineries and shows how a trade-off is negotiated through the reversibility of protein binding. PMID:20439997
Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Sun, Jianwei
2015-01-13
A correct description of the anion-π interaction is essential for the design of selective anion receptors and channels and important for advances in the field of supramolecular chemistry. However, accurate, precise, and efficient calculations of this interaction are challenging, and such calculations are lacking in the literature. In this article, by testing sets of 20 binary anion-π complexes of fluoride, chloride, bromide, nitrate, or carbonate ions with hexafluorobenzene, 1,3,5-trifluorobenzene, 2,4,6-trifluoro-1,3,5-triazine, or 1,3,5-triazine and 30 ternary π-anion-π' sandwich complexes composed of the same monomers, we suggest domain-based local-pair natural orbital coupled cluster energies extrapolated to the complete basis-set limit as reference values. We give a detailed explanation of the origin of anion-π interactions, using the permanent quadrupole moments, static dipole polarizabilities, and electrostatic potential maps. We use symmetry-adapted perturbation theory (SAPT) to calculate the components of the anion-π interaction energies. We examine the performance of the direct random phase approximation (dRPA), the second-order screened exchange (SOSEX), the local-pair natural-orbital (LPNO) coupled electron pair approximation (CEPA), and several dispersion-corrected density functionals (including generalized gradient approximation (GGA), meta-GGA, and double hybrid density functionals). The LPNO-CEPA/1 results show the best agreement with the reference results. The dRPA method is only slightly less accurate and precise than LPNO-CEPA/1, but it is considerably more efficient (6-17 times faster) for the binary complexes studied in this paper. For the 30 ternary π-anion-π' sandwich complexes, we give dRPA interaction energies as reference values. The double hybrid functionals are much more efficient but less accurate and precise than dRPA. The dispersion-corrected double hybrid PWPB95-D3(BJ) and B2PLYP-D3(BJ) functionals perform better than the GGA and meta
Yield-Ensuring DAC-Embedded Opamp Design Based on Accurate Behavioral Model Development
NASA Astrophysics Data System (ADS)
Jang, Yeong-Shin; Nguyen, Hoai-Nam; Ryu, Seung-Tak; Lee, Sang-Gug
An accurate behavioral model of a DAC-embedded opamp (DAC-opamp) is developed for a yield-ensuring LCD column driver design. A lookup table for the V-I curve of the unit differential pair in the DAC-opamp is extracted from a circuit simulation and is later manipulated through random error insertion. A virtual-ground assumption simplifies the output voltage estimation algorithm. The developed behavioral model of a 5-bit DAC-opamp shows good agreement with the circuit-level simulation, with less than 5% INL difference.
Guggenheim, James A.; Bargigia, Ilaria; Farina, Andrea; Pifferi, Antonio; Dehghani, Hamid
2016-01-01
A novel straightforward, accessible and efficient approach is presented for performing hyperspectral time-domain diffuse optical spectroscopy to determine the optical properties of samples accurately using geometry specific models. To allow bulk parameter recovery from measured spectra, a set of libraries based on a numerical model of the domain being investigated is developed as opposed to the conventional approach of using an analytical semi-infinite slab approximation, which is known and shown to introduce boundary effects. Results demonstrate that the method improves the accuracy of derived spectrally varying optical properties over the use of the semi-infinite approximation. PMID:27699137
Body charge modelling for accurate simulation of small-signal behaviour in floating body SOI
NASA Astrophysics Data System (ADS)
Benson, James; Redman-White, William; D'Halleweyn, Nele V.; Easson, Craig A.; Uren, Michael J.
2002-04-01
We show that careful modelling of body node elements in floating body PD-SOI MOSFET compact models is required in order to obtain accurate small-signal simulation results in the saturation region. The body network modifies the saturation output conductance of the device via the body-source transconductance, resulting in a pole/zero pair being introduced in the conductance-frequency response. We show that neglecting the presence of body charge in the saturation region can often yield inaccurate values for the body capacitances, which in turn can adversely affect the modelling of the output conductance above the pole/zero frequency. We conclude that the underlying cause of this problem is the use of separate models for the intrinsic and extrinsic capacitances. Finally, we present a simple saturation body charge model which can greatly improve small-signal simulation accuracy for floating body devices.
Molecular modeling of polynucleotide complexes.
Meneksedag-Erol, Deniz; Tang, Tian; Uludağ, Hasan
2014-08-01
Delivery of polynucleotides into patient cells is a promising strategy for the treatment of genetic disorders. Gene therapy aims either to synthesize desired proteins (DNA delivery) or to suppress the expression of endogenous genes (siRNA delivery). Carriers constitute an important part of gene therapeutics because of limitations arising from the pharmacokinetics of polynucleotides. Non-viral carriers such as polymers and lipids protect polynucleotides from intra- and extracellular threats and facilitate the formation of cell-permeable nanoparticles through shielding and/or bridging of multiple polynucleotide molecules. Formation of nanoparticulate systems with optimal features, their cellular uptake, and intracellular trafficking are crucial steps for effective gene therapy. Despite the large amount of experimental work pursued, critical features of the nanoparticles as well as their processing mechanisms are still under debate owing to the lack of instrumentation at atomic resolution. Molecular modeling-based computational approaches can shed light on the atomic-level details of gene delivery systems and thus provide valuable input that cannot readily be obtained with experimental techniques. Here, we review the molecular modeling research pursued on critical gene therapy steps, highlight the knowledge gaps in the field, and provide future perspectives. Existing modeling studies have revealed several important aspects of gene delivery, such as nanoparticle formation dynamics with various carriers, the effect of carrier properties on complexation, carrier conformations in endosomal stages, and the release of polynucleotides from carriers. Rate-limiting steps related to cellular events (i.e., internalization, endosomal escape, and nuclear uptake) are now beginning to be addressed by computational approaches. Limitations arising from current computational power and the accuracy of modeling have been hindering the development of more realistic models. With the help of rapidly-growing computational power
Tao, Jianmin; Rappe, Andrew M.
2016-01-21
Due to the absence of the long-range van der Waals (vdW) interaction, conventional density functional theory (DFT) often fails in the description of molecular complexes and solids. In recent years, considerable progress has been made in the development of the vdW correction. However, the vdW correction based on the leading-order coefficient C6 alone can only achieve limited accuracy, while accurate modeling of higher-order coefficients remains a formidable task, due to the strong non-additivity effect. Here, we apply a model dynamic multipole polarizability within a modified single-frequency approximation to calculate C8 and C10 between small molecules. We find that the higher-order vdW coefficients from this model can achieve remarkable accuracy, with mean absolute relative deviations of 5% for C8 and 7% for C10. Inclusion of accurate higher-order contributions in the vdW correction will effectively enhance the predictive power of DFT in condensed matter physics and quantum chemistry.
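As a simpler point of comparison for the model polarizability approach above, the classic London single-effective-frequency approximation estimates the leading coefficient C6 from static polarizabilities and ionization energies. This is a textbook baseline, not the multipole model of the paper.

```python
# London-formula estimate of the leading vdW dispersion coefficient.
# Illustrative baseline only; the paper's model dynamic multipole
# polarizability goes beyond this one-term approximation.

def london_c6(alpha_a, alpha_b, i_a, i_b):
    """C6 = (3/2) * alpha_A * alpha_B * I_A * I_B / (I_A + I_B),
    with static dipole polarizabilities alpha and ionization energies I,
    all in atomic units; returns C6 in atomic units."""
    return 1.5 * alpha_a * alpha_b * i_a * i_b / (i_a + i_b)
```

Like all single-frequency approximations, it captures the correct scaling with polarizability but only approximate magnitudes, which motivates the more careful frequency-dependent treatments discussed in the text.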
Accurate prediction of wall shear stress in a stented artery: newtonian versus non-newtonian models.
Mejia, Juan; Mongrain, Rosaire; Bertrand, Olivier F
2011-07-01
A significant amount of evidence linking wall shear stress to neointimal hyperplasia has been reported in the literature. As a result, numerical and experimental models have been created to study the influence of stent design on wall shear stress. Traditionally, blood has been assumed to behave as a Newtonian fluid, but recently that assumption has been challenged. The use of a linear model, however, can reduce computational cost and allows the use of Newtonian fluids (e.g., glycerine and water) instead of a blood analog fluid in an experimental setup. It is therefore of interest whether a linear model can be used to accurately predict the wall shear stress caused by a non-Newtonian fluid such as blood within a stented arterial segment. The present work compares the wall shear stress obtained using two linear models and one nonlinear model under the same flow waveform. All numerical models are fully three-dimensional, transient, and incorporate a realistic stent geometry. It is shown that traditional linear models (based on blood's lowest viscosity limit, 3.5 mPa s) underestimate the wall shear stress within a stented arterial segment, which can lead to an overestimation of the risk of restenosis. The second linear model, which uses a characteristic viscosity (based on an average strain rate, 4.7 mPa s), results in higher wall shear stress levels, which are nonetheless still substantially below those of the nonlinear model. It is therefore shown that nonlinear models yield more accurate predictions of wall shear stress within a stented arterial segment.
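The Newtonian versus non-Newtonian distinction can be made concrete with a Carreau-type shear-thinning viscosity, a common non-Newtonian blood model. The abstract does not name the nonlinear model used, so this choice and the parameter values (commonly quoted literature values for blood) are assumptions for illustration.

```python
# Carreau shear-thinning viscosity sketch; parameters are commonly quoted
# literature values for blood, not necessarily those of the study above.

def carreau_viscosity(shear_rate, mu_inf=3.45e-3, mu_0=5.6e-2,
                      lam=3.313, n=0.3568):
    """Carreau viscosity (Pa*s) as a function of shear rate (1/s):
    mu = mu_inf + (mu_0 - mu_inf) * (1 + (lam*g)**2) ** ((n - 1) / 2).
    mu_inf is the high-shear (Newtonian) limit; mu_0 the low-shear limit."""
    return mu_inf + (mu_0 - mu_inf) * (
        1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)
```

At high shear rates the viscosity approaches mu_inf, the "lowest viscosity limit" a constant-viscosity model adopts; at low shear rates it is an order of magnitude larger. This is why constant-viscosity models understate wall shear stress in the low-shear recirculation zones that form near stent struts.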
NASA Astrophysics Data System (ADS)
Mead, A. J.; Heymans, C.; Lombriser, L.; Peacock, J. A.; Steele, O. I.; Winther, H. A.
2016-06-01
We present an accurate non-linear matter power spectrum prediction scheme for a variety of extensions to the standard cosmological paradigm, which uses the tuned halo model previously developed in Mead et al. We consider dark energy models that are both minimally and non-minimally coupled, massive neutrinos and modified gravitational forces with chameleon and Vainshtein screening mechanisms. In all cases, we compare halo-model power spectra to measurements from high-resolution simulations. We show that the tuned halo-model method can predict the non-linear matter power spectrum measured from simulations of parametrized w(a) dark energy models at the few per cent level for k < 10 h Mpc-1, and we present theoretically motivated extensions to cover non-minimally coupled scalar fields, massive neutrinos and Vainshtein screened modified gravity models that result in few per cent accurate power spectra for k < 10 h Mpc-1. For chameleon screened models, we achieve only 10 per cent accuracy for the same range of scales. Finally, we use our halo model to investigate degeneracies between different extensions to the standard cosmological model, finding that the impact of baryonic feedback on the non-linear matter power spectrum can be considered independently of modified gravity or massive neutrino extensions. In contrast, considering the impact of modified gravity and massive neutrinos independently results in biased estimates of power at the level of 5 per cent at scales k > 0.5 h Mpc-1. An updated version of our publicly available HMCODE can be found at https://github.com/alexander-mead/hmcode.
Accurate modeling of switched reluctance machine based on hybrid trained WNN
NASA Astrophysics Data System (ADS)
Song, Shoujun; Ge, Lefei; Ma, Shaojie; Zhang, Man
2014-04-01
To cope with the strongly nonlinear electromagnetic characteristics of switched reluctance machines (SRMs), a novel accurate modeling method is proposed based on a hybrid-trained wavelet neural network (WNN), which combines an improved genetic algorithm (GA) with the gradient descent (GD) method to train the network. In the novel method, the WNN is trained by the GD method starting from initial weights obtained via improved GA optimization, so that the global parallel searching capability of the stochastic algorithm and the local convergence speed of the deterministic algorithm are combined to enhance the training accuracy, stability, and speed. Based on the measured electromagnetic characteristics of a 3-phase 12/8-pole SRM, the nonlinear simulation model is built with the hybrid-trained WNN in Matlab. The phase current and mechanical characteristics from simulation under different working conditions agree well with those from experiments, which indicates the accuracy of the model for dynamic and static performance evaluation of SRMs and verifies the effectiveness of the proposed modeling method.
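The hybrid training idea, a global GA search for initial weights followed by local gradient descent refinement, can be sketched on a toy quadratic loss rather than a wavelet network; everything below (loss, population sizes, mutation scale) is an illustrative assumption, not the paper's algorithm.

```python
import random

# Hybrid GA + gradient descent sketch: the GA does a global search for good
# starting weights, then gradient descent refines them locally.

def loss(w):
    """Toy quadratic loss with minimum at w = (3, -1)."""
    return (w[0] - 3.0) ** 2 + (w[1] + 1.0) ** 2

def grad(w):
    return [2.0 * (w[0] - 3.0), 2.0 * (w[1] + 1.0)]

def ga_initialize(pop_size=30, generations=40, span=10.0, seed=0):
    """Global phase: evolve a small population (truncation selection,
    averaging crossover, Gaussian mutation) and return the fittest weights."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-span, span), rng.uniform(-span, span)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            children.append([(a[i] + b[i]) / 2.0 + rng.gauss(0.0, 0.5)
                             for i in range(2)])
        pop = parents + children
    return min(pop, key=loss)

def gd_refine(w, lr=0.1, steps=200):
    """Local phase: plain gradient descent from the GA's best weights."""
    for _ in range(steps):
        g = grad(w)
        w = [w[i] - lr * g[i] for i in range(2)]
    return w

w = gd_refine(ga_initialize())
```

The division of labor matches the abstract's claim: the stochastic phase avoids poor local basins, and the deterministic phase supplies fast final convergence.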
A hamster model for Marburg virus infection accurately recapitulates Marburg hemorrhagic fever
Marzi, Andrea; Banadyga, Logan; Haddock, Elaine; Thomas, Tina; Shen, Kui; Horne, Eva J.; Scott, Dana P.; Feldmann, Heinz; Ebihara, Hideki
2016-01-01
Marburg virus (MARV), a close relative of Ebola virus, is the causative agent of a severe human disease known as Marburg hemorrhagic fever (MHF). No licensed vaccine or therapeutic exists to treat MHF, and MARV is therefore classified as a Tier 1 select agent and a category A bioterrorism agent. In order to develop countermeasures against this severe disease, animal models that accurately recapitulate human disease are required. Here we describe the development of a novel, uniformly lethal Syrian golden hamster model of MHF using a hamster-adapted MARV variant Angola. Remarkably, this model displayed almost all of the clinical features of MHF seen in humans and non-human primates, including coagulation abnormalities, hemorrhagic manifestations, petechial rash, and a severely dysregulated immune response. This MHF hamster model represents a powerful tool for further dissecting MARV pathogenesis and accelerating the development of effective medical countermeasures against human MHF. PMID:27976688
Beekhuizen, Johan; Kromhout, Hans; Bürgi, Alfred; Huss, Anke; Vermeulen, Roel
2015-01-01
The increase in mobile communication technology has led to concern about potential health effects of radio frequency electromagnetic fields (RF-EMFs) from mobile phone base stations. Different RF-EMF prediction models have been applied to assess population exposure to RF-EMF. Our study examines what input data are needed to accurately model RF-EMF, as detailed data are not always available for epidemiological studies. We used NISMap, a 3D radio wave propagation model, to test models with various levels of detail in building and antenna input data. The model outcomes were compared with outdoor measurements taken in Amsterdam, the Netherlands. Results showed good agreement between modelled and measured RF-EMF when 3D building data and basic antenna information (location, height, frequency and direction) were used: Spearman correlations were >0.6. Model performance was not sensitive to changes in building damping parameters. Antenna-specific information about down-tilt, type and output power did not significantly improve model performance compared with using average down-tilt and power values, or assuming one standard antenna type. We conclude that 3D radio wave propagation modelling is a feasible approach to predict outdoor RF-EMF levels for ranking exposure levels in epidemiological studies, when 3D building data and information on the antenna height, frequency, location and direction are available.
Accurate verification of the conserved-vector-current and standard-model predictions
Sirlin, A.; Zucchini, R.
1986-10-20
An approximate analytic calculation of O(Zα²) corrections to Fermi decays is presented. When the analysis of Koslowsky et al. is modified to take into account the new results, it is found that each of the eight accurately studied ℱt values differs from the average by less than approximately 1σ, thus significantly improving the comparison of experiments with conserved-vector-current predictions. The new ℱt values are lower than before, which also brings experiments into very good agreement with the three-generation standard model, at the level of its quantum corrections.
Vladescu, Jason C; Carroll, Regina; Paden, Amber; Kodak, Tiffany M
2012-01-01
The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The results showed that the staff trainees' accurate implementation of DTI remained high, and both child participants acquired new skills. These findings provide additional support that VM may be an effective method to train staff members to conduct DTI.
Double Cluster Heads Model for Secure and Accurate Data Fusion in Wireless Sensor Networks
Fu, Jun-Song; Liu, Yun
2015-01-01
Secure and accurate data fusion is an important issue in wireless sensor networks (WSNs) and has been extensively researched in the literature. In this paper, by combining clustering techniques, reputation and trust systems, and data fusion algorithms, we propose a novel cluster-based data fusion model called the Double Cluster Heads Model (DCHM) for secure and accurate data fusion in WSNs. Unlike traditional clustering models in WSNs, two cluster heads are selected for each cluster after clustering, based on the reputation and trust system, and they perform data fusion independently of each other. The results are then sent to the base station, where a dissimilarity coefficient is computed. If the dissimilarity coefficient of the two data fusion results exceeds a threshold preset by the users, the cluster heads are added to a blacklist and must be reelected by the sensor nodes in the cluster. Meanwhile, feedback is sent from the base station to the reputation and trust system, which helps identify and remove compromised sensor nodes in time. Through a series of extensive simulations, we found that the DCHM performs very well in terms of data fusion security and accuracy. PMID:25608211
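The base-station consistency check described above can be sketched as follows. The paper does not reproduce its exact dissimilarity coefficient here, so the normalized absolute difference, the toy averaging fusion, and the threshold value are all illustrative assumptions.

```python
# Sketch of the DCHM consistency check: two cluster heads fuse the same
# cluster's readings independently; the base station compares the results
# and flags the pair for blacklisting if they disagree too much.
# The dissimilarity measure and threshold below are illustrative choices.

def fuse(readings):
    """Toy fusion: a head simply averages the readings it collected."""
    return sum(readings) / len(readings)

def dissimilarity(r1, r2):
    """Normalized absolute difference between two fused results."""
    denom = max(abs(r1), abs(r2), 1e-12)
    return abs(r1 - r2) / denom

def base_station_check(result1, result2, threshold=0.05):
    """True if the two fused results are consistent (accept the fusion),
    False if the cluster heads should be blacklisted and reelected."""
    return dissimilarity(result1, result2) <= threshold
```

A compromised head that injects biased data drives the two independently fused results apart, so the disagreement itself becomes the detection signal, without the base station needing to inspect raw readings.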
NASA Astrophysics Data System (ADS)
Campforts, Benjamin; Schwanghart, Wolfgang; Govers, Gerard
2017-01-01
Landscape evolution models (LEMs) allow the study of earth surface responses to changing climatic and tectonic forcings. While much effort has been devoted to the development of LEMs that simulate a wide range of processes, the numerical accuracy of these models has received less attention. Most LEMs use first-order accurate numerical methods that suffer from substantial numerical diffusion. Numerical diffusion particularly affects the solution of the advection equation and thus the simulation of retreating landforms such as cliffs and river knickpoints. This has potential consequences for the integrated response of the simulated landscape. Here we test a higher-order flux-limiting finite volume method that is total variation diminishing (TVD-FVM) to solve the partial differential equations of river incision and tectonic displacement. We show that using the TVD-FVM to simulate river incision significantly influences the evolution of simulated landscapes and the spatial and temporal variability of catchment-wide erosion rates. Furthermore, a two-dimensional TVD-FVM accurately simulates the evolution of landscapes affected by lateral tectonic displacement, a process whose simulation was hitherto largely limited to LEMs with flexible spatial discretization. We implement the scheme in TTLEM (TopoToolbox Landscape Evolution Model), a spatially explicit, raster-based LEM for the study of fluvially eroding landscapes in TopoToolbox 2.
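The contrast between a diffusive first-order scheme and a flux-limited TVD scheme can be shown on the simplest possible case: one step of linear advection of a step profile on a periodic 1D grid. This is a generic minmod-limited MUSCL sketch, not the TTLEM implementation.

```python
# Minimal 1D TVD finite volume sketch for linear advection (speed > 0).
# Generic illustration of flux limiting, not the TTLEM scheme itself.

def minmod(a, b):
    """Minmod limiter: zero at extrema, else the smaller-magnitude slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def advect_tvd(u, c):
    """One step of u_t + a*u_x = 0 on a periodic grid with CFL number
    c = a*dt/dx (0 < c <= 1): upwind flux plus a minmod-limited
    second-order correction, which keeps the scheme total variation
    diminishing (sharp fronts, no spurious oscillations)."""
    n = len(u)
    slope = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    # reconstructed value at the right face of cell i (the upwind side)
    face = [u[i] + 0.5 * (1.0 - c) * slope[i] for i in range(n)]
    return [u[i] - c * (face[i] - face[i - 1]) for i in range(n)]
```

Because the limiter vanishes at extrema, a retreating front (the 1D analogue of a cliff or knickpoint) stays bounded and sharp, whereas the plain first-order upwind scheme smears it a little every step, which is the numerical diffusion the text describes.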
Gay, Guillaume; Courtheoux, Thibault; Reyes, Céline; Tournier, Sylvie; Gachet, Yannick
2012-03-19
In fission yeast, erroneous attachments of spindle microtubules to kinetochores are frequent in early mitosis. Most are corrected before anaphase onset by a mechanism involving the protein kinase Aurora B, which destabilizes kinetochore microtubules (ktMTs) in the absence of tension between sister chromatids. In this paper, we describe a minimal mathematical model of fission yeast chromosome segregation based on the stochastic attachment and detachment of ktMTs. The model accurately reproduces the timing of correct chromosome biorientation and segregation seen in fission yeast. Prevention of attachment defects requires both appropriate kinetochore orientation and an Aurora B-like activity. The model also reproduces abnormal chromosome segregation behavior (caused by, for example, inhibition of Aurora B). It predicts that, in metaphase, merotelic attachment is prevented by a kinetochore orientation effect and corrected by an Aurora B-like activity, whereas in anaphase, it is corrected through unbalanced forces applied to the kinetochore. These unbalanced forces are sufficient to prevent aneuploidy.
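The core mechanism described above, stochastic attachment of kinetochore microtubules with tension-dependent destabilization, can be caricatured with a two-sister toy model. The rates, time step, and the discrete attach/detach rule are invented for illustration; the paper's model additionally includes forces, orientation effects, and chromosome mechanics.

```python
import random

def biorientation_time(k_attach=0.5, k_detach_no_tension=2.0,
                       k_detach_tension=0.02, dt=0.01, seed=1):
    """Toy model (not the paper's full force model): each sister kinetochore
    attaches at rate k_attach; an attached sister detaches quickly when its
    partner is unattached (no tension, Aurora-B-like destabilization) and
    slowly once both sisters are attached (tension stabilizes)."""
    rng = random.Random(seed)
    attached = [False, False]
    t = 0.0
    while not all(attached):
        for i in (0, 1):
            if not attached[i]:
                if rng.random() < k_attach * dt:
                    attached[i] = True
            else:
                k = k_detach_tension if attached[1 - i] else k_detach_no_tension
                if rng.random() < k * dt:
                    attached[i] = False
        t += dt
        if t > 1e4:            # safety cap for the sketch
            break
    return t

print(round(biorientation_time(), 2), "time units to biorientation")
```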
NASA Technical Reports Server (NTRS)
Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.
1992-01-01
The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from the CAMM expansion is convergent up to the R^-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain a more accurate description of electrostatic properties.
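The key CAMM property, that higher atomic moments absorb whatever the atomic charges miss so that molecular moments are preserved, can be shown in one dimension. The geometry, the reference dipole, and the even split of the residual are illustrative numbers, not ab initio values.

```python
import numpy as np

# CAMM's central trick, in one dimension: whatever atomic charges you pick,
# cumulative atomic dipoles can absorb the remainder so that the molecular
# dipole is reproduced exactly.  All numbers below are made up for the sketch.
z      = np.array([0.0, 1.128])     # atom positions along the molecular axis
mu_ref = -0.0481                    # assumed reference molecular dipole

for q_c in (0.02, 0.10, 0.50):      # three different atomic-charge "models"
    q = np.array([+q_c, -q_c])
    charge_part = float(np.sum(q * z))            # dipole from charges alone
    atomic_dipole = (mu_ref - charge_part) / 2    # residual split per atom
    total = charge_part + 2 * atomic_dipole       # charges + atomic dipoles
    print(f"q={q_c:4.2f}: charges give {charge_part:+.4f}, "
          f"charges + atomic dipoles give {total:+.4f}")
```

Every charge model yields the same total, which is exactly the sense in which CAMM lets any charge definition be supplemented to reproduce molecular moments.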
Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.
2013-01-01
Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944
Can a Global Model Accurately Simulate Land-Atmosphere Interactions under Climate Change Conditions?
NASA Astrophysics Data System (ADS)
Zhou, C., VI; Wang, K.
2015-12-01
Surface air temperature (Ta) is largely determined by surface net radiation (Rn) and its partitioning into latent (LE) and sensible (H) heat fluxes. Existing evaluations of the absolute values of these fluxes are of limited use because their results conflate inconsistent spatial scales, inaccurate model forcing data and inaccurate parameterizations. This study instead evaluates the relationships of LE and H with Rn and environmental parameters, including Ta, relative humidity (RH) and wind speed (WS), using ERA-Interim reanalysis data on a 0.125°×0.125° grid and measurements at AmeriFlux sites from 1998 to 2012. The results demonstrate that ERA-Interim reproduces the absolute values of the environmental parameters, radiation and turbulent fluxes rather accurately. The model performs well in simulating the correlation of LE and H with Rn, except for a notable overestimation of the correlation of H with Rn over high-density vegetation (e.g., deciduous broadleaf forest (DBF), grassland (GRA) and cropland (CRO)). The sensitivity of LE to Rn in the model is similar to the observations, but that of H to Rn is overestimated by 24.2%. In regions with high-density vegetation, the correlation coefficient between H and Ta is overestimated by more than 0.2, whereas that between H and WS is underestimated by more than 0.43. The sensitivity of H to Ta is overestimated by 0.72 W m-2 °C-1, whereas that of H to WS is underestimated by 16.15 W m-2/(m s-1) over all of the sites. Considering both LE and H, the model cannot accurately capture the response of the evaporative fraction (EF = LE/(LE+H)) to Rn and the environmental parameters.
Quinci, Federico; Dressler, Matthew; Strickland, Anthony M; Limbert, Georges
2014-04-01
Considerable progress has been made in understanding implant wear and developing numerical models to predict wear for new orthopaedic devices. However, any model of wear could be improved through a more accurate representation of the biomaterial mechanics, including time-varying dynamic and inelastic behaviour such as viscosity and plastic deformation. In particular, most computational models of UHMWPE wear implement a time-invariant version of Archard's law that links the volume of worn material to the contact pressure between the metal implant and the polymeric tibial insert. Under in vivo conditions, however, the contact area is a time-varying quantity and therefore depends on the dynamic deformation response of the material. From this observation one can conclude that creep deformation of UHMWPE may be very important to consider when conducting computational wear analyses, in stark contrast to what can be found in the literature. In this study, different numerical modelling techniques are compared with experimental creep testing on a unicondylar knee replacement system in a physiologically representative context. Linear elastic, plastic and time-varying visco-dynamic models are benchmarked using literature data to predict contact deformations, pressures and areas. The aim of this study is to elucidate the contributions of viscoelastic and plastic effects on these surface quantities. It is concluded that creep deformation has a significant effect on the contact pressure measured (experiment) and calculated (computational models) at the surface of the UHMWPE unicondylar insert. The use of a purely elastoplastic constitutive model for UHMWPE led to compressive deformations of the insert that are much smaller than those predicted by a creep-capturing viscoelastic model (and those measured experimentally). This again shows the importance of including creep behaviour in a constitutive model in order to predict the correct level of surface deformation.
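The difference between a time-invariant and a time-varying application of Archard's law can be made concrete with a small numerical sketch. The logarithmic creep law, all parameter values, and the per-unit-area depth form of Archard's law (wear-depth rate = k * p * v) are assumed for illustration, not fitted UHMWPE data.

```python
import numpy as np

# If creep grows the contact area A(t) under constant load F, the contact
# pressure p(t) = F / A(t) falls with time, and a time-invariant model that
# freezes p at its initial value overestimates the local wear depth.
k  = 1.0e-7                   # wear coefficient, depth per (MPa * mm) -- assumed
F  = 1000.0                   # joint load, N
A0 = 100.0                    # initial contact area, mm^2
v  = 20.0                     # sliding speed, mm/s
t  = np.linspace(0.0, 3600.0, 3601)          # one hour, 1 s steps
A  = A0 * (1.0 + 0.15 * np.log1p(t / 60.0))  # assumed creep spreading law
p  = F / A                                   # time-varying contact pressure

dt = t[1] - t[0]
h_dynamic = float(np.sum(k * p * v) * dt)    # integrate h_dot = k * p * v
h_frozen  = k * (F / A0) * v * t[-1]         # time-invariant pressure
print(f"wear depth with creep:  {h_dynamic:.4f} mm")
print(f"wear depth, frozen p :  {h_frozen:.4f} mm")
```

The gap between the two depths is the error the abstract attributes to ignoring creep in the contact mechanics.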
Algal productivity modeling: a step toward accurate assessments of full-scale algal cultivation.
Béchet, Quentin; Chambonnière, Paul; Shilton, Andy; Guizard, Guillaume; Guieysse, Benoit
2015-05-01
A new biomass productivity model was parameterized for Chlorella vulgaris using short-term (<30 min) oxygen productivities from algal microcosms exposed to 6 light intensities (20-420 W/m(2)) and 6 temperatures (5-42 °C). The model was then validated against experimental biomass productivities recorded in bench-scale photobioreactors operated under 4 light intensities (30.6-74.3 W/m(2)) and 4 temperatures (10-30 °C), yielding an accuracy of ±15% over 163 days of cultivation. This modeling approach addresses major challenges associated with the accurate prediction of algal productivity at full scale. Firstly, while most prior modeling approaches have only considered the impact of light intensity on algal productivity, the model validated herein also accounts for the critical impact of temperature. Secondly, this study validates a theoretical approach to convert short-term oxygen productivities into long-term biomass productivities. Thirdly, the experimental methodology used has the practical advantage of requiring only one day of experimental work for complete model parameterization. The validation of this new modeling approach is therefore an important step toward refining feasibility assessments of algae biotechnologies.
Teacher Modeling Using Complex Informational Texts
ERIC Educational Resources Information Center
Fisher, Douglas; Frey, Nancy
2015-01-01
Modeling in complex texts requires that teachers analyze the text for factors of qualitative complexity and then design lessons that introduce students to that complexity. In addition, teachers can model the disciplinary nature of content area texts as well as word solving and comprehension strategies. Included is a planning guide for think aloud.
An accurate, fast and stable material model for shape memory alloys
NASA Astrophysics Data System (ADS)
Junker, Philipp
2014-10-01
Shape memory alloys possess several features that make them interesting for industrial applications. However, due to their complex and thermo-mechanically coupled behavior, direct use of shape memory alloys in engineering construction is problematic. There is thus a demand for tools to achieve realistic, predictive simulations that are numerically robust when computing complex, coupled load states, are fast enough to calculate geometries of industrial interest, and yield realistic and reliable results without the use of fitting curves. In this paper a new and numerically fast material model for shape memory alloys is presented. It is based solely on energetic quantities, which thus creates a quite universal approach. In the beginning, a short derivation is given before it is demonstrated how this model can be easily calibrated by means of tension tests. Then, several examples of engineering applications under mechanical and thermal loads are presented to demonstrate the numerical stability and high computation speed of the model.
Innes, Carrie R H; Lee, Dominic; Chen, Chen; Ponder-Sutton, Agate M; Melzer, Tracy R; Jones, Richard D
2011-09-01
Prediction of complex behavioural tasks via relatively simple modelling techniques, such as logistic regression and discriminant analysis, often has limited success. We hypothesized that to more accurately model complex behaviour, more complex models, such as kernel-based methods, would be needed. To test this hypothesis, we assessed the value of six modelling approaches for predicting driving ability based on performance on computerized sensory-motor and cognitive tests (SMCTests™) in 501 people with brain disorders. The models included three models previously used to predict driving ability (discriminant analysis, DA; binary logistic regression, BLR; and nonlinear causal resource analysis, NCRA) and three kernel methods (support vector machine, SVM; product kernel density, PK; and kernel product density, KP). At the classification level, two kernel methods were substantially more accurate at classifying on-road pass or fail (SVM 99.6%, PK 99.8%) than the other models (DA 76%, BLR 78%, NCRA 74%, KP 81%). However, accuracy decreased substantially for all of the kernel models when cross-validation techniques were used to estimate prediction of on-road pass or fail in an independent referral group (SVM 73-76%, PK 72-73%, KP 71-72%) but decreased only slightly for DA (74-75%) and BLR (75-76%). Cross-validation of NCRA was not possible. In conclusion, while kernel-based models are successful at modelling complex data at a classification level, this is likely to be due to overfitting of the data, which does not lead to an improvement in accuracy in independent data over and above the accuracy of other less complex modelling techniques.
Accurately modeling Gaussian beam propagation in the context of Monte Carlo techniques
NASA Astrophysics Data System (ADS)
Hokr, Brett H.; Winblad, Aidan; Bixler, Joel N.; Elpers, Gabriel; Zollars, Byron; Scully, Marlan O.; Yakovlev, Vladislav V.; Thomas, Robert J.
2016-03-01
Monte Carlo simulations are widely considered to be the gold standard for studying the propagation of light in turbid media. However, traditional Monte Carlo methods fail to account for diffraction because they treat light as a particle. This results in converging beams focusing to a point instead of a diffraction-limited spot, greatly affecting the accuracy of Monte Carlo simulations near the focal plane. Here, we present a technique capable of simulating a focusing beam in accordance with the rules of Gaussian optics, resulting in a diffraction-limited focal spot. This technique can be easily implemented into any traditional Monte Carlo simulation, allowing existing models to be converted to include accurate focusing geometries with minimal effort. We present results for a focusing beam in a layered tissue model, demonstrating that for different scenarios the region of highest intensity, and thus the greatest heating, can shift from the surface to the focus. The ability to simulate accurate focusing geometries will greatly enhance the usefulness of Monte Carlo methods for countless applications, including studying laser-tissue interactions in medical applications and light propagation through turbid media.
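One simple way to give ray-based Monte Carlo a diffraction-limited focus is to sample each photon's transverse position independently at the entrance surface and in the focal plane, then launch it along the line joining the two points. This pairing scheme, and all parameter values below, are a sketch of the idea rather than the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)

def launch_gaussian_photons(n, w_surface, w0, z_focus):
    """Sample launch positions/directions that reproduce a focused Gaussian
    beam: surface spot of 1/e^2 radius w_surface, focal waist w0 at depth
    z_focus.  For a 1/e^2 intensity radius w, the field samples have
    per-axis standard deviation w/2."""
    xy_s = rng.normal(0.0, w_surface / 2.0, size=(n, 2))  # surface positions
    xy_f = rng.normal(0.0, w0 / 2.0, size=(n, 2))         # focal positions
    d = np.column_stack([xy_f - xy_s, np.full(n, z_focus)])
    d /= np.linalg.norm(d, axis=1, keepdims=True)         # unit directions
    return xy_s, d

pos, dirs = launch_gaussian_photons(100_000, w_surface=1.0, w0=0.01, z_focus=5.0)
# Propagate ballistically (no scattering) to the focal depth and check that
# the ensemble focuses to ~w0 instead of a geometric point:
t = 5.0 / dirs[:, 2]
xy_focus = pos + dirs[:, :2] * t[:, None]
rms = float(np.sqrt((xy_focus**2).sum(axis=1).mean()))
print("rms radius at focus:", rms)
```

In a scattering medium each photon would instead be handed to the usual hop/drop/spin loop after launch; only the launch step changes.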
Accurate model of electron beam profiles with emittance effects for Pierce guns
NASA Astrophysics Data System (ADS)
Zeng, Peng; Wang, Guangqiang; Wang, Jianguo; Wang, Dongyang; Li, Shuang
2016-09-01
Accurate prediction of the electron beam profile is one of the key objectives of electron optics and the basis for the design of practical electron guns. In this paper, an improved model describing the electron beam in a Pierce gun with both space-charge and emittance effects is proposed. The theory developed by Cutler and Hines is still applied to the accelerating region of the Pierce gun, while the equations of motion of the electron beam in the anode aperture and drift tunnel are improved by modifying electron-optics theory to include emittance. As a result, a more universal and accurate formula for the focal length of the anode-aperture lens acting on a beam with both effects is derived for an aperture of finite dimension, and a modified universal spread curve accounting for beam emittance is introduced in the drift-tunnel region. Based on these improved equations of motion, beam profiles with space-charge and emittance effects can be predicted theoretically and are shown to agree well with experimentally measured ones. The model developed here is helpful for designing more applicable Pierce guns at high frequencies.
Accurate and scalable social recommendation using mixed-membership stochastic block models
Godoy-Lorite, Antonia; Moore, Cristopher
2016-01-01
With increasing amounts of information available, modeling and predicting user preferences—for books or articles, for example—are becoming more important. We present a collaborative filtering model, with an associated scalable algorithm, that makes accurate predictions of users’ ratings. Like previous approaches, we assume that there are groups of users and of items and that the rating a user gives an item is determined by their respective group memberships. However, we allow each user and each item to belong simultaneously to mixtures of different groups and, unlike many popular approaches such as matrix factorization, we do not assume that users in each group prefer a single group of items. In particular, we do not assume that ratings depend linearly on a measure of similarity, but allow probability distributions of ratings to depend freely on the user’s and item’s groups. The resulting overlapping groups and predicted ratings can be inferred with an expectation-maximization algorithm whose running time scales linearly with the number of observed ratings. Our approach enables us to predict user preferences in large datasets and is considerably more accurate than the current algorithms for such large datasets. PMID:27911773
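The prediction rule described above is a bilinear mixture over group memberships with a free rating distribution per group pair, not a linear function of similarity. The sketch below evaluates that rule once; the membership vectors and rating tensor are made-up illustrative values, whereas in the paper they are learned by expectation-maximization.

```python
import numpy as np

# p(r | u, i) = sum_{k,l} theta[u,k] * eta[i,l] * pr[k,l,r]
K, L, R = 2, 2, 3                    # user groups, item groups, rating values
theta = np.array([[0.7, 0.3]])       # one user's mixed membership over K groups
eta   = np.array([[0.2, 0.8]])       # one item's mixed membership over L groups
# One rating distribution per (user group, item group) pair; each pr[k, l, :]
# sums to 1 and is otherwise unconstrained (no linearity in r assumed):
pr = np.random.default_rng(0).dirichlet(np.ones(R), size=(K, L))

p = np.einsum('uk,il,klr->uir', theta, eta, pr)[0, 0]   # p(r | u, i)
print("rating distribution:", np.round(p, 3))
print("predicted rating:", int(p.argmax()) + 1)          # 1..R scale
```

Because the memberships are normalized mixtures, the predicted distribution automatically sums to one; the EM algorithm's job is only to fit theta, eta, and pr to the observed ratings.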
NASA Astrophysics Data System (ADS)
Malik, Arif Sultan
This work presents improved technology for attaining high-quality rolled metal strip. The new technology is based on an innovative method to model both the static and dynamic characteristics of rolling mill deflection, and it applies equally to both cluster-type and non-cluster-type rolling mill configurations. By effectively combining numerical Finite Element Analysis (FEA) with analytical solid mechanics, the devised approach delivers a rapid, accurate, flexible, high-fidelity model useful for optimizing many important rolling parameters. The associated static deflection model enables computation of the thickness profile and corresponding flatness of the rolled strip. Accurate methods of predicting the strip thickness profile and strip flatness are important in rolling mill design, rolling schedule set-up, control of mill flatness actuators, and optimization of ground roll profiles. The corresponding dynamic deflection model enables solution of the standard eigenvalue problem to determine natural frequencies and modes of vibration. The presented method for solving the roll-stack deflection problem offers several important advantages over traditional methods. In particular, it includes continuity of elastic foundations, non-iterative solution when using pre-determined elastic foundation moduli, continuous third-order displacement fields, simple stress-field determination, the ability to calculate dynamic characteristics, and a comparatively faster solution time. Consistent with the most advanced existing methods, the presented method accommodates loading conditions that represent roll crowning, roll bending, roll shifting, and roll crossing mechanisms. Validation of the static model is provided by comparing results and solution time with large-scale, commercial finite element simulations. In addition to examples with the common 4-high vertical stand rolling mill, application of the presented method to the most complex of rolling mill configurations is demonstrated.
NASA Technical Reports Server (NTRS)
Kopasakis, George
2014-01-01
The presentation covers a recently developed methodology to model atmospheric turbulence as disturbances for aero vehicle gust loads and for controls development, such as flutter and inlet shock position control. The approach models atmospheric turbulence in its natural fractional-order form, which provides more accuracy than traditional methods like the Dryden model, especially for high-speed vehicles. The presentation provides a historical background on atmospheric turbulence modeling and the approaches utilized for air vehicles. This is followed by the motivation and the methodology utilized to develop the fractional-order atmospheric turbulence modeling approach. Some examples covering the application of this method are also provided, followed by concluding remarks.
Accurate Cold-Test Model of Helical TWT Slow-Wave Circuits
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Dayton, James A., Jr.
1997-01-01
Recently, a method has been established to accurately calculate cold-test data for helical slow-wave structures using the three-dimensional electromagnetic computer code, MAFIA. Cold-test parameters have been calculated for several helical traveling-wave tube (TWT) slow-wave circuits possessing various support rod configurations, and results are presented here showing excellent agreement with experiment. The helical models include tape thickness, dielectric support shapes and material properties consistent with the actual circuits. The cold-test data from this helical model can be used as input into large-signal helical TWT interaction codes making it possible, for the first time, to design a complete TWT via computer simulation.
Blackman, Jonathan; Field, Scott E; Galley, Chad R; Szilágyi, Béla; Scheel, Mark A; Tiglio, Manuel; Hemberger, Daniel A
2015-09-18
Simulating a binary black hole coalescence by solving Einstein's equations is computationally expensive, requiring days to months of supercomputing time. Using reduced order modeling techniques, we construct an accurate surrogate model, which is evaluated in a millisecond to a second, for numerical relativity (NR) waveforms from nonspinning binary black hole coalescences with mass ratios in [1, 10] and durations corresponding to about 15 orbits before merger. We assess the model's uncertainty and show that our modeling strategy predicts NR waveforms not used for the surrogate's training with errors nearly as small as the numerical error of the NR code. Our model includes all spherical-harmonic _{-2}Y_{ℓm} waveform modes resolved by the NR code up to ℓ=8. We compare our surrogate model to effective one body waveforms from 50M_{⊙} to 300M_{⊙} for advanced LIGO detectors and find that the surrogate is always more faithful (by at least an order of magnitude in most cases).
Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.
Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek
2016-02-01
Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.
PMID:26894674
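The dead-end elimination theory underlying Fitmunk's search can be shown in its simplest (original) form: a rotamer is provably absent from the global minimum if even its best-case pairwise energies cannot beat some competitor's worst case. The random energies below are purely illustrative; Fitmunk additionally folds electron-density and rotamer-frequency terms into its energy function.

```python
import numpy as np

# Simplest DEE criterion: rotamer r at position i is eliminated if some
# competitor t satisfies
#   E(i_r) + sum_j min_s E(i_r, j_s)  >  E(i_t) + sum_j max_s E(i_t, j_s)
rng = np.random.default_rng(1)
n_pos, n_rot = 4, 5
e_self = rng.normal(size=(n_pos, n_rot))                  # E(i_r)
e_pair = rng.normal(size=(n_pos, n_rot, n_pos, n_rot))    # E(i_r, j_s)
e_pair = (e_pair + e_pair.transpose(2, 3, 0, 1)) / 2      # symmetric pairs

def dee_eliminate(i):
    """Rotamers at position i provably not part of the global optimum."""
    others = [j for j in range(n_pos) if j != i]
    best  = e_self[i] + sum(e_pair[i, :, j].min(axis=1) for j in others)
    worst = e_self[i] + sum(e_pair[i, :, j].max(axis=1) for j in others)
    return {r for r in range(n_rot)
            if any(best[r] > worst[t] for t in range(n_rot) if t != r)}

for i in range(n_pos):
    print(f"position {i}: eliminated rotamers {sorted(dee_eliminate(i))}")
```

Eliminations can be applied iteratively, shrinking the rotamer sets until an exhaustive search over the survivors becomes cheap; that is what makes the method "deterministic" in contrast to stochastic sampling.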
A Simple and Accurate Model to Predict Responses to Multi-electrode Stimulation in the Retina.
Maturana, Matias I; Apollo, Nicholas V; Hadjinicolaou, Alex E; Garrett, David J; Cloherty, Shaun L; Kameneva, Tatiana; Grayden, David B; Ibbotson, Michael R; Meffin, Hamish
2016-04-01
Implantable electrode arrays are widely used in therapeutic stimulation of the nervous system (e.g. cochlear, retinal, and cortical implants). Currently, most neural prostheses use serial stimulation (i.e. one electrode at a time) despite this severely limiting the repertoire of stimuli that can be applied. Methods to reliably predict the outcome of multi-electrode stimulation have not been available. Here, we demonstrate that a linear-nonlinear model accurately predicts neural responses to arbitrary patterns of stimulation using in vitro recordings from single retinal ganglion cells (RGCs) stimulated with a subretinal multi-electrode array. In the model, the stimulus is projected onto a low-dimensional subspace and then undergoes a nonlinear transformation to produce an estimate of spiking probability. The low-dimensional subspace is estimated using principal components analysis, which gives the neuron's electrical receptive field (ERF), i.e. the electrodes to which the neuron is most sensitive. Our model suggests that stimulation proportional to the ERF yields a higher efficacy given a fixed amount of power when compared to equal amplitude stimulation on up to three electrodes. We find that the model captures the responses of all the cells recorded in the study, suggesting that it will generalize to most cell types in the retina. The model is computationally efficient to evaluate and, therefore, appropriate for future real-time applications including stimulation strategies that make use of recorded neural activity to improve the stimulation strategy.
PMID:27035143
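The linear-nonlinear pipeline described above can be sketched on synthetic data: project each stimulation pattern onto the cell's electrical receptive field (ERF), then map the projection through a sigmoid to a spiking probability. The paper estimates the subspace with principal components analysis; for brevity this sketch recovers the ERF with the spike-triggered average, and every parameter value is invented rather than recorded.

```python
import numpy as np

rng = np.random.default_rng(3)

n_electrodes = 20
true_erf = rng.normal(size=n_electrodes)      # hypothetical ground-truth ERF
true_erf /= np.linalg.norm(true_erf)

def spike_prob(stims, erf, gain=4.0, thresh=0.5):
    """Linear projection onto the ERF, then a logistic nonlinearity."""
    return 1.0 / (1.0 + np.exp(-gain * (stims @ erf - thresh)))

# Simulate responses to random multi-electrode stimulation patterns:
stims = rng.normal(size=(5000, n_electrodes))
spikes = rng.random(5000) < spike_prob(stims, true_erf)

# Recover the ERF from the spike-triggered stimulus ensemble:
erf_est = stims[spikes].mean(axis=0)          # spike-triggered average
erf_est /= np.linalg.norm(erf_est)
print("ERF recovery |cosine|:", round(abs(float(erf_est @ true_erf)), 3))
```

With the ERF in hand, stimulation current can be allocated in proportion to it, which is the power-efficiency claim the abstract makes relative to equal-amplitude stimulation.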
Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R
2017-02-14
Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.
NASA Astrophysics Data System (ADS)
McKemmish, Laura K.; Yurchenko, Sergei N.; Tennyson, Jonathan
2016-11-01
Accurate knowledge of the rovibronic near-infrared and visible spectra of vanadium monoxide (VO) is very important for studies of cool stellar and hot planetary atmospheres. Here, the required ab initio dipole moment and spin-orbit coupling curves for VO are produced. These data form the basis of a new VO line list considering 13 different electronic states and containing over 277 million transitions. Open-shell transition-metal diatomics are challenging species to model through ab initio quantum mechanics due to the large number of low-lying electronic states, significant spin-orbit coupling and strong static and dynamic electron correlation. Multi-reference configuration interaction methodologies using orbitals from a complete active space self-consistent-field (CASSCF) calculation are the standard technique for these systems. We use different state-specific or minimal-state CASSCF orbitals for each electronic state to maximise the calculation accuracy. The off-diagonal dipole moment controls the intensity of electronic transitions. We tested finite-field off-diagonal dipole moments, but found that (1) the accuracy of the excitation energies was not sufficient to allow accurate dipole moments to be evaluated and (2) the computer time requirements for perpendicular transitions were prohibitive. The best off-diagonal dipole moments are calculated using wavefunctions with different CASSCF orbitals.
Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates
NASA Astrophysics Data System (ADS)
Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo
2017-03-01
The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. On the one hand, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for systems of current fundamental and technological interest. On the other hand, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ in either geometric or electronic structure. After discussing the differences among the three species and their consequences on the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.
NASA Astrophysics Data System (ADS)
Ray, Sudipta; Saha, Sandeep
2016-11-01
Numerical solution of engineering problems with interfacial discontinuities requires exact implementation of the jump conditions, otherwise the accuracy deteriorates significantly; in particular, achieving spectral accuracy has been limited by complex interface geometry and the Gibbs phenomenon. We adopt a novel implementation of the immersed-interface method that satisfies the jump conditions at the interfaces exactly, in conjunction with the Chebyshev-collocation method. We consider solutions to linear second-order ordinary and partial differential equations having a discontinuity in their zeroth and first derivatives across an interface traced by a complex curve. The solutions obtained demonstrate the ability of the proposed method to achieve spectral accuracy for discontinuous solutions across tortuous interfaces. The solution methodology is illustrated using two model problems: (i) an ordinary differential equation with jump conditions forced by an infinitely differentiable function, and (ii) Poisson's equation having a discontinuous solution across interfaces that are ellipses of varying aspect ratio. Using more polynomials in the direction of the major axis than the minor axis of the ellipse increases the convergence rate of the solution.
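The spectral accuracy referred to above is easy to see in the smooth, interface-free limit. The sketch below is a standard Chebyshev-collocation solve (it does not implement the authors' immersed-interface jump conditions): it builds the differentiation matrix on Gauss-Lobatto points and solves u'' = -pi^2 sin(pi x) with homogeneous Dirichlet conditions, where the exact solution is u = sin(pi x). The grid size is illustrative.

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix on the N+1 Gauss-Lobatto points
    # x_j = cos(pi j / N) (standard construction, cf. Trefethen).
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))  # diagonal = negative sum of the row
    return D, x

N = 24
D, x = cheb(N)
D2 = D @ D                       # second-derivative operator

# Solve u'' = -pi^2 sin(pi x) on [-1, 1] with u(-1) = u(1) = 0.
A = D2.copy()
f = -np.pi**2 * np.sin(np.pi * x)
A[0, :] = 0.0;  A[0, 0] = 1.0;   f[0] = 0.0    # BC at x = +1 (first node)
A[-1, :] = 0.0; A[-1, -1] = 1.0; f[-1] = 0.0   # BC at x = -1 (last node)
u = np.linalg.solve(A, f)
err = np.max(np.abs(u - np.sin(np.pi * x)))    # spectral: near machine precision
```

With only 25 collocation points the maximum error is already far below what a second-order finite-difference scheme could reach on such a coarse grid.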
Pagán, Josué; Risco-Martín, José L; Moya, José M; Ayala, José L
2016-08-01
Prediction of symptomatic crises in chronic diseases allows decisions to be taken before the symptoms occur, such as the intake of drugs to avoid the symptoms or the activation of medical alarms. The prediction horizon is in this case an important parameter in order to fulfill the pharmacokinetics of medications or the response time of medical services. This paper presents a study of the prediction limits of a chronic disease with symptomatic crises: the migraine. For that purpose, this work develops a methodology to build predictive migraine models and to improve these predictions beyond the limits of the initial models. The maximum prediction horizon is analyzed, and its dependency on the selected features is studied. A strategy for model selection is proposed to tackle the trade-off between conservative but robust predictive models and less accurate predictions with longer horizons. The obtained results show a prediction horizon close to 40 min, which is in the time range of the drug pharmacokinetics. Experiments have been performed in a realistic scenario where input data have been acquired in an ambulatory clinical study through the deployment of a non-intrusive Wireless Body Sensor Network. Our results provide an effective methodology for the selection of the future horizon in the development of prediction algorithms for diseases experiencing symptomatic crises.
Efficient and Accurate Explicit Integration Algorithms with Application to Viscoplastic Models
NASA Technical Reports Server (NTRS)
Arya, Vinod K.
1994-01-01
Several explicit integration algorithms with self-adaptive time integration strategies are developed and investigated for efficiency and accuracy. These algorithms involve the second-order Runge-Kutta method, the lower-order Runge-Kutta methods of orders one and two, and the exponential integration method. The algorithms are applied to the viscoplastic models put forth by Freed and Verrilli and by Bodner and Partom for thermal/mechanical loadings (including tensile, relaxation, and cyclic loadings). The large number of computations performed showed that, for comparable accuracy, the efficiency of an integration algorithm depends significantly on the type of application (loading). However, in general, for the aforementioned loadings and viscoplastic models, the exponential integration algorithm with the proposed self-adaptive time integration strategy performed more (or comparably) efficiently and accurately than the other integration algorithms. Using this strategy for integrating viscoplastic models may lead to considerable savings in computer time (better efficiency) without adversely affecting the accuracy of the results. This conclusion should encourage the utilization of viscoplastic models in the stress analysis and design of structural components.
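Step-size adaptivity of the kind described above can be illustrated with a step-doubling controller wrapped around a second-order explicit Runge-Kutta step: one full step is compared against two half steps, and the step size is adjusted toward the local tolerance. This is a generic sketch, not the Freed-Verrilli or Bodner-Partom integrators; the tolerance and the scalar relaxation test problem are illustrative.

```python
import math

def rk2_step(f, t, y, h):
    # Heun's method (explicit second-order Runge-Kutta).
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def integrate_adaptive(f, t0, y0, t_end, h0=1e-2, tol=1e-8):
    # Self-adaptive stepping by step doubling: accept the step when the
    # full-step/half-step discrepancy meets tol, otherwise retry smaller.
    t, y, h = t0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)
        y_full = rk2_step(f, t, y, h)
        y_half = rk2_step(f, t + 0.5 * h,
                          rk2_step(f, t, y, 0.5 * h), 0.5 * h)
        err = abs(y_half - y_full)
        if err <= tol:
            t += h
            y = y_half                     # keep the more accurate value
        # Order-2 controller: local error scales like h^3.
        h *= min(2.0, max(0.2, 0.9 * (tol / (err + 1e-300)) ** (1.0 / 3.0)))
    return y

# Relaxation-type test problem dy/dt = -y, y(0) = 1; exact y(1) = e^{-1}.
y1 = integrate_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0)
```

The same controller structure applies unchanged when f is a viscoplastic flow rule instead of a scalar relaxation law.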
Accurate integral equation theory for the central force model of liquid water and ionic solutions
NASA Astrophysics Data System (ADS)
Ichiye, Toshiko; Haymet, A. D. J.
1988-10-01
The atom-atom pair correlation functions and thermodynamics of the central force model of water, introduced by Lemberg, Stillinger, and Rahman, have been calculated accurately by an integral equation method which incorporates two new developments. First, a rapid new scheme has been used to solve the Ornstein-Zernike equation. This scheme combines the renormalization methods of Allnatt, and of Rossky and Friedman, with an extension of the trigonometric basis-set solution of Labik and co-workers. Second, by adding approximate "bridge" functions to the hypernetted-chain (HNC) integral equation, we have obtained predictions for liquid water in which the hydrogen bond length and number are in good agreement with "exact" computer simulations of the same model force laws. In addition, for dilute ionic solutions, the ion-oxygen and ion-hydrogen coordination numbers display both the physically correct stoichiometry and good agreement with earlier simulations. These results represent a measurable improvement over both a previous HNC solution of the central force model and the ex-RISM integral equation solutions for the TIPS and other rigid-molecule models of water.
Linaro, Daniele; Storace, Marco; Giugliano, Michele
2011-03-01
Stochastic channel gating is the major source of intrinsic neuronal noise whose functional consequences at the microcircuit and network levels have been only partly explored. A systematic study of this channel noise in large ensembles of biophysically detailed model neurons calls for the availability of fast numerical methods. In fact, exact techniques employ the microscopic simulation of the random opening and closing of individual ion channels, usually based on Markov models, whose computational loads are prohibitive for next generation massive computer models of the brain. In this work, we operatively define a procedure for translating any Markov model describing voltage- or ligand-gated membrane ion-conductances into an effective stochastic version, whose computer simulation is efficient, without compromising accuracy. Our approximation is based on an improved Langevin-like approach, which employs stochastic differential equations and no Monte Carlo methods. As opposed to an earlier proposal recently debated in the literature, our approximation reproduces accurately the statistical properties of the exact microscopic simulations, under a variety of conditions, from spontaneous to evoked response features. In addition, our method is not restricted to the Hodgkin-Huxley sodium and potassium currents and is general for a variety of voltage- and ligand-gated ion currents. As a by-product, the analysis of the properties emerging in exact Markov schemes by standard probability calculus enables us for the first time to analytically identify the sources of inaccuracy of the previous proposal, while providing solid ground for the modification and improvement we present here.
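A minimal version of such a Langevin-like approximation, for a single two-state gating variable, is a stochastic differential equation with state-dependent diffusion scaled by the channel count, with the trajectory reflected into [0, 1]. The sketch below uses plain Euler-Maruyama with illustrative rate constants, not the authors' improved scheme; its stationary mean should sit near the deterministic open fraction alpha/(alpha + beta).

```python
import math
import random

def simulate_langevin_gate(alpha, beta, n_channels,
                           dt=1e-3, steps=100000, seed=1):
    # Diffusion approximation of N independent two-state channels:
    #   dn = (alpha*(1-n) - beta*n) dt + sqrt((alpha*(1-n) + beta*n)/N) dW
    rng = random.Random(seed)
    n = alpha / (alpha + beta)          # start at the deterministic fixed point
    total, count = 0.0, 0
    for step in range(steps):
        drift = alpha * (1.0 - n) - beta * n
        diff = math.sqrt(max(alpha * (1.0 - n) + beta * n, 0.0) / n_channels)
        n += drift * dt + diff * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        n = min(1.0, max(0.0, n))       # keep the open fraction in [0, 1]
        if step > steps // 2:           # discard the transient half
            total += n
            count += 1
    return total / count

mean_open = simulate_langevin_gate(alpha=0.5, beta=0.5, n_channels=1000)
```

For alpha = beta the long-run mean open fraction is 0.5, with fluctuations shrinking as the channel count grows, which is the regime where the diffusion approximation is accurate.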
Santolini, Marc; Mora, Thierry; Hakim, Vincent
2014-01-01
The identification of transcription factor binding sites (TFBSs) on genomic DNA is of crucial importance for understanding and predicting regulatory elements in gene networks. TFBS motifs are commonly described by Position Weight Matrices (PWMs), in which each DNA base pair contributes independently to the transcription factor (TF) binding. However, this description ignores correlations between nucleotides at different positions, and is generally inaccurate: analysing fly and mouse in vivo ChIP-seq data, we show that in most cases the PWM model fails to reproduce the observed statistics of TFBSs. To overcome this issue, we introduce the pairwise interaction model (PIM), a generalization of the PWM model. The model is based on the principle of maximum entropy and explicitly describes pairwise correlations between nucleotides at different positions, while being otherwise as unconstrained as possible. It is mathematically equivalent to considering a TF-DNA binding energy that depends additively on each nucleotide identity at all positions in the TFBS, like the PWM model, but also additively on pairs of nucleotides. We find that the PIM significantly improves over the PWM model, and even provides an optimal description of TFBS statistics within statistical noise. The PIM generalizes previous approaches to interdependent positions: it accounts for co-variation of two or more base pairs, and predicts secondary motifs, while outperforming multiple-motif models consisting of mixtures of PWMs. We analyse the structure of pairwise interactions between nucleotides, and find that they are sparse and predominantly located between consecutive base pairs in the flanking region of TFBS. Nonetheless, interactions between pairs of non-consecutive nucleotides are found to play a significant role in the obtained accurate description of TFBS statistics. The PIM is computationally tractable, and provides a general framework that should be useful for describing and predicting TFBSs beyond
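The PWM baseline that the pairwise interaction model generalizes can be sketched in a few lines: each position contributes an independent log-odds term, so the binding score is additive over positions, exactly the independence assumption the PIM relaxes. The toy site counts, pseudocount, and uniform background below are illustrative.

```python
import math

def pwm_from_counts(counts, pseudocount=1.0, background=0.25):
    # Position Weight Matrix: per-position log-odds of each base versus a
    # uniform background, estimated from aligned binding-site counts.
    pwm = []
    for col in counts:
        total = sum(col.values()) + 4 * pseudocount
        pwm.append({b: math.log((col.get(b, 0) + pseudocount) / total
                                / background)
                    for b in "ACGT"})
    return pwm

def score(pwm, seq):
    # Additive PWM score: independent contribution from every position.
    return sum(col[b] for col, b in zip(pwm, seq))

# Toy aligned binding sites (illustrative, not ChIP-seq data).
sites = ["ACGT", "ACGA", "ACGT", "TCGT"]
counts = [{} for _ in range(4)]
for s in sites:
    for i, b in enumerate(s):
        counts[i][b] = counts[i].get(b, 0) + 1

pwm = pwm_from_counts(counts)
```

A PIM would add terms indexed by pairs of positions to the same score; the single-position sum above is the special case with all pairwise couplings set to zero.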
Application of thin plate splines for accurate regional ionosphere modeling with multi-GNSS data
NASA Astrophysics Data System (ADS)
Krypiak-Gregorczyk, Anna; Wielgosz, Pawel; Borkowski, Andrzej
2016-04-01
GNSS-derived regional ionosphere models are widely used in precise positioning as well as in ionosphere and space weather studies. However, their accuracy is often not sufficient to support precise positioning, RTK in particular. In this paper, we present a new approach that uses solely carrier phase multi-GNSS observables and thin plate splines (TPS) for accurate ionospheric TEC modeling. TPS is a closed solution of a variational problem minimizing both the sum of squared second derivatives of a smoothing function and the deviation between data points and this function. This approach is used in the UWM-rt1 regional ionosphere model developed at UWM in Olsztyn. The model provides ionospheric TEC maps with high spatial and temporal resolutions - 0.2 x 0.2 degrees and 2.5 minutes, respectively. For TEC estimation, EPN and EUPOS reference station data are used. The maps are available with a delay of 15-60 minutes. In this paper we compare the performance of the UWM-rt1 model with IGS global and CODE regional ionosphere maps during the ionospheric storm that took place on March 17th, 2015. During this storm, the TEC level over Europe doubled compared to earlier quiet days. The performance of the UWM-rt1 model was validated by (a) comparison to reference double-differenced ionospheric corrections over selected baselines, and (b) analysis of post-fit residuals of calibrated carrier phase geometry-free observational arcs at selected test stations. The results show a very good performance of the UWM-rt1 model. The post-fit residuals obtained with the UWM maps are lower by one order of magnitude than with the IGS maps. The accuracy of the UWM-rt1-derived TEC maps is estimated at 0.5 TECU. This may be directly translated to the user positioning domain.
Accurate force fields and methods for modelling organic molecular crystals at finite temperatures.
Nyman, Jonas; Pundyke, Orla Sheehan; Day, Graeme M
2016-06-21
We present an assessment of the performance of several force fields for modelling intermolecular interactions in organic molecular crystals using the X23 benchmark set. The performance of the force fields is compared to several popular dispersion corrected density functional methods. In addition, we present our implementation of lattice vibrational free energy calculations in the quasi-harmonic approximation, using several methods to account for phonon dispersion. This allows us to also benchmark the force fields' reproduction of finite temperature crystal structures. The results demonstrate that anisotropic atom-atom multipole-based force fields can be as accurate as several popular DFT-D methods, but have errors 2-3 times larger than the current best DFT-D methods. The largest error in the examined force fields is a systematic underestimation of the (absolute) lattice energy.
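The lattice vibrational free energy entering such quasi-harmonic benchmarks is, per phonon mode, the zero-point energy plus a thermal occupation term: F = sum_i [ h*nu_i/2 + kB*T*ln(1 - exp(-h*nu_i/(kB*T))) ]. The sketch below evaluates this for a handful of illustrative frequencies; it does not perform the phonon-dispersion sampling discussed in the paper.

```python
import math

KB = 8.617333262e-5       # Boltzmann constant, eV/K
HC_EV_CM = 1.239841984e-4  # h*c in eV*cm, converts a cm^-1 frequency to eV

def harmonic_free_energy(freqs_cm1, T):
    # Harmonic vibrational (Helmholtz) free energy per cell, in eV:
    # zero-point term plus thermal term for each mode.
    f = 0.0
    for nu in freqs_cm1:
        e = HC_EV_CM * nu
        f += 0.5 * e
        if T > 0.0:
            f += KB * T * math.log(1.0 - math.exp(-e / (KB * T)))
    return f

modes = [50.0, 120.0, 400.0, 1600.0]        # illustrative frequencies, cm^-1
f0 = harmonic_free_energy(modes, 0.0)       # zero-point energy only
f300 = harmonic_free_energy(modes, 300.0)   # thermal term lowers F
```

In the quasi-harmonic approximation this evaluation is repeated at several cell volumes, with the frequencies recomputed at each, to capture thermal expansion.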
Brandenburg, Jan Gerit; Grimme, Stefan
2014-06-05
The ambitious goal of organic crystal structure prediction challenges theoretical methods regarding their accuracy and efficiency. Dispersion-corrected density functional theory (DFT-D) in principle is applicable, but the computational demands, for example, to compute a huge number of polymorphs, are too high. Here, we demonstrate that this task can be carried out by a dispersion-corrected density functional tight binding (DFTB) method. The semiempirical Hamiltonian with the D3 correction can accurately and efficiently model both solid- and gas-phase inter- and intramolecular interactions with a speed-up of two orders of magnitude compared to DFT-D. The mean absolute deviations for interaction (lattice) energies for various databases are typically 2-3 kcal/mol (10-20%), that is, only about two times larger than those for DFT-D. For zero-point phonon energies, small deviations of <0.5 kcal/mol compared to DFT-D are obtained.
NASA Astrophysics Data System (ADS)
Somerville, W. R. C.; Auguié, B.; Le Ru, E. C.
2016-03-01
SMARTIES calculates the optical properties of oblate and prolate spheroidal particles, with comparable capabilities and ease-of-use as Mie theory for spheres. This suite of MATLAB codes provides a fully documented implementation of an improved T-matrix algorithm for the theoretical modelling of electromagnetic scattering by particles of spheroidal shape. Included are scripts that cover a range of scattering problems relevant to nanophotonics and plasmonics, including calculation of far-field scattering and absorption cross-sections for fixed incidence orientation, orientation-averaged cross-sections and scattering matrix, surface-field calculations as well as near-fields, wavelength-dependent near-field and far-field properties, and access to lower-level functions implementing the T-matrix calculations, including the T-matrix elements which may be calculated more accurately than with competing codes.
Ultrasonic ray models for complex geometries
NASA Astrophysics Data System (ADS)
Schumm, A.
2000-05-01
Computer Aided Design techniques have become an inherent part of many industrial applications and are also gaining popularity in Nondestructive Testing. In sound field calculations, CAD representations can contribute to one of the generic problems in ultrasonic modeling: wave propagation in complex geometries. Ray tracing codes were the first to take account of the geometry, providing qualitative information on beam propagation, such as geometrical echoes, multiple sound paths, and possible conversions between wave modes. The forward ray tracing approach is intuitive and straightforward and can evolve towards a more quantitative code if transmission, divergence and polarization information is added. If used to evaluate the impulse response of a given geometry, an approximate time-dependent received signal can be obtained after convolution with the excitation signal. The more accurate reconstruction of a sound field after interaction with a geometrical interface according to ray theory requires inverse (or Fermat) ray tracing to obtain the contribution of each elementary point source to the field at a given observation point. The resulting field of a finite transducer can then be obtained after integration over all point sources. While conceptually close to classical ray tracing, this approach puts more stringent requirements on the CAD representation employed and is more difficult to extend towards multiple interfaces. In this communication we present examples of both approaches. In a prospective step, the link between the two ray techniques is shown, and we illustrate how a combination of both approaches contributes to the solution of an industrial problem.
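At each geometrical interface, the refraction angles and critical angles that such ray-tracing codes must track follow from Snell's law with the sound speeds of the two media. A minimal sketch, using textbook longitudinal sound speeds for a water-steel interface (the values are illustrative):

```python
import math

def refract(theta_i_deg, c1, c2):
    # Snell's law for a ray crossing from medium 1 (speed c1) into
    # medium 2 (speed c2): sin(t2)/c2 = sin(t1)/c1.
    # Returns the refracted angle in degrees, or None beyond the
    # critical angle (total internal reflection).
    s = math.sin(math.radians(theta_i_deg)) * c2 / c1
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

C_WATER = 1480.0   # m/s, longitudinal
C_STEEL = 5900.0   # m/s, longitudinal

theta_t = refract(10.0, C_WATER, C_STEEL)               # ~43.8 degrees
critical = math.degrees(math.asin(C_WATER / C_STEEL))   # first critical angle
no_ray = refract(30.0, C_WATER, C_STEEL)                # past critical: None
```

A full code repeats this test per wave mode (longitudinal and shear speeds differ), which is where the mode conversions mentioned above enter.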
Computational Modeling of Actinide Complexes
Balasubramanian, K
2007-03-07
We will present our recent studies on computational actinide chemistry of complexes which are not only interesting from the standpoint of actinide coordination chemistry but also of relevance to environmental management of high-level nuclear wastes. We will be discussing our recent collaborative efforts with Professor Heino Nitsche of LBNL, whose research group has been actively carrying out experimental studies on these species. Computations of actinide complexes are also quintessential to our understanding of the complexes found in geochemical and biochemical environments and of actinide chemistry relevant to advanced nuclear systems. In particular, we have been studying uranyl, plutonyl, and Cm(III) complexes in aqueous solution. These studies are made with a variety of relativistic methods such as coupled cluster methods, DFT, and complete active space multi-configuration self-consistent-field (CASSCF) followed by large-scale CI computations and relativistic CI (RCI) computations up to 60 million configurations. Our computational studies on actinide complexes were motivated by ongoing EXAFS studies of speciated complexes in geo- and biochemical environments carried out by Prof. Heino Nitsche's group at Berkeley, Dr. David Clark at Los Alamos, and Dr. Gibson's work on small actinide molecules at ORNL. The hydrolysis reactions of uranyl, neptunyl and plutonyl complexes have received considerable attention due to their geochemical and biochemical importance, but the resulting free energies in solution and the mechanism of deprotonation have been topics of considerable uncertainty. We have computed deprotonation and the migration of one water molecule from the first solvation shell to the second shell in UO₂(H₂O)₅²⁺, NpO₂(H₂O)₆⁺, and PuO₂(H₂O)₅²⁺ complexes. Our computed Gibbs free energy (7.27 kcal/mol) in solution agrees for the first time with the experiment (7.1 kcal/mol).
O’Connor, James PB; Boult, Jessica KR; Jamin, Yann; Babur, Muhammad; Finegan, Katherine G; Williams, Kaye J; Little, Ross A; Jackson, Alan; Parker, Geoff JM; Reynolds, Andrew R; Waterton, John C; Robinson, Simon P
2015-01-01
There is a clinical need for non-invasive biomarkers of tumor hypoxia for prognostic and predictive studies, radiotherapy planning and therapy monitoring. Oxygen-enhanced MRI (OE-MRI) is an emerging imaging technique for quantifying the spatial distribution and extent of tumor oxygen delivery in vivo. In OE-MRI, the longitudinal relaxation rate of protons (ΔR1) changes in proportion to the concentration of molecular oxygen dissolved in plasma or interstitial tissue fluid. Therefore, well-oxygenated tissues show positive ΔR1. We hypothesized that the fraction of tumor tissue refractory to oxygen challenge (lack of positive ΔR1, termed "Oxy-R fraction") would be a robust biomarker of hypoxia in models with varying vascular and hypoxic features. Here we demonstrate that OE-MRI signals are accurate, precise and sensitive to changes in tumor pO2 in highly vascular 786-0 renal cancer xenografts. Furthermore, we show that Oxy-R fraction can quantify the hypoxic fraction in multiple models with differing hypoxic and vascular phenotypes, when used in combination with measurements of tumor perfusion. Finally, Oxy-R fraction can detect dynamic changes in hypoxia induced by the vasomodulator agent hydralazine. In contrast, more conventional biomarkers of hypoxia (derived from blood oxygenation level-dependent MRI and dynamic contrast-enhanced MRI) did not relate to tumor hypoxia consistently. Our results show that the Oxy-R fraction accurately quantifies tumor hypoxia non-invasively and is immediately translatable to the clinic. PMID:26659574
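At its simplest, the Oxy-R idea reduces to thresholding per-voxel ΔR1 responses: voxels whose relaxation rate does not rise under oxygen challenge are counted as refractory. The sketch below uses entirely synthetic ΔR1 values and an illustrative threshold; it is not the paper's acquisition or statistical procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vox = 10000

# Synthetic tumour: 30% of voxels are refractory to oxygen challenge.
hypoxic = rng.random(n_vox) < 0.3
delta_r1 = np.where(hypoxic,
                    rng.normal(0.000, 0.005, n_vox),   # no oxygen response
                    rng.normal(0.020, 0.005, n_vox))   # well-oxygenated

# Classify voxels whose Delta-R1 fails to rise above a small positive
# threshold (illustrative: two noise SDs below the oxygenated mean).
threshold = 0.01
oxy_r_fraction = float(np.mean(delta_r1 <= threshold))
```

With these synthetic parameters the recovered fraction lands close to the simulated 30% refractory fraction, with the residual discrepancy set by the overlap of the two noise distributions.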
Random generalized linear model: a highly accurate and interpretable ensemble predictor
2013-01-01
Background Ensemble predictors such as the random forest are known to have superior accuracy but their black-box predictions are difficult to interpret. In contrast, a generalized linear model (GLM) is very interpretable, especially when forward feature selection is used to construct the model. However, forward feature selection tends to overfit the data and leads to low predictive accuracy. Therefore, it remains an important research goal to combine the advantages of ensemble predictors (high accuracy) with the advantages of forward regression modeling (interpretability). To address this goal, several articles have explored GLM based ensemble predictors. Since limited evaluations suggested that these ensemble predictors were less accurate than alternative predictors, they have received little attention in the literature. Results Comprehensive evaluations involving hundreds of genomic data sets, the UCI machine learning benchmark data, and simulations are used to give GLM based ensemble predictors a new and careful look. A novel bootstrap aggregated (bagged) GLM predictor that incorporates several elements of randomness and instability (random subspace method, optional interaction terms, forward variable selection) often outperforms a host of alternative prediction methods including random forests and penalized regression models (ridge regression, elastic net, lasso). This random generalized linear model (RGLM) predictor provides variable importance measures that can be used to define a "thinned" ensemble predictor (involving few features) that retains excellent predictive accuracy. Conclusion RGLM is a state-of-the-art predictor that shares the advantages of a random forest (excellent predictive accuracy, feature importance measures, out-of-bag estimates of accuracy) with those of a forward selected generalized linear model (interpretability). These methods are implemented in the freely available R software package randomGLM. PMID:23323760
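The bagging plus random-subspace construction behind RGLM can be sketched with the simplest GLM, ordinary least squares: each ensemble member fits on a bootstrap sample restricted to a random feature subset, and predictions are averaged. This is not the randomGLM package (no forward selection or interaction terms), and all parameters and the synthetic data are illustrative.

```python
import numpy as np

def fit_rglm(X, y, n_models=50, subspace=0.5, seed=0):
    # RGLM-style ensemble sketch: bag OLS (the Gaussian GLM) over
    # bootstrap samples and random feature subspaces.
    rng = np.random.default_rng(seed)
    n, p = X.shape
    k = max(1, int(subspace * p))
    members = []
    for _ in range(n_models):
        rows = rng.integers(0, n, n)                  # bootstrap sample
        cols = rng.choice(p, size=k, replace=False)   # random subspace
        Xm = np.column_stack([np.ones(n), X[rows][:, cols]])
        beta, *_ = np.linalg.lstsq(Xm, y[rows], rcond=None)
        members.append((cols, beta))
    return members

def predict_rglm(members, X):
    # Average member predictions, each on its own feature subset.
    preds = [np.column_stack([np.ones(len(X)), X[:, cols]]) @ beta
             for cols, beta in members]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)
model = fit_rglm(X, y)
pred = predict_rglm(model, X)
```

Because every member is a linear model, per-feature importance (e.g. selection frequency or average coefficient magnitude across members) stays directly interpretable, which is the point the abstract makes.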
Development of a Fast and Accurate PCRTM Radiative Transfer Model in the Solar Spectral Region
NASA Technical Reports Server (NTRS)
Liu, Xu; Yang, Qiguang; Li, Hui; Jin, Zhonghai; Wu, Wan; Kizer, Susan; Zhou, Daniel K.; Yang, Ping
2016-01-01
A fast and accurate principal component-based radiative transfer model in the solar spectral region (PCRTM-SOLAR) has been developed. The algorithm is capable of simulating reflected solar spectra in both clear sky and cloudy atmospheric conditions. Multiple scattering of the solar beam by the multilayer clouds and aerosols are calculated using a discrete ordinate radiative transfer scheme. The PCRTM-SOLAR model can be trained to simulate top-of-atmosphere radiance or reflectance spectra with spectral resolution ranging from 1 cm^{-1} to a few nanometers. Broadband radiances or reflectance can also be calculated if desired. The current version of the PCRTM-SOLAR covers a spectral range from 300 to 2500 nm. The model is valid for solar zenith angles ranging from 0 to 80 deg, the instrument view zenith angles ranging from 0 to 70 deg, and the relative azimuthal angles ranging from 0 to 360 deg. Depending on the number of spectral channels, the speed of the current version of PCRTM-SOLAR is a few hundred to over one thousand times faster than the medium speed correlated-k option MODTRAN5. The absolute RMS error in channel radiance is smaller than 10^{-3} mW/cm^{2}/sr/cm^{-1} and the relative error is typically less than 0.2%.
Development of a fast and accurate PCRTM radiative transfer model in the solar spectral region.
Liu, Xu; Yang, Qiguang; Li, Hui; Jin, Zhonghai; Wu, Wan; Kizer, Susan; Zhou, Daniel K; Yang, Ping
2016-10-10
A fast and accurate principal component-based radiative transfer model in the solar spectral region (PCRTM-SOLAR) has been developed. The algorithm is capable of simulating reflected solar spectra in both clear sky and cloudy atmospheric conditions. Multiple scattering of the solar beam by the multilayer clouds and aerosols are calculated using a discrete ordinate radiative transfer scheme. The PCRTM-SOLAR model can be trained to simulate top-of-atmosphere radiance or reflectance spectra with spectral resolution ranging from 1 cm^{-1} to a few nanometers. Broadband radiances or reflectance can also be calculated if desired. The current version of the PCRTM-SOLAR covers a spectral range from 300 to 2500 nm. The model is valid for solar zenith angles ranging from 0 to 80 deg, the instrument view zenith angles ranging from 0 to 70 deg, and the relative azimuthal angles ranging from 0 to 360 deg. Depending on the number of spectral channels, the speed of the current version of PCRTM-SOLAR is a few hundred to over one thousand times faster than the medium speed correlated-k option MODTRAN5. The absolute RMS error in channel radiance is smaller than 10^{-3} mW/cm^{2}/sr/cm^{-1} and the relative error is typically less than 0.2%.
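The principal-component trick behind PCRTM-type models exploits the redundancy of spectra across atmospheric states: a handful of components reconstructs the whole ensemble, so the costly line-by-line or multiple-scattering calculation need only run for a few representative states. A toy low-rank illustration of that redundancy (not the PCRTM code; the mode structure and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
wavenumber = np.linspace(0.0, 1.0, 500)

# 200 synthetic "atmospheric states" built from 3 underlying spectral
# modes, so the ensemble is intrinsically low rank plus small noise.
modes = np.vstack([np.sin(2.0 * np.pi * f * wavenumber) for f in (1, 3, 7)])
weights = rng.normal(size=(200, 3))
spectra = weights @ modes + 0.001 * rng.normal(size=(200, 500))

# PCA via SVD of the mean-centred ensemble.
mean = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)

n_pc = 3   # the expensive RT would run only for these few component states
recon = mean + (U[:, :n_pc] * S[:n_pc]) @ Vt[:n_pc]
rel_err = np.linalg.norm(recon - spectra) / np.linalg.norm(spectra)
```

Three components reconstruct all 200 spectra to well under a percent here; real atmospheric optical-property sets are higher rank, but the same compression is what buys the quoted speed-ups.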
Accurate Models of Formation Enthalpy Created using Machine Learning and Voronoi Tessellations
NASA Astrophysics Data System (ADS)
Ward, Logan; Liu, Rosanne; Krishna, Amar; Hegde, Vinay; Agrawal, Ankit; Choudhary, Alok; Wolverton, Chris
Several groups in the past decade have used high-throughput Density Functional Theory to predict the properties of hundreds of thousands of compounds. These databases provide the unique capability of being able to quickly query the properties of many compounds. Here, we explore how these datasets can also be used to create models that can predict the properties of compounds at rates several orders of magnitude faster than DFT. Our method relies on using Voronoi tessellations to derive attributes that quantitatively characterize the local environment around each atom, which then are used as input to a machine learning model. In this presentation, we will discuss the application of this technique to predicting the formation enthalpy of compounds using data from the Open Quantum Materials Database (OQMD). To date, we have found that this technique can be used to create models that are about twice as accurate as those created using the Coulomb Matrix and Partial Radial Distribution approaches and equally fast to evaluate.
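The Voronoi-derived local-environment attributes can be illustrated with scipy: the number of Voronoi ridges (cell faces) a site shares with its neighbours is a simple coordination-like feature of the sort fed to such a model. The jittered 2D square lattice below is illustrative; real descriptors work in 3D with periodic boundary conditions and face-area weighting.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)

# A 5x5 square lattice of "atoms", jittered so the tessellation is
# non-degenerate; site index = i*5 + j.
grid = np.array([[i, j] for i in range(5) for j in range(5)], dtype=float)
pts = grid + 0.05 * rng.normal(size=grid.shape)

vor = Voronoi(pts)
center = 12   # the middle lattice site, (i, j) = (2, 2)

# Neighbours are the sites whose Voronoi cells share a ridge with
# the centre cell.
neighbors = sorted({int(a) if int(b) == center else int(b)
                    for a, b in vor.ridge_points
                    if center in (int(a), int(b))})
coordination = len(neighbors)   # coordination-like attribute for site 12
```

For an interior site of a lightly jittered square lattice the ridge count lands between the 4 face neighbours and the 8 face-plus-diagonal neighbours; such counts (and, in practice, face areas and neighbour property differences) form the attribute vector passed to the regression model.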
A murine model of neurofibromatosis type 2 that accurately phenocopies human schwannoma formation
Gehlhausen, Jeffrey R.; Park, Su-Jung; Hickox, Ann E.; Shew, Matthew; Staser, Karl; Rhodes, Steven D.; Menon, Keshav; Lajiness, Jacquelyn D.; Mwanthi, Muithi; Yang, Xianlin; Yuan, Jin; Territo, Paul; Hutchins, Gary; Nalepa, Grzegorz; Yang, Feng-Chun; Conway, Simon J.; Heinz, Michael G.; Stemmer-Rachamimov, Anat; Yates, Charles W.; Wade Clapp, D.
2015-01-01
Neurofibromatosis type 2 (NF2) is an autosomal dominant genetic disorder resulting from germline mutations in the NF2 gene. Bilateral vestibular schwannomas, tumors on cranial nerve VIII, are pathognomonic for NF2 disease. Furthermore, schwannomas also commonly develop in other cranial nerves, dorsal root ganglia and peripheral nerves. These tumors are a major cause of morbidity and mortality, and medical therapies to treat them are limited. Animal models that accurately recapitulate the full anatomical spectrum of human NF2-related schwannomas, including the characteristic functional deficits in hearing and balance associated with cranial nerve VIII tumors, would allow systematic evaluation of experimental therapeutics prior to clinical use. Here, we present a genetically engineered NF2 mouse model generated through excision of the Nf2 gene driven by Cre expression under control of a tissue-restricted 3.9 kb Periostin promoter element. By 10 months of age, 100% of Postn-Cre; Nf2^flox/flox mice develop spinal, peripheral and cranial nerve tumors histologically identical to human schwannomas. In addition, the development of cranial nerve VIII tumors correlates with functional impairments in hearing and balance, as measured by auditory brainstem response and vestibular testing. Overall, the Postn-Cre; Nf2^flox/flox tumor model provides a novel tool for future mechanistic and therapeutic studies of NF2-associated schwannomas. PMID:25113746
Accurate Time-Dependent Traveling-Wave Tube Model Developed for Computational Bit-Error-Rate Testing
NASA Technical Reports Server (NTRS)
Kory, Carol L.
2001-01-01
The phenomenal growth of the satellite communications industry has created a large demand for traveling-wave tubes (TWT's) operating with unprecedented specifications requiring the design and production of many novel devices in record time. To achieve this, the TWT industry heavily relies on computational modeling. However, the TWT industry's computational modeling capabilities need to be improved because there are often discrepancies between measured TWT data and that predicted by conventional two-dimensional helical TWT interaction codes. This limits the analysis and design of novel devices or TWT's with parameters differing from what is conventionally manufactured. In addition, the inaccuracy of current computational tools limits achievable TWT performance because optimized designs require highly accurate models. To address these concerns, a fully three-dimensional, time-dependent, helical TWT interaction model was developed using the electromagnetic particle-in-cell code MAFIA (Solution of MAxwell's equations by the Finite-Integration-Algorithm). The model includes a short section of helical slow-wave circuit with excitation fed by radiofrequency input/output couplers, and an electron beam contained by periodic permanent magnet focusing. A cutaway view of several turns of the three-dimensional helical slow-wave circuit with input/output couplers is shown. This has been shown to be more accurate than conventionally used two-dimensional models. The growth of the communications industry has also imposed a demand for increased data rates for the transmission of large volumes of data. To achieve increased data rates, complex modulation and multiple access techniques are employed requiring minimum distortion of the signal as it is passed through the TWT. Thus, intersymbol interference (ISI) becomes a major consideration, as well as suspected causes such as reflections within the TWT. To experimentally investigate effects of the physical TWT on ISI would be
NASA Astrophysics Data System (ADS)
Kopparla, P.; Natraj, V.; Shia, R. L.; Spurr, R. J. D.; Crisp, D.; Yung, Y. L.
2015-12-01
Radiative transfer (RT) computations form the engine of atmospheric retrieval codes. However, full treatment of RT processes is computationally expensive, prompting usage of two-stream approximations in current exoplanetary atmospheric retrieval codes [Line et al., 2013]. Natraj et al. [2005, 2010] and Spurr and Natraj [2013] demonstrated the ability of a technique using principal component analysis (PCA) to speed up RT computations. In the PCA method for RT performance enhancement, empirical orthogonal functions are developed for binned sets of inherent optical properties that possess some redundancy; costly multiple-scattering RT calculations are done only for those few optical states corresponding to the most important principal components, and correction factors are applied to the approximate radiation fields. Kopparla et al. [2015, in preparation] extended the PCA method to a broadband spectral region from the ultraviolet to the shortwave infrared (0.3-3 micron), accounting for major gas absorptions in this region. Here, we apply the PCA method to some typical (exo-)planetary retrieval problems. Comparisons between the new model, called the Universal Principal Component Analysis Radiative Transfer (UPCART) model, two-stream models and line-by-line RT models are performed for spectral radiances, spectral fluxes and broadband fluxes. Each of these is calculated at the top of the atmosphere for several scenarios with varying aerosol types, extinction and scattering optical depth profiles, and stellar and viewing geometries. We demonstrate that very accurate radiance and flux estimates can be obtained, with better than 1% accuracy in all spectral regions and better than 0.1% in most cases, as compared to a numerically exact line-by-line RT model. The accuracy is enhanced when the results are convolved to typical instrument resolutions. The operational speed and accuracy of UPCART can be further improved by optimizing binning schemes and parallelizing the codes, work
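The core of the PCA speed-up described above can be sketched in a few lines of numpy. The two models below are toy stand-ins (assumptions for illustration, not the actual two-stream or line-by-line RT codes): within a bin of similar optical states, the costly model is evaluated only at the bin mean and at one-sigma perturbations along the leading principal components, and a first-order correction factor is applied to the cheap model everywhere else.

```python
import numpy as np

# Hypothetical stand-ins for the two radiative-transfer models; both map
# a profile of inherent optical properties x to a scalar radiance.
rng = np.random.default_rng(0)
d = 20                                     # levels in the optical-property profile
w = rng.normal(size=d) * 0.05
q = rng.normal(size=d) * 0.02

def cheap_model(x):                        # e.g. a two-stream approximation
    return np.exp(x @ w)

def costly_model(x):                       # e.g. full multiple scattering
    return np.exp(x @ w + x @ q)

# A "bin" of optical states lying near a low-dimensional subspace.
mean = rng.normal(size=d)
basis = np.linalg.qr(rng.normal(size=(d, 2)))[0].T   # 2 orthonormal directions
X = mean + rng.normal(size=(500, 2)) @ basis

# PCA of the bin via SVD of the centered data.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
npc = 2
pcs = Vt[:npc]                             # leading principal directions
sig = S[:npc] / np.sqrt(len(X))            # their standard deviations

# Costly model is run ONLY at the bin mean and +/- one-sigma PC
# perturbations; a first-order log-ratio correction covers the rest.
x0 = X.mean(axis=0)
def logratio(x):
    return np.log(costly_model(x) / cheap_model(x))

c0 = logratio(x0)
dc = np.array([(logratio(x0 + s * p) - logratio(x0 - s * p)) / (2 * s)
               for p, s in zip(pcs, sig)])

t = Xc @ pcs.T                             # PC scores of every state in the bin
approx = cheap_model(X) * np.exp(c0 + t @ dc)
err = np.max(np.abs(approx / costly_model(X) - 1))
```

With these smooth toy models the first-order correction is essentially exact; for real RT codes the residual error is what the binning scheme is tuned to control.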
NASA Astrophysics Data System (ADS)
West, J. B.; Ehleringer, J. R.; Cerling, T.
2006-12-01
Understanding how the biosphere responds to change is at the heart of biogeochemistry, ecology, and other Earth sciences. The dramatic increase in human population and technological capacity over the past 200 years or so has resulted in numerous, simultaneous changes to biosphere structure and function. This, then, has led to increased urgency in the scientific community to try to understand how systems have already responded to these changes, and how they might do so in the future. Since all biospheric processes exhibit some patchiness or patterns over space, as well as time, we believe that understanding the dynamic interactions between natural systems and human technological manipulations can be improved if these systems are studied in an explicitly spatial context. We present here results of some of our efforts to model the spatial variation in the stable isotope ratios (δ2H and δ18O) of plants over large spatial extents, and how these spatial model predictions compare to spatially explicit data. Stable isotopes trace and record ecological processes and as such, if modeled correctly over Earth's surface, allow us insights into changes in biosphere states and processes across spatial scales. The data-model comparisons show good agreement, in spite of the remaining uncertainties (e.g., plant source water isotopic composition). For example, inter-annual changes in climate are recorded in wine stable isotope ratios. Also, a much simpler model of leaf water enrichment driven with spatially continuous global rasters of precipitation and climate normals largely agrees with complex GCM modeling that includes leaf water δ18O. Our results suggest that modeling plant stable isotope ratios across large spatial extents may be done with reasonable accuracy, including over time. These spatial maps, or isoscapes, can now be utilized to help understand spatially distributed data, as well as to help guide future studies designed to understand ecological change across
Capturing Complexity through Maturity Modelling
ERIC Educational Resources Information Center
Underwood, Jean; Dillon, Gayle
2004-01-01
The impact of information and communication technologies (ICT) on the process and products of education is difficult to assess for a number of reasons. In brief, education is a complex system of interrelationships, of checks and balances. This context is not a neutral backdrop on which teaching and learning are played out. Rather, it may help, or…
Accurate model annotation of a near-atomic resolution cryo-EM map
Hryc, Corey F.; Chen, Dong-Hua; Afonine, Pavel V.; Jakana, Joanita; Wang, Zhao; Haase-Pettingell, Cameron; Jiang, Wen; Adams, Paul D.; King, Jonathan A.; Schmid, Michael F.; Chiu, Wah
2017-01-01
Electron cryomicroscopy (cryo-EM) has been used to determine the atomic coordinates (models) from density maps of biological assemblies. These models can be assessed by their overall fit to the experimental data and stereochemical information. However, these models do not annotate the actual density values of the atoms or their positional uncertainty. Here, we introduce a computational procedure to derive an atomic model from a cryo-EM map with annotated metadata. The accuracy of such a model is validated by a faithful replication of the experimental cryo-EM map computed using the coordinates and associated metadata. The functional interpretation of any structural features in the model and its utilization for future studies can be made in the context of its measure of uncertainty. We applied this protocol to the 3.3-Å map of the mature P22 bacteriophage capsid, a large and complex macromolecular assembly. With this protocol, we identify and annotate previously undescribed molecular interactions between capsid subunits that are crucial to maintain stability in the absence of cementing proteins or cross-linking, as occur in other bacteriophages. PMID:26170553
NASA Astrophysics Data System (ADS)
Kees, C. E.; Farthing, M. W.; Terrel, A.; Certik, O.; Seljebotn, D.
2013-12-01
This presentation will focus on two barriers to progress in the hydrological modeling community, and research and development conducted to lessen or eliminate them. The first is a barrier to sharing hydrological models among specialized scientists that is caused by intertwining the implementation of numerical methods with the implementation of abstract numerical modeling information. In the Proteus toolkit for computational methods and simulation, we have decoupled these two important parts of the computational model through separate "physics" and "numerics" interfaces. More recently we have begun developing the Strong Form Language for easy and direct representation of the mathematical model formulation in a domain-specific language embedded in Python. The second major barrier is sharing ANY scientific software tools that have complex library or module dependencies, as most parallel, multi-physics hydrological models must have. In this setting, users and developers are dependent on an entire distribution, possibly depending on multiple compilers and special instructions depending on the environment of the target machine. To solve these problems we have developed hashdist, a stateless package management tool, and a resulting portable, open source scientific software distribution.
Telfer, Scott; Erdemir, Ahmet; Woodburn, James; Cavanagh, Peter R
2016-01-25
Integration of patient-specific biomechanical measurements into the design of therapeutic footwear has been shown to improve clinical outcomes in patients with diabetic foot disease. The addition of numerical simulations intended to optimise intervention design may help to build on these advances; however, at present the time and labour required to generate and run personalised models of foot anatomy restrict their routine clinical utility. In this study we developed second-generation personalised simple finite element (FE) models of the forefoot with varying geometric fidelities. Plantar pressure predictions from barefoot, shod, and shod with insole simulations using simplified models were compared to those obtained from CT-based FE models incorporating more detailed representations of bone and tissue geometry. A simplified model including representations of metatarsals based on simple geometric shapes, embedded within a contoured soft tissue block with outer geometry acquired from a 3D surface scan, was found to provide pressure predictions closest to the more complex model, with mean differences of 13.3 kPa (SD 13.4), 12.52 kPa (SD 11.9) and 9.6 kPa (SD 9.3) for the barefoot, shod, and insole conditions, respectively. The simplified model design could be produced in <1 h compared to >3 h in the case of the more detailed model, and solved on average 24% faster. FE models of the forefoot based on simplified geometric representations of the metatarsal bones and soft tissue surface geometry from 3D surface scans may potentially provide a simulation approach with improved clinical utility; however, further validity testing around a range of therapeutic footwear types is required.
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today--a common model is the actuator disk concept--are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal axis devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier--Stokes simulations, to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of
Polzer, S; Gasser, T C; Novak, K; Man, V; Tichy, M; Skacel, P; Bursa, J
2015-03-01
Structure-based constitutive models might help in exploring mechanisms by which arterial wall histology is linked to wall mechanics. This study aims to validate a recently proposed structure-based constitutive model. Specifically, the model's ability to predict the mechanical biaxial response of porcine aortic tissue with predefined collagen structure was tested. Histological slices from porcine thoracic aorta wall (n=9) were automatically processed to quantify the collagen fiber organization, and mechanical testing identified the non-linear properties of the wall samples (n=18) over a wide range of biaxial stretches. Histological and mechanical experimental data were used to identify the model parameters of a recently proposed multi-scale constitutive description for arterial layers. The model's predictive capability was tested with respect to interpolation and extrapolation. Collagen in the media was predominantly aligned in the circumferential direction (planar von Mises distribution with concentration parameter bM=1.03 ± 0.23), and its coherence decreased gradually from the luminal to the abluminal tissue layers (inner media, b=1.54 ± 0.40; outer media, b=0.72 ± 0.20). In contrast, the collagen in the adventitia was aligned almost isotropically (bA=0.27 ± 0.11), and no features, such as families of coherent fibers, were identified. The applied constitutive model captured the aortic biaxial properties accurately (coefficient of determination R(2)=0.95 ± 0.03) over the entire range of biaxial deformations and with physically meaningful model parameters. Good predictive properties, well outside the parameter identification space, were observed (R(2)=0.92 ± 0.04). Multi-scale constitutive models equipped with realistic micro-histological data can predict macroscopic non-linear aortic wall properties. Collagen largely defines the properties of the media already at low strains, which explains the origin of the wall anisotropy seen at this strain level. The structure and mechanical
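The planar von Mises orientation density quoted above (concentration parameter b) can be written down directly. This is a generic sketch of that distribution with the b values reported in the abstract, not the authors' fitting code:

```python
import numpy as np

def planar_von_mises(theta, b, mu=0.0):
    """pi-periodic von Mises density for planar fiber angles theta
    (radians, defined over an interval of length pi), with concentration
    parameter b and preferred direction mu. b -> 0 recovers a planar
    isotropic distribution; larger b means more coherent alignment.
    Normalization uses the modified Bessel function I0."""
    return np.exp(b * np.cos(2.0 * (theta - mu))) / (np.pi * np.i0(b))

# Midpoint-rule normalization check over [-pi/2, pi/2) for the media
# (b ~ 1.03, circumferential) and adventitia (b ~ 0.27, near-isotropic)
# concentrations quoted above.
n = 100000
theta = -np.pi / 2 + (np.arange(n) + 0.5) * np.pi / n
areas = {b: planar_von_mises(theta, b).sum() * np.pi / n for b in (1.03, 0.27)}
```

The density peak at the preferred direction grows with b, which is why the media (b about 1.03) is markedly anisotropic while the adventitia (b about 0.27) is nearly isotropic.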
Walter, Johannes; Thajudeen, Thaseem; Süss, Sebastian; Segets, Doris; Peukert, Wolfgang
2015-04-21
Analytical centrifugation (AC) is a powerful technique for the characterisation of nanoparticles in colloidal systems. As a direct and absolute technique it requires no calibration or measurements of standards. Moreover, it offers simple experimental design and handling, high sample throughput as well as moderate investment costs. However, the full potential of AC for nanoparticle size analysis requires the development of powerful data analysis techniques. In this study we show how the application of direct boundary models to AC data opens up new possibilities in particle characterisation. An accurate analysis method, successfully applied to sedimentation data obtained by analytical ultracentrifugation (AUC) in the past, was used for the first time in analysing AC data. Unlike traditional data evaluation routines for AC using a designated number of radial positions or scans, direct boundary models consider the complete sedimentation boundary, which results in significantly better statistics. We demonstrate that meniscus fitting, as well as the correction of radius and time invariant noise significantly improves the signal-to-noise ratio and prevents the occurrence of false positives due to optical artefacts. Moreover, hydrodynamic non-ideality can be assessed by the residuals obtained from the analysis. The sedimentation coefficient distributions obtained by AC are in excellent agreement with the results from AUC. Brownian dynamics simulations were used to generate numerical sedimentation data to study the influence of diffusion on the obtained distributions. Our approach is further validated using polystyrene and silica nanoparticles. In particular, we demonstrate the strength of AC for analysing multimodal distributions by means of gold nanoparticles.
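For context, the quantity whose distribution is recovered here, the sedimentation coefficient s, is defined by dr/dt = s ω² r, so a sedimenting boundary position gives s from the slope of ln r against time. A minimal synthetic-data sketch (illustrative rotor speed and s value; this is the textbook relation, not the direct-boundary analysis itself):

```python
import numpy as np

# dr/dt = s * omega^2 * r  =>  ln r(t) is linear in t with slope s * omega^2.
omega = 2 * np.pi * 40000 / 60           # 40,000 rpm in rad/s (illustrative)
s_true = 5e-13                           # 5 Svedberg (1 S = 1e-13 s)

t = np.linspace(0, 3600, 60)             # one hour of scans
r = 6.0 * np.exp(s_true * omega**2 * t)  # boundary position in cm, meniscus at 6 cm

slope = np.polyfit(t, np.log(r), 1)[0]   # least-squares fit of ln r vs t
s_fit = slope / omega**2                 # recovered sedimentation coefficient
```

Direct boundary modeling goes further by fitting the full radial concentration profiles (including diffusional spreading and noise terms) rather than tracked boundary positions, which is what yields the improved statistics described above.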
Accurate modeling of cache replacement policies in a Data-Grid.
Otoo, Ekow J.; Shoshani, Arie
2003-01-23
Caching techniques have been used to bridge the performance gap of storage hierarchies in computing systems. In data-intensive applications that access large data files over a wide-area network environment, such as a data grid, caching mechanisms can significantly improve the data access performance under appropriate workloads. In a data grid, it is envisioned that local disk storage resources retain or cache the data files being used by local applications. Under a workload of shared access and high locality of reference, the performance of the caching techniques depends heavily on the replacement policies being used. A replacement policy effectively determines which set of objects must be evicted when space is needed. Unlike cache replacement policies in virtual memory paging or database buffering, developing an optimal replacement policy for data grids is complicated by the fact that the file objects being cached have varying sizes and transfer and processing costs that vary with time. We present an accurate model for evaluating various replacement policies and propose a new replacement algorithm referred to as "Least Cost Beneficial based on K backward references (LCB-K)." Using this modeling technique, we compare LCB-K with various replacement policies such as Least Frequently Used (LFU), Least Recently Used (LRU), Greedy DualSize (GDS), etc., using synthetic and actual workloads of accesses to and from tertiary storage systems. The results obtained show that LCB-K and GDS are the most cost-effective cache replacement policies for storage resource management in data grids.
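Of the policies compared above, Greedy DualSize (GDS) illustrates how varying file sizes and fetch costs enter a replacement decision. A minimal sketch (not the authors' implementation; the priority H = L + cost/size and the ageing rule follow the standard GDS formulation):

```python
import heapq

class GreedyDualSizeCache:
    """Sketch of the Greedy DualSize (GDS) policy: each cached file
    gets priority H = L + cost/size; the file with the smallest H is
    evicted, and the global ageing value L is raised to that priority
    so long-resident entries age relative to new arrivals."""

    def __init__(self, capacity):
        self.capacity = capacity   # total cache space (e.g. bytes)
        self.used = 0
        self.L = 0.0               # ageing value
        self.entries = {}          # name -> (size, cost, H)
        self.heap = []             # (H, name) pairs, lazily invalidated

    def access(self, name, size, cost):
        """Access one file; returns True on a cache hit."""
        if name in self.entries:                     # hit: refresh priority
            size, cost, _ = self.entries[name]
            h = self.L + cost / size
            self.entries[name] = (size, cost, h)
            heapq.heappush(self.heap, (h, name))
            return True
        if size > self.capacity:                     # never cacheable
            return False
        while self.used + size > self.capacity:      # evict lowest-H files
            h, victim = heapq.heappop(self.heap)
            entry = self.entries.get(victim)
            if entry is not None and entry[2] == h:  # skip stale heap entries
                self.L = h
                self.used -= entry[0]
                del self.entries[victim]
        h = self.L + cost / size
        self.entries[name] = (size, cost, h)
        heapq.heappush(self.heap, (h, name))
        self.used += size
        return False

# Tiny workload on a 100-unit cache: the cheap file "a" is evicted
# before the expensive file "b" when space runs out.
cache = GreedyDualSizeCache(100)
hits = [cache.access("a", 60, 1.0), cache.access("a", 60, 1.0),
        cache.access("b", 60, 10.0), cache.access("a", 60, 1.0)]
```

LCB-K additionally looks at the K most recent backward references per file to estimate future benefit, which this sketch does not attempt.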
SPARC: Mass Models for 175 Disk Galaxies with Spitzer Photometry and Accurate Rotation Curves
NASA Astrophysics Data System (ADS)
Lelli, Federico; McGaugh, Stacy S.; Schombert, James M.
2016-12-01
We introduce SPARC (Spitzer Photometry and Accurate Rotation Curves): a sample of 175 nearby galaxies with new surface photometry at 3.6 μm and high-quality rotation curves from previous H i/Hα studies. SPARC spans a broad range of morphologies (S0 to Irr), luminosities (∼5 dex), and surface brightnesses (∼4 dex). We derive [3.6] surface photometry and study structural relations of stellar and gas disks. We find that both the stellar mass–H i mass relation and the stellar radius–H i radius relation have significant intrinsic scatter, while the H i mass–radius relation is extremely tight. We build detailed mass models and quantify the ratio of baryonic to observed velocity (V bar/V obs) for different characteristic radii and values of the stellar mass-to-light ratio (ϒ⋆) at [3.6]. Assuming ϒ⋆ ≃ 0.5 M ⊙/L ⊙ (as suggested by stellar population models), we find that (i) the gas fraction linearly correlates with total luminosity; (ii) the transition from star-dominated to gas-dominated galaxies roughly corresponds to the transition from spiral galaxies to dwarf irregulars, in line with density wave theory; and (iii) V bar/V obs varies with luminosity and surface brightness: high-mass, high-surface-brightness galaxies are nearly maximal, while low-mass, low-surface-brightness galaxies are submaximal. These basic properties are lost for low values of ϒ⋆ ≃ 0.2 M ⊙/L ⊙ as suggested by the DiskMass survey. The mean maximum-disk limit in bright galaxies is ϒ⋆ ≃ 0.7 M ⊙/L ⊙ at [3.6]. The SPARC data are publicly available and represent an ideal test bed for models of galaxy formation.
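The ratio V bar/V obs discussed above follows from combining the component rotation curves. A minimal sketch using the common V|V| sign convention for mass models (the mass-to-light ratios and all numbers below are illustrative assumptions, not SPARC fits):

```python
import numpy as np

def v_baryonic(v_gas, v_disk, v_bulge, ml_disk=0.5, ml_bulge=0.7):
    """Baryonic rotation velocity from component curves. Each component
    enters as V*|V| so that radii where a component's contribution is
    negative (e.g. inside a central gas depression) subtract from the
    total. ml_disk and ml_bulge are stellar mass-to-light ratios used
    to scale the unit-M/L stellar curves (illustrative values)."""
    v2 = (v_gas * np.abs(v_gas)
          + ml_disk * v_disk * np.abs(v_disk)
          + ml_bulge * v_bulge * np.abs(v_bulge))
    return np.sign(v2) * np.sqrt(np.abs(v2))

# Ratio of baryonic to observed velocity at one radius (made-up numbers):
v_obs = 120.0
vb = v_baryonic(np.array([30.0]), np.array([140.0]), np.array([0.0]))
ratio = vb[0] / v_obs
```

A ratio near 1 at the characteristic radius is what "nearly maximal" means for the high-surface-brightness galaxies above; submaximal disks have ratios well below 1.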
Modeling of Non-Gravitational Forces for Precise and Accurate Orbit Determination
NASA Astrophysics Data System (ADS)
Hackel, Stefan; Gisinger, Christoph; Steigenberger, Peter; Balss, Ulrich; Montenbruck, Oliver; Eineder, Michael
2014-05-01
Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with very high accuracy. The precise reconstruction of the satellite's trajectory is based on Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency Integrated Geodetic and Occultation Receiver (IGOR) onboard the spacecraft. The increasing demand for precise radar products relies on validation methods, which require precise and accurate orbit products. An analysis of the orbit quality by means of internal and external validation methods on long and short timescales shows systematics, which reflect deficits in the employed force models. Following the proper analysis of these deficits, possible solution strategies are highlighted in the presentation. The employed Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for gravitational and non-gravitational forces. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). The satellite TerraSAR-X flies on a dusk-dawn orbit at an altitude of approximately 510 km. Due to this constellation, the Sun almost constantly illuminates the satellite, which causes strong across-track accelerations perpendicular to the solar rays. The indirect effect of the solar radiation is called Earth Radiation Pressure (ERP). This force depends on the sunlight reflected by the illuminated Earth surface (visible spectrum) and the emission of the Earth body in the infrared spectrum. Both components of ERP require Earth models to describe the optical properties of the Earth surface. Therefore, the influence of different Earth models on the orbit quality is assessed. The scope of
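For scale, the direct SRP term can be illustrated with the simple cannonball approximation; a detailed macro model, as used above, instead sums such contributions over individual illuminated surfaces with their optical properties. All numbers below are illustrative assumptions, not TerraSAR-X values:

```python
import numpy as np

def srp_acceleration(r_sat_sun, cr, area, mass, p0=4.56e-6, au=1.495978707e11):
    """Cannonball model of direct solar radiation pressure: acceleration
    directed away from the Sun, scaled by the inverse-square distance.
    r_sat_sun : vector from Sun to satellite [m]
    cr        : radiation pressure coefficient (dimensionless)
    area,mass : cross-section [m^2] and mass [kg]
    p0        : solar radiation pressure at 1 AU [N/m^2]"""
    r = np.linalg.norm(r_sat_sun)
    return cr * (area / mass) * p0 * (au / r) ** 2 * (r_sat_sun / r)

# Order of magnitude for a satellite at 1 AU (illustrative numbers).
a = srp_acceleration(np.array([1.495978707e11, 0.0, 0.0]),
                     cr=1.3, area=10.0, mass=1200.0)
```

The resulting acceleration is of order 1e-8 m/s², small but persistent, which is why unmodeled SRP systematics accumulate into visible orbit errors on the timescales discussed above.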
NASA Astrophysics Data System (ADS)
Reppert, Mike; Naibo, Virginia; Jankowiak, Ryszard
2010-07-01
Accurate lineshape functions for modeling fluorescence line narrowing (FLN) difference spectra (ΔFLN spectra) in the low-fluence limit are derived and examined in terms of the physical interpretation of various contributions, including photoproduct absorption and emission. While in agreement with the earlier results of Jaaniso [Proc. Est. Acad. Sci., Phys., Math. 34, 277 (1985)] and Fünfschilling et al. [J. Lumin. 36, 85 (1986)], the derived formulas differ substantially from functions used recently [e.g., M. Rätsep et al., Chem. Phys. Lett. 479, 140 (2009)] to model ΔFLN spectra. In contrast to traditional FLN spectra, it is demonstrated that for most physically reasonable parameters, the ΔFLN spectrum reduces simply to the single-site fluorescence lineshape function. These results imply that direct measurement of a bulk-averaged single-site fluorescence lineshape function can be accomplished with no complicated extraction process or knowledge of any additional parameters such as site distribution function shape and width. We argue that previous analysis of ΔFLN spectra obtained for many photosynthetic complexes led to strong artificial lowering of apparent electron-phonon coupling strength, especially on the high-energy side of the pigment site distribution function.
Molecular simulation and modeling of complex I.
Hummer, Gerhard; Wikström, Mårten
2016-07-01
Molecular modeling and molecular dynamics simulations play an important role in the functional characterization of complex I. With its large size and complicated function, linking quinone reduction to proton pumping across a membrane, complex I poses unique modeling challenges. Nonetheless, simulations have already helped in the identification of possible proton transfer pathways. Simulations have also shed light on the coupling between electron and proton transfer, thus pointing the way in the search for the mechanistic principles underlying the proton pump. In addition to reviewing what has already been achieved in complex I modeling, we aim here to identify pressing issues and to provide guidance for future research to harness the power of modeling in the functional characterization of complex I. This article is part of a Special Issue entitled Respiratory complex I, edited by Volker Zickermann and Ulrich Brandt.
Hierarchical Models of the Nearshore Complex System
2004-01-01
Hierarchical Models of the Nearshore Complex System: Final Report. Funding number: N00014-02-1-0358. Author: Brad Werner. The long-term goal of this research was to develop and test predictive models for nearshore processes. This grant was termination funding for the
NASA Astrophysics Data System (ADS)
Bengulescu, Marc; Blanc, Philippe; Boilley, Alexandre; Wald, Lucien
2017-02-01
This study investigates the characteristic time-scales of variability found in long-term time-series of daily means of estimates of surface solar irradiance (SSI). The study is performed at various levels to better understand the causes of variability in the SSI. First, the variability of the solar irradiance at the top of the atmosphere is scrutinized. Then, estimates of the SSI in cloud-free conditions as provided by the McClear model are dealt with, in order to reveal the influence of the clear atmosphere (aerosols, water vapour, etc.). Lastly, the influence of clouds on variability is inferred from the analysis of in-situ measurements. A description of how the atmosphere affects SSI variability is thus obtained on a time-scale basis. The analysis is also performed with estimates of the SSI provided by the satellite-derived HelioClim-3 database and by two numerical weather re-analyses: ERA-Interim and MERRA2. It is found that HelioClim-3 estimates render an accurate picture of the variability found in ground measurements, not only globally, but also with respect to individual characteristic time-scales. On the contrary, the variability found in re-analyses correlates poorly with all scales of ground measurements variability.
Dai, Daoxin; He, Sailing
2004-12-01
An accurate two-dimensional (2D) model is introduced for the simulation of an arrayed-waveguide grating (AWG) demultiplexer by integrating the field distribution along the vertical direction. The equivalent 2D model has almost the same accuracy as the original three-dimensional model and is more accurate for the AWG considered here than the conventional 2D model based on the effective-index method. To further improve the computational efficiency, the reciprocity theory is applied to the optimal design of a flat-top AWG demultiplexer with a special input structure.
Towards more accurate wind and solar power prediction by improving NWP model physics
NASA Astrophysics Data System (ADS)
Steiner, Andrea; Köhler, Carmen; von Schumann, Jonas; Ritter, Bodo
2014-05-01
nighttime to well mixed conditions during the day presents a big challenge to NWP models. Fast decrease and successive increase in hub-height wind speed after sunrise, and the formation of nocturnal low level jets will be discussed. For PV, the life cycle of low stratus clouds and fog is crucial. Capturing these processes correctly depends on the accurate simulation of diffusion or vertical momentum transport and the interaction with other atmospheric and soil processes within the numerical weather model. Results from Single Column Model simulations and 3d case studies will be presented. Emphasis is placed on wind forecasts; however, some references to highlights concerning the PV-developments will also be given. *) ORKA: Optimierung von Ensembleprognosen regenerativer Einspeisung für den Kürzestfristbereich am Anwendungsbeispiel der Netzsicherheitsrechnungen **) EWeLiNE: Erstellung innovativer Wetter- und Leistungsprognosemodelle für die Netzintegration wetterabhängiger Energieträger, www.projekt-eweline.de
Toward accurate tooth segmentation from computed tomography images using a hybrid level set model
Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing E-mail: jing.xiong@siat.ac.cn; Zhang, Jianwei
2015-01-15
Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
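The volume overlap metrics used above are straightforward to compute from binary masks; a minimal numpy sketch of VD and DSC (the surface-distance metrics ASSD/RMSSSD/MSSD would additionally require surface extraction and a distance transform, omitted here):

```python
import numpy as np

def volume_difference(seg, ref, voxel_volume=1.0):
    """Volume difference VD: (segmented - reference) voxel count,
    scaled by the physical volume of one voxel (e.g. in mm^3)."""
    return (seg.sum() - ref.sum()) * voxel_volume

def dice_coefficient(seg, ref):
    """Dice similarity coefficient DSC in percent:
    2 * |A intersect B| / (|A| + |B|) * 100."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 200.0 * inter / (seg.sum() + ref.sum())

# Toy 2D example: a reference square and a segmentation shifted one row.
ref = np.zeros((10, 10), bool); ref[2:8, 2:8] = True   # 36 voxels
seg = np.zeros((10, 10), bool); seg[3:9, 2:8] = True   # same size, shifted
dsc = dice_coefficient(seg, ref)
vd = volume_difference(seg, ref)
```

Note that a shifted mask of identical volume gives VD = 0 while DSC drops, which is why the paper reports overlap and surface-distance metrics together.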
Scaffolding in Complex Modelling Situations
ERIC Educational Resources Information Center
Stender, Peter; Kaiser, Gabriele
2015-01-01
The implementation of teacher-independent realistic modelling processes is an ambitious educational activity with many unsolved problems so far. Amongst others, there hardly exists any empirical knowledge about efficient ways of possible teacher support with students' activities, which should be mainly independent from the teacher. The research…
Watson, Charles M; Francis, Gamal R
2015-07-01
Hollow copper models painted to match the reflectance of the animal subject are standard in thermal ecology research. While the copper electroplating process results in accurate models, it is relatively time-consuming, uses caustic chemicals, and the models are often anatomically imprecise. Although the decreasing cost of 3D printing can potentially allow the reproduction of highly accurate models, the thermal performance of 3D printed models has not been evaluated. We compared the cost, accuracy, and performance of both copper and 3D printed lizard models and found that the performance of the models was statistically identical in both open and closed habitats. We also find that 3D models are more uniform, lighter, more durable, and less expensive than the copper electroformed models.
TRIM—3D: a three-dimensional model for accurate simulation of shallow water flow
Casulli, Vincenzo; Bertolazzi, Enrico; Cheng, Ralph T.
1993-01-01
A semi-implicit finite difference formulation for the numerical solution of three-dimensional tidal circulation is discussed. The governing equations are the three-dimensional Reynolds equations, in which the pressure is assumed to be hydrostatic. A minimal degree of implicitness has been introduced into the finite difference formula so that the resulting algorithm permits the use of large time steps at a minimal computational cost. This formulation includes the simulation of flooding and drying of tidal flats, and is fully vectorizable for efficient implementation on modern vector computers. The high computational efficiency of this method has made it possible to resolve fine details of the circulation structure in complex regions that previous studies were unable to obtain. For proper interpretation of the model results, suitable interactive graphics are also an essential tool.
Modeling of Protein Binary Complexes Using Structural Mass Spectrometry Data
Amisha Kamal,J.; Chance, M.
2008-01-01
In this article, we describe a general approach to modeling the structure of binary protein complexes using structural mass spectrometry data combined with molecular docking. In the first step, hydroxyl radical mediated oxidative protein footprinting is used to identify residues that experience conformational reorganization due to binding or participate in the binding interface. In the second step, a three-dimensional atomic structure of the complex is derived by computational modeling. Homology modeling approaches are used to define the structures of the individual proteins if footprinting detects significant conformational reorganization as a function of complex formation. A three-dimensional model of the complex is constructed from these binary partners using the ClusPro program, which is composed of docking, energy filtering, and clustering steps. Footprinting data are used to incorporate constraints (positive and/or negative) in the docking step and also to decide the type of energy filter (electrostatics or desolvation) in the subsequent energy-filtering step. Using this approach, we examine the structure of a number of binary complexes of monomeric actin and compare the results to crystallographic data. Based on docking alone, a number of competing models with widely varying structures are observed, only one of which is likely to agree with crystallographic data. When the docking steps are guided by footprinting data, accurate models emerge as the top-scoring ones. We demonstrate this method with the actin/gelsolin segment-1 complex. We also provide a structural model for the actin/cofilin complex, which has no crystal or NMR structure, using this approach.
Complex Parameter Landscape for a Complex Neuron Model
Achard, Pablo; De Schutter, Erik
2006-01-01
The electrical activity of a neuron is strongly dependent on the ionic channels present in its membrane. Modifying the maximal conductances from these channels can have a dramatic impact on neuron behavior. But the effect of such modifications can also be cancelled out by compensatory mechanisms among different channels. We used an evolution strategy with a fitness function based on phase-plane analysis to obtain 20 very different computational models of the cerebellar Purkinje cell. All these models produced very similar outputs to current injections, including tiny details of the complex firing pattern. These models were not completely isolated in the parameter space, but neither did they belong to a large continuum of good models that would exist if weak compensations between channels were sufficient. The parameter landscape of good models can best be described as a set of loosely connected hyperplanes. Our method is efficient in finding good models in this complex landscape. Unraveling the landscape is an important step towards the understanding of functional homeostasis of neurons. PMID:16848639
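A minimal sketch of the kind of evolution strategy mentioned above. The real study uses a phase-plane-based fitness on a full Purkinje-cell model; here a toy quadratic mismatch to hidden "conductances" stands in for that fitness, and all parameter values are illustrative assumptions:

```python
import random

def evolve(fitness, x0, sigma=0.5, mu=5, lam=20, gens=60, seed=1):
    """Minimal (mu + lambda) evolution strategy: each generation keeps
    the mu best parameter vectors and spawns lam Gaussian-mutated
    offspring from them. `fitness` is minimized."""
    rng = random.Random(seed)
    pop = [list(x0)]
    for _ in range(gens):
        offspring = []
        for _ in range(lam):
            parent = rng.choice(pop)
            offspring.append([p + rng.gauss(0.0, sigma) for p in parent])
        pop = sorted(pop + offspring, key=fitness)[:mu]
        sigma *= 0.95  # simple annealing of the mutation step
    return pop[0]

# Toy stand-in for a phase-plane fitness: recover hidden
# "conductances" target = (1.0, 4.0, 0.2) from a quadratic mismatch.
target = (1.0, 4.0, 0.2)
fit = lambda g: sum((a - b) ** 2 for a, b in zip(g, target))
best = evolve(fit, x0=(0.0, 0.0, 0.0))
```

Elitist selection guarantees the best fitness is monotonically non-increasing across generations, which is why even this crude sketch converges reliably.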
Role models for complex networks
NASA Astrophysics Data System (ADS)
Reichardt, J.; White, D. R.
2007-11-01
We present a framework for automatically decomposing (“block-modeling”) the functional classes of agents within a complex network. These classes are represented by the nodes of an image graph (“block model”) depicting the main patterns of connectivity and thus functional roles in the network. Using a first principles approach, we derive a measure for the fit of a network to any given image graph allowing objective hypothesis testing. From the properties of an optimal fit, we derive how to find the best fitting image graph directly from the network and present a criterion to avoid overfitting. The method can handle both two-mode and one-mode data, directed and undirected as well as weighted networks and allows for different types of links to be dealt with simultaneously. It is non-parametric and computationally efficient. The concepts of structural equivalence and modularity are found as special cases of our approach. We apply our method to the world trade network and analyze the roles individual countries play in the global economy.
Agent-based modeling of complex infrastructures
North, M. J.
2001-06-01
Complex Adaptive Systems (CAS) can be applied to investigate complex infrastructures and infrastructure interdependencies. The CAS model agents within the Spot Market Agent Research Tool (SMART) and Flexible Agent Simulation Toolkit (FAST) allow investigation of the electric power infrastructure, the natural gas infrastructure and their interdependencies.
Modeling the complex bromate-iodine reaction.
Machado, Priscilla B; Faria, Roberto B
2009-05-07
In this article, it is shown that the FLEK model (ref 5) is able to reproduce the experimental results of the bromate-iodine clock reaction. Five different complex chemical systems are now adequately accounted for by the FLEK model: the bromate-iodide clock and oscillating reactions, the bromite-iodide clock and oscillating reactions, and the bromate-iodine clock reaction.
Numerical models of complex diapirs
NASA Astrophysics Data System (ADS)
Podladchikov, Yu.; Talbot, C.; Poliakov, A. N. B.
1993-12-01
Numerically modelled diapirs that rise into overburdens with viscous rheology produce a large variety of shapes. This work uses the finite-element method to study the development of diapirs that rise towards a surface on which diapir-induced topography creeps flat or disperses ("erodes") at different rates. Slow erosion leads to diapirs with "mushroom" shapes, a moderate erosion rate to "wine glass" diapirs, and fast erosion to "beer glass"- and "column"-shaped diapirs. The introduction of a low-viscosity layer at the top of the overburden causes diapirs to develop into structures resembling a "Napoleon hat"; these then spread as lateral sheets.
Slip complexity in earthquake fault models.
Rice, J R; Ben-Zion, Y
1996-04-30
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
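The two key ingredients above, the linear slip-weakening friction law and the nucleation size h*, can be sketched as follows. The numerical values and the O(1) prefactor are illustrative assumptions, not values from the paper:

```python
def strength(delta, tau_s=80e6, tau_d=70e6, d_c=0.01):
    """Linear slip-weakening friction: strength drops from the static
    level tau_s to the dynamic level tau_d over a characteristic slip
    d_c (SI units; the values here are illustrative only)."""
    if delta >= d_c:
        return tau_d
    return tau_s - (tau_s - tau_d) * delta / d_c

def nucleation_size(mu=30e9, d_c=0.01, tau_s=80e6, tau_d=70e6):
    """Order-of-magnitude nucleation (coherent slip patch) size,
    h* ~ mu * d_c / (tau_s - tau_d); the exact O(1) prefactor depends
    on geometry and the precise weakening law and is omitted here."""
    return mu * d_c / (tau_s - tau_d)
```

With these illustrative values h* is about 30 m; a numerical cell size well below this resolves the weakening process, while cells larger than h* put a model in the "inherently discrete" class discussed above.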
Preferential urn model and nongrowing complex networks.
Ohkubo, Jun; Yasuda, Muneki; Tanaka, Kazuyuki
2005-12-01
A preferential urn model, which is based on the concept "the rich get richer," is proposed. From a relationship between a nongrowing model for complex networks and the preferential urn model in regard to degree distributions, it is revealed that a fitness parameter in the nongrowing model is interpreted as an inverse local temperature in the preferential urn model. Furthermore, it is clarified that the preferential urn model with randomness generates a fat-tailed occupation distribution; the concept of the local temperature enables us to understand the fat-tailed occupation distribution intuitively. Since the preferential urn model is a simple stochastic model, it can be applied to research on not only the nongrowing complex networks, but also many other fields such as econophysics and social sciences.
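A minimal simulation sketch of a "rich get richer" urn. The exponent `beta` below is a crude stand-in for the paper's fitness/inverse-temperature parameter, not its exact formulation; `beta = 0` recovers uniform filling for comparison:

```python
import random

def preferential_urn(n_urns=50, n_balls=2000, beta=1.0, seed=7):
    """Toy preferential urn: each new ball lands in urn i with
    probability proportional to (current occupation)**beta, so
    beta > 0 gives 'the rich get richer' behaviour. Every urn is
    seeded with one ball so all weights start positive."""
    rng = random.Random(seed)
    counts = [1] * n_urns
    for _ in range(n_balls):
        weights = [c ** beta for c in counts]
        i = rng.choices(range(n_urns), weights=weights)[0]
        counts[i] += 1
    return counts
```

Comparing `beta = 1` against `beta = 0` shows the fat tail qualitatively: the preferential run concentrates far more balls in its largest urn than the uniform run does.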
Complex system modelling for veterinary epidemiology.
Lanzas, Cristina; Chen, Shi
2015-02-01
The use of mathematical models has a long tradition in infectious disease epidemiology. The nonlinear dynamics and complexity of pathogen transmission pose challenges in understanding its key determinants, in identifying critical points, and designing effective mitigation strategies. Mathematical modelling provides tools to explicitly represent the variability, interconnectedness, and complexity of systems, and has contributed to numerous insights and theoretical advances in disease transmission, as well as to changes in public policy, health practice, and management. In recent years, our modelling toolbox has considerably expanded due to the advancements in computing power and the need to model novel data generated by technologies such as proximity loggers and global positioning systems. In this review, we discuss the principles, advantages, and challenges associated with the most recent modelling approaches used in systems science, the interdisciplinary study of complex systems, including agent-based, network and compartmental modelling. Agent-based modelling is a powerful simulation technique that considers the individual behaviours of system components by defining a set of rules that govern how individuals ("agents") within given populations interact with one another and the environment. Agent-based models have become a recent popular choice in epidemiology to model hierarchical systems and address complex spatio-temporal dynamics because of their ability to integrate multiple scales and datasets.
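The rule-based character of agent-based models described above can be sketched with a toy SIR simulation. This is illustrative only: the agent count, probabilities, and random-mixing contact rule are assumptions, not any published veterinary model:

```python
import random

def run_sir(n_agents=200, n_steps=100, p_infect=0.05, p_recover=0.1,
            contacts=4, seed=42):
    """Minimal agent-based SIR sketch: each step, every infectious
    agent meets a few random others and may infect susceptibles;
    infectious agents then recover with a fixed probability.
    Returns the (S, I, R) counts after each step."""
    rng = random.Random(seed)
    state = ["S"] * n_agents
    state[0] = "I"  # a single index case
    history = []
    for _ in range(n_steps):
        infected = [i for i, s in enumerate(state) if s == "I"]
        for i in infected:
            for _ in range(contacts):
                j = rng.randrange(n_agents)
                if state[j] == "S" and rng.random() < p_infect:
                    state[j] = "I"
            if rng.random() < p_recover:
                state[i] = "R"
        history.append((state.count("S"), state.count("I"),
                        state.count("R")))
    return history
```

Replacing the random-mixing rule with an explicit contact network is the natural next step toward the network models mentioned in the abstract.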
A new method based on the subpixel Gaussian model for accurate estimation of asteroid coordinates
NASA Astrophysics Data System (ADS)
Savanevych, V. E.; Briukhovetskyi, O. B.; Sokovikova, N. S.; Bezkrovny, M. M.; Vavilova, I. B.; Ivashchenko, Yu. M.; Elenin, L. V.; Khlamov, S. V.; Movsesian, Ia. S.; Dashkova, A. M.; Pogorelov, A. V.
2015-08-01
We describe a new iterative method to estimate asteroid coordinates, based on a subpixel Gaussian model of the discrete object image. The method operates on continuous parameters (the asteroid coordinates) in a discrete observational space (the set of pixel potentials) of the CCD frame. In this model, the form of the coordinate distribution of the photons hitting a pixel of the CCD frame is known a priori, while the associated parameters are determined from the real digital object image. The method, which adapts flexibly to any form of object image, achieves high measurement accuracy along with low computational complexity, because a maximum-likelihood procedure is implemented to obtain the best fit instead of a least-squares method with the Levenberg-Marquardt algorithm for minimization of the quadratic form. Since 2010, the method has been tested as the basis of our Collection Light Technology (COLITEC) software, which has been installed at several observatories across the world with the aim of automatically discovering asteroids and comets in sets of CCD frames. As a result, four comets (C/2010 X1 (Elenin), P/2011 NO1 (Elenin), C/2012 S1 (ISON) and P/2013 V3 (Nevski)) as well as more than 1500 small Solar system bodies (including five near-Earth objects (NEOs), 21 Trojan asteroids of Jupiter and one Centaur object) have been discovered. We discuss these results, which allowed us to compare the accuracy parameters of the new method and confirm its efficiency. In 2014, the COLITEC software was recommended to all members of the Gaia-FUN-SSO network as a tool for analysing observations and detecting faint moving objects in frames.
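For illustration, subpixel localization on a sampled Gaussian image can be sketched as follows. This uses a simple log-parabola fit to the image marginals rather than the paper's full maximum-likelihood procedure, but it shows how a continuous coordinate is recovered from discrete pixel data:

```python
import numpy as np

def gaussian_image(x0, y0, sigma=1.5, size=16, amp=1000.0):
    """Noise-free Gaussian 'star' sampled at pixel centres (a toy
    stand-in for a CCD object image)."""
    y, x = np.mgrid[0:size, 0:size]
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2)
                        / (2 * sigma ** 2))

def subpixel_x(img):
    """Subpixel x-coordinate from a parabola fit to the log of the
    column sums: for a sampled Gaussian the log-marginal is an
    (essentially exact) quadratic, so -b / (2a) recovers the centre."""
    marginal = img.sum(axis=0)
    xs = np.arange(img.shape[1])
    a, b, _ = np.polyfit(xs, np.log(marginal), 2)
    return -b / (2 * a)
```

Passing the transposed image recovers the y-coordinate the same way. On noisy images the simple log fit degrades, which is where a proper likelihood-based fit, as in the paper, pays off.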
Accurate Modeling of Stability and Control Properties for Fighter Aircraft from CFD
2012-03-01
[Abstract garbled in extraction; the recoverable fragments cover wind tunnel testing (models must be accurately placed and calibrated, with results properly filtered and scaled), flight testing, analytical analysis including linear aerodynamic techniques, and Computational Fluid Dynamics (CFD).]
From Complex to Simple: Interdisciplinary Stochastic Models
ERIC Educational Resources Information Center
Mazilu, D. A.; Zamora, G.; Mazilu, I.
2012-01-01
We present two simple, one-dimensional, stochastic models that lead to a qualitative understanding of very complex systems from biology, nanoscience and social sciences. The first model explains the complicated dynamics of microtubules, stochastic cellular highways. Using the theory of random walks in one dimension, we find analytical expressions…
Modeling the chemistry of complex petroleum mixtures.
Quann, R J
1998-12-01
Determining the complete molecular composition of petroleum and its refined products is not feasible with current analytical techniques because of the astronomical number of molecular components. Modeling the composition and behavior of such complex mixtures in refinery processes has accordingly evolved along a simplifying concept called lumping. Lumping reduces the complexity of the problem to a manageable form by grouping the entire set of molecular components into a handful of lumps. This traditional approach does not have a molecular basis and therefore excludes important aspects of process chemistry and molecular property fundamentals from the model's formulation. A new approach called structure-oriented lumping has been developed to model the composition and chemistry of complex mixtures at a molecular level. The central concept is to represent an individual molecule or a set of closely related isomers as a mathematical construct of certain specific and repeating structural groups. A complex mixture such as petroleum can then be represented as thousands of distinct molecular components, each having a mathematical identity. This enables the automated construction of large complex reaction networks with tens of thousands of specific reactions for simulating the chemistry of complex mixtures. Further, the method provides a convenient framework for incorporating molecular physical property correlations, existing group contribution methods, molecular thermodynamic properties, and the structure-activity relationships of chemical kinetics in the development of models.
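The structure-oriented lumping idea of representing a molecule as a vector of structural-group counts, and a reaction as an increment to that vector, can be sketched as follows (the group names and the "dealkylation" reaction are illustrative, not the method's actual group set):

```python
from collections import Counter

# A molecule (or set of close isomers) is a vector of
# structural-group counts; group names here are illustrative.
toluene = Counter({"A6": 1, "CH3": 1})   # aromatic ring + one methyl
xylene = Counter({"A6": 1, "CH3": 2})    # aromatic ring + two methyls

def apply_reaction(molecule, increment):
    """A reaction is an increment vector added to the group-count
    vector; a negative resulting count means the reaction does not
    apply to this molecule, so None is returned."""
    result = molecule + Counter()        # copy
    for group, delta in increment.items():
        result[group] += delta
    if any(v < 0 for v in result.values()):
        return None
    return +result                       # unary '+' drops zero entries

# Illustrative dealkylation: strip one methyl group from the molecule.
dealkylation = {"CH3": -1}
```

Enumerating all (molecule, reaction) pairs whose increments stay non-negative is what allows the automated construction of the large reaction networks described above.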
Updating the debate on model complexity
Simmons, Craig T.; Hunt, Randall J.
2012-01-01
As scientists who are trying to understand a complex natural world that cannot be fully characterized in the field, how can we best inform the society in which we live? This founding context was addressed in a special session, “Complexity in Modeling: How Much is Too Much?” convened at the 2011 Geological Society of America Annual Meeting. The session had a variety of thought-provoking presentations—ranging from philosophy to cost-benefit analyses—and provided some areas of broad agreement that were not evident in discussions of the topic in 1998 (Hunt and Zheng, 1999). The session began with a short introduction during which model complexity was framed borrowing from an economic concept, the Law of Diminishing Returns, and an example of enjoyment derived by eating ice cream. Initially, there is increasing satisfaction gained from eating more ice cream, to a point where the gain in satisfaction starts to decrease, ending at a point when the eater sees no value in eating more ice cream. A traditional view of model complexity is similar—understanding gained from modeling can actually decrease if models become unnecessarily complex. However, oversimplified models—those that omit important aspects of the problem needed to make a good prediction—can also limit and confound our understanding. Thus, the goal of all modeling is to find the “sweet spot” of model sophistication—regardless of whether complexity was added sequentially to an overly simple model or collapsed from an initial highly parameterized framework that uses mathematics and statistics to attain an optimum (e.g., Hunt et al., 2007). Thus, holistic parsimony is attained, incorporating “as simple as possible,” as well as the equally important corollary “but no simpler.”
Multifaceted Modelling of Complex Business Enterprises.
Chakraborty, Subrata; Mengersen, Kerrie; Fidge, Colin; Ma, Lin; Lassen, David
2015-01-01
We formalise and present a new generic multifaceted complex-system approach for modelling complex business enterprises. Our method has a strong focus on integrating the various data types available in an enterprise, which represent the diverse perspectives of various stakeholders. We explain the challenges faced and define a novel approach for converting diverse data types into usable Bayesian probability forms. The data types that can be integrated include historic data, survey data, management planning data, expert knowledge, and incomplete data. The structural complexities of the complex-system modelling process, based on various decision contexts, are also explained along with a solution. This new application of complex-system models as a management tool for decision making is demonstrated using a railway transport case study. The case study demonstrates how the new approach can be utilised to develop a customised decision support model for a specific enterprise. Various decision scenarios are also provided to illustrate the versatility of the decision model at different phases of enterprise operations such as planning and control.
Complex quantum network model of energy transfer in photosynthetic complexes.
Ai, Bao-Quan; Zhu, Shi-Liang
2012-12-01
The quantum network model with real variables is usually used to describe excitation energy transfer (EET) in the Fenna-Matthews-Olson (FMO) complexes. In this paper we add quantum phase factors to the hopping terms and find that they play an important role in the EET. The quantum phase factors allow us to take the spatial structure of the pigments into account. It is found that phase coherence within the complexes allows quantum interference to affect the dynamics of the EET. There exist optimal phase regions where the transfer efficiency reaches its maxima, which indicates that when the pigments are optimally spaced, the exciton can pass through the FMO complex with perfect efficiency. Moreover, the optimal phase regions are almost insensitive to the environment. In addition, we find that the phase factors aid the EET only in the case of multiple pathways. Therefore, we demonstrate that the quantum phases may bring the other two factors, the optimal spacing of the pigments and multiple pathways, together to contribute to the EET in photosynthetic complexes with perfect efficiency.
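A minimal sketch of phase-dependent transfer in a three-site network with one complex hopping phase (a caricature, not the FMO Hamiltonian; the couplings and evolution time are arbitrary assumptions). Varying the phase changes the interference between the direct 0-2 link and the 0-1-2 pathway, so some phases transfer population far more efficiently than others:

```python
import numpy as np

def site_population(phi, t=1.0):
    """Population on target site 2 after evolving |psi(0)> = |0>
    under a 3-site tight-binding triangle whose 0-2 hopping carries
    a complex phase phi (Hermiticity requires the conjugate phase
    on the 2-0 entry)."""
    H = np.array([[0.0, 1.0, np.exp(1j * phi)],
                  [1.0, 0.0, 1.0],
                  [np.exp(-1j * phi), 1.0, 0.0]], dtype=complex)
    evals, evecs = np.linalg.eigh(H)
    # Propagator via eigendecomposition: U = V exp(-i E t) V^dagger.
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    psi = U @ np.array([1.0, 0.0, 0.0], dtype=complex)
    return abs(psi[2]) ** 2
```

For phi = 0 the population at t = 1 works out analytically to (2 - 2 cos 3)/9 ≈ 0.44, while phi = π/2 transfers over 0.9 of the population: a toy version of the optimal-phase regions described above.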
NASA Astrophysics Data System (ADS)
Moghadas, D.; André, F.; Vereecken, H.; Lambot, S.
2009-04-01
Water is a vital resource for human needs, agriculture, sanitation and industrial supply. Knowledge of soil water dynamics and solute transport is essential in agricultural and environmental engineering, as it controls plant growth, hydrological processes, and the contamination of surface and subsurface water. Increased irrigation efficiency also plays an important role in water conservation, reducing drainage and mitigating some water pollution and soil salinity. Geophysical methods are effective techniques for monitoring the vadose zone. In particular, electromagnetic induction (EMI) can provide, in a non-invasive way, important information about the soil electrical properties at the field scale, which correlate with important variables such as soil water content, salinity, and texture. EMI is based on the radiation of a VLF electromagnetic wave into the soil. Depending on the soil's electrical conductivity, Foucault (eddy) currents are generated and produce a secondary EM field, which is then recorded by the EMI system. Advanced techniques for EMI data interpretation resort to inverse modeling. Yet a major gap in current knowledge is the limited accuracy of the forward model used for describing the EMI-subsurface system, which usually relies on strongly simplifying assumptions. We present a new low-frequency EMI method based on Vector Network Analyzer (VNA) technology and advanced forward modeling, using a linear system of complex transfer functions to describe the EMI loop antenna and a three-dimensional solution of Maxwell's equations for wave propagation in multilayered media. The VNA permits simple calibration of the EMI system against international standards. We derived a Green's function for the zero-offset, off-ground horizontal loop antenna and also proposed an optimal integration path for faster evaluation of the spatial-domain Green's function from its spectral counterpart. This new integration path shows fewer oscillations than the real path and makes it possible to avoid the
Augustin, Christoph M; Neic, Aurel; Liebmann, Manfred; Prassl, Anton J; Niederer, Steven A; Haase, Gundolf; Plank, Gernot
2016-01-15
Electromechanical (EM) models of the heart have been used successfully to study fundamental mechanisms underlying a heart beat in health and disease. However, in all modeling studies reported so far numerous simplifications were made in terms of representing biophysical details of cellular function and its heterogeneity, gross anatomy and tissue microstructure, as well as the bidirectional coupling between electrophysiology (EP) and tissue distension. One limiting factor is the employed spatial discretization methods which are not sufficiently flexible to accommodate complex geometries or resolve heterogeneities, but, even more importantly, the limited efficiency of the prevailing solver techniques which are not sufficiently scalable to deal with the incurring increase in degrees of freedom (DOF) when modeling cardiac electromechanics at high spatio-temporal resolution. This study reports on the development of a novel methodology for solving the nonlinear equation of finite elasticity using human whole organ models of cardiac electromechanics, discretized at a high para-cellular resolution. Three patient-specific, anatomically accurate, whole heart EM models were reconstructed from magnetic resonance (MR) scans at resolutions of 220 μm, 440 μm and 880 μm, yielding meshes of approximately 184.6, 24.4 and 3.7 million tetrahedral elements and 95.9, 13.2 and 2.1 million displacement DOF, respectively. The same mesh was used for discretizing the governing equations of both electrophysiology (EP) and nonlinear elasticity. A novel algebraic multigrid (AMG) preconditioner for an iterative Krylov solver was developed to deal with the resulting computational load. The AMG preconditioner was designed under the primary objective of achieving favorable strong scaling characteristics for both setup and solution runtimes, as this is key for exploiting current high performance computing hardware. Benchmark results using the 220 μm, 440 μm and 880 μm meshes demonstrate
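The preconditioned Krylov structure underlying the solver described above can be sketched with conjugate gradients. A simple Jacobi (diagonal) preconditioner stands in for the paper's AMG preconditioner, and a 1-D Laplacian is a toy stand-in for the finite-element stiffness matrices; only the preconditioner changes between the sketch and the real thing, not the Krylov iteration itself:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for a symmetric positive
    definite A, with a diagonal preconditioner applied as an
    elementwise multiply (Jacobi: M^{-1} = 1 / diag(A))."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test problem: a 1-D Laplacian (toy stand-in for the
# stiffness matrices arising from a finite-element discretization).
n = 50
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
```

At the problem sizes in the paper (tens of millions of DOF), the preconditioner quality dominates run time, which is why an AMG preconditioner with good strong-scaling behaviour is the central contribution rather than the Krylov loop.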
Woon, D.E.; Dunning, T.H. Jr.; Peterson, K.A.
1996-04-01
Augmented correlation consistent basis sets of double (aug-cc-pVDZ), triple (aug-cc-pVTZ), and modified quadruple zeta (aug-cc-pVQZ′) quality have been employed to describe the N₂-HF potential energy surface at the Hartree-Fock level and with single-reference correlated wave functions, including Møller-Plesset perturbation theory (MP2, MP3, MP4) and coupled cluster methods [CCSD, CCSD(T)]. The most accurate computed equilibrium binding energies D_e are (with counterpoise correction) 810 cm⁻¹ (MP4/aug-cc-pVQZ′) and 788 cm⁻¹ [CCSD(T)/aug-cc-pVQZ′]. Estimated complete basis set limits of 814 cm⁻¹ (MP4) and 793 cm⁻¹ [CCSD(T)] indicate that the large basis set results are essentially converged. Harmonic frequencies and zero-point energies were determined through the aug-cc-pVTZ level. Combining the zero-point energies computed at the aug-cc-pVTZ level with the equilibrium binding energies computed at the aug-cc-pVQZ′ level, we predict D_0 values of 322 and 296 cm⁻¹, respectively, at the MP4 and CCSD(T) levels of theory. Using experimental anharmonic frequencies, on the other hand, the CCSD(T) value of D_0 is increased to 415 cm⁻¹, in good agreement with the experimental value recently reported by Miller and co-workers, 398 ± 2 cm⁻¹. © 1996 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Ahnen, Sandra; Hehn, Anna-Sophia; Vogiatzis, Konstantinos D.; Trachsel, Maria A.; Leutwyler, Samuel; Klopper, Wim
2014-09-01
Using explicitly-correlated coupled-cluster theory with single and double excitations, the intermolecular distances and interaction energies of the T-shaped imidazole⋯benzene and pyrrole⋯benzene complexes have been computed in a large augmented correlation-consistent quadruple-zeta basis set, adding also corrections for connected triple excitations and remaining basis-set-superposition errors. The results of these computations are used to assess other methods such as Møller-Plesset perturbation theory (MP2), spin-component-scaled MP2 theory, dispersion-weighted MP2 theory, interference-corrected explicitly-correlated MP2 theory, dispersion-corrected double-hybrid density-functional theory (DFT), DFT-based symmetry-adapted perturbation theory, the random-phase approximation, explicitly-correlated ring-coupled-cluster-doubles theory, and double-hybrid DFT with a correlation energy computed in the random-phase approximation.
Accurate ampacity determination: Temperature-Sag Model for operational real time ratings
Seppa, T.O.
1995-07-01
This report presents a method for determining transmission line ratings based on the relationship between the conductor's temperature and its sag. The method is based on the Ruling Span principle and the use of transmission line tension monitoring systems. The report also presents a method of accurately calibrating the final sag of the conductor and determining the actual Ruling Span length of the line sections between deadend structures. Main error sources for two other real-time methods are also examined.
Slip complexity in earthquake fault models.
Rice, J R; Ben-Zion, Y
1996-01-01
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size. PMID:11607669
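Gutenberg-Richter frequency-size statistics of the kind discussed above are usually summarized by the b-value of the power law. As a small side sketch (not code from this study), the standard Aki maximum-likelihood estimator can be applied to a synthetic catalog:

```python
import math
import random

def aki_b_value(magnitudes, m_min, dm=0.0):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value from magnitudes at or above the completeness threshold m_min.
    dm is the magnitude binning width (0 for continuous magnitudes)."""
    mags = [m for m in magnitudes if m >= m_min]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (m_min - dm / 2))

# Synthetic catalog drawn from an exponential magnitude distribution;
# b = 1 corresponds to rate parameter beta = b * ln(10).
random.seed(0)
beta = 1.0 * math.log(10)
catalog = [2.0 + random.expovariate(beta) for _ in range(20000)]
b_hat = aki_b_value(catalog, m_min=2.0)
```

With 20,000 synthetic events the estimate recovers the input b = 1 to within a few percent; real catalogs additionally require care with completeness and binning.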
Minimum-complexity helicopter simulation math model
NASA Technical Reports Server (NTRS)
Heffley, Robert K.; Mnich, Marc A.
1988-01-01
An example of a minimal complexity simulation helicopter math model is presented. Motivating factors are the computational delays, cost, and inflexibility of the very sophisticated math models now in common use. A helicopter model form is given which addresses each of these factors and provides better engineering understanding of the specific handling qualities features which are apparent to the simulator pilot. The technical approach begins with specification of features which are to be modeled, followed by a build up of individual vehicle components and definition of equations. Model matching and estimation procedures are given which enable the modeling of specific helicopters from basic data sources such as flight manuals. Checkout procedures are given which provide for total model validation. A number of possible model extensions and refinements are discussed. Math model computer programs are defined and listed.
Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer
2017-04-01
Recently, Artificial Intelligence (AI) has been used widely in the medicine and health care sector. In machine learning, classification or prediction is a major field of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions for the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between statistical approaches and machine learning is clarified. A review of related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.
Trends in modeling Biomedical Complex Systems
Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Pietro
2009-01-01
In this paper we provide an introduction to techniques for modeling multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, and informatics tools and technologies suitable for biological and biomedical research, which is becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts of mathematical modeling methodologies and statistical inference, and the bioinformatics and standards tools used to investigate complex biomedical systems, are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068
Resnic, F S; Ohno-Machado, L; Selwyn, A; Simon, D I; Popma, J J
2001-07-01
The objectives of this analysis were to develop and validate simplified risk score models for predicting the risk of major in-hospital complications after percutaneous coronary intervention (PCI) in the era of widespread stenting and use of glycoprotein IIb/IIIa antagonists. We then sought to compare the performance of these simplified models with those of full logistic regression and neural network models. From January 1, 1997 to December 31, 1999, data were collected on 4,264 consecutive interventional procedures at a single center. Risk score models were derived from multiple logistic regression models using the first 2,804 cases and then validated on the final 1,460 cases. The area under the receiver operating characteristic (ROC) curve for the risk score model that predicted death was 0.86 compared with 0.85 for the multiple logistic model and 0.83 for the neural network model (validation set). For the combined end points of death, myocardial infarction, or bypass surgery, the corresponding areas under the ROC curves were 0.74, 0.78, and 0.81, respectively. Previously identified risk factors were confirmed in this analysis. The use of stents was associated with a decreased risk of in-hospital complications. Thus, risk score models can accurately predict the risk of major in-hospital complications after PCI. Their discriminatory power is comparable to those of logistic models and neural network models. Accurate bedside risk stratification may be achieved with these simple models.
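The discriminatory power reported above is the area under the ROC curve, which has a simple rank-sum (Mann-Whitney) interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. A minimal sketch with toy data (not the study's models or patients):

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank-sum identity:
    AUC = P(score of a random positive > score of a random negative),
    counting ties as 1/2. labels are 0/1 outcomes."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: a risk score that separates outcomes fairly well.
labels = [0, 0, 0, 1, 0, 1, 1]
scores = [0.1, 0.3, 0.35, 0.4, 0.5, 0.8, 0.9]
auc = roc_auc(labels, scores)
```

The same identity underlies comparisons like the 0.86 vs 0.85 vs 0.83 figures in the abstract: a model with AUC 0.5 is no better than chance, and 1.0 is perfect discrimination.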
Dunn, Nicholas J. H.; Noid, W. G.
2015-12-28
The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.
Research on the optimal selection method of image complexity assessment model index parameter
NASA Astrophysics Data System (ADS)
Zhu, Yong; Duan, Jin; Qian, Xiaofei; Xiao, Bo
2015-10-01
Target recognition is widely used in the national economy, space technology, national defense and other fields. The difficulty of target recognition differs greatly from that of target extraction. Image complexity evaluates how difficult it is to extract the target from the background, and can serve as a prior index of a target recognition algorithm's effectiveness. This paper, from the perspective of measuring target and background characteristics, describes image complexity metric parameters through quantitative, accurate mathematical relationships. To address collinearity among the measurement parameters, the image complexity metric parameters are clustered with the gray correlation method. This enables extraction and selection of the metric parameters, improves the reliability and validity of the image complexity description and representation, and optimizes the image complexity assessment model. Experimental results demonstrate that when gray system theory is applied to image complexity analysis, target-characteristic image complexity can be measured more accurately and effectively.
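Gray correlation (grey relational) analysis of the kind mentioned above scores how closely each candidate metric sequence tracks a reference sequence. A minimal sketch follows, using the conventional distinguishing coefficient ρ = 0.5 and hypothetical, pre-normalized metric values (the paper's actual metrics and data are not reproduced here):

```python
def grey_relational_grades(reference, candidates, rho=0.5):
    """Grey relational analysis: grade how closely each candidate
    sequence tracks the reference sequence. rho is the distinguishing
    coefficient, conventionally 0.5."""
    deltas = [[abs(r - c) for r, c in zip(reference, seq)] for seq in candidates]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    coeffs = [
        [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        for row in deltas
    ]
    return [sum(row) / len(row) for row in coeffs]

# Hypothetical normalized complexity metrics for five test images:
reference = [0.9, 0.7, 0.5, 0.3, 0.1]       # e.g. observed extraction difficulty
edge_density = [0.8, 0.65, 0.5, 0.35, 0.2]  # tracks the reference closely
noise_level = [0.2, 0.9, 0.1, 0.8, 0.3]     # largely unrelated metric
grades = grey_relational_grades(reference, [edge_density, noise_level])
```

Metrics with high grades against the reference (and against each other) are candidates for merging or removal, which is how clustering by grey correlation reduces collinear parameters.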
Ustinov, E. A.
2014-10-07
Commensurate–incommensurate (C-IC) transition of krypton molecular layer on graphite received much attention in recent decades in theoretical and experimental researches. However, there still exists a possibility of generalization of the phenomenon from thermodynamic viewpoint on the basis of accurate molecular simulation. Recently, a new technique was developed for analysis of two-dimensional (2D) phase transitions in systems involving a crystalline phase, which is based on accounting for the effect of temperature and the chemical potential on the lattice constant of the 2D layer using the Gibbs–Duhem equation [E. A. Ustinov, J. Chem. Phys. 140, 074706 (2014)]. The technique has allowed for determination of phase diagrams of 2D argon layers on the uniform surface and in slit pores. This paper extends the developed methodology on systems accounting for the periodic modulation of the substrate potential. The main advantage of the developed approach is that it provides highly accurate evaluation of the chemical potential of crystalline layers, which allows reliable determination of temperature and other parameters of various 2D phase transitions. Applicability of the methodology is demonstrated on the krypton–graphite system. Analysis of phase diagram of the krypton molecular layer, thermodynamic functions of coexisting phases, and a method of prediction of adsorption isotherms is considered accounting for a compression of the graphite due to the krypton–carbon interaction. The temperature and heat of C-IC transition has been reliably determined for the gas–solid and solid–solid system.
Mathematical modelling of complex contagion on clustered networks
NASA Astrophysics Data System (ADS)
O'sullivan, David J.; O'Keeffe, Gary; Fennell, Peter; Gleeson, James
2015-09-01
The spreading of behavior, such as the adoption of a new innovation, is influenced by the structure of social networks that interconnect the population. In the experiments of Centola (Science, 2010), adoption of new behavior was shown to spread further and faster across clustered-lattice networks than across corresponding random networks. This implies that the “complex contagion” effects of social reinforcement are important in such diffusion, in contrast to “simple” contagion models of disease-spread which predict that epidemics would grow more efficiently on random networks than on clustered networks. To accurately model complex contagion on clustered networks remains a challenge because the usual assumptions (e.g. of mean-field theory) regarding tree-like networks are invalidated by the presence of triangles in the network; the triangles are, however, crucial to the social reinforcement mechanism, which posits an increased probability of a person adopting behavior that has been adopted by two or more neighbors. In this paper we modify the analytical approach that was introduced by Hébert-Dufresne et al. (Phys. Rev. E, 2010), to study disease-spread on clustered networks. We show how the approximation method can be adapted to a complex contagion model, and confirm the accuracy of the method with numerical simulations. The analytical results of the model enable us to quantify the level of social reinforcement that is required to observe—as in Centola’s experiments—faster diffusion on clustered topologies than on random networks.
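The social-reinforcement mechanism described above can be illustrated with the simplest deterministic threshold model: a node adopts only once at least two neighbors have adopted, so a single seed never spreads but adjacent seed pairs can cascade through triangle-rich structure. This toy sketch is not the paper's analytical model:

```python
def complex_contagion(adj, seeds, threshold=2):
    """Deterministic threshold model: a node adopts once at least
    `threshold` of its neighbours have adopted. Returns the final
    adopter set. `adj` maps node -> list of neighbours."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, nbrs in adj.items():
            if node not in adopted and sum(n in adopted for n in nbrs) >= threshold:
                adopted.add(node)
                changed = True
    return adopted

# Triangle-rich ring: each node is linked to its two nearest neighbours
# on each side, so a pair of adjacent seeds can trigger a full cascade.
n = 10
adj = {i: [(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n] for i in range(n)}
final = complex_contagion(adj, seeds={0, 1})
single = complex_contagion(adj, seeds={0})
```

On a locally tree-like random network the same pair of seeds would rarely share a second common neighbor, which is the intuition behind clustered topologies favoring complex contagion.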
The Kuramoto model in complex networks
NASA Astrophysics Data System (ADS)
Rodrigues, Francisco A.; Peron, Thomas K. DM.; Ji, Peng; Kurths, Jürgen
2016-01-01
Synchronization of an ensemble of oscillators is an emergent phenomenon present in several complex systems, ranging from social and physical to biological and technological systems. The most successful approach to describe how coherent behavior emerges in these complex systems is given by the paradigmatic Kuramoto model. This model has been traditionally studied in complete graphs. However, besides being intrinsically dynamical, complex systems present very heterogeneous structure, which can be represented as complex networks. This report is dedicated to review main contributions in the field of synchronization in networks of Kuramoto oscillators. In particular, we provide an overview of the impact of network patterns on the local and global dynamics of coupled phase oscillators. We cover many relevant topics, which encompass a description of the most used analytical approaches and the analysis of several numerical results. Furthermore, we discuss recent developments on variations of the Kuramoto model in networks, including the presence of noise and inertia. The rich potential for applications is discussed for special fields in engineering, neuroscience, physics and Earth science. Finally, we conclude by discussing problems that remain open after the last decade of intensive research on the Kuramoto model and point out some promising directions for future research.
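The Kuramoto dynamics reviewed above, dθᵢ/dt = ωᵢ + K Σⱼ Aᵢⱼ sin(θⱼ − θᵢ), and the order parameter r that quantifies coherence can be sketched with a plain Euler integration. Network, frequencies, and coupling below are illustrative, not from the report:

```python
import math
import random

def kuramoto_order(adj, omega, K, dt=0.01, steps=2000, seed=1):
    """Euler-integrate dtheta_i/dt = omega_i + K * sum_j sin(theta_j - theta_i)
    over neighbours j in adj[i]; return the final order parameter r in [0, 1]."""
    rng = random.Random(seed)
    n = len(omega)
    theta = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    for _ in range(steps):
        dtheta = [
            omega[i] + K * sum(math.sin(theta[j] - theta[i]) for j in adj[i])
            for i in range(n)
        ]
        theta = [t + dt * d for t, d in zip(theta, dtheta)]
    re = sum(math.cos(t) for t in theta) / n
    im = sum(math.sin(t) for t in theta) / n
    return math.hypot(re, im)

# Identical oscillators on a complete graph: coupling synchronizes them,
# while K = 0 freezes the random initial phases (low r).
n = 20
adj = [[j for j in range(n) if j != i] for i in range(n)]
omega = [0.0] * n
r_coupled = kuramoto_order(adj, omega, K=0.5)
r_uncoupled = kuramoto_order(adj, omega, K=0.0)
```

Replacing the complete graph with a heterogeneous adjacency list is all that is needed to explore the network effects the report surveys (e.g. how hubs seed synchronization in scale-free topologies).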
Modelling biological complexity: a physical scientist's perspective
Coveney, Peter V; Fowler, Philip W
2005-01-01
We discuss the modern approaches of complexity and self-organization to understanding dynamical systems and how these concepts can inform current interest in systems biology. From the perspective of a physical scientist, it is especially interesting to examine how the differing weights given to philosophies of science in the physical and biological sciences impact the application of the study of complexity. We briefly describe how the dynamics of the heart and circadian rhythms, canonical examples of systems biology, are modelled by sets of nonlinear coupled differential equations, which have to be solved numerically. A major difficulty with this approach is that all the parameters within these equations are not usually known. Coupled models that include biomolecular detail could help solve this problem. Coupling models across large ranges of length- and time-scales is central to describing complex systems and therefore to biology. Such coupling may be performed in at least two different ways, which we refer to as hierarchical and hybrid multiscale modelling. While limited progress has been made in the former case, the latter is only beginning to be addressed systematically. These modelling methods are expected to bring numerous benefits to biology, for example, the properties of a system could be studied over a wider range of length- and time-scales, a key aim of systems biology. Multiscale models couple behaviour at the molecular biological level to that at the cellular level, thereby providing a route for calculating many unknown parameters as well as investigating the effects at, for example, the cellular level, of small changes at the biomolecular level, such as a genetic mutation or the presence of a drug. The modelling and simulation of biomolecular systems is itself very computationally intensive; we describe a recently developed hybrid continuum-molecular model, HybridMD, and its associated molecular insertion algorithm, which point the way towards the
NASA Astrophysics Data System (ADS)
Zhang, Xiang; Vu-Quoc, Loc
2007-07-01
We present in this paper the displacement-driven version of a tangential force-displacement (TFD) model that accounts for both elastic and plastic deformations together with interfacial friction occurring in collisions of spherical particles. This elasto-plastic frictional TFD model, with its force-driven version presented in [L. Vu-Quoc, L. Lesburg, X. Zhang. An accurate tangential force-displacement model for granular-flow simulations: contacting spheres with plastic deformation, force-driven formulation, Journal of Computational Physics 196(1) (2004) 298-326], is consistent with the elasto-plastic frictional normal force-displacement (NFD) model presented in [L. Vu-Quoc, X. Zhang. An elasto-plastic contact force-displacement model in the normal direction: displacement-driven version, Proceedings of the Royal Society of London, Series A 455 (1999) 4013-4044]. Both the NFD model and the present TFD model are based on the concept of additive decomposition of the radius of contact area into an elastic part and a plastic part. The effect of permanent indentation after impact is represented by a correction to the radius of curvature. The effect of material softening due to plastic flow is represented by a correction to the elastic moduli. The proposed TFD model is accurate, and is validated against nonlinear finite element analyses involving plastic flows in both the loading and unloading conditions. The proposed consistent displacement-driven, elasto-plastic NFD and TFD models are designed for implementation in computer codes using the discrete-element method (DEM) for granular-flow simulations.
NASA Astrophysics Data System (ADS)
Fischer, A.; Hoffmann, K.-H.
2004-03-01
In this case study a complex Otto engine simulation provides data including, but not limited to, effects from losses due to heat conduction, exhaust losses and frictional losses. This data is used as a benchmark to test whether the Novikov engine with heat leak, a simple endoreversible model, can reproduce the complex engine behavior quantitatively by an appropriate choice of model parameters. The reproduction obtained proves to be of high quality.
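The Novikov engine invoked above is the classic endoreversible model whose efficiency at maximum power is η = 1 − √(Tc/Th) (the Curzon-Ahlborn result), sitting below the Carnot bound 1 − Tc/Th. A minimal sketch with illustrative temperatures, not the paper's fitted parameters or its heat-leak extension:

```python
import math

def carnot_efficiency(t_hot, t_cold):
    """Reversible upper bound on thermal efficiency."""
    return 1.0 - t_cold / t_hot

def novikov_efficiency(t_hot, t_cold):
    """Efficiency at maximum power of the endoreversible Novikov /
    Curzon-Ahlborn engine: eta = 1 - sqrt(Tc / Th)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

# Illustrative reservoir temperatures (K):
t_hot, t_cold = 1800.0, 300.0
eta_ca = novikov_efficiency(t_hot, t_cold)
eta_c = carnot_efficiency(t_hot, t_cold)
```

The case study's point is that adding a heat leak to this one-parameter family is already enough to reproduce a detailed Otto-engine simulation quantitatively.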
2016 KIVA-hpFE Development: A Robust and Accurate Engine Modeling Software
Carrington, David Bradley; Waters, Jiajia
2016-10-25
Los Alamos National Laboratory and its collaborators are facilitating engine modeling by improving the accuracy and robustness of the modeling and of the software. We also continue to improve the physical modeling methods. We are developing and implementing new mathematical algorithms that represent the physics within an engine. We provide software that others may use directly or may alter with various models, e.g., sophisticated chemical kinetics, different turbulence closure methods, or other fuel injection and spray systems.
Assessment of higher order turbulence models for complex two- and three-dimensional flowfields
NASA Technical Reports Server (NTRS)
Menter, Florian R.
1992-01-01
A numerical method is presented to solve the three-dimensional Navier-Stokes equations in combination with a full Reynolds-stress turbulence model. Computations will be shown for three complex flowfields. The results of the Reynolds-stress model will be compared with those predicted by two different versions of the k-omega model. It will be shown that an improved version of the k-omega model gives as accurate results as the Reynolds-stress model.
ERIC Educational Resources Information Center
Gong, Yue; Beck, Joseph E.; Heffernan, Neil T.
2011-01-01
Student modeling is a fundamental concept applicable to a variety of intelligent tutoring systems (ITS). However, there is not a lot of practical guidance on how to construct and train such models. This paper compares two approaches for student modeling, Knowledge Tracing (KT) and Performance Factors Analysis (PFA), by evaluating their predictive…
Models in biology: ‘accurate descriptions of our pathetic thinking’
2014-01-01
In this essay I will sketch some ideas for how to think about models in biology. I will begin by trying to dispel the myth that quantitative modeling is somehow foreign to biology. I will then point out the distinction between forward and reverse modeling and focus thereafter on the former. Instead of going into mathematical technicalities about different varieties of models, I will focus on their logical structure, in terms of assumptions and conclusions. A model is a logical machine for deducing the latter from the former. If the model is correct, then, if you believe its assumptions, you must, as a matter of logic, also believe its conclusions. This leads to consideration of the assumptions underlying models. If these are based on fundamental physical laws, then it may be reasonable to treat the model as ‘predictive’, in the sense that it is not subject to falsification and we can rely on its conclusions. However, at the molecular level, models are more often derived from phenomenology and guesswork. In this case, the model is a test of its assumptions and must be falsifiable. I will discuss three models from this perspective, each of which yields biological insights, and this will lead to some guidelines for prospective model builders. PMID:24886484
Comparing flood loss models of different complexity
NASA Astrophysics Data System (ADS)
Schröter, Kai; Kreibich, Heidi; Vogel, Kristin; Riggelsen, Carsten; Scherbaum, Frank; Merz, Bruno
2013-04-01
Any deliberation on flood risk requires the consideration of potential flood losses. In particular, reliable flood loss models are needed to evaluate cost-effectiveness of mitigation measures, to assess vulnerability, for comparative risk analysis and financial appraisal during and after floods. In recent years, considerable improvements have been made both concerning the data basis and the methodological approaches used for the development of flood loss models. Despite this, flood loss models remain an important source of uncertainty. Likewise, the temporal and spatial transferability of flood loss models is still limited. This contribution investigates the predictive capability of different flood loss models in a split sample cross regional validation approach. For this purpose, flood loss models of different complexity, i.e. based on different numbers of explaining variables, are learned from a set of damage records that was obtained from a survey after the Elbe flood in 2002. The validation of model predictions is carried out for different flood events in the Elbe and Danube river basins in 2002, 2005 and 2006 for which damage records are available from surveys after the flood events. The models investigated are a stage-damage model, the rule based model FLEMOps+r as well as novel model approaches which are derived using data mining techniques of regression trees and Bayesian networks. The Bayesian network approach to flood loss modelling provides attractive additional information concerning the probability distribution of both model predictions and explaining variables.
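The simplest model class in the comparison above, the stage-damage model, maps inundation depth to a damage fraction via a lookup curve. A minimal sketch with a hypothetical piecewise-linear curve (the study's FLEMOps+r and tree-based models are far richer):

```python
def stage_damage(depth_m, curve):
    """Piecewise-linear stage-damage curve: interpolate the damage
    fraction for a given inundation depth. `curve` is a sorted list of
    (depth, damage_fraction) pairs; depths outside the range clamp."""
    if depth_m <= curve[0][0]:
        return curve[0][1]
    if depth_m >= curve[-1][0]:
        return curve[-1][1]
    for (d0, f0), (d1, f1) in zip(curve, curve[1:]):
        if d0 <= depth_m <= d1:
            return f0 + (f1 - f0) * (depth_m - d0) / (d1 - d0)

# Hypothetical residential-building curve (depth in metres above floor):
curve = [(0.0, 0.00), (0.5, 0.15), (1.0, 0.30), (2.0, 0.55), (4.0, 0.85)]
loss_fraction = stage_damage(1.5, curve)
```

Models with more explaining variables (building type, precaution, contamination, ...) replace the single depth lookup with a multivariate predictor, which is exactly the complexity axis the contribution varies.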
Modeling Electromagnetic Scattering From Complex Inhomogeneous Objects
NASA Technical Reports Server (NTRS)
Deshpande, Manohar; Reddy, C. J.
2011-01-01
This software innovation is designed to develop a mathematical formulation to estimate the electromagnetic scattering characteristics of complex, inhomogeneous objects using the finite-element-method (FEM) and method-of-moments (MoM) concepts, as well as to develop a FORTRAN code called FEMOM3DS (Finite Element Method and Method of Moments for 3-Dimensional Scattering), which will implement the steps that are described in the mathematical formulation. Very complex objects can be easily modeled, and the operator of the code is not required to know the details of electromagnetic theory to study electromagnetic scattering.
NASA Astrophysics Data System (ADS)
Toyokuni, Genti; Takenaka, Hiroshi
2012-06-01
We propose a method for modeling global seismic wave propagation through an attenuative Earth model including the center. This method enables accurate and efficient computations since it is based on the 2.5-D approach, which solves wave equations only on a 2-D cross section of the whole Earth and can correctly model 3-D geometrical spreading. We extend a numerical scheme for the elastic waves in spherical coordinates using the finite-difference method (FDM), to solve the viscoelastodynamic equation. For computation of realistic seismic wave propagation, incorporation of anelastic attenuation is crucial. Since the nature of Earth material is both elastic solid and viscous fluid, we should solve stress-strain relations of viscoelastic material, including attenuative structures. These relations represent the stress as a convolution integral in time, which has made it difficult to treat viscoelasticity in time-domain computations such as the FDM. However, a method using so-called memory variables, invented in the 1980s and subsequently improved in Cartesian coordinates, is now available. Arbitrary values of the quality factor (Q) can be incorporated into the wave equation via an array of Zener bodies. We also introduce the multi-domain, an FD grid of several layers with different grid spacings, into our FDM scheme. This allows wider lateral grid spacings with depth, so as not to perturb the FD stability criterion around the Earth center. In addition, we propose a technique to avoid the singularity problem of the wave equation in spherical coordinates at the Earth center. We develop a scheme to calculate wavefield variables on this point, based on linear interpolation for the velocity-stress, staggered-grid FDM. This scheme is validated through a comparison of synthetic seismograms with those obtained by the Direct Solution Method for a spherically symmetric Earth model, showing excellent accuracy for our FDM scheme. As a numerical example, we apply the method to simulate seismic
Flowgraph Models for Complex Multistate System Reliability.
Williams, B. J.; Huzurbazar, A. V.
2005-01-01
This chapter reviews flowgraph models for complex multistate systems. The focus is on modeling data from semi-Markov processes and constructing likelihoods when different portions of the system data are censored and incomplete. Semi-Markov models play an important role in the analysis of time to event data. However, in practice, data analysis for semi-Markov processes can be quite difficult and many simplifying assumptions are made. Flowgraph models are multistate models that provide a data analytic method for semi-Markov processes. Flowgraphs are useful for estimating Bayes predictive densities, predictive reliability functions, and predictive hazard functions for waiting times of interest in the presence of censored and incomplete data. This chapter reviews data analysis for flowgraph models and then presents methods for constructing likelihoods when portions of the system data are missing.
THE IMPACT OF ACCURATE EXTINCTION MEASUREMENTS FOR X-RAY SPECTRAL MODELS
Smith, Randall K.; Valencic, Lynne A.; Corrales, Lia
2016-02-20
Interstellar extinction includes both absorption and scattering of photons from interstellar gas and dust grains, and it has the effect of altering a source's spectrum and its total observed intensity. However, while multiple absorption models exist, there are no useful scattering models in standard X-ray spectrum fitting tools, such as XSPEC. Nonetheless, X-ray halos, created by scattering from dust grains, are detected around even moderately absorbed sources, and the impact on an observed source spectrum can be significant, if modest, compared to direct absorption. By convolving the scattering cross section with dust models, we have created a spectral model as a function of energy, type of dust, and extraction region that can be used with models of direct absorption. This will ensure that the extinction model is consistent and enable direct connections to be made between a source's X-ray spectral fits and its UV/optical extinction.
Gómez-Hernández, J Jaime
2006-01-01
It is difficult to define complexity in modeling. Complexity is often associated with uncertainty, since modeling uncertainty is an intrinsically difficult task. However, modeling uncertainty does not necessarily require complex models, in the sense of a model requiring an unmanageable number of degrees of freedom to characterize the aquifer. The relationship between complexity, uncertainty, heterogeneity, and stochastic modeling is not simple. Aquifer models should be able to quantify the uncertainty of their predictions, which can be done using stochastic models that produce heterogeneous realizations of aquifer parameters. This is the type of complexity addressed in this article.
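The stochastic approach described here can be sketched as an ensemble of heterogeneous parameter fields whose cell-wise spread quantifies uncertainty. The spectral covariance model below is an assumption for illustration, not tied to any particular aquifer.

```python
import numpy as np

rng = np.random.default_rng(0)

def realization(n=64, corr=5.0):
    """One heterogeneous (e.g., log-conductivity) field: white noise
    colored with an assumed Gaussian spectral filter."""
    k = np.fft.fftfreq(n)
    kx, ky = np.meshgrid(k, k)
    spectrum = np.exp(-(kx**2 + ky**2) * corr**2)
    noise = np.fft.fft2(rng.standard_normal((n, n)))
    field = np.real(np.fft.ifft2(noise * np.sqrt(spectrum)))
    return field / field.std()          # normalize to unit variance

# A small ensemble; the cell-wise standard deviation across realizations
# is one simple measure of parameter uncertainty.
ensemble = np.stack([realization() for _ in range(20)])
uncertainty = ensemble.std(axis=0)
```

Each realization honors the same assumed spatial statistics while differing in detail, which is exactly the kind of manageable "complexity" the article distinguishes from an unmanageable number of free parameters.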
NASA Astrophysics Data System (ADS)
Huerta, Eliu; Agarwal, Bhanu; Chua, Alvin; George, Daniel; Haas, Roland; Hinder, Ian; Kumar, Prayush; Moore, Christopher; Pfeiffer, Harald
2017-01-01
We recently constructed an inspiral-merger-ringdown (IMR) waveform model to describe the dynamical evolution of compact binaries on eccentric orbits, and used this model to constrain the eccentricity with which the gravitational wave transients currently detected by LIGO could be effectively recovered with banks of quasi-circular templates. We now present the second generation of this model, which is calibrated using a large catalog of eccentric numerical relativity simulations. We discuss the new features of this model, and show that its enhanced accuracy makes it a powerful tool to detect eccentric signals with LIGO.
An Efficient and Accurate Quantum Lattice-Gas Model for the Many-Body Schroedinger Wave Equation
2002-01-01
An efficient and accurate quantum lattice-gas model is presented for simulating the time-dependent evolution of a many-body quantum mechanical system of particles governed by the non-relativistic Schroedinger wave equation. The numerical dispersion of the simulated wave packets is compared with the analytical solutions. Subject terms: Schroedinger wave equation.
Nasr Esfahani, Bahram; Rezaei Yazdi, Hadi; Moghim, Sharareh; Ghasemian Safaei, Hajieh; Zarkesh Esfahani, Hamid
2012-11-01
Rapid and accurate identification of mycobacterial isolates from primary culture is important for timely and appropriate antibiotic therapy. Conventional methods for identification of Mycobacterium species based on biochemical tests need several weeks and may remain inconclusive. In this study, a novel multiplex real-time PCR was developed for rapid identification of the Mycobacterium genus, the Mycobacterium tuberculosis complex (MTC) and the most common non-tuberculosis mycobacteria, including M. abscessus, M. fortuitum, M. avium complex, M. kansasii, and M. gordonae, in three reaction tubes under the same PCR conditions. Genetic targets for primer design included the 16S rDNA gene, the dnaJ gene, the gyrB gene and the internal transcribed spacer (ITS). The multiplex real-time PCR was set up with reference Mycobacterium strains and subsequently tested with 66 clinical isolates. Results were analyzed with melting curves, and the melting temperatures (Tm) of the Mycobacterium genus, MTC, and each non-tuberculosis Mycobacterium species were determined. Multiplex real-time PCR results were compared with amplification and sequencing of the 16S-23S rDNA ITS for identification of Mycobacterium species. Sensitivity and specificity of the designed primers were each 100% for MTC, M. abscessus, M. fortuitum, M. avium complex, M. kansasii, and M. gordonae; for the genus Mycobacterium they were 96% and 100%, respectively. According to these results, we conclude that this multiplex real-time PCR with melting curve analysis and these novel primers can be used for rapid and accurate identification of the genus Mycobacterium, MTC, and the most common non-tuberculosis Mycobacterium species.
Human driven transitions in complex model ecosystems
NASA Astrophysics Data System (ADS)
Harfoot, Mike; Newbold, Tim; Tittensor, Derek; Purves, Drew
2015-04-01
Human activities have been observed to impact ecosystems across the globe, leading to reduced ecosystem functioning, altered trophic and biomass structure, and ultimately ecosystem collapse. Previous attempts to understand global human impacts on ecosystems have usually relied on statistical models, which do not explicitly model the processes underlying the functioning of ecosystems, represent only a small proportion of organisms, and do not adequately capture the complex, non-linear and dynamic responses of ecosystems to perturbations. We use a mechanistic ecosystem model (1), which simulates the underlying processes structuring ecosystems and can thus capture complex and dynamic interactions, to investigate the limits of complex ecosystems' tolerance to human perturbation. We explore several drivers, including human appropriation of net primary production and harvesting of animal biomass. We also present an analysis of the key interactions between biotic, societal and abiotic earth system components, considering why and how we might think about these couplings. References: M. B. J. Harfoot et al., Emergent global patterns of ecosystem structure and function from a mechanistic general ecosystem model., PLoS Biol. 12, e1001841 (2014).
Efficient and accurate local model for colorimetric characterization of liquid-crystal displays.
Zou, Wenhai; Xu, Haisong; Gong, Rui
2012-01-01
Taking into account the chromaticity inconstancy of LCDs and the efficiency of the inverse transformation, a novel local colorimetric characterization model was developed in this Letter. Rather than dividing the device color space into many subspaces to refine the chromaticity description, as existing local models do, the proposed model tailors the transformation uniquely for each characterized color using look-up tables and a local chromaticity matrix. Based on this model, the characterization task can be accomplished efficiently within a few steps for either the forward or the inverse transformation. Test experiments on several commercial LCDs indicated that the average color difference between estimated and measured tristimulus values was as low as about 0.4 CIEDE2000 units, effectively validating the proposed model.
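The look-up-table-plus-matrix structure of display characterization can be sketched in a few lines. The tone-curve samples and chromaticity matrix below are invented for illustration; a real model would use measured values per display (and, as in this Letter, a matrix localized per color rather than a single global one).

```python
import numpy as np

# Assumed per-channel tone curve samples: digital level -> linear luminance.
levels = np.array([0.0, 64.0, 128.0, 192.0, 255.0])
lum    = np.array([0.0, 0.04, 0.18, 0.50, 1.00])

def linearize(d):
    """Look-up table: linear interpolation between measured samples."""
    return np.interp(d, levels, lum)

# Assumed chromaticity matrix: columns are the XYZ of the R, G, B primaries.
M = np.array([[41.2, 35.8, 18.0],
              [21.3, 71.5,  7.2],
              [ 1.9, 11.9, 95.0]])

def forward(rgb):
    """Forward characterization: digital RGB -> estimated XYZ."""
    return M @ linearize(np.asarray(rgb, dtype=float))

xyz = forward([128, 128, 128])   # a mid-gray input
```

The forward transform is two cheap steps (LUT, then matrix), which is why the characterization "can be accomplished within a few steps"; the inverse direction inverts the matrix and the monotone LUTs.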
Active appearance model and deep learning for more accurate prostate segmentation on MRI
NASA Astrophysics Data System (ADS)
Cheng, Ruida; Roth, Holger R.; Lu, Le; Wang, Shijun; Turkbey, Baris; Gandler, William; McCreedy, Evan S.; Agarwal, Harsh K.; Choyke, Peter; Summers, Ronald M.; McAuliffe, Matthew J.
2016-03-01
Prostate segmentation on 3D MR images is a challenging task due to image artifacts, large inter-patient prostate shape and texture variability, and lack of a clear prostate boundary specifically at apex and base levels. We propose a supervised machine learning model that combines atlas based Active Appearance Model (AAM) with a Deep Learning model to segment the prostate on MR images. The performance of the segmentation method is evaluated on 20 unseen MR image datasets. The proposed method combining AAM and Deep Learning achieves a mean Dice Similarity Coefficient (DSC) of 0.925 for whole 3D MR images of the prostate using axial cross-sections. The proposed model utilizes the adaptive atlas-based AAM model and Deep Learning to achieve significant segmentation accuracy.
D’Adamo, Giuseppe; Pelissetto, Andrea; Pierleoni, Carlo
2014-12-28
A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = R̂g/Rc, where R̂g is the zero-density polymer radius of gyration and Rc is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.
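A single step of the iterative Boltzmann inversion scheme mentioned above is easy to write down. The pair correlation functions below are toy numbers (not the paper's blob-colloid data), used only to show the update rule.

```python
import numpy as np

kT = 1.0                              # thermal energy (reduced units)
r = np.linspace(0.5, 3.0, 6)          # pair separations (illustrative grid)

# Toy target and current pair correlation functions g(r) (assumed data).
g_target  = np.array([0.05, 0.60, 1.20, 1.05, 1.00, 1.00])
g_current = np.array([0.10, 0.50, 1.30, 1.10, 1.00, 1.00])
V_current = np.zeros_like(r)          # current trial pair potential

# One iterative Boltzmann inversion update:
#   V_{i+1}(r) = V_i(r) + kT * ln( g_i(r) / g_target(r) )
V_next = V_current + kT * np.log(g_current / g_target)
```

Where the current g(r) exceeds the target, the update adds repulsion (positive correction); where the two agree, the correction vanishes. In practice the loop alternates such updates with new simulations until g(r) matches the full-monomer target.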
BDI-modelling of complex intracellular dynamics.
Jonker, C M; Snoep, J L; Treur, J; Westerhoff, H V; Wijngaards, W C A
2008-03-07
A BDI-based continuous-time modelling approach for intracellular dynamics is presented. It is shown how temporalized BDI-models make it possible to model intracellular biochemical processes as decision processes. By abstracting from some of the details of the biochemical pathways, the model achieves understanding in nearly intuitive terms, without losing veracity: classical intentional state properties such as beliefs, desires and intentions are founded in reality through precise biochemical relations. In an extensive example, the complex regulation of Escherichia coli vis-à-vis lactose, glucose and oxygen is simulated as a discrete-state, continuous-time temporal decision manager. Thus a bridge is introduced between two different scientific areas: the area of BDI-modelling and the area of intracellular dynamics.
A Practical Philosophy of Complex Climate Modelling
NASA Technical Reports Server (NTRS)
Schmidt, Gavin A.; Sherwood, Steven
2014-01-01
We give an overview of the practice of developing and using complex climate models, as seen from experiences in a major climate modelling center and through participation in the Coupled Model Intercomparison Project (CMIP). We discuss the construction and calibration of models; their evaluation, especially through use of out-of-sample tests; and their exploitation in multi-model ensembles to identify biases and make predictions. We stress that adequacy or utility of climate models is best assessed via their skill against more naive predictions. The framework we use for making inferences about reality using simulations is naturally Bayesian (in an informal sense), and has many points of contact with more familiar examples of scientific epistemology. While the use of complex simulations in science is a development that changes much in how science is done in practice, we argue that the concepts being applied fit very much into traditional practices of the scientific method, albeit those more often associated with laboratory work.
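The idea of assessing a model by its skill against a naive prediction can be made concrete with a standard skill score. The observations and predictions below are hypothetical numbers, and the baseline chosen here (climatology, i.e. the observed mean) is one common choice of naive predictor.

```python
import numpy as np

obs   = np.array([14.2, 14.5, 14.8, 15.0, 15.3])  # hypothetical observations
model = np.array([14.1, 14.6, 14.7, 15.1, 15.2])  # complex-model predictions
naive = np.full_like(obs, obs.mean())             # naive baseline: climatology

def mse(pred):
    return float(np.mean((pred - obs) ** 2))

# Skill relative to the naive prediction: 1 is perfect, 0 is no better
# than the baseline, negative is worse than the baseline.
skill = 1.0 - mse(model) / mse(naive)
```

A model can have sizable absolute errors yet still be highly skillful if the naive alternative does much worse, which is the comparison the authors argue matters.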
Intrinsic Uncertainties in Modeling Complex Systems.
Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.
2014-09-01
Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.
Accurate Sloshing Modes Modeling: A New Analytical Solution and its Consequences on Control
NASA Astrophysics Data System (ADS)
Gonidou, Luc-Olivier; Desmariaux, Jean
2014-06-01
This study addresses the issue of sloshing mode modeling for GNC analysis purposes. On European launchers, equivalent mechanical systems are commonly used to model sloshing effects on launcher dynamics. The representativeness of such a methodology is discussed here. First, an exact analytical formulation of the launcher dynamics fitted with sloshing modes is proposed, and discrepancies with the equivalent-mechanical-system approach are emphasized. Then, preliminary comparative GNC analyses are performed using the different models of dynamics in order to evaluate the impact of the aforementioned discrepancies from a GNC standpoint. Special attention is paid to system stability.
Cimpoesu, Dorin; Stoleriu, Laurentiu; Stancu, Alexandru
2013-12-14
We propose a generalized Stoner-Wohlfarth (SW) type model to describe various experimentally observed angular dependencies of the switching field in non-single-domain magnetic particles. Because the nonuniform magnetic states are generally characterized by complicated spin configurations with no simple analytical description, we maintain the macrospin hypothesis and we phenomenologically include the effects of nonuniformities only in the anisotropy energy, preserving as much as possible the elegance of SW model, the concept of critical curve and its geometric interpretation. We compare the results obtained with our model with full micromagnetic simulations in order to evaluate the performance and limits of our approach.
Different Epidemic Models on Complex Networks
NASA Astrophysics Data System (ADS)
Zhang, Hai-Feng; Small, Michael; Fu, Xin-Chu
2009-07-01
Models for disease spreading are not limited to SIS or SIR. For instance, for the spreading of AIDS/HIV, susceptible individuals can be classified into different cases according to their immunity, and similarly, infected individuals can be sorted into different classes according to their infectivity. Moreover, some diseases may develop through several stages. Many authors have shown that individuals' relations can be viewed as a complex network. So in this paper, in order to better explain the dynamical behavior of epidemics, we consider different epidemic models on complex networks, and obtain the epidemic threshold for each case. Finally, we present numerical simulations for each case to verify our results.
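For the simplest of these models, SIS on an uncorrelated network, the heterogeneous mean-field epidemic threshold has a closed form, lambda_c = <k>/<k^2>, which can be evaluated directly from a degree sequence. The heavy-tailed degree distribution below is an assumption for illustration.

```python
import numpy as np

# Hypothetical heavy-tailed degree sequence of a contact network.
rng = np.random.default_rng(1)
degrees = rng.pareto(2.5, 10_000) + 1.0

# SIS threshold on an uncorrelated network (heterogeneous mean field):
#   lambda_c = <k> / <k^2>
k_mean  = degrees.mean()
k2_mean = (degrees ** 2).mean()
lambda_c = k_mean / k2_mean

# A homogeneous network with the same mean degree has threshold 1/<k>;
# degree heterogeneity always lowers the threshold (<k^2> > <k>^2).
lambda_homog = 1.0 / k_mean
```

This is the baseline against which thresholds for the richer compartment structures discussed in the paper (staged infection, heterogeneous susceptibility and infectivity) can be compared.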
NASA Astrophysics Data System (ADS)
Dale, Andy; Stolpovsky, Konstantin; Wallmann, Klaus
2016-04-01
The recycling and burial of biogenic material in the sea floor plays a key role in the regulation of ocean chemistry. Proper consideration of these processes in ocean biogeochemical models is becoming increasingly recognized as an important step in model validation and prediction. However, the rate of organic matter remineralization in sediments and the benthic flux of redox-sensitive elements are difficult to predict a priori. In this communication, examples of empirical benthic flux models that can be coupled to earth system models to predict sediment-water exchange in the open ocean are presented. Large uncertainties hindering further progress in this field include knowledge of the reactivity of organic carbon reaching the sediment, the importance of episodic variability in bottom water chemistry and particle rain rates (for both the deep-sea and margins) and the role of benthic fauna. How do we meet the challenge?
Accurate modeling of F-region electron densities. Annual progress report, 1993-1994
Not Available
1994-01-01
In the past year, the authors have made considerable progress in a number of areas including algorithm development, completion of two major case studies, and the development of a new EUV flux model. As a result, there has been a major improvement in the ability to model global emissions in support of NASA's imaging plans. Activity highlights include the following: developed a new algorithm to allow physical models to reproduce observed NmF2; investigated the relationship between NmF2 and F10.7 at Millstone Hill during 1990; developed a new solar EUV flux model; statistical survey of anomalously high nighttime electron T(sub e) at Millstone Hill; conducted a case study of the March 1990 magnetic storm; and conducted a comparison between theory and data of magnetically quiet behavior of the winter ionosphere at Millstone Hill.
Xu, Xuemiao; Zhang, Huaidong; Han, Guoqiang; Kwan, Kin Chung; Pang, Wai-Man; Fang, Jiaming; Zhao, Gansen
2016-01-01
Exterior orientation parameters' (EOP) estimation using space resection plays an important role in topographic reconstruction for push broom scanners. However, existing models of space resection are highly sensitive to errors in the data. Unfortunately, for lunar imagery, the altitude data at the ground control points (GCPs) used for space resection are error-prone, so existing models fail to produce reliable EOPs. Motivated by the finding that, for push broom scanners, the angular rotations of the EOPs can be estimated independently of the altitude data, involving only the geographic data at the GCPs (which are already provided), we divide the modeling of space resection into two phases. First, we estimate the angular rotations based on the reliable geographic data using our proposed mathematical model. Then, with accurate angular rotations, the collinearity equations for space resection simplify to a linear problem, and the globally optimal solution for the spatial position of the EOPs can always be achieved. Moreover, a certainty term is integrated to penalize the unreliable altitude data, increasing the error tolerance. Experimental results show that our model obtains more accurate EOPs and topographic maps not only for simulated data, but also for real data from Chang'E-1, compared to the existing space resection model. PMID:27077855
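The second phase described above (position estimation once the rotations are known) can be illustrated with a toy pinhole setup. Assuming the rotation is the identity and unit focal length for simplicity, the collinearity equations become linear in the camera position C and solve by least squares; the GCPs and true position below are synthetic.

```python
import numpy as np

# With known rotation (identity here) and unit focal length, collinearity
# for image point (x, y) of ground point (X, Y, Z) gives linear equations:
#   Cx - x*Cz = X - x*Z    and    Cy - y*Cz = Y - y*Z
C_true = np.array([1.0, 2.0, -5.0])                       # synthetic position
gcps = np.array([[0.0, 0.0, 0.0],
                 [3.0, 1.0, 0.0],
                 [-2.0, 4.0, 1.0],
                 [5.0, -3.0, 0.5]])                        # synthetic GCPs

rows, rhs = [], []
for X, Y, Z in gcps:
    x = (X - C_true[0]) / (Z - C_true[2])   # noise-free image coordinates
    y = (Y - C_true[1]) / (Z - C_true[2])
    rows += [[1.0, 0.0, -x], [0.0, 1.0, -y]]
    rhs  += [X - x * Z, Y - y * Z]

# Overdetermined linear system: global least-squares optimum in one shot.
C_est, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

Because the problem is linear, the least-squares solution is the global optimum, which is the property the two-phase decomposition exploits; the paper's certainty term would simply reweight the rows built from unreliable altitude data.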
Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu
2015-09-01
Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrate, enzyme and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and of experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when the λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior, and the λ parameter can be used to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and of the λ value in saccharification performance assessment are discussed.
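Fitting a Weibull curve y(t) = 1 - exp(-(t/λ)^n) to a saccharification time course can be done by the standard linearization ln(-ln(1 - y)) = n ln t - n ln λ. The time course below is synthetic, generated from assumed parameters so the recovery can be checked.

```python
import numpy as np

# Synthetic saccharification time course following a Weibull curve
# with assumed characteristic time lam and shape n.
lam_true, n_true = 10.0, 0.8
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])   # hours, illustrative
y = 1.0 - np.exp(-(t / lam_true) ** n_true)            # yield fraction

# Linearization: ln(-ln(1 - y)) = n*ln(t) - n*ln(lam),
# so a degree-1 least-squares fit recovers both parameters.
slope, intercept = np.polyfit(np.log(t), np.log(-np.log(1.0 - y)), 1)
n_fit   = slope
lam_fit = np.exp(-intercept / slope)
```

With real data the points scatter around the line, and λ (the time to reach a yield of 1 - 1/e ≈ 63%) is the single number the authors propose for comparing saccharification systems.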
Berger, Perrine; Alouini, Mehdi; Bourderionnet, Jérôme; Bretenaker, Fabien; Dolfi, Daniel
2010-01-18
We developed an improved model to predict the RF behavior and the slow-light properties of an SOA, valid for any experimental conditions. It takes into account the dynamic saturation of the SOA, which can be fully characterized by a simple measurement, and relies only on material fitting parameters that are independent of the optical intensity and the injected current. The model is validated by good agreement with experiments for small and large modulation indices.
Accurate 3D Modeling of Breast Deformation for Temporal Mammogram Registration
2008-09-01
In this research project, we have developed a mathematical model of breast deformation to simulate breast compression during ... The proposed model simulates and analyzes breast deformation, which can significantly improve the accuracy of matching in temporal mammograms and thus the performance of diagnosis and treatment.
Statistical tests with accurate size and power for balanced linear mixed models.
Muller, Keith E; Edwards, Lloyd J; Simpson, Sean L; Taylor, Douglas J
2007-08-30
The convenience of linear mixed models for Gaussian data has led to their widespread use. Unfortunately, standard mixed model tests often have greatly inflated test size in small samples. Many applications with correlated outcomes in medical imaging and other fields have simple properties which do not require the generality of a mixed model. Alternately, stating the special cases as a general linear multivariate model allows analysing them with either the univariate or multivariate approach to repeated measures (UNIREP, MULTIREP). Even in small samples, an appropriate UNIREP or MULTIREP test always controls test size and has a good power approximation, in sharp contrast to mixed model tests. Hence, mixed model tests should never be used when one of the UNIREP tests (uncorrected, Huynh-Feldt, Geisser-Greenhouse, Box conservative) or MULTIREP tests (Wilks, Hotelling-Lawley, Roy's, Pillai-Bartlett) apply. Convenient methods give exact power for the uncorrected and Box conservative tests. Simulations demonstrate that new power approximations for all four UNIREP tests eliminate most inaccuracy in existing methods. In turn, free software implements the approximations to give a better choice of sample size. Two repeated measures power analyses illustrate the methods. The examples highlight the advantages of examining the entire response surface of power as a function of sample size, mean differences, and variability.
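One of the UNIREP corrections named above, the Geisser-Greenhouse epsilon, can be computed directly from the repeated-measures covariance matrix via orthonormal contrasts. The compound-symmetric matrix below is a test case chosen because it satisfies sphericity exactly, so epsilon must equal 1.

```python
import numpy as np

def gg_epsilon(S):
    """Greenhouse-Geisser epsilon for a k x k repeated-measures
    covariance matrix S, using orthonormal contrasts."""
    k = S.shape[0]
    # Orthonormal contrasts: eigenvectors of the centering matrix with
    # eigenvalue 1 (i.e., unit vectors orthogonal to the all-ones vector).
    ones = np.ones((k, 1)) / np.sqrt(k)
    center = np.eye(k) - ones @ ones.T
    vals, vecs = np.linalg.eigh(center)
    C = vecs[:, vals > 0.5]                    # k x (k-1) contrast matrix
    lam = np.linalg.eigvalsh(C.T @ S @ C)      # eigenvalues of contrast covariance
    # epsilon = (sum lam)^2 / ((k-1) * sum lam^2); equals 1 under sphericity.
    return float(lam.sum() ** 2 / ((k - 1) * (lam ** 2).sum()))

# Compound symmetry satisfies sphericity, so epsilon should be exactly 1.
S_cs = 0.5 * np.eye(4) + 0.5 * np.ones((4, 4))
eps = gg_epsilon(S_cs)
```

Smaller epsilon values (down to 1/(k-1)) indicate stronger sphericity violations and lead to the degree-of-freedom corrections that keep the UNIREP test size controlled.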
Masoli, Stefano; Rizza, Martina F; Sgritta, Martina; Van Geit, Werner; Schürmann, Felix; D'Angelo, Egidio
2017-01-01
In realistic neuronal modeling, once the ionic channel complement has been defined, the maximum ionic conductance (Gi-max) values need to be tuned in order to match the firing pattern revealed by electrophysiological recordings. Recently, selection/mutation genetic algorithms have been proposed to efficiently and automatically tune these parameters. Nonetheless, since similar firing patterns can be achieved through different combinations of Gi-max values, it is not clear how well these algorithms approximate the corresponding properties of real cells. Here we have evaluated the issue by exploiting a unique opportunity offered by the cerebellar granule cell (GrC), which is electrotonically compact and has therefore allowed the direct experimental measurement of ionic currents. Previous models were constructed using empirical tuning of Gi-max values to match the original data set. Here, by using repetitive discharge patterns as a template, the optimization procedure yielded models that closely approximated the experimental Gi-max values. These models, in addition to repetitive firing, captured additional features, including inward rectification, near-threshold oscillations, and resonance, which were not used as features. Thus, parameter optimization using genetic algorithms provided an efficient modeling strategy for reconstructing the biophysical properties of neurons and for the subsequent reconstruction of large-scale neuronal network models.
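The selection/mutation loop described here can be sketched with a toy surrogate in place of the full neuron simulation. Everything below is an assumption for illustration: the linear "firing rate" function stands in for a simulated GrC response, and the two tuned values stand in for Gi-max parameters.

```python
import random

random.seed(42)

def firing_rate(g_na, g_k):
    """Toy surrogate for the model's firing response to two maximum
    conductances (not the paper's granule-cell model)."""
    return 50.0 * g_na - 30.0 * g_k

target = 40.0   # desired firing feature (Hz), hypothetical

# Selection/mutation loop: keep the best parameter sets, mutate them.
population = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(30)]
for _ in range(200):
    population.sort(key=lambda g: abs(firing_rate(*g) - target))
    parents = population[:10]                      # selection
    population = parents + [                       # mutation (elitist)
        (max(0.0, p[0] + random.gauss(0, 0.05)),
         max(0.0, p[1] + random.gauss(0, 0.05)))
        for p in parents for _ in range(2)
    ]

best = min(population, key=lambda g: abs(firing_rate(*g) - target))
error = abs(firing_rate(*best) - target)
```

As the abstract notes, many (g_na, g_k) pairs reproduce the same target feature, which is exactly why matching the firing pattern alone does not pin down the underlying conductances; the paper's contribution is showing that richer discharge templates do.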
Noncommutative complex Grosse-Wulkenhaar model
Hounkonnou, Mahouton Norbert; Samary, Dine Ousmane
2008-11-18
This paper presents an application of the noncommutative (NC) Noether theorem, given in our previous work [AIP Proc. 956 (2007) 55-60], to the NC complex Grosse-Wulkenhaar model. It provides an extension of a recent work [Physics Letters B 653 (2007) 343-345]. The local conservation of the energy-momentum tensors (EMTs) is recovered using improvement procedures based on Moyal algebraic techniques. Broken dilatation symmetry is discussed. NC gauge currents are also explicitly computed.
Heuijerjans, Ashley; Matikainen, Marko K.; Julkunen, Petro; Eliasson, Pernilla; Aspenberg, Per; Isaksson, Hanna
2015-01-01
Background Computational models of Achilles tendons can help understanding how healthy tendons are affected by repetitive loading and how the different tissue constituents contribute to the tendon's biomechanical response. However, available models of the Achilles tendon are limited in their description of the hierarchical multi-structural composition of the tissue. This study hypothesised that a poroviscoelastic fibre-reinforced model, previously successful in capturing cartilage biomechanical behaviour, could reproduce the biomechanical behaviour of the rat Achilles tendon observed experimentally. Materials and Methods We developed a new material model of the Achilles tendon, which considers the tendon's main constituents, namely water, proteoglycan matrix and collagen fibres. A hyperelastic formulation of the proteoglycan matrix enabled computations of large deformations of the tendon, and collagen fibres were modelled as viscoelastic. Specimen-specific finite element models were created for 9 rat Achilles tendons from an animal experiment, and simulations were carried out following a repetitive tensile loading protocol. The material model parameters were calibrated against data from the rats by minimising the root mean squared error (RMS) between experimental force data and model output. Results and Conclusions All specimen models were successfully fitted to experimental data with high accuracy (RMS 0.42-1.02). Additional simulations predicted more compliant and soft tendon behaviour at reduced strain-rates compared to higher strain-rates that produce a stiff and brittle tendon response. Stress-relaxation simulations exhibited strain-dependent stress-relaxation behaviour where larger strains produced slower relaxation rates compared to smaller strain levels. Our simulations showed that the collagen fibres in the Achilles tendon are the main load-bearing component during tensile loading, where the orientation of the collagen fibres plays an important role for the tendon
Fadlalla, Adam M.A.; Golob, Joseph F.
2012-01-01
Abstract Background Differentiation between infectious and non-infectious etiologies of the systemic inflammatory response syndrome (SIRS) in trauma patients remains elusive. We hypothesized that mathematical modeling in combination with computerized clinical decision support would assist with this differentiation. The purpose of this study was to determine the capability of various mathematical modeling techniques to predict infectious complications in critically ill trauma patients and compare the performance of these models with a standard fever workup practice (identifying infections on the basis of fever or leukocytosis). Methods An 18-mo retrospective database was created using information collected daily from critically ill trauma patients admitted to an academic surgical and trauma intensive care unit. Two hundred forty-three non-infected patient-days were chosen randomly to combine with the 243 infected-days, which created a modeling sample of 486 patient-days. Utilizing ten variables known to be associated with infectious complications, decision trees, neural networks, and logistic regression analysis models were created to predict the presence of urinary tract infections (UTIs), bacteremia, and respiratory tract infections (RTIs). The data sample was split into a 70% training set and a 30% testing set. Models were compared by calculating sensitivity, specificity, positive predictive value, negative predictive value, overall accuracy, and discrimination. Results Decision trees had the best modeling performance, with a sensitivity of 83%, an accuracy of 82%, and a discrimination of 0.91 for identifying infections. Both neural networks and decision trees outperformed logistic regression analysis. A second analysis was performed utilizing the same 243 infected days and only those non-infected patient-days associated with negative microbiologic cultures (n = 236). Decision trees again had the best modeling performance for infection identification, with a
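The screening metrics used to compare the models above follow directly from confusion-matrix counts; a minimal sketch, with illustrative counts rather than the study's data:

```python
# Binary-classification metrics named in the abstract (sensitivity,
# specificity, PPV, NPV, accuracy), computed from raw confusion-matrix
# counts. The counts below are illustrative, not the study's data.

def diagnostic_metrics(tp, fp, tn, fn):
    """Return the standard binary-classification metrics as a dict."""
    return {
        "sensitivity": tp / (tp + fn),          # true-positive rate
        "specificity": tn / (tn + fp),          # true-negative rate
        "ppv": tp / (tp + fp),                  # positive predictive value
        "npv": tn / (tn + fn),                  # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

m = diagnostic_metrics(tp=60, fp=13, tn=60, fn=13)
print(round(m["sensitivity"], 3))  # 0.822
```

Discrimination (the 0.91 reported) would additionally require ranking scores to compute an ROC area, which needs per-case probabilities rather than counts.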
Krokhotin, Andrey; Dokholyan, Nikolay V
2015-01-01
Computational methods can provide significant insights into RNA structure and dynamics, bridging the gap in our understanding of the relationship between structure and biological function. Simulations enrich and enhance our understanding of data derived on the bench, as well as provide feasible alternatives to costly or technically challenging experiments. Coarse-grained computational models of RNA are especially important in this regard, as they allow analysis of events occurring in timescales relevant to RNA biological function, which are inaccessible through experimental methods alone. We have developed a three-bead coarse-grained model of RNA for discrete molecular dynamics simulations. This model is efficient in de novo prediction of short RNA tertiary structure, starting from RNA primary sequences of less than 50 nucleotides. To complement this model, we have incorporated additional base-pairing constraints and have developed a bias potential reliant on data obtained from hydroxyl probing experiments that guide RNA folding to its correct state. By introducing experimentally derived constraints to our computer simulations, we are able to make reliable predictions of RNA tertiary structures up to a few hundred nucleotides. Our refined model exemplifies a valuable benefit achieved through integration of computation and experimental methods.
A cortico-subcortical model for generation of spatially accurate sequential saccades.
Dominey, P F; Arbib, M A
1992-01-01
This article provides a systems framework for the analysis of cortical and subcortical interactions in the control of saccadic eye movements. A major thesis of this model is that a topography of saccade direction and amplitude is preserved through multiple projections between brain regions until it is finally transformed into a temporal pattern of activity that drives the eyes to the target. The control of voluntary saccades to visual and remembered targets is modeled in terms of interactions between posterior parietal cortex, frontal eye fields, the basal ganglia (caudate and substantia nigra), superior colliculus, mediodorsal thalamus, and the saccade generator of the brainstem. Interactions include the modulation of eye movement motor error maps by topographic inhibitory projections, dynamic remapping of spatial target representations in saccade motor error maps, and sustained neural activity that embodies spatial memory. Models of these mechanisms implemented in our Neural Simulation Language simulate behavior and neural activity described in the literature, and suggest new experiments.
Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier
2015-02-15
Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
NASA Technical Reports Server (NTRS)
Ellison, Donald; Conway, Bruce; Englander, Jacob
2015-01-01
A significant body of work exists showing that providing a nonlinear programming (NLP) solver with expressions for the problem constraint gradient substantially increases the speed of program execution and can also improve the robustness of convergence, especially for local optimizers. Calculation of these derivatives is often accomplished through the computation of the spacecraft's state transition matrix (STM). If the two-body gravitational model is employed, as is often done in the context of preliminary design, closed-form expressions for these derivatives may be provided. If a high-fidelity dynamics model is used, one that might include perturbing forces such as the gravitational effect of multiple third bodies and solar radiation pressure, then these STMs must be computed numerically. We present a method for the power hardware model and a full ephemeris model. An adaptive-step embedded eighth-order Dormand-Prince numerical integrator is discussed, and a method for the computation of the time-of-flight derivatives in this framework is presented. The use of these numerically calculated derivatives offers a substantial improvement over finite differencing in the context of a global optimizer. Specifically, the inclusion of these STMs into the low-thrust mission design tool chain in use at NASA Goddard Space Flight Center allows for an increased preliminary mission design cadence.
Advancements and challenges in generating accurate animal models of gestational diabetes mellitus.
Pasek, Raymond C; Gannon, Maureen
2013-12-01
The maintenance of glucose homeostasis during pregnancy is critical to the health and well-being of both the mother and the developing fetus. Strikingly, approximately 7% of human pregnancies are characterized by insufficient insulin production or signaling, resulting in gestational diabetes mellitus (GDM). In addition to the acute health concerns of hyperglycemia, women diagnosed with GDM during pregnancy have an increased incidence of complications during pregnancy as well as an increased risk of developing type 2 diabetes (T2D) later in life. Furthermore, children born to mothers diagnosed with GDM have increased incidence of perinatal complications, including hypoglycemia, respiratory distress syndrome, and macrosomia, as well as an increased risk of being obese or developing T2D as adults. No single environmental or genetic factor is solely responsible for the disease; instead, a variety of risk factors, including weight, ethnicity, genetics, and family history, contribute to the likelihood of developing GDM, making the generation of animal models that fully recapitulate the disease difficult. Here, we discuss and critique the various animal models that have been generated to better understand the etiology of diabetes during pregnancy and its physiological impacts on both the mother and the fetus. Strategies utilized are diverse in nature and include the use of surgical manipulation, pharmacological treatment, nutritional manipulation, and genetic approaches in a variety of animal models. Continued development of animal models of GDM is essential for understanding the consequences of this disease as well as providing insights into potential treatments and preventative measures.
Accurate prediction of the refractive index of polymers using first principles and data modeling
NASA Astrophysics Data System (ADS)
Afzal, Mohammad Atif Faiz; Cheng, Chong; Hachmann, Johannes
Organic polymers with a high refractive index (RI) have recently attracted considerable interest due to their potential application in optical and optoelectronic devices. The ability to tailor the molecular structure of polymers is the key to increasing the accessible RI values. Our work concerns the creation of predictive in silico models for the optical properties of organic polymers, the screening of large-scale candidate libraries, and the mining of the resulting data to extract the underlying design principles that govern their performance. This work was set up to guide our experimentalist partners and allow them to target the most promising candidates. Our model is based on the Lorentz-Lorenz equation and thus includes the polarizability and number density values for each candidate. For the former, we performed a detailed benchmark study of different density functionals, basis sets, and the extrapolation scheme towards the polymer limit. For the number density we devised an exceedingly efficient machine learning approach to correlate the polymer structure and the packing fraction in the bulk material. We validated the proposed RI model against the experimentally known RI values of 112 polymers. We could show that the proposed combination of physical and data modeling is both successful and highly economical to characterize a wide range of organic polymers, which is a prerequisite for virtual high-throughput screening.
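The Lorentz-Lorenz relation at the heart of the model above can be inverted for the refractive index in a few lines; the polarizability and number density used here are hypothetical placeholders, not values from the study:

```python
import math

def refractive_index(polarizability_cm3, number_density_cm3):
    """Solve the Lorentz-Lorenz relation (n^2 - 1)/(n^2 + 2) = (4*pi/3)*N*alpha
    for n, with alpha in cm^3 and N in cm^-3 (CGS units)."""
    L = (4.0 * math.pi / 3.0) * number_density_cm3 * polarizability_cm3
    if not 0.0 <= L < 1.0:
        raise ValueError("unphysical Lorentz-Lorenz parameter")
    return math.sqrt((1.0 + 2.0 * L) / (1.0 - L))

# Hypothetical repeat-unit polarizability and bulk number density:
n = refractive_index(1.2e-23, 5.0e21)
print(round(n, 3))  # 1.417
```

In the paper's workflow, alpha would come from the benchmarked DFT extrapolation and N from the machine-learned packing fraction; this sketch only shows the final algebraic step.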
Developing an Accurate CFD Based Gust Model for the Truss Braced Wing Aircraft
NASA Technical Reports Server (NTRS)
Bartels, Robert E.
2013-01-01
The increased flexibility of long endurance aircraft having high aspect ratio wings necessitates attention to gust response and perhaps the incorporation of gust load alleviation. The design of civil transport aircraft with a strut or truss-braced high aspect ratio wing furthermore requires gust response analysis in the transonic cruise range. This requirement motivates the use of high fidelity nonlinear computational fluid dynamics (CFD) for gust response analysis. This paper presents the development of a CFD based gust model for the truss braced wing aircraft. A sharp-edged gust provides the gust system identification. The result of the system identification is several thousand time steps of instantaneous pressure coefficients over the entire vehicle. This data is filtered and downsampled to provide the snapshot data set from which a reduced order model is developed. A stochastic singular value decomposition algorithm is used to obtain a proper orthogonal decomposition (POD). The POD model is combined with a convolution integral to predict the time varying pressure coefficient distribution due to a novel gust profile. Finally the unsteady surface pressure response of the truss braced wing vehicle to a one-minus-cosine gust, simulated using the reduced order model, is compared with the full CFD.
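The POD step described above can be sketched with a plain SVD on a synthetic snapshot matrix; the real snapshots are filtered CFD surface-pressure fields, so everything below is illustrative:

```python
import numpy as np

# Minimal POD sketch: snapshots of a field are stacked as columns, an SVD
# yields the POD modes, and the field is reconstructed from a few modes.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)                 # pseudo-time samples
x = np.linspace(0.0, 1.0, 64)                  # pseudo-surface coordinate
# Two coherent structures plus weak noise stand in for the pressure data:
snapshots = (np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * t))
             + 0.3 * np.outer(np.cos(4 * np.pi * x), np.cos(6 * np.pi * t))
             + 0.01 * rng.standard_normal((64, 200)))

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 2                                           # retain two POD modes
approx = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
rel_err = np.linalg.norm(snapshots - approx) / np.linalg.norm(snapshots)
print(rel_err < 0.05)   # the low-rank model captures the coherent response
```

The paper's convolution step then treats the reduced coefficients as the impulse-response data to be convolved with an arbitrary gust profile.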
Towards Relaxing the Spherical Solar Radiation Pressure Model for Accurate Orbit Predictions
NASA Astrophysics Data System (ADS)
Lachut, M.; Bennett, J.
2016-09-01
The well-known cannonball model has been used ubiquitously to capture the effects of atmospheric drag and solar radiation pressure on satellites and/or space debris for decades. While it lends itself naturally to spherical objects, its validity in the case of non-spherical objects has been debated heavily for years throughout the space situational awareness community. One of the leading motivations to improve orbit predictions by relaxing the spherical assumption is the ongoing demand for more robust and reliable conjunction assessments. In this study, we explore the orbit propagation of a flat plate in a near-GEO orbit under the influence of solar radiation pressure, using a Lambertian BRDF model. Consequently, this approach will account for the spin rate and orientation of the object, which is typically determined in practice using a light curve analysis. Here, simulations will be performed which systematically reduce the spin rate to demonstrate the point at which the spherical model no longer describes the orbital elements of the spinning plate. Further understanding of this threshold would provide insight into when a higher fidelity model should be used, thus resulting in improved orbit propagations. Therefore, the work presented here is of particular interest to organizations and researchers that maintain their own catalog, and/or perform conjunction analyses.
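For reference, the cannonball model being relaxed above reduces to a single algebraic expression for the SRP acceleration magnitude; the object parameters below are illustrative, not from the study:

```python
def srp_acceleration(cr, area_m2, mass_kg, r_sun_m):
    """Cannonball solar radiation pressure acceleration magnitude (m/s^2):
    a = Cr * (A/m) * P0 * (AU/r)^2, with P0 the radiation pressure at 1 AU."""
    P0 = 4.56e-6            # N/m^2, solar radiation pressure at 1 AU
    AU = 1.495978707e11     # m
    return cr * (area_m2 / mass_kg) * P0 * (AU / r_sun_m) ** 2

# Hypothetical GEO-class object: Cr = 1.3, A/m = 0.02 m^2/kg, at 1 AU:
a = srp_acceleration(1.3, 2.0, 100.0, 1.495978707e11)
print(a)  # ~1.2e-7 m/s^2
```

A spinning flat plate replaces the fixed Cr*A/m factor with an attitude-dependent force from the BRDF, which is why slow spin rates break the spherically averaged model.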
ERIC Educational Resources Information Center
Vladescu, Jason C.; Carroll, Regina; Paden, Amber; Kodak, Tiffany M.
2012-01-01
The present study replicates and extends previous research on the use of video modeling (VM) with voiceover instruction to train staff to implement discrete-trial instruction (DTI). After staff trainees reached the mastery criterion when teaching an adult confederate with VM, they taught a child with a developmental disability using DTI. The…
Pantuzzo, Fernando L; Silva, Julio César J; Ciminelli, Virginia S T
2009-09-15
A fast and accurate microwave-assisted digestion method for arsenic determination by flame atomic absorption spectrometry (FAAS) in typical, complex residues from gold mining is presented. Three digestion methods were evaluated: an open-vessel digestion using a mixture of HCl:HNO3:HF acids (Method A) and two microwave digestion methods using a mixture of HCl:H2O2:HNO3 in high-pressure (Method B) and medium-pressure (Method C) vessels. The matrix effect was also investigated. Arsenic concentrations from external and standard-addition calibration curves (at a 95% confidence level) were statistically equal (p-value = 0.122) using microwave digestion in the high-pressure vessel. The results from the open-vessel digestion were statistically different (p-value = 0.007), whereas in the microwave digestion in the medium-pressure vessel (Method C) the dissolution of the samples was incomplete.
Complex Constructivism: A Theoretical Model of Complexity and Cognition
ERIC Educational Resources Information Center
Doolittle, Peter E.
2014-01-01
Education has long been driven by its metaphors for teaching and learning. These metaphors have influenced both educational research and educational practice. Complexity and constructivism are two theories that provide functional and robust metaphors. Complexity provides a metaphor for the structure of myriad phenomena, while constructivism…
Accurate 2D/3D electromagnetic modeling for time-domain airborne EM systems
NASA Astrophysics Data System (ADS)
Yin, C.; Hodges, G.
2012-12-01
The existing industry software cannot deliver correct results for 3D time-domain airborne EM responses. In this paper, starting from the Fourier transform and convolution, we compare the stability of different modeling techniques and analyze the reason for unstable calculations of the time-domain airborne EM responses. We find that the singularity of the impulse responses of EM systems at very early time, which are used in the convolution, is responsible for the instability of the modeling (Fig. 1). Based on this finding, we put forward an algorithm that uses the step response rather than the impulse response of the airborne EM system for the convolution, creating a stable algorithm that delivers precise results and preserves the integral/derivative relationship between the magnetic field B and the magnetic induction dB/dt. A three-step transformation procedure for the modeling is proposed: 1) output the frequency-domain EM response data from the existing software; 2) transform into the step response by digital Fourier/Hankel transform; 3) convolve the step response with the transmitting current or its derivatives. The method has proved to work very well (Fig. 2). The algorithm can be extended to the modeling of other time-domain ground and airborne EM system responses.
Fig. 1: Comparison of impulse and step responses for an airborne EM system. Fig. 2: Bz and dBz/dt calculated from step (middle panel) and impulse responses (lower panel) for the same 3D model as in Fig. 1.
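The step-response convolution idea (Duhamel's principle) can be illustrated with a first-order system standing in for a real airborne EM model; the smooth step response avoids the early-time singularity of the impulse response:

```python
import numpy as np

# Instead of convolving a singular impulse response with the transmitter
# current, convolve the smooth step response with the current's time
# derivative (Duhamel's principle). A first-order system with time constant
# tau stands in for the EM system; all numbers here are illustrative.
dt = 1e-4
t = np.arange(0.0, 0.05, dt)
tau = 5e-3
h_step = 1.0 - np.exp(-t / tau)             # step response: no singularity

current = np.where(t >= 0.01, 1.0, 0.0)     # transmitter current turn-on
dI = np.gradient(current, dt)               # derivative of the current

b = np.convolve(dI, h_step)[: t.size] * dt  # B(t) via Duhamel convolution

# The response to a unit current step reproduces the step response,
# delayed by the turn-on time:
print(round(b[-1], 3))  # 1.0
```

The same convolution applied to the impulse response would require resolving its singular early-time behavior numerically, which is the instability the paper identifies.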
Sapsis, Themistoklis P; Majda, Andrew J
2013-08-20
A framework for low-order predictive statistical modeling and uncertainty quantification in turbulent dynamical systems is developed here. These reduced-order, modified quasilinear Gaussian (ROMQG) algorithms apply to turbulent dynamical systems in which there is significant linear instability or linear nonnormal dynamics in the unperturbed system and energy-conserving nonlinear interactions that transfer energy from the unstable modes to the stable modes where dissipation occurs, resulting in a statistical steady state; such turbulent dynamical systems are ubiquitous in geophysical and engineering turbulence. The ROMQG method involves constructing a low-order, nonlinear, dynamical system for the mean and covariance statistics in the reduced subspace that has the unperturbed statistics as a stable fixed point and optimally incorporates the indirect effect of non-Gaussian third-order statistics for the unperturbed system in a systematic calibration stage. This calibration procedure is achieved through information involving only the mean and covariance statistics for the unperturbed equilibrium. The performance of the ROMQG algorithm is assessed on two stringent test cases: the 40-mode Lorenz 96 model mimicking midlatitude atmospheric turbulence and two-layer baroclinic models for high-latitude ocean turbulence with over 125,000 degrees of freedom. In the Lorenz 96 model, the ROMQG algorithm with just a single mode captures the transient response to random or deterministic forcing. For the baroclinic ocean turbulence models, the inexpensive ROMQG algorithm with 252 modes, less than 0.2% of the total, captures the nonlinear response of the energy, the heat flux, and even the one-dimensional energy and heat flux spectra.
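The 40-mode Lorenz 96 system used as the first test case above is straightforward to integrate; a minimal RK4 sketch with the standard forcing F = 8 (integration settings are illustrative):

```python
import numpy as np

# Lorenz 96: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F, with cyclic
# indices; F = 8 gives the standard chaotic regime used as a turbulence proxy.
def l96_rhs(x, F=8.0):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt, F=8.0):
    k1 = l96_rhs(x, F)
    k2 = l96_rhs(x + 0.5 * dt * k1, F)
    k3 = l96_rhs(x + 0.5 * dt * k2, F)
    k4 = l96_rhs(x + dt * k3, F)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

x = 8.0 * np.ones(40)
x[0] += 0.01                  # small perturbation to trigger chaos
for _ in range(2000):         # integrate 10 time units at dt = 0.005
    x = rk4_step(x, 0.005)
print(x.shape, np.all(np.isfinite(x)))
```

The ROMQG machinery itself evolves mean and covariance statistics of such a system in a reduced subspace; this sketch only provides the underlying dynamics.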
Yu, Victoria Y.; Tran, Angelia; Nguyen, Dan; Cao, Minsong; Ruan, Dan; Low, Daniel A.; Sheng, Ke
2015-11-15
Purpose: Significant dosimetric benefits had been previously demonstrated in highly noncoplanar treatment plans. In this study, the authors developed and verified an individualized collision model for the purpose of delivering highly noncoplanar radiotherapy and tested the feasibility of total delivery automation with Varian TrueBeam developer mode. Methods: A hand-held 3D scanner was used to capture the surfaces of an anthropomorphic phantom and a human subject, which were positioned with a computer-aided design model of a TrueBeam machine to create a detailed virtual geometrical collision model. The collision model included gantry, collimator, and couch motion degrees of freedom. The accuracy of the 3D scanner was validated by scanning a rigid cubical phantom with known dimensions. The collision model was then validated by generating 300 linear accelerator orientations corresponding to 300 gantry-to-couch and gantry-to-phantom distances, and comparing the corresponding distance measurements to their corresponding models. The linear accelerator orientations reflected uniformly sampled noncoplanar beam angles to the head, lung, and prostate. The distance discrepancies between measurements on the physical and virtual systems were used to estimate treatment-site-specific safety buffer distances with 0.1%, 0.01%, and 0.001% probability of collision between the gantry and couch or phantom. Plans containing 20 noncoplanar beams to the brain, lung, and prostate optimized via an in-house noncoplanar radiotherapy platform were converted into XML script for automated delivery and the entire delivery was recorded and timed to demonstrate the feasibility of automated delivery. Results: The 3D scanner measured the dimension of the 14 cm cubic phantom within 0.5 mm. The maximal absolute discrepancy between machine and model measurements for gantry-to-couch and gantry-to-phantom was 0.95 and 2.97 cm, respectively. The reduced accuracy of gantry-to-phantom measurements was
Magnetic modeling of the Bushveld Igneous Complex
NASA Astrophysics Data System (ADS)
Webb, S. J.; Cole, J.; Letts, S. A.; Finn, C.; Torsvik, T. H.; Lee, M. D.
2009-12-01
Magnetic modeling of the 2.06 Ga Bushveld Complex presents special challenges due to a variety of magnetic effects. These include strong remanence in the Main Zone and extremely high magnetic susceptibilities in the Upper Zone, which exhibit self-demagnetization. Recent palaeomagnetic results have resolved a long-standing discrepancy between age data, which constrain the emplacement to within 1 million years, and older palaeomagnetic data, which suggested ~50 million years for emplacement. The new palaeomagnetic results agree with the age data and present a single consistent pole, as opposed to a long polar wander path, for the Bushveld for all of the Zones and all of the limbs. These results also pass a fold test, indicating the Bushveld Complex was emplaced horizontally, lending support to arguments for connectivity. The magnetic signature of the Bushveld Complex provides an ideal mapping tool, as the UZ has high susceptibility values and is well layered, showing up as distinct anomalies on new high-resolution magnetic data. However, this signature is similar to that of the highly magnetic BIFs found in the Transvaal and Witwatersrand Supergroups. Through careful mapping using new high-resolution aeromagnetic data, we have been able to map the Bushveld UZ in complicated geological regions and identify a characteristic signature with well-defined layers. The Main Zone, which has a more subdued magnetic signature, does have a strong remanent component and exhibits several magnetic reversals. The magnetic layers of the UZ contain layers of magnetitite with as much as 80-90% pure magnetite and large crystals (1-2 cm). While these layers are not strongly remanent, they have extremely high magnetic susceptibilities, and the self-demagnetization effect must be taken into account when modeling them. Because the Bushveld Complex is so large, the geometry of the Earth’s magnetic field relative to the layers of the UZ Bushveld Complex changes orientation, creating
Faster and more accurate graphical model identification of tandem mass spectra using trellises
Wang, Shengjie; Halloran, John T.; Bilmes, Jeff A.; Noble, William S.
2016-01-01
Tandem mass spectrometry (MS/MS) is the dominant high throughput technology for identifying and quantifying proteins in complex biological samples. Analysis of the tens of thousands of fragmentation spectra produced by an MS/MS experiment begins by assigning to each observed spectrum the peptide that is hypothesized to be responsible for generating the spectrum. This assignment is typically done by searching each spectrum against a database of peptides. To our knowledge, all existing MS/MS search engines compute scores individually between a given observed spectrum and each possible candidate peptide from the database. In this work, we use a trellis, a data structure capable of jointly representing a large set of candidate peptides, to avoid redundantly recomputing common sub-computations among different candidates. We show how trellises may be used to significantly speed up existing scoring algorithms, and we theoretically quantify the expected speedup afforded by trellises. Furthermore, we demonstrate that compact trellis representations of whole sets of peptides enable efficient discriminative learning of a dynamic Bayesian network for spectrum identification, leading to greatly improved spectrum identification accuracy. Contact: bilmes@uw.edu or william-noble@uw.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307634
Leng, Wei; Ju, Lili; Gunzburger, Max; Price, Stephen; Ringler, Todd
2012-01-04
The numerical modeling of glacier and ice sheet evolution is a subject of growing interest, in part because of the potential for models to inform estimates of global sea level change. This paper focuses on the development of a numerical model that determines the velocity and pressure fields within an ice sheet. Our numerical model features a high-fidelity mathematical model involving the nonlinear Stokes system and combinations of no-sliding and sliding basal boundary conditions, high-order accurate finite element discretizations based on variable resolution grids, and highly scalable parallel solution strategies, all of which contribute to a numerical model that can achieve accurate velocity and pressure approximations in a highly efficient manner. We demonstrate the accuracy and efficiency of our model by analytical solution tests, established ice sheet benchmark experiments, and comparisons with other well-established ice sheet models.
Accurate dynamic power estimation for CMOS combinational logic circuits with real gate delay model.
Fadl, Omnia S; Abu-Elyazeed, Mohamed F; Abdelhalim, Mohamed B; Amer, Hassanein H; Madian, Ahmed H
2016-01-01
Dynamic power estimation is essential in designing VLSI circuits; many parameters are involved, but the only circuit parameter related to circuit operation is the nodes' toggle rate. This paper discusses a deterministic and fast method to estimate the dynamic power consumption of CMOS combinational logic circuits using gate-level descriptions, based on the Logic Pictures concept to obtain the circuit nodes' toggle rates. The delay model for the logic gates is the real-delay model. To validate the results, the method is applied to several circuits and compared against exhaustive, as well as Monte Carlo, simulations. The proposed technique was shown to save up to 96% of processing time compared to exhaustive simulation.
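Once per-node toggle rates are known, the power estimate rests on the standard dynamic-power relation; a sketch under one common convention (P_dyn = Σ α_i C_i Vdd² f), with hypothetical node data rather than the paper's benchmark circuits:

```python
# Dynamic power from per-node toggle rates. Convention assumed here:
# alpha_i is node i's toggle rate (toggles per clock cycle), C_i its load
# capacitance; some texts fold a factor of 1/2 into alpha instead.

def dynamic_power(nodes, vdd, freq_hz):
    """nodes: list of (toggle_rate, capacitance_farads) per circuit node."""
    return sum(alpha * c for alpha, c in nodes) * vdd ** 2 * freq_hz

# Hypothetical three-node circuit: (alpha, C) pairs.
nodes = [(0.5, 10e-15), (0.25, 15e-15), (0.1, 20e-15)]
p = dynamic_power(nodes, vdd=1.2, freq_hz=1e9)
print(p)  # watts
```

The hard part, and the paper's contribution, is obtaining accurate toggle rates under a real (non-zero) gate-delay model, where glitches add toggles that a zero-delay analysis misses.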
An accurate two-phase approximate solution to the acute viral infection model
Perelson, Alan S
2009-01-01
During an acute viral infection, virus levels rise, reach a peak and then decline. Data and numerical solutions suggest the growth and decay phases are linear on a log scale. While viral dynamic models are typically nonlinear with analytical solutions difficult to obtain, the exponential nature of the solutions suggests approximations can be found. We derive a two-phase approximate solution to the target cell limited influenza model and illustrate the accuracy using data and previously established parameter values of six patients infected with influenza A. For one patient, the subsequent fall in virus concentration was not consistent with our predictions during the decay phase and an alternate approximation is derived. We find expressions for the rate and length of initial viral growth in terms of the parameters, the extent each parameter is involved in viral peaks, and the single parameter responsible for virus decay. We discuss applications of this analysis in antiviral treatments and investigating host and virus heterogeneities.
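The target-cell-limited model being approximated is a three-state ODE system; a forward-Euler sketch with textbook influenza-like parameter values (illustrative, not the patients' fitted values from the paper):

```python
# Target-cell-limited model: dT/dt = -beta*T*V, dI/dt = beta*T*V - delta*I,
# dV/dt = p*I - c*V. The trajectory shows the two phases the approximation
# captures: near-exponential growth to a peak, then exponential decay.
beta, delta, p, c = 2.7e-5, 4.0, 1.2e-2, 3.0   # illustrative parameters
T, I, V = 4e8, 0.0, 1.0                         # initial conditions
dt = 1e-3                                       # days
trace = []
for _ in range(int(10.0 / dt)):                 # simulate 10 days
    dT = -beta * T * V
    dI = beta * T * V - delta * I
    dV = p * I - c * V
    T, I, V = T + dt * dT, I + dt * dI, V + dt * dV
    trace.append(V)

peak = max(trace)
# Peak is interior: virus rises, peaks as target cells deplete, then decays.
print(0 < trace.index(peak) < len(trace) - 1)
```

On a log scale the rise and fall of `trace` are nearly linear, which is what makes the two-phase closed-form approximation accurate.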
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data.
Pagán, Josué; De Orbe, M Irene; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L; Mora, J Vivancos; Moya, José M; Ayala, José L
2015-06-30
Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are almost useless for prediction and cannot be used to advance the intake of drugs early enough to be effective and neutralize the pain. To solve this problem, this paper sets up a realistic monitoring scenario in which hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities, and the robustness against noise and sensor failures, of several modeling approaches. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
Towards a More Accurate Solar Power Forecast By Improving NWP Model Physics
NASA Astrophysics Data System (ADS)
Köhler, C.; Lee, D.; Steiner, A.; Ritter, B.
2014-12-01
The growing importance and continued expansion of renewable energies raise new challenges for decision makers, transmission system operators, scientists and many others. In this interdisciplinary field, the role of Numerical Weather Prediction (NWP) is to reduce the uncertainties associated with the large share of weather-dependent power sources, so that precise power forecasts, well-timed energy trading on the stock market, and electrical grid stability can be maintained. The research project EWeLiNE is a collaboration of the German Weather Service (DWD), the Fraunhofer Institute (IWES) and three German transmission system operators (TSOs). Together, they aim to improve wind and photovoltaic (PV) power forecasts by combining optimized NWP and enhanced power forecast models. The work conducted focuses on the identification of critical weather situations and the associated errors in the German regional NWP model COSMO-DE. Not only the representation of the model cloud characteristics, but also special events such as Saharan dust over Germany and the solar eclipse in 2015, are treated and their effect on solar power accounted for. An overview of the EWeLiNE project and results of the ongoing research will be presented.
Robust and Accurate Modeling Approaches for Migraine Per-Patient Prediction from Ambulatory Data
Pagán, Josué; Irene De Orbe, M.; Gago, Ana; Sobrado, Mónica; Risco-Martín, José L.; Vivancos Mora, J.; Moya, José M.; Ayala, José L.
2015-01-01
Migraine is one of the most widespread neurological disorders, and its medical treatment represents a high percentage of the costs of health systems. In some patients, characteristic symptoms that precede the headache appear. However, they are nonspecific, and their prediction horizon is unknown and highly variable; hence, these symptoms are of little use for prediction, and they cannot be used to advance the intake of drugs so that they are effective in neutralizing the pain. To address this problem, this paper sets up a realistic monitoring scenario in which hemodynamic variables from real patients are monitored in ambulatory conditions with a wireless body sensor network (WBSN). The acquired data are used to evaluate the predictive capabilities of several modeling approaches and their robustness against noise and sensor failures. The obtained results encourage the development of per-patient models based on state-space models (N4SID) that are capable of providing average forecast windows of 47 min and a low rate of false positives. PMID:26134103
NASA Astrophysics Data System (ADS)
Naumenko, Mikhail; Guzivaty, Vadim; Sapelko, Tatiana
2016-04-01
Lake morphometry refers to the physical factors (shape, size, structure, etc.) that determine a lake depression. Morphometry has a great influence on lake ecological characteristics, especially on water thermal conditions and mixing depth. Depth analyses, including sediment measurement at various depths, volumes of strata and shoreline characteristics, are often critical to the investigation of the biological, chemical and physical properties of fresh waters, as well as to theoretical retention time. Management techniques, such as loading capacity for effluents and selective removal of undesirable components of the biota, also depend on detailed knowledge of morphometry and flow characteristics. In recent years, lake bathymetric surveys were carried out using an echo sounder with high bottom-depth resolution and GPS coordinate determination. Digital bathymetric models with a 10 m x 10 m spatial grid have been created for some small lakes of the Russian Plain whose areas do not exceed 1-2 sq. km. The statistical characteristics of the depth and slope distributions of these lakes were calculated on an equidistant grid. This will provide the level-surface-volume variations of small lakes and reservoirs, calculated through a combination of various satellite images. We discuss the methodological aspects of creating morphometric models of the depths and slopes of small lakes, as well as the advantages of digital models over traditional methods.
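The level-surface-volume relation mentioned above can be derived directly from a digital depth grid: for each trial water level, count the wet cells for area and sum the remaining water columns for volume. The sketch below uses a hypothetical synthetic cone-shaped basin, not one of the surveyed lakes.

```python
import numpy as np

def level_area_volume(depth, cell_area, drawdown):
    """Surface area and volume of a lake after the water level drops by
    `drawdown` metres, from a gridded depth model (positive depths in m)."""
    wet = np.clip(depth - drawdown, 0.0, None)   # remaining water column
    area = np.count_nonzero(wet > 0) * cell_area
    volume = wet.sum() * cell_area
    return area, volume

# Hypothetical cone-shaped basin on a 10 m x 10 m grid, 8 m deep at centre
y, x = np.mgrid[0:50, 0:50]
r = np.hypot(x - 25, y - 25)
depth = np.maximum(8.0 - 0.4 * r, 0.0)

a0, v0 = level_area_volume(depth, 100.0, 0.0)   # full lake
a4, v4 = level_area_volume(depth, 100.0, 4.0)   # level lowered by 4 m
```

Repeating the calculation over a range of levels yields the hypsographic (level-area-volume) curves discussed in the abstract.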
Accurate 3d Textured Models of Vessels for the Improvement of the Educational Tools of a Museum
NASA Astrophysics Data System (ADS)
Soile, S.; Adam, K.; Ioannidis, C.; Georgopoulos, A.
2013-02-01
Besides the demonstration of the findings, modern museums organize educational programs which aim at experience and knowledge sharing combined with entertainment rather than pure learning. Toward that effort, 2D and 3D digital representations are gradually replacing the traditional recording of the findings through photos or drawings. The present paper refers to a project that aims to create 3D textured models of two lekythoi that are exhibited in the National Archaeological Museum of Athens in Greece; on the surfaces of these lekythoi, scenes of the adventures of Odysseus are depicted. The project is expected to support the production of an educational movie and other relevant interactive educational programs for the museum. The creation of accurate developments of the paintings and of accurate 3D models is the basis for the visualization of the adventures of the mythical hero. The data collection was made using a structured-light scanner consisting of two machine-vision cameras for the determination of the geometry of the object, a high-resolution camera for the recording of the texture, and a DLP projector. The creation of the final accurate 3D textured model is a complicated and laborious procedure which includes the collection of geometric data, the creation of the surface, noise filtering, the merging of individual surfaces, the creation of a c-mesh, the creation of the UV map, the application of the texture and, finally, the general processing of the 3D textured object. For a better result, a combination of commercial and in-house software, made for the automation of various steps of the procedure, was used. The results derived from the above procedure were especially satisfactory in terms of the accuracy and quality of the model. However, the procedure proved to be time-consuming, while the use of various software packages presumes the services of a specialist.
Accurate, full chip 3D electromagnetic field model for non-Manhattan mask corners
NASA Astrophysics Data System (ADS)
Lam, Michael; Clifford, Chris; Oliver, Michael; Fryer, David; Tejnil, Edita; Adam, Kostas
2015-03-01
The physical process of mask manufacturing produces absorber geometry with significantly less than 90 degree fidelity at corners. The non-Manhattan mask geometry is an essential contributor to the aerial image and resulting patterning performance through focus. Current state of the art models for corner rounding employ "chopping" a 90 degree mask corner, replacing the corner with a small 45 degree edge. In this paper, a methodology is presented to approximate the impact of 3D EMF effects introduced by corners with rounded edges. The approach is integrated into a full chip 3D mask simulation methodology based on the Domain Decomposition Method (DDM) with edge to edge crosstalk correction.
2010-01-01
...sea surface temperature during ENSO events? By A. Birol Kara, Harley E. Hurlburt, Charlie N. Barron, Alan J. Wallcraft and E. Joseph Metzger, Naval Research Laboratory. Cited: Quantifying SST errors from an OGCM in relation to atmospheric forcing variables. Ocean Modell. 29, 43-57. Large, W. G., McWilliams, J. C. and Doney, S. C.
Generation of Accurate Lateral Boundary Conditions for a Surface-Water Groundwater Interaction Model
NASA Astrophysics Data System (ADS)
Khambhammettu, P.; Tsou, M.; Panday, S. M.; Kool, J.; Wei, X.
2010-12-01
The 106-mile-long Peace River in Florida flows south from Lakeland to Charlotte Harbor and has a drainage basin of approximately 2,350 square miles. A long-term decline in stream flows and groundwater potentiometric levels has been observed in the region. Long-term trends in rainfall, along with the effects of land use changes on runoff, surface-water storage, recharge and evapotranspiration patterns, and increased groundwater and surface-water withdrawals, have contributed to this decline. The Southwest Florida Water Management District (SWFWMD) has funded the development of the Peace River Integrated Model (PRIM) to assess the effects of land use, water use, and climatic changes on stream flows and to evaluate the effectiveness of various management alternatives for restoring stream flows. The PRIM was developed using MODHMS, a fully integrated surface-water groundwater flow and transport simulator developed by HydroGeoLogic, Inc. The development of the lateral boundary conditions (groundwater inflow and outflow) for the PRIM in both historical and predictive contexts is discussed in this presentation. Monthly-varying specified heads were used to define the lateral boundary conditions for the PRIM. These head values were derived from the coarser Southern District Groundwater Model (SDM). However, there were discrepancies between the simulated SDM heads and measured heads, the likely causes being spatial (use of a coarser grid) and temporal (monthly average pumping and recharge rates) approximations in the regional SDM. Finer re-calibration of the SDM was not feasible; therefore, an innovative approach was adopted to remove the discrepancies. In this approach, point discrepancies/residuals between the observed and simulated heads were kriged with an appropriate variogram to generate a residual surface. This surface was then added to the simulated head surface of the SDM to generate a corrected head surface. This approach preserves the trends associated with
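The residual-correction step described above can be sketched with a minimal ordinary-kriging routine: residuals at observation wells are interpolated onto the model grid and added back to the simulated heads. The exponential variogram and its parameters here are hypothetical placeholders for whatever variogram was actually fitted to the SDM residuals.

```python
import numpy as np

def exp_variogram(h, sill=1.0, rng=2000.0):
    """Exponential semivariogram (nugget-free; parameters hypothetical)."""
    return sill * (1.0 - np.exp(-3.0 * h / rng))

def krige_residuals(obs_xy, residuals, grid_xy, sill=1.0, rng=2000.0):
    """Ordinary kriging of head residuals onto arbitrary target points."""
    n = len(obs_xy)
    d = np.linalg.norm(obs_xy[:, None, :] - obs_xy[None, :, :], axis=2)
    # Ordinary-kriging system with a Lagrange-multiplier row/column.
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = exp_variogram(d, sill, rng)
    K[n, n] = 0.0
    out = np.empty(len(grid_xy))
    for i, p in enumerate(grid_xy):
        h = np.linalg.norm(obs_xy - p, axis=1)
        rhs = np.append(exp_variogram(h, sill, rng), 1.0)
        w = np.linalg.solve(K, rhs)
        out[i] = w[:n] @ residuals   # weighted residual estimate
    return out
```

The corrected surface is then `simulated_heads + krige_residuals(...)`. Because the variogram has no nugget, the kriged surface reproduces the residuals exactly at the observation wells, which is what makes the corrected heads honor the measured heads.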
Structured analysis and modeling of complex systems
NASA Technical Reports Server (NTRS)
Strome, David R.; Dalrymple, Mathieu A.
1992-01-01
The Aircrew Evaluation Sustained Operations Performance (AESOP) facility at Brooks AFB, Texas, combines the realism of an operational environment with the control of a research laboratory. In recent studies we collected extensive data from the Airborne Warning and Control Systems (AWACS) Weapons Directors subjected to high and low workload Defensive Counter Air Scenarios. A critical and complex task in this environment involves committing a friendly fighter against a hostile fighter. Structured Analysis and Design techniques and computer modeling systems were applied to this task as tools for analyzing subject performance and workload. This technology is being transferred to the Man-Systems Division of NASA Johnson Space Center for application to complex mission related tasks, such as manipulating the Shuttle grappler arm.
Project trades model for complex space missions
NASA Technical Reports Server (NTRS)
Girerd, Andre R.; Shishko, Roberto
2003-01-01
A Project Trades Model (PTM) is a collection of tools/simulations linked together to rapidly perform integrated system trade studies of performance, cost, risk, and mission effectiveness. An operating PTM captures the interactions between various targeted systems and subsystems through an exchange of computed variables of the constituent models. Selection and implementation of the order, method of interaction, model type, and envisioned operation of the ensemble of tools represents the key system engineering challenge of the approach. This paper describes an approach to building a PTM and using it to perform top-level system trades for a complex space mission. In particular, the PTM discussed here is for a future Mars mission involving a large rover.
Considering mask pellicle effect for more accurate OPC model at 45nm technology node
NASA Astrophysics Data System (ADS)
Wang, Ching-Heng; Liu, Qingwei; Zhang, Liguo
2008-11-01
At the 45nm technology node, the first generation to use immersion microlithography, many optical effects that could be ignored at the 90nm and 65nm nodes now have a significant impact on the pattern-transfer process from design to silicon. Among these effects, one that requires attention is the impact of the mask pellicle on critical dimension variation. With hyper-NA lithography tools, the approximation that light transmits the mask pellicle vertically no longer holds, and the image blurring induced by the mask pellicle should be taken into account in computational microlithography. In this work, we investigate how the mask pellicle impacts the accuracy of the OPC model, and we show that, given the extremely tight critical dimension control spec of the 45nm node, including the mask pellicle effect in the OPC model has become necessary.
Accurate early-time and late-time modeling of countercurrent spontaneous imbibition
NASA Astrophysics Data System (ADS)
March, Rafael; Doster, Florian; Geiger, Sebastian
2016-08-01
Spontaneous countercurrent imbibition into a finite porous medium is an important physical mechanism for many applications, including but not limited to irrigation, CO2 storage, and oil recovery. Symmetry considerations that are often valid in fractured porous media allow us to study the process in a one-dimensional domain. In 1-D, for incompressible fluids and homogeneous rocks, the onset of imbibition can be captured by self-similar solutions and the imbibed volume scales with √t. At later times, the imbibition rate decreases and the finite size of the medium has to be taken into account, which requires numerical solutions. Here we present a new approach to approximate the whole imbibition process semianalytically. The onset is captured by a semianalytical solution. We also provide an a priori estimate of the time until which the imbibed volume scales with √t. This time is significantly longer than the time it takes until the imbibition front reaches the model boundary. The remainder of the imbibition process is obtained from a self-similarity solution. We test our approach against numerical solutions that employ parametrizations relevant for oil recovery and CO2 sequestration. We show that this concept improves common first-order approaches that heavily underestimate early-time behavior and note that it can be readily included into dual-porosity models.
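A toy version of the early/late-time split can be written as a piecewise function: √t scaling up to a transition time, then an exponential approach to the finite-medium plateau. This illustrates only the idea; the paper's semianalytical early-time solution and its a priori transition-time estimate are not reproduced here, and all parameter values are hypothetical.

```python
import numpy as np

def imbibed_volume(t, C, q_inf, t_star, lam):
    """Piecewise early/late-time approximation (illustrative):
    Q(t) = C*sqrt(t) for t <= t_star, then an exponential relaxation
    toward the finite-medium plateau q_inf, continuous at t_star."""
    t = np.asarray(t, dtype=float)
    early = C * np.sqrt(np.clip(t, 0.0, None))
    q_star = C * np.sqrt(t_star)            # imbibed volume at transition
    late = q_inf - (q_inf - q_star) * np.exp(-lam * (t - t_star))
    return np.where(t <= t_star, early, late)
```

Dimensionally, `C` plays the role of the early-time imbibition coefficient, and `q_inf` is set by the pore volume of the finite medium.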
Accurate modeling of SiPM detectors coupled to FE electronics for timing performance analysis
NASA Astrophysics Data System (ADS)
Ciciriello, F.; Corsi, F.; Licciulli, F.; Marzocca, C.; Matarrese, G.; Del Guerra, A.; Bisogni, M. G.
2013-08-01
It has already been shown that the shape of the current pulse produced by a SiPM in response to an incident photon is sensibly affected by the characteristics of the front-end electronics (FEE) used to read out the detector. When the application requires approaching the best theoretical time performance of the detection system, the influence of all the parasitics associated with the SiPM-FEE coupling can play a relevant role and must be adequately modeled. In particular, it has been reported that the shape of the current pulse is affected by the parasitic inductance of the wiring connection between the SiPM and the FEE. In this contribution, we extend the validity of a previously presented SiPM model to account for the wiring inductance. Various combinations of the main performance parameters of the FEE (input resistance and bandwidth) have been simulated in order to evaluate their influence on the time accuracy of the detection system, when the time pick-off of each single event is extracted by means of a leading-edge discriminator (LED) technique.
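The LED time pick-off mentioned above amounts to taking the time of the first upward threshold crossing of the pulse, refined by linear interpolation between samples. The pulse shape below is a generic fast-rise/slow-tail stand-in, not the paper's SiPM-FEE circuit model.

```python
import numpy as np

def led_time_pickoff(t, v, threshold):
    """Leading-edge discriminator: time of the first upward crossing of
    `threshold`, refined by linear interpolation between samples."""
    above = v >= threshold
    idx = np.argmax(above)              # index of first sample above threshold
    if not above.any() or idx == 0:
        return None                     # no valid upward crossing
    t0, t1 = t[idx - 1], t[idx]
    v0, v1 = v[idx - 1], v[idx]
    return t0 + (threshold - v0) * (t1 - t0) / (v1 - v0)

# Hypothetical SiPM-like pulse: ~1 ns rise, ~30 ns exponential tail
t = np.linspace(0.0, 100.0, 2001)       # ns
v = (1 - np.exp(-t / 1.0)) * np.exp(-t / 30.0)
t_cross = led_time_pickoff(t, v, threshold=0.2)
```

In practice the extracted time walks with pulse amplitude, which is why the simulated FEE bandwidth and input resistance matter for the timing accuracy studied in the paper.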
Accurate, full-chip, three-dimensional electromagnetic field model for non-Manhattan mask corners
NASA Astrophysics Data System (ADS)
Lam, Michael C.; Clifford, Chris; Oliver, Mike; Fryer, David; Tejnil, Edita; Adam, Kostas
2016-04-01
The physical process of mask manufacturing produces absorber geometry with significant deviations from the 90-deg corners, which are typically assumed in the mask design. The non-Manhattan mask geometry is an essential contributor to the aerial image and resulting patterning performance through focus. Current state-of-the-art models for corner rounding employ "chopping" a 90-deg mask corner, replacing the corner with a small 45-deg edge. A methodology is presented to approximate the impact of three-dimensional (3-D) EMF effects introduced by corners with rounded edges. The approach is integrated into a full-chip 3-D mask simulation methodology based on the domain decomposition method with edge to edge crosstalk correction.
Secular Orbit Evolution in Systems with a Strong External Perturber - A Simple and Accurate Model
NASA Astrophysics Data System (ADS)
Andrade-Ines, Eduardo; Eggl, Siegfried
2017-04-01
We present a semi-analytical correction to the seminal solution, derived by Heppenheimer, for the secular motion of a planet's orbit under the gravitational influence of an external perturber. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system's parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer's solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters, and the limits of their applicability are given.
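For context, the classical first-order solution being corrected gives the forced eccentricity and secular frequency in their standard literature forms; the paper's polynomial correction factors themselves are not reproduced here, and the example parameters are hypothetical round numbers.

```python
def heppenheimer(a1, a2, e2, m0, m2):
    """First-order (Heppenheimer-type) secular solution for a massless
    planet orbiting m0 with an external coplanar perturber m2:
        e_forced = (5/4) * (a1/a2) * e2 / (1 - e2**2)
        g / n1   = (3/4) * (m2/m0) * (a1/a2)**3 * (1 - e2**2)**(-3/2)
    Returns (forced eccentricity, secular frequency in units of the
    planet's mean motion n1)."""
    alpha = a1 / a2
    e_forced = 1.25 * alpha * e2 / (1.0 - e2 ** 2)
    g_over_n1 = 0.75 * (m2 / m0) * alpha ** 3 * (1.0 - e2 ** 2) ** -1.5
    return e_forced, g_over_n1

# Hypothetical tight-binary-like configuration
eF, g = heppenheimer(a1=2.0, a2=20.0, e2=0.4, m0=1.0, m2=0.4)
```

The paper's corrective factors multiply exactly these first-order estimates to recover the behavior seen in numerical integrations.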
A microbial clock provides an accurate estimate of the postmortem interval in a mouse model system
Metcalf, Jessica L; Wegener Parfrey, Laura; Gonzalez, Antonio; Lauber, Christian L; Knights, Dan; Ackermann, Gail; Humphrey, Gregory C; Gebert, Matthew J; Van Treuren, Will; Berg-Lyons, Donna; Keepers, Kyle; Guo, Yan; Bullard, James; Fierer, Noah; Carter, David O; Knight, Rob
2013-01-01
Establishing the time since death is critical in every death investigation, yet existing techniques are susceptible to a range of errors and biases. For example, forensic entomology is widely used to assess the postmortem interval (PMI), but errors can range from days to months. Microbes may provide a novel method for estimating PMI that avoids many of these limitations. Here we show that postmortem microbial community changes are dramatic, measurable, and repeatable in a mouse model system, allowing PMI to be estimated within approximately 3 days over 48 days. Our results provide a detailed understanding of bacterial and microbial eukaryotic ecology within a decomposing corpse system and suggest that microbial community data can be developed into a forensic tool for estimating PMI. DOI: http://dx.doi.org/10.7554/eLife.01104.001 PMID:24137541
Biomechanical modeling provides more accurate data for neuronavigation than rigid registration
Garlapati, Revanth Reddy; Roy, Aditi; Joldes, Grand Roman; Wittek, Adam; Mostayed, Ahmed; Doyle, Barry; Warfield, Simon Keith; Kikinis, Ron; Knuckey, Neville; Bunt, Stuart; Miller, Karol
2015-01-01
It is possible to improve neuronavigation during image-guided surgery by warping the high-quality preoperative brain images so that they correspond with the current intraoperative configuration of the brain. In this work, the accuracy of registration results obtained using comprehensive biomechanical models is compared to the accuracy of rigid registration, the technology currently available to patients. This comparison allows us to investigate whether biomechanical modeling provides good-quality image data for neuronavigation for a larger proportion of patients than rigid registration. Preoperative images for 33 cases of neurosurgery were warped onto their respective intraoperative configurations using both the biomechanics-based method and rigid registration. We used a Hausdorff distance-based evaluation process that measures the difference between images to quantify the performance of both methods of registration. A statistical test for difference in proportions was conducted to evaluate the null hypothesis that the proportion of patients for whom improved neuronavigation can be achieved is the same for rigid and biomechanics-based registration. The null hypothesis was confidently rejected (p-value < 10^-4). Even the modified hypothesis that less than 25% of patients would benefit from the use of biomechanics-based registration was rejected at a significance level of 5% (p-value = 0.02). The biomechanics-based method proved particularly effective for cases experiencing large craniotomy-induced brain deformations. The outcome of this analysis suggests that our nonlinear biomechanics-based methods are beneficial to a large proportion of patients and can be considered for use in the operating theatre as one possible method of improving neuronavigation and surgical outcomes. PMID:24460486
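The statistical comparison above is a test for a difference in proportions. A minimal one-sided two-proportion z-test looks like the following; the success counts shown are hypothetical, since the paper's raw counts are not reproduced here.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """One-sided two-proportion z-test (H1: p_a > p_b) using the pooled
    proportion; returns (z, p_value) via the normal approximation."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Upper normal tail probability via the complementary error function
    p_value = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p_value

# Hypothetical counts for 33 cases each: biomechanics better in 30, rigid in 12
z, p = two_proportion_z(30, 33, 12, 33)
```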
Windhoff, Mirko; Opitz, Alexander; Thielscher, Axel
2013-04-01
The need for realistic electric field calculations in human noninvasive brain stimulation is undisputed, as they allow the affected brain areas to be determined more accurately. However, using numerical techniques such as the finite element method (FEM) is methodologically complex, ranging from the creation of accurate head models to the integration of the models in the numerical calculations. These problems have substantially limited a more widespread application of numerical methods in brain stimulation up to now. We introduce an optimized processing pipeline allowing for the automatic generation of individualized high-quality head models from magnetic resonance images and their usage in subsequent field calculations based on the FEM. The pipeline starts by extracting the borders between skin, skull, cerebrospinal fluid, gray and white matter. The quality of the resulting surfaces is subsequently improved, allowing for the creation of tetrahedral volume head meshes that can finally be used in the numerical calculations. The pipeline integrates and extends established (and mainly free) software for neuroimaging, computer graphics, and FEM calculations into one easy-to-use solution. We demonstrate the successful usage of the pipeline in six subjects, including field calculations for transcranial magnetic stimulation and transcranial direct current stimulation. The quality of the head volume meshes is validated both in terms of capturing the underlying anatomy and of the well-shapedness of the mesh elements. The latter is crucial to guarantee the numerical robustness of the FEM calculations. The pipeline will be released as open source, allowing, for the first time, realistic field calculations to be performed at an acceptable methodological complexity and moderate cost.
Slodownik, Dan; Grinberg, Igor; Spira, Ram M; Skornik, Yehuda; Goldstein, Ronald S
2009-04-01
The current standard method for predicting contact allergenicity is the murine local lymph node assay (LLNA). Public objection to the use of animals in testing of cosmetics makes the development of a system that does not use sentient animals highly desirable. The chorioallantoic membrane (CAM) of the chick egg has been extensively used for the growth of normal and transformed mammalian tissues. The CAM is not innervated, and embryos are sacrificed before the development of pain perception. The aim of this study was to determine whether the sensitization phase of contact dermatitis to known cosmetic allergens can be quantified using CAM-engrafted human skin and how these results compare with published EC3 data obtained with the LLNA. We studied six common molecules used in allergen testing and quantified migration of epidermal Langerhans cells (LC) as a measure of their allergic potency. All agents with known allergic potential induced statistically significant migration of LC. The data obtained correlated well with published data for these allergens generated using the LLNA test. The human-skin CAM model therefore has great potential as an inexpensive, non-radioactive, in vivo alternative to the LLNA, which does not require the use of sentient animals. In addition, this system has the advantage of testing the allergic response of human, rather than animal skin.
Wang, Shiyao; Deng, Zhidong; Yin, Gang
2016-02-24
A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method has been practically applied in our driverless car.
Wang, Shiyao; Deng, Zhidong; Yin, Gang
2016-01-01
A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to multipath effects but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS-inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method has been practically applied in our driverless car. PMID:26927108
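The predictive screening step can be sketched with a single AR(1) model in place of the paper's bank of ARMA models: measurements that disagree with the one-step prediction beyond a tolerance are flagged as outliers before fusion. The synthetic track, noise levels, and tolerance below are all hypothetical.

```python
import numpy as np

def fit_ar1(series):
    """Least-squares fit of x[k] = a * x[k-1] + b, a minimal stand-in
    for the paper's ARMA models with varying structural parameters."""
    x, y = series[:-1], series[1:]
    A = np.column_stack([x, np.ones_like(x)])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def screen_outliers(series, a, b, tol):
    """Flag measurements that disagree with the one-step AR prediction."""
    pred = a * series[:-1] + b
    ok = np.abs(series[1:] - pred) <= tol
    return np.concatenate([[True], ok])   # first sample has no prediction

# Synthetic 1-D position track with one multipath-like jump (hypothetical)
rng = np.random.default_rng(0)
track = np.cumsum(0.1 + 0.01 * rng.standard_normal(200))
track[100] += 5.0                         # injected outlier
a, b = fit_ar1(track)
mask = screen_outliers(track, a, b, tol=1.0)
```

Samples rejected by the mask would be excluded from the subsequent GPS/IMU/DR fusion.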
NASA Astrophysics Data System (ADS)
Rybynok, V. O.; Kyriacou, P. A.
2007-10-01
Diabetes is one of the biggest health challenges of the 21st century. The obesity epidemic, sedentary lifestyles and an ageing population mean prevalence of the condition is currently doubling every generation. Diabetes is associated with serious chronic ill health, disability and premature mortality. Long-term complications including heart disease, stroke, blindness, kidney disease and amputations, make the greatest contribution to the costs of diabetes care. Many of these long-term effects could be avoided with earlier, more effective monitoring and treatment. Currently, blood glucose can only be monitored through the use of invasive techniques. To date there is no widely accepted and readily available non-invasive monitoring technique to measure blood glucose despite the many attempts. This paper challenges one of the most difficult non-invasive monitoring techniques, that of blood glucose, and proposes a new novel approach that will enable the accurate, and calibration free estimation of glucose concentration in blood. This approach is based on spectroscopic techniques and a new adaptive modelling scheme. The theoretical implementation and the effectiveness of the adaptive modelling scheme for this application has been described and a detailed mathematical evaluation has been employed to prove that such a scheme has the capability of extracting accurately the concentration of glucose from a complex biological media.
On Complexity of the Quantum Ising Model
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Hastings, Matthew
2017-01-01
We study the complexity of several problems related to the Transverse field Ising Model (TIM). First, we consider the problem of estimating the ground state energy, known as the Local Hamiltonian Problem (LHP). It is shown that the LHP for TIM on degree-3 graphs is equivalent, modulo polynomial reductions, to the LHP for general k-local 'stoquastic' Hamiltonians with any constant k ≥ 2. This result implies that estimating the ground state energy of TIM on degree-3 graphs is a complete problem for the complexity class StoqMA, an extension of the classical class MA. As a corollary, we complete the complexity classification of 2-local Hamiltonians with a fixed set of interactions proposed recently by Cubitt and Montanaro. Secondly, we study quantum annealing algorithms for finding ground states of classical spin Hamiltonians associated with hard optimization problems. We prove that quantum annealing with TIM Hamiltonians is equivalent, modulo polynomial reductions, to quantum annealing with a certain subclass of k-local stoquastic Hamiltonians. This subclass includes all Hamiltonians representable as a sum of a k-local diagonal Hamiltonian and a 2-local stoquastic Hamiltonian.
van Wyk, Marnus J; Bingle, Marianne; Meyer, Frans J C
2005-09-01
International bodies such as International Commission on Non-Ionizing Radiation Protection (ICNIRP) and the Institute for Electrical and Electronic Engineering (IEEE) make provision for human exposure assessment based on SAR calculations (or measurements) and basic restrictions. In the case of base station exposure this is mostly applicable to occupational exposure scenarios in the very near field of these antennas where the conservative reference level criteria could be unnecessarily restrictive. This study presents a variety of critical aspects that need to be considered when calculating SAR in a human body close to a mobile phone base station antenna. A hybrid FEM/MoM technique is proposed as a suitable numerical method to obtain accurate results. The verification of the FEM/MoM implementation has been presented in a previous publication; the focus of this study is an investigation into the detail that must be included in a numerical model of the antenna, to accurately represent the real-world scenario. This is accomplished by comparing numerical results to measurements for a generic GSM base station antenna and appropriate, representative canonical and human phantoms. The results show that it is critical to take the disturbance effect of the human phantom (a large conductive body) on the base station antenna into account when the antenna-phantom spacing is less than 300 mm. For these small spacings, the antenna structure must be modeled in detail. The conclusion is that it is feasible to calculate, using the proposed techniques and methodology, accurate occupational compliance zones around base station antennas based on a SAR profile and basic restriction guidelines.
Lattice Boltzmann model for the complex Ginzburg-Landau equation.
Zhang, Jianying; Yan, Guangwu
2010-06-01
A lattice Boltzmann model with a complex distribution function for the complex Ginzburg-Landau equation (CGLE) is proposed. By using the multiscale technique and the Chapman-Enskog expansion on complex variables, we obtain a series of complex partial differential equations. Then, the complex equilibrium distribution function and its complex moments are obtained. Based on this model, the rotation and oscillation properties of stable spiral waves and the breaking-up behavior of unstable spiral waves in the CGLE are investigated in detail.
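As an illustration of the scheme's ingredients (complex-valued distribution functions, a complex relaxation time carrying the dispersive diffusion, and a reaction source term), here is a minimal 1-D D1Q3 sketch for the CGLE; the parameter choices are arbitrary and this is not the authors' multidimensional model:

```python
import numpy as np

# 1-D CGLE, dA/dt = A + (1+ib) d2A/dx2 - (1+ic)|A|^2 A, on a periodic domain
b, c = 0.5, 0.5
nx, dx, dt, steps = 128, 0.5, 0.01, 2000
tau = 0.5 + 3.0 * (1.0 + 1j * b) * dt / dx**2       # complex relaxation time
w = np.array([2 / 3, 1 / 6, 1 / 6])                 # D1Q3 weights: rest, +x, -x

x = dx * np.arange(nx)
A = 0.1 + 1e-3 * np.cos(2 * np.pi * x / (nx * dx))  # near-uniform seed
f = w[:, None] * A[None, :]                         # complex distribution functions

for _ in range(steps):
    A = f.sum(axis=0)                               # macroscopic field
    S = A - (1.0 + 1j * c) * np.abs(A) ** 2 * A     # reaction source term
    feq = w[:, None] * A[None, :]
    f = f - (f - feq) / tau + dt * w[:, None] * S[None, :]
    f[1] = np.roll(f[1], 1)                         # stream +x
    f[2] = np.roll(f[2], -1)                        # stream -x

A = f.sum(axis=0)   # |A| should saturate near 1, the stable wave amplitude
```

Promoting the relaxation time to a complex number is what lets a diffusion-type lattice Boltzmann update recover the complex coefficient (1+ib) of the Laplacian.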
Surface Complexation Modelling in Metal-Mineral-Bacteria Systems
NASA Astrophysics Data System (ADS)
Johnson, K. J.; Fein, J. B.
2002-12-01
The reactive surfaces of bacteria and minerals can determine the fate, transport, and bioavailability of aqueous heavy metal cations. Geochemical models are instrumental in accurately accounting for the partitioning of the metals between mineral surfaces and bacteria cell walls. Previous research has shown that surface complexation modelling (SCM) is accurate in two-component systems (metal:mineral and metal:bacteria); however, the ability of SCMs to account for metal distribution in mixed metal-mineral-bacteria systems has not been tested. In this study, we measure aqueous Cd distributions in water-bacteria-mineral systems, and compare these observations with predicted distributions based on a surface complexation modelling approach. We measured Cd adsorption in 2- and 3-component batch adsorption experiments. In the 2-component experiments, we measured the extent of adsorption of 10 ppm aqueous Cd onto either a bacterial or hydrous ferric oxide sorbent. The metal:bacteria experiments contained 1 g/L (wet wt.) of B. subtilis, and were conducted as a function of pH; the metal:mineral experiments were conducted as a function of both pH and HFO content. Two types of 3-component Cd adsorption experiments were also conducted in which both mineral powder and bacteria were present as sorbents: 1) one in which the HFO was physically but not chemically isolated from the system using sealed dialysis tubing, and 2) others where the HFO, Cd and B. subtilis were all in physical contact. The dialysis tubing approach enabled the direct determination of the concentration of Cd on each sorbing surface, after separation and acidification of each sorbent. The experiments indicate that both bacteria and mineral surfaces can dominate adsorption in the system, depending on pH and bacteria:mineral ratio. The stability constants, determined using the data from the 2-component systems, along with those for other surface and aqueous species in the systems, were used with FITEQL to
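In the far-from-saturation limit, the partitioning logic that surface complexation models encode reduces to competing mass-action terms. The toy function below (invented constants, no pH dependence or electrostatic corrections) only illustrates how two sorbents split the aqueous metal:

```python
def cd_partition(k_bact, sites_bact, k_hfo, sites_hfo, cd_total):
    """Far-from-saturation mass-action split of Cd between two sorbents.
    k_* are toy stability constants, sites_* are site concentrations."""
    q_b = k_bact * sites_bact      # dimensionless binding-strength terms
    q_h = k_hfo * sites_hfo
    denom = 1.0 + q_b + q_h
    return {
        "aqueous": cd_total / denom,
        "bacteria": cd_total * q_b / denom,
        "HFO": cd_total * q_h / denom,
    }

# Invented constants: a stronger, more abundant HFO term dominates here
parts = cd_partition(5e3, 1e-4, 2e4, 1e-4, cd_total=10.0)
```

Which sorbent dominates flips with the bacteria:mineral ratio (and, in the real system, with pH), which is exactly the behaviour the 3-component experiments probe.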
Complex Educational Design: A Course Design Model Based on Complexity
ERIC Educational Resources Information Center
Freire, Maximina Maria
2013-01-01
Purpose: This article aims at presenting a conceptual framework which, theoretically grounded on complexity, provides the basis to conceive of online language courses that intend to respond to the needs of students and society. Design/methodology/approach: This paper is introduced by reflections on distance education and on the paradigmatic view…
NASA Technical Reports Server (NTRS)
Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.
2016-01-01
A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast but coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5x10(exp -4) mW/sq cm/sr/cm. The computational speed is three to four orders of magnitude faster than the medium-speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer clouds/aerosols and solar radiation conditions for climate change study and numerical weather prediction applications.
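The role of the pre-saved matrix can be mimicked by a linear least-squares map trained once on paired coarse-stream and high-stream spectra and then applied cheaply to new scenes. This synthetic sketch (random stand-in "radiances", not PCRTM or MODTRAN output) shows the train-once, apply-fast pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_train = 40, 60

# Synthetic stand-in: "4-stream" coarse radiances and "16-stream" reference
# radiances linked by a smooth (here, exactly linear) channel-to-channel map.
R_low_train = rng.random((n_channels, n_train))
A_true = np.eye(n_channels) + 0.05 * rng.standard_normal((n_channels, n_channels))
R_high_train = A_true @ R_low_train

# "Pre-saved matrix": least-squares map from coarse to accurate spectra
M, *_ = np.linalg.lstsq(R_low_train.T, R_high_train.T, rcond=None)
M = M.T

# Fast prediction for a new scene: cheap coarse run + stored transform
R_low_new = rng.random(n_channels)
R_high_pred = M @ R_low_new
rms = np.sqrt(np.mean((R_high_pred - A_true @ R_low_new) ** 2))
```

Once M is stored, each new spectrum costs only a coarse-stream run plus one matrix-vector product.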
Garcia Lopez, Sebastian; Kim, Philip M.
2014-01-01
Advances in sequencing have led to a rapid accumulation of mutations, some of which are associated with diseases. However, to draw mechanistic conclusions, a biochemical understanding of these mutations is necessary. For coding mutations, accurate prediction of significant changes in either the stability of proteins or their affinity to their binding partners is required. Traditional methods have used semi-empirical force fields, while newer methods employ machine learning of sequence and structural features. Here, we show how combining both of these approaches leads to a marked boost in accuracy. We introduce ELASPIC, a novel ensemble machine learning approach that is able to predict stability effects upon mutation in both domain cores and domain-domain interfaces. We combine semi-empirical energy terms, sequence conservation, and a wide variety of molecular details with a Stochastic Gradient Boosting of Decision Trees (SGB-DT) algorithm. The accuracy of our predictions surpasses existing methods by a considerable margin, achieving correlation coefficients of 0.77 for stability, and 0.75 for affinity predictions. Notably, we integrated homology modeling to enable proteome-wide prediction and show that accurate prediction on modeled structures is possible. Lastly, ELASPIC showed significant differences between various types of disease-associated mutations, as well as between disease and common neutral mutations. Unlike pure sequence-based prediction methods that try to predict phenotypic effects of mutations, our predictions unravel the molecular details governing protein instability, and help us better understand the molecular causes of diseases. PMID:25243403
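The SGB-DT ingredient can be illustrated with a toy gradient-boosting loop over regression stumps fitted to residuals; the features and data below are invented, and the real ELASPIC feature set (energy terms, conservation, molecular details) is far richer:

```python
import numpy as np

def fit_stump(X, r):
    """Best single-feature threshold split for residuals r (squared error)."""
    best = None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:
            left = X[:, j] <= t
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, lv, rv)
    return best[1:]

def boost(X, y, n_rounds=60, lr=0.1):
    """Gradient boosting on decision stumps (a toy stand-in for SGB-DT)."""
    base = y.mean()
    pred = np.full(len(y), base)
    stumps = []
    for _ in range(n_rounds):
        j, t, lv, rv = fit_stump(X, y - pred)
        pred += lr * np.where(X[:, j] <= t, lv, rv)
        stumps.append((j, t, lv, rv))
    return base, lr, stumps

def predict(model, X):
    base, lr, stumps = model
    out = np.full(len(X), base)
    for j, t, lv, rv in stumps:
        out += lr * np.where(X[:, j] <= t, lv, rv)
    return out

# Toy "mutation features" (say, an energy term and a conservation score)
rng = np.random.default_rng(5)
X = rng.random((120, 2))
ddG = 2.0 * (X[:, 0] > 0.5) + X[:, 1] + 0.05 * rng.standard_normal(120)

model = boost(X, ddG)
mse = np.mean((predict(model, X) - ddG) ** 2)
```

Each round fits a weak learner to the current residuals and adds a shrunken copy of it, which is the mechanism that lets heterogeneous feature types (physical energies and sequence statistics) be combined in one predictor.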
Delineating parameter unidentifiabilities in complex models
NASA Astrophysics Data System (ADS)
Raman, Dhruva V.; Anderson, James; Papachristodoulou, Antonis
2017-03-01
Scientists use mathematical modeling as a tool for understanding and predicting the properties of complex physical systems. In highly parametrized models there often exist relationships between parameters over which model predictions are identical, or nearly identical. These are known as structural or practical unidentifiabilities, respectively. They are hard to diagnose and make reliable parameter estimation from data impossible. They furthermore imply the existence of an underlying model simplification. We describe a scalable method for detecting unidentifiabilities, as well as the functional relations defining them, for generic models. This allows for model simplification, and appreciation of which parameters (or functions thereof) cannot be estimated from data. Our algorithm can identify features such as redundant mechanisms and fast time-scale subsystems, as well as the regimes in parameter space over which such approximations are valid. We base our algorithm on a quantification of regional parametric sensitivity that we call `multiscale sloppiness'. Traditionally, the link between parametric sensitivity and the conditioning of the parameter estimation problem is made locally, through the Fisher information matrix. This is valid in the regime of infinitesimal measurement uncertainty. We demonstrate the duality between multiscale sloppiness and the geometry of confidence regions surrounding parameter estimates made where measurement uncertainty is non-negligible. Further theoretical relationships are provided linking multiscale sloppiness to the likelihood-ratio test. From this, we show that a local sensitivity analysis (as typically done) is insufficient for determining the reliability of parameter estimation, even with simple (non)linear systems. Our algorithm can provide a tractable alternative. We finally apply our methods to a large-scale, benchmark systems biology model of nuclear factor (NF)-κB, uncovering unidentifiabilities.
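The local baseline that the authors go beyond is the Fisher information matrix: a (near-)zero eigenvalue flags a locally unidentifiable parameter combination, with the corresponding eigenvector spanning the direction data cannot constrain. A compact sketch with a deliberately degenerate model (only the product a*b enters the output):

```python
import numpy as np

def model(theta, t):
    a, b = theta
    return a * b * np.exp(-t)   # only the product a*b is identifiable

def fisher_information(theta, t, eps=1e-6):
    """FIM = J^T J from a finite-difference Jacobian (unit noise assumed)."""
    J = np.empty((t.size, theta.size))
    for k in range(theta.size):
        d = np.zeros_like(theta)
        d[k] = eps
        J[:, k] = (model(theta + d, t) - model(theta - d, t)) / (2 * eps)
    return J.T @ J

t = np.linspace(0, 3, 50)
theta = np.array([2.0, 0.5])
evals, evecs = np.linalg.eigh(fisher_information(theta, t))

# Near-zero eigenvalue => locally unidentifiable combination of (a, b)
flat_direction = evecs[:, 0]
```

The paper's point is that this local picture can mislead once measurement uncertainty is finite, which is what the regional "multiscale sloppiness" quantification addresses.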
Uncertainty quantification for quantum chemical models of complex reaction networks.
Proppe, Jonny; Husch, Tamara; Simm, Gregor N; Reiher, Markus
2016-12-22
For the quantitative understanding of complex chemical reaction mechanisms, it is, in general, necessary to accurately determine the corresponding free energy surface and to solve the resulting continuous-time reaction rate equations for a continuous state space. For a general (complex) reaction network, it is computationally hard to fulfill these two requirements. However, it is possible to approximately address these challenges in a physically consistent way. On the one hand, it may be sufficient to consider approximate free energies if a reliable uncertainty measure can be provided. On the other hand, a highly resolved time evolution may not be necessary to still determine quantitative fluxes in a reaction network if one is interested in specific time scales. In this paper, we present discrete-time kinetic simulations in discrete state space taking free energy uncertainties into account. The method builds upon thermochemical data obtained from electronic structure calculations in a condensed-phase model. Our kinetic approach supports the analysis of general reaction networks spanning multiple time scales, which is here demonstrated for the example of the formose reaction. An important application of our approach is the detection of regions in a reaction network which require further investigation, given the uncertainties introduced by both approximate electronic structure methods and kinetic models. Such cases can then be studied in greater detail with more sophisticated first-principles calculations and kinetic simulations.
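The core propagation step, sampling an uncertain free-energy barrier and pushing the samples through a rate law into concentration-level quantities, can be sketched in a few lines. The barrier value and its uncertainty below are assumed numbers, and the formose network itself is of course far larger than this single first-order step:

```python
import numpy as np

R = 8.314462618e-3      # gas constant, kJ/(mol K)
kB_h = 2.083661912e10   # k_B/h in 1/(s K)
T = 298.15

rng = np.random.default_rng(1)
n_samples = 5000

# Barrier from an (approximate) electronic-structure method, with an
# assumed +/- 5 kJ/mol (1 sigma) uncertainty attached to it.
dG_mean, dG_sigma = 100.0, 5.0                      # kJ/mol
dG = rng.normal(dG_mean, dG_sigma, n_samples)

k = kB_h * T * np.exp(-dG / (R * T))                # Eyring rate constants, 1/s
t = 3600.0                                          # one hour
conversion = 1.0 - np.exp(-k * t)                   # first-order A -> B

lo, hi = np.percentile(conversion, [2.5, 97.5])     # 95% interval
```

Because the barrier enters exponentially, a modest free-energy uncertainty spreads the predicted conversion over orders of magnitude, which is exactly why flagging uncertain network regions for refinement matters.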
Using Perspective to Model Complex Processes
Kelsey, R.L.; Bisset, K.R.
1999-04-04
The notion of perspective, when supported in an object-based knowledge representation, can facilitate better abstractions of reality for modeling and simulation. The object modeling of complex physical and chemical processes is made more difficult in part due to the poor abstractions of state and phase changes available in these models. The notion of perspective can be used to create different views to represent the different states of matter in a process. These techniques can lead to a more understandable model. Additionally, the ability to record the progress of a process from start to finish is problematic. It is desirable to have a historic record of the entire process, not just the end result of the process. A historic record should facilitate backtracking and re-start of a process at different points in time. The same representation structures and techniques can be used to create a sequence of process markers to represent a historic record. By using perspective, the sequence of markers can have multiple and varying views tailored for a particular user's context of interest.
Salter, D C; McArthur, H C; Crosse, J E; Dickens, A D
1993-10-01
Summary Measurements of skin mechanics are required to understand better cracking and flaking of the epidermis and loss of 'elasticity' with age in the dermis. Improvements in torsional testing are described here. The resulting data were fitted to algebraic models, the parameters of which can serve both as a concise description of the responses and as a means of relating them to skin structure and physiology. This investigation looks into the suitability of seven such algebraic models. Five of the models examined here appear to be new. Using the commercially available Dia-Stron DTM Torque Meter with our own software, model parameters were studied as indicators of the effects of age and sex in 41 people, and of skin moisturizing treatments in a further 10 people. The two models in the literature were both found to be substantially less accurate and sensitive representations of experimental data than one of the new models proposed here based on the Weibull distribution. This 'WB model' was consistently the one best able to distinguish differences and detect changes which were statistically significant. The WB model appears to be the most powerful and efficient available. Use of this model makes it possible to demonstrate in vivo a statistically significant mechanical difference between male and pre-menopausal female skin using only one parameter (p= 0.0163, with 18 males and 19 females) and to demonstrate a statistically significant mechanical difference between successive decades of age in female skin using only one parameter (p= 0.0124, n= 24). The two parameters of the model most sensitive to skin structure, function and treatment have been combined to form the axes of a 'Skin condition chart'. Any person can be located on this chart at a point indicating their overall skin condition in mechanical terms and any changes in that condition can be clearly demonstrated by movement across the plot.
NASA Astrophysics Data System (ADS)
Pau, George Shu Heng; Shen, Chaopeng; Riley, William J.; Liu, Yaning
2016-02-01
Topography and biotic and abiotic parameters are typically upscaled to make watershed-scale hydrologic-biogeochemical models computationally tractable. However, the upscaling procedure can produce biases when nonlinear interactions between different processes are not fully captured at coarse resolutions. Here we applied the Proper Orthogonal Decomposition Mapping Method (PODMM) to downscale the field solutions from a coarse (7 km) resolution grid to a fine (220 m) resolution grid. PODMM trains a reduced-order model (ROM) with coarse-resolution and fine-resolution solutions, here obtained using PAWS+CLM, a quasi-3-D watershed processes model that has been validated for many temperate watersheds. Subsequent fine-resolution solutions were approximated based only on coarse-resolution solutions and the ROM. The approximation errors were efficiently quantified using an error estimator. By jointly estimating correlated variables and temporally varying the ROM parameters, we further reduced the approximation errors by up to 20%. We also improved the method's robustness by constructing multiple ROMs using different sets of variables, and selecting the best approximation based on the error estimator. The ROMs produced accurate downscaling of soil moisture, latent heat flux, and net primary production with O(1000) reduction in computational cost. The subgrid distributions were also nearly indistinguishable from the ones obtained using the fine-resolution model. Compared to coarse-resolution solutions, biases in upscaled ROM solutions were reduced by up to 80%. This method has the potential to help address the long-standing spatial scaling problem in hydrology and enable long-time integration, parameter estimation, and stochastic uncertainty analysis while accurately representing the heterogeneities.
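The PODMM pattern (train POD bases on paired coarse/fine snapshots, then learn a map from coarse POD coefficients to fine ones) can be sketched on synthetic fields. This toy uses an exactly linear coarse-fine link, which real watershed output will not satisfy; it only shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(2)
n_fine, n_coarse, n_snap = 200, 20, 30

# Synthetic snapshots: fields driven by two random modes, sampled on a
# fine grid and (by block averaging) on a coarse grid.
xf = np.linspace(0, 1, n_fine)
amps = rng.random((2, n_snap))
fine = (np.sin(np.pi * xf)[:, None] * amps[0]
        + np.sin(2 * np.pi * xf)[:, None] * amps[1])
coarse = fine.reshape(n_coarse, n_fine // n_coarse, n_snap).mean(axis=1)

# POD bases from the training snapshots
Uf, _, _ = np.linalg.svd(fine, full_matrices=False)
Uc, _, _ = np.linalg.svd(coarse, full_matrices=False)
r = 2
Phi_f, Phi_c = Uf[:, :r], Uc[:, :r]

# Least-squares map from coarse POD coefficients to fine POD coefficients
a_c = Phi_c.T @ coarse
a_f = Phi_f.T @ fine
Xmap, *_ = np.linalg.lstsq(a_c.T, a_f.T, rcond=None)

# Downscale a new coarse-only solution
a_new = rng.random(2)
fine_new = np.sin(np.pi * xf) * a_new[0] + np.sin(2 * np.pi * xf) * a_new[1]
coarse_new = fine_new.reshape(n_coarse, -1).mean(axis=1)
fine_rec = Phi_f @ (Xmap.T @ (Phi_c.T @ coarse_new))
err = np.linalg.norm(fine_rec - fine_new) / np.linalg.norm(fine_new)
```

All the cost sits in the one-time SVDs; each subsequent downscaling is a handful of small matrix-vector products, which is the source of the O(1000) speedup reported.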
Complex Geometry Creation and Turbulent Conjugate Heat Transfer Modeling
Bodey, Isaac T; Arimilli, Rao V; Freels, James D
2011-01-01
The multiphysics capabilities of COMSOL provide the necessary tools to simulate the turbulent thermal-fluid aspects of the High Flux Isotope Reactor (HFIR). Version 4.1, and later, of COMSOL provides three different turbulence models: the standard k-ε closure model, the low Reynolds number (LRN) k-ε model, and the Spalart-Allmaras model. The LRN meets the needs of the nominal HFIR thermal-hydraulic requirements for 2D and 3D simulations. COMSOL also has the capability to create complex geometries. The circular involute fuel plates used in the HFIR require the use of algebraic equations to generate an accurate geometrical representation in the simulation environment. The best-estimate simulation results show that the maximum fuel plate clad surface temperatures are lower than those predicted by the legacy thermal safety code used at HFIR by approximately 17 K. The best-estimate temperature distribution determined by COMSOL was then used to determine the necessary increase in the magnitude of the power density profile (PDP) to produce a similar clad surface temperature as compared to the legacy thermal safety code. It was determined and verified that a 19% power increase was sufficient to bring the two temperature profiles to relatively good agreement.
Wind Tunnel Modeling Of Wind Flow Over Complex Terrain
NASA Astrophysics Data System (ADS)
Banks, D.; Cochran, B.
2010-12-01
This presentation will describe the findings of an atmospheric boundary layer (ABL) wind tunnel study conducted as part of the Bolund Experiment. This experiment was sponsored by Risø DTU (National Laboratory for Sustainable Energy, Technical University of Denmark) during the fall of 2009 to enable a blind comparison of various air flow models in an attempt to validate their performance in predicting airflow over complex terrain. Bolund Hill sits 12 m above the water level at the end of a narrow isthmus. The island features a steep escarpment on one side, over which the airflow can be expected to separate. The island was equipped with several anemometer towers, and the approach flow over the water was well characterized. This study was one of only two physical model studies included in the blind model comparison, the other being a water plume study. The remainder were computational fluid dynamics (CFD) simulations, including both RANS and LES. Physical modeling of air flow over topographical features has been used since the middle of the 20th century, and the methods required are well understood and well documented. Several books have been written describing how to properly perform ABL wind tunnel studies, including ASCE manual of engineering practice 67. Boundary layer wind tunnel tests are the only modelling method deemed acceptable in ASCE 7-10, the most recent edition of the American Society of Civil Engineers standard that provides wind loads for buildings and other structures in building codes across the US. Since the 1970s, most tall structures undergo testing in a boundary layer wind tunnel to accurately determine the wind induced loading. When compared to CFD, the US EPA considers a properly executed wind tunnel study to be equivalent to a CFD model with infinitesimal grid resolution and near infinite memory. One key reason for this widespread acceptance is that properly executed ABL wind tunnel studies will accurately simulate flow separation
Water Balance Modelling - Does The Required Model Complexity Change With Scale?
NASA Astrophysics Data System (ADS)
Blöschl, G.; Merz, R.
An important issue in modelling the water balance of catchments is what model complexity is suitable. Anecdotal evidence suggests that the model complexity required to model the water balance accurately decreases with catchment scale, but so far very few studies have quantified these possible effects. In this paper we examine the model performance as a function of catchment scale for a given model complexity, which allows us to infer whether the required model complexity changes with scale. We also examine whether the calibrated parameter values change with scale or are scale invariant. In a case study we analysed 700 catchments in Austria with catchment sizes ranging from 10 to 100 000 km2. 30 years of daily data (runoff, precipitation, air temperature, air humidity) were analysed. A spatially lumped, conceptual, HBV style soil moisture accounting scheme was used which involved fifteen model parameters including snow processes. Five parameters were preset and ten parameters were calibrated on observed daily streamflow. The calibration period was about 10 years and the verification period was about 20 years. Model performance (in terms of Nash-Sutcliffe efficiency) was examined both for the calibration and the verification periods. The mean efficiency over all catchments only decreased slightly when moving from the calibration to the verification (from R2 = 0.65 to 0.60). The results suggest that the model efficiencies (both for the calibration and the verification) do not change with catchment scale for scales smaller than 10 000 km2, but beyond this scale there is a slight decrease in model performance. This means that for these very large scales, a spatial subdivision of the lumped model is needed to allow for spatial differences in rainfall. The results also suggest that the model parameters are not scale dependent. We conclude that the complexity required for water balance models of catchments does not change with scale for catchment sizes
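The Nash-Sutcliffe efficiency used as the performance score here is a standard quantity, easily stated in code:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((Qobs - Qsim)^2) / sum((Qobs - mean(Qobs))^2).

    1 is a perfect fit; 0 means the model does no better than predicting
    the mean observed flow; negative values are worse than the mean.
    """
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    resid = np.sum((observed - simulated) ** 2)
    spread = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - resid / spread
```

On this scale, the reported drop from 0.65 (calibration) to 0.60 (verification) is a small loss of explanatory power relative to the mean-flow baseline.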
Sethurajan, Athinthra Krishnaswamy; Krachkovskiy, Sergey A; Halalay, Ion C; Goward, Gillian R; Protas, Bartosz
2015-09-17
We used NMR imaging (MRI) combined with data analysis based on inverse modeling of the mass transport problem to determine ionic diffusion coefficients and transference numbers in electrolyte solutions of interest for Li-ion batteries. Sensitivity analyses have shown that accurate estimates of these parameters (as a function of concentration) are critical to the reliability of the predictions provided by models of porous electrodes. The inverse modeling (IM) solution was generated with an extension of the Planck-Nernst model for the transport of ionic species in electrolyte solutions. Concentration-dependent diffusion coefficients and transference numbers were derived using concentration profiles obtained from in situ (19)F MRI measurements. Material properties were reconstructed under minimal assumptions using methods of variational optimization to minimize the least-squares deviation between experimental and simulated concentration values with uncertainty of the reconstructions quantified using a Monte Carlo analysis. The diffusion coefficients obtained by pulsed field gradient NMR (PFG-NMR) fall within the 95% confidence bounds for the diffusion coefficient values obtained by the MRI+IM method. The MRI+IM method also yields the concentration dependence of the Li(+) transference number in agreement with trends obtained by electrochemical methods for similar systems and with predictions of theoretical models for concentrated electrolyte solutions, in marked contrast to the salt concentration dependence of transport numbers determined from PFG-NMR data.
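The inverse-modeling loop, forward-simulating transport for a candidate coefficient and minimizing the misfit to measured concentration profiles, can be sketched with a 1-D constant-coefficient stand-in. The actual study reconstructs concentration-dependent properties by variational optimization; the grid search below is only the simplest version of the same idea, on synthetic data:

```python
import numpy as np

def diffuse(c0, D, dx, dt, steps):
    """Explicit finite-difference solve of dc/dt = D d2c/dx2 (no-flux ends)."""
    c = c0.copy()
    for _ in range(steps):
        lap = np.empty_like(c)
        lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
        lap[0] = c[1] - c[0]
        lap[-1] = c[-2] - c[-1]
        c += D * dt / dx**2 * lap
    return c

nx, dx, dt, steps = 50, 0.02, 0.5, 400
x = dx * np.arange(nx)
c0 = np.where(x < 0.5, 1.0, 0.2)            # initial concentration step

D_true = 1.5e-4
data = diffuse(c0, D_true, dx, dt, steps)   # stand-in for an MRI profile

# Inverse step: pick D minimizing the least-squares misfit to the profile
D_grid = np.linspace(0.5e-4, 3e-4, 101)
misfit = [np.sum((diffuse(c0, D, dx, dt, steps) - data) ** 2) for D in D_grid]
D_est = D_grid[int(np.argmin(misfit))]
```

Wrapping the same misfit evaluation in a Monte Carlo loop over perturbed data is one way to attach confidence bounds to the recovered coefficient, in the spirit of the paper's uncertainty quantification.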
Reducing Spatial Data Complexity for Classification Models
NASA Astrophysics Data System (ADS)
Ruta, Dymitr; Gabrys, Bogdan
2007-11-01
Intelligent data analytics gradually becomes a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques ranging from data sampling up to density retention models attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is a proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC) we demonstrate a simulatory data condensation process directly inspired by the electrostatic field interaction where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data that acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge joining together their soft class partitions. As a result we achieved a model that reduces the labelled datasets much further than any competitive approaches yet with the maximum retention of the original class densities and hence the classification performance. PLDC leaves the reduced dataset with the soft accumulative class weights allowing for efficient online updates and as shown in a series of experiments if coupled with Parzen Density Classifier (PDC) significantly outperforms competitive data condensation methods in terms of classification performance at the
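A loose sketch of the condensation bookkeeping, greedily merging nearby same-class points into weighted centroids so that per-class mass is conserved, is given below. This is not the electrostatic PLDC dynamics itself, only the weight-accumulating merge step it relies on:

```python
import numpy as np

def condense(X, y, merge_radius=0.6):
    """Greedily merge nearby same-class points into weighted centroids,
    conserving per-class mass (a toy stand-in for PLDC's field dynamics)."""
    X = np.array(X, dtype=float)    # copy: the input is left untouched
    y = np.asarray(y)
    w = np.ones(len(X))
    merged = True
    while merged:
        merged = False
        for i in range(len(X)):
            for j in range(i + 1, len(X)):
                if y[i] == y[j] and np.linalg.norm(X[i] - X[j]) < merge_radius:
                    # merge j into i at the weighted centroid
                    X[i] = (w[i] * X[i] + w[j] * X[j]) / (w[i] + w[j])
                    w[i] += w[j]
                    keep = np.arange(len(X)) != j
                    X, y, w = X[keep], y[keep], w[keep]
                    merged = True
                    break
            if merged:
                break
    return X, y, w

rng = np.random.default_rng(6)
X0 = np.vstack([rng.normal(0.0, 0.2, (20, 2)), rng.normal(3.0, 0.2, (20, 2))])
y0 = np.repeat([0, 1], 20)
Xc, yc, wc = condense(X0, y0)   # far fewer points, same total class weight
```

The accumulated weights are what let a Parzen-style classifier trained on the condensed set approximate the density of the original set.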
NASA Astrophysics Data System (ADS)
Yu, Xiaolin; Zhang, Shaoqing; Lin, Xiaopei; Li, Mingkui
2017-03-01
The uncertainties in values of coupled model parameters are an important source of model bias that causes model climate drift. The values can be calibrated by a parameter estimation procedure that projects observational information onto model parameters. The signal-to-noise ratio of error covariance between the model state and the parameter being estimated directly determines whether the parameter estimation succeeds or not. With a conceptual climate model that couples the stochastic atmosphere and slow-varying ocean, this study examines the sensitivity of state-parameter covariance on the accuracy of estimated model states in different model components of a coupled system. Due to the interaction of multiple timescales, the fast-varying atmosphere with a chaotic nature is the major source of the inaccuracy of estimated state-parameter covariance. Thus, enhancing the estimation accuracy of atmospheric states is very important for the success of coupled model parameter estimation, especially for the parameters in the air-sea interaction processes. The impact of chaotic-to-periodic ratio in state variability on parameter estimation is also discussed. This simple model study provides a guideline when real observations are used to optimize model parameters in a coupled general circulation model for improving climate analysis and predictions.
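The dependence of parameter estimation on the state-parameter signal-to-noise ratio can be reproduced with a toy ensemble update; the linear "model" and the noise levels below are invented, and a real scheme would use perturbed observations and iterate over assimilation cycles:

```python
import numpy as np

rng = np.random.default_rng(3)
n_ens, p_true = 2000, 1.0

def parameter_error(noise_std):
    """One covariance-based parameter update in a toy linear 'coupled model';
    noise_std mimics fast chaotic variability contaminating the state."""
    p = rng.standard_normal(n_ens)                        # prior ensemble
    x = 2.0 * p + noise_std * rng.standard_normal(n_ens)  # model state
    y = 2.0 * p_true + 0.1 * rng.standard_normal()        # observation
    gain = np.cov(p, x)[0, 1] / (np.var(x) + 0.01)        # SNR-limited gain
    p_post = p + gain * (y - x)
    return abs(p_post.mean() - p_true)

err_quiet = parameter_error(0.1)   # accurate state: covariance signal survives
err_noisy = parameter_error(2.0)   # chaotic noise swamps the covariance
```

With noisy states the sample covariance between parameter and state is diluted, the gain shrinks, and the update moves the parameter only part of the way to the truth, which is the mechanism the abstract describes.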
Efficient model of an open-ended coaxial-line probe for measuring complex permittivity
NASA Astrophysics Data System (ADS)
Shin, Hyun; Hyun, Seung-Yeup; Kim, Sang-Wook; Kim, Se-Yun
2000-07-01
A virtually conical cable model of an open-ended coaxial-line probe for converting its measured reflection coefficients into the complex permittivity of a contacted material is presented here. Both reflection coefficients of air and pure water are calculated by employing the FDTD method, and the phase difference between the calculated and the measured reflection coefficients of pure water is used as a calibration factor of the probe. The virtually conical cable model makes the conversion to the complex permittivity of dry sand more accurate and faster than the integral-equation model of the aperture admittance.
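The calibration step, taking the phase difference between computed and measured reflection coefficients for the pure-water reference and applying it to subsequent measurements, is a one-liner on complex data. All numbers below are hypothetical placeholders:

```python
import numpy as np

# Hypothetical complex reflection coefficients at one frequency:
gamma_water_calc = 0.62 * np.exp(1j * np.deg2rad(-105.0))  # FDTD model
gamma_water_meas = 0.62 * np.exp(1j * np.deg2rad(-131.5))  # network analyzer

# Calibration factor: phase difference between model and measurement
phase_corr = np.angle(gamma_water_calc) - np.angle(gamma_water_meas)

# Apply the correction to a measurement on an unknown sample so it can be
# compared against (or inverted through) the probe model
gamma_sample_meas = 0.81 * np.exp(1j * np.deg2rad(-62.0))
gamma_sample_corr = gamma_sample_meas * np.exp(1j * phase_corr)
```

Multiplying by a unit-magnitude phasor shifts only the phase, so the measured reflection magnitude is preserved by the calibration.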
Martin, Eric; Mukherjee, Prasenjit
2012-01-23
Reliable in silico prediction methods promise many advantages over experimental high-throughput screening (HTS): vastly lower time and cost, affinity magnitude estimates, no requirement for a physical sample, and a knowledge-driven exploration of chemical space. For the specific case of kinases, given several hundred experimental IC(50) training measurements, the empirically parametrized profile-quantitative structure-activity relationship (profile-QSAR) and surrogate AutoShim methods developed at Novartis can predict IC(50) with a reliability approaching experimental HTS. However, in the absence of training data, prediction is much harder. The most common a priori prediction method is docking, which suffers from many limitations: It requires a protein structure, is slow, and cannot predict affinity. (1) Highly accurate profile-QSAR (2) models have now been built for roughly 100 kinases covering most of the kinome. Analyzing correlations among neighboring kinases shows that near neighbors share a high degree of SAR similarity. The novel chemogenomic kinase-kernel method reported here predicts activity for new kinases as a weighted average of predicted activities from profile-QSAR models for nearby neighbor kinases. Three different factors for weighting the neighbors were evaluated: binding site sequence identity to the kinase neighbors, similarity of the training set for each neighbor model to the compound being predicted, and accuracy of each neighbor model. Binding site sequence identity was by far most important, followed by chemical similarity. Model quality had almost no relevance. The median R(2) = 0.55 for kinase-kernel interpolations on 25% of the data of each set held out from method optimization for 51 kinase assays, approached the accuracy of median R(2) = 0.61 for the trained profile-QSAR predictions on the same held out 25% data of each set, far faster and far more accurate than docking. Validation on the full data sets from 18 additional kinase assays
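The kinase-kernel prediction itself is a weighted average over neighbor models. The sketch below uses invented weights and scores; it only encodes the paper's ranking of factors, with binding-site sequence identity weighted most heavily and chemical similarity second:

```python
import numpy as np

def kinase_kernel(neighbor_preds, seq_identity, chem_similarity,
                  w_seq=0.8, w_chem=0.2):
    """Predict activity for an untrained kinase as a weighted average of
    neighbor-model predictions; the weighting scheme here is hypothetical."""
    scores = (w_seq * np.asarray(seq_identity)
              + w_chem * np.asarray(chem_similarity))
    weights = scores / scores.sum()
    return float(np.dot(weights, neighbor_preds))

# Three neighbor kinases with profile-QSAR predictions for one compound
pred = kinase_kernel(
    neighbor_preds=[6.8, 7.4, 5.9],
    seq_identity=[0.92, 0.78, 0.55],     # binding-site sequence identity
    chem_similarity=[0.4, 0.7, 0.3],     # training-set similarity to compound
)
```

Because the output is a convex combination, the interpolated prediction always lies within the range of the neighbor predictions.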
NASA Astrophysics Data System (ADS)
Katata, Genki; Kajino, Mizuo; Hiraki, Takatoshi; Aikawa, Masahide; Kobayashi, Tomiki; Nagai, Haruyasu
2011-10-01
To apply a meteorological model to investigate fog occurrence, acidification and deposition in mountain forests, the meteorological model WRF was modified to calculate fog deposition accurately by the simple linear function of fog deposition onto vegetation derived from numerical experiments using the detailed multilayer atmosphere-vegetation-soil model (SOLVEG). The modified version of WRF that includes fog deposition (fog-WRF) was tested in a mountain forest on Mt. Rokko in Japan. fog-WRF provided a distinctly better prediction of liquid water content of fog (LWC) than the original version of WRF. It also successfully simulated throughfall observations due to fog deposition inside the forest during the summer season that excluded the effect of forest edges. Using the linear relationship between fog deposition and altitude given by the fog-WRF calculations and the data from throughfall observations at a given altitude, the vertical distribution of fog deposition can be roughly estimated in mountain forests. A meteorological model that includes fog deposition will be useful in mapping fog deposition in mountain cloud forests.
NASA Astrophysics Data System (ADS)
Bianchi, Davide; Chiesa, Matteo; Guzzo, Luigi
2016-10-01
As a step towards a more accurate modelling of redshift-space distortions (RSD) in galaxy surveys, we develop a general description of the probability distribution function of galaxy pairwise velocities within the framework of the so-called streaming model. For a given galaxy separation, this function can be described as a superposition of virtually infinite local distributions. We characterize these in terms of their moments and then consider the specific case in which they are Gaussian functions, each with its own mean μ and variance σ². Based on physical considerations, we make the further crucial assumption that these two parameters are in turn distributed according to a bivariate Gaussian, with its own mean and covariance matrix. Tests using numerical simulations explicitly show that with this compact description one can correctly model redshift-space distortions on all scales, fully capturing the overall linear and nonlinear dynamics of the galaxy flow at different separations. In particular, we naturally obtain Gaussian/exponential, skewed/unskewed distribution functions, depending on separation, as observed in simulations and data. Also, the recently proposed single-Gaussian description of redshift-space distortions is included in this model as a limiting case, when the bivariate Gaussian is collapsed to a two-dimensional Dirac delta function. More work is needed, but these results indicate a very promising path to make definitive progress in our program to improve RSD estimators.
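The mechanism behind the skewed profiles can be demonstrated with a small Monte Carlo experiment: draw the local mean μ and dispersion σ from a correlated 2D Gaussian, then draw a velocity from the corresponding local Gaussian. A μ-σ correlation produces a skewed mixture, while zero correlation gives a symmetric one. All parameter values here are illustrative, not fits to any survey.

```python
# Monte Carlo sketch of the streaming-model idea: the pairwise-velocity PDF
# as a superposition of local Gaussians whose (mu, sigma) are themselves
# drawn from a bivariate Gaussian with correlation rho. Illustrative numbers.
import math, random

def sample_pairwise_velocity(n, rho, seed=1):
    rng = random.Random(seed)
    v = []
    for _ in range(n):
        x, y, eps = rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)
        mu = 1.0 * x                                   # local mean
        sigma = max(1.0 + 0.3 * (rho * x + math.sqrt(1 - rho**2) * y), 0.05)
        v.append(mu + sigma * eps)                     # draw from local Gaussian
    return v

def skewness(v):
    n = len(v)
    m = sum(v) / n
    s2 = sum((x - m) ** 2 for x in v) / n
    return sum((x - m) ** 3 for x in v) / n / s2 ** 1.5

print(round(skewness(sample_pairwise_velocity(50000, 0.8)), 2))
```

With rho = 0.8 the mixture is visibly skewed (analytically, the third central moment of this toy model is 6abs₀ρ for mean amplitude a, dispersion scatter b, and baseline dispersion s₀), while rho = 0 yields a symmetric, merely heavy-tailed distribution.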
Advanced Combustion Modeling for Complex Turbulent Flows
NASA Technical Reports Server (NTRS)
Ham, Frank Stanford
2005-01-01
The next generation of aircraft engines will need to pass stricter efficiency and emission tests. NASA's Ultra-Efficient Engine Technology (UEET) program has set an ambitious goal of a 70% reduction in NO(x) emissions and a 15% increase in the fuel efficiency of aircraft engines. We will demonstrate the state-of-the-art combustion tools developed at Stanford's Center for Turbulence Research (CTR) as part of this program. In the last decade, CTR has spearheaded a multi-physics-based combustion modeling program. Key technologies have been transferred to the aerospace industry and are currently being used for engine simulations. In this demo, we will showcase the next-generation combustion modeling tools that integrate a very high level of detailed physics into advanced flow simulation codes. Combustor flows involve multi-phase physics with liquid fuel jet breakup, evaporation, and eventual combustion. Individual components of the simulation are verified against complex test cases and show excellent agreement with experimental data.
Discrete Element Modeling of Complex Granular Flows
NASA Astrophysics Data System (ADS)
Movshovitz, N.; Asphaug, E. I.
2010-12-01
Granular materials occur almost everywhere in nature, and are actively studied in many fields of research, from the food industry to planetary science. One approach to the study of granular media, the continuum approach, attempts to find a constitutive law that determines the material's flow, or strain, under applied stress. The main difficulty with this approach is that granular systems exhibit different behavior under different conditions, behaving at times as an elastic solid (e.g. a pile of sand), at times as a viscous fluid (e.g. when poured), or even as a gas (e.g. when shaken). Even if all these physics are accounted for, numerical implementation is made difficult by the wide and often discontinuous ranges in continuum density and sound speed. A different approach is Discrete Element Modeling (DEM). Here the goal is to directly model every grain in the system as a rigid body subject to various body and surface forces. The advantage of this method is that it treats all of the above regimes in the same way, and can easily deal with a system moving back and forth between regimes. But as a granular system typically contains a multitude of individual grains, the direct integration of the system can be very computationally expensive. For this reason most DEM codes are limited to spherical grains of uniform size. However, spherical grains often cannot replicate the behavior of real-world granular systems. A simple pile of spherical grains, for example, relies on static friction alone to keep its shape, while in reality a pile of irregular grains can maintain a much steeper angle by interlocking force chains. In the present study we employ a commercial DEM, nVidia's PhysX Engine, originally designed for the game and animation industry, to simulate complex granular flows with irregular, non-spherical grains. This engine runs as a multithreaded process and can be GPU accelerated. We demonstrate the code's ability to physically model granular materials in the three regimes
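The "direct integration of every grain" that makes DEM expensive reduces, at its smallest scale, to resolving individual contacts. Below is a minimal one-contact sketch (not the PhysX engine): two equal spheres collide head-on, the overlap drives a linear spring-dashpot normal force, and the damping term dissipates energy so the pair separates slower than it approached. Parameters are arbitrary illustrative values.

```python
# Minimal DEM contact: linear spring (stiffness k) + dashpot (damping c)
# acting on the overlap of two grains, integrated with symplectic Euler.

def collide(v0=1.0, r=0.5, m=1.0, k=1e4, c=5.0, dt=1e-5):
    x1, x2 = -r + 1e-6, r - 1e-6     # start with a tiny overlap
    v1, v2 = v0, -v0                 # approaching head-on
    while True:
        overlap = 2 * r - (x2 - x1)
        if overlap <= 0:
            return v2 - v1           # relative separation speed
        vrel = v1 - v2               # approach speed (positive while closing)
        f = k * overlap + c * vrel   # normal force pushing the grains apart
        v1 -= f / m * dt
        v2 += f / m * dt
        x1 += v1 * dt
        x2 += v2 * dt

sep = collide()
print(round(sep / 2.0, 3))           # per-grain outgoing speed, < incoming 1.0
```

For the linear spring-dashpot, the effective coefficient of restitution is set by the damping ratio, so the outgoing relative speed is a fixed fraction of the incoming one; a production DEM repeats this force evaluation over millions of simultaneous contacts per step, which is where the cost comes from.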
NASA Astrophysics Data System (ADS)
Hackel, Stefan; Montenbruck, Oliver; Steigenberger, Peter; Eineder, Michael; Gisinger, Christoph
Remote sensing satellites support a broad range of scientific and commercial applications. The two radar imaging satellites TerraSAR-X and TanDEM-X provide spaceborne Synthetic Aperture Radar (SAR) and interferometric SAR data with very high accuracy. The increasing demand for precise radar products relies on sophisticated validation methods, which require precise and accurate orbit products. The precise reconstruction of the satellite's trajectory is based on Global Positioning System (GPS) measurements from a geodetic-grade dual-frequency receiver onboard the spacecraft. The Reduced Dynamic Orbit Determination (RDOD) approach utilizes models for the gravitational and non-gravitational forces. Following a detailed analysis of the orbit quality, systematics in the orbit products have been identified which reflect deficits in the non-gravitational force models. A detailed satellite macro model is introduced to describe the geometry and the optical surface properties of the satellite. Two major non-gravitational forces are the direct and the indirect Solar Radiation Pressure (SRP). Due to the dusk-dawn orbit configuration of TerraSAR-X, the satellite is almost constantly illuminated by the Sun; the direct SRP therefore affects the lateral stability of the determined orbit. The indirect effect of solar radiation principally contributes to the Earth Radiation Pressure (ERP). The resulting force depends on the sunlight reflected by the illuminated Earth surface in the visible spectrum and on the emission of the Earth body in the infrared. Both components of ERP require Earth models to describe the optical properties of the Earth surface; the influence of different Earth models on the orbit quality is therefore assessed within the presentation. The presentation highlights the influence of non-gravitational force and satellite macro models on the orbit quality of TerraSAR-X.
Balzar, D.; Ledbetter, H.
1995-12-31
In the "double-Voigt" approach, an exact Voigt function describes both size- and strain-broadened profiles. The lattice strain is defined in terms of a physically credible mean-square strain averaged over a distance in the diffracting domains. Analysis of Fourier coefficients in a harmonic approximation for strain coefficients leads to the Warren-Averbach method for the separation of size and strain contributions to diffraction line broadening. The model is introduced in the Rietveld refinement program in the following way: line widths are modeled with only four parameters in the isotropic case. The varied parameters are the surface- and volume-weighted domain sizes and the root-mean-square strains averaged over two distances. The refined parameters determine the physically broadened Voigt line profile. Instrumental Voigt line profile parameters are added to obtain the observed (Voigt) line profile. To speed computation, the corresponding pseudo-Voigt function is calculated and used as the fitting function in refinement. This approach allows for both fast computer code and accurate modeling in terms of physically identifiable parameters.
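The pseudo-Voigt substitution mentioned in the last step can be sketched directly: a unit-area Voigt line with Gaussian and Lorentzian FWHM components is approximated by an η-weighted sum of a Lorentzian and a Gaussian of a common effective width. The mixing formulas below are the widely used Thompson-Cox-Hastings approximation; the widths are illustrative, not refined values.

```python
# Pseudo-Voigt stand-in for a Voigt profile (Thompson-Cox-Hastings mixing).
# fwhm_g / fwhm_l are the Gaussian and Lorentzian FWHMs of the physically
# broadened line; the profile is normalized to unit area.
import math

def pseudo_voigt(x, x0, fwhm_g, fwhm_l):
    g, l = fwhm_g, fwhm_l
    fwhm = (g**5 + 2.69269*g**4*l + 2.42843*g**3*l**2
            + 4.47163*g**2*l**3 + 0.07842*g*l**4 + l**5) ** 0.2
    q = l / fwhm
    eta = 1.36603*q - 0.47719*q**2 + 0.11116*q**3    # Lorentzian fraction
    sg = fwhm / (2 * math.sqrt(2 * math.log(2)))     # Gaussian sigma
    gauss = math.exp(-0.5 * ((x - x0) / sg) ** 2) / (sg * math.sqrt(2 * math.pi))
    hwhm = fwhm / 2
    lor = hwhm / (math.pi * ((x - x0) ** 2 + hwhm ** 2))
    return eta * lor + (1 - eta) * gauss

# peak value at the centre for equal Gaussian and Lorentzian widths
print(round(pseudo_voigt(0.0, 0.0, 0.1, 0.1), 3))
```

Because the pseudo-Voigt avoids evaluating the complex error function at every profile point, it is much cheaper inside a Rietveld least-squares loop, which is exactly the computational shortcut the abstract describes.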
NASA Astrophysics Data System (ADS)
Stukel, Michael R.; Landry, Michael R.; Ohman, Mark D.; Goericke, Ralf; Samo, Ty; Benitez-Nelson, Claudia R.
2012-03-01
Despite the increasing use of linear inverse modeling techniques to elucidate fluxes in undersampled marine ecosystems, the accuracy with which they estimate food web flows has not been resolved. New Markov Chain Monte Carlo (MCMC) solution methods have also called into question the biases of the commonly used L2 minimum norm (L2MN) solution technique. Here, we test the abilities of MCMC and L2MN methods to recover field-measured ecosystem rates that are sequentially excluded from the model input. For data, we use experimental measurements from process cruises of the California Current Ecosystem (CCE-LTER) Program that include rate estimates of phytoplankton and bacterial production, micro- and mesozooplankton grazing, and carbon export from eight study sites varying from rich coastal upwelling to offshore oligotrophic conditions. Both the MCMC and L2MN methods predicted well-constrained rates of protozoan and mesozooplankton grazing with reasonable accuracy, but the MCMC method overestimated primary production. The MCMC method more accurately predicted the poorly constrained rate of vertical carbon export than the L2MN method, which consistently overestimated export. Results involving DOC and bacterial production were equivocal. Overall, when primary production is provided as model input, the MCMC method gives a robust depiction of ecosystem processes. Uncertainty in inverse ecosystem models is large and arises primarily from solution under-determinacy. We thus suggest that experimental programs focusing on food web fluxes expand the range of experimental measurements to include the nature and fate of detrital pools, which play large roles in the model.
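The L2MN idea being tested can be shown on a toy scale: the mass-balance constraints A x = b of a food-web model are underdetermined, and L2MN returns the flux vector of smallest Euclidean norm, x = Aᵀ(AAᵀ)⁻¹b. The two "compartment" equations and their numbers below are invented for illustration, not CCE-LTER values.

```python
# Toy L2 minimum-norm inverse: two mass-balance constraints, three unknown
# fluxes. The minimum-norm solution is x = A^T (A A^T)^-1 b, computed here
# with an explicit 2x2 inverse of the Gram matrix.

def l2mn(A, b):
    aat = [[sum(A[i][k] * A[j][k] for k in range(len(A[0])))
            for j in range(len(A))] for i in range(len(A))]
    det = aat[0][0] * aat[1][1] - aat[0][1] * aat[1][0]
    inv = [[ aat[1][1] / det, -aat[0][1] / det],
           [-aat[1][0] / det,  aat[0][0] / det]]
    y = [sum(inv[i][j] * b[j] for j in range(2)) for i in range(2)]
    return [sum(A[i][k] * y[i] for i in range(2)) for k in range(len(A[0]))]

# hypothetical balances: x1 + x2 = 10 (production split), x2 + x3 = 4 (export)
A = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]
x = l2mn(A, [10.0, 4.0])
print([round(v, 2) for v in x])
```

Note that the minimum-norm answer can contain a negative flux; real inverse ecosystem models add inequality constraints (all fluxes non-negative), which is precisely where MCMC sampling of the feasible solution space differs from picking the single L2MN point.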
Martin, Katherine J.; Patrick, Denis R.; Bissell, Mina J.; Fournier, Marcia V.
2008-10-20
One of the major tenets in breast cancer research is that early detection is vital for patient survival by increasing treatment options. To that end, we have previously used a novel unsupervised approach to identify a set of genes whose expression predicts prognosis of breast cancer patients. The predictive genes were selected in a well-defined three-dimensional (3D) cell culture model of non-malignant human mammary epithelial cell morphogenesis as down-regulated during breast epithelial cell acinar formation and cell cycle arrest. Here we examine the ability of this gene signature (3D-signature) to predict prognosis in three independent breast cancer microarray datasets having 295, 286, and 118 samples, respectively. Our results show that the 3D-signature accurately predicts prognosis in three unrelated patient datasets. At 10 years, the probability of positive outcome was 52, 51, and 47 percent in the group with a poor-prognosis signature and 91, 75, and 71 percent in the group with a good-prognosis signature for the three datasets, respectively (Kaplan-Meier survival analysis, p<0.05). Hazard ratios for poor outcome were 5.5 (95% CI 3.0 to 12.2, p<0.0001), 2.4 (95% CI 1.6 to 3.6, p<0.0001) and 1.9 (95% CI 1.1 to 3.2, p = 0.016) and remained significant for the two larger datasets when corrected for estrogen receptor (ER) status. Hence the 3D-signature accurately predicts breast cancer outcome in both ER-positive and ER-negative tumors, though individual genes differed in their prognostic ability in the two subtypes. Genes that were prognostic in ER-positive patients are AURKA, CEP55, RRM2, EPHA2, FGFBP1, and VRK1, while genes prognostic in ER-negative patients include ACTB, FOXM1 and SERPINE2 (Kaplan-Meier p<0.05). Multivariable Cox regression analysis in the largest dataset showed that the 3D-signature was a strong independent factor in predicting breast cancer outcome. The 3D-signature accurately predicts breast cancer outcome across multiple datasets and holds prognostic
Yogurtcu, Osman N.; Johnson, Margaret E.
2015-01-01
The dynamics of association between diffusing and reacting molecular species are routinely quantified using simple rate-equation kinetics that assume both well-mixed concentrations of species and a single rate constant for parameterizing the binding rate. In two dimensions (2D), however, even when systems are well-mixed, the assumption of a single characteristic rate constant for describing association is not generally accurate, due to the properties of diffusional searching in dimensions d ≤ 2. Establishing rigorous bounds for discriminating between 2D reactive systems that will be accurately described by rate equations with a single rate constant, and those that will not, is critical for both modeling and experimentally parameterizing binding reactions restricted to surfaces such as cellular membranes. We show here that in regimes of intrinsic reaction rate (ka) and diffusion (D) parameters with ka/D > 0.05, a single rate constant cannot be fit to the dynamics of concentrations of associating species independently of the initial conditions. Instead, a more sophisticated multi-parametric description than rate equations is necessary to robustly characterize bimolecular reactions from experiment. Our quantitative bounds derive from our new analysis of 2D rate behavior predicted from Smoluchowski theory. Using a recently developed single-particle reaction-diffusion algorithm that we extend here to 2D, we are able to test and validate the predictions of Smoluchowski theory and several other theories of reversible reaction dynamics in 2D for the first time. Finally, our results also mean that simulations of reactive systems in 2D using rate equations must be undertaken with caution when reactions have ka/D > 0.05, regardless of the simulation volume. We introduce here a simple formula for an adaptive, concentration-dependent rate constant for these chemical kinetics simulations, which improves on existing formulas to better capture non-equilibrium reaction dynamics from dilute
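The practical bound can be applied directly, and the reason a single constant fails in 2D can be seen from the textbook long-time asymptotic of the diffusion-limited rate coefficient, which decays logarithmically with time rather than reaching a plateau. The helper below uses that asymptotic up to O(1) constants inside the logarithm; the threshold 0.05 is the bound quoted in the abstract, while the specific formula and its prefactors are a standard approximation, not this paper's adaptive expression.

```python
# Flag when single-rate-constant kinetics are a safe parameterization of a
# 2D (membrane) reaction, per the ka/D <= 0.05 criterion, and illustrate the
# logarithmic time decay of the 2D diffusion-limited rate coefficient.
import math

def single_rate_ok(ka, D, threshold=0.05):
    """ka: intrinsic 2D association rate (area/time); D: relative diffusion
    constant (area/time). The ratio is dimensionless in 2D."""
    return ka / D <= threshold

def k_diffusion_2d(t, D, sigma):
    """Approximate long-time 2D Smoluchowski rate coefficient for an
    absorbing contact radius sigma (constants of order one omitted)."""
    return 4 * math.pi * D / math.log(4 * D * t / sigma**2)

print(single_rate_ok(ka=0.1, D=10.0))    # reaction-limited regime: True
ratio = k_diffusion_2d(1e3, 1.0, 1e-2) / k_diffusion_2d(1e6, 1.0, 1e-2)
print(round(ratio, 2))                   # rate is still decaying at long times
```

Because k_diffusion_2d never converges to a constant, any single fitted rate constant in the diffusion-influenced regime inherits a dependence on the time window and initial concentrations, which is the failure mode the authors quantify.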
Dai, Xiaoxu; Hu, Minghua; Tian, Wen; Xie, Daoyi; Hu, Bin
2016-01-01
This paper presents a propagation dynamics model for congestion propagation in complex networks of airspace. It investigates the application of an epidemiology model to complex networks by comparing the similarities and differences between congestion propagation and epidemic transmission. The model developed satisfies the constraints of actual motion in airspace, based on the epidemiology model. Exploiting the fact that the evolution of congestion clusters in the airspace is always dynamic and heterogeneous, the SIR epidemiology model (one of the classical models of epidemic spreading) with logistic increase is applied to congestion propagation and shown to be more accurate in predicting the evolution of the congestion peak than the probability-based model commonly used to predict congestion propagation. Results from sample data show that the model not only accurately predicts the value and time of the congestion peak, but also accurately describes the characteristics of congestion propagation. A numerical study is then performed which demonstrates that the structure of the network has different effects on congestion propagation in airspace. It is shown that in regions with severe congestion, adjusting the dissipation rate is more effective than adjusting the propagation rate in controlling the propagation of congestion.
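The congestion analogy maps sectors onto the classic SIR compartments: uncongested (S), congested (I), and recovered/dissipated (R). The minimal sketch below integrates plain SIR with Euler steps (the paper's logistic-increase refinements are not reproduced) just to show how the propagation rate beta and dissipation rate gamma set the height of the congestion peak, including the paper's point that the dissipation rate is a strong control lever.

```python
# Minimal SIR integration in the airspace-congestion analogy: beta is the
# congestion propagation rate, gamma the dissipation rate. Euler stepping.

def sir_peak(beta, gamma, s0=0.99, i0=0.01, dt=0.01, t_max=200.0):
    s, i, r = s0, i0, 0.0
    t, peak_i, peak_t = 0.0, i0, 0.0
    while t < t_max:
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s, i, r = s + ds * dt, i + di * dt, r + gamma * i * dt
        t += dt
        if i > peak_i:
            peak_i, peak_t = i, t
    return peak_i, peak_t

# halving the dissipation rate markedly raises the congestion peak
hi_gamma = sir_peak(beta=0.5, gamma=0.25)
lo_gamma = sir_peak(beta=0.5, gamma=0.125)
print(round(hi_gamma[0], 2), round(lo_gamma[0], 2))
```

For reference, standard SIR theory gives the peak congested fraction as 1 - (1 + ln R0)/R0 with R0 = beta/gamma, so the numerical peaks above (about 0.15 and 0.40) can be checked analytically.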
Modeling competitive substitution in a polyelectrolyte complex
Peng, B.; Muthukumar, M.
2015-12-28
We have simulated the invasion of a polyelectrolyte complex made of a polycation chain and a polyanion chain by another, longer polyanion chain, using the coarse-grained united-atom model for the chains and the Langevin dynamics methodology. Our simulations reveal many intricate details of the substitution reaction in terms of conformational changes of the chains and competition between the invading chain and the chain being displaced for the common complementary chain. We show that the invading chain must be sufficiently longer than the chain being displaced to effect the substitution. Yet, making the invading chain longer than a certain threshold value does not reduce the substitution time much further. While most of the simulations were carried out in salt-free conditions, we show that the presence of salt facilitates the substitution reaction and reduces the substitution time. Analysis of our data shows that the dominant driving force for the substitution process involving polyelectrolytes lies in the release of counterions during the substitution.
An accurate locally active memristor model for S-type negative differential resistance in NbOx
Gibson, Gary A.; Musunuru, Srinitya; Zhang, Jiaming; Lee, James; Hsieh, Cheng-Chih; Jackson, Warren; Jeon, Yoocharn; Henze, Dick; Li, Zhiyong; Stanley Williams, R.; Vandenberghe, Ken
2016-01-11
A number of important commercial applications would benefit from the introduction of easily manufactured devices that exhibit current-controlled, or “S-type,” negative differential resistance (NDR). A leading example is emerging non-volatile memory based on crossbar array architectures. Due to the inherently linear current vs. voltage characteristics of candidate non-volatile memristor memory elements, individual memory cells in these crossbar arrays can be addressed only if a highly non-linear circuit element, termed a “selector,” is incorporated in the cell. Selectors based on a layer of niobium oxide sandwiched between two electrodes have been investigated by a number of groups because the NDR they exhibit provides a promisingly large non-linearity. We have developed a highly accurate compact dynamical model for their electrical conduction that shows that the NDR in these devices results from a thermal feedback mechanism. A series of electrothermal measurements and numerical simulations corroborate this model. These results reveal that the leakage currents can be minimized by thermally isolating the selector or by incorporating materials with larger activation energies for electron motion.
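The thermal-feedback mechanism can be illustrated with a minimal steady-state sketch: thermally activated conduction R(T) = Rs·exp(Ea/kBT) plus Joule heating against a thermal resistance Rth. Sweeping current and solving the heat balance I²R(T) = (T - T0)/Rth yields a voltage that falls as current rises, i.e. current-controlled (S-type) NDR. All parameter values are illustrative placeholders, not fitted NbOx data, and this is a simplification of the paper's compact dynamical model.

```python
# Steady-state thermal-feedback sketch of S-type NDR: bisect the heat
# balance for temperature T at each drive current, then report V = I*R(T).
import math

KB = 8.617e-5           # Boltzmann constant, eV/K
EA, RS = 0.25, 1.0      # assumed activation energy (eV) and prefactor (ohm)
RTH, T0 = 2e6, 300.0    # assumed thermal resistance (K/W), ambient (K)

def resistance(t):
    return RS * math.exp(EA / (KB * t))

def steady_voltage(i):
    lo, hi = T0, 4000.0
    for _ in range(80):                       # bisection on the heat balance
        t = 0.5 * (lo + hi)
        if i * i * resistance(t) * RTH > (t - T0):
            lo = t                            # net heating: T settles higher
        else:
            hi = t
    return i * resistance(0.5 * (lo + hi))

v_low, v_high = steady_voltage(5e-5), steady_voltage(2e-4)
print(round(v_low, 3), round(v_high, 3), v_high < v_low)
```

The voltage at 200 µA comes out below the voltage at 50 µA because the higher current heats the film, collapsing R(T) faster than I grows: the signature of the thermal runaway feedback the measurements corroborate.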
NASA Astrophysics Data System (ADS)
Stecca, Guglielmo; Siviglia, Annunziato; Blom, Astrid
2016-07-01
We present an accurate numerical approximation to the Saint-Venant-Hirano model for mixed-sediment morphodynamics in one space dimension. Our solution procedure originates from the fully-unsteady matrix-vector formulation developed in [54]. The principal part of the problem is solved by an explicit Finite Volume upwind method of the path-conservative type, by which all the variables are updated simultaneously in a coupled fashion. The solution to the principal part is embedded into a splitting procedure for the treatment of frictional source terms. The numerical scheme is extended to second-order accuracy and includes a bookkeeping procedure for handling the evolution of size stratification in the substrate. We develop a concept of balancedness for the vertical mass flux between the substrate and active layer under bed degradation, which prevents the occurrence of non-physical oscillations in the grainsize distribution of the substrate. We suitably modify the numerical scheme to respect this principle. We finally verify the accuracy in our solution to the equations, and its ability to reproduce one-dimensional morphodynamics due to streamwise and vertical sorting, using three test cases. In detail, (i) we empirically assess the balancedness of vertical mass fluxes under degradation; (ii) we study the convergence to the analytical linearised solution for the propagation of infinitesimal-amplitude waves [54], which is here employed for the first time to assess a mixed-sediment model; (iii) we reproduce Ribberink's E8-E9 flume experiment [46].
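The full coupled, path-conservative scheme is elaborate; as a minimal flavor of its building block, an explicit first-order upwind finite-volume update, the sketch below advects a sediment-concentration profile at constant speed on a periodic grid and checks the conservation property that the balancedness discussion generalizes. This is a generic textbook scheme, not the Saint-Venant-Hirano solver itself.

```python
# First-order explicit upwind finite-volume step for u_t + a u_x = 0 (a > 0)
# on a periodic domain; u[i-1] wraps around via Python negative indexing.

def upwind_step(u, a, dx, dt):
    c = a * dt / dx                          # CFL number, must be <= 1
    return [u[i] - c * (u[i] - u[i - 1]) for i in range(len(u))]

n, dx, a = 100, 0.01, 1.0
dt = 0.5 * dx / a                            # CFL = 0.5
u = [1.0 if 20 <= i < 40 else 0.0 for i in range(n)]
mass0 = sum(u) * dx
for _ in range(200):
    u = upwind_step(u, a, dx, dt)
print(abs(sum(u) * dx - mass0) < 1e-9)       # total mass is conserved
```

The update is conservative by construction (the fluxes telescope), monotone at CFL ≤ 1, and first-order diffusive; higher-order extensions and the coupled treatment of the Hirano active-layer terms are exactly what the paper's scheme adds on top of this skeleton.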
NASA Astrophysics Data System (ADS)
Gritsyk, P. A.; Somov, B. V.
2016-08-01
The M7.7 solar flare of July 19, 2012, at 05:58 UT was observed with high spatial, temporal, and spectral resolution in the hard X-ray and optical ranges. The flare occurred at the solar limb, which allowed us to see the relative positions of the coronal and chromospheric X-ray sources and to determine their spectra. To explain the observations of the coronal source and of the chromospheric source unocculted by the solar limb, we apply an accurate analytical model for the kinetic behavior of accelerated electrons in a flare. We interpret the chromospheric hard X-ray source in the thick-target approximation with a reverse current, and the coronal one in the thin-target approximation. Our estimates of the slopes of the hard X-ray spectra for both sources are consistent with the observations. However, the calculated intensity of the coronal source is several times lower than the observed one. Allowing for the acceleration of fast electrons in a collapsing magnetic trap has enabled us to remove this contradiction. As a result of our modeling, we have estimated the flux density of the energy transferred by electrons with energies above 15 keV to be ~5 × 10^10 erg cm^-2 s^-1, which exceeds the values typical of the thick-target model without a reverse current by a factor of ~5. To independently test the model, we have calculated the microwave spectrum in the range 1-50 GHz that corresponds to the available radio observations.
Dynamical complexity in the perception-based network formation model
NASA Astrophysics Data System (ADS)
Jo, Hang-Hyun; Moon, Eunyoung
2016-12-01
Many link formation mechanisms for the evolution of social networks have successfully reproduced various empirical findings in social networks. However, they have largely ignored the fact that individuals make decisions on whether to create links to other individuals based on the cost and benefit of linking, and the fact that individuals may use their perception of the network in their decision making. In this paper, we study the evolution of social networks in terms of perception-based strategic link formation. Here each individual has her own perception of the actual network and uses it to decide whether to create a link to another individual. An individual with the least perception accuracy can benefit from updating her perception using that of the most accurate individual via a new link. This benefit is compared to the cost of linking in the decision making. Once a new link is created, it affects the accuracies of other individuals' perceptions, leading to a further evolution of the actual network. As for initial actual networks, we consider both homogeneous and heterogeneous cases. The homogeneous initial actual network is modeled by Erdős-Rényi (ER) random networks, while we take a star network for the heterogeneous case. In both cases, individual perceptions of the actual network are modeled by ER random networks with a controllable linking probability. The stable link density of the actual network is then found to show discontinuous transitions or jumps according to the cost of linking. Since the number of jumps is a consequence of the dynamical complexity, we discuss the effect of initial conditions on the number of jumps, finding that the dynamical complexity strongly depends on how much individuals initially overestimate or underestimate the link density of the actual network. For the heterogeneous case, the role of the highly connected individual as an information spreader is also discussed.
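A loose toy variant of this feedback loop can be written in a few dozen lines; the paper's exact benefit definition, update order, and stopping rule are richer, so treat every rule and number below as an invented simplification. Each agent holds an ER random perception of the actual network; the least accurate agent adopts the perception of the most accurate one, and if the accuracy gain exceeds the linking cost, a link between the two is added to the actual network, which in turn changes everyone's accuracy.

```python
# Toy perception-based link formation: perceptions and the actual network
# are edge sets over n nodes; accuracy is the fraction of node pairs on
# which a perception agrees with the actual network.
import itertools, random

def simulate(n=12, p_actual=0.3, p_perc=0.5, cost=0.02, seed=3):
    rng = random.Random(seed)
    pairs = list(itertools.combinations(range(n), 2))
    actual = {e for e in pairs if rng.random() < p_actual}
    perc = [{e for e in pairs if rng.random() < p_perc} for _ in range(n)]

    def acc(i):
        return 1 - len(perc[i] ^ actual) / len(pairs)

    for _ in range(200):                          # cap the number of updates
        worst = min(range(n), key=acc)
        best = max(range(n), key=acc)
        if acc(best) - acc(worst) <= cost:        # benefit vs. cost of linking
            break
        perc[worst] = set(perc[best])             # adopt the better perception
        actual.add(tuple(sorted((worst, best))))  # the new link changes reality
    return len(actual) / len(pairs)               # stable link density

d_low_cost, d_high_cost = simulate(cost=0.02), simulate(cost=0.5)
print(round(d_low_cost, 2), round(d_high_cost, 2))
```

Even this crude version shows the qualitative dependence the abstract describes: a high linking cost freezes the network at its initial density, while a low cost lets perception updates drive further link creation.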
Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems
Williams, Richard A.; Timmis, Jon; Qwarnstrom, Eva E.
2016-01-01
Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414
Clinical complexity in medicine: A measurement model of task and patient complexity
Islam, R.; Weir, C.; Fiol, G. Del
2016-01-01
Summary. Background: Complexity in medicine needs to be reduced to simple components in a way that is comprehensible to researchers and clinicians. Few studies in the current literature propose a measurement model that addresses both task and patient complexity in medicine. Objective: The objective of this paper is to develop an integrated approach to understanding and measuring clinical complexity by incorporating both task and patient complexity components, focusing on the infectious disease domain. The measurement model was adapted and modified for the healthcare domain. Methods: Three clinical Infectious Disease teams were observed, audio-recorded and transcribed. Each team included an Infectious Diseases expert, one Infectious Diseases fellow, one physician assistant and one pharmacy resident fellow. The transcripts were parsed and the authors independently coded complexity attributes. This baseline measurement model of clinical complexity was modified in an initial set of coding processes and further validated in a consensus-based iterative process that included several meetings and email discussions by three clinical experts from diverse backgrounds from the Department of Biomedical Informatics at the University of Utah. Inter-rater reliability was calculated using Cohen's kappa. Results: The proposed clinical complexity model consists of two separate components. The first is a clinical task complexity model with 13 clinical complexity-contributing factors and 7 dimensions. The second is the patient complexity model with 11 complexity-contributing factors and 5 dimensions. Conclusion: The measurement model for complexity encompassing both task and patient complexity will be a valuable resource for future researchers and industry to measure and understand complexity in healthcare. PMID:26404626
Modeling Complex Chemical Systems: Problems and Solutions
NASA Astrophysics Data System (ADS)
van Dijk, Jan
2016-09-01
Non-equilibrium plasmas in complex gas mixtures are at the heart of numerous contemporary technologies. They typically contain dozens to hundreds of species, involved in hundreds to thousands of reactions. Chemists and physicists have always been interested in what are now called chemical reduction techniques (CRTs). The idea of such CRTs is that they reduce the number of species that need to be considered explicitly without compromising the validity of the model. This is usually achieved on the basis of an analysis of the reaction time scales of the system under study, which identifies species that are in partial equilibrium after a given time span. The first such CRT to be widely used in plasma physics was developed in the 1960s and resulted in the concept of effective ionization and recombination rates. It was later generalized to systems in which multiple levels are affected by transport. In recent years there has been a renewed interest in tools for chemical reduction and reaction pathway analysis. An example of the latter is the PumpKin tool. Another trend is that techniques previously developed in other fields of science are being adapted to handle the plasma state of matter. Examples are the Intrinsic Low-Dimensional Manifold (ILDM) method and its derivatives, which originate from combustion engineering, and the general-purpose Principal Component Analysis (PCA) technique. In this contribution we will provide an overview of the most common reduction techniques, then critically assess the pros and cons of the methods that have gained the most popularity in recent years. Examples will be provided for plasmas in argon and carbon dioxide.
Guo, En-Yu; Chawla, Nikhilesh; Jing, Tao; Torquato, Salvatore; Jiao, Yang
2014-03-01
Heterogeneous materials are ubiquitous in nature and in synthetic settings and have a wide range of important engineering applications. Accurately modeling and reconstructing the three-dimensional (3D) microstructure of topologically complex materials from limited morphological information, such as a two-dimensional (2D) micrograph, is crucial to the assessment and prediction of effective material properties and performance under extreme conditions. Here, we extend a recently developed dilation–erosion method and employ the Yeong–Torquato stochastic reconstruction procedure to model and generate 3D austenitic–ferritic cast duplex stainless steel microstructure containing a percolating filamentary ferrite phase from 2D optical micrographs of the material sample. Specifically, the ferrite phase is dilated to produce a modified target 2D microstructure and the resulting 3D reconstruction is eroded to recover the percolating ferrite filaments. The dilation–erosion reconstruction is compared with the actual 3D microstructure, obtained from serial sectioning (polishing), as well as with standard stochastic reconstructions incorporating topological connectedness information. The fact that the former can achieve the same level of accuracy as the latter suggests that the dilation–erosion procedure is tantamount to incorporating appreciably more topological and geometrical information into the reconstruction while being much more computationally efficient. Highlights: • Spatial correlation functions used to characterize the filamentary ferrite phase • Clustering information assessed from the 3D experimental structure via serial sectioning • Stochastic reconstruction used to generate a 3D virtual structure from a 2D micrograph • Dilation–erosion method used to improve the accuracy of the 3D reconstruction.
Heinz, Hendrik
2014-06-18
Adsorption of biomolecules and polymers to inorganic nanostructures plays a major role in the design of novel materials and therapeutics. The behavior of flexible molecules on solid surfaces at a scale of 1-1000 nm remains difficult and expensive to monitor using current laboratory techniques, while playing a critical role in energy conversion and composite materials as well as in understanding the origin of diseases. Approaches to implement key surface features and pH in molecular models of solids are explained, and distinct mechanisms of peptide recognition on metal nanostructures, silica and apatite surfaces in solution are described as illustrative examples. The influence of surface energies, specific surface features and protonation states on the structure of aqueous interfaces and selective biomolecular adsorption is found to be critical, comparable to the well-known influence of the charge state and pH of proteins and surfactants on their conformations and assembly. The representation of such details in molecular models according to experimental data and available chemical knowledge enables accurate simulations of unknown complex interfaces in atomic resolution in quantitative agreement with independent experimental measurements. In this context, the benefits of a uniform force field for all material classes and of a mineral surface structure database are discussed.
Power Curve Modeling in Complex Terrain Using Statistical Models
NASA Astrophysics Data System (ADS)
Bulaevskaya, V.; Wharton, S.; Clifton, A.; Qualley, G.; Miller, W.
2014-12-01
Traditional power output curves typically model power only as a function of the wind speed at the turbine hub height. While the latter is an essential predictor of power output, wind speed information in other parts of the vertical profile, as well as additional atmospheric variables, are also important determinants of power. The goal of this work was to determine the gain in predictive ability afforded by adding wind speed information at other heights, as well as other atmospheric variables, to the power prediction model. Using data from a wind farm with a moderately complex terrain in the Altamont Pass region in California, we trained three statistical models, a neural network, a random forest and a Gaussian process model, to predict power output from various sets of aforementioned predictors. The comparison of these predictions to the observed power data revealed that considerable improvements in prediction accuracy can be achieved both through the addition of predictors other than the hub-height wind speed and the use of statistical models. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344 and was funded by Wind Uncertainty Quantification Laboratory Directed Research and Development Project at LLNL under project tracking code 12-ERD-069.
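The gain from adding predictors beyond hub-height wind speed can be illustrated with a stripped-down stand-in: synthetic data and plain least squares rather than the paper's neural network, random forest and Gaussian process models, and invented coefficients rather than the Altamont Pass data:

```python
# Sketch: compare out-of-sample RMSE of a hub-height-only linear power
# model against one that also uses a vertical shear predictor.
# Synthetic data; all coefficients are made up for illustration.
import math
import random

def fit_linear(X, y):
    """Least squares via normal equations and Gaussian elimination."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)]
         for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for i in range(p):                       # forward elimination
        for j in range(i + 1, p):
            f = A[j][i] / A[i][i]
            A[j] = [a - f * c for a, c in zip(A[j], A[i])]
            b[j] -= f * b[i]
    w = [0.0] * p                            # back substitution
    for i in reversed(range(p)):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, p))) / A[i][i]
    return w

def rmse(X, y, w):
    return math.sqrt(sum((sum(wi * xi for wi, xi in zip(w, x)) - yi) ** 2
                         for x, yi in zip(X, y)) / len(y))

random.seed(0)
data = []
for _ in range(400):
    hub = random.uniform(4, 12)              # hub-height wind speed (m/s)
    shear = random.uniform(-0.2, 0.4)        # vertical shear exponent
    power = 50 * hub + 300 * shear + random.gauss(0, 20)
    data.append((hub, shear, power))
train, test = data[:300], data[300:]

y = [p for h, s, p in train]
w1 = fit_linear([[1.0, h] for h, s, p in train], y)       # hub speed only
w2 = fit_linear([[1.0, h, s] for h, s, p in train], y)    # + shear
yt = [p for h, s, p in test]
err1 = rmse([[1.0, h] for h, s, p in test], yt, w1)
err2 = rmse([[1.0, h, s] for h, s, p in test], yt, w2)
assert err2 < err1                           # extra predictor helps
```

The same train/test comparison carries over directly when the linear fit is swapped for the nonlinear statistical models the study actually used.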
Webb-Robertson, Bobbie-Jo M.; Cannon, William R.; Oehmen, Christopher S.; Shah, Anuj R.; Gurumoorthi, Vidhya; Lipton, Mary S.; Waters, Katrina M.
2008-07-01
Motivation: The standard approach to identifying peptides based on accurate mass and elution time (AMT) compares these profiles obtained from a high resolution mass spectrometer to a database of peptides previously identified from tandem mass spectrometry (MS/MS) studies. It would be advantageous, with respect to both accuracy and cost, to only search for those peptides that are detectable by MS (proteotypic). Results: We present a Support Vector Machine (SVM) model that uses a simple descriptor space based on 35 properties of amino acid content, charge, hydrophilicity, and polarity for the quantitative prediction of proteotypic peptides. Using three independently derived AMT databases (Shewanella oneidensis, Salmonella typhimurium, Yersinia pestis) for training and validation within and across species, the SVM resulted in an average accuracy measure of ~0.8 with a standard deviation of less than 0.025. Furthermore, we demonstrate that these results are achievable with a small set of 12 variables and can achieve high proteome coverage. Availability: http://omics.pnl.gov/software/STEPP.php
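For illustration, here is a hypothetical fragment of the kind of descriptor extraction such a model requires, covering three of the physicochemical property families named above (content/length, charge, hydropathy). The actual 35-variable set and the trained STEPP model are not reproduced; the Kyte-Doolittle scale is a standard hydropathy index:

```python
# Sketch of per-peptide descriptor extraction (not the STEPP feature set).
KYTE_DOOLITTLE = {
    'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5, 'Q': -3.5,
    'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5, 'L': 3.8, 'K': -3.9,
    'M': 1.9, 'F': 2.8, 'P': -1.6, 'S': -0.8, 'T': -0.7, 'W': -0.9,
    'Y': -1.3, 'V': 4.2,
}

def descriptors(peptide):
    """Return [length, net charge at neutral pH, mean hydropathy]."""
    charge = (sum(peptide.count(a) for a in 'KR')
              - sum(peptide.count(a) for a in 'DE'))
    hydropathy = sum(KYTE_DOOLITTLE[a] for a in peptide) / len(peptide)
    return [len(peptide), charge, hydropathy]

# A tryptic-like peptide ending in R; vectors like this would be fed to
# the SVM for proteotypic/non-proteotypic classification.
features = descriptors('SAMPLER')
```

In the real pipeline, one such vector per candidate peptide is computed for the whole in-silico digest before training.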
Guan, Tao; Zhou, Dongxiang; Liu, Yunhui
2015-07-01
Segmentation of overlapping cells is one of the challenging topics in medical image processing. In this paper, we propose to approximately represent the cell contour as a set of sparse contour points, which can be further partitioned into two parts: strong contour points and weak contour points. We consider cell contour extraction as a contour-point locating problem and propose an effective and robust framework for the segmentation of partially overlapping cells in cervical smear images. First, the cell nucleus and the background are extracted by a morphological filtering-based K-means clustering algorithm. Second, a gradient decomposition-based edge enhancement method is developed to enhance the true edges belonging to the center cell. Then, a dynamic sparse contour searching algorithm is proposed to gradually locate the weak contour points in the cell overlapping regions based on the strong contour points. This algorithm involves least squares estimation and a dynamic searching principle, and is thus effective in coping with the cell overlapping problem. Using the located contour points, the Gradient Vector Flow Snake model is finally employed to extract the accurate cell contour. Experiments have been performed on two cervical smear image datasets containing both single cells and partially overlapping cells. The high accuracy of the cell contour extraction results validates the effectiveness of the proposed method.
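The K-means step can be sketched in one dimension: clustering pixel intensities with k=2 separates dark nuclei from a bright background, after which a midpoint threshold gives a nucleus mask. This is a minimal stand-in (toy intensities, no morphological filtering), not the authors' implementation:

```python
# 1-D K-means on grayscale intensities, k=2: nuclei vs. background.

def kmeans_1d(values, iters=20):
    centers = [min(values), max(values)]      # two initial centers
    for _ in range(iters):
        clusters = [[], []]
        for v in values:
            idx = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

# Dark nucleus pixels (~30) against a bright background (~200).
pixels = [28, 30, 32, 35, 198, 200, 202, 205, 199, 31]
lo, hi = sorted(kmeans_1d(pixels))
threshold = (lo + hi) / 2
nucleus_mask = [v < threshold for v in pixels]
assert nucleus_mask == [True] * 4 + [False] * 5 + [True]
```

On a real smear image the same clustering runs over all pixel intensities, preceded by the paper's morphological filtering to suppress noise.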
Hayes, E.F.; Darakjian, Z. (Dept. of Chemistry); Walker, R.B.
1990-01-01
The Bending Corrected Rotating Linear Model (BCRLM), developed by Hayes and Walker, is a simple approximation to the true multidimensional scattering problem for reactions of the type A + BC → AB + C. While the BCRLM method is simpler than methods designed to obtain accurate three-dimensional quantum scattering results, this turns out to be a major advantage for our benchmarking studies. The computer code used to obtain BCRLM scattering results is written for the most part in standard FORTRAN and has been ported to several scalar, vector, and parallel architecture computers, including the IBM 3090-600J, the Cray XMP and YMP, the Ardent Titan, the IBM RISC System/6000, the Convex C-1 and the MIPS 2000. Benchmark results will be reported for each of these machines, with an emphasis on comparing the scalar, vector, and parallel performance of the standard code with minimum modifications. Detailed analysis of the mapping of the BCRLM approach onto both shared and distributed memory parallel architecture machines indicates the importance of introducing several key changes in the basic strategy and algorithms used to calculate scattering results. This analysis of the BCRLM approach provides some insights into optimal strategies for mapping three-dimensional quantum scattering methods, such as the Parker-Pack method, onto shared or distributed memory parallel computers.
McCoy, Rajiv C.; Garud, Nandita R.; Kelley, Joanna L.; Boggs, Carol L.; Petrov, Dmitri A.
2015-01-01
The analysis of molecular data from natural populations has allowed researchers to answer diverse ecological questions that were previously intractable. In particular, ecologists are often interested in the demographic history of populations, information that is rarely available from historical records. Methods have been developed to infer demographic parameters from genomic data, but it is not well understood how inferred parameters compare to true population history or depend on aspects of experimental design. Here we present and evaluate a method of SNP discovery using RNA-sequencing and demographic inference using the program δaδi, which uses a diffusion approximation to the allele frequency spectrum to fit demographic models. We test these methods in a population of the checkerspot butterfly Euphydryas gillettii. This population was intentionally introduced to Gothic, Colorado in 1977 and has since experienced extreme fluctuations including bottlenecks of fewer than 25 adults, as documented by nearly annual field surveys. Using RNA-sequencing of eight individuals from Colorado and eight individuals from a native population in Wyoming, we generate the first genomic resources for this system. While demographic inference is commonly used to examine ancient demography, our study demonstrates that our inexpensive, all-in-one approach to marker discovery and genotyping provides sufficient data to accurately infer the timing of a recent bottleneck. This demographic scenario is relevant for many species of conservation concern, few of which have sequenced genomes. Our results are remarkably insensitive to sample size or number of genomic markers, which has important implications for applying this method to other non-model systems. PMID:24237665
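The object that δaδi-style inference fits is the allele frequency spectrum. A minimal sketch of building it from SNP genotypes (toy data, not the butterfly dataset; the diffusion-approximation fitting itself is not reproduced):

```python
# Build a folded-out (unfolded) allele frequency spectrum: for each SNP,
# count the derived allele across sampled chromosomes, then histogram.

def allele_frequency_spectrum(genotype_matrix, n_chrom):
    """genotype_matrix: rows = SNP sites, entries = 0/1 derived-allele
    calls per chromosome. Returns counts of sites at frequency 1..n-1
    (invariant classes 0 and n are dropped)."""
    spectrum = [0] * (n_chrom + 1)
    for site in genotype_matrix:
        spectrum[sum(site)] += 1
    return spectrum[1:-1]

# Four SNPs observed on 4 chromosomes.
sites = [[1, 0, 0, 0],
         [1, 1, 0, 0],
         [1, 0, 0, 0],
         [1, 1, 1, 0]]
afs = allele_frequency_spectrum(sites, 4)
assert afs == [2, 1, 1]   # two singletons, one doubleton, one tripleton
```

Demographic events leave characteristic distortions in this histogram (a recent bottleneck depletes rare variants, for instance), which is what the diffusion model exploits.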
NASA Astrophysics Data System (ADS)
Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano
2015-11-01
Biomedical cyclotrons for the production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are widely deployed in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods for calculating both shielding and material activation, in approximate or idealized geometry set-ups. The availability of Monte Carlo codes with accurate and up-to-date libraries for the transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron, including its energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and by comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurements and simulations of 0.99±0.07 was found. The saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended
Lin, W.L.; Carlson, K.D.; Chen, C.J. |
1999-05-01
In this study, a diagonal Cartesian method for thermal analysis is developed for simulation of conjugate heat transfer over complex boundaries. This method uses diagonal line segments in addition to Cartesian coordinates. The velocity fields are also modeled using the diagonal Cartesian method. The transport equations are discretized with the finite analytic (FA) method. The current work is validated by simulating a rotated lid-driven cavity flow with conjugate heat transfer, and accurate results are obtained.
NASA Astrophysics Data System (ADS)
de Boer, H. J.; Dekker, S. C.; Wassen, M. J.
2009-04-01
Earth System Models of Intermediate Complexity (EMICs) are popular tools for palaeoclimate simulations. Recent studies have applied these models in comparison with terrestrial proxy records, aiming to reconstruct changes in seasonal climate forced by altered ocean circulation patterns. To strengthen this powerful methodology, we argue that the magnitude of the simulated atmospheric changes should be considered in relation to the internal variability of both the climate system and the intermediate complexity model. To attribute a shift in modelled climate to reality, this 'signal' should be detectable above the 'noise' related to the internal variability of the climate system and the internal variability of the model. Both noise and climate signals vary over the globe and change with the seasons. We therefore argue that spatially explicit fields of noise should be considered in relation to the strengths of the simulated signals at a seasonal timescale. We approximated the total noise on terrestrial temperature and precipitation from a 29-member simulation with the EMIC PUMA-2 and from global temperature and precipitation datasets. To illustrate this approach, we calculate signal-to-noise ratios (SNRs) in terrestrial temperature and precipitation for simulations of an El Niño warm event, a phase change in the Atlantic Multidecadal Oscillation (AMO) and a Heinrich cooling event. The results of the El Niño and AMO simulations indicate that the chance of accurately detecting a climate signal increases with increasing SNR. Considering the regions and seasons with the highest SNRs, the simulated El Niño anomalies show good agreement with observations (r² = 0.8 and 0.6 for temperature and precipitation at SNRs > 4). The AMO signals rarely surpass the noise levels and remain mostly undetected. The simulation of a Heinrich event predicts the highest SNRs for temperature (up to 10) over Arabia and Russia during Boreal winter and spring. Highest SNRs for precipitation (up to 12) are predicted over
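The per-grid-cell signal-to-noise computation described above can be sketched schematically (made-up numbers, two grid cells, and a plain ensemble standard deviation as the noise estimate; the study's actual noise approximation is more involved):

```python
# Per-cell SNR: |simulated anomaly| divided by the ensemble spread.
import statistics

def snr(control_ensemble, perturbed_mean):
    """control_ensemble: per-cell lists of ensemble members (e.g. K);
    perturbed_mean: per-cell value from the forced simulation."""
    out = []
    for members, signal in zip(control_ensemble, perturbed_mean):
        noise = statistics.stdev(members)
        anomaly = signal - statistics.mean(members)
        out.append(abs(anomaly) / noise)
    return out

# Two grid cells with the same 2 K anomaly but different internal
# variability: only the quiet cell yields a detectable signal.
control = [[14.8, 15.2, 15.1, 14.9],   # low internal variability
           [10.0, 13.0, 9.0, 12.0]]    # high internal variability
forced = [17.0, 13.0]
ratios = snr(control, forced)
assert ratios[0] > 4 > ratios[1]       # detectable vs. hidden in noise
```

This is the sense in which the same forced anomaly can be attributable in one region and season yet undetectable in another.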
Estimating the Optimal Spatial Complexity of a Water Quality Model Using Multi-Criteria Methods
NASA Astrophysics Data System (ADS)
Meixner, T.
2002-12-01
Discretizing the landscape into multiple smaller units appears to be a necessary step for improving the performance of water quality models. However there is a need for adequate falsification methods to discern between discretization that improves model performance and discretization that merely adds to model complexity. Multi-criteria optimization methods promise a way to increase the power of model discrimination and a path to increasing our ability in differentiating between good and bad model discretization methods. This study focuses on the optimal level of spatial discretization of a water quality model, the Alpine Hydrochemical Model of the Emerald Lake watershed in Sequoia National Park, California. The 5 models of the watershed differ in the degree of simplification that they represent from the real watershed. The simplest model is just a lumped model of the entire watershed. The most complex model takes the 5 main soil groups in the watershed and represents each with a modeling subunit as well as having subunits for rock and talus areas in the watershed. Each of these models was calibrated using stream discharge and three chemical fluxes jointly as optimization criteria using a Pareto optimization routine, MOCOM-UA. After optimization the 5 models were compared for their performance using model criteria not used in calibration, the variability of model parameter estimates, and comparison to the mean of observations as a predictor of stream chemical composition. Based on these comparisons, the results indicate that the model with only 2 terrestrial subunits had the optimal level of model complexity. This result shows that increasing model complexity, even using detailed site specific data, is not always rewarded with improved model performance. Additionally, this result indicates that the most important geographic element for modeling water quality in alpine watersheds is accurately delineating the boundary between areas of rock and areas containing either
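The multi-criteria comparison underlying MOCOM-UA-style calibration reduces to Pareto dominance over error vectors; a minimal sketch (toy error values, not the Emerald Lake results):

```python
# Pareto dominance over model-error vectors (lower error is better):
# a candidate survives if no other candidate is at least as good on
# every criterion and strictly better on at least one.

def dominates(a, b):
    """True if error vector a Pareto-dominates error vector b."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates)]

# Errors on (discharge, chemical flux) for three model discretizations.
models = [(1.0, 3.0), (2.0, 2.0), (3.0, 3.5)]
front = pareto_front(models)
assert front == [(1.0, 3.0), (2.0, 2.0)]   # third model is dominated
```

A more complex discretization whose error vector is dominated by a simpler one is exactly the "added complexity without improved performance" outcome the study reports.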
NASA Astrophysics Data System (ADS)
Schiavon, Ricardo P.
2007-07-01
We present a new set of model predictions for 16 Lick absorption line indices from Hδ through Fe5335 and UBV colors for single stellar populations with ages ranging between 1 and 15 Gyr, [Fe/H] ranging from -1.3 to +0.3, and variable abundance ratios. The models are based on accurate stellar parameters for the Jones library stars and a new set of fitting functions describing the behavior of line indices as a function of effective temperature, surface gravity, and iron abundance. The abundances of several key elements in the library stars have been obtained from the literature in order to characterize the abundance pattern of the stellar library, thus allowing us to produce model predictions for any set of abundance ratios desired. We develop a method to estimate mean ages and abundances of iron, carbon, nitrogen, magnesium, and calcium that explores the sensitivity of the various indices modeled to those parameters. The models are compared to high-S/N data for Galactic clusters spanning the range of ages, metallicities, and abundance patterns of interest. Essentially all line indices are matched when the known cluster parameters are adopted as input. Comparing the models to high-quality data for galaxies in the nearby universe, we reproduce previous results regarding the enhancement of light elements and the spread in the mean luminosity-weighted ages of early-type galaxies. When the results from the analysis of blue and red indices are contrasted, we find good consistency in the [Fe/H] that is inferred from different Fe indices. Applying our method to estimate mean ages and abundances from stacked SDSS spectra of early-type galaxies brighter than L*, we find mean luminosity-weighted ages of the order of ~8 Gyr and iron abundances slightly below solar. Abundance ratios, [X/Fe], tend to be higher than solar and are positively correlated with galaxy luminosity. Of all elements, nitrogen is the most strongly correlated with galaxy luminosity, which seems to indicate
Thermophysical Model of S-complex NEAs: 1627 Ivar
NASA Astrophysics Data System (ADS)
Crowell, Jenna L.; Howell, Ellen S.; Magri, Christopher; Fernandez, Yan R.; Marshall, Sean E.; Warner, Brian D.; Vervack, Ronald J.
2015-11-01
We present updates to the thermophysical model of asteroid 1627 Ivar. Ivar is an Amor class near Earth asteroid (NEA) with a taxonomic type of Sqw [1] and a rotation rate of 4.795162 ± 5.4 × 10^-6 hours [2]. In 2013, our group observed Ivar in radar, in CCD lightcurves, and in the near-IR’s reflected and thermal regimes (0.8 - 4.1 µm) using the Arecibo Observatory’s 2380 MHz radar, the Palmer Divide Station’s 0.35m telescope, and the SpeX instrument at the NASA IRTF respectively. Using these radar and lightcurve data, we generated a detailed shape model of Ivar using the software SHAPE [3,4]. Our shape model reveals more surface detail compared to earlier models [5], and we found Ivar to be an elongated asteroid with maximum extents along the three body-fixed coordinates of 12 × 11.76 × 6 km. For our thermophysical modeling, we have used SHERMAN [6,7] with input parameters such as the asteroid’s IR emissivity, optical scattering law and thermal inertia, in order to complete thermal computations based on our shape model and the known spin state. We then create synthetic near-IR spectra that can be compared to our observed spectra, which cover a wide range of Ivar’s rotational longitudes and viewing geometries. As has been noted [6,8], the use of an accurate shape model is often crucial for correctly interpreting multi-epoch thermal emission observations. We will present what SHERMAN has let us determine about the reflective, thermal, and surface properties for Ivar that best reproduce our spectra. From our derived best-fit thermal parameters, we will learn more about the regolith, surface properties, and heterogeneity of Ivar and how those properties compare to those of other S-complex asteroids. References: [1] DeMeo et al. 2009, Icarus 202, 160-180 [2] Crowell, J. et al. 2015, LPSC 46 [3] Magri C. et al. 2007, Icarus 186, 152-177 [4] Crowell, J. et al. 2014, AAS/DPS 46 [5] Kaasalainen, M. et al. 2004, Icarus 167, 178-196 [6] Crowell, J. et
Rozovski, Uri; Verstovsek, Srdan; Manshouri, Taghi; Dembitz, Vilma; Bozinovic, Ksenija; Newberry, Kate; Zhang, Ying; Bove, Joseph E.; Pierce, Sherry; Kantarjian, Hagop; Estrov, Zeev
2017-01-01
In most patients with primary myelofibrosis, one of three mutually exclusive somatic mutations is detected. In approximately 60% of patients, the Janus kinase 2 gene is mutated, in 20%, the calreticulin gene is mutated, and in 5%, the myeloproliferative leukemia virus gene is mutated. Although patients with mutated calreticulin or myeloproliferative leukemia genes have a favorable outcome, and those with none of these mutations have an unfavorable outcome, prognostication based on mutation status is challenging due to the heterogeneous survival of patients with mutated Janus kinase 2. To develop a prognostic model based on mutation status, we screened primary myelofibrosis patients seen at the MD Anderson Cancer Center, Houston, USA, between 2000 and 2013 for the presence of Janus kinase 2, calreticulin, and myeloproliferative leukemia mutations. Of 344 primary myelofibrosis patients, Janus kinase 2V617F was detected in 226 (66%), calreticulin mutation in 43 (12%), and myeloproliferative leukemia mutation in 16 (5%); 59 patients (17%) were triple-negatives. A 50% cut-off dichotomized Janus kinase 2-mutated patients into those with high Janus kinase 2V617F allele burden and favorable survival and those with low Janus kinase 2V617F allele burden and unfavorable survival. Patients with a favorable mutation status (high Janus kinase 2V617F allele burden/myeloproliferative leukemia/calreticulin mutation) and aged 65 years or under had a median survival of 126 months. Patients with one risk factor (low Janus kinase 2V617F allele burden/triple-negative or age >65 years) had an intermediate survival duration, and patients aged over 65 years with an adverse mutation status (low Janus kinase 2V617F allele burden or triple-negative) had a median survival of only 35 months. Our simple and easily applied age- and mutation status-based scoring system accurately predicted the survival of patients with primary myelofibrosis. PMID:27686378
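The age- and mutation-status-based scoring system, as we read it from the abstract, reduces to counting two risk factors. A sketch follows (function and category names are ours, not the paper's; this is not a clinical tool):

```python
# Risk stratification sketch: one point for an adverse mutation status
# (low JAK2 V617F allele burden or triple-negative), one point for
# age > 65 years.

def risk_group(age, mutation_status):
    """mutation_status: 'high_jak2', 'calr' or 'mpl' (favorable);
    'low_jak2' or 'triple_negative' (adverse)."""
    adverse_mutation = mutation_status in ('low_jak2', 'triple_negative')
    risk_factors = int(adverse_mutation) + int(age > 65)
    return ['favorable', 'intermediate', 'adverse'][risk_factors]

assert risk_group(60, 'calr') == 'favorable'            # median ~126 months
assert risk_group(70, 'high_jak2') == 'intermediate'
assert risk_group(70, 'triple_negative') == 'adverse'   # median ~35 months
```

The abstract's three survival tiers map onto 0, 1, and 2 accumulated risk factors.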
Modeling Complex Workflow in Molecular Diagnostics
Gomah, Mohamed E.; Turley, James P.; Lu, Huimin; Jones, Dan
2010-01-01
One of the hurdles to achieving personalized medicine has been implementing the laboratory processes for performing and reporting complex molecular tests. The rapidly changing test rosters and complex analysis platforms in molecular diagnostics have meant that many clinical laboratories still use labor-intensive manual processing and testing without the level of automation seen in high-volume chemistry and hematology testing. We provide here a discussion of design requirements and the results of implementation of a suite of lab management tools that incorporate the many elements required for use of molecular diagnostics in personalized medicine, particularly in cancer. These applications provide the functionality required for sample accessioning and tracking, material generation, and testing that are particular to the evolving needs of individualized molecular diagnostics. On implementation, the applications described here resulted in improvements in the turn-around time for reporting of more complex molecular test sets, and significant changes in the workflow. Therefore, careful mapping of workflow can permit design of software applications that simplify even the complex demands of specialized molecular testing. By incorporating design features for order review, software tools can permit a more personalized approach to sample handling and test selection without compromising efficiency. PMID:20007844
NASA Astrophysics Data System (ADS)
Calvo, F.; Falvo, Cyril; Parneix, Pascal
2013-01-01
An explicit polarizable potential for the naphthalene-argon complex has been derived assuming only atomic contributions, aiming at large scale simulations of naphthalene under argon environment. The potential was parametrized from dedicated quantum chemical calculations at the CCSD(T) level, and satisfactorily reproduces available structural and energetic properties. Combining this potential with a tight-binding model for naphthalene, collisional energy transfer is studied by means of dedicated molecular dynamics simulations, nuclear quantum effects being accounted for in the path-integral framework. Except at low target temperature, nuclear quantum effects do not alter the average energies transferred by the collision or the collision duration. However, the distribution of energy transferred is much broader in the quantum case due to the significant zero-point energy and the higher density of states. Using an ab initio potential for the Ar-Ar interaction, the IR absorption spectrum of naphthalene solvated by argon clusters or an entire Ar matrix is computed via classical and centroid molecular dynamics. The classical spectra exhibit variations with growing argon environment that are absent from quantum spectra. This is interpreted by the greater fluxional character experienced by the argon atoms due to vibrational delocalization.
ERIC Educational Resources Information Center
Blything, Liam P.; Cain, Kate
2016-01-01
In a touch-screen paradigm, we recorded 3- to 7-year-olds' (N = 108) accuracy and response times (RTs) to assess their comprehension of 2-clause sentences containing "before" and "after". Children were influenced by order: performance was most accurate when the presentation order of the 2 clauses matched the chronological order…
Specifying and Refining a Complex Measurement Model.
ERIC Educational Resources Information Center
Levy, Roy; Mislevy, Robert J.
This paper aims to describe a Bayesian approach to modeling and estimating cognitive models both in terms of statistical machinery and actual instrument development. Such a method taps the knowledge of experts to provide initial estimates for the probabilistic relationships among the variables in a multivariate latent variable model and refines…
Acquisition of Complex Systemic Thinking: Mental Models of Evolution
ERIC Educational Resources Information Center
d'Apollonia, Sylvia T.; Charles, Elizabeth S.; Boyd, Gary M.
2004-01-01
We investigated the impact of introducing college students to complex adaptive systems on their subsequent mental models of evolution compared to those of students taught in the same manner but with no reference to complex systems. The students' mental models (derived from similarity ratings of 12 evolutionary terms using the pathfinder algorithm)…
NASA Astrophysics Data System (ADS)
Valentine, A. P.; Kaeufl, P.; De Wit, R. W. L.; Trampert, J.
2014-12-01
Obtaining knowledge about source parameters in (near) real-time during or shortly after an earthquake is essential for mitigating damage and directing resources in the aftermath of the event. Therefore, a variety of real-time source-inversion algorithms have been developed over recent decades. This has been driven by the ever-growing availability of dense seismograph networks in many seismogenic areas of the world and the significant advances in real-time telemetry. By definition, these algorithms rely on short time-windows of sparse, local and regional observations, resulting in source estimates that are highly sensitive to observational errors, noise and missing data. In order to obtain estimates more rapidly, many algorithms are either entirely based on empirical scaling relations or make simplifying assumptions about the Earth's structure, which can in turn lead to biased results. It is therefore essential that realistic uncertainty bounds are estimated along with the parameters. A natural means of propagating probabilistic information on source parameters through the entire processing chain, from first observations to potential end users and decision makers, is provided by the Bayesian formalism. We present a novel method based on pattern recognition allowing us to incorporate highly accurate physical modelling into an uncertainty-aware real-time inversion algorithm. The algorithm is based on a pre-computed Green's functions database, containing a large set of source-receiver paths in a highly heterogeneous crustal model. Unlike similar methods, which often employ a grid search, we use a supervised learning algorithm to relate synthetic waveforms to point source parameters. This training procedure has to be performed only once and leads to a representation of the posterior probability density function p(m|d), the distribution of source parameters m given observations d, which can be evaluated quickly for new data. Owing to the flexibility of the pattern
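The offline-precompute/online-evaluate split can be illustrated with a toy stand-in: a fake one-parameter forward model and a nearest-neighbor lookup in place of the authors' supervised learning algorithm (which returns a full posterior, not a point estimate):

```python
# Toy illustration of replacing an online grid search with a mapping
# trained on precomputed synthetics. The "forward model" is invented.
import math

def synthetic_waveform(depth):
    """Stand-in forward model: a decaying sinusoid controlled by depth."""
    return [math.exp(-t / depth) * math.sin(t) for t in range(1, 30)]

# Offline, done once: sample source parameters, precompute waveforms.
training = [(d, synthetic_waveform(d)) for d in range(2, 20)]

def estimate_depth(observed):
    """Online, fast: nearest neighbor in waveform space."""
    best = min(training,
               key=lambda dw: sum((a - b) ** 2
                                  for a, b in zip(dw[1], observed)))
    return best[0]

observed = synthetic_waveform(7)          # noise-free test case
assert estimate_depth(observed) == 7
```

The expensive physics lives entirely in the offline stage; the online cost is a cheap comparison, which is what makes the approach viable in real time.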
Scalable complexity-distortion model for fast motion estimation
NASA Astrophysics Data System (ADS)
Yi, Xiaoquan; Ling, Nam
2005-07-01
The recently established international video coding standard H.264/AVC and the upcoming scalable video coding (SVC) standard address part of the high-compression-ratio and heterogeneity requirements. However, these algorithms have prohibitive complexity for real-time encoding. There is therefore an important challenge in reducing encoding complexity, preferably in a scalable manner. Motion estimation and motion compensation techniques provide significant coding gain but are the most time-intensive parts of an encoder system. They present tremendous research challenges in designing a flexible, rate-distortion optimized, yet computationally efficient encoder, especially for various applications. In this paper, we present a scalable motion estimation framework for complexity-distortion consideration. We propose a new progressive initial search (PIS) method to generate an accurate initial search point, followed by a fast search method, which benefits greatly from the tighter bounds of the PIS. Such an approach offers not only significant speedup but also optimal distortion performance for a given complexity constraint. We analyze the relationship between computational complexity and distortion (C-D) through a probabilistic distance measure, extending the complexity and distortion theory. A configurable complexity quantization parameter (Q) is introduced. Simulation results demonstrate that the proposed scalable complexity-distortion framework enables a video encoder to conveniently adjust its complexity while providing the best possible service.
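For context, the baseline the PIS method accelerates is block matching with a sum-of-absolute-differences (SAD) cost over a search window. A generic full-search sketch follows (tiny 2x2 blocks and made-up frames; the paper's progressive initial search and fast refinement are not reproduced):

```python
# Full-search block matching with a SAD cost over a square window.

def sad(cur, ref, bx, by, dx, dy, n=2):
    """SAD between the n x n block at (bx, by) in cur and the block
    displaced by (dx, dy) in ref."""
    total = 0
    for i in range(n):
        for j in range(n):
            total += abs(cur[by + i][bx + j] - ref[by + dy + i][bx + dx + j])
    return total

def full_search(cur, ref, bx, by, radius=1, n=2):
    """Return the motion vector (dx, dy) minimizing SAD in the window."""
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (0 <= by + dy and by + dy + n <= len(ref)
                    and 0 <= bx + dx and bx + dx + n <= len(ref[0])):
                cost = sad(cur, ref, bx, by, dx, dy, n)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best[1], best[2]

# A 2x2 block that moved one pixel right between frames.
ref = [[9, 1, 2, 0],
       [0, 3, 4, 0],
       [0, 0, 0, 0]]
cur = [[0, 9, 1, 2],
       [0, 0, 3, 4],
       [0, 0, 0, 0]]
assert full_search(cur, ref, 1, 0) == (-1, 0)   # found in ref one pixel left
```

Full search is exhaustive in the window; a good initial point such as PIS provides lets a fast search visit only a small fraction of these candidates.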
Chen, Liao Y
2015-04-14
Computing the free energy of binding a ligand to a protein is a difficult task of essential importance, for which various theoretical/computational approaches have been pursued. In this paper, we develop a hybrid steered molecular dynamics (hSMD) method capable of resolving one ligand–protein complex within a few wall-clock days with high enough accuracy to compare with the experimental data. This hSMD approach is based on the relationship between the binding affinity and the potential of mean force (PMF) in the established literature. It involves simultaneously steering n (n = 1, 2, 3, ...) centers of mass of n selected segments of the ligand using n springs of infinite stiffness. Steering the ligand from a single initial state chosen from the bound state ensemble to the corresponding dissociated state, disallowing any fluctuations of the pulling centers along the way, one can determine a 3n-dimensional PMF curve connecting the two states by sampling a small number of forward and reverse pulling paths. This PMF constitutes a large but not the sole contribution to the binding free energy. Two other contributors are (1) the partial partition function containing the equilibrium fluctuations of the ligand at the binding site and the deviation of the initial state from the PMF minimum and (2) the partial partition function containing rotation and fluctuations of the ligand around one of the pulling centers that is fixed at a position far from the protein. We implement this hSMD approach for two ligand–protein complexes whose structures were determined and whose binding affinities were measured experimentally: caprylic acid binding to bovine β-lactoglobulin and glutathione binding to Schistosoma japonicum glutathione S-transferase tyrosine 7 to phenylalanine mutant. Our computed binding affinities agree with the experimental data within a factor of 1.5. The total time of computation for these two all-atom model systems (consisting of 96K and 114K atoms
Baspinar, Alper; Cukuroglu, Engin; Nussinov, Ruth; Keskin, Ozlem; Gursoy, Attila
2014-01-01
The PRISM web server enables fast and accurate prediction of protein–protein interactions (PPIs). The prediction algorithm is knowledge-based. It combines structural similarity and accounts for evolutionary conservation in the template interfaces. The predicted models are stored in its repository. Given two protein structures, PRISM will provide a structural model of their complex if a matching template interface is available. Users can download the complex structure, retrieve the interface residues and visualize the complex model. The PRISM web server is user friendly, free and open to all users at http://cosbi.ku.edu.tr/prism. PMID:24829450
Green, Christopher T.; Zhang, Yong; Jurgens, Bryant C.; Starn, J. Jeffrey; Landon, Matthew K.
2014-01-01
Analytical models of the travel time distribution (TTD) from a source area to a sample location are often used to estimate groundwater ages and solute concentration trends. The accuracies of these models are not well known for geologically complex aquifers. In this study, synthetic datasets were used to quantify the accuracy of four analytical TTD models as affected by TTD complexity, observation errors, model selection, and tracer selection. Synthetic TTDs and tracer data were generated from existing numerical models with complex hydrofacies distributions for one public-supply well and 14 monitoring wells in the Central Valley, California. Analytical TTD models were calibrated to synthetic tracer data, and prediction errors were determined for estimates of TTDs and conservative tracer (NO3−) concentrations. Analytical models included a new, scale-dependent dispersivity model (SDM) for two-dimensional transport from the water table to a well, and three other established analytical models. The relative influence of the error sources (TTD complexity, observation error, model selection, and tracer selection) depended on the type of prediction. Geological complexity gave rise to complex TTDs in monitoring wells that strongly affected errors of the estimated TTDs. However, prediction errors for NO3− and median age depended more on tracer concentration errors. The SDM tended to give the most accurate estimates of the vertical velocity and other predictions, although TTD model selection had minor effects overall. Adding tracers improved predictions if the new tracers had different input histories. Studies using TTD models should focus on the factors that most strongly affect the desired predictions.
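To make the idea of an analytical TTD concrete, here is a minimal sketch (ours, not any of the four models compared in the study) that convolves a recharge concentration history with the classical exponential TTD of a well-mixed aquifer to estimate the present-day concentration at a well:

```python
import numpy as np

def exponential_ttd(tau_mean, ages):
    """Exponential travel-time distribution (well-mixed aquifer model)
    with mean travel time tau_mean."""
    return np.exp(-ages / tau_mean) / tau_mean

def well_concentration(input_history, tau_mean, dt=1.0):
    """Convolve a recharge concentration history (oldest value first)
    with the TTD to estimate today's concentration at the well."""
    n = len(input_history)
    ages = np.arange(n) * dt                  # age 0 = recharged today
    weights = exponential_ttd(tau_mean, ages) * dt
    weights /= weights.sum()                  # renormalize the truncated TTD
    return float(np.dot(weights, np.asarray(input_history)[::-1]))
```

A constant input history is returned unchanged, while a rising NO3− history yields a lagged, damped response at the well, which is exactly the behavior such models are used to estimate.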
NASA Astrophysics Data System (ADS)
Kim, Jibeom; Jeon, Joonhyeon
2015-01-01
Recently, related studies on the Equation Of State (EOS) have reported that the generalized van der Waals (GvdW) model gives poor representations in the near-critical region for non-polar and non-spherical molecules. Hence, there still remains the problem of choosing GvdW parameters to minimize the loss in describing saturated vapor densities and vice versa. This paper describes a recursive-model GvdW (rGvdW) for an accurate representation of pure fluid materials in the near-critical region. For the performance evaluation of rGvdW in the near-critical region, other EOS models are also applied together with two pure-molecule groups: alkanes and amines. The comparison results show that rGvdW provides much more accurate and reliable predictions of pressure than the others. This approach to EOS modeling gives an additional insight into the physical significance of accurate prediction of pressure in the near-critical region.
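For reference, the generalized van der Waals family reduces to the classic vdW form P = RT/(Vm − b) − a/Vm², whose critical constants follow directly from the parameters a and b. A minimal sketch in SI units; the CO2 constants used below are commonly tabulated textbook values, not taken from this paper:

```python
R = 8.314462618  # gas constant, J/(mol K)

def vdw_pressure(T, Vm, a, b):
    """van der Waals pressure (Pa) at temperature T (K) and molar
    volume Vm (m^3/mol)."""
    return R * T / (Vm - b) - a / Vm**2

def vdw_critical_point(a, b):
    """Critical constants implied by the vdW parameters:
    Tc = 8a/(27Rb), Pc = a/(27 b^2), Vc = 3b."""
    return 8.0 * a / (27.0 * R * b), a / (27.0 * b**2), 3.0 * b
```

At the critical point the computed pressure equals Pc exactly, which is a useful self-consistency check; the poor near-critical densities discussed above arise because real isotherms deviate from this simple cubic form there.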
NASA Astrophysics Data System (ADS)
Gasparetto, A.
2001-02-01
Vibration control of flexible link mechanisms with more than two flexible links is still an open question, mainly because defining a model that is adequate for the design of a controller is a rather difficult task. In this work, an accurate dynamic non-linear model of a flexible-link planar mechanism is presented. In order to bring the system into a form that is suitable for the design of a vibration controller, the model is then linearized about an operating point, so as to achieve a linear model of the system in the standard state-space form of system theory. The linear model obtained, which is valid for any planar mechanism with any number of flexible links, is then applied to a four-bar planar linkage. Extensive simulation is carried out, aimed at comparing the system dynamic evolution, both in the open- and in the closed-loop case, using the non-linear model and the linearized one. The results prove that the error made by using the linearized system instead of the non-linear one is small. Therefore, it can be concluded that the model proposed in this work can constitute an effective basis for designing and testing many types of vibration controllers for flexible planar mechanisms.
Complex Systems and Human Performance Modeling
2013-12-01
human communication patterns can be implemented in a task network modeling tool. Although queues are a basic feature in many task network modeling … time. MODELING COMMUNICATIVE BEHAVIOR. Barabasi (2010) argues that human communication patterns are “bursty”; that is, the inter-event arrival … Having implemented the methods advocated by Clauset et al. in C3TRACE, we have grown more confident that the human communication data discussed above
2016-01-01
In a touch-screen paradigm, we recorded 3- to 7-year-olds’ (N = 108) accuracy and response times (RTs) to assess their comprehension of 2-clause sentences containing before and after. Children were influenced by order: performance was most accurate when the presentation order of the 2 clauses matched the chronological order of events: “She drank the juice, before she walked in the park” (chronological order) versus “Before she walked in the park, she drank the juice” (reverse order). Differences in RTs for correct responses varied by sentence type: accurate responses were made more speedily for sentences that afforded an incremental processing of meaning. An independent measure of memory predicted this pattern of performance. We discuss these findings in relation to children’s knowledge of connective meaning and the processing requirements of sentences containing temporal connectives. PMID:27690492
Multiscale Computational Models of Complex Biological Systems
Walpole, Joseph; Papin, Jason A.; Peirce, Shayn M.
2014-01-01
Integration of data across spatial, temporal, and functional scales is a primary focus of biomedical engineering efforts. The advent of powerful computing platforms, coupled with quantitative data from high-throughput experimental platforms, has allowed multiscale modeling to expand as a means to more comprehensively investigate biological phenomena in experimentally relevant ways. This review aims to highlight recently published multiscale models of biological systems while using their successes to propose the best practices for future model development. We demonstrate that coupling continuous and discrete systems best captures biological information across spatial scales by selecting modeling techniques that are suited to the task. Further, we suggest how to best leverage these multiscale models to gain insight into biological systems using quantitative, biomedical engineering methods to analyze data in non-intuitive ways. These topics are discussed with a focus on the future of the field, the current challenges encountered, and opportunities yet to be realized. PMID:23642247
Sugden, Isaac; Adjiman, Claire S; Pantelides, Constantinos C
2016-12-01
The global search stage of crystal structure prediction (CSP) methods requires a fine balance between accuracy and computational cost, particularly for the study of large flexible molecules. A major improvement in the accuracy and cost of the intramolecular energy function used in the CrystalPredictor II [Habgood et al. (2015). J. Chem. Theory Comput. 11, 1957-1969] program is presented, where the most efficient use of computational effort is ensured via the use of adaptive local approximate model (LAM) placement. The entire search space of the relevant molecule's conformations is initially evaluated using a coarse, low accuracy grid. Additional LAM points are then placed at appropriate points determined via an automated process, aiming to minimize the computational effort expended in high-energy regions whilst maximizing the accuracy in low-energy regions. As the size, complexity and flexibility of molecules increase, the reduction in computational cost becomes marked. This improvement is illustrated with energy calculations for benzoic acid and the ROY molecule, and a CSP study of molecule (XXVI) from the sixth blind test [Reilly et al. (2016). Acta Cryst. B72, 439-459], which is challenging due to its size and flexibility. Its known experimental form is successfully predicted as the global minimum. The computational cost of the study is tractable without the need to make unphysical simplifying assumptions.
Information, complexity and efficiency: The automobile model
Allenby, B.
1996-08-08
The new, rapidly evolving field of industrial ecology - the objective, multidisciplinary study of industrial and economic systems and their linkages with fundamental natural systems - provides strong ground for believing that a more environmentally and economically efficient economy will be more information intensive and complex. Information and intellectual capital will be substituted for the more traditional inputs of materials and energy in producing a desirable, yet sustainable, quality of life. While at this point this remains a strong hypothesis, the evolution of the automobile industry can be used to illustrate how such substitution may, in fact, already be occurring in an environmentally and economically critical sector.
The Creation of Surrogate Models for Fast Estimation of Complex Model Outcomes.
Pruett, W Andrew; Hester, Robert L
2016-01-01
A surrogate model is a black box model that reproduces the output of another more complex model at a single time point. This is to be distinguished from the method of surrogate data, used in time series. The purpose of a surrogate is to reduce the time necessary for a computation at the cost of rigor and generality. We describe a method of constructing surrogates in the form of support vector machine (SVM) regressions for the purpose of exploring the parameter space of physiological models. Our focus is on the methodology of surrogate creation and accuracy assessment in comparison to the original model. This is done in the context of a simulation of hemorrhage in one model, "Small", and renal denervation in another, HumMod. In both cases, the surrogate predicts the drop in mean arterial pressure following the intervention. We asked three questions concerning surrogate models: (1) how many training examples are necessary to obtain an accurate surrogate, (2) is surrogate accuracy homogeneous, and (3) how much can computation time be reduced when using a surrogate. We found the minimum training set size that would guarantee maximal accuracy was widely variable, but could be algorithmically generated. The average error for the pressure response to the protocols was -0.05 ± 2.47 mmHg in Small and -0.3 ± 3.94 mmHg in HumMod. In the Small model, error grew with actual pressure drop, and in HumMod, larger pressure drops were overestimated by the surrogates. Surrogate use resulted in a six-order-of-magnitude decrease in computation time. These results suggest surrogate modeling is a valuable tool for generating predictions of an integrative model's behavior on densely sampled subsets of its parameter space.
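A minimal sketch of the surrogate workflow: fit a kernel regressor to input-output samples of an expensive model, then query the fit cheaply. For self-containment we use a hand-rolled RBF kernel ridge regression in place of the paper's SVM regression (both are kernel methods trained on model outputs), and the "expensive model" is a made-up stand-in, not Small or HumMod:

```python
import numpy as np

def expensive_model(p):
    """Stand-in for a slow integrative model: a 'pressure drop' (mmHg)
    as a smooth nonlinear function of two normalized parameters."""
    x, y = p
    return 30.0 * x + 10.0 * np.sin(3.0 * y) + 5.0 * x * y

def rbf_kernel(A, B, gamma=10.0):
    """Gaussian (RBF) kernel matrix between two point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class Surrogate:
    """Kernel ridge regression surrogate of a black-box model."""
    def fit(self, X, y, lam=1e-6):
        self.X = X
        K = rbf_kernel(X, X)
        self.alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
        return self
    def predict(self, X):
        return rbf_kernel(X, self.X) @ self.alpha

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, (300, 2))
y_train = np.array([expensive_model(p) for p in X_train])
surrogate = Surrogate().fit(X_train, y_train)
```

Once trained, each surrogate query is a small matrix-vector product, which is where the orders-of-magnitude speedup over re-running the full model comes from.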
Modeling Power Systems as Complex Adaptive Systems
Chassin, David P.; Malard, Joel M.; Posse, Christian; Gangopadhyaya, Asim; Lu, Ning; Katipamula, Srinivas; Mallow, J V.
2004-12-30
Physical analogs have shown considerable promise for understanding the behavior of complex adaptive systems, including macroeconomics, biological systems, social networks, and electric power markets. Many of today's most challenging technical and policy questions can be reduced to a distributed economic control problem. Indeed, economically based control of large-scale systems is founded on the conjecture that price-based regulation (e.g., auctions, markets) results in an optimal allocation of resources and emergent optimal system control. This report explores state-of-the-art physical analogs for understanding the behavior of some econophysical systems and for deriving stable and robust control strategies from them. We review and discuss applications of some analytic methods based on a thermodynamic metaphor, according to which the interplay between system entropy and conservation laws gives rise to intuitive and governing global properties of complex systems that cannot be otherwise understood. We apply these methods to the question of how power markets can be expected to behave under a variety of conditions.
Integrated Modeling of Complex Optomechanical Systems
NASA Astrophysics Data System (ADS)
Andersen, Torben; Enmark, Anita
2011-09-01
Mathematical modeling and performance simulation are playing an increasing role in large, high-technology projects. There are two reasons; first, projects are now larger than they were before, and the high cost calls for detailed performance prediction before construction. Second, in particular for space-related designs, it is often difficult to test systems under realistic conditions beforehand, and mathematical modeling is then needed to verify in advance that a system will work as planned. Computers have become much more powerful, permitting calculations that were not possible before. At the same time mathematical tools have been further developed and found acceptance in the community. Particular progress has been made in the fields of structural mechanics, optics and control engineering, where new methods have gained importance over the last few decades. Also, methods for combining optical, structural and control system models into global models have found widespread use. Such combined models are usually called integrated models and were the subject of this symposium. The objective was to bring together people working in the fields of ground-based optical telescopes, ground-based radio telescopes, and space telescopes. We succeeded in doing so and had 39 interesting presentations and many fruitful discussions during coffee and lunch breaks and social arrangements. We are grateful that so many top-ranked specialists found their way to Kiruna and we believe that these proceedings will prove valuable during much future work.
NASA Astrophysics Data System (ADS)
Brown, Alexander; Eviston, Connor
2017-02-01
Multiple FEM models of complex eddy current coil geometries were created and validated to calculate the change of impedance due to the presence of a notch. Realistic, capable simulations of eddy current inspections are required for model-assisted probability of detection (MAPOD) studies, inversion algorithms, experimental verification, and tailored probe design for NDE applications. An FEM solver was chosen to model complex real-world situations, including varying probe dimensions and orientations along with complex probe geometries. This will also enable the creation of a probe model library database with variable parameters. Verification and validation were performed using other commercially available eddy current modeling software as well as experimentally collected benchmark data. Data analysis and comparison showed that the created models correctly captured the probe-conductor interactions and accurately calculated the change in impedance in several experimental scenarios with acceptable error. The promising results of the models enabled the start of an eddy current probe model library, giving experimenters easy access to powerful parameter-based eddy current models for alternate project applications.
Felmy, Andrew R.; Mason, Marvin J.; Qafoku, Odeta; Dixon, David A.
2005-04-19
This symposium manuscript describes the development of an accurate aqueous thermodynamic model for predicting the speciation of Sr in the waste tanks at the Hanford site. A systematic approach is described that details the studies performed to define the most important inorganic and organic complexation reactions as well as the effects of other important metal ions that compete with Sr for complexation reactions with the chelates. By using this approach we were able to define a reduced set of inorganic complexation, organic complexation, and competing metal reactions that best represent the much more complex waste tank chemical system. A summary is presented of the final thermodynamic model for the system Na-Ca-Sr-OH-CO3-NO3-EDTA-HEDTA-H2O from 25 to 75 °C that was previously published in a variety of sources. Previously unpublished experimental data are also given for the competing metal Ni, as well as for the Na-Sr-CO3-PO4-H2O system and for the solubility of amorphous iron hydroxide in the presence of several organic chelating agents. These data were not used in model development but were key to the final selection of the specific chemical systems prioritized for detailed study.
A simple model clarifies the complicated relationships of complex networks
Zheng, Bojin; Wu, Hongrun; Kuang, Li; Qin, Jun; Du, Wenhua; Wang, Jianmin; Li, Deyi
2014-01-01
Real-world networks such as the Internet and the WWW share many common traits. Hundreds of models have been proposed to characterize these traits and so further our understanding of such networks. Because different models use very different mechanisms, it is widely believed that these traits originate from different causes. However, we find that a simple model based on optimisation can produce many traits, including scale-free, small-world, ultra-small-world, Delta-distribution, compact, fractal, regular and random networks. Moreover, by revising the proposed model, community-structure networks are generated. With this model and its revised versions, the complicated relationships of complex networks are illustrated. The model brings a new universal perspective to the understanding of complex networks and provides a universal method for modeling complex networks from the viewpoint of optimisation. PMID:25160506
Complex Chebyshev-polynomial-based unified model (CCPBUM) neural networks
NASA Astrophysics Data System (ADS)
Jeng, Jin-Tsong; Lee, Tsu-Tian
1998-03-01
In this paper, we propose a complex Chebyshev-polynomial-based unified model neural network for the approximation of complex-valued functions. Based on this approximate transformable technique, we derive the relationship between the single-layered neural network and the multi-layered perceptron neural network. It is shown that the complex Chebyshev-polynomial-based unified model neural network can be represented as a functional link network based on Chebyshev polynomials. We also derive a new learning algorithm for the proposed network. It turns out that the complex Chebyshev-polynomial-based unified model neural network not only retains the capability of a universal approximator, but also achieves faster learning than conventional complex feedforward/recurrent neural networks.
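The functional-link idea is that a Chebyshev expansion of the input turns function approximation into a single linear layer, so the weights can even be solved directly. A real-valued sketch of this (the paper treats complex-valued functions and a different learning rule; names are ours):

```python
import numpy as np

def chebyshev_features(x, order):
    """Chebyshev features T_0(x)..T_order(x) for x in [-1, 1], built
    with the recurrence T_{n+1}(x) = 2 x T_n(x) - T_{n-1}(x)."""
    feats = [np.ones_like(x), x]
    for _ in range(2, order + 1):
        feats.append(2 * x * feats[-1] - feats[-2])
    return np.stack(feats[: order + 1], axis=1)

def fit_flink(x, y, order=8):
    """Single-layer functional-link 'network': linear output weights on
    the Chebyshev features, solved by least squares."""
    Phi = chebyshev_features(x, order)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict(w, x):
    return chebyshev_features(x, len(w) - 1) @ w
```

Because only the output weights are trained, convergence is immediate here, which illustrates why functional-link networks can learn much faster than multi-layer networks trained by backpropagation.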
A musculoskeletal model of the elbow joint complex
NASA Technical Reports Server (NTRS)
Gonzalez, Roger V.; Barr, Ronald E.; Abraham, Lawrence D.
1993-01-01
This paper describes a musculoskeletal model that represents human elbow flexion-extension and forearm pronation-supination. Musculotendon parameters and the skeletal geometry were determined for the musculoskeletal model in the analysis of ballistic elbow joint complex movements. The key objective was to develop a computational model, guided by optimal control, to investigate the relationship among patterns of muscle excitation, individual muscle forces, and movement kinematics. The model was verified using experimental kinematic, torque, and electromyographic data from volunteer subjects performing both isometric and ballistic elbow joint complex movements. In general, the model predicted kinematic and muscle excitation patterns similar to what was experimentally measured.
An elementary method for implementing complex biokinetic models.
Leggett, R W; Eckerman, K F; Williams, L R
1993-03-01
Recent efforts to incorporate greater anatomical and physiological realism into biokinetic models have resulted in many cases in mathematically complex formulations that limit routine application of the models. This paper describes an elementary, computer-efficient technique for implementing complex compartmental models, with attention focused primarily on biokinetic models involving time-dependent transfer rates and recycling. The technique applies, in particular, to the physiologically based, age-specific biokinetic models recommended in Publication No. 56 of the International Commission on Radiological Protection, Age-Dependent Doses to Members of the Public from Intake of Radionuclides.
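The elementary technique alluded to above can be sketched as small-time-step first-order updates, which handle time-dependent transfer rates and recycling with no special machinery. Compartment names and rate values below are illustrative only, not ICRP model parameters:

```python
def step(contents, rates, dt):
    """One explicit first-order step. rates[(i, j)] is the fractional
    transfer rate from compartment i to compartment j per unit time;
    recycling is simply an entry such as ("liver", "blood") coexisting
    with ("blood", "liver")."""
    new = dict(contents)
    for (i, j), k in rates.items():
        q = k * contents[i] * dt   # amount transferred this step
        new[i] -= q
        new[j] += q
    return new

def simulate(contents, rate_fn, t_end, dt=0.001):
    """Integrate the model; rate_fn(t) may return different rate
    dictionaries at different times (age- or time-dependent rates)."""
    t = 0.0
    while t < t_end - 1e-12:
        contents = step(contents, rate_fn(t), dt)
        t += dt
    return contents
```

For a single transfer at constant rate this reproduces the exponential solution to within the step-size error, and total activity is conserved by construction.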
Lee, Mi Kyung; Coker, David F
2016-08-18
An accurate approach for computing intermolecular and intrachromophore contributions to spectral densities to describe the electronic-nuclear interactions relevant for modeling excitation energy transfer processes in light harvesting systems is presented. The approach is based on molecular dynamics (MD) calculations of classical correlation functions of long-range contributions to excitation energy fluctuations and a separate harmonic analysis and single-point gradient quantum calculations for electron-intrachromophore vibrational couplings. A simple model is also presented that enables detailed analysis of the shortcomings of standard MD-based excitation energy fluctuation correlation function approaches. The method introduced here avoids these problems, and its reliability is demonstrated in accurate predictions for bacteriochlorophyll molecules in the Fenna-Matthews-Olson pigment-protein complex, where excellent agreement with experimental spectral densities is found. This efficient approach can provide instantaneous spectral densities for treating the influence of fluctuations in environmental dissipation on fast electronic relaxation.
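A sketch of the standard MD-based route that the approach above builds on: autocorrelate the excitation-energy-gap fluctuations and take a weighted cosine transform. Prefactor conventions vary in the literature, so only the spectral shape should be read from this toy (function names and the windowing choice are ours):

```python
import numpy as np

def classical_correlation(du):
    """Unbiased autocorrelation of mean-subtracted gap fluctuations."""
    du = du - du.mean()
    n = len(du)
    full = np.correlate(du, du, mode="full")[n - 1:]
    return full / np.arange(n, 0, -1)

def spectral_density(du, dt):
    """Spectral density J(w) ~ w * cosine transform of C(t), up to a
    convention-dependent prefactor (assumed constant here)."""
    c = classical_correlation(du)
    window = np.hanning(2 * len(c))[len(c):]   # damp the noisy tail
    spec = np.fft.rfft(c * window).real * dt
    w = 2.0 * np.pi * np.fft.rfftfreq(len(c), d=dt)
    return w, w * spec
```

For a gap trajectory dominated by one vibrational mode, the peak of J(w) sits at that mode's frequency, which is the information the spectral density carries into open-system relaxation models.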
Impact of data quality and model complexity on prediction of pesticide leaching.
Dann, R L; Close, M E; Lee, R; Pang, L
2006-01-01
Accurate input data for leaching models are expensive and difficult to obtain, which may lead to the use of "general" non-site-specific input data. This study investigated the effect of using different quality data on model outputs. Three models of varying complexity, GLEAMS, LEACHM, and HYDRUS-2D, were used to simulate pesticide leaching at a field trial near Hamilton, New Zealand, on an allophanic silt loam using input data of varying quality. Each model was run for four different pesticides (hexazinone, procymidone, picloram and triclopyr); three different sets of pesticide sorption and degradation parameters (i.e., site optimized, laboratory derived, and sourced from the USDA Pesticide Properties Database); and three different sets of soil physical data of varying quality (i.e., site specific, regional database, and particle size distribution data). We found that the selection of site-optimized pesticide sorption (Koc) and degradation parameters (half-life), compared to the use of more general database-derived values, had significantly more impact than the quality of the soil input data used, but interestingly also more impact than the choice of model. Models run with pesticide sorption and degradation parameters derived from observed solute concentration data provided simulation outputs with goodness-of-fit values closest to optimum, followed by laboratory-derived parameters, with the USDA parameters providing the least accurate simulations. In general, when using pesticide sorption and degradation parameters optimized from site solute concentrations, the more complex models (LEACHM and HYDRUS-2D) were more accurate. However, when using USDA database-derived parameters, all models performed about equally.
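The parameters varied in the study map onto transport quantities in a standard way: a half-life gives a first-order decay rate, and Koc combined with the soil's organic-carbon fraction gives a retardation factor. The piston-flow screening estimate below is a didactic sketch, far simpler than GLEAMS, LEACHM or HYDRUS-2D; the default soil values are illustrative, not the Hamilton site's:

```python
import math

def decay_rate(half_life_days):
    """First-order degradation rate (1/day) from a half-life."""
    return math.log(2.0) / half_life_days

def retardation(koc_ml_g, foc, bulk_density_g_cm3, theta):
    """R = 1 + rho_b * Kd / theta, with Kd = Koc * foc (mL/g)."""
    kd = koc_ml_g * foc
    return 1.0 + bulk_density_g_cm3 * kd / theta

def fraction_leached(depth_m, pore_velocity_m_d, half_life_days,
                     koc_ml_g, foc, bulk_density_g_cm3=1.3, theta=0.35):
    """Piston-flow screening estimate: first-order attenuation over the
    retarded travel time to a given depth."""
    r = retardation(koc_ml_g, foc, bulk_density_g_cm3, theta)
    travel_time_d = depth_m * r / pore_velocity_m_d
    return math.exp(-decay_rate(half_life_days) * travel_time_d)
```

This also makes the study's headline finding intuitive: because Koc and half-life enter the attenuation exponentially, errors in them dominate over moderate errors in the soil physical inputs.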
NASA Astrophysics Data System (ADS)
Huang, X.; Bandilla, K.; Celia, M. A.; Bachu, S.
2013-12-01
Geological carbon sequestration can significantly contribute to climate-change mitigation only if it is deployed at a very large scale. This means that injection scenarios must occur, and be analyzed, at the basin scale. Various mathematical models of different complexity may be used to assess the fate of injected CO2 and/or resident brine. These models span the range from multi-dimensional, multi-phase numerical simulators to simple single-phase analytical solutions. In this study, we consider a range of models, all based on vertically-integrated governing equations, to predict the basin-scale pressure response to specific injection scenarios. The Canadian section of the Basal Aquifer is used as a test site to compare the different modeling approaches. The model domain covers an area of approximately 811,000 km2, and the total injection rate is 63 Mt/yr, corresponding to 9 locations where large point sources have been identified. Predicted areas of critical pressure exceedance are used as a comparison metric among the different modeling approaches. Comparison of the results shows that single-phase numerical models may be good enough to predict the pressure response over a large aquifer; however, a simple superposition of semi-analytical or analytical solutions is not sufficiently accurate because spatial variability of formation properties plays an important role in the problem, and these variations are not captured properly with simple superposition. We consider two different injection scenarios: injection at the source locations and injection at locations with more suitable aquifer properties. Results indicate that in formations with significant spatial variability of properties, strong variations in injectivity among the different source locations can be expected, leading to the need to transport the captured CO2 to suitable injection locations, thereby necessitating development of a pipeline network. We also consider the sensitivity of porosity and
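The superposition approach tested above can be sketched with the steady-state Thiem solution: the head change from several injectors is the sum of single-well solutions. This is exactly the homogeneous, single-phase idealization that the comparison finds too crude when formation properties vary spatially; the units and parameter values below are illustrative:

```python
import math

def thiem_head_change(q, transmissivity, r, r_outer):
    """Steady-state head change (m) at radius r (m) from one well;
    positive q (m^3/s) = injection, transmissivity in m^2/s."""
    return q / (2.0 * math.pi * transmissivity) * math.log(r_outer / r)

def superposed_head_change(point, wells, transmissivity, r_outer=1.0e5):
    """Linear superposition over (x, y, q) injection wells; valid only
    for single-phase flow in a homogeneous aquifer."""
    x, y = point
    total = 0.0
    for wx, wy, q in wells:
        r = max(math.hypot(x - wx, y - wy), 0.1)  # cap at a nominal well radius
        total += thiem_head_change(q, transmissivity, r, r_outer)
    return total
```

Linearity (the response to all wells equals the sum of single-well responses) is the property that breaks down once injectivity varies from site to site, which is why the study finds simple superposition insufficient at the basin scale.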
Classrooms as Complex Adaptive Systems: A Relational Model
ERIC Educational Resources Information Center
Burns, Anne; Knox, John S.
2011-01-01
In this article, we describe and model the language classroom as a complex adaptive system (see Logan & Schumann, 2005). We argue that linear, categorical descriptions of classroom processes and interactions do not sufficiently explain the complex nature of classrooms, and cannot account for how classroom change occurs (or does not occur), over…
Realistic modeling of complex oxide materials
NASA Astrophysics Data System (ADS)
Solovyev, I. V.
2011-01-01
Since electronic and magnetic properties of many transition-metal oxides can be efficiently controlled by external factors such as the temperature, pressure, electric or magnetic field, they are regarded as promising materials for various applications. From the viewpoint of the electronic structure, these phenomena are frequently related to the behavior of a small group of states located near the Fermi level. The basic idea of this project is to construct a model for the low-energy states, derive all the parameters rigorously on the basis of density functional theory (DFT), and to study this model by modern techniques. After a brief review of the method, the abilities of this approach will be illustrated on a number of examples, including multiferroic manganites and spin-orbital-lattice coupled phenomena in RVO3 (where R is a trivalent element).
Complex Network Modeling with an Emulab HPC
2012-09-01
field. Actual Joint Tactical Radio System (JTRS) radios, Operations Network (OPNET) emulations, and GNU (a recursive acronym: GNU's Not Unix) … open-source software-defined-radio software/firmware/hardware emulations can be accommodated. Index Terms—network emulation, Emulab, OPNET. I … other hand, simulation tools such as MATLAB, Optimized Network Engineering Tools (OPNET), NS2, and CORE (a modeling environment from Vitech
Computational Modeling of Uranium Hydriding and Complexes
Balasubramanian, K; Siekhaus, W J; McLean, W
2003-02-03
Uranium hydriding is one of the most important processes in this field and has received considerable attention over many years. Although many experimental and modeling studies have been carried out concerning the thermochemistry, diffusion kinetics and mechanisms of U-hydriding, very little is known about the electronic structure and electronic features that govern the U-hydriding process. Yet it is the electronic structure that controls the activation barrier and thus the rate of hydriding. Moreover, the roles of impurities and of the product UH{sub 3} in the hydriding rate are not fully understood. An early study by Condon and Larson concerns the kinetics of the U-hydrogen system and a mathematical model for the U-hydriding process. They proposed that hydrogen diffuses through the reactant phase before nucleating to form the hydride phase, and that the reaction is first order for hydriding and zero order for dehydriding. Condon also calculated and measured the reaction rates of U-hydriding and proposed a diffusion model for the process. This model was found to be in excellent agreement with the experimental reaction rates. From the slope of the Arrhenius plot, the activation energy was calculated as 6.35 kcal/mole. In a subsequent study, Kirkpatrick formulated a closed-form approximate solution to Condon's equations. Bloch and Mintz have proposed the kinetics and mechanism of the U-H reaction over a wide range of pressures and temperatures. They discussed their results through two models: one that considers hydrogen diffusion through a protective UH{sub 3} product layer, and a second in which hydride growth occurs at the hydride-metal interface. These authors obtained two-dimensional fits of the experimental data to the pressure-temperature reactions. Kirkpatrick and Condon have obtained a linear solution for the hydriding of uranium, showing that the calculated reaction rates compare quite well with the experimental data at a hydrogen pressure of 1 atm. Powell
Model complexity and performance: How far can we simplify?
NASA Astrophysics Data System (ADS)
Raick, C.; Soetaert, K.; Grégoire, M.
2006-07-01
Handling model complexity and reliability is a key area of research today. While complex models containing sufficient detail have become possible due to increased computing power, they often lead to too much uncertainty. On the other hand, very simple models often crudely oversimplify the real ecosystem and cannot be used for management purposes. Starting from a complex and validated 1D pelagic ecosystem model of the Ligurian Sea (NW Mediterranean Sea), we derived simplified aggregated models in which either the unbalanced algal growth, the functional group diversity or the explicit description of the microbial loop was sacrificed. To overcome the problem of data availability with adequate spatial and temporal resolution, the outputs of the complex model are used as the baseline of perfect knowledge to calibrate the simplified models. Objective criteria of model performance were used to compare the simplified models’ results to the complex model output and to the available data at the DYFAMED station in the central Ligurian Sea. We show that even the simplest (NPZD) model is able to represent the global ecosystem features described by the complex model (e.g., primary and secondary production, particulate organic matter export flux, etc.). However, a certain degree of sophistication in the formulation of some biogeochemical processes is required to produce realistic behaviors (e.g., phytoplankton competition, the potential carbon or nitrogen limitation of zooplankton ingestion, the model trophic closure, etc.). In general, a 9 state-variable model that has the functional group diversity removed, but which retains the bacterial loop and the unbalanced algal growth, performs best.
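The simplest model class mentioned above (NPZD) can be made concrete with a toy sketch. Below is a minimal, closed nutrient-phytoplankton-zooplankton-detritus model with forward-Euler stepping; all rate constants are illustrative, not taken from the paper. Because every loss term is routed back into N or D, total mass is conserved exactly, which mirrors the "trophic closure" issue the abstract raises:

```python
def npzd_step(N, P, Z, D, dt, mu=1.0, kN=0.5, g=0.4, kP=0.6,
              mP=0.05, mZ=0.05, r=0.1, beta=0.7):
    """One Euler step of a minimal closed NPZD model (single currency,
    e.g. nitrogen units). Illustrative parameter values only."""
    uptake  = mu * N / (kN + N) * P   # Michaelis-Menten phytoplankton growth
    grazing = g * P / (kP + P) * Z    # zooplankton ingestion
    p_mort  = mP * P                  # phytoplankton mortality -> detritus
    z_mort  = mZ * Z                  # zooplankton mortality -> detritus
    remin   = r * D                   # remineralization -> nutrients
    dN = -uptake + (1 - beta) * grazing + remin   # (1-beta): sloppy feeding
    dP = uptake - grazing - p_mort
    dZ = beta * grazing - z_mort
    dD = p_mort + z_mort - remin
    return N + dN * dt, P + dP * dt, Z + dZ * dt, D + dD * dt
```

The derivatives sum to zero by construction, so even the crude Euler scheme conserves total mass to machine precision.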
NASA Astrophysics Data System (ADS)
Cheng, Yong; Li, Zuoyong; Jiao, Liangbao; Lu, Hong; Cao, Xuehong
2016-07-01
We improved classic retinal modeling to alleviate the adverse effect of complex illumination on face recognition and extracted robust image features. Our improvements on classic retinal modeling covered three aspects. First, a combined filtering scheme was applied to simulate the functions of horizontal and amacrine cells for accurate local illumination estimation. Second, we developed an optimal threshold method for illumination classification. Finally, we proposed an adaptive factor acquisition model based on the arctangent function. Experimental results on the combined Yale B, the Carnegie Mellon University Pose, Illumination, and Expression (PIE), and the Labeled Face Parts in the Wild databases show that the proposed method can effectively alleviate illumination differences among images under complex illumination conditions, which helps improve the accuracy of both face recognition and facial feature point detection.
Using machine learning tools to model complex toxic interactions with limited sampling regimes.
Bertin, Matthew J; Moeller, Peter; Guillette, Louis J; Chapman, Robert W
2013-03-19
A major impediment to understanding the impact of environmental stress, including toxins and other pollutants, on organisms is that organisms are rarely challenged by only one or a few stressors in natural systems. Linking laboratory experiments, which practical considerations limit to a few stressors at a few levels, to real-world conditions is therefore constrained. In addition, while the existence of complex interactions among stressors can be identified by current statistical methods, these methods do not provide a means to construct mathematical models of the interactions. In this paper, we offer a two-step process by which complex interactions of stressors on biological systems can be modeled in an experimental design that is within the limits of practicality. We begin with the notion that environmental conditions circumscribe an n-dimensional hyperspace within which biological processes or end points are embedded. We then randomly sample this hyperspace to establish experimental conditions that span the range of the relevant parameters and conduct the experiment(s) based upon these selected conditions. Models of the complex interactions of the parameters are then extracted using machine learning tools, specifically artificial neural networks. This approach can rapidly generate highly accurate models of biological responses to complex interactions among environmentally relevant toxins, identify critical subspaces where nonlinear responses exist, and provide an expedient means of designing traditional experiments to test the impact of complex mixtures on biological responses. Further, this can be accomplished with an astonishingly small sample size.
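The two-step process described above (randomly sample the stressor hyperspace, then fit a flexible model) can be sketched in a few lines. The paper uses artificial neural networks; for brevity this sketch fits a quadratic response surface by least squares instead, and the "biological response" is a made-up nonlinear interaction between two stressors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: randomly sample the n-dimensional stressor hyperspace.
n_stressors, n_experiments = 4, 200
X = rng.uniform(0.0, 1.0, size=(n_experiments, n_stressors))

# Hypothetical end point: a nonlinear interaction between stressors
# 0 and 1 plus a main effect of stressor 2 (stands in for measured data).
y = X[:, 0] * X[:, 1] + 0.3 * X[:, 2]

def features(X):
    """Quadratic response-surface features: intercept, main effects,
    and all pairwise products (the interaction terms)."""
    cross = (X[:, :, None] * X[:, None, :]).reshape(len(X), -1)
    return np.hstack([np.ones((len(X), 1)), X, cross])

# Step 2: extract a model of the interactions from the sampled design.
coef, *_ = np.linalg.lstsq(features(X), y, rcond=None)
pred = features(X) @ coef
```

Because the toy response lies exactly in the feature span, the surrogate recovers it; with real data, a neural network plays the role of the flexible fitter.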
Felmy, Andrew R.; Qafoku, Odeta
2004-09-01
An aqueous thermodynamic model is developed which accurately describes the effects of high base concentration on the complexation of Ni2+ by ethylenedinitrilotetraacetic acid (EDTA). The model is primarily developed from extensive data on the solubility of Ni(OH)2(c) in the presence of EDTA, both with and without Ca2+ as a competing metal ion. The solubility data for Ni(OH)2(c) were obtained in solutions ranging in NaOH concentration from 0.01 to 11.6m, and in Ca2+ concentrations extending to saturation with respect to portlandite, Ca(OH)2. Owing to the inert nature of the Ni-EDTA complexation reactions, solubility experiments were approached from both the oversaturation and undersaturation directions and over time frames extending to 413 days. The final aqueous thermodynamic model is based upon the equations of Pitzer, accurately predicts the observed solubilities at concentrations as high as 11.6m NaOH, and is consistent with UV-Vis spectroscopic studies of the complexes in solution.
Prequential Analysis of Complex Data with Adaptive Model Reselection.
Clarke, Jennifer; Clarke, Bertrand
2009-11-01
In Prequential analysis, an inference method is viewed as a forecasting system, and the quality of the inference method is based on the quality of its predictions. This is an alternative approach to more traditional statistical methods that focus on the inference of parameters of the data generating distribution. In this paper, we introduce adaptive combined average predictors (ACAPs) for the Prequential analysis of complex data. That is, we use convex combinations of two different model averages to form a predictor at each time step in a sequence. A novel feature of our strategy is that the models in each average are re-chosen adaptively at each time step. To assess the complexity of a given data set, we introduce measures of data complexity for continuous response data. We validate our measures in several simulated contexts prior to using them in real data examples. The performance of ACAPs is compared with the performances of predictors based on stacking or likelihood weighted averaging in several model classes and in both simulated and real data sets. Our results suggest that ACAPs achieve a better trade-off between model list bias and model list variability in cases where the data is very complex. This implies that the choices of model class and averaging method should be guided by a concept of complexity matching, i.e., the analysis of a complex data set may require a more complex model class and averaging strategy than the analysis of a simpler data set. We propose that complexity matching is akin to a bias-variance tradeoff in statistical modeling.
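The core Prequential idea, forming a convex combination of two forecasters whose mixing weight adapts as predictions are scored, can be sketched as follows. This is a toy stand-in for ACAPs (which additionally re-choose the models inside each average at every step); the exponential-weighting rule and learning rate `eta` are illustrative assumptions:

```python
import numpy as np

def sequential_convex_forecast(y, f1, f2, eta=2.0):
    """Sequential convex combination of two forecast streams f1, f2.
    Before each observation, the mixing weight is recomputed from the
    cumulative squared error of each forecaster (exponential weighting),
    so the combination shifts toward whichever has predicted better."""
    preds, L1, L2 = [], 0.0, 0.0
    for t in range(len(y)):
        w = np.exp(-eta * L1) / (np.exp(-eta * L1) + np.exp(-eta * L2))
        preds.append(w * f1[t] + (1 - w) * f2[t])   # convex combination
        L1 += (y[t] - f1[t]) ** 2                   # score after observing y[t]
        L2 += (y[t] - f2[t]) ** 2
    return np.array(preds)
```

With a constant target and one perfect forecaster, the weight starts at 1/2 and converges to the perfect forecaster, illustrating the adaptivity the paper builds on.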
Evaluating the Novel Methods on Species Distribution Modeling in Complex Forest
NASA Astrophysics Data System (ADS)
Tu, C. H.; Lo, N. J.; Chang, W. I.; Huang, K. Y.
2012-07-01
The prediction of species distribution has become a focus in ecology. To obtain predictions more effectively and accurately, novel methods have recently been proposed, such as the support vector machine (SVM) and maximum entropy (MAXENT). However, high complexity in the forest, like that in Taiwan, makes the modeling even harder. In this study, we aim to explore which method is more applicable to species distribution modeling in a complex forest. Castanopsis carlesii (long-leaf chinkapin, LLC), which grows widely in Taiwan, was chosen as the target species because its seeds are an important food source for animals. We overlaid the tree samples on layers of altitude, slope, aspect, terrain position, and a vegetation index derived from SPOT-5 images, and developed three models, MAXENT, SVM, and decision tree (DT), to predict the potential habitat of LLC. We evaluated these models with two sets of independent samples from different sites and examined the effect of forest complexity by changing the background sample size (BSZ). In the less complex case (small BSZ), the accuracies of the SVM (kappa = 0.87) and DT (0.86) models were slightly higher than that of MAXENT (0.84). In the more complex situation (large BSZ), MAXENT kept a high kappa value (0.85), whereas the SVM (0.61) and DT (0.57) models dropped significantly because they confine the predicted habitat to areas close to the samples. Therefore, the MAXENT model was more applicable for predicting a species' potential habitat in the complex forest, whereas the SVM and DT models tended to underestimate the potential habitat of LLC.
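The kappa values quoted above are Cohen's kappa, chance-corrected agreement between predicted and observed presence/absence. A minimal sketch of the binary-case computation (not the authors' code):

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa for binary presence/absence predictions:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(y_true)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n   # observed agreement
    p1t = sum(y_true) / n                                  # observed presence rate
    p1p = sum(y_pred) / n                                  # predicted presence rate
    pe = p1t * p1p + (1 - p1t) * (1 - p1p)                 # agreement by chance
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is why it is preferred over raw accuracy when presence and absence classes are unbalanced.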
Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter
2015-01-01
Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227
Modeling a Ca2+ Channel/BKCa Channel Complex at the Single-Complex Level
Cox, Daniel H.
2014-01-01
BKCa-channel activity often affects the firing properties of neurons, the shapes of neuronal action potentials (APs), and in some cases the extent of neurotransmitter release. It has become clear that BKCa channels often form complexes with voltage-gated Ca2+ channels (CaV channels) such that when a CaV channel is activated, the ensuing influx of Ca2+ activates its closely associated BKCa channel. Thus, in modeling the electrical properties of neurons, it would be useful to have quantitative models of CaV/BKCa complexes. Furthermore, in a population of CaV/BKCa complexes, all BKCa channels are not exposed to the same Ca2+ concentration at the same time. Thus, stochastic rather than deterministic models are required. To date, however, no such models have been described. Here I present a stochastic model of a CaV2.1/BKCa(α-only) complex, as might be found in a central nerve terminal. The CaV2.1/BKCa model is based on kinetic modeling of its two component channels at physiological temperature. Surprisingly, the CaV2.1/BKCa model predicts that although the CaV channel will open nearly every time during a typical cortical AP, its associated BKCa channel is expected to open in only 30% of trials, and this percentage is very sensitive to the duration of the AP, the distance between the two channels in the complex, and the presence of fast internal Ca2+ buffers. Also, the model predicts that the kinetics of the BKCa currents of a population of CaV2.1/BKCa complexes will not be limited by the kinetics of the CaV2.1 channel, and during a train of APs, the current response of the complex is expected to faithfully follow even very rapid trains. Aside from providing insight into how these complexes are likely to behave in vivo, the models presented here could also be of use more generally as components of higher-level models of neural function. PMID:25517147
Is there hope for multi-site complexation modeling?
Bickmore, Barry R.; Rosso, Kevin M.; Mitchell, S. C.
2006-06-06
It has been shown here that the standard formulation of the MUSIC model does not deliver the molecular-scale insight into oxide surface reactions that it promises. The model does not properly divide long-range electrostatic and short-range contributions to acid-base reaction energies, and it does not treat solvation in a physically realistic manner. However, even if the current MUSIC model does not succeed in its ambitions, its ambitions are still reasonable. It was a pioneering attempt in that Hiemstra and coworkers recognized that intrinsic equilibrium constants, from which the effects of long-range electrostatics have been removed, must be theoretically constrained prior to model fitting if there is to be any hope of obtaining molecular-scale insights from SCMs. We have also shown, on the other hand, that it may be premature to dismiss all valence-based models of acidity. Not only can some such models accurately predict intrinsic acidity constants, but they can also now be linked to the results of molecular dynamics simulations of solvated systems. Significant challenges remain for those interested in creating SCMs that are accurate at the molecular scale. Only after all model parameters can be predicted from theory, and the models validated against titration data, will we begin to have some confidence that we really are adequately describing the chemical systems in question.
Probabilistic Multi-Factor Interaction Model for Complex Material Behavior
NASA Technical Reports Server (NTRS)
Abumeri, Galib H.; Chamis, Christos C.
2010-01-01
Complex material behavior is represented by a single equation of product form to account for interaction among the various factors. The factors are selected by the physics of the problem and the environment that the model is to represent; for example, different factors will be required to represent temperature, moisture, erosion, corrosion, etc. It is important that the equation accurately represent the physics of the behavior in its entirety. The Multi-Factor Interaction Model (MFIM) is used to evaluate the divot weight (foam weight ejected) from the external launch tanks. The multi-factor equation has sufficient degrees of freedom to evaluate a large number of factors that may contribute to the divot ejection, and it accommodates all interactions through its product form. Each factor has an exponent that satisfies only two points, the initial and final points, and describes a monotonic path from the initial condition to the final one. The exponent values are selected so that the described path makes sense in the absence of experimental data. In the present investigation, the data used were obtained by testing simulated specimens in launching conditions. Results show that the MFIM is an effective method of describing the divot weight ejected under the conditions investigated. The problem lies in how to represent the divot weight with a single equation; a unique solution is a multi-factor equation of product form. Each factor is of the form (1 - xi/xf)^ei, where xi is the initial value, usually at ambient conditions, xf the final value, and ei the exponent that makes the represented curve unimodal while meeting the initial and final values. The exponents are either evaluated from test data or chosen by technical judgment; a minor disadvantage may be the selection of exponents in the absence of any empirical data. This form has been used successfully in describing the foam ejected in simulated space environmental conditions. Seven factors were required
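The product form described above is easy to state in code. This sketch evaluates the MFIM product of (1 - x/xf)^e terms for a list of factors; the numbers used in the test below are illustrative, not the divot data:

```python
def mfim(factors):
    """Multi-Factor Interaction Model: product of (1 - x/xf)**e terms,
    one per factor. `factors` is a list of (x, xf, e) tuples giving the
    current value, the final value, and the exponent for each factor."""
    result = 1.0
    for x, xf, e in factors:
        result *= (1.0 - x / xf) ** e
    return result
```

Each term equals 1 when its factor sits at the initial (zero) condition and 0 at the final value xf, so any single factor reaching its final value drives the modeled quantity to zero, which is how the product form couples all factors.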
Size and complexity in model financial systems.
Arinaminpathy, Nimalan; Kapadia, Sujit; May, Robert M
2012-11-06
The global financial crisis has precipitated an increasing appreciation of the need for a systemic perspective toward financial stability. For example: What role do large banks play in systemic risk? How should capital adequacy standards recognize this role? How is stability shaped by concentration and diversification in the financial system? We explore these questions using a deliberately simplified, dynamic model of a banking system that combines three different channels for direct transmission of contagion from one bank to another: liquidity hoarding, asset price contagion, and the propagation of defaults via counterparty credit risk. Importantly, we also introduce a mechanism for capturing how swings in "confidence" in the system may contribute to instability. Our results highlight that the importance of relatively large, well-connected banks in system stability scales more than proportionately with their size: the impact of their collapse arises not only from their connectivity, but also from their effect on confidence in the system. Imposing tougher capital requirements on larger banks than smaller ones can thus enhance the resilience of the system. Moreover, these effects are more pronounced in more concentrated systems, and continue to apply, even when allowing for potential diversification benefits that may be realized by larger banks. We discuss some tentative implications for policy, as well as conceptual analogies in ecosystem stability and in the control of infectious diseases.
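Of the three contagion channels the model combines, the counterparty-credit-risk channel is the simplest to sketch: a defaulting bank imposes write-offs on its creditors, which can exhaust their capital and propagate the default. The sketch below implements only that one channel (liquidity hoarding, asset price contagion, and the confidence mechanism are omitted), with a made-up three-bank example in the test:

```python
def default_cascade(capital, exposures, shocked):
    """Minimal counterparty-loss cascade. `exposures[i][j]` is bank i's
    exposure to bank j; when j defaults, i writes off that exposure, and
    i defaults in turn once its cumulative losses reach its capital.
    Returns the set of defaulted bank indices."""
    n = len(capital)
    losses = [0.0] * n
    defaulted = set(shocked)
    frontier = list(shocked)
    while frontier:
        nxt = []
        for j in frontier:                 # propagate losses from new defaults
            for i in range(n):
                if i in defaulted:
                    continue
                losses[i] += exposures[i][j]
                if losses[i] >= capital[i]:
                    defaulted.add(i)
                    nxt.append(i)
        frontier = nxt
    return defaulted
```

Even this stripped-down channel shows the size effect discussed in the paper: a large, well-connected bank appears as a column of large exposures, so its default inflicts losses on many counterparties at once.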
Chowdhury, Amor; Sarjaš, Andrej
2016-09-15
The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.
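The core operation inside the Unscented Kalman Filter used above is the unscented transform: propagate a Gaussian through a nonlinearity via deterministic sigma points instead of linearizing. A minimal scalar sketch (standard Merwe-style scaling; parameter values are conventional defaults, not the paper's):

```python
import numpy as np

def unscented_transform(mean, var, f, alpha=1.0, beta=2.0, kappa=2.0):
    """Scalar unscented transform: approximate the mean and variance of
    f(X) for X ~ N(mean, var) using 2n+1 weighted sigma points (n=1)."""
    n = 1
    lam = alpha**2 * (n + kappa) - n
    s = np.sqrt((n + lam) * var)
    sigma = np.array([mean, mean + s, mean - s])          # sigma points
    wm = np.array([lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)])
    wc = wm.copy()
    wc[0] += 1.0 - alpha**2 + beta                        # covariance weight
    y = f(sigma)                                          # propagate through f
    ym = np.dot(wm, y)
    yv = np.dot(wc, (y - ym) ** 2)
    return ym, yv
```

For a linear map the transform is exact (f(x) = 2x + 1 on N(0, 1) gives mean 1, variance 4); for the nonlinear magnet/Hall-sensor model it captures the transformed statistics far better than a first-order linearization.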
Generalized complex geometry, generalized branes and the Hitchin sigma model
NASA Astrophysics Data System (ADS)
Zucchini, Roberto
2005-03-01
Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a Batalin-Vilkovisky field theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin-Vilkovisky cohomology contains an extra sector that is related non trivially to a novel cohomology associated with the branes as generalized complex submanifolds.
Experimental porcine model of complex fistula-in-ano
A Ba-Bai-Ke-Re, Ma-Mu-Ti-Jiang; Chen, Hui; Liu, Xue; Wang, Yun-Hai
2017-01-01
AIM To establish and evaluate an experimental porcine model of fistula-in-ano. METHODS Twelve healthy pigs were randomly divided into two groups. Under general anesthesia, the experimental group underwent rubber band ligation surgery, and the control group underwent an artificial damage technique. Clinical magnetic resonance imaging (MRI) and histopathological evaluation were performed on the 38th d and 48th d after surgery in both groups, respectively. RESULTS There were no significant differences between the experimental group and the control group in general characteristics such as body weight, gender, and the number of fistulas (P > 0.05). In the experimental group, 15 fistulas were confirmed clinically, 13 complex fistulas were confirmed by MRI, and 11 complex fistulas were confirmed by histopathology. The success rate in the porcine complex fistula model establishment was 83.33%. Among the 18 fistulas in the control group, 5 fistulas were confirmed clinically, 4 complex fistulas were confirmed by MRI, and 3 fistulas were confirmed by histopathology. The success rate in the porcine fistula model establishment was 27.78%. Thus, the success rate of the rubber band ligation group was significantly higher than that of the control group (P < 0.05). CONCLUSION Rubber band ligation is a stable and reliable method to establish complex fistula-in-ano models. Large animal models of complex anal fistulas can be used for the diagnosis and treatment of anal fistulas. PMID:28348488
Finite element analysis to model complex mitral valve repair.
Labrosse, Michel; Mesana, Thierry; Baxter, Ian; Chan, Vincent
2016-01-01
Although finite element analysis has been used to model simple mitral repair, it has not been used to model complex repair. A virtual mitral valve model was successful in simulating normal and abnormal valve function. Models were then developed to simulate an edge-to-edge repair and repair employing quadrangular resection. Stress contour plots demonstrated increased stresses along the mitral annulus, corresponding to the annuloplasty. The role of finite element analysis in guiding clinical practice remains undetermined.
NASA Astrophysics Data System (ADS)
Silva, Goncalo; Talon, Laurent; Ginzburg, Irina
2017-04-01
is thoroughly evaluated in three benchmark tests, which are run across three distinct permeability regimes. The first configuration is a horizontal porous channel, studied with a symbolic approach, where we construct the exact solutions of FEM and BF/IBF with different boundary schemes. The second problem refers to an inclined porous channel flow, which brings in as a new challenge the formation of spurious boundary layers in LBM; that is, numerical artefacts that arise from a deficient accommodation of the bulk solution by a low-accuracy boundary scheme. The third problem considers a porous flow past a periodic square array of solid cylinders, which intensifies the previous two tests with the simulation of a more complex flow pattern. The ensemble of numerical tests provides guidelines on the effect of grid resolution and the TRT free collision parameter on the accuracy and the quality of the velocity field, spanning from Stokes to Darcy permeability regimes. It is shown that, with the use of the high-order accurate boundary schemes, the simple, uniform-mesh-based TRT-LBM formulation can even surpass the accuracy of FEM employing body-fitted meshes.
First results from the International Urban Energy Balance Model Comparison: Model Complexity
NASA Astrophysics Data System (ADS)
Blackett, M.; Grimmond, S.; Best, M.
2009-04-01
A great variety of urban energy balance models has been developed. These vary in complexity from simple schemes that represent the city as a slab, through those which model various facets (i.e. road, walls and roof), to more complex urban forms (including street canyons with intersections) and features (such as vegetation cover and anthropogenic heat fluxes). Some schemes also incorporate detailed representations of momentum and energy fluxes distributed throughout various layers of the urban canopy layer. The models differ in the parameters they require to describe the site and in the demands they make on computational processing power. Many of these models have been evaluated using observational datasets, but to date no controlled comparisons have been conducted. Urban surface energy balance models provide a means to predict the energy exchange processes which influence factors such as urban temperature, humidity, atmospheric stability and winds. These all need to be modelled accurately to capture features such as the urban heat island effect and to provide key information for dispersion and air quality modelling. A comparison of the various models available will assist in improving current and future models and will assist in formulating research priorities for future observational campaigns within urban areas. In this presentation we will summarise the initial results of this international urban energy balance model comparison. In particular, the relative performance of the models involved will be compared based on their degree of complexity. These results will inform us on ways in which we can improve the modelling of air quality within, and climate impacts of, global megacities. The methodology employed in conducting this comparison followed that used in PILPS (the Project for Intercomparison of Land-Surface Parameterization Schemes), which is also endorsed by the GEWEX Global Land Atmosphere System Study (GLASS) panel. In all cases, models were run
The Use of Behavior Models for Predicting Complex Operations
NASA Technical Reports Server (NTRS)
Gore, Brian F.
2010-01-01
Modeling and simulation (M&S) plays an important role when complex human-system notions are being proposed, developed and tested within the system design process. National Aeronautics and Space Administration (NASA) as an agency uses many different types of M&S approaches for predicting human-system interactions, especially when it is early in the development phase of a conceptual design. NASA Ames Research Center possesses a number of M&S capabilities ranging from airflow, flight path models, aircraft models, scheduling models, human performance models (HPMs), and bioinformatics models among a host of other kinds of M&S capabilities that are used for predicting whether the proposed designs will benefit the specific mission criteria. The Man-Machine Integration Design and Analysis System (MIDAS) is a NASA ARC HPM software tool that integrates many models of human behavior with environment models, equipment models, and procedural / task models. The challenge to model comprehensibility is heightened as the number of models that are integrated and the requisite fidelity of the procedural sets are increased. Model transparency is needed for some of the more complex HPMs to maintain comprehensibility of the integrated model performance. This will be exemplified in a recent MIDAS v5 application model and plans for future model refinements will be presented.
A radio-frequency sheath model for complex waveforms
Turner, M. M.; Chabert, P.
2014-04-21
Plasma sheaths driven by radio-frequency voltages occur in contexts ranging from plasma processing to magnetically confined fusion experiments. An analytical understanding of such sheaths is therefore important, both intrinsically and as an element in more elaborate theoretical structures. Radio-frequency sheaths are commonly excited by highly anharmonic waveforms, but no analytical model exists for this general case. We present a mathematically simple sheath model that is in good agreement with earlier models for single frequency excitation, yet can be solved for arbitrary excitation waveforms. As examples, we discuss dual-frequency and pulse-like waveforms. The model employs the ansatz that the time-averaged electron density is a constant fraction of the ion density. In the cases we discuss, the error introduced by this approximation is small, and in general it can be quantified through an internal consistency condition of the model. This simple and accurate model is likely to have wide application.
Geometric modeling of subcellular structures, organelles, and multiprotein complexes.
Feng, Xin; Xia, Kelin; Tong, Yiying; Wei, Guo-Wei
2012-12-01
Recently, the structure, function, stability, and dynamics of subcellular structures, organelles, and multiprotein complexes have emerged as a leading interest in structural biology. Geometric modeling not only provides visualizations of shapes for large biomolecular complexes but also fills the gap between structural information and theoretical modeling, and enables the understanding of function, stability, and dynamics. This paper introduces a suite of computational tools for volumetric data processing, information extraction, surface mesh rendering, geometric measurement, and curvature estimation of biomolecular complexes. Particular emphasis is given to the modeling of cryo-electron microscopy data. Lagrangian triangle meshes are employed for the surface representation. On the basis of this representation, algorithms are developed for surface area and surface-enclosed volume calculation, and for curvature estimation. Methods for volumetric meshing are also presented. Because technological developments in computer science and mathematics have led to multiple choices at each stage of geometric modeling, we discuss the rationale behind the design and selection of the various algorithms. Analytical models are designed to test the computational accuracy and convergence of the proposed algorithms. Finally, we select a set of six cryo-electron microscopy datasets representing typical subcellular complexes to demonstrate the efficacy of the proposed algorithms in handling biomolecular surfaces and to explore their capability for geometric characterization of binding targets. This paper offers a comprehensive protocol for the geometric modeling of subcellular structures, organelles, and multiprotein complexes.
Surface complexation model for strontium sorption to amorphous silica and goethite
Carroll, Susan A; Roberts, Sarah K; Criscenti, Louise J; O'Day, Peggy A
2008-01-01
Strontium sorption to amorphous silica and goethite was measured as a function of pH and dissolved strontium and carbonate concentrations at 25°C. Strontium sorption gradually increases from 0 to 100% from pH 6 to 10 for both phases and requires multiple outer-sphere surface complexes to fit the data. All data are modeled using the triple layer model and the site-occupancy standard state; unless stated otherwise all strontium complexes are mononuclear. Strontium sorption to amorphous silica in the presence and absence of dissolved carbonate can be fit with tetradentate Sr2+ and SrOH+ complexes on the β-plane and a monodentate Sr2+ complex on the diffuse plane to account for strontium sorption at low ionic strength. Strontium sorption to goethite in the absence of dissolved carbonate can be fit with monodentate and tetradentate SrOH+ complexes and a tetradentate binuclear Sr2+ species on the β-plane. The binuclear complex is needed to account for enhanced sorption at high strontium surface loadings. In the presence of dissolved carbonate, additional monodentate Sr2+ and SrOH+ carbonate surface complexes on the β-plane are needed to fit strontium sorption to goethite. Modeling strontium sorption as outer-sphere complexes is consistent with quantitative analysis of extended X-ray absorption fine structure (EXAFS) on selected sorption samples that shows a single first shell of oxygen atoms around strontium, indicating hydrated surface complexes at the amorphous silica and goethite surfaces. Strontium surface complexation equilibrium constants determined in this study, combined with other alkaline earth surface complexation constants, are used to recalibrate a predictive model based on Born solvation and crystal-chemistry theory. The model is accurate to about 0.7 log K units. More studies are needed to determine the dependence of alkaline earth sorption on ionic strength and dissolved carbonate and sulfate concentrations for the development of a robust surface
Surface Complexation Model for Strontium Sorption to Amorphous Silica and Goethite
Carroll, S; Robers, S; Criscenti, L; O'Day, P
2007-11-30
Strontium sorption to amorphous silica and goethite was measured as a function of pH and dissolved strontium and carbonate concentrations at 25°C. Strontium sorption gradually increases from 0 to 100% from pH 6 to 10 for both phases and requires multiple outer-sphere surface complexes to fit the data. All data are modeled using the triple layer model and the site-occupancy standard state; unless stated otherwise all strontium complexes are mononuclear. Strontium sorption to amorphous silica in the presence and absence of dissolved carbonate can be fit with tetradentate Sr2+ and SrOH+ complexes on the β-plane and a monodentate Sr2+ complex on the diffuse plane to account for strontium sorption at low ionic strength. Strontium sorption to goethite in the absence of dissolved carbonate can be fit with monodentate and tetradentate SrOH+ complexes and a tetradentate binuclear Sr2+ species on the β-plane. The binuclear complex is needed to account for enhanced sorption at high strontium surface loadings. In the presence of dissolved carbonate additional monodentate Sr2+ and SrOH+ carbonate surface complexes on the β-plane are needed to fit strontium sorption to goethite. Modeling strontium sorption as outer-sphere complexes is consistent with quantitative analysis of extended X-ray absorption fine structure (EXAFS) on selected sorption samples that show a single first shell of oxygen atoms around strontium indicating hydrated surface complexes at the amorphous silica and goethite surfaces. Strontium surface complexation equilibrium constants determined in this study combined with other alkaline earth surface complexation constants are used to recalibrate a predictive model based on Born solvation and crystal-chemistry theory. The model is accurate to about 0.7 log K units. More studies are needed to determine the dependence of alkaline earth sorption on ionic strength and dissolved carbonate and sulfate
Cousin, F; Gummel, J; Combet, S; Boué, F
2011-09-14
We review, based on structural information, the mechanisms involved when two nano-objects of opposite electrical charge are brought into contact, in the case of a negatively charged polyion and a compact charged object. The central case is mixtures of PSS, a strong flexible polyanion (the salt of a strong acid, with high linear charge density), and Lysozyme, a globular protein with a global positive charge. A wide, accurate and consistent set of information in different situations is available on the structure at local scales (5-1000 Å), due to the possibility of matching, the reproducibility of the system, its well-defined electrostatic features, and the well-defined structures obtained. We have related these structures to the observations at macroscopic scale of the phase behavior, and to the expected mechanisms of coacervation. On the one hand, PSS/Lysozyme mixtures reproduce accurately much of what is expected in PEL/protein complexation and phase separation, as reviewed by de Kruif: under certain conditions some well-defined complexes are formed before any phase separation, and they are close to neutral; even in excess of one species, complexes are only modestly charged (surface charges in PEL excess). Neutral cores attract each other to form larger objects responsible for large turbidity. They should lead the system to phase separation; this is observed in the more dilute samples, while in the more concentrated ones the lack of separation in turbid samples is explained by locking effects between fractal aggregates. On the other hand, although some of the features just listed are the same as those required for coacervation, this phase transition is not really obtained. The phase separation has all the macroscopic aspects of a fluid (undifferentiated liquid/gas phase)-solid transition, not of a fluid-fluid (liquid-liquid) one, which would correspond to real coacervation. The origin of this can be found in the interaction potential between the primary complexes formed (globules
NASA Astrophysics Data System (ADS)
Davtyan, Aram; Voth, Gregory A.; Andersen, Hans C.
2016-12-01
We recently developed a dynamic force matching technique for converting a coarse-grained (CG) model of a molecular system, with a CG potential energy function, into a dynamic CG model with realistic dynamics [A. Davtyan et al., J. Chem. Phys. 142, 154104 (2015)]. This is done by supplementing the model with additional degrees of freedom, called "fictitious particles." In that paper, we tested the method on CG models in which each molecule is coarse-grained into one CG point particle, with very satisfactory results. When the method was applied to a CG model of methanol that has two CG point particles per molecule, the results were encouraging but clearly required improvement. In this paper, we introduce a new type (called type-3) of fictitious particle that exerts forces on the center of mass of two CG sites. A CG model constructed using type-3 fictitious particles (as well as type-2 particles previously used) gives a much more satisfactory dynamic model for liquid methanol. In particular, we were able to construct a CG model that has the same self-diffusion coefficient and the same rotational relaxation time as an all-atom model of liquid methanol. Type-3 particles and generalizations of it are likely to be useful in converting more complicated CG models into dynamic CG models.
Network model of bilateral power markets based on complex networks
NASA Astrophysics Data System (ADS)
Wu, Yang; Liu, Junyong; Li, Furong; Yan, Zhanxin; Zhang, Li
2014-06-01
The bilateral power transaction (BPT) mode has become a typical market organization with the restructuring of the electric power industry, and a proper model that can capture its characteristics is urgently needed. However, such a model has been lacking because of this market organization's complexity. As a promising approach to modeling complex systems, complex networks could provide a sound theoretical framework for developing a proper simulation model. In this paper, a complex network model of the BPT market is proposed. In this model, the price advantage mechanism is a precondition. Unlike other general commodity transactions, both the financial layer and the physical layer are considered in the model. Through simulation analysis, the feasibility and validity of the model are verified. At the same time, some typical statistical features of the BPT network are identified: the degree distribution follows a power law, the clustering coefficient is low, and the average path length is rather long. Moreover, the topological stability of the BPT network is tested. The results show that the network displays topological robustness to random market members' failures but is fragile against deliberate attacks, and that the network can resist cascading failure to some extent. These features are helpful for decision making and risk management in BPT markets.
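The small-world statistics reported for the BPT network (low clustering coefficient, somewhat long average path length) can be computed directly from an adjacency structure. A minimal pure-Python sketch on a hypothetical five-node transaction graph; the graph and all numbers are illustrative, not from the paper:

```python
from collections import deque

def clustering(adj, v):
    """Local clustering coefficient of node v: fraction of neighbor pairs linked."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

def avg_path_length(adj):
    """Mean shortest-path length over all connected node pairs, via BFS."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        for v, d in dist.items():
            if v != src:
                total += d
                pairs += 1
    return total / pairs

# Hypothetical bilateral-transaction network: node -> set of counterparties
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0, 4}, 4: {3}}
C = sum(clustering(adj, v) for v in adj) / len(adj)   # average clustering
L = avg_path_length(adj)                              # average path length
```

The same two statistics, computed on the simulated BPT network instead of this toy graph, are what the abstract compares against random-graph baselines.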
Using fMRI to Test Models of Complex Cognition
ERIC Educational Resources Information Center
Anderson, John R.; Carter, Cameron S.; Fincham, Jon M.; Qin, Yulin; Ravizza, Susan M.; Rosenberg-Lee, Miriam
2008-01-01
This article investigates the potential of fMRI to test assumptions about different components in models of complex cognitive tasks. If the components of a model can be associated with specific brain regions, one can make predictions for the temporal course of the BOLD response in these regions. An event-locked procedure is described for dealing…
Tips on Creating Complex Geometry Using Solid Modeling Software
ERIC Educational Resources Information Center
Gow, George
2008-01-01
Three-dimensional computer-aided drafting (CAD) software, sometimes referred to as "solid modeling" software, is easy to learn, fun to use, and becoming the standard in industry. However, many users have difficulty creating complex geometry with the solid modeling software. And the problem is not entirely a student problem. Even some teachers and…
Between complexity of modelling and modelling of complexity: An essay on econophysics
NASA Astrophysics Data System (ADS)
Schinckus, C.
2013-09-01
Econophysics is an emerging field dealing with complex systems and emergent properties. A deeper analysis of themes studied by econophysicists shows that research conducted in this field can be decomposed into two different computational approaches: “statistical econophysics” and “agent-based econophysics”. This methodological scission complicates the definition of the complexity used in econophysics. Therefore, this article aims to clarify what kind of emergences and complexities we can find in econophysics in order to better understand, on one hand, the current scientific modes of reasoning this new field provides; and on the other hand, the future methodological evolution of the field.
Zebrafish as an emerging model for studying complex brain disorders
Kalueff, Allan V.; Stewart, Adam Michael; Gerlai, Robert
2014-01-01
The zebrafish (Danio rerio) is rapidly becoming a popular model organism in pharmacogenetics and neuropharmacology. Both larval and adult zebrafish are currently used to increase our understanding of brain function, dysfunction, and their genetic and pharmacological modulation. Here we review the developing utility of zebrafish in the analysis of complex brain disorders (including, for example, depression, autism, psychoses, drug abuse and cognitive disorders), also covering zebrafish applications towards the goal of modeling major human neuropsychiatric and drug-induced syndromes. We argue that zebrafish models of complex brain disorders and drug-induced conditions have become a rapidly emerging critical field in translational neuropharmacology research. PMID:24412421
NASA Astrophysics Data System (ADS)
Shen, Yanfeng; Cesnik, Carlos E. S.
2016-09-01
This paper presents a new hybrid modeling technique for the efficient simulation of guided wave generation, propagation, and interaction with damage in complex composite structures. A local finite element model is deployed to capture the piezoelectric effects and actuation dynamics of the transmitter, while the global-domain wave propagation and interaction with structural complexity (structural features and damage) are solved using a local interaction simulation approach (LISA). This hybrid approach allows accurate modeling of the local dynamics of the transducers while keeping the LISA formulation in an explicit format, which facilitates its readiness for parallel computing. The global LISA framework was extended through the 3D Kelvin-Voigt viscoelasticity theory to include anisotropic damping effects for composite structures, as an improvement over the existing LISA formulation. The global LISA framework was implemented using the compute unified device architecture (CUDA) running on graphic processing units. A commercial preprocessor is integrated seamlessly with the computational framework for grid generation and material property allocation to handle complex structures. The excitability and damping effects are successfully captured by this hybrid model, with experimental validation using scanning laser Doppler vibrometry. To demonstrate the capability of our hybrid approach for complex structures, guided wave propagation and interaction with a delamination in a composite panel with stiffeners is presented.
Complexation and molecular modeling studies of europium(III)-gallic acid-amino acid complexes.
Taha, Mohamed; Khan, Imran; Coutinho, João A P
2016-04-01
With many metal-based drugs extensively used today in the treatment of cancer, attention has focused on the development of new coordination compounds with antitumor activity, with europium(III) complexes recently introduced as novel anticancer drugs. The aim of this work is to design new Eu(III) complexes with gallic acid, an antioxidant phenolic compound. Gallic acid was chosen because it shows anticancer activity without harming healthy cells. As an antioxidant, it helps to protect human cells against oxidative damage that is implicated in DNA damage, cancer, and accelerated cell aging. In this work, the formation of binary and ternary complexes of Eu(III) with gallic acid, the primary ligand, and the amino acids alanine, leucine, isoleucine, and tryptophan was studied by glass electrode potentiometry in aqueous solution containing 0.1 M NaNO3 at (298.2 ± 0.1) K. Their overall stability constants were evaluated and the concentration distributions of the complex species in solution were calculated. The protonation constants of gallic acid and the amino acids were also determined under our experimental conditions and compared with those predicted using the conductor-like screening model for realistic solvation (COSMO-RS). The geometries of the Eu(III)-gallic acid complexes were characterized by density functional theory (DFT). UV-visible spectroscopy and photoluminescence measurements were carried out to confirm the formation of Eu(III)-gallic acid complexes in aqueous solutions.
Multiscale Model for the Assembly Kinetics of Protein Complexes.
Xie, Zhong-Ru; Chen, Jiawen; Wu, Yinghao
2016-02-04
The assembly of proteins into high-order complexes is a general mechanism for these biomolecules to implement their versatile functions in cells. Natural evolution has developed various assembling pathways for specific protein complexes to maintain their stability and proper activities. Previous studies have provided numerous examples of the misassembly of protein complexes leading to severe biological consequences. Although the research focusing on protein complexes has started to move beyond the static representation of quaternary structures to the dynamic aspect of their assembly, the current understanding of the assembly mechanism of protein complexes is still largely limited. To tackle this problem, we developed a new multiscale modeling framework. This framework combines a lower-resolution rigid-body-based simulation with a higher-resolution Cα-based simulation method so that protein complexes can be assembled with both structural details and computational efficiency. We applied this model to a homotrimer and a heterotetramer as simple test systems. Consistent with experimental observations, our simulations indicated very different kinetics between protein oligomerization and dimerization. The formation of protein oligomers is a multistep process that is much slower than dimerization but thermodynamically more stable. Moreover, we showed that even the same protein quaternary structure can have very diverse assembly pathways under different binding constants between subunits, which is important for regulating the functions of protein complexes. Finally, we revealed that the binding between subunits in a complex can be synergistically strengthened during assembly without considering allosteric regulation or conformational changes. Therefore, our model provides a useful tool to understand the general principles of protein complex assembly.
Pedigree models for complex human traits involving the mitochondrial genome
Schork, N.J.; Guo, S.W.
1993-12-01
Recent biochemical and molecular-genetic discoveries concerning variations in human mtDNA have suggested a role for mtDNA mutations in a number of human traits and disorders. Although the importance of these discoveries cannot be emphasized enough, the complex natures of mitochondrial biogenesis, mutant mtDNA phenotype expression, and the maternal inheritance pattern exhibited by mtDNA transmission make it difficult to develop models that can be used routinely in pedigree analyses to quantify and test hypotheses about the role of mtDNA in the expression of a trait. In the present paper, the authors describe complexities inherent in mitochondrial biogenesis and genetic transmission and show how these complexities can be incorporated into appropriate mathematical models. The authors offer a variety of likelihood-based models which account for the complexities discussed. The derivation of the models is meant to stimulate the construction of statistical tests for putative mtDNA contribution to a trait. Results of simulation studies which make use of the proposed models are described. The results of the simulation studies suggest that, although pedigree models of mtDNA effects can be reliable, success in mapping chromosomal determinants of a trait does not preclude the possibility that mtDNA determinants exist for the trait as well. Shortcomings inherent in the proposed models are described in an effort to expose areas in need of additional research. 58 refs., 5 figs., 2 tabs.
Systems Engineering Metrics: Organizational Complexity and Product Quality Modeling
NASA Technical Reports Server (NTRS)
Mog, Robert A.
1997-01-01
Innovative organizational complexity and product quality models applicable to performance metrics for NASA-MSFC's Systems Analysis and Integration Laboratory (SAIL) missions and objectives are presented. An intensive research effort focuses on the synergistic combination of stochastic process modeling, nodal and spatial decomposition techniques, organizational and computational complexity, systems science and metrics, chaos, and proprietary statistical tools for accelerated risk assessment. This is followed by the development of a preliminary model, which is uniquely applicable and robust for quantitative purposes. Exercise of the preliminary model using a generic system hierarchy and the AXAF-I architectural hierarchy is provided. The Kendall test for positive dependence provides an initial verification and validation of the model. Finally, the research and development of the innovation is revisited prior to peer review. This research and development effort results in near-term, measurable SAIL organizational and product quality methodologies, enhanced organizational risk assessment and evolutionary modeling results, and improved statistical quantification of SAIL productivity interests.
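The Kendall test cited for verification is a rank-correlation test: it counts concordant versus discordant pairs. A minimal sketch of the tau-a statistic (no tie correction) on hypothetical complexity/defect scores; the data are illustrative placeholders, not SAIL measurements:

```python
def kendall_tau(x, y):
    """Kendall rank correlation (tau-a): (concordant - discordant) / total pairs."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical complexity vs. defect rankings for six subsystems
complexity = [1, 2, 3, 4, 5, 6]
defects = [2, 1, 4, 3, 6, 5]
tau = kendall_tau(complexity, defects)   # positive tau indicates positive dependence
```

A tau near +1 supports the positive dependence the model assumes between organizational complexity and the quality metric; significance would be assessed against the null distribution of tau.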
Emulator-assisted data assimilation in complex models
NASA Astrophysics Data System (ADS)
Margvelashvili, Nugzar Yu; Herzfeld, Mike; Rizwi, Farhan; Mongin, Mathieu; Baird, Mark E.; Jones, Emlyn; Schaffelke, Britta; King, Edward; Schroeder, Thomas
2016-09-01
Emulators are surrogates of complex models that run orders of magnitude faster than the original model. The utility of emulators for the data assimilation into ocean models is still not well understood. High complexity of ocean models translates into high uncertainty of the corresponding emulators which may undermine the quality of the assimilation schemes based on such emulators. Numerical experiments with a chaotic Lorenz-95 model are conducted to illustrate this point and suggest a strategy to alleviate this problem through the localization of the emulation and data assimilation procedures. Insights gained through these experiments are used to design and implement data assimilation scenario for a 3D fine-resolution sediment transport model of the Great Barrier Reef (GBR), Australia.
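The Lorenz-95 system (often written Lorenz-96) used in these twin experiments is a standard chaotic testbed for data assimilation. A minimal RK4 integration sketch; the forcing F = 8 and a 40-variable state are the conventional choices, while the step size and perturbation are illustrative:

```python
def lorenz95_rhs(x, F=8.0):
    """dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F, with cyclic indices."""
    n = len(x)
    return [(x[(i + 1) % n] - x[(i - 2) % n]) * x[(i - 1) % n] - x[i] + F
            for i in range(n)]

def rk4_step(x, dt, F=8.0):
    """One classical fourth-order Runge-Kutta step."""
    k1 = lorenz95_rhs(x, F)
    k2 = lorenz95_rhs([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)], F)
    k3 = lorenz95_rhs([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)], F)
    k4 = lorenz95_rhs([xi + dt * ki for xi, ki in zip(x, k3)], F)
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# 40 variables, small perturbation of the unstable fixed point x_i = F
x = [8.0] * 40
x[19] += 0.01
for _ in range(1000):        # integrate to t = 5 with dt = 0.005
    x = rk4_step(x, 0.005)
```

In an emulator-assisted assimilation experiment, trajectories like this one supply both the synthetic "truth" and the training data for the surrogate.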
Calibration of Complex Subsurface Reaction Models Using a Surrogate-Model Approach
Application of model assessment techniques to complex subsurface reaction models involves numerous difficulties, including non-trivial model selection, parameter non-uniqueness, and excessive computational burden. To overcome these difficulties, this study introduces SAMM (Simult...
NASA Astrophysics Data System (ADS)
Boykin, Timothy B.; Luisier, Mathieu; Klimeck, Gerhard; Jiang, Xueping; Kharche, Neerav; Zhou, Yu; Nayak, Saroj K.
2011-05-01
Accurate modeling of the π-bands of armchair graphene nanoribbons (AGNRs) requires correctly reproducing asymmetries in the bulk graphene bands, as well as providing a realistic model for hydrogen passivation of the edge atoms. The commonly used single-pz orbital approach fails on both these counts. To overcome these failures we introduce a nearest-neighbor, three-orbital-per-atom p/d tight-binding model for graphene. The parameters of the model are fit to first-principles calculations based on density functional theory as well as to those based on the many-body Green's function and screened-exchange formalism, giving excellent agreement with the ab initio AGNR bands. We employ this model to calculate the current-voltage characteristics of an AGNR MOSFET and the conductance of rough-edge AGNRs, finding significant differences versus the single-pz model. These results show that an accurate band structure model is essential for predicting the performance of graphene-based nanodevices.
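For contrast, the single-pz baseline that the authors improve upon fits in a few lines: the nearest-neighbor bands are E±(k) = ±|t f(k)|, symmetric about E = 0 and touching at the K point. The hopping t ≈ 2.7 eV and bond length 1.42 Å are common literature values; this sketch is the simple model being criticized, not the paper's p/d model:

```python
import cmath
import math

def pz_bands(kx, ky, t=2.7, a=1.42):
    """Single-pz nearest-neighbor graphene bands E±(k) = ±|t f(k)|, where
    f(k) sums phase factors over the three nearest-neighbor vectors."""
    deltas = [(a, 0.0),
              (-a / 2, a * math.sqrt(3) / 2),
              (-a / 2, -a * math.sqrt(3) / 2)]
    f = sum(cmath.exp(1j * (kx * dx + ky * dy)) for dx, dy in deltas)
    return -t * abs(f), t * abs(f)   # valence, conduction

# At the K point the two bands touch: the Dirac point
a = 1.42
K = (2 * math.pi / (3 * a), 2 * math.pi / (3 * math.sqrt(3) * a))
Ev, Ec = pz_bands(*K)     # both ~0: gapless, and exactly electron-hole symmetric
```

The exact ±E symmetry visible here is the asymmetry failure the abstract refers to: real graphene bands are not electron-hole symmetric, which is one motivation for the three-orbital model.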
Modeling of Complex Adaptive Systems in Air Operations
2006-09-01
control of C3 in an increasingly complex military environment. Control theory is a multidisciplinary science associated with dynamic systems and, while... AFRL-IF-RS-TR-2006-282, In-House Final Technical Report, September 2006.
Improving a regional model using reduced complexity and parameter estimation
Kelson, Victor A.; Hunt, Randall J.; Haitjema, Henk M.
2002-01-01
The availability of powerful desktop computers and graphical user interfaces for ground water flow models makes possible the construction of ever more complex models. A proposed copper-zinc sulfide mine in northern Wisconsin offers a unique case in which the same hydrologic system has been modeled using a variety of techniques covering a wide range of sophistication and complexity. Early in the permitting process, simple numerical models were used to evaluate the necessary amount of water to be pumped from the mine, reductions in streamflow, and the drawdowns in the regional aquifer. More complex models have subsequently been used in an attempt to refine the predictions. Even after so much modeling effort, questions regarding the accuracy and reliability of the predictions remain. We have performed a new analysis of the proposed mine using the two-dimensional analytic element code GFLOW coupled with the nonlinear parameter estimation code UCODE. The new model is parsimonious, containing fewer than 10 parameters, and covers a region several times larger in areal extent than any of the previous models. The model demonstrates the suitability of analytic element codes for use with parameter estimation codes. The simplified model results are similar to the more complex models; predicted mine inflows and UCODE-derived 95% confidence intervals are consistent with the previous predictions. More important, the large areal extent of the model allowed us to examine hydrological features not included in the previous models, resulting in new insights about the effects that far-field boundary conditions can have on near-field model calibration and parameterization. In this case, the addition of surface water runoff into a lake in the headwaters of a stream while holding recharge constant moved a regional ground watershed divide and resulted in some of the added water being captured by the adjoining basin. Finally, a simple analytical solution was used to clarify the GFLOW model
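The GFLOW-UCODE coupling described above is nonlinear regression wrapped around a forward model that the estimator treats as a black box. The core loop can be sketched with a toy two-parameter forward model standing in for GFLOW; the exponential model, parameter values, and observation points below are all illustrative, not the mine model:

```python
import math

def forward(params, xs):
    """Toy forward model h(x) = a * exp(-b * x), a stand-in for a GFLOW run."""
    a, b = params
    return [a * math.exp(-b * x) for x in xs]

def gauss_newton(params, xs, obs, iters=20, eps=1e-6):
    """Nonlinear least squares with a finite-difference Jacobian, the way
    UCODE-style estimators probe a black-box model (two-parameter case)."""
    p = list(params)
    for _ in range(iters):
        base = forward(p, xs)
        r = [o - m for o, m in zip(obs, base)]          # residuals
        cols = []
        for j in range(len(p)):                          # forward differences
            q = list(p)
            q[j] += eps
            pert = forward(q, xs)
            cols.append([(pert[i] - base[i]) / eps for i in range(len(xs))])
        # Solve the 2x2 normal equations (J^T J) dp = J^T r
        a11 = sum(c * c for c in cols[0])
        a22 = sum(c * c for c in cols[1])
        a12 = sum(u * v for u, v in zip(cols[0], cols[1]))
        b1 = sum(u * ri for u, ri in zip(cols[0], r))
        b2 = sum(v * ri for v, ri in zip(cols[1], r))
        det = a11 * a22 - a12 * a12
        p[0] += (a22 * b1 - a12 * b2) / det
        p[1] += (a11 * b2 - a12 * b1) / det
    return p

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
obs = [2.0 * math.exp(-0.7 * x) for x in xs]   # synthetic "observations"
fit = gauss_newton([1.5, 0.9], xs, obs)        # recovers a = 2.0, b = 0.7
```

With fewer than 10 parameters, as in the parsimonious GFLOW model, each iteration costs only a handful of forward runs, which is what makes the coupling practical.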
On explicit algebraic stress models for complex turbulent flows
NASA Technical Reports Server (NTRS)
Gatski, T. B.; Speziale, C. G.
1992-01-01
Explicit algebraic stress models that are valid for three-dimensional turbulent flows in noninertial frames are systematically derived from a hierarchy of second-order closure models. This represents a generalization of the model derived by Pope, who based his analysis on the Launder, Reece, and Rodi model restricted to two-dimensional turbulent flows in an inertial frame. The relationship between the new models and traditional algebraic stress models -- as well as anisotropic eddy viscosity models -- is theoretically established. The need for regularization is demonstrated in an effort to explain why traditional algebraic stress models have failed in complex flows. It is also shown that these explicit algebraic stress models can shed new light on what second-order closure models predict for the equilibrium states of homogeneous turbulent flows and can serve as a useful alternative in practical computations.
Complex groundwater flow systems as traveling agent models.
López Corona, Oliver; Padilla, Pablo; Escolero, Oscar; González, Tomas; Morales-Casique, Eric; Osorio-Olvera, Luis
2014-01-01
Analyzing field data from pumping tests, we show that as with many other natural phenomena, groundwater flow exhibits complex dynamics described by 1/f power spectrum. This result is theoretically studied within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Even more, we show that this implies that non-Kolmogorovian probability is needed for its study, and provide a set of new partial differential equations for groundwater flow.
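The 1/f diagnosis above rests on estimating the slope of the periodogram on log-log axes. A minimal pure-Python sketch using a seeded random walk as the test signal; a random walk has an ~1/f² (strongly red) spectrum, so it is a benchmark for the method, not the pumping-test data themselves:

```python
import cmath
import math
import random

def periodogram(x):
    """Brute-force DFT periodogram P(f_k) = |X_k|^2 / n, zero frequency skipped."""
    n = len(x)
    P = []
    for k in range(1, n // 2):
        Xk = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        P.append(abs(Xk) ** 2 / n)
    return P

def loglog_slope(P):
    """Least-squares slope of log P vs log f, so that P ~ f^slope."""
    xs = [math.log(k + 1) for k in range(len(P))]
    ys = [math.log(p) for p in P]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

random.seed(7)
walk, s = [], 0.0
for _ in range(256):            # integrated white noise: a red, 1/f^2-like signal
    s += random.gauss(0, 1)
    walk.append(s)
slope = loglog_slope(periodogram(walk))   # clearly negative; near -2 for a walk
```

A slope near -1 on field records, rather than 0 (white noise) or -2 (a random walk), is what motivates the 1/f characterization in the abstract.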
Complex groundwater flow systems as traveling agent models
López Corona, Oliver; Padilla, Pablo; Escolero, Oscar; González, Tomas; Morales-Casique, Eric; Osorio-Olvera, Luis
2014-01-01
Analyzing field data from pumping tests, we show that as with many other natural phenomena, groundwater flow exhibits complex dynamics described by 1/f power spectrum. This result is theoretically studied within an agent perspective. Using a traveling agent model, we prove that this statistical behavior emerges when the medium is complex. Some heuristic reasoning is provided to justify both spatial and dynamic complexity, as the result of the superposition of an infinite number of stochastic processes. Even more, we show that this implies that non-Kolmogorovian probability is needed for its study, and provide a set of new partial differential equations for groundwater flow. PMID:25337455
A Complex Systems Model Approach to Quantified Mineral Resource Appraisal
Gettings, M.E.; Bultman, M.W.; Fisher, F.S.
2004-01-01
For federal and state land management agencies, mineral resource appraisal has evolved from value-based to outcome-based procedures wherein the consequences of resource development are compared with those of other management options. Complex systems modeling is proposed as a general framework in which to build models that can evaluate outcomes. Three frequently used methods of mineral resource appraisal (subjective probabilistic estimates, weights of evidence modeling, and fuzzy logic modeling) are discussed to obtain insight into methods of incorporating complexity into mineral resource appraisal models. Fuzzy logic and weights of evidence are most easily utilized in complex systems models. A fundamental product of new appraisals is the production of reusable, accessible databases and methodologies so that appraisals can easily be repeated with new or refined data. The data are representations of complex systems and must be so regarded if all of their information content is to be utilized. The proposed generalized model framework is applicable to mineral assessment and other geoscience problems. We begin with a (fuzzy) cognitive map using (+1,0,-1) values for the links and evaluate the map for various scenarios to obtain a ranking of the importance of various links. Fieldwork and modeling studies identify important links and help identify unanticipated links. Next, the links are given membership functions in accordance with the data. Finally, processes are associated with the links; ideally, the controlling physical and chemical events and equations are found for each link. After calibration and testing, this complex systems model is used for predictions under various scenarios.
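The (+1, 0, -1) cognitive-map evaluation described above can be sketched as a synchronous fuzzy cognitive map (FCM) iteration, where each concept takes the sigmoid of the weighted sum of its inputs. The four concepts and link signs below are hypothetical placeholders, not from an actual mineral appraisal:

```python
import math

def fcm_step(state, W):
    """One synchronous FCM update: concept i <- sigmoid(sum_j W[j][i] * state[j])."""
    n = len(state)
    return [1.0 / (1.0 + math.exp(-sum(W[j][i] * state[j] for j in range(n))))
            for i in range(n)]

def run_fcm(state, W, iters=50):
    """Iterate until (in practice) the map settles to a fixed point."""
    for _ in range(iters):
        state = fcm_step(state, W)
    return state

# Hypothetical 4-concept map; W[j][i] is the influence of concept j on
# concept i, restricted to the (+1, 0, -1) link values of the first stage
W = [
    [0, 1, 0, 0],    # 0: mineral deposit promotes alteration
    [0, 0, 1, 0],    # 1: alteration promotes a geochemical anomaly
    [0, 0, 0, 0],    # 2: anomaly is a terminal concept
    [0, -1, 0, 0],   # 3: cover rocks suppress observable alteration
]
final = run_fcm([1.0, 0.0, 0.0, 1.0], W)
```

Ranking how `final` shifts as individual links are zeroed out is one simple way to score link importance across scenarios, before the links are upgraded to membership functions and then to process equations as the text describes.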
Radzi, Shairah; Dlaska, Constantin Edmond; Cowin, Gary; Robinson, Mark; Pratap, Jit; Schuetz, Michael Andreas; Mishra, Sanjay
2016-01-01
Background Pilon fracture reduction is a challenging surgery. Radiographs are commonly used to assess the quality of reduction, but are limited in revealing the remaining bone incongruities. This study aimed to develop a method for quantifying articular malreductions using 3D computed tomography (CT) and magnetic resonance imaging (MRI) models. Methods CT and MRI data were acquired from three pairs of human cadaveric ankle specimens. Common tibial pilon fractures were simulated by performing osteotomies on the ankle specimens. Five of the created fractures [three AO type-B (43-B1) and two AO type-C (43-C1) fractures] were then reduced and stabilised using titanium implants, then rescanned. All datasets were reconstructed into CT and MRI models and analysed with regard to intra-articular steps and gaps, surface deviations, malrotations and maltranslations of the bone fragments. Results Initial results revealed that type B fracture CT and MRI models differed by ~0.2 mm (step), ~0.18 mm (surface deviation), ~0.56° (rotation) and ~0.4 mm (translation). Type C fracture MRI models showed metal artefacts extending to the articular surface, making them unsuitable for analysis. Type C fracture CT models differed from their contralateral CT and MRI models by ~0.15 mm (surface deviation), ~1.63° (rotation) and ~0.4 mm (translation). Conclusions Type B fracture MRI models were comparable to CT models and may potentially be used for the postoperative assessment of articular reduction on a case-by-case basis. PMID:28090442
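A surface-deviation metric of the kind reported above amounts to an average nearest-neighbour distance between two vertex sets. A brute-force toy sketch (the exact metric and software used in the study may differ):

```python
import numpy as np

def surface_deviation(model, reference):
    """Mean nearest-neighbour distance from each model vertex to the
    reference surface (both given as N x 3 vertex arrays, in mm)."""
    d = np.linalg.norm(model[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Toy check: a plane of points shifted 0.2 mm along z deviates by 0.2 mm
xy = np.stack(np.meshgrid(np.arange(5.0), np.arange(5.0)), -1).reshape(-1, 2)
ref = np.c_[xy, np.zeros(len(xy))]
shifted = ref + [0.0, 0.0, 0.2]
print(round(surface_deviation(shifted, ref), 3))   # → 0.2
```

For real meshes with many vertices, a k-d tree (e.g., scipy.spatial.cKDTree) replaces the O(N²) distance matrix, but the metric is the same.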
A mutate-and-map strategy accurately infers the base pairs of a 35-nucleotide model RNA
Kladwang, Wipapat; Cordero, Pablo; Das, Rhiju
2011-01-01
We present a rapid experimental strategy for inferring base pairs in structured RNAs via an information-rich extension of classic chemical mapping approaches. The mutate-and-map method, previously applied to a DNA/RNA helix, systematically searches for single mutations that enhance the chemical accessibility of base-pairing partners distant in sequence. To test this strategy for structured RNAs, we have carried out mutate-and-map measurements for a 35-nt hairpin, called the MedLoop RNA, embedded within an 80-nt sequence. We demonstrate the synthesis of all 105 single mutants of the MedLoop RNA sequence and present high-throughput DMS, CMCT, and SHAPE modification measurements for this library at single-nucleotide resolution. The resulting two-dimensional data reveal visually clear, punctate features corresponding to RNA base pair interactions as well as more complex features; these signals can be qualitatively rationalized by comparison to secondary structure predictions. Finally, we present an automated, sequence-blind analysis that permits the confident identification of nine of the 10 MedLoop RNA base pairs at single-nucleotide resolution, while discriminating against all 1460 false-positive base pairs. These results establish the accuracy and information content of the mutate-and-map strategy and support its feasibility for rapidly characterizing the base-pairing patterns of larger and more complex RNA systems. PMID:21239468
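A sequence-blind analysis of the kind described can be approximated by Z-scoring each mutant's reactivity profile and flagging strong outlier positions as candidate pairing partners. The matrix below is synthetic and the cutoff illustrative; the published pipeline is more elaborate:

```python
import numpy as np

def infer_pairs(reactivity, z_cut=4.0):
    """For each mutated position (row), flag the position (column) whose
    chemical reactivity rises far above that row's background."""
    z = (reactivity - reactivity.mean(axis=1, keepdims=True)) \
        / reactivity.std(axis=1, keepdims=True)
    pairs = set()
    for i, row in enumerate(z):
        j = int(row.argmax())
        if row[j] > z_cut and j != i:
            pairs.add(tuple(sorted((i, j))))
    return sorted(pairs)

# Synthetic 40-nt toy: mutating one partner of a planted base pair
# exposes the other to chemical modification.
planted = [(0, 39), (1, 38), (2, 37)]
rng = np.random.default_rng(1)
R = rng.normal(1.0, 0.05, size=(40, 40))
for a, b in planted:
    R[a, b] += 2.0
    R[b, a] += 2.0
print(infer_pairs(R))
```

The punctate off-diagonal features the abstract describes correspond to exactly these high-Z entries; the symmetry of the signal (mutating either partner exposes the other) is what lets the pairs be called confidently.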
Highly compact and accurate circuit-level macro modeling of gate-all-around charge-trap flash memory
NASA Astrophysics Data System (ADS)
Kim, Seunghyun; Lee, Sang-Ho; Kim, Young-Goan; Cho, Seongjae; Park, Byung-Gook
2017-01-01
In this paper, a highly reliable circuit model of a gate-all-around (GAA) charge-trap flash (CTF) memory cell is proposed that accounts for transient behaviors, describing program operations with improved accuracy. Although several compact models have been reported in the literature, time-dependent behaviors have not been precisely reflected, and the discrepancies tend to worsen as the operation time elapses. Furthermore, the SPICE models developed in this work have been verified against measurement results from fabricated flash memory cells with a silicon-oxide-nitride-oxide-silicon (SONOS) structure. This more realistic model should be beneficial in designing system architectures and setting up operation schemes for the leading three-dimensional (3D) stacked CTF memories.
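Time-dependent program transients of the kind the model captures are often described behaviorally by a stretched-exponential threshold-voltage shift. The sketch below uses that common textbook form, not the paper's actual macro model, and all parameter values are illustrative:

```python
import numpy as np

def program_vth(t, v_t0=1.0, dvth_max=4.0, tau=1e-4, beta=0.6):
    """Stretched-exponential program transient: Vth(t) rises from the
    erased level v_t0 toward v_t0 + dvth_max as charge is trapped.
    (Generic behavioral form; parameters are illustrative, not fitted.)"""
    return v_t0 + dvth_max * (1.0 - np.exp(-(t / tau) ** beta))

# Threshold-voltage shift versus program pulse width
for ti in np.logspace(-7, -2, 6):
    print(f"{ti:8.1e} s  Vth = {program_vth(ti):.2f} V")
```

A SPICE macro model would typically embed such a characteristic as a controlled source whose value depends on the accumulated programming time, which is how time-dependent behavior enters a circuit-level simulation.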
Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab; Armstrong, Robert C.; Vanderveen, Keith
2008-09-01
The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
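Computational renormalization of a cellular automaton can be illustrated by majority-rule block coarse-graining: evolve a fine-grained configuration, then replace each block of cells by its majority value. The block size, rule, and majority scheme below are illustrative choices, not those of the report:

```python
import numpy as np

def step(state, rule=110):
    """One update of an elementary cellular automaton (periodic boundary)."""
    table = [(rule >> i) & 1 for i in range(8)]
    l, r = np.roll(state, 1), np.roll(state, -1)
    return np.array([table[4 * a + 2 * b + c] for a, b, c in zip(l, state, r)])

def renormalize(state, block=3):
    """Majority-rule block spin: coarse-grain each block to its majority."""
    blocks = state.reshape(-1, block)
    return (blocks.sum(axis=1) * 2 > block).astype(int)

rng = np.random.default_rng(0)
fine = rng.integers(0, 2, 81)
for _ in range(10):
    fine = step(fine)
coarse = renormalize(fine)          # 81 cells -> 27 cells
print(len(coarse), coarse.mean().round(2))
```

The renormalization-group question the abstract raises is whether statistics of interest (densities, correlation structure, emergent patterns) computed on the coarse model track those of the fine model, which is what would justify using the reduced model for prediction.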
NASA Astrophysics Data System (ADS)
Ryu, Jaiyoung; Hu, Xiao; Shadden, Shawn C.
2014-11-01
The cerebral circulation is unique in its ability to maintain blood flow to the brain under widely varying physiologic conditions. Incorporating this autoregulatory response is critical to cerebral blood flow modeling, as well as to investigations of pathological conditions. We discuss a one-dimensional nonlinear model of blood flow in the cerebral arteries that includes coupling to autoregulatory lumped-parameter networks. The model is tested on its ability to reproduce a common clinical assessment of autoregulatory function, the carotid artery compression test. The change in flow velocity at the middle cerebral artery (MCA) during carotid compression and release demonstrated strong agreement with published measurements. The model is then used to investigate vasospasm of the MCA, a common clinical concern following subarachnoid hemorrhage. Vasospasm was modeled by prescribing a vessel area reduction in the middle portion of the MCA. Our model showed similar increases in velocity for moderate vasospasm; however, for severe vasospasm (~90% area reduction), the blood flow velocity decreased owing to flow rerouting. This demonstrates a potentially important phenomenon that, if not properly anticipated, could lead to false-negative clinical assessments of vasospasm.
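The non-monotonic velocity behavior can be reproduced with even a zero-dimensional resistance analogue: a stenosed branch (Poiseuille resistance R ∝ 1/A² at fixed length) fed through an inlet resistance, with a parallel collateral path that reroutes flow as the stenosis worsens. All values below are illustrative, not taken from the model:

```python
def mca_velocity(area_frac, r_in=1.0, r0=0.1, r_col=4.0, p=1.0):
    """Velocity in a stenosed branch fed through an inlet resistance,
    with a parallel collateral path. Poiseuille scaling: R ~ 1/A**2.
    (Toy resistance network; all parameter values are illustrative.)"""
    r_mca = r0 / area_frac ** 2                       # stenosed-branch resistance
    r_par = r_mca * r_col / (r_mca + r_col)           # parallel combination
    q_total = p / (r_in + r_par)                      # total inflow
    q_mca = q_total * r_col / (r_mca + r_col)         # current-divider split
    return q_mca / area_frac                          # v = Q / A, with A0 = 1

for reduction in (0.0, 0.5, 0.9):
    v = mca_velocity(1.0 - reduction)
    print(f"{reduction:.0%} area reduction: v = {v:.2f}")
```

Moderate narrowing raises velocity (the usual transcranial Doppler signature of vasospasm), while severe narrowing diverts flow into the collateral path and the velocity falls back below baseline, which is the false-negative scenario the abstract warns about.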
NASA Astrophysics Data System (ADS)
Brauchle, J.; Hein, D.; Berger, R.
2015-04-01
Remote sensing in areas with extreme altitude differences is particularly challenging. In high mountain areas specifically, steep slopes reduce ground pixel resolution and degrade DEM quality. Exceptionally high brightness differences can, in part, no longer be captured by the sensors. Nevertheless, detailed information about mountainous regions is highly relevant: time and again, glacier lake outburst floods (GLOFs) and debris avalanches claim dozens of victims. Glaciers are sensitive to climate change and must be carefully monitored. Very detailed and accurate 3D maps provide a basic tool for the analysis of natural hazards and the monitoring of glacier surfaces in high mountain areas. There is a gap here, because the desired accuracies are often not