Rey-Martinez, Jorge; McGarvie, Leigh; Pérez-Fernández, Nicolás
2017-03-01
To develop a computerized model to simulate and predict the internal fluid thermodynamic behavior within both normal and hydropic horizontal ducts. This study used computational fluid dynamics software to simulate the effects of cooling and warming in two geometrical models representing the normal and hydropic ducts of one horizontal semicircular canal during 120 s. Temperature maps, vorticity, and velocity fields were successfully obtained to characterize the endolymphatic flow during the caloric test in the developed models. In the normal semicircular canal, a well-defined linear endolymphatic flow was obtained; its direction depends only on whether the simulation models the cooling or the warming condition. For the hydropic model, a non-effective endolymphatic flow was predicted; in this model the velocity and vorticity fields show a non-linear flow, with some vortices formed inside the hydropic duct. The obtained simulations support the underlying hypothesis that the hydrostatic caloric drive is dissipated by local convective flow in a hydropic duct.
Normal Brain-Skull Development with Hybrid Deformable VR Models Simulation.
Jin, Jing; De Ribaupierre, Sandrine; Eagleson, Roy
2016-01-01
This paper describes a simulation framework for a clinical application involving skull-brain co-development in infants, leading to a platform for craniosynostosis modeling. Craniosynostosis occurs when one or more sutures fuse early in life, resulting in an abnormal skull shape. Surgery is required to reopen the suture and reduce intracranial pressure, but is difficult to plan without a predictive model. We aim to study normal brain-skull growth by computer simulation, which requires a head model and appropriate mathematical methods for brain and skull growth, respectively. On the basis of our previous model, we further specified the suture model into fibrous and cartilaginous sutures and developed an algorithm for skull extension. We evaluate the resulting simulation by comparison with datasets of craniosynostosis cases and normal growth.
Fluid Structural Analysis of Human Cerebral Aneurysm Using Their Own Wall Mechanical Properties
Valencia, Alvaro; Burdiles, Patricio; Ignat, Miguel; Mura, Jorge; Rivera, Rodrigo; Sordo, Juan
2013-01-01
Computational Structural Dynamics (CSD) simulations, Computational Fluid Dynamics (CFD) simulations, and Fluid Structure Interaction (FSI) simulations were carried out in an anatomically realistic model of a saccular cerebral aneurysm with the objective of quantifying the effects of the type of simulation on the principal fluid and solid mechanics results. Eight CSD simulations, one CFD simulation, and four FSI simulations were made. The results allowed the study of the influence of the type of material elements in the solid, the aneurysm's wall thickness, and the type of simulation on the modeling of a human cerebral aneurysm. The simulations used the aneurysm's own wall mechanical properties. The most complex simulation was the fully coupled FSI simulation with hyperelastic Mooney-Rivlin material, normal internal pressure, and normal variable thickness. The FSI simulation coupled in one direction, using the same material, pressure, and thickness, produced the results most similar to those of the fully coupled FSI simulation, while requiring one-fourth of the calculation time. PMID:24151523
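The Mooney-Rivlin material referenced in this abstract can be illustrated with a short sketch. The formula below is the standard textbook result for the uniaxial Cauchy stress of an incompressible Mooney-Rivlin solid; the material constants used in the example are placeholders, not the aneurysm wall properties measured by the authors.

```python
def mooney_rivlin_uniaxial(stretch, c10, c01):
    """Cauchy stress for an incompressible Mooney-Rivlin material under
    uniaxial tension (standard textbook result):
        sigma = 2 * (lam**2 - 1/lam) * (c10 + c01/lam)
    c10 and c01 are the material constants; the values passed in below
    are illustrative placeholders, not the paper's fitted properties."""
    lam = stretch
    return 2.0 * (lam ** 2 - 1.0 / lam) * (c10 + c01 / lam)
```

As expected physically, the stress vanishes at the undeformed state (stretch = 1), is positive in tension, and negative in compression.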
NASA Astrophysics Data System (ADS)
Baidillah, Marlin R.; Takei, Masahiro
2017-06-01
A nonlinear normalization model, called the exponential model, has been developed for electrical capacitance tomography (ECT) with external electrodes under gap permittivity conditions. The exponential normalization is proposed on the basis of the inherently nonlinear relationship between the mixture permittivity and the measured capacitance caused by the gap permittivity of the inner wall. The parameters of the exponential equation are derived from an exponential curve fitted to simulation results, and a scaling function is added to adjust for the experimental system conditions. The exponential normalization was applied to two-dimensional low- and high-contrast dielectric distribution phantoms in both simulation and experimental studies, and was compared with other normalization models, i.e. the Parallel, Series, Maxwell and Böttcher models. Based on the comparison of image reconstruction results, the exponential model reliably predicts the nonlinear normalization of the measured capacitance for both low- and high-contrast dielectric distributions.
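The competing normalization models named in this abstract can be sketched as follows. The parallel and series forms are the standard ECT normalizations; the exponential form shown here is a simplified, assumed variant with its single parameter fixed by the endpoint capacitances (the paper's actual model uses fitted parameters plus a scaling function).

```python
import math

def normalize_parallel(C, C_low, C_high):
    # Parallel (linear) model: mixture capacitance assumed to add linearly
    # between the low- and high-permittivity calibration values.
    return (C - C_low) / (C_high - C_low)

def normalize_series(C, C_low, C_high):
    # Series model: reciprocal capacitances assumed to add.
    return (1.0 / C - 1.0 / C_low) / (1.0 / C_high - 1.0 / C_low)

def normalize_exponential(C, C_low, C_high):
    # Hypothetical exponential form C = C_low * exp(k * lam), with k fixed
    # so that C_high maps to lam = 1 (a simplification of the paper's model).
    k = math.log(C_high / C_low)
    return math.log(C / C_low) / k
```

All three map the calibration endpoints to 0 and 1 but disagree in between, which is exactly why the choice of normalization model affects image reconstruction.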
Modeling and simulation of normal and hemiparetic gait
NASA Astrophysics Data System (ADS)
Luengas, Lely A.; Camargo, Esperanza; Sanchez, Giovanni
2015-09-01
Gait is the collective term for the two types of bipedal locomotion, walking and running; this paper focuses on walking. The analysis of human gait is of interest to many disciplines, including biomechanics, human-movement science, rehabilitation, and medicine in general. Here we present a new model that is capable of reproducing the properties of walking, both normal and pathological. The aim of this paper is to establish the biomechanical principles that underlie human walking by using the Lagrange method. Dissipative forces are included through a Rayleigh dissipation function, which accounts for the effect of gait on the tissues. Depending on the value of the coefficient in the Rayleigh dissipation function, both normal and pathological gait can be simulated. We first apply the model to normal gait and then to permanent hemiparetic gait. Anthropometric data for an adult are used in the simulation; children's anthropometric data could also be used, provided the appropriate anthropometric tables are consulted. Validation of these models includes simulations of passive dynamic walking on level ground. The dynamic walking approach provides a new perspective on gait analysis, focusing on the kinematics and kinetics of gait. There have been studies and simulations of normal human gait, but few have focused on abnormal, especially hemiparetic, gait. Quantitative comparisons of the model predictions with gait measurements show that the model can reproduce the significant characteristics of normal gait.
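The modeling idea, a Lagrangian equation of motion with a Rayleigh dissipation term whose coefficient distinguishes normal from pathological gait, can be sketched in miniature with a single swing-leg pendulum. This is an illustration of the principle, not the authors' full model; all parameter values are assumed.

```python
import math

def swing_leg(theta0, omega0, b, dt=0.001, t_end=1.0,
              m=8.0, l=0.45, g=9.81):
    """Single-pendulum swing-leg sketch.

    Equation of motion from the Lagrangian of a point-mass pendulum with
    a Rayleigh dissipation function R = 0.5 * b * omega**2:
        m*l**2 * d(omega)/dt = -m*g*l*sin(theta) - b*omega
    A larger dissipation coefficient b mimics the extra energy loss of
    pathological (e.g. hemiparetic) gait. Returns the angle trajectory.
    """
    theta, omega = theta0, omega0
    I = m * l * l  # moment of inertia about the hip
    traj = [theta]
    steps = int(round(t_end / dt))
    for _ in range(steps):
        alpha = (-m * g * l * math.sin(theta) - b * omega) / I
        omega += alpha * dt          # semi-implicit Euler step
        theta += omega * dt
        traj.append(theta)
    return traj
```

With b = 0 the swing oscillates with roughly constant amplitude; with a large b the swing decays quickly, the qualitative signature the abstract attributes to the dissipation coefficient.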
Numerical Simulation of Dry Granular Flow Impacting a Rigid Wall Using the Discrete Element Method
Wu, Fengyuan; Fan, Yunyun; Liang, Li; Wang, Chao
2016-01-01
This paper presents a clump model based on the Discrete Element Method. A clump is closer to a real particle than a spherical particle is. Numerical simulations of several tests of dry granular flow impacting a rigid wall after flowing down an inclined chute were carried out, using five clump models of differing sphericity. By comparing the simulated normal force on the rigid wall with the experimental results, the clump model with the best sphericity was selected for the subsequent numerical analysis and discussion. The calculated normal force showed good agreement with the experimental results, verifying the effectiveness of the clump model. The total normal force and bending moment on the rigid wall and the motion of the granular flow were then further analyzed. Finally, numerical simulations using the clump model with different grain compositions were compared. By observing the normal force on the rigid wall and the distribution of particle sizes at the front of the rigid wall in the final state, the effect of grain composition on the force on the rigid wall was revealed: with increasing particle size, the peak force on the retaining wall also increases. These results can provide a basis for research on related disasters and for the design of protective structures. PMID:27513661
A Comparison of Normal and Elliptical Estimation Methods in Structural Equation Models.
ERIC Educational Resources Information Center
Schumacker, Randall E.; Cheevatanarak, Suchittra
Monte Carlo simulation compared chi-square statistics, parameter estimates, and root mean square error of approximation values using normal and elliptical estimation methods. Three research conditions were imposed on the simulated data: sample size, population contamination percent, and kurtosis. A Bentler-Weeks structural model established the…
Hand ultrasound: a high-fidelity simulation of lung sliding.
Shokoohi, Hamid; Boniface, Keith
2012-09-01
Simulation training has been used effectively to integrate didactic knowledge and technical skills in emergency and critical care medicine. In this article, we introduce a novel model for simulating lung ultrasound and the features of lung sliding and pneumothorax by performing an ultrasound of the hand. The model involves scanning the palmar aspect of the hand to create normal lung sliding in various scanning modes and to mimic the ultrasound features of pneumothorax, including the "stratosphere/barcode sign" and "lung point." The simple, reproducible, and readily available model we describe is a high-fidelity simulation surrogate that can be used to rapidly illustrate the signs of normal and abnormal lung sliding at the bedside. © 2012 by the Society for Academic Emergency Medicine.
NASA Astrophysics Data System (ADS)
Zhao, Yongguang; Li, Chuanrong; Ma, Lingling; Tang, Lingli; Wang, Ning; Zhou, Chuncheng; Qian, Yonggang
2017-10-01
Time series of satellite reflectance data have been widely used to characterize environmental phenomena, describe trends in vegetation dynamics, and study climate change. However, sensors with wide spatial coverage and high observation frequency are usually designed with a large field of view (FOV), which causes variations in the sun-target-sensor geometry of time-series reflectance data. In this study, on the basis of the semiempirical kernel-driven BRDF model, a new semiempirical model is proposed to normalize the sun-target-sensor geometry of remote sensing images. To evaluate the proposed model, bidirectional reflectance under different canopy growth conditions was simulated with the Discrete Anisotropic Radiative Transfer (DART) model. The semiempirical model was first fitted using all simulated bidirectional reflectance; the result showed a good fit between the bidirectional reflectance estimated by the proposed model and the simulated values. Then, MODIS time-series reflectance data were normalized to a common sun-target-sensor geometry by the proposed model. The experimental results showed that the proposed model yielded good fits between the observed and estimated values, and the noise-like fluctuations in the time-series reflectance data were reduced after the sun-target-sensor normalization.
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Basham, Bryan D.
1989-01-01
CONFIG is a modeling and simulation tool prototype for analyzing the normal and faulty qualitative behaviors of engineered systems. Qualitative modeling and discrete-event simulation have been adapted and integrated, to support early development, during system design, of software and procedures for management of failures, especially in diagnostic expert systems. Qualitative component models are defined in terms of normal and faulty modes and processes, which are defined by invocation statements and effect statements with time delays. System models are constructed graphically by using instances of components and relations from object-oriented hierarchical model libraries. Extension and reuse of CONFIG models and analysis capabilities in hybrid rule- and model-based expert fault-management support systems are discussed.
A nonparametric spatial scan statistic for continuous data.
Jung, Inkyung; Cho, Ho Jin
2015-10-20
Spatial scan statistics are widely used for spatial cluster detection, and several parametric models exist. For continuous data, a normal-based scan statistic can be used; however, its performance has not been fully evaluated for non-normal data. We propose a nonparametric spatial scan statistic based on the Wilcoxon rank-sum test statistic and compare the performance of the method with parametric models via a simulation study under various scenarios. The nonparametric method outperforms the normal-based scan statistic in terms of power and accuracy in almost all cases considered in the simulation study. The proposed nonparametric spatial scan statistic is therefore an excellent alternative to the normal model for continuous data and is especially useful for data following skewed or heavy-tailed distributions.
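The rank-sum scan idea can be sketched as follows, assuming distinct continuous observations (no ties) and circular scanning windows centred on the data points. This is an illustration of the general technique, not the authors' implementation.

```python
import math

def standardized_rank_sum(inside, outside):
    """Wilcoxon rank-sum statistic for the 'inside' values, standardized
    under the null hypothesis. Assumes all values are distinct, as is
    the case (with probability one) for continuous data."""
    n1, n2 = len(inside), len(outside)
    combined = sorted(inside + outside)
    rank = {v: i + 1 for i, v in enumerate(combined)}
    w = sum(rank[v] for v in inside)
    mean = n1 * (n1 + n2 + 1) / 2
    var = n1 * n2 * (n1 + n2 + 1) / 12
    return (w - mean) / math.sqrt(var)

def scan_clusters(points, values, radii):
    """Naive nonparametric scan: every point is a candidate centre; report
    the circular window (centre, radius) with the largest absolute
    standardized rank sum, i.e. the most likely cluster."""
    best = (0.0, None)
    for cx, cy in points:
        for r in radii:
            inside = [v for (x, y), v in zip(points, values)
                      if (x - cx) ** 2 + (y - cy) ** 2 <= r * r]
            outside = [v for (x, y), v in zip(points, values)
                       if (x - cx) ** 2 + (y - cy) ** 2 > r * r]
            if inside and outside:
                z = standardized_rank_sum(inside, outside)
                if abs(z) > abs(best[0]):
                    best = (z, ((cx, cy), r))
    return best
```

Because only ranks enter the statistic, the scan is unchanged under any monotone transformation of the data, which is why it remains powerful for skewed or heavy-tailed distributions. In practice the maximum statistic is calibrated by Monte Carlo permutation, which is omitted here.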
ERIC Educational Resources Information Center
Gugel, John F.
A new method for estimating the parameters of the normal ogive three-parameter model for multiple-choice test items--the normalized direct (NDIR) procedure--is examined. The procedure is compared to a more commonly used estimation procedure, Lord's LOGIST, using computer simulations. The NDIR procedure uses the normalized (mid-percentile)…
Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V
2016-08-12
Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of the response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if the distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
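The two-stage simulation design described above can be sketched with a one-compartment model with first-order absorption. The structure (subject-level parameters drawn from lognormal distributions, multiplicative measurement error, then AUC and Cmax reduced from the noisy profile) follows the abstract; all numeric parameter values below are assumed, for illustration only.

```python
import math
import random

def concentration(t, dose, V, ka, ke):
    # One-compartment model with first-order absorption (requires ka != ke).
    return dose * ka / (V * (ka - ke)) * (math.exp(-ke * t) - math.exp(-ka * t))

def simulate_subject(rng, dose=100.0, times=None):
    """Stage 1: draw subject-level PK parameters from lognormal
    distributions (assumed values). Stage 2: generate a noisy
    concentration-time profile and reduce it to log(AUC) and log(Cmax)."""
    if times is None:
        times = [0.25 * i for i in range(1, 97)]  # 0.25 h .. 24 h
    ka = rng.lognormvariate(math.log(1.0), 0.3)   # absorption rate, 1/h
    ke = rng.lognormvariate(math.log(0.2), 0.3)   # elimination rate, 1/h
    V = rng.lognormvariate(math.log(30.0), 0.2)   # volume of distribution, L
    conc = [concentration(t, dose, V, ka, ke) * rng.lognormvariate(0.0, 0.1)
            for t in times]  # multiplicative measurement error
    cmax = max(conc)
    auc = sum((t2 - t1) * (c1 + c2) / 2  # trapezoidal rule
              for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))
    return math.log(auc), math.log(cmax)
```

Repeating `simulate_subject` many times and examining the empirical distributions of log(AUC) and log(Cmax) is the kind of experiment the paper uses to probe the normality assumption.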
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veress, Alexander I.; Segars, W. Paul; Weiss, Jeffrey A.
2006-08-02
The 4D NURBS-based Cardiac-Torso (NCAT) phantom, which provides a realistic model of the normal human anatomy and cardiac and respiratory motions, is used in medical imaging research to evaluate and improve imaging devices and techniques, especially dynamic cardiac applications. One limitation of the phantom is that it lacks the ability to accurately simulate altered functions of the heart that result from cardiac pathologies such as coronary artery disease (CAD). The goal of this work was to enhance the 4D NCAT phantom by incorporating a physiologically based, finite-element (FE) mechanical model of the left ventricle (LV) to simulate both normal and abnormal cardiac motions. The geometry of the FE mechanical model was based on gated high-resolution x-ray multi-slice computed tomography (MSCT) data of a healthy male subject. The myocardial wall was represented as a transversely isotropic hyperelastic material, with the fiber angle varying from -90 degrees at the epicardial surface, through 0 degrees at the mid-wall, to 90 degrees at the endocardial surface. A time-varying elastance model was used to simulate fiber contraction, and physiological intraventricular systolic pressure-time curves were applied to simulate the cardiac motion over the entire cardiac cycle. To demonstrate the ability of the FE mechanical model to accurately simulate the normal cardiac motion as well as abnormal motions indicative of CAD, a normal case and two pathologic cases were simulated and analyzed. In the first pathologic model, a subendocardial anterior ischemic region was defined. A second model was created with a transmural ischemic region defined in the same location. The FE-based deformations were incorporated into the 4D NCAT cardiac model through the control points that define the cardiac structures in the phantom, which were set to move according to the predictions of the mechanical model. A simulation study was performed using the FE-NCAT combination to investigate how the differences in contractile function between the subendocardial and transmural infarcts manifest themselves in myocardial SPECT images. The normal FE model produced strain distributions that were consistent with those reported in the literature and a motion consistent with that defined in the normal 4D NCAT beating heart model based on tagged MRI data. The addition of a subendocardial ischemic region changed the average transmural circumferential strain from a contractile value of 0.19 to a tensile value of 0.03. The addition of a transmural ischemic region changed the average circumferential strain to a value of 0.16, which is consistent with data reported in the literature. Model results demonstrated differences in contractile function between subendocardial and transmural infarcts and how these differences in function are documented in simulated myocardial SPECT images produced using the 4D NCAT phantom. In comparison to the original NCAT beating heart model, the FE mechanical model produced a more accurate simulation of the cardiac motion abnormalities. Such a model, when incorporated into the 4D NCAT phantom, has great potential for use in cardiac imaging research. With its enhanced physiologically based cardiac model, the 4D NCAT phantom can be used to simulate realistic, predictive imaging data of a patient population with varying whole-body anatomy and with varying healthy and diseased states of the heart that will provide a known truth from which to evaluate and improve existing and emerging 4D imaging techniques used in the diagnosis of cardiac disease.
Convergence of Free Energy Profile of Coumarin in Lipid Bilayer.
Paloncýová, Markéta; Berka, Karel; Otyepka, Michal
2012-04-10
Atomistic molecular dynamics (MD) simulations of druglike molecules embedded in lipid bilayers are of considerable interest as models for drug penetration and positioning in biological membranes. Here we analyze partitioning of coumarin in dioleoylphosphatidylcholine (DOPC) bilayer, based on both multiple, unbiased 3 μs MD simulations (total length) and free energy profiles along the bilayer normal calculated by biased MD simulations (∼7 μs in total). The convergences in time of free energy profiles calculated by both umbrella sampling and z-constraint techniques are thoroughly analyzed. Two sets of starting structures are also considered, one from unbiased MD simulation and the other from "pulling" coumarin along the bilayer normal. The structures obtained by pulling simulation contain water defects on the lipid bilayer surface, while those acquired from unbiased simulation have no membrane defects. The free energy profiles converge more rapidly when starting frames from unbiased simulations are used. In addition, z-constraint simulation leads to more rapid convergence than umbrella sampling, due to quicker relaxation of membrane defects. Furthermore, we show that the choice of RESP, PRODRG, or Mulliken charges considerably affects the resulting free energy profile of our model drug along the bilayer normal. We recommend using z-constraint biased MD simulations based on starting geometries acquired from unbiased MD simulations for efficient calculation of convergent free energy profiles of druglike molecules along bilayer normals. The calculation of free energy profile should start with an unbiased simulation, though the polar molecules might need a slow pulling afterward. Results obtained with the recommended simulation protocol agree well with available experimental data for two coumarin derivatives. PMID:22545027
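The z-constraint technique discussed in this abstract recovers the free energy profile by thermodynamic integration of the time-averaged constraint force along the bilayer normal. A minimal numerical sketch of that post-processing step (not the MD itself):

```python
def free_energy_profile(z_vals, mean_forces):
    """Thermodynamic integration for z-constraint simulations:
        G(z) = -Integral of <F_z(z')> dz'  from z_vals[0] to z,
    evaluated with the trapezoidal rule. mean_forces[i] is the
    time-averaged constraint force measured at depth z_vals[i]."""
    g = [0.0]
    for z1, z2, f1, f2 in zip(z_vals, z_vals[1:],
                              mean_forces, mean_forces[1:]):
        g.append(g[-1] - 0.5 * (f1 + f2) * (z2 - z1))
    return g
```

For a harmonic test case, mean force F(z) = -k*z, the integration returns the expected quadratic profile G(z) = k*z^2/2, which is a convenient sanity check before applying the routine to real mean-force data.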
Modeling target normal sheath acceleration using handoffs between multiple simulations
NASA Astrophysics Data System (ADS)
McMahon, Matthew; Willis, Christopher; Mitchell, Robert; King, Frank; Schumacher, Douglass; Akli, Kramer; Freeman, Richard
2013-10-01
We present a technique to model the target normal sheath acceleration (TNSA) process using full-scale LSP PIC simulations. The technique allows for a realistic laser, a full-size target and pre-plasma, and sufficient propagation length for the accelerated ions and electrons. A first simulation using a 2D Cartesian grid models the laser-plasma interaction (LPI) self-consistently and includes field ionization. Electrons accelerated by the laser are imported into a second simulation using a 2D cylindrical grid optimized for the initial TNSA process and incorporating an equation of state. Finally, all of the particles are imported into a third simulation optimized for the propagation of the accelerated ions and utilizing a static field solver for initialization. We also show the use of 3D LPI simulations. Simulation results are compared to recent ion acceleration experiments using the SCARLET laser at The Ohio State University. This work was performed with support from AFOSR under contract #FA9550-12-1-0341, DARPA, and allocations of computing time from the Ohio Supercomputing Center.
Neurophysiological model of the normal and abnormal human pupil
NASA Technical Reports Server (NTRS)
Krenz, W.; Robin, M.; Barez, S.; Stark, L.
1985-01-01
Anatomical, experimental, and computer simulation studies were used to determine the structure of the neurophysiological model of the pupil size control system. The computer simulation of this model demonstrates the role played by each of the elements in the neurological pathways influencing the size of the pupil. Simulations of the effects of drugs and common abnormalities in the system help to illustrate the workings of the pathways and processes involved. The simulation program allows the user to select the pupil condition (normal or an abnormality), a specific site along the neurological pathway (retina, hypothalamus, etc.), the drug class input (barbiturate, narcotic, etc.), the stimulus/response mode, the display mode, the stimulus type and input waveform, the stimulus or background intensity and frequency, the input and output conditions, and the response at the neuroanatomical site. The model can be used as a teaching aid or as a tool for testing hypotheses regarding the system.
Damewood, Sara; Jeanmonod, Donald; Cadigan, Beth
2011-04-01
This study compared the effectiveness of a multimedia ultrasound (US) simulator to normal human models during the practical portion of a course designed to teach the skills of both image acquisition and image interpretation for the Focused Assessment with Sonography for Trauma (FAST) exam. This was a prospective, blinded, controlled education study using medical students as an US-naïve population. After a standardized didactic lecture on the FAST exam, trainees were separated into two groups to practice image acquisition on either a multimedia simulator or a normal human model. Four outcome measures were then assessed: image interpretation of prerecorded FAST exams, adequacy of image acquisition on a standardized normal patient, perceived confidence of image adequacy, and time to image acquisition. Ninety-two students were enrolled and separated into two groups, a multimedia simulator group (n = 44) and a human model group (n = 48). A Bonferroni adjustment set the level of significance at p = 0.0125. There was no difference between those trained on the multimedia simulator and those trained on a human model in image interpretation (median 80 of 100 points, interquartile range [IQR] 71-87, vs. median 78, IQR 62-86; p = 0.16), image acquisition (median 18 of 24 points, IQR 12-18, vs. median 16, IQR 14-20; p = 0.95), trainee confidence in obtaining images on a 1-10 visual analog scale (median 5, IQR 4.1-6.5, vs. median 5, IQR 3.7-6.0; p = 0.36), or time to acquire images (median 3.8 minutes, IQR 2.7-5.4, vs. median 4.5 minutes, IQR 3.4-5.9; p = 0.044). There was no difference in teaching the skills of image acquisition and interpretation to novice FAST examiners using the multimedia simulator or normal human models. These data suggest that practical image acquisition skills learned during simulated training can be directly applied to human models. © 2011 by the Society for Academic Emergency Medicine.
WEST-3 wind turbine simulator development
NASA Technical Reports Server (NTRS)
Hoffman, J. A.; Sridhar, S.
1985-01-01
The software developed for WEST-3, a new, all-digital, fully programmable wind turbine simulator, is described. The process of wind turbine simulation on WEST-3 is presented in detail. The major steps are the processing of the mathematical models, the preparation of the constant data, and the use of system-software-generated executable code for running on WEST-3. The mechanics of reformulation, normalization, and scaling of the mathematical models are discussed in detail, in particular the significance of reformulation, which leads to accurate simulations. Descriptions are given of the preprocessor computer programs used to prepare the constant data needed in the simulation; these programs, in addition to scaling and normalizing all the constants, relieve the user from having to generate a large number of constants used in the simulation. Brief descriptions are also given of the components of the WEST-3 system software: Translator, Assembler, Linker, and Loader. Also included are details of the aeroelastic rotor analysis, which is the core of a wind turbine simulation model; an analysis of the gimbal subsystem; and listings of the variables, constants, and equations used in the simulation.
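The scaling and normalization step described above can be illustrated generically: real-time simulators of this era divided each physical variable by a worst-case magnitude so that the scaled value stays in [-1, 1], the usable range of fixed-point arithmetic. This sketch shows the idea only; it is not WEST-3's actual scheme.

```python
def make_scaler(x_max):
    """Return a normalizer for one physical variable: the value is divided
    by its worst-case magnitude x_max so the scaled result lies in [-1, 1].
    Exceeding the bound signals that the chosen scaling constant was too
    small, the classic failure mode of fixed-point simulation scaling."""
    def scale(x):
        s = x / x_max
        if not -1.0 <= s <= 1.0:
            raise ValueError("variable exceeded its scaling bound")
        return s
    return scale
```

In a real simulator, every state, input, and constant gets its own scaling factor, which is exactly the bookkeeping the preprocessor programs described above automate.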
NASA Astrophysics Data System (ADS)
Kar, Leow Soo
2014-07-01
Two important factors that influence customer satisfaction in large supermarkets or hypermarkets are adequate parking facilities and short waiting times at the checkout counters. This paper describes the simulation analysis of a large supermarket to determine the optimal levels of these two factors. SAS Simulation Studio is used to model a large supermarket in a shopping mall with a car park facility. In order to make the simulation model more realistic, a number of complexities are introduced into the model. For example, arrival patterns of customers vary with the time of day (morning, afternoon, and evening) and with the day of the week (weekdays or weekends); customers also differ in transport mode (by car or other means), mode of payment (cash or credit card), shopping pattern (leisurely, normal, exact), and choice of checkout counter (normal or express). In this study, we focus on two important components of the simulation model, namely the parking area and the normal and express checkout counters. The parking area is modeled using a Resource Pool block, where one resource unit represents one parking bay. A customer arriving by car seizes a unit of the resource from the Pool block (parks the car) and only releases it when he exits the system. Cars arriving when the Resource Pool is empty (no more parking bays) leave without entering the system. The normal and express checkouts are represented by Server blocks with appropriate service time distributions. As a case study, a supermarket in a shopping mall in Bangsar with a limited number of parking bays was chosen for this research. Empirical data on arrival patterns, arrival modes, payment modes, shopping patterns, and service times at the checkout counters were collected and analyzed to validate the model. Sensitivity analysis was also performed with different simulation scenarios to identify the optimal number of parking bays and checkout counters.
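The resource-pool and server logic described above can be sketched as a compact discrete-event simulation. The structure (a finite parking pool with balking, a shopping delay, and a pool of identical checkout counters) follows the abstract; all rates, distributions, and parameter values below are assumed for illustration, and the express/normal counter split is collapsed into one counter pool for brevity.

```python
import heapq
import random

def simulate_supermarket(rng, n_bays=50, n_counters=4, t_end=480.0,
                         mean_interarrival=1.0, mean_shop=25.0,
                         mean_service=3.0):
    """Minimal discrete-event sketch (times in minutes, assumed
    exponential distributions). Customers balk when the parking
    'resource pool' is full and release their bay on checkout."""
    events = []  # priority queue of (time, seq, kind)
    seq = 0

    def push(t, kind):
        nonlocal seq
        heapq.heappush(events, (t, seq, kind))
        seq += 1

    push(rng.expovariate(1 / mean_interarrival), "arrive")
    parked = busy = queue = served = balked = 0
    while events:
        t, _, kind = heapq.heappop(events)
        if kind == "arrive":
            if t > t_end:
                continue  # stop generating arrivals; drain remaining events
            push(t + rng.expovariate(1 / mean_interarrival), "arrive")
            if parked < n_bays:
                parked += 1  # seize a parking bay, then go shopping
                push(t + rng.expovariate(1 / mean_shop), "to_checkout")
            else:
                balked += 1  # car park full: customer leaves
        elif kind == "to_checkout":
            if busy < n_counters:
                busy += 1
                push(t + rng.expovariate(1 / mean_service), "done")
            else:
                queue += 1  # all counters busy: wait in line
        elif kind == "done":
            served += 1
            parked -= 1  # release the parking bay
            if queue > 0:
                queue -= 1
                push(t + rng.expovariate(1 / mean_service), "done")
            else:
                busy -= 1
    return {"served": served, "balked": balked}
```

Sweeping `n_bays` and `n_counters` while watching the balk count and queue delays is the stdlib analogue of the sensitivity analysis the paper performs in SAS Simulation Studio.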
NASA Astrophysics Data System (ADS)
Grova, C.; Jannin, P.; Biraben, A.; Buvat, I.; Benali, H.; Bernard, A. M.; Scarabin, J. M.; Gibaud, B.
2003-12-01
Quantitative evaluation of brain MRI/SPECT fusion methods for normal and in particular pathological datasets is difficult, due to the frequent lack of relevant ground truth. We propose a methodology to generate MRI and SPECT datasets dedicated to the evaluation of MRI/SPECT fusion methods and illustrate the method when dealing with ictal SPECT. The method consists in generating normal or pathological SPECT data perfectly aligned with a high-resolution 3D T1-weighted MRI using realistic Monte Carlo simulations that closely reproduce the response of a SPECT imaging system. Anatomical input data for the SPECT simulations are obtained from this 3D T1-weighted MRI, while functional input data result from an inter-individual analysis of anatomically standardized SPECT data. The method makes it possible to control the 'brain perfusion' function by proposing a theoretical model of brain perfusion from measurements performed on real SPECT images. Our method provides an absolute gold standard for assessing MRI/SPECT registration method accuracy since, by construction, the SPECT data are perfectly registered with the MRI data. The proposed methodology has been applied to create a theoretical model of normal brain perfusion and ictal brain perfusion characteristic of mesial temporal lobe epilepsy. To approach realistic and unbiased perfusion models, real SPECT data were corrected for uniform attenuation, scatter and partial volume effect. An anatomic standardization was used to account for anatomic variability between subjects. Realistic simulations of normal and ictal SPECT deduced from these perfusion models are presented. The comparison of real and simulated SPECT images showed relative differences in regional activity concentration of less than 20% in most anatomical structures, for both normal and ictal data, suggesting realistic models of perfusion distributions for evaluation purposes. 
Inter-hemispheric asymmetry coefficients measured on simulated data were found within the range of asymmetry coefficients measured on corresponding real data. The features of the proposed approach are compared with those of other methods previously described to obtain datasets appropriate for the assessment of fusion methods.
Multiple imputation for handling missing outcome data when estimating the relative risk.
Sullivan, Thomas R; Lee, Katherine J; Ryan, Philip; Salter, Amy B
2017-09-06
Multiple imputation is a popular approach to handling missing data in medical research, yet little is known about its applicability for estimating the relative risk. Standard methods for imputing incomplete binary outcomes involve logistic regression or an assumption of multivariate normality, whereas relative risks are typically estimated using log binomial models. It is unclear whether misspecification of the imputation model in this setting could lead to biased parameter estimates. Using simulated data, we evaluated the performance of multiple imputation for handling missing data prior to estimating adjusted relative risks from a correctly specified multivariable log binomial model. We considered an arbitrary pattern of missing data in both outcome and exposure variables, with missing data induced under missing at random mechanisms. Focusing on standard model-based methods of multiple imputation, missing data were imputed using multivariate normal imputation or fully conditional specification with a logistic imputation model for the outcome. Multivariate normal imputation performed poorly in the simulation study, consistently producing estimates of the relative risk that were biased towards the null. Despite outperforming multivariate normal imputation, fully conditional specification also produced somewhat biased estimates, with greater bias observed for higher outcome prevalences and larger relative risks. Deleting imputed outcomes from analysis datasets did not improve the performance of fully conditional specification. Both multivariate normal imputation and fully conditional specification produced biased estimates of the relative risk, presumably since both use a misspecified imputation model. Based on simulation results, we recommend researchers use fully conditional specification rather than multivariate normal imputation and retain imputed outcomes in the analysis when estimating relative risks. 
However fully conditional specification is not without its shortcomings, and so further research is needed to identify optimal approaches for relative risk estimation within the multiple imputation framework.
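The bias mechanism described above can be illustrated with a minimal single-imputation sketch (a simplification of proper multiple imputation with Rubin's rules); the sample size, risks, and missingness fraction are illustrative assumptions. An imputation model that ignores the exposure, analogous to a misspecified imputation model, pulls the estimated relative risk toward the null:

```python
import random

random.seed(1)
n = 200_000
true_rr, p0 = 2.0, 0.10            # illustrative: baseline risk 10%, RR = 2

# Simulate exposure x and binary outcome y, then delete ~30% of y (MCAR)
full = []
for _ in range(n):
    x = random.random() < 0.5
    y = random.random() < p0 * (true_rr if x else 1.0)
    full.append((x, y))
obs = [(x, y if random.random() >= 0.3 else None) for x, y in full]

def relative_risk(rows):
    risk1 = [y for x, y in rows if x]
    risk0 = [y for x, y in rows if not x]
    return (sum(risk1) / len(risk1)) / (sum(risk0) / len(risk0))

def observed_risk(rows, grp):
    ys = [y for x, y in rows if x == grp and y is not None]
    return sum(ys) / len(ys)

p_by_exposure = {g: observed_risk(obs, g) for g in (True, False)}
ys_obs = [y for x, y in obs if y is not None]
p_overall = sum(ys_obs) / len(ys_obs)

# Imputation respecting the exposure (analogue of a well-specified model)
good = [(x, y if y is not None else random.random() < p_by_exposure[x])
        for x, y in obs]
# Imputation ignoring the exposure (analogue of a misspecified model)
bad = [(x, y if y is not None else random.random() < p_overall)
       for x, y in obs]

rr_good, rr_bad = relative_risk(good), relative_risk(bad)
print(rr_good, rr_bad)   # rr_bad is pulled toward the null (1.0)
```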
Not Quite Normal: Consequences of Violating the Assumption of Normality in Regression Mixture Models
ERIC Educational Resources Information Center
Van Horn, M. Lee; Smith, Jessalyn; Fagan, Abigail A.; Jaki, Thomas; Feaster, Daniel J.; Masyn, Katherine; Hawkins, J. David; Howe, George
2012-01-01
Regression mixture models, which have only recently begun to be used in applied research, are a new approach for finding differential effects. This approach comes at the cost of the assumption that error terms are normally distributed within classes. This study uses Monte Carlo simulations to explore the effects of relatively minor violations of…
Bae, Ji Yong; Park, Kyung Soon; Seon, Jong Keun; Jeon, Insu
2015-12-01
To show the causal relationship between normal walking after various lateral ankle ligament (LAL) injuries caused by acute inversion ankle sprains and alterations in ankle joint contact characteristics, finite element simulations of normal walking were carried out using an intact ankle joint model and LAL injury models. A walking experiment using a volunteer with a normal ankle joint was performed to obtain the boundary conditions for the simulations and to support the appropriateness of the simulation results. Contact pressure and strain on the talus articular cartilage and anteroposterior and mediolateral translations of the talus were calculated. Ankles with ruptured anterior talofibular ligaments (ATFLs) had a higher likelihood of experiencing increased ankle joint contact pressures, strains and translations than intact ankles. In particular, ankles with both the ATFL and the calcaneofibular ligament ruptured, and ankles with all ligaments ruptured, had a likelihood similar to that of the ATFL-ruptured ankles. The push-off stance phase was the most likely situation for increased ankle joint contact pressures, strains and translations in LAL-injured ankles.
Anomalous T2 relaxation in normal and degraded cartilage.
Reiter, David A; Magin, Richard L; Li, Weiguo; Trujillo, Juan J; Pilar Velasco, M; Spencer, Richard G
2016-09-01
To compare the ordinary monoexponential model with three anomalous relaxation models-the stretched Mittag-Leffler, stretched exponential, and biexponential functions-using both simulated and experimental cartilage relaxation data. Monte Carlo simulations were used to examine both the ability to identify a given model under high signal-to-noise ratio (SNR) conditions and the accuracy and precision of parameter estimates under the more modest SNR that would be encountered clinically. Experimental transverse relaxation data were analyzed from normal and enzymatically degraded cartilage samples under high SNR and rapid echo sampling to compare each model. Both simulation and experimental results showed improvement in signal representation with the anomalous relaxation models. The stretched exponential model consistently showed the lowest mean squared error in experimental data and closely represents the signal decay over multiple decades of the decay time (e.g., 1-10 ms, 10-100 ms, and >100 ms). The stretched exponential parameter αse showed an inverse correlation with biochemically derived cartilage proteoglycan content. Experimental results obtained at high field suggest potential application of αse as a measure of matrix integrity. Simulations reflecting more typical clinical imaging conditions indicate the ability to robustly estimate αse and distinguish between normal and degraded tissue, highlighting its potential as a biomarker for human studies. Magn Reson Med 76:953-962, 2016. © 2015 Wiley Periodicals, Inc.
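A minimal sketch of the model comparison, using a noise-free synthetic decay and grid-search least squares (the study itself fit noisy experimental and Monte Carlo data; T2 = 30 ms and alpha = 0.8 are illustrative values):

```python
import math

# Synthetic decay sampled over multiple decades of decay time (1-300 ms),
# generated from a stretched exponential S(t) = exp(-(t/T2)^alpha)
ts = [float(k) for k in range(1, 301)]
signal = [math.exp(-((t / 30.0) ** 0.8)) for t in ts]

def sse(model):
    return sum((model(t) - s) ** 2 for t, s in zip(ts, signal))

# Monoexponential fit: grid search over T2 (amplitude fixed at 1)
best_mono = min(
    (sse(lambda t, T2=T2: math.exp(-t / T2)), T2)
    for T2 in [float(k) for k in range(1, 101)]
)

# Stretched-exponential fit: grid over (T2, alpha); the grid contains the truth
best_str = min(
    (sse(lambda t, T2=T2, a=a: math.exp(-((t / T2) ** a))), T2, a)
    for T2 in [float(k) for k in range(1, 101)]
    for a in [0.5 + 0.05 * k for k in range(11)]
)

print(best_mono[0], best_str[0])  # the stretched model fits far better
```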
Prediction of normalized biodiesel properties by simulation of multiple feedstock blends.
García, Manuel; Gonzalo, Alberto; Sánchez, José Luis; Arauzo, Jesús; Peña, José Angel
2010-06-01
A continuous process for biodiesel production has been simulated using Aspen HYSYS V7.0 software. As fresh feed, feedstocks with a mild acid content have been used. The process flowsheet follows a traditional alkaline transesterification scheme constituted by esterification, transesterification and purification stages. Kinetic models taking into account the concentration of the different species have been employed in order to simulate the behavior of the CSTR reactors and the product distribution within the process. The comparison between experimental data found in the literature and the predicted normalized properties has been discussed. Additionally, a comparison between different thermodynamic packages has been performed. The NRTL activity model has been selected as the most reliable of them. The combination of these models allows the prediction of 13 out of 25 parameters included in standard EN-14214:2003, and confers great value on simulators as predictive as well as optimization tools. (c) 2010 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Dongyang; Ba, Dechun; Hao, Ming; Duan, Qihui; Liu, Kun; Mei, Qi
2018-05-01
Pneumatic NC (normally closed) valves are widely used in high-density microfluidic systems. To improve actuation reliability, the actuation pressure needs to be reduced. In this work, we utilize 3D FEM (finite element method) modelling to gain insight into the valve actuation process numerically. Specifically, the progressive debonding process at the elastomer interface is simulated with the CZM (cohesive zone model) method. To minimize the actuation pressure, a V-shape design has been investigated and compared with a normal straight design. The geometrical effects of the valve shape on actuation pressure have been elaborated. Based on our simulated results, we formulate the main concerns for micro valve design and fabrication, which is significant for minimizing actuation pressures and ensuring reliable operation.
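The CZM idea can be sketched with the common bilinear traction-separation law; the parameter values below are illustrative, not those of the simulated valve:

```python
# Bilinear cohesive (traction-separation) law, as commonly used in CZM
# interface models: linear loading to a peak traction, then linear
# softening to full debonding. t_max, d0, df are illustrative values.
t_max, d0, df = 10.0, 0.1, 1.0   # peak traction, onset and failure openings

def traction(d):
    if d <= d0:
        return t_max * d / d0                  # elastic loading
    if d <= df:
        return t_max * (df - d) / (df - d0)    # softening (progressive debonding)
    return 0.0                                 # fully debonded

# Fracture energy = area under the curve = 0.5 * t_max * df
n = 100_000
h = df / n
energy = sum(0.5 * (traction(k * h) + traction((k + 1) * h)) * h
             for k in range(n))
print(energy)   # close to 0.5 * 10 * 1.0 = 5.0
```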
Hagos, Samson; Leung, L. Ruby; Ashfaq, Moetasim; ...
2018-03-20
CMIP5 models exhibit a mean dry bias and a large inter-model spread in simulating South Asian monsoon precipitation, but the origins of the bias and spread are not well understood. Using moisture and energy budget analysis that exploits the weak temperature gradients in the tropics, we derived a non-linear relationship between the normalized precipitation and normalized precipitable water that is similar to the non-linear relationship between precipitation and precipitable water found in previous observational studies. About half of the 21 models analyzed fall on the steep gradient of the non-linear relationship, where small differences in the normalized precipitable water in the equatorial Indian Ocean (EIO) manifest in large differences in normalized precipitation in the region. Models with larger normalized precipitable water in the EIO during spring contribute disproportionately to the large inter-model spread and multi-model mean dry bias in monsoon precipitation through perturbations of the large-scale winds. Thus the inter-model spread in precipitable water over the EIO leads to the dry bias in the multi-model mean South Asian monsoon precipitation. The models with high normalized precipitable water over the EIO also project a larger response to warming and dominate the inter-model spread in the multi-model projections of monsoon rainfall. Conversely, models on the flat side of the relationship between normalized precipitation and precipitable water are in better agreement with each other and with observations. On average, these models project a smaller increase in monsoon precipitation than the multi-model mean.
As a result, this study identified the normalized precipitable water over the EIO, which is determined by the relationship between the profiles of convergence and moisture and is therefore an essential outcome of the treatment of convection, as a key metric for understanding model biases and differentiating model skill in simulating South Asian monsoon precipitation.
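A hedged illustration of why models on the steep gradient disagree: assuming a logistic pickup of normalized precipitation with normalized precipitable water (the functional form and parameters are illustrative, not the paper's derived relationship), the same small moisture difference produces a much larger precipitation difference on the steep part of the curve than on the flat side:

```python
import math

# Illustrative nonlinear pickup of normalized precipitation P(w) with
# normalized precipitable water w; k and w0 are assumed values
k, w0 = 30.0, 0.8

def precip(w):
    return 1.0 / (1.0 + math.exp(-k * (w - w0)))

dw = 0.01                                   # a small inter-model moisture difference
steep = precip(w0 + dw) - precip(w0)        # response on the steep gradient
flat = precip(0.95 + dw) - precip(0.95)     # response on the flat side
print(steep / flat)   # the steep side amplifies the same moisture difference
```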
Modeling error distributions of growth curve models through Bayesian methods.
Zhang, Zhiyong
2016-06-01
Growth curve models are widely used in social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
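The efficiency argument can be sketched with a location-estimation analogy: under heavy-tailed (Laplace) errors the median is the efficient estimator, while under normal errors the mean is, so matching the assumed error distribution to the data avoids the efficiency loss. Sample sizes and scales below are illustrative:

```python
import math, random

random.seed(7)

def laplace_err(scale=1.0):
    # Inverse-CDF sampling for a Laplace(0, scale) error term
    u = random.random()
    while u == 0.0:
        u = random.random()
    u -= 0.5
    s = 1.0 if u >= 0 else -1.0
    return -scale * s * math.log(1.0 - 2.0 * abs(u))

def mc_var(draw, estimate, reps=2000, n=200):
    # Monte Carlo sampling variance of a location estimator
    vals = [estimate(sorted(draw() for _ in range(n))) for _ in range(reps)]
    m = sum(vals) / reps
    return sum((v - m) ** 2 for v in vals) / reps

mean_est = lambda xs: sum(xs) / len(xs)
median_est = lambda xs: xs[len(xs) // 2]   # xs arrives pre-sorted

# Under heavy-tailed (Laplace) errors the median is the efficient choice...
v_mean_lap = mc_var(laplace_err, mean_est)
v_med_lap = mc_var(laplace_err, median_est)
# ...while under normal errors the mean is
v_mean_norm = mc_var(lambda: random.gauss(0, 1), mean_est)
v_med_norm = mc_var(lambda: random.gauss(0, 1), median_est)
print(v_med_lap < v_mean_lap, v_mean_norm < v_med_norm)
```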
NASA Technical Reports Server (NTRS)
Groleau, Nicolas; Frainier, Richard; Colombano, Silvano; Hazelton, Lyman; Szolovits, Peter
1993-01-01
This paper describes portions of a novel system called MARIKA (Model Analysis and Revision of Implicit Key Assumptions) to automatically revise a model of the normal human orientation system. The revision is based on analysis of discrepancies between experimental results and computer simulations. The discrepancies are calculated from qualitative analysis of quantitative simulations. The experimental and simulated time series are first discretized into time segments. Each segment is then approximated by linear combinations of simple shapes. The domain theory and knowledge are represented as a constraint network. Incompatibilities detected during constraint propagation within the network yield both parameter and structural model alterations. Interestingly, MARIKA diagnosed a data set from the Massachusetts Eye and Ear Infirmary Vestibular Laboratory as abnormal even though the data was tagged as normal. Published results from other laboratories confirmed the finding. These encouraging results could lead to a useful clinical vestibular tool and to a scientific discovery system for space vestibular adaptation.
Samuels, Mary; DiStefano, Joseph J.
2008-01-01
Background We upgraded our recent feedback control system (FBCS) simulation model of human thyroid hormone (TH) regulation to include explicit representation of hypothalamic and pituitary dynamics, and updated TH distribution and elimination (D&E) parameters. This new model greatly expands the range of clinical and basic science scenarios explorable by computer simulation. Methods We quantified the model from pharmacokinetic (PK) and physiological human data and validated it comparatively against several independent clinical data sets. We then explored three contemporary clinical issues with the new model: combined triiodothyronine (T3)/thyroxine (T4) versus T4-only treatment, parenteral levothyroxine (L-T4) administration, and central hypothyroidism. Results Combined T3/T4 therapy—In thyroidectomized patients, the L-T4–only replacement doses needed to normalize plasma T3 or average tissue T3 were 145 μg L-T4/day or 165 μg L-T4/day, respectively. The combined T4 + T3 dosing needed to normalize both plasma and tissue T3 levels was 105 μg L-T4 + 9 μg T3 per day. For all three regimens, simulated mean steady-state plasma thyroid-stimulating hormone (TSH), T3, and T4 were within normal ranges (TSH: 0.5–5 mU/L; T4: 5–12 μg/dL; T3: 0.8–1.9 ng/mL). Parenteral T4 administration—800 μg weekly or 400 μg twice weekly normalized average tissue T3 levels for both subcutaneous (SC) and intramuscular (IM) routes of administration. TSH, T3, and T4 levels were maintained within normal ranges for all four of these dosing schemes (1× vs. 2× weekly, SC vs. IM). Central hypothyroidism—We simulated steady-state plasma T3, T4, and TSH concentrations in response to varying degrees of central hypothyroidism, reducing TSH secretion from 50% down to 0.1% of normal. Surprisingly, TSH, T3, and T4 plasma concentrations remained within normal ranges for TSH secretion as low as 25% of normal.
Conclusions Combined T3/T4 treatment—Simulated standard L-T4–only therapy was sufficient to renormalize average tissue T3 levels and maintain normal TSH, T3, and T4 plasma levels, supporting adequacy of standard L-T4–only treatment. Parenteral T4 administration—TSH, T3, and T4 levels were maintained within normal ranges for all four of these dosing schemes (1× vs. 2× weekly, SC vs. IM), supporting these therapeutic alternatives for patients with compromised L-T4 gut absorption. Central hypothyroidism—These results highlight how highly nonlinear feedback in the hypothalamic-pituitary-thyroid axis acts to maintain normal hormone levels, even with severely reduced TSH secretion. PMID:18844475
Mahasa, Khaphetsi Joseph; Eladdadi, Amina; de Pillis, Lisette; Ouifki, Rachid
2017-01-01
In the present paper, we address by means of mathematical modeling the following main question: How can oncolytic virus infection of some normal cells in the vicinity of tumor cells enhance oncolytic virotherapy? We formulate a mathematical model describing the interactions between the oncolytic virus, the tumor cells, the normal cells, and the antitumoral and antiviral immune responses. The model consists of a system of delay differential equations with one (discrete) delay. We derive the model's basic reproductive number within tumor and normal cell populations and use their ratio as a metric for virus tumor-specificity. Numerical simulations are performed for different values of the basic reproduction numbers and their ratios to investigate potential trade-offs between tumor reduction and normal cells losses. A fundamental feature unravelled by the model simulations is its great sensitivity to parameters that account for most variation in the early or late stages of oncolytic virotherapy. From a clinical point of view, our findings indicate that designing an oncolytic virus that is not 100% tumor-specific can increase virus particles, which in turn, can further infect tumor cells. Moreover, our findings indicate that when infected tissues can be regenerated, oncolytic viral infection of normal cells could improve cancer treatment.
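A minimal sketch of the specificity metric, assuming a standard target-cell form for the within-population basic reproductive number, R0 = beta * p * C0 / (delta * c); all parameter values below are hypothetical:

```python
# Basic reproductive number of the virus within one cell population:
# infection rate beta, virion production rate p, initial cell count cells,
# infected-cell death rate delta, viral clearance rate c (all hypothetical)
def r0(beta, p, cells, delta, clearance):
    return beta * p * cells / (delta * clearance)

# Tumor cells assumed 10x more susceptible than normal cells (illustrative)
r0_tumor = r0(beta=2e-7, p=100.0, cells=1e7, delta=1.0, clearance=2.0)
r0_normal = r0(beta=2e-8, p=100.0, cells=1e7, delta=1.0, clearance=2.0)

specificity = r0_tumor / r0_normal   # the ratio used as a specificity metric
print(r0_tumor, specificity)         # ratio > 1: the virus favors tumor cells
```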
Realistic simulated MRI and SPECT databases. Application to SPECT/MRI registration evaluation.
Aubert-Broche, Berengere; Grova, Christophe; Reilhac, Anthonin; Evans, Alan C; Collins, D Louis
2006-01-01
This paper describes the construction of simulated SPECT and MRI databases that account for realistic anatomical and functional variability. The data are used as a gold standard to evaluate four SPECT/MRI similarity-based registration methods. Simulation realism was accounted for using accurate physical models of data generation and acquisition. MRI and SPECT simulations were generated from three subjects to take into account inter-subject anatomical variability. Functional SPECT data were computed from six functional models of brain perfusion. Previous models of normal perfusion and of the ictal perfusion observed in Mesial Temporal Lobe Epilepsy (MTLE) were considered to generate functional variability. We studied the impact that noise and intensity non-uniformity in MRI simulations, as well as SPECT scatter correction, may have on registration accuracy. We quantified the amount of registration error caused by anatomical and functional variability. Registration involving ictal data was less accurate than registration involving normal data. MR intensity non-uniformity was the main factor decreasing registration accuracy. The proposed simulated database is promising for evaluating many functional neuroimaging methods involving MRI and SPECT data.
The Influence of Normalization Weight in Population Pharmacokinetic Covariate Models.
Goulooze, Sebastiaan C; Völler, Swantje; Välitalo, Pyry A J; Calvier, Elisa A M; Aarons, Leon; Krekels, Elke H J; Knibbe, Catherijne A J
2018-03-23
In covariate (sub)models of population pharmacokinetic models, most covariates are normalized to the median value; however, for body weight, normalization to 70 kg or 1 kg is often applied. In this article, we illustrate the impact of normalization weight on the precision of population clearance (CLpop) parameter estimates. The influence of normalization weight (70 kg, 1 kg or median weight) on the precision of the CLpop estimate, expressed as relative standard error (RSE), was illustrated using data from a pharmacokinetic study in neonates with a median weight of 2.7 kg. In addition, a simulation study was performed to show the impact of normalization to 70 kg in pharmacokinetic studies with paediatric or obese patients. The RSE of the CLpop parameter estimate in the neonatal dataset was lowest with normalization to median weight (8.1%), compared with normalization to 1 kg (10.5%) or 70 kg (48.8%). Typical clearance (CL) predictions were independent of the normalization weight used. Simulations showed that the increase in RSE of the CLpop estimate with 70 kg normalization was highest in studies with a narrow weight range and a geometric mean weight away from 70 kg. When, instead of the median weight, a normalization weight outside the observed range is used, the RSE of the CLpop estimate will be inflated and should therefore not be used for model selection. Instead, established mathematical principles can be used to calculate the RSE of the typical CL (CLTV) at a relevant weight to evaluate the precision of CL predictions.
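The independence of typical clearance predictions from the normalization weight can be sketched with an allometric model; the CLpop value and exponent below are assumed, with the paper's neonatal median weight of 2.7 kg:

```python
# Allometric clearance model CL(WT) = CL_pop * (WT / WT_norm) ** 0.75.
# Changing the normalization weight only rescales CL_pop; predictions match.
exponent = 0.75
wt_median = 2.7          # median weight of the neonatal dataset (kg)
cl_pop_median = 0.5      # assumed typical clearance at the median weight

# Equivalent parameterization normalized to 70 kg
cl_pop_70 = cl_pop_median * (70.0 / wt_median) ** exponent

def cl(wt, cl_pop, wt_norm):
    return cl_pop * (wt / wt_norm) ** exponent

# Typical clearance predictions are identical under either normalization
for wt in [1.0, 2.7, 5.0, 70.0]:
    a = cl(wt, cl_pop_median, wt_median)
    b = cl(wt, cl_pop_70, 70.0)
    print(wt, a, abs(a - b) < 1e-9)
```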
Physical activity into the meal glucose-insulin model of type 1 diabetes: in silico studies.
Man, Chiara Dalla; Breton, Marc D; Cobelli, Claudio
2009-01-01
A simulation model of a glucose-insulin system accounting for physical activity is needed to reliably simulate normal life conditions, thus accelerating the development of an artificial pancreas. In fact, exercise causes a transient increase of insulin action and may lead to hypoglycemia. However, physical activity is difficult to model. In the past, it was described indirectly as a rise in insulin. Recently, a new parsimonious model of exercise effect on glucose homeostasis has been proposed that links the change in insulin action and glucose effectiveness to heart rate (HR). The aim of this study was to plug this exercise model into our recently proposed large-scale simulation model of glucose metabolism in type 1 diabetes to better describe normal life conditions. The exercise model describes changes in glucose-insulin dynamics in two phases: a rapid on-and-off change in insulin-independent glucose clearance and a rapid-on/slow-off change in insulin sensitivity. Three candidate models of glucose effectiveness and insulin sensitivity as a function of HR have been considered, both during exercise and recovery after exercise. By incorporating these three models into the type 1 diabetes model, we simulated different levels (from mild to moderate) and duration of exercise (15 and 30 minutes), both in steady-state (e.g., during euglycemic-hyperinsulinemic clamp) and in nonsteady state (e.g., after a meal) conditions. One candidate exercise model was selected as the most reliable. A type 1 diabetes model also describing physical activity is proposed. The model represents a step forward to accurately describe glucose homeostasis in normal life conditions; however, further studies are needed to validate it against data. © Diabetes Technology Society
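The rapid-on/slow-off dynamics can be sketched as a first-order filter whose time constant switches between onset and recovery; the time constants and the 30-minute exercise bout below are illustrative, not the paper's estimates:

```python
# Rapid-on / slow-off first-order dynamics for the exercise effect on
# insulin sensitivity, driven by heart rate above basal (normalized to 0..1)
tau_on, tau_off = 1.0, 60.0      # minutes (assumed)
dt = 0.01                        # minutes per step
steps = 12_000                   # simulate 2 h

zs = [0.0] * (steps + 1)
for k in range(steps):
    t = k * dt
    exercising = 10.0 <= t < 40.0            # HR elevated for a 30-min bout
    target = 1.0 if exercising else 0.0      # normalized HR drive
    tau = tau_on if exercising else tau_off  # fast onset, slow recovery
    zs[k + 1] = zs[k] + dt * (target - zs[k]) / tau

def at(minute):
    return zs[round(minute / dt)]

# Effect is high within minutes of onset, but still elevated 30 min after stopping
print(at(15), at(40), at(70))
```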
LES of Supersonic Turbulent Channel Flow at Mach Numbers 1.5 and 3
NASA Astrophysics Data System (ADS)
Raghunath, Sriram; Brereton, Giles
2009-11-01
LES of compressible, turbulent, body-force-driven, isothermal-wall channel flows at Reτ of 190 and 395 at moderate supersonic speeds (Mach 1.5 and 3) are presented. Simulations are fully resolved in the wall-normal direction without the need for wall-layer models. SGS models for incompressible flows, with appropriate extensions for compressibility, are tested a priori against DNS results and used in LES. Convergence of the simulations is found to be sensitive to the initial conditions and to the choice of model (wall-normal damping) in the laminar sublayer. The Nicoud-Ducros wall-adapting SGS model, coupled with a standard SGS heat flux model, is found to yield results in good agreement with DNS.
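For context, the wall-normal damping mentioned above is classically handled, for Smagorinsky-type SGS models, with the Van Driest function (the Nicoud-Ducros wall-adapting model is designed to avoid needing it); a minimal sketch:

```python
import math

# Van Driest damping of SGS eddy viscosity near a wall:
# D(y+) = (1 - exp(-y+/A+))^2, with the classic constant A+ = 26
A_PLUS = 26.0

def van_driest(y_plus):
    return (1.0 - math.exp(-y_plus / A_PLUS)) ** 2

for yp in [0.0, 5.0, 26.0, 100.0, 300.0]:
    print(yp, van_driest(yp))
# D -> 0 at the wall (laminar sublayer) and -> 1 toward the outer flow
```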
Antonopoulos, Markos; Stamatakos, Georgios
2015-01-01
Intensive glioma tumor infiltration into the surrounding normal brain tissues is one of the most critical causes of glioma treatment failure. To quantitatively understand and mathematically simulate this phenomenon, several diffusion-based mathematical models have appeared in the literature. The majority of them ignore the anisotropic character of diffusion of glioma cells since availability of pertinent truly exploitable tomographic imaging data is limited. Aiming at enriching the anisotropy-enhanced glioma model weaponry so as to increase the potential of exploiting available tomographic imaging data, we propose a Brownian motion-based mathematical analysis that could serve as the basis for a simulation model estimating the infiltration of glioblastoma cells into the surrounding brain tissue. The analysis is based on clinical observations and exploits diffusion tensor imaging (DTI) data. Numerical simulations and suggestions for further elaboration are provided.
Bravo, Rafael; Axelrod, David E
2013-11-18
Normal colon crypts consist of stem cells, proliferating cells, and differentiated cells. Abnormal rates of proliferation and differentiation can initiate colon cancer. We have measured the variation in the number of each of these cell types in multiple crypts in normal human biopsy specimens. This has provided the opportunity to produce a calibrated computational model that simulates cell dynamics in normal human crypts, and by changing model parameter values, to simulate the initiation and treatment of colon cancer. An agent-based model of stochastic cell dynamics in human colon crypts was developed in the multi-platform open-source application NetLogo. It was assumed that each cell's probability of proliferation and probability of death is determined by its position in two gradients along the crypt axis, a divide gradient and a die gradient. A cell's type is not intrinsic, but rather is determined by its position in the divide gradient. Cell types are dynamic, plastic, and inter-convertible. Parameter values were determined for the shape of each of the gradients, and for a cell's response to the gradients. This was done by parameter sweeps that indicated the values that reproduced the measured number and variation of each cell type, and produced quasi-stationary stochastic dynamics. The behavior of the model was verified by its ability to reproduce the experimentally observed monoclonal conversion by neutral drift, the formation of adenomas resulting from mutations either at the top or bottom of the crypt, and by the robust ability of crypts to recover from perturbation by cytotoxic agents. One use of the virtual crypt model was demonstrated by evaluating different cancer chemotherapy and radiation scheduling protocols. A virtual crypt has been developed that simulates the quasi-stationary stochastic cell dynamics of normal human colon crypts.
It is unique in that it has been calibrated with measurements of human biopsy specimens, and it can simulate the variation of cell types in addition to the average number of each cell type. The utility of the model was demonstrated with in silico experiments that evaluated cancer therapy protocols. The model is available for others to conduct additional experiments.
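A minimal sketch of the positional divide/die gradient idea, with illustrative gradient shapes and type thresholds (the paper's model was calibrated against biopsy measurements by parameter sweeps):

```python
import random

random.seed(3)

# Crypt of N positions: 0 = bottom (stem niche), N-1 = top
N = 100

def p_divide(i):         # divide gradient: high at the bottom, decaying upward
    return max(0.0, 0.9 * (1.0 - i / (0.6 * N)))

def p_die(i):            # die gradient: low at the bottom, rising toward the top
    return min(0.9, 0.9 * i / N)

# Cell type is positional, not intrinsic (illustrative thresholds)
def cell_type(i):
    pd = p_divide(i)
    return "stem" if pd > 0.7 else ("proliferating" if pd > 0.2 else "differentiated")

shed = 0
for _ in range(1000):                 # each tick, one random cell may divide
    i = random.randrange(N)
    if random.random() < p_divide(i):
        shed += 1                     # daughter pushes the column up; top cell is shed

counts = {}
for i in range(N):
    counts[cell_type(i)] = counts.get(cell_type(i), 0) + 1
print(counts, shed)
```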
NASA Technical Reports Server (NTRS)
Hueschen, Richard M.
2011-01-01
A six degree-of-freedom, flat-earth dynamics, non-linear, and non-proprietary aircraft simulation was developed that is representative of a generic mid-sized twin-jet transport aircraft. The simulation was developed from a non-proprietary, publicly available, subscale twin-jet transport aircraft simulation using scaling relationships and a modified aerodynamic database. The simulation has an extended aerodynamics database with aero data outside the normal transport-operating envelope (large angle-of-attack and sideslip values). The simulation has representative transport aircraft surface actuator models with variable rate-limits and generally fixed position limits. The simulation contains a generic 40,000 lb sea level thrust engine model. The engine model is a first order dynamic model with a variable time constant that changes according to simulation conditions. The simulation provides a means for interfacing a flight control system to use the simulation sensor variables and to command the surface actuators and throttle position of the engine model.
Treatment model of dengue hemorrhagic fever infection in human body
NASA Astrophysics Data System (ADS)
Handayani, D.; Nuraini, N.; Primasari, N.; Wijaya, K. P.
2014-03-01
The treatment model of DHF presented in this paper involves the dynamics of five time-dependent compartments, i.e. susceptible cells, infected cells, free virus particles, immune cells, and haematocrit level. The treatment model is investigated based on normalization of the haematocrit level, which is expressed as an intravenous fluid infusion control. We analyze the stability of the disease-free equilibrium and the endemic equilibrium. The numerical simulations explain the dynamics of each compartment in the human body. These results show in particular that the infected compartment and the free virus particle compartment tend to vanish within two weeks of the onset of dengue virus infection. However, the simulation results also show that without treatment, the haematocrit level will decrease, though not to the normal level. Therefore, effective haematocrit normalization should be achieved with the treatment control.
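A hedged sketch of the within-host dynamics, reduced to a target-cell-limited susceptible/infected/virus system integrated with forward Euler (the paper's model additionally tracks immune cells and haematocrit under treatment; all parameters below are illustrative):

```python
# Minimal within-host model: susceptible cells S, infected cells I, virus V
#   dS/dt = -beta*S*V,  dI/dt = beta*S*V - delta*I,  dV/dt = p*I - c*V
beta, delta, p, c = 1e-5, 1.0, 100.0, 5.0   # assumed rates (per day)
S, I, V = 1e5, 0.0, 10.0                    # assumed initial conditions
dt, days = 0.001, 14.0

v_hist, i_hist = [V], [I]
for _ in range(int(days / dt)):
    dS = -beta * S * V
    dI = beta * S * V - delta * I
    dV = p * I - c * V
    S += dt * dS
    I += dt * dI
    V += dt * dV
    v_hist.append(V)
    i_hist.append(I)

# Infection peaks and is then effectively cleared within the two weeks
print(max(v_hist), v_hist[-1], i_hist[-1])
```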
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, Ke; Euser, Bryan J.; Rougier, Esteban
2018-06-20
Sheared granular layers undergoing stick-slip behavior are broadly employed to study the physics and dynamics of earthquakes. In this paper, a two-dimensional implementation of the combined finite-discrete element method (FDEM), which merges the finite element method (FEM) and the discrete element method (DEM), is used to explicitly simulate a sheared granular fault system including both gouge and plate, and to investigate the influence of different normal loads on seismic moment, macroscopic friction coefficient, kinetic energy, gouge layer thickness, and recurrence time between slips. In the FDEM model, the deformation of plates and particles is simulated using the FEM formulation, while particle-particle and particle-plate interactions are modeled using DEM-derived techniques. The simulated seismic moment distributions are generally consistent with those obtained from laboratory experiments. In addition, the simulation results demonstrate that with increasing normal load, (i) the kinetic energy of the granular fault system increases; (ii) the gouge layer thickness shows a decreasing trend; and (iii) the macroscopic friction coefficient does not change much. Analyses of the slip events reveal that, as the normal load increases, more slip events with large kinetic energy release and longer recurrence time occur, and the decrease in gouge layer thickness also tends to be larger, while the macroscopic friction coefficient drop decreases. Finally, the simulations not only reveal the influence of normal loads on the dynamics of sheared granular fault gouge, but also demonstrate the capabilities of FDEM for studying stick-slip dynamic behavior of granular fault systems.
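The FDEM code itself is not part of this record, but the qualitative effect of normal load on stick-slip events can be illustrated with a minimal one-block spring-slider sketch. All parameter values below are illustrative assumptions, not quantities from the paper:

```python
import numpy as np

def spring_slider(k=1.0, v_drive=1e-3, sigma_n=1.0, mu_s=0.6, mu_d=0.4,
                  dt=0.1, steps=20000):
    """Quasi-static one-block spring-slider: the block sticks until the
    spring force exceeds static friction (mu_s * sigma_n), then slips
    until the force relaxes to the dynamic level (mu_d * sigma_n).
    Returns the force drop of each slip event."""
    x_load, x_block = 0.0, 0.0
    drops = []
    for _ in range(steps):
        x_load += v_drive * dt              # steady loading-point motion
        f = k * (x_load - x_block)          # spring (shear) force on block
        if f > mu_s * sigma_n:              # static friction exceeded: slip
            slip = (f - mu_d * sigma_n) / k # slide until force = dynamic level
            x_block += slip
            drops.append(f - mu_d * sigma_n)
    return np.array(drops)

# Larger normal load -> larger force drop per event and longer recurrence
# time (fewer events in the same loading window), echoing the trends above.
small = spring_slider(sigma_n=1.0)
large = spring_slider(sigma_n=2.0)
```

In this toy picture the event force drop scales as (mu_s - mu_d) * sigma_n, so doubling the normal load roughly doubles the drop and the reloading (recurrence) time.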
Molenaar, Dylan; Bolsinova, Maria
2017-05-01
In generalized linear modelling of responses and response times, the observed response time variables are commonly transformed to make their distribution approximately normal. A normal distribution for the transformed response times is desirable as it justifies the linearity and homoscedasticity assumptions in the underlying linear model. Past research has, however, shown that the transformed response times are not always normal. Models have been developed to accommodate this violation. In the present study, we propose a modelling approach for responses and response times to test and model non-normality in the transformed response times. Most importantly, we distinguish between non-normality due to heteroscedastic residual variances, and non-normality due to a skewed speed factor. In a simulation study, we establish parameter recovery and the power to separate both effects. In addition, we apply the model to a real data set. © 2017 The Authors. British Journal of Mathematical and Statistical Psychology published by John Wiley & Sons Ltd on behalf of British Psychological Society.
NASA Astrophysics Data System (ADS)
Palmieri, Benoit; Bresler, Yony; Wirtz, Denis; Grant, Martin
2015-07-01
We propose a multiscale model for a monolayer of motile cells comprising normal and cancer cells. In the model, the two types of cells have identical properties except for their elasticity: cancer cells are softer and normal cells are stiffer. The goal is to isolate the role of the elasticity mismatch on the migration potential of cancer cells in the absence of other contributions that are present in real cells. The methodology is based on a phase-field description in which each cell is modeled as a highly deformable self-propelled droplet. We simulated two types of nearly confluent monolayers: one contains a single cancer cell in a layer of normal cells, and the other contains normal cells only. The simulation results demonstrate that elasticity mismatch alone is sufficient to increase the motility of the cancer cell significantly. Further, the trajectory of the cancer cell is decorated by several speed “bursts” during which the cancer cell quickly relaxes from a largely deformed shape and consequently increases its translational motion. The increased motility and the amplitude and frequency of the bursts are in qualitative agreement with recent experiments.
Li, Jiajia; Deng, Baoqing; Zhang, Bing; Shen, Xiuzhong; Kim, Chang Nyung
2015-01-01
A simulation of an unbaffled stirred tank reactor driven by a magnetic stirring rod was carried out in a moving reference frame. The free surface of the unbaffled stirred tank was captured by an Euler-Euler model coupled with the volume of fluid (VOF) method. The re-normalization group (RNG) k-ɛ model, the large eddy simulation (LES) model and the detached eddy simulation (DES) model were evaluated for simulating the flow field in the stirred tank. All three turbulence models reproduce the tangential velocity in the unbaffled stirred tank at rotational speeds of 150 rpm, 250 rpm and 400 rpm. Radial velocity is underpredicted by all three models. The LES model predicts the tangential velocity better, while the RNG k-ɛ model predicts the axial velocity better. The RNG k-ɛ model is recommended for simulating the flow in an unbaffled stirred tank with a magnetic rod because of its lower computational cost.
Montijn, Jorrit Steven; Klink, P Christaan; van Wezel, Richard J A
2012-01-01
Divisive normalization models of covert attention commonly use spike rate modulations as indicators of the effect of top-down attention. In addition, an increasing number of studies have shown that top-down attention increases the synchronization of neuronal oscillations as well, particularly in gamma-band frequencies (25-100 Hz). Although modulations of spike rate and synchronous oscillations are not mutually exclusive as mechanisms of attention, there has thus far been little effort to integrate these concepts into a single framework of attention. Here, we aim to provide such a unified framework by expanding the normalization model of attention with a multi-level hierarchical structure and a time dimension; allowing the simulation of a recently reported backward progression of attentional effects along the visual cortical hierarchy. A simple cascade of normalization models simulating different cortical areas is shown to cause signal degradation and a loss of stimulus discriminability over time. To negate this degradation and ensure stable neuronal stimulus representations, we incorporate a kind of oscillatory phase entrainment into our model that has previously been proposed as the "communication-through-coherence" (CTC) hypothesis. Our analysis shows that divisive normalization and oscillation models can complement each other in a unified account of the neural mechanisms of selective visual attention. The resulting hierarchical normalization and oscillation (HNO) model reproduces several additional spatial and temporal aspects of attentional modulation and predicts a latency effect on neuronal responses as a result of cued attention.
PMID:22586372
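As an illustration of the divisive-normalization building block that the HNO model stacks into a hierarchy, here is a minimal Reynolds-Heeger-style sketch; the gain values and the semi-saturation constant are illustrative assumptions, not parameters from the paper:

```python
import numpy as np

def normalization_model(stimulus_drive, attention_gain, sigma=0.1):
    """Divisive normalization with attentional gain: each neuron's response
    is its attention-weighted drive divided by the summed drive of the
    normalization pool plus a semi-saturation constant sigma."""
    excitatory = stimulus_drive * attention_gain
    return excitatory / (sigma + excitatory.sum())

drive = np.array([1.0, 1.0])        # two equal-contrast stimuli
unattended = normalization_model(drive, np.array([1.0, 1.0]))
attended = normalization_model(drive, np.array([2.0, 1.0]))  # attend stimulus 0
```

Attending to stimulus 0 boosts its normalized response while suppressing the competing stimulus through the shared normalization pool, which is the competitive interaction the hierarchical model propagates across areas.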
A Numerical Simulation of a Normal Sonic Jet into a Hypersonic Cross-Flow
NASA Technical Reports Server (NTRS)
Jeffries, Damon K.; Krishnamurthy, Ramesh; Chandra, Suresh
1997-01-01
This study involves numerical modeling of normal sonic jet injection into a hypersonic cross-flow. The numerical code used for the simulation is GASP (General Aerodynamic Simulation Program). First, the numerical predictions are compared with well-established solutions for compressible laminar flow. Then comparisons are made with non-injection test case measurements of surface pressure distributions; good agreement with the measurements is observed. Comparisons with the injection case are currently underway. All the experimental data were generated at the Southampton University Light Piston Isentropic Compression Tube.
Numerical Simulation of Sickle Cell Blood Flow in the Microcirculation
NASA Astrophysics Data System (ADS)
Berger, Stanley A.; Carlson, Brian E.
2001-11-01
A numerical simulation of normal and sickle cell blood flow through the transverse arteriole-capillary microcirculation is carried out to model the dominant mechanisms involved in the onset of vascular stasis in sickle cell disease. The transverse arteriole-capillary network is described by Strahler's network branching method, and the oxygen and blood transport in the capillaries is modeled by a Krogh cylinder analysis utilizing Lighthill's lubrication theory, as developed by Berger and King. Poiseuille's law is used to represent blood flow in the arterioles. Applying this flow and transport model and utilizing volumetric flow continuity at each network bifurcation, a nonlinear system of equations is obtained, which is solved iteratively using a steepest descent algorithm coupled with a Newton solver. Ten different networks are generated and flow results are calculated for normal blood and sickle cell blood without and with precapillary oxygen loss. We find that total volumetric blood flow through the network is greater in the two sickle cell blood simulations than for normal blood owing to the anemia associated with sickle cell disease. The percentage of capillary blockage in the network increases dramatically with decreasing pressure drop across the network in the sickle cell cases while there is no blockage when normal blood flows through simulated networks. It is concluded that, in sickle cell disease, without any vasomotor dilation response to decreasing oxygen concentrations in the blood, capillary blockage will occur in the microvasculature even at average pressure drops across the transverse arteriole-capillary networks.
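The arteriolar part of such a network model can be sketched with Poiseuille's law and volumetric flow continuity at a single bifurcation node. The vessel dimensions, viscosity and pressures below are illustrative assumptions, not values from the study:

```python
import numpy as np

def poiseuille_conductance(radius, length, viscosity=3e-3):
    """Volumetric conductance of a cylindrical vessel segment:
    G = pi * r^4 / (8 * mu * L), so that flow Q = G * delta_P."""
    return np.pi * radius**4 / (8 * viscosity * length)

# Hypothetical three-segment bifurcation: one inlet arteriole feeding
# two identical capillary branches through an internal node at pressure p.
G_in = poiseuille_conductance(25e-6, 1.0e-3)   # inlet arteriole
G_a = poiseuille_conductance(4e-6, 0.5e-3)     # capillary branch a
G_b = poiseuille_conductance(4e-6, 0.5e-3)     # capillary branch b
p_in, p_out = 4000.0, 0.0                      # Pa, drop across the network

# Flow continuity at the node: G_in*(p_in - p) = (G_a + G_b)*(p - p_out)
p = (G_in * p_in + (G_a + G_b) * p_out) / (G_in + G_a + G_b)
Q_in = G_in * (p_in - p)
Q_a = G_a * (p - p_out)
Q_b = G_b * (p - p_out)
```

In the paper's networks this continuity condition holds at every bifurcation, and the oxygen-dependent capillary rheology makes the resulting system nonlinear, hence the Newton/steepest-descent solve; here a single node reduces to one linear equation.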
Simulation's Ensemble is Better Than Ensemble Simulation
NASA Astrophysics Data System (ADS)
Yan, X.
2017-12-01
State Key Laboratory of Earth Surface Processes and Resource Ecology (ESPRE), Beijing Normal University, 19 Xinjiekouwai Street, Haidian District, Beijing 100875, China. Email: yxd@bnu.edu.cn. A dynamical system is simulated from an initial state. However, initial-state data carry great uncertainty, which leads to uncertainty in the simulation. Therefore, simulation from multiple possible initial states has been widely used in atmospheric science and has indeed been shown to lower the uncertainty; this approach is named the simulation's ensemble because multiple simulation results are fused. In the ecological field, individual-based model simulation (forest gap models, for example) can be regarded as a simulation's ensemble, in contrast with community-based simulation (most ecosystem models). In this talk, we will address the advantages of individual-based simulations and their ensembles.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ely, Geoffrey P.
2013-10-31
This project uses dynamic rupture simulations to investigate high-frequency seismic energy generation. The relevant phenomena (frictional breakdown, shear heating, effective normal-stress fluctuations, material damage, etc.) controlling rupture are strongly interacting and span many orders of magnitude in spatial scale, requiring high-resolution simulations that couple disparate physical processes (e.g., elastodynamics, thermal weakening, pore-fluid transport, and heat conduction). Compounding the computational challenge, we know that natural faults are not planar, but instead have roughness that can be approximated by power laws, potentially leading to large, multiscale fluctuations in normal stress. The capacity to perform 3D rupture simulations that couple these processes will provide guidance for constructing appropriate source models for high-frequency ground motion simulations. The improved rupture models from our multi-scale dynamic rupture simulations will be used to conduct physics-based (3D waveform-modeling-based) probabilistic seismic hazard analysis (PSHA) for California. These calculations will provide numerous important seismic hazard results, including a state-wide extended earthquake rupture forecast with rupture variations for all significant events, a synthetic seismogram catalog for thousands of scenario events, and more than 5000 physics-based seismic hazard curves for California.
Complex patterns of abnormal heartbeats
NASA Technical Reports Server (NTRS)
Schulte-Frohlinde, Verena; Ashkenazy, Yosef; Goldberger, Ary L.; Ivanov, Plamen Ch; Costa, Madalena; Morley-Davies, Adrian; Stanley, H. Eugene; Glass, Leon
2002-01-01
Individuals having frequent abnormal heartbeats interspersed with normal heartbeats may be at an increased risk of sudden cardiac death. However, mechanistic understanding of such cardiac arrhythmias is limited. We present a visual and qualitative method to display statistical properties of abnormal heartbeats. We introduce dynamical "heartprints" which reveal characteristic patterns in long clinical records encompassing approximately 10^5 heartbeats and may provide information about underlying mechanisms. We test if these dynamics can be reproduced by model simulations in which abnormal heartbeats are generated (i) randomly, (ii) at a fixed time interval following a preceding normal heartbeat, or (iii) by an independent oscillator that may or may not interact with the normal heartbeat. We compare the results of these three models and test their limitations to comprehensively simulate the statistical features of selected clinical records. This work introduces methods that can be used to test mathematical models of arrhythmogenesis and to develop a new understanding of underlying electrophysiologic mechanisms of cardiac arrhythmia.
Mapping of quantitative trait loci using the skew-normal distribution.
Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos
2007-11-01
In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find, and this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM; the resulting method is here denoted skew-normal IM. This flexible model, which includes the usual symmetric normal distribution as a special case, allows continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of the parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of skew-normal IM is assessed via stochastic simulation. The results indicate that skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
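The gain from modelling skewness directly can be sketched with a toy maximum-likelihood comparison. This uses scipy's built-in skew-normal fitting rather than the authors' EM implementation, and the trait parameters are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical skewed phenotype: skew-normal with shape parameter a = 5.
trait = stats.skewnorm.rvs(a=5, loc=10.0, scale=2.0, size=2000, random_state=7)

# Fit a normal and a skew-normal model, both by maximum likelihood.
mu, sd = stats.norm.fit(trait)
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(trait)

# Log-likelihoods: the skew-normal nests the normal (a = 0), so it
# should fit skewed data strictly better.
ll_norm = stats.norm.logpdf(trait, mu, sd).sum()
ll_skew = stats.skewnorm.logpdf(trait, a_hat, loc_hat, scale_hat).sum()
```

The same likelihood-ratio logic underlies the power comparison in the abstract: when the trait departs from normality, the skew-normal likelihood surface separates QTL genotype classes more sharply.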
Emerson, Douglas G.
1994-01-01
A model that simulates heat and water transfer in soils during freezing and thawing periods was developed and incorporated into the U.S. Geological Survey's Precipitation-Runoff Modeling System. The model's transfer of heat is based on an equation developed from Fourier's equation for heat flux. The model's transfer of water within the soil profile is based on the concept of capillary forces. Field capacity and infiltration rate can vary throughout the freezing and thawing period, depending on soil conditions and rate and timing of snowmelt. The model can be used to determine the effects of seasonally frozen soils on ground-water recharge and surface-water runoff. Data collected for two winters, 1985-86 and 1986-87, on three runoff plots were used to calibrate and verify the model. The winter of 1985-86 was colder than normal, and snow cover was continuous throughout the winter. The winter of 1986-87 was warmer than normal, and snow accumulated for only short periods of several days. Runoff, snowmelt, and frost depths were used as the criteria for determining the degree of agreement between simulated and measured data. The model was calibrated using the 1985-86 data for plot 2. The calibration simulation agreed closely with the measured data. The verification simulations for plots 1 and 3 using the 1985-86 data and for plots 1 and 2 using the 1986-87 data agreed closely with the measured data. The verification simulation for plot 3 using the 1986-87 data did not agree closely. The recalibration simulations for plots 1 and 3 using the 1985-86 data indicated little improvement because the verification simulations for plots 1 and 3 already agreed closely with the measured data.
Population Synthesis of Radio and γ-ray Normal, Isolated Pulsars Using Markov Chain Monte Carlo
NASA Astrophysics Data System (ADS)
Billman, Caleb; Gonthier, P. L.; Harding, A. K.
2013-04-01
We present preliminary results of a population statistics study of normal pulsars (NPs) from the Galactic disk using Markov Chain Monte Carlo techniques optimized according to two different methods. The first method compares the detected and simulated cumulative distributions of a series of pulsar characteristics, varying the model parameters to maximize the overall agreement. The advantage of this method is that the distributions do not have to be binned. The other method varies the model parameters to maximize the log of the maximum likelihood obtained from comparisons of four two-dimensional distributions of radio and γ-ray pulsar characteristics. The advantage of this method is that it provides a confidence region of the model parameter space. The computer code simulates neutron stars at birth using Monte Carlo procedures and evolves them to the present assuming initial spatial, kick velocity, magnetic field, and period distributions. Pulsars are spun down to the present and given radio and γ-ray emission characteristics, implementing an empirical γ-ray luminosity model. A comparison group of radio NPs detected in ten radio surveys is used to normalize the simulation, adjusting the model radio luminosity to match a birth rate. We include the Fermi pulsars in the forthcoming second pulsar catalog. We present preliminary results comparing the simulated and detected distributions of radio and γ-ray NPs, along with a confidence region in the parameter space of the assumed models. We express our gratitude for the generous support of the National Science Foundation (REU and RUI), the Fermi Guest Investigator Program and the NASA Astrophysics Theory and Fundamental Program.
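The Markov Chain Monte Carlo machinery behind such likelihood optimization can be sketched with a random-walk Metropolis-Hastings sampler recovering a single toy model parameter from a simulated "detected" sample; the pulsar physics is omitted and all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(2.0, 1.0, 500)   # stand-in "detected" characteristic

def log_like(mu):
    """Gaussian log-likelihood (unit variance), up to an additive constant."""
    return -0.5 * np.sum((observed - mu) ** 2)

# Random-walk Metropolis-Hastings over the single model parameter mu.
chain, mu = [], 0.0
ll = log_like(mu)
for _ in range(5000):
    prop = mu + rng.normal(0.0, 0.1)          # symmetric proposal
    ll_prop = log_like(prop)
    if np.log(rng.uniform()) < ll_prop - ll:  # accept with prob min(1, ratio)
        mu, ll = prop, ll_prop
    chain.append(mu)

posterior_mean = np.mean(chain[1000:])        # discard burn-in
```

The spread of the post-burn-in chain is what yields the confidence region mentioned in the abstract; in the real study the likelihood compares binned two-dimensional distributions of many pulsar characteristics rather than a single Gaussian sample.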
NASA Astrophysics Data System (ADS)
Ning, Fangkun; Jia, Weitao; Hou, Jian; Chen, Xingrui; Le, Qichi
2018-05-01
Various fracture criteria, especially the Johnson-Cook (J-C) model and the (normalized) Cockcroft-Latham (C-L) criterion, were contrasted and discussed. Based on the normalized C-L criterion adopted in this paper, FE simulation was carried out, and hot rolling experiments were performed over a temperature range of 200 °C–350 °C, rolling reduction rates of 25%–40% and rolling speeds of 7–21 r/min. The microstructure was observed by optical microscope, and the damage values from the simulation results were compared with the crack lengths for the various parameter combinations. The results show that the plate generated fewer edge cracks, and the microstructure exhibited slight shear bands and fine dynamically recrystallized grains, when rolled at 350 °C, 40% reduction and 14 r/min. An edge-crack pre-criterion model was obtained by combining the Zener-Hollomon equation with the deformation activation energy.
NASA Astrophysics Data System (ADS)
Li, Jun; Fu, Siyao; He, Haibo; Jia, Hongfei; Li, Yanzhong; Guo, Yi
2015-11-01
Large-scale regional evacuation is an important part of national security emergency response plans, and the emergency evacuation of large commercial shopping areas, as typical service systems, is a topical research problem. A systematic methodology based on Cellular Automata with a Dynamic Floor Field and an event-driven model is proposed and examined in a case study of evacuation from a commercial shopping mall. Pedestrian walking is based on the Cellular Automata and event-driven models. In this paper, the event-driven model is adopted to simulate pedestrian movement patterns, and the simulation process is divided into a normal situation and emergency evacuation. The model is composed of four layers: an environment layer, a customer layer, a clerk layer and a trajectory layer. For the simulation of pedestrian movement routes, the model takes into account the purchase intentions of customers and the density of pedestrians. Based on the evacuation model combining Cellular Automata with a Dynamic Floor Field and the event-driven model, the behavior characteristics of customers and clerks in normal and emergency-evacuation situations can be reflected. The distribution of individual evacuation times as a function of initial position and the dynamics of the evacuation process are studied. Our results indicate that the evacuation model combining Cellular Automata with a Dynamic Floor Field and event-driven scheduling can be used to simulate the evacuation of pedestrian flows in indoor areas with complicated surroundings and to investigate the layout of shopping malls.
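A minimal Cellular Automata evacuation with a static floor field can be sketched as follows; the dynamic floor field, the event-driven layer and the customer/clerk behavioral rules of the paper are omitted, and the room geometry is an assumption:

```python
from collections import deque

def evacuate(grid, pedestrians, exit_pos):
    """Agents greedily descend a static floor field (BFS distance to the
    exit), one cell per time step, never sharing a cell. Returns the
    number of steps until every agent has left through the exit."""
    rows, cols = len(grid), len(grid[0])
    # Static floor field: BFS distance from the exit through free cells (0s).
    field = {exit_pos: 0}
    queue = deque([exit_pos])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in field:
                field[(nr, nc)] = field[(r, c)] + 1
                queue.append((nr, nc))
    peds, t = set(pedestrians), 0
    while peds:
        t += 1
        occupied = set()
        for r, c in sorted(peds, key=lambda p: field[p]):  # nearest moves first
            options = [(nr, nc)
                       for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1))
                       if (nr, nc) in field and field[(nr, nc)] < field[(r, c)]
                       and (nr, nc) not in occupied]
            # Move to the best free downhill cell, or wait in place if blocked.
            occupied.add(min(options, key=lambda p: field[p]) if options else (r, c))
        peds = {p for p in occupied if p != exit_pos}  # the exit absorbs agents
    return t

room = [[0] * 5 for _ in range(5)]  # 5x5 obstacle-free room, exit at a corner
time_steps = evacuate(room, [(4, 4), (2, 3)], (0, 0))
```

A dynamic floor field would additionally deposit and diffuse a virtual trace along agents' paths; here only the static distance-to-exit component is retained.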
Mesoscale Simulation Data for Initializing Fast-Time Wake Transport and Decay Models
NASA Technical Reports Server (NTRS)
Ahmad, Nashat N.; Proctor, Fred H.; Vanvalkenburg, Randal L.; Pruis, Mathew J.; LimonDuparcmeur, Fanny M.
2012-01-01
The fast-time wake transport and decay models require vertical profiles of crosswinds, potential temperature and the eddy dissipation rate as initial conditions. These inputs are normally obtained from various field sensors. In case of data-denied scenarios or operational use, these initial conditions can be provided by mesoscale model simulations. In this study, the vertical profiles of potential temperature from a mesoscale model were used as initial conditions for the fast-time wake models. The mesoscale model simulations were compared against available observations and the wake model predictions were compared with the Lidar measurements from three wake vortex field experiments.
A robust two-way semi-linear model for normalization of cDNA microarray data
Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento
2005-01-01
Background Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789
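For contrast with the TW-SLM, the LOWESS-style idea of removing an intensity-dependent trend from the log-ratios can be sketched with a crude local-mean stand-in for the smoother; the sinusoidal dye bias and all parameters are synthetic assumptions:

```python
import numpy as np

def local_mean_normalize(M, A, window=1.0):
    """Subtract a moving local mean of M (log-ratio) versus A (average
    log-intensity) -- a crude numpy stand-in for the LOWESS trend fit."""
    M_norm = np.empty_like(M)
    for i in range(len(M)):
        mask = np.abs(A - A[i]) < window   # spots at similar intensity
        M_norm[i] = M[i] - M[mask].mean()
    return M_norm

rng = np.random.default_rng(42)
n = 2000
A = rng.uniform(6, 14, n)                      # average log-intensity per spot
M = 0.8 * np.sin(A) + rng.normal(0, 0.3, n)    # intensity-dependent dye bias
M_norm = local_mean_normalize(M, A)
```

Note this stand-in, like LOWESS, implicitly assumes most genes are not differentially expressed within each intensity window, which is exactly the assumption the TW-SLM is designed to relax.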
Reconstructing in-vivo reflectance spectrum of pigmented skin lesion by Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Wang, Shuang; He, Qingli; Zhao, Jianhua; Lui, Harvey; Zeng, Haishan
2012-03-01
In dermatology applications, diffuse reflectance spectroscopy has been extensively investigated as a promising noninvasive tool for distinguishing melanoma from benign pigmented skin lesions (nevi), which are concentrated with skin chromophores such as melanin and hemoglobin. We carried out a theoretical study to examine the melanin distribution in human skin tissue and establish a practical optical model for further pigmented-skin investigation, using junctional nevi as an example. A multiple-layer skin optical model was developed from established skin anatomy and published optical parameters of the different skin layers, blood and melanin. Monte Carlo simulation was used to model the interaction between excitation light and skin tissue and to rebuild the diffuse reflectance from skin tissue. A validated methodology was adopted to determine melanin content in human skin based on in vivo diffuse reflectance spectra. The rebuilt diffuse reflectance spectra were investigated by adding melanin to different layers of the theoretical model. An in vivo reflectance spectrum from a junctional nevus and its surrounding normal skin was studied by comparing the nevus-to-normal-skin ratio in both the experimental and simulated diffuse reflectance spectra. The simulation result showed good agreement with our clinical measurements, which indicates that our research method, including the spectral ratio method, the skin optical model and modification of the melanin content in the model, can be applied in further theoretical simulation of pigmented skin lesions.
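The Monte Carlo idea can be heavily simplified to a single semi-infinite layer with isotropic scattering; the multi-layer skin geometry, anisotropy and chromophore spectra of the study are omitted, and the optical coefficients below are illustrative:

```python
import numpy as np

def diffuse_reflectance(mu_a, mu_s, n_photons=5000, seed=3):
    """Minimal photon Monte Carlo in a semi-infinite medium: photons are
    launched downward, take exponentially distributed steps, are absorbed
    with probability mu_a/(mu_a+mu_s) per interaction, and are counted as
    diffusely reflected if they cross back above the surface."""
    rng = np.random.default_rng(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    escaped = 0
    for _ in range(n_photons):
        z, cos_t = 0.0, 1.0                 # depth and direction cosine
        while True:
            z += cos_t * rng.exponential(1.0 / mu_t)
            if z < 0:                       # crossed the surface: reflected
                escaped += 1
                break
            if rng.uniform() > albedo:      # interaction is an absorption
                break
            cos_t = rng.uniform(-1.0, 1.0)  # isotropic re-scatter
    return escaped / n_photons

# More absorber (e.g. added melanin) -> lower diffuse reflectance.
R_light = diffuse_reflectance(mu_a=0.1, mu_s=10.0)
R_dark = diffuse_reflectance(mu_a=1.0, mu_s=10.0)
```

Taking the ratio of two such reflectances mimics, in spirit, the nevus-to-normal-skin spectral ratio used in the study.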
Canadian crop calendars in support of the early warning project
NASA Technical Reports Server (NTRS)
Trenchard, M. H.; Hodges, T. (Principal Investigator)
1980-01-01
The Canadian crop calendars for LACIE are presented. Long term monthly averages of daily maximum and daily minimum temperatures for subregions of provinces were used to simulate normal daily maximum and minimum temperatures. The Robertson (1968) spring wheat and Williams (1974) spring barley phenology models were run using the simulated daily temperatures and daylengths for appropriate latitudes. Simulated daily temperatures and phenology model outputs for spring wheat and spring barley are given.
Note on the artefacts in SRIM simulation of sputtering
NASA Astrophysics Data System (ADS)
Shulga, V. I.
2018-05-01
The computer simulation program SRIM, unlike other well-known programs (MARLOWE, TRIM.SP, etc.), predicts non-zero values of the sputter yield at glancing ion bombardment of smooth amorphous targets and, for heavy ions, greatly underestimates the sputter yield at normal incidence. To understand the reasons for this, the sputtering of amorphous silicon bombarded with different ions was modeled here using the author's program OKSANA. Most simulations refer to 1 keV Xe ions, with angles of incidence covering the range from 0° (normal incidence) to almost 90°. It has been shown that SRIM improperly simulates the initial stage of the sputtering process. Some other artefacts in SRIM calculations of sputtering are also revealed and discussed.
Identification of walking human model using agent-based modelling
NASA Astrophysics Data System (ADS)
Shahabpoor, Erfan; Pavic, Aleksandar; Racic, Vitomir
2018-03-01
The interaction of walking people with large vibrating structures, such as footbridges and floors, in the vertical direction is an important yet challenging phenomenon to describe mathematically. Several different models have been proposed in the literature to simulate the interaction of stationary people with vibrating structures. However, research on moving (walking) human models, explicitly identified for vibration serviceability assessment of civil structures, is still sparse. In this study, the results of a comprehensive set of FRF-based modal tests were used, in which over a hundred test subjects walked in different group sizes and walking patterns on a test structure. An agent-based model was used to simulate discrete traffic-structure interactions. The occupied-structure modal parameters found in the tests were used to identify the parameters of the walking individual's single-degree-of-freedom (SDOF) mass-spring-damper model using a 'reverse engineering' methodology. The analysis of the results suggested that a normal distribution with mean μ = 2.85 Hz and standard deviation σ = 0.34 Hz can describe the human SDOF model natural frequency. Similarly, a normal distribution with μ = 0.295 and σ = 0.047 can describe the human model damping ratio. Compared to previous studies, the agent-based modelling methodology proposed in this paper offers significant flexibility in simulating multi-pedestrian walking traffic, external forces, and different mechanisms of human-structure and human-environment interaction at the same time.
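Given the identified distributions, a population of walking-pedestrian SDOF models can be sampled directly; the natural-frequency and damping-ratio distributions are the ones reported in the abstract, while the body mass is an assumed nominal value, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pedestrians = 1000

# Sample each pedestrian's SDOF parameters from the identified
# distributions: natural frequency ~ N(2.85, 0.34) Hz and
# damping ratio ~ N(0.295, 0.047).
f_n = rng.normal(2.85, 0.34, n_pedestrians)   # natural frequency, Hz
zeta = rng.normal(0.295, 0.047, n_pedestrians)
m = 75.0                                      # kg, assumed nominal body mass

# Equivalent mass-spring-damper coefficients for each agent.
omega_n = 2 * np.pi * f_n
k = m * omega_n**2            # spring stiffness, N/m
c = 2 * zeta * m * omega_n    # damping coefficient, N*s/m
```

In an agent-based traffic simulation, each sampled (m, k, c) triple would be attached to a moving agent and coupled to the structure's modal equation of motion at the agent's current position.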
Koh, Y-G.; Son, J.; Kwon, S-K.; Kim, H-J.; Kang, K-T.
2017-01-01
Objectives Preservation of both anterior and posterior cruciate ligaments in total knee arthroplasty (TKA) can lead to near-normal post-operative joint mechanics and improved knee function. We hypothesised that a patient-specific bicruciate-retaining prosthesis preserves near-normal kinematics better than standard off-the-shelf posterior cruciate-retaining and bicruciate-retaining prostheses in TKA. Methods We developed the validated models to evaluate the post-operative kinematics in patient-specific bicruciate-retaining, standard off-the-shelf bicruciate-retaining and posterior cruciate-retaining TKA under gait and deep knee bend loading conditions using numerical simulation. Results Tibial posterior translation and internal rotation in patient-specific bicruciate-retaining prostheses preserved near-normal kinematics better than other standard off-the-shelf prostheses under gait loading conditions. Differences from normal kinematics were minimised for femoral rollback and internal-external rotation in patient-specific bicruciate-retaining, followed by standard off-the-shelf bicruciate-retaining and posterior cruciate-retaining TKA under deep knee bend loading conditions. Moreover, the standard off-the-shelf posterior cruciate-retaining TKA in this study showed the most abnormal performance in kinematics under gait and deep knee bend loading conditions, whereas patient-specific bicruciate-retaining TKA led to near-normal kinematics. Conclusion This study showed that restoration of the normal geometry of the knee joint in patient-specific bicruciate-retaining TKA and preservation of the anterior cruciate ligament can lead to improvement in kinematics compared with the standard off-the-shelf posterior cruciate-retaining and bicruciate-retaining TKA. Cite this article: Y-G. Koh, J. Son, S-K. Kwon, H-J. Kim, O-R. Kwon, K-T. Kang. 
Preservation of kinematics with posterior cruciate-, bicruciate- and patient-specific bicruciate-retaining prostheses in total knee arthroplasty by using computational simulation with normal knee model. Bone Joint Res 2017;6:557–565. DOI: 10.1302/2046-3758.69.BJR-2016-0250.R1. PMID:28947604
Quasi-normal modes from non-commutative matrix dynamics
NASA Astrophysics Data System (ADS)
Aprile, Francesco; Sanfilippo, Francesco
2017-09-01
We explore similarities between the process of relaxation in the BMN matrix model and the physics of black holes in AdS/CFT. Focusing on Dyson-fluid solutions of the matrix model, we perform numerical simulations of the real-time dynamics of the system. By quenching the equilibrium distribution we study quasi-normal oscillations of scalar single-trace observables, we isolate the lowest quasi-normal mode, and we determine its frequencies as a function of the energy. Considering the BMN matrix model as a truncation of N=4 SYM, we also compute the frequencies of the quasi-normal modes of the dual scalar fields in the AdS5-Schwarzschild background. We compare the results, and we find a surprising similarity.
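Isolating the lowest quasi-normal mode of a relaxing observable amounts to fitting a damped oscillation, extracting a real frequency and a decay rate. A minimal sketch on a synthetic signal (the frequency and decay values are invented, not BMN results):

```python
import numpy as np

# Synthetic ringdown: x(t) = exp(-gamma*t) * cos(omega*t)
t = np.linspace(0.0, 20.0, 4000)
gamma, omega = 0.3, 2.0
x = np.exp(-gamma * t) * np.cos(omega * t)

# Locate interior local maxima of the oscillation
idx = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1
peaks_t, peaks_x = t[idx], x[idx]

# Peak spacing gives the oscillation period; the log-envelope slope
# gives the decay rate of the quasi-normal mode.
omega_est = 2.0 * np.pi / np.mean(np.diff(peaks_t))
gamma_est = -np.polyfit(peaks_t, np.log(peaks_x), 1)[0]
```

For a real simulation trace one would apply the same peak-tracking (or a full complex-frequency fit) to the quenched single-trace observable.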
Emerson, Douglas G.
1991-01-01
A model that simulates heat and water transfer in soils during freezing and thawing periods was developed and incorporated into the U.S. Geological Survey's Precipitation-Runoff Modeling System. The transfer of heat is based on an equation developed from Fourier's equation for heat flux. Field capacity and infiltration rate can vary throughout the freezing and thawing period, depending on soil conditions and the rate and timing of snowmelt. The transfer of water within the soil profile is based on the concept of capillary forces. The model can be used to determine the effects of seasonally frozen soils on ground-water recharge and surface-water runoff. Data collected for two winters, 1985-86 and 1986-87, on three runoff plots were used to calibrate and verify the model. The winter of 1985-86 was colder than normal and snow cover was continuous throughout the winter. The winter of 1986-87 was warmer than normal and snow accumulated for only short periods of several days. Runoff, snowmelt, and frost depths were used as the criteria for determining the degree of agreement between simulated and measured data. The model was calibrated using the 1985-86 data for plot 2. The calibration simulation agreed closely with the measured data. The verification simulations for plots 1 and 3 using the 1985-86 data and for plots 1 and 2 using the 1986-87 data agreed closely with the measured data. The verification simulation for plot 3 using the 1986-87 data did not agree closely. The recalibration simulations for plots 1 and 3 using the 1985-86 data indicated small improvement because the verification simulations for plots 1 and 3 already agreed closely with the measured data.
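The Fourier heat-flux component can be sketched as a one-dimensional explicit finite-difference conduction model with a sub-freezing surface; the diffusivity, grid, and boundary temperatures below are illustrative assumptions, not the model's calibrated plot parameters.

```python
import numpy as np

# 1-D explicit (FTCS) solution of dT/dt = alpha * d2T/dz2 in a soil column.
alpha = 5e-7            # thermal diffusivity, m^2/s (assumed)
dz, dt = 0.02, 60.0     # grid spacing (m) and time step (s)
r = alpha * dt / dz**2  # stability requires r <= 0.5 (here r = 0.075)

T = np.full(50, 5.0)    # initial soil temperature profile, deg C
T[0] = -10.0            # frozen ground surface (fixed boundary)
for _ in range(24 * 60):                       # one day in one-minute steps
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])   # interior update
```

After a simulated day the frost front has penetrated the upper cells while the deep boundary stays at its initial temperature; a soil-freezing model would add latent heat and coupled water movement on top of this conduction core.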
A kinematic/kinetic hybrid airplane simulator model : draft.
DOT National Transportation Integrated Search
2008-01-01
A kinematics-based flight model, for normal flight regimes, currently uses precise flight data to achieve a high level of aircraft realism. However, it was desired to further increase the model's accuracy, without a substantial increase in ...
A kinematic/kinetic hybrid airplane simulator model.
DOT National Transportation Integrated Search
2008-01-01
A kinematics-based flight model, for normal flight regimes, currently uses precise flight data to achieve a high level of aircraft realism. However, it was desired to further increase the model's accuracy, without a substantial increase in ...
Modeling and simulation for fewer-axis grinding of complex surface
NASA Astrophysics Data System (ADS)
Li, Zhengjian; Peng, Xiaoqiang; Song, Ci
2017-10-01
As the basis of fewer-axis grinding of complex surfaces, the grinding mathematical model is of great importance. A mathematical model of the grinding wheel was established, from which the coordinates and normal vectors of the wheel profile could be calculated. Through normal vector matching at the cutter contact point and a coordinate system transformation, the grinding mathematical model was established to work out the coordinates of the cutter location point. Based on the model, an interference analysis was simulated to find the correct position and posture of the workpiece for grinding. Then the positioning errors of the workpiece, including the translation positioning error and the rotation positioning error, were analyzed respectively, and the main locating datum was obtained. According to the analysis results, the grinding tool path was planned and generated to grind the complex surface, and good form accuracy was obtained. The grinding mathematical model is simple, feasible and can be widely applied.
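In its simplest form, the cutter-location computation described above reduces to offsetting the cutter-contact point along the matched unit surface normal; the function name and values below are an illustrative sketch, not the paper's full wheel-profile model.

```python
import numpy as np

def cutter_location(cc, normal, wheel_radius):
    """Offset the cutter-contact (CC) point along the matched unit normal
    to obtain the cutter-location (CL) point: CL = CC + r * n_hat."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)            # normalize the matched normal
    return np.asarray(cc, dtype=float) + wheel_radius * n

# Example: contact at (10, 5, 2) with surface normal +z and a 3 mm offset
cl = cutter_location([10.0, 5.0, 2.0], [0.0, 0.0, 2.0], 3.0)  # -> [10, 5, 5]
```

The real model also applies the coordinate-system transformation between wheel and workpiece frames before this offset; the sketch isolates only the normal-offset step.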
A physiologically-based model for simulation of color vision deficiency.
Machado, Gustavo M; Oliveira, Manuel M; Fernandes, Leandro A F
2009-01-01
Color vision deficiency (CVD) affects approximately 200 million people worldwide, compromising the ability of these individuals to effectively perform color and visualization-related tasks. This has a significant impact on their private and professional lives. We present a physiologically-based model for simulating color vision. Our model is based on the stage theory of human color vision and is derived from data reported in electrophysiological studies. It is the first model to consistently handle normal color vision, anomalous trichromacy, and dichromacy in a unified way. We have validated the proposed model through an experimental evaluation involving groups of color vision deficient individuals and normal color vision ones. Our model can provide insights and feedback on how to improve visualization experiences for individuals with CVD. It also provides a framework for testing hypotheses about some aspects of the retinal photoreceptors in color vision deficient individuals.
Modeling the clinical and economic implications of obesity using microsimulation.
Su, W; Huang, J; Chen, F; Iacobucci, W; Mocarski, M; Dall, T M; Perreault, L
2015-01-01
The obesity epidemic has raised considerable public health concerns, but there are few validated longitudinal simulation models examining the human and economic cost of obesity. This paper describes a microsimulation model as a comprehensive tool to understand the relationship between body weight, health, and economic outcomes. Patient health and economic outcomes were simulated annually over 10 years using a Markov-based microsimulation model. The obese population examined is nationally representative of obese adults in the US from the 2005-2012 National Health and Nutrition Examination Surveys, while a matched normal weight population was constructed to have similar demographics as the obese population during the same period. Prediction equations for onset of obesity-related comorbidities, medical expenditures, economic outcomes, mortality, and quality-of-life came from published trials and studies supplemented with original research. Model validation followed International Society for Pharmacoeconomics and Outcomes Research practice guidelines. Among surviving adults, relative to a matched normal weight population, obese adults averaged $3900 higher medical expenditures in the initial year, growing to $4600 higher expenditures in year 10. Obese adults had higher initial prevalence and higher simulated onset of comorbidities as they aged. Over 10 years, excess medical expenditures attributed to obesity averaged $4280 annually, ranging from $2820 for obese category I to $5100 for obese category II, and $8710 for obese category III. Each excess kilogram of weight contributed to $140 higher annual costs, on average, ranging from $136 (obese I) to $152 (obese III). Poor health associated with obesity increased work absenteeism and mortality, and lowered employment probability, personal income, and quality-of-life.
This validated model helps illustrate why obese adults have higher medical and indirect costs relative to normal weight adults, and shows that medical costs for obese adults rise more rapidly with aging relative to normal weight adults.
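The annual Markov-style update can be sketched as a toy microsimulation; the onset probability, base expenditure, and comorbidity cost increment below are illustrative placeholders, not the study's estimated prediction equations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy annual microsimulation of medical expenditures for a simulated cohort.
n, years = 10_000, 10
p_onset = 0.04                    # annual comorbidity-onset probability (assumed)
base_cost = 3900.0                # $/year baseline expenditure (illustrative)
comorbid_cost = 2000.0            # $/year added after comorbidity onset (illustrative)

has_comorbidity = np.zeros(n, dtype=bool)
total = np.zeros(n)
for _ in range(years):
    # Each year, some still-healthy individuals develop a comorbidity...
    has_comorbidity |= rng.random(n) < p_onset
    # ...and everyone accrues that year's expenditures.
    total += base_cost + np.where(has_comorbidity, comorbid_cost, 0.0)

mean_annual_cost = total.mean() / years
```

A full model layers mortality, employment, and quality-of-life transitions onto the same per-person annual loop, with probabilities taken from the fitted prediction equations rather than constants.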
Choi, Young Joon; Constantino, Jason; Vedula, Vijay; Trayanova, Natalia; Mittal, Rajat
2015-01-01
A methodology for the simulation of heart function that combines an MRI-based model of cardiac electromechanics (CE) with a Navier–Stokes-based hemodynamics model is presented. The CE model consists of two coupled components that simulate the electrical and the mechanical functions of the heart. Accurate representations of ventricular geometry and fiber orientations are constructed from the structural magnetic resonance and the diffusion tensor MR images, respectively. The deformation of the ventricle obtained from the electromechanical model serves as input to the hemodynamics model in this one-way coupled approach via imposed kinematic wall velocity boundary conditions and at the same time, governs the blood flow into and out of the ventricular volume. The time-dependent endocardial surfaces are registered using a diffeomorphic mapping algorithm, while the intraventricular blood flow patterns are simulated using a sharp-interface immersed boundary method-based flow solver. The utility of the combined heart-function model is demonstrated by comparing the hemodynamic characteristics of a normal canine heart beating in sinus rhythm against that of the dyssynchronously beating failing heart. We also discuss the potential of coupled CE and hemodynamics models for various clinical applications. PMID:26442254
Monte Carlo simulation of aorta autofluorescence
NASA Astrophysics Data System (ADS)
Kuznetsova, A. A.; Pushkareva, A. E.
2016-08-01
Results of a numerical simulation of aorta autofluorescence by the Monte Carlo method are reported. Two states of the aorta, normal and with atherosclerotic lesions, are studied. A model of the studied tissue is developed on the basis of information about its optical, morphological, and physico-chemical properties. It is shown that the data obtained by numerical Monte Carlo simulation are in good agreement with experimental results, indicating the adequacy of the developed model of aorta autofluorescence.
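A stripped-down version of such photon transport can be sketched as a weighted random walk; the optical coefficients, isotropic phase function, and termination rule below are simplifying assumptions, not the aorta tissue model used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal Monte Carlo photon transport in a semi-infinite tissue slab.
mu_a, mu_s = 0.2, 10.0          # absorption / scattering coefficients, 1/mm (assumed)
mu_t = mu_a + mu_s
albedo = mu_s / mu_t            # fraction of weight surviving each interaction

absorbed_depth = []
for _ in range(2000):
    z, uz, w = 0.0, 1.0, 1.0    # depth, direction cosine, photon weight
    while True:
        z += uz * (-np.log(rng.random()) / mu_t)   # sample free path length
        if z < 0.0:                                # photon escaped the surface
            break
        w *= albedo                                # deposit (1 - albedo) of weight
        if w < 1e-3:                               # terminate faint photons
            absorbed_depth.append(z)
            break
        uz = 2.0 * rng.random() - 1.0              # isotropic re-scatter

mean_depth = float(np.mean(absorbed_depth))
```

A fluorescence simulation would additionally re-emit absorbed weight at the fluorophore emission wavelength and transport it back to the detector; this sketch covers only the excitation-light random walk.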
Appliance of Independent Component Analysis to System Intrusion Analysis
NASA Astrophysics Data System (ADS)
Ishii, Yoshikazu; Takagi, Tarou; Nakai, Kouji
In order to analyze the output of the intrusion detection system and the firewall, we evaluated the applicability of ICA (independent component analysis). We developed a simulator for evaluating intrusion analysis methods. The simulator consists of a network model of an information system, the service model and the vulnerability model of each server, and the action model performed on client and intruder. We applied ICA to analyze the audit trail of the simulated information system. We report the evaluation results of ICA on intrusion analysis. In the simulated case, ICA separated two attacks correctly, and related an attack to the abnormalities of the normal application produced under the influence of the attack.
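The separation step can be sketched with a small hand-rolled FastICA (deflation scheme, tanh nonlinearity) on two synthetic channels standing in for mixed audit-trail signals; the signals and mixing matrix are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two synthetic sources: a bursty "attack" pattern and smooth normal activity.
t = np.linspace(0.0, 8.0, 2000)
s1 = np.sign(np.sin(7.0 * t))            # square-wave "attack" bursts
s2 = np.sin(3.0 * t)                     # smooth background activity
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.5, 1.0]])   # unknown mixing of the channels
X = A @ S

# Center and whiten the observations.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Z = (E @ np.diag(d ** -0.5) @ E.T) @ X

# FastICA, deflation scheme with tanh nonlinearity.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(w @ Z)
        w = (Z * g).mean(axis=1) - (1.0 - g ** 2).mean() * w
        for j in range(i):               # deflate against components already found
            w -= (w @ W[j]) * W[j]
        w /= np.linalg.norm(w)
    W[i] = w

Y = W @ Z    # recovered sources, up to sign and permutation
```

Each recovered row correlates almost perfectly with one of the original sources, which is the property the intrusion analysis relies on to pull an attack signature out of mixed logs.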
Real-time simulator for helicopter rotor wind-tunnel operations
NASA Technical Reports Server (NTRS)
Talbot, P. D.; Peterson, R. L.; Graham, D. R.
1986-01-01
This paper describes the elements and operation of a simulator that is being used to train operators of the Rotor Test Apparatus (RTA) in the large-scale 40- by 80-Foot Wind Tunnel at Ames Research Center. The simulator, named TUTOR (for Tunnel Utilization Trainer with Operating Rotor), duplicates the controls of the rotor and its dynamic behavior, as well as the wind-tunnel controls. The simulation software uses a preexisting blade-element model of a four-bladed rotor with flapping and lead-lag degrees of freedom. Equations were developed for all hardware and controls of the RTA and of the wind tunnel that are normally required to perform a wind-tunnel test of a helicopter rotor. The simulator hardware consists of consoles designed to have the same appearance and functions as those in the control room of the 40- by 80-Foot Wind Tunnel, allowing input from three operators who normally establish the required operating conditions during a test run. Normal operating procedures can be practiced, as well as simulated emergencies such as rotor power failure.
Muscle function may depend on model selection in forward simulation of normal walking
Xiao, Ming; Higginson, Jill S.
2008-01-01
The purpose of this study was to quantify how the predicted muscle function would change in a muscle-driven forward simulation of normal walking when changing the number of degrees of freedom in the model. Muscle function was described by individual muscle contributions to the vertical acceleration of the center of mass (COM). We built a two-dimensional (2D) sagittal plane model and a three-dimensional (3D) model in OpenSim and used both models to reproduce the same normal walking data. Perturbation analysis was applied to deduce muscle function in each model. Muscle excitations and contributions to COM support were compared between the 2D and 3D models. We found that the 2D model was able to reproduce similar joint kinematics and kinetics patterns as the 3D model. Individual muscle excitations were different for most of the hip muscles but ankle and knee muscles were able to attain similar excitations. Total induced vertical COM acceleration by muscles and gravity was the same for both models. However, individual muscle contributions to COM support varied, especially for hip muscles. Although there is currently no standard way to validate muscle function predictions, a 3D model seems to be more appropriate for estimating individual hip muscle function. PMID:18804767
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seong W. Lee
During this reporting period, the literature survey, including the gasifier temperature measurement literature, the ultrasonic application and its background in cleaning applications, and the spray coating process, was completed. The gasifier simulator (cold model) testing has been successfully conducted. Four factors (blower voltage, ultrasonic application, injection time intervals, particle weight) were considered as significant factors that affect the temperature measurement. Analysis of Variance (ANOVA) was applied to analyze the test data. The analysis shows that all four factors are significant to the temperature measurements in the gasifier simulator (cold model). The regression analysis for the case with the normalized room temperature shows that a linear model fits the temperature data with 82% accuracy (18% error). The regression analysis for the case without the normalized room temperature shows 72.5% accuracy (27.5% error). The nonlinear regression analysis indicates a better fit than the linear regression: the nonlinear regression model's accuracy is 88.7% (11.3% error) for the normalized room temperature case, which is better than the linear regression analysis. The hot model thermocouple sleeve design and fabrication are completed. The gasifier simulator (hot model) design and fabrication are completed. System tests of the gasifier simulator (hot model) have been conducted and some modifications have been made. Based on the system tests and results analysis, the gasifier simulator (hot model) has met the proposed design requirements and is ready for system testing. The ultrasonic cleaning method is under evaluation and will be further studied for the gasifier simulator (hot model) application. The progress of this project has been on schedule.
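The linear-versus-nonlinear regression comparison can be sketched with ordinary polynomial least squares; the synthetic response below stands in for the gasifier temperature data, so the R-squared values are illustrative rather than the reported 82%/88.7% figures.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic, mildly curved response (e.g. temperature vs. a test factor).
x = np.linspace(0.0, 10.0, 60)
y = 2.0 + 1.5 * x - 0.08 * x**2 + rng.normal(0.0, 0.5, x.size)

def r_squared(y, yhat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    return 1.0 - ((y - yhat) ** 2).sum() / ((y - y.mean()) ** 2).sum()

# First-order (linear) vs. second-order (nonlinear) fits.
r2_lin = r_squared(y, np.polyval(np.polyfit(x, y, 1), x))
r2_quad = r_squared(y, np.polyval(np.polyfit(x, y, 2), x))
```

Because the quadratic model nests the linear one, its R-squared is never lower; the question the report answers is whether the improvement is large enough to justify the nonlinear form.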
In Vivo Characterization of Ultrasonic Backscattering from Normal and Abnormal Lungs.
NASA Astrophysics Data System (ADS)
Jafari, Farhad
The primary goal of this project has been to characterize the lung tissue in its in vivo ultrasonic backscattering properties in normal human subjects, and to study the changes in the lung echo characteristics under various pathological conditions. Such a characterization procedure is used to estimate the potential of ultrasound for providing useful diagnostic information about the superficial region of the lung. The results of this study may be divided into three categories: (1) This work has resulted in the ultrasonic characterization of lung tissue, in vivo, and has investigated the various statistical features of the lung echo properties in normal human subjects. The echo properties of the lungs are characterized with respect to the mean echo amplitude relative to a perfect reflector and the mean autocorrelation of normalized echo signals. (2) A theoretical model is developed to simulate the ultrasonic backscattering properties of the lung under normal and various simulated abnormal conditions. This model has been tested on various phantoms simulating the strong acoustic interactions of the lung. When applied to the lung, this model has shown excellent agreement with experimental data gathered on a population of normal human subjects. By varying a few of the model parameters, the effect of changes in the lung structural parameters on the detected ultrasonic echoes is investigated. It is found that alveoli size changes of about 50 percent and concentration changes of 40 percent may produce spectral changes exceeding the variability exhibited by normal lungs. (3) Ultrasonic echoes from the lungs of 4 groups of patients were studied. The groups included patients with edema, emphysema, pneumothorax, and patients undergoing radiation therapy for treatment of lung cancer. Significant deviations from normal lung echo characteristics are observed in more than 80 percent of the patients studied.
These deviations are intercompared and some qualitative associations between the echo characteristics of each patient group and their pulmonary pathology are made. It is concluded that the technique may provide a potential tool for detecting pulmonary abnormalities. More controlled patient studies, however, are indicated as necessary to determine the sensitivity of the ultrasound technique.
Simulating magnetic resonance images based on a model of tumor growth incorporating microenvironment
NASA Astrophysics Data System (ADS)
Jackson, Pamela R.; Hawkins-Daarud, Andrea; Partridge, Savannah C.; Kinahan, Paul E.; Swanson, Kristin R.
2018-03-01
Glioblastoma (GBM), the most aggressive primary brain tumor, is primarily diagnosed and monitored using gadolinium-enhanced T1-weighted and T2-weighted (T2W) magnetic resonance imaging (MRI). Hyperintensity on T2W images is understood to correspond with vasogenic edema and infiltrating tumor cells. GBM's inherent heterogeneity and resulting non-specific MRI image features complicate assessing treatment response. To better understand treatment response, we propose creating a patient-specific untreated virtual imaging control (UVIC), which represents an individual tumor's growth if it had not been treated, for comparison with actual post-treatment images. We generated a T2W MRI UVIC by combining a patient-specific mathematical model of tumor growth with a multi-compartmental MRI signal equation. GBM growth was mathematically modeled using the previously developed Proliferation-Invasion-Hypoxia-Necrosis-Angiogenesis-Edema (PIHNA-E) model, which simulated tumor as being comprised of three cellular phenotypes: normoxic, hypoxic and necrotic cells interacting with a vasculature species, angiogenic factors and extracellular fluid. Within the PIHNA-E model, both hypoxic and normoxic cells emitted angiogenic factors, which recruited additional vessels and caused the vessels to leak, allowing fluid, or edema, to escape into the extracellular space. The model's output was spatial volume fraction maps for each glioma cell type and edema/extracellular space. Volume fraction maps and corresponding T2 values were then incorporated into a multi-compartmental Bloch signal equation to create simulated T2W images. T2 values for individual compartments were estimated from the literature and a normal volunteer. T2 maps calculated from simulated images had normal white matter, normal gray matter, and tumor tissue T2 values within the range of literature values.
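The multi-compartmental signal step can be sketched as a volume-fraction-weighted sum of mono-exponential T2 decays per voxel; the compartment names and T2 values below are rough placeholders, not the values estimated from the literature and the volunteer in the study.

```python
import numpy as np

# Assumed per-compartment T2 values in milliseconds (illustrative only).
T2_MS = {"tumor": 80.0, "hypoxic": 90.0, "edema": 500.0}

def t2w_signal(vfrac, te=100.0):
    """Simplified multi-compartment T2W signal:
    S = sum_i v_i * exp(-TE / T2_i), with v_i the volume fractions."""
    return sum(v * np.exp(-te / T2_MS[name]) for name, v in vfrac.items())

# One voxel from the model's volume-fraction maps (fractions sum to 1).
voxel = {"tumor": 0.3, "hypoxic": 0.2, "edema": 0.5}
s = t2w_signal(voxel)
```

Edema-dominated voxels produce the brightest simulated T2W intensity, which is the behavior that lets the synthetic images reproduce the T2W hyperintensity pattern.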
Influence of plasticity models upon the outcome of simulated hypervelocity impacts
NASA Astrophysics Data System (ADS)
Thomas, John N.
1994-07-01
This paper describes the results of numerical simulations of aluminum-upon-aluminum impacts, which were performed with the CTH hydrocode to determine the effect of plasticity formulations upon the final perforation size in the targets. The targets were 1 mm and 5 mm thick plates and the projectiles were 10 mm by 10 mm right circular cylinders. Both targets and projectiles were represented as 2024 aluminum alloy. The hydrocode simulations were run in a two-dimensional cylindrical geometry. Normal impacts at velocities between 5 and 15 km/s were simulated. Three isotropic yield stress models were explored in the simulations: an elastic-perfectly plastic model and the Johnson-Cook and Steinberg-Guinan-Lund viscoplastic models. The fracture behavior was modeled by a simple tensile pressure criterion. The simulations show that using the three strength models resulted in only minor differences in the final perforation diameter. The simulation results were used to construct an equation to predict the final hole size resulting from impacts on thin targets.
Generation of linear dynamic models from a digital nonlinear simulation
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Krosel, S. M.
1979-01-01
The results and methodology used to derive linear models from a nonlinear simulation are presented. It is shown that averaged positive and negative perturbations in the state variables can reduce numerical errors in finite difference, partial derivative approximations and, in the control inputs, can better approximate the system response in both directions about the operating point. Both explicit and implicit formulations are addressed. Linear models are derived for the F100 engine, and comparisons of transients are made with the nonlinear simulation. The problem of startup transients in the nonlinear simulation in making these comparisons is addressed. Also, reduction of the linear models is investigated using the modal and normal techniques. Reduced-order models of the F100 are derived and compared with the full-state models.
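The averaged positive/negative perturbation scheme amounts to central differencing of the nonlinear state equations about the operating point. A minimal sketch with a made-up two-state system (not the F100 engine model):

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-4):
    """Central-difference Jacobians A = df/dx, B = df/du at (x0, u0),
    using averaged +/- perturbations to cancel first-order error."""
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2.0 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2.0 * eps)
    return A, B

# Example nonlinear dynamics: dx/dt = [-x0^2 + u, x0*x1]
f = lambda x, u: np.array([-x[0] ** 2 + u[0], x[0] * x[1]])
A, B = linearize(f, np.array([1.0, 2.0]), np.array([0.5]))
```

For the example the analytic Jacobians are A = [[-2, 0], [2, 1]] and B = [[1], [0]]; the central differences recover them to O(eps^2), which is the error-reduction benefit the paper attributes to averaging the positive and negative perturbations.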
Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.
ERIC Educational Resources Information Center
Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas
2002-01-01
Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…
NASA Astrophysics Data System (ADS)
Gordon, J. J.; Weiss, E.; Abayomi, O. K.; Siebers, J. V.; Dogan, N.
2011-05-01
In intensity modulated radiation therapy (IMRT) of cervical cancer, uterine motion can be larger than cervix motion, requiring a larger clinical target volume to planning target volume (CTV-to-PTV) margin around the uterine fundus. This work simulates different motion models and margins to estimate the dosimetric consequences. A virtual study used image sets from ten patients. Plans were created with uniform margins of 1 cm (PTVA) and 2.4 cm (PTVC), and a margin tapering from 2.4 cm at the fundus to 1 cm at the cervix (PTVB). Three inter-fraction motion models (MM) were simulated. In MM1, all structures moved with normally distributed rigid body translations. In MM2, CTV motion was progressively magnified as one moved superiorly from the cervix to the fundus. In MM3, both CTV and normal tissue motion were magnified as in MM2, modeling the scenario where normal tissues move into the void left by the mobile uterus. Plans were evaluated using static and percentile DVHs. For a conventional margin (PTVA), quasi-realistic uterine motion (MM3) reduces fundus dose by about 5 Gy and increases normal tissue volumes receiving 30-50 Gy by ~5%. A tapered CTV-to-PTV margin can restore fundus and CTV doses, but will increase normal tissue volumes receiving 30-50 Gy by a further ~5%.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Utgikar, Vivek; Sun, Xiaodong; Christensen, Richard
2016-12-29
The overall goal of the research project was to model the behavior of the advanced reactor-intermediate heat exchanger system and to develop advanced control techniques for off-normal conditions. The specific objectives defined for the project were: 1. To develop the steady-state thermal hydraulic design of the intermediate heat exchanger (IHX); 2. To develop mathematical models to describe the advanced nuclear reactor-IHX-chemical process/power generation coupling during normal and off-normal operations, and to simulate the models using multiphysics software; 3. To develop control strategies using genetic algorithm or neural network techniques and couple these techniques with the multiphysics software; 4. To validate the models experimentally. The project objectives were accomplished by defining and executing four different tasks corresponding to these specific objectives. The first task involved selection of IHX candidates and developing steady-state designs for those. The second task involved modeling of the transient and off-normal operation of the reactor-IHX system. The subsequent task dealt with the development of control strategies and involved algorithm development and simulation. The last task involved experimental validation of the thermal hydraulic performance of the two prototype heat exchangers designed and fabricated for the project at steady-state and transient conditions to simulate the coupling of the reactor-IHX-process plant system. The experimental work utilized two test facilities at The Ohio State University (OSU): the existing High-Temperature Helium Test Facility (HTHF) and the newly developed high-temperature molten salt facility.
Effects of ignition location models on the burn patterns of simulated wildfires
Bar-Massada, A.; Syphard, A.D.; Hawbaker, T.J.; Stewart, S.I.; Radeloff, V.C.
2011-01-01
Fire simulation studies that use models such as FARSITE often assume that ignition locations are distributed randomly, because spatially explicit information about actual ignition locations is difficult to obtain. However, many studies show that the spatial distribution of ignition locations, whether human-caused or natural, is non-random. Thus, predictions from fire simulations based on random ignitions may be unrealistic. However, the extent to which the assumption of ignition location affects the predictions of fire simulation models has never been systematically explored. Our goal was to assess the difference in fire simulations that are based on random versus non-random ignition location patterns. We conducted four sets of 6000 FARSITE simulations for the Santa Monica Mountains in California to quantify the influence of random and non-random ignition locations and normal and extreme weather conditions on fire size distributions and spatial patterns of burn probability. Under extreme weather conditions, fires were significantly larger for non-random ignitions compared to random ignitions (mean area of 344.5 ha and 230.1 ha, respectively), but burn probability maps were highly correlated (r = 0.83). Under normal weather, random ignitions produced significantly larger fires than non-random ignitions (17.5 ha and 13.3 ha, respectively), and the spatial correlations between burn probability maps were not high (r = 0.54), though the difference in the average burn probability was small. The results of the study suggest that the location of ignitions used in fire simulation models may substantially influence the spatial predictions of fire spread patterns. However, the spatial bias introduced by using a random ignition location model may be minimized if the fire simulations are conducted under extreme weather conditions when fire spread is greatest. © 2010 Elsevier Ltd.
Anatomically realistic multiscale models of normal and abnormal gastrointestinal electrical activity
Cheng, Leo K; Komuro, Rie; Austin, Travis M; Buist, Martin L; Pullan, Andrew J
2007-01-01
One of the major aims of the International Union of Physiological Sciences (IUPS) Physiome Project is to develop multiscale mathematical and computer models that can be used to help understand human health. We present here a small facet of this broad plan that applies to the gastrointestinal system. Specifically, we present an anatomically and physiologically based modelling framework that is capable of simulating normal and pathological electrical activity within the stomach and small intestine. The continuum models used within this framework have been created using anatomical information derived from common medical imaging modalities and data from the Visible Human Project. These models explicitly incorporate the various smooth muscle layers and networks of interstitial cells of Cajal (ICC) that are known to exist within the walls of the stomach and small bowel. Electrical activity within individual ICCs and smooth muscle cells is simulated using a previously published simplified representation of the cell level electrical activity. This simulated cell level activity is incorporated into a bidomain representation of the tissue, allowing electrical activity of the entire stomach or intestine to be simulated in the anatomically derived models. This electrical modelling framework successfully replicates many of the qualitative features of the slow wave activity within the stomach and intestine and has also been used to investigate activity associated with functional uncoupling of the stomach. PMID:17457969
Modeling normal shock velocity curvature relations for heterogeneous explosives
NASA Astrophysics Data System (ADS)
Yoo, Sunhee; Crochet, Michael; Pemberton, Steven
2017-01-01
The theory of Detonation Shock Dynamics (DSD) is, in part, an asymptotic method to model a functional form of the relation between the shock normal, its time rate, and the shock curvature κ. In addition, shock polar analysis provides a relation between the shock angle θ and the detonation velocity Dn that depends on the equations of state (EOS) of the two adjacent materials. For the axial detonation of an explosive material confined by a cylinder, the shock angle is defined as the angle between the shock normal and the normal to the cylinder liner, located at the intersection of the shock front and the cylinder inner wall. Therefore, given an ideal explosive such as PBX-9501 with the two functional models determined, a unique, smooth detonation front shape ψ can be determined that approximates the steady-state detonation shock front of the explosive. However, experimental measurements of the Dn(κ) relation for heterogeneous explosives such as PBXN-111 [D. K. Kennedy, 2000] are challenging due to the non-smoothness and asymmetry usually observed in the experimental streak records of explosion fronts. Among many possibilities, the asymmetric character may be attributed to the heterogeneity of the explosives; here, material heterogeneity refers to compositions with multiple components and a grain morphology that can be modeled statistically. Therefore, in extending the formulation of DSD to modern novel explosives, we pose two questions: (1) is there any simple hydrodynamic model that can simulate such an asymmetric shock evolution, and (2) what statistics can be derived for the asymmetry using simulations with defined structural heterogeneity in the unreacted explosive? Saenz, Taylor and Stewart [1] studied constitutive models for derivation of the Dn(κ) relation for porous homogeneous explosives and carried out simulations in a spherical coordinate frame.
In this paper we extend their model to account for heterogeneity and present shock evolutions in heterogeneous explosives using 2-D hydrodynamic simulations with some statistical examination. As an initial work, we assume that the heterogeneity comes from the local density variation or porosity only.
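The assumed heterogeneity model (local density or porosity variation only) can be sketched as an initial-condition generator for a 2-D hydrodynamic simulation. The grid size, nominal density, and 5% relative variation below are illustrative values, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch: the unreacted explosive's only heterogeneity is a statistical
# local density (porosity) variation, sampled cell by cell.
nx, ny = 128, 128
rho0, rel_sigma = 1.60, 0.05          # g/cm^3, relative std. dev. (assumed)

porosity_noise = rng.normal(0.0, rel_sigma, (nx, ny))
rho = rho0 * (1.0 + porosity_noise)   # initial density field for the solver

# Guard against unphysical outliers in the sampled field.
rho = np.clip(rho, 0.5 * rho0, 1.5 * rho0)
```

Statistics of the resulting shock front asymmetry would then be gathered over an ensemble of such sampled fields.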
DOT National Transportation Integrated Search
1981-01-01
The System Availability Model (SAM) is a system-level model which provides measures of vehicle and passenger availability. The SAM operates in conjunction with the AGT discrete Event Simulation Model (DESM). The DESM output is the normal source of th...
NASA Astrophysics Data System (ADS)
Zhang, Li; Lüttge, Andreas
2009-11-01
With previous two-dimensional (2D) simulations based on surface-specific feldspar dissolution succeeding in relating the macroscopic feldspar kinetics to the molecular-scale surface reactions of Si and Al atoms (Zhang and Lüttge, 2008, 2009), we extended our modeling effort to three-dimensional (3D) feldspar particle dissolution simulations. Built on the same theoretical basis, the 3D feldspar particle dissolution simulations verified the anisotropic surface kinetics observed in the 2D surface-specific simulations. The combined effect of saturation state, pH, and temperature on the anisotropy of the surface kinetics was then evaluated and found to offer diverse options for the morphological evolution of dissolving feldspar nanoparticles with varying grain sizes and starting shapes. Among the three primary faces on the simulated feldspar surface, the (1 0 0) face has the highest dissolution rate over a wide range of saturation states and thus acquires a higher percentage of the surface area upon dissolution. The slowest dissolution occurs on either the (0 0 1) or the (0 1 0) face, depending on the bond energies of Si-(O)-Si (ΦSi-O-Si/kT) and Al-(O)-Si (ΦAl-O-Si/kT). When the ratio of ΦSi-O-Si/kT to ΦAl-O-Si/kT changes from 6:3 to 7:5, the dissolution rates of the three primary faces change from the trend (1 0 0) > (0 1 0) > (0 0 1) to the trend (1 0 0) > (0 0 1) > (0 1 0). The rate difference between faces becomes more distinct, and accordingly edge rounding becomes more significant. Feldspar nanoparticles also experience an increasing degree of edge rounding from far-from-equilibrium to close-to-equilibrium conditions. Furthermore, we assessed the connection between the continuous morphological modification and the variation in the bulk dissolution rate during the dissolution of a single feldspar particle.
Different normalization treatments, equivalent to the commonly used mass, cube-assumption, sphere-assumption, geometric-surface-area, and reactive-surface-area normalizations, were used to normalize the bulk dissolution rate. For each treatment, the time consistency and grain-size dependence of the normalized dissolution rate were evaluated, and the results revealed a significant dependence on the magnitude of surface kinetic anisotropy under differing environmental conditions. In general, the normalized dissolution rates are strongly dependent on grain size, and which treatment is time-consistent varies with the investigated condition. The modeling results suggest that the sphere-, cube-, and BET-normalized dissolution rates are appropriate under far-from-equilibrium conditions at low pH, where these normalizations are time-consistent and only slightly dependent on grain size.
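The geometric sphere- and cube-assumption normalizations above can be made concrete: both divide the same bulk rate by the surface area of an idealized shape with the particle's volume. The rate and volume below are hypothetical illustrative numbers:

```python
import numpy as np

# Hypothetical bulk dissolution output for one particle:
# total moles released per second, and particle volume (cm^3).
rate_mol_s = 2.0e-12
volume = 1.0e-9                         # ~10 micrometer grain (assumed)

# Sphere assumption: area of a sphere with the same volume.
r = (3.0 * volume / (4.0 * np.pi)) ** (1.0 / 3.0)
area_sphere = 4.0 * np.pi * r ** 2

# Cube assumption: area of a cube with the same volume.
a = volume ** (1.0 / 3.0)
area_cube = 6.0 * a ** 2

rate_sphere = rate_mol_s / area_sphere  # mol cm^-2 s^-1
rate_cube = rate_mol_s / area_cube
```

Because a sphere has the smallest surface area for a given volume, the sphere-normalized rate is always the larger of the two; the ratio of the two areas is the constant 6/(36π)^(1/3) ≈ 1.24 regardless of grain size.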
Digital simulation of a communication link for Pioneer Saturn Uranus atmospheric entry probe, part 1
NASA Technical Reports Server (NTRS)
Hinrichs, C. A.
1975-01-01
A digital simulation study is presented for a candidate modulator/demodulator design in an atmospheric scintillation environment with Doppler, Doppler rate, and signal attenuation typical of the conditions of an outer planet atmospheric probe. The simulation results indicate that the mean channel error rates with and without scintillation are similar to theoretical characterizations of the link. The simulation gives information for calculating other channel statistics and generates a quantized symbol stream on magnetic tape from which error correction decoding is analyzed. Results from the magnetic tape data analyses are also included. The receiver and bit synchronizer are modeled in the simulation at the level of hardware component parameters rather than at the loop equation level, and individual hardware parameters are identified. The atmospheric scintillation amplitude and phase are modeled independently. Normal and log-normal amplitude processes are studied. In each case the scintillations are low-pass filtered. The receiver performance is given for a range of signal-to-noise ratios with and without the effects of scintillation. The performance is reviewed for critical receiver parameter variations.
ERIC Educational Resources Information Center
Custer, Michael; Omar, Md Hafidz; Pomplun, Mark
2006-01-01
This study compared vertical scaling results for the Rasch model from BILOG-MG and WINSTEPS. The item and ability parameters for the simulated vocabulary tests were scaled across 11 grades; kindergarten through 10th. Data were based on real data and were simulated under normal and skewed distribution assumptions. WINSTEPS and BILOG-MG were each…
Sun, Mingzhu; Xu, Hui; Zeng, Xingjuan; Zhao, Xin
2017-01-01
There are various fantastic biological phenomena in biological pattern formation. Mathematical modeling using reaction-diffusion partial differential equation systems is employed to study the mechanism of pattern formation. However, model parameter selection is both difficult and time consuming. In this paper, a visual feedback simulation framework is proposed to calculate the parameters of a mathematical model automatically based on the basic principle of feedback control. In the simulation framework, the simulation results are visualized, and the image features are extracted as the system feedback. Then, the unknown model parameters are obtained by comparing the image features of the simulation image and the target biological pattern. Considering two typical applications, the visual feedback simulation framework is applied to fulfill pattern formation simulations for vascular mesenchymal cells and lung development. In the simulation framework, the spot, stripe, labyrinthine patterns of vascular mesenchymal cells, the normal branching pattern and the branching pattern lacking side branching for lung branching are obtained in a finite number of iterations. The simulation results indicate that it is easy to achieve the simulation targets, especially when the simulation patterns are sensitive to the model parameters. Moreover, this simulation framework can expand to other types of biological pattern formation. PMID:28225811
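The feedback principle described above can be reduced to a toy loop: a "simulation" whose extracted image feature depends on one unknown parameter, driven to the target by proportional feedback. The linear feature model and gain below are purely hypothetical stand-ins; a real use would run the reaction-diffusion PDE and extract the feature (e.g. stripe width) from the rendered pattern:

```python
# Toy sketch of the visual-feedback parameter search.

def simulate_feature(param):
    # Stand-in for: run the PDE with `param`, render the pattern,
    # extract an image feature. Assumed monotone response.
    return 3.0 * param + 1.0

target_feature = 10.0      # feature measured on the target biological pattern
param, gain = 0.0, 0.1     # unknown model parameter, feedback gain (assumed)

for step in range(200):
    error = target_feature - simulate_feature(param)
    if abs(error) < 1e-6:
        break
    param += gain * error   # proportional feedback update of the parameter
```

With this contraction (error shrinks by a factor 0.7 per iteration) the loop converges to param = 3.0 in a finite number of iterations, mirroring the paper's claim that sensitive features make the search converge quickly.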
Nonholonomic Hamiltonian Method for Meso-macroscale Simulations of Reacting Shocks
NASA Astrophysics Data System (ADS)
Fahrenthold, Eric; Lee, Sangyup
2015-06-01
The seamless integration of macroscale, mesoscale, and molecular scale models of reacting shock physics has been hindered by dramatic differences in the model formulation techniques normally used at different scales. In recent research the authors have developed the first unified discrete Hamiltonian approach to multiscale simulation of reacting shock physics. Unlike previous work, the formulation employs reacting thermomechanical Hamiltonian formulations at all scales, including the continuum. Unlike previous work, the formulation employs a nonholonomic modeling approach to systematically couple the models developed at all scales. Example applications of the method show meso-macroscale shock to detonation simulations in nitromethane and RDX. Research supported by the Defense Threat Reduction Agency.
NASA Astrophysics Data System (ADS)
Yui, Satoshi; Tsubota, Makoto; Kobayashi, Hiromichi
2018-04-01
The coupled dynamics of the two-fluid model of superfluid 4He is numerically studied for quantum turbulence of the thermal counterflow in a square channel. We combine the vortex filament model of the superfluid and the Navier-Stokes equations of normal fluid. Simulations of the coupled dynamics show that the velocity profile of the normal fluid is deformed significantly by superfluid turbulence as the vortices become dense. This result is consistent with recently performed visualization experiments. We introduce a dimensionless parameter that characterizes the deformation of the velocity profile.
NASA Astrophysics Data System (ADS)
Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo
2014-01-01
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows the PDFs of the scalar to be predicted in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
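The log-normal-stretching argument can be checked with a quick Monte Carlo sketch: if stretching factors are log-normal and a strip's peak concentration dilutes roughly inversely with its accumulated stretching, the scalar concentration is itself log-normal. The log-mean and log-standard-deviation below are illustrative, not fitted to the experiment:

```python
import numpy as np

rng = np.random.default_rng(1)

# Log-normal stretching factors rho (assumed illustrative parameters).
mu, sig = 2.0, 0.8
rho = rng.lognormal(mu, sig, 100_000)

# Simplified dilution rule: concentration ~ c0 / rho.
c = 1.0 / rho
logc = np.log(c)

# If c is log-normal, log(c) is normal: its sample skewness should be ~0.
skew = ((logc - logc.mean()) ** 3).mean() / logc.std() ** 3
```

The near-zero skewness of log(c) is the Monte Carlo counterpart of the paper's statement that the scalar PDFs are asymptotically close to log-normal.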
Reynolds-Stress Budgets in an Impinging Shock Wave/Boundary-Layer Interaction
NASA Technical Reports Server (NTRS)
Vyas, Manan A.; Yoder, Dennis A.; Gaitonde, Datta V.
2018-01-01
Implicit large-eddy simulation (ILES) of a shock wave/boundary-layer interaction (SBLI) was performed. Comparisons with experimental data showed a sensitivity of the current prediction to the modeling of the sidewalls. This was found to be common among computational studies in the literature where periodic boundary conditions were used in the spanwise direction, as was the case in the present work. Thus, although the experiment was quasi-two-dimensional, the present simulation was effectively two-dimensional. Quantities present in the exact Reynolds-stress transport equation, i.e., production, molecular diffusion, turbulent transport, pressure diffusion, pressure strain, dissipation, and turbulent mass flux, were calculated. Reynolds-stress budgets were compared with past large-eddy simulation and direct numerical simulation datasets in the undisturbed portion of the turbulent boundary layer to validate the current approach. The budgets in the SBLI showed growth in the production term for the primary normal stress, with the energy-transfer mechanism in the secondary normal stresses led by the pressure-strain term. The pressure diffusion term, commonly assumed to be negligible by turbulence model developers, was shown to be small but non-zero in the normal-stress budgets; however, it played a key role in the primary shear-stress budget.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Haihua; Zou, Ling; Zhang, Hongbin
As part of the efforts to understand the unexpected “self-regulating” mode of the RCIC (Reactor Core Isolation Cooling) systems in the Fukushima accidents and to extend BWR RCIC and PWR AFW (Auxiliary Feed Water) operational range and flexibility, mechanistic models for the Terry turbine, based on Sandia's original work [1], have been developed and implemented in the RELAP-7 code to simulate the RCIC system. In 2016, our effort focused on normal working conditions of the RCIC system. More complex off-design conditions will be pursued in later years when more data are available. In the Sandia model, the turbine stator inlet velocity is provided according to a reduced-order model obtained from a large number of CFD (computational fluid dynamics) simulations. In this work, we propose an alternative method, using an under-expanded jet model to obtain the velocity and thermodynamic conditions for the turbine stator inlet. The models include both an adiabatic expansion process inside the nozzle and a free expansion process outside of the nozzle to ambient pressure. The combined models are able to predict the steam mass flow rate and supersonic velocity to the Terry turbine bucket entrance, which are the necessary inputs for the Terry turbine rotor model. The analytical models for the nozzle were validated with experimental data and benchmarked with CFD simulations; they generally agree well with both. The analytical models are suitable for implementation into a reactor system analysis code or severe accident code as part of mechanistic and dynamical models to understand RCIC behavior. The newly developed nozzle models and the turbine rotor model modified from Sandia's original work have been implemented into RELAP-7, along with the original Sandia Terry turbine model. A new pump model has also been developed and implemented to couple with the Terry turbine model.
An input model was developed to test the Terry turbine RCIC system, which generates reasonable results. Both the INL RCIC model and the Sandia RCIC model produce results matching major rated parameters such as the rotational speed, pump torque, and turbine shaft work for the normal operating condition. The Sandia model is more sensitive to the turbine outlet pressure than the INL model. The next step will be to further refine the Terry turbine models by including two-phase flow cases so that off-design conditions can be simulated. The pump model could also be enhanced with the use of homologous curves.
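The adiabatic nozzle expansion step can be sketched with the textbook isentropic ideal-gas relations. All numbers below (steam-like gamma, stagnation conditions, ambient pressure) are assumed illustrative values, not the RELAP-7 model constants:

```python
import math

# Isentropic ideal-gas expansion from stagnation conditions to ambient
# pressure, giving the supersonic jet speed at the bucket entrance.
gamma = 1.3          # approximate specific-heat ratio for steam (assumed)
R = 461.5            # J/(kg K), specific gas constant of water vapor
cp = gamma * R / (gamma - 1.0)

T0 = 560.0           # K, stagnation temperature at nozzle inlet (assumed)
p0 = 7.0e6           # Pa, stagnation pressure (assumed)
p_amb = 0.4e6        # Pa, ambient pressure (assumed)

# Exit velocity from conservation of stagnation enthalpy:
#   v = sqrt(2 cp T0 (1 - (p_amb/p0)^((gamma-1)/gamma)))
pr = (p_amb / p0) ** ((gamma - 1.0) / gamma)
v_exit = math.sqrt(2.0 * cp * T0 * (1.0 - pr))

# Speed of sound at the exit static temperature, to confirm the jet is
# supersonic where it meets the Terry turbine bucket.
T_exit = T0 * pr
a_exit = math.sqrt(gamma * R * T_exit)
mach = v_exit / a_exit
```

For these assumed conditions the expansion yields a jet around Mach 2.5, consistent with the abstract's statement that the stator delivers a supersonic velocity to the bucket entrance.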
ERIC Educational Resources Information Center
Pant, Mohan Dev
2011-01-01
The Burr families (Type III and Type XII) of distributions are traditionally used in the context of statistical modeling and for simulating non-normal distributions with moment-based parameters (e.g., Skew and Kurtosis). In educational and psychological studies, the Burr families of distributions can be used to simulate extremely asymmetrical and…
Visual attention and flexible normalization pools
Schwartz, Odelia; Coen-Cagli, Ruben
2013-01-01
Attention to a spatial location or feature in a visual scene can modulate the responses of cortical neurons and affect perceptual biases in illusions. We add attention to a cortical model of spatial context based on a well-founded account of natural scene statistics. The cortical model amounts to a generalized form of divisive normalization, in which the surround is in the normalization pool of the center target only if they are considered statistically dependent. Here we propose that attention influences this computation by accentuating the neural unit activations at the attended location, and that the amount of attentional influence of the surround on the center thus depends on whether center and surround are deemed in the same normalization pool. The resulting model extends a recent divisive normalization model of attention (Reynolds & Heeger, 2009). We simulate cortical surround orientation experiments with attention and show that the flexible model is suitable for capturing additional data and makes nontrivial testable predictions. PMID:23345413
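The core computation can be sketched as divisive normalization with a multiplicative attention field and a flexible pool, in the spirit of Reynolds & Heeger (2009). The gains, exponent, and semi-saturation constant below are assumed illustrative values, not fitted model parameters:

```python
import numpy as np

def normalized_response(drive, attn_gain, pool_mask, sigma=1.0, n=2.0):
    """drive: stimulus drives of the units; attn_gain: multiplicative
    attention field; pool_mask[i, j] = 1 if unit j is in unit i's
    normalization pool. The flexibility described above corresponds to
    the surround joining the center's pool only when the two are deemed
    statistically dependent."""
    excitation = (attn_gain * drive) ** n
    pool = pool_mask @ excitation
    return excitation / (sigma ** n + pool)

drive = np.array([10.0, 10.0])           # [center, surround]
attend_center = np.array([2.0, 1.0])     # attention boosts the center

same_pool = np.ones((2, 2))              # surround normalizes the center
separate_pools = np.eye(2)               # surround excluded from the pool

r_same = normalized_response(drive, attend_center, same_pool)
r_sep = normalized_response(drive, attend_center, separate_pools)
```

Comparing `r_same[0]` and `r_sep[0]` shows the model's key interaction: the attended center is suppressed more when the surround shares its normalization pool, so attentional effects depend on the inferred center-surround dependency.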
Multinomial mixture model with heterogeneous classification probabilities
Holland, M.D.; Gray, B.R.
2011-01-01
Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classifications vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial and correct-classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the Beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.
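A small sketch shows why a constant-probability model is biased here: when the correct-classification probability is logit-normal across sampling units, its unit-level mean differs from the probability implied by the mean logit (Jensen's inequality). The hyperparameters below are illustrative, not estimated from data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Correct-classification probabilities varying among sampling units as
# logit-normal random variables (assumed illustrative hyperparameters).
mu, sigma = 1.5, 1.0
z = rng.normal(size=200_000)
p = 1.0 / (1.0 + np.exp(-(mu + sigma * z)))   # unit-level probabilities

# A constant-probability model implicitly works with expit(mu):
p_fixed = 1.0 / (1.0 + np.exp(-mu))

# p.mean() < p_fixed here: ignoring the unit-level heterogeneity
# misstates the average classification probability, which propagates
# into biased multinomial parameter estimates.
```

Modeling the probabilities as logit-normal random effects, as in the elaborated Royle-Link model, accounts for exactly this gap.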
Simulation Based Earthquake Forecasting with RSQSim
NASA Astrophysics Data System (ADS)
Gilchrist, J. J.; Jordan, T. H.; Dieterich, J. H.; Richards-Dinger, K. B.
2016-12-01
We are developing a physics-based forecasting model for earthquake ruptures in California. We employ the 3D boundary element code RSQSim to generate synthetic catalogs with millions of events that span up to a million years. The simulations incorporate rate-state fault constitutive properties in complex, fully interacting fault systems. The Unified California Earthquake Rupture Forecast Version 3 (UCERF3) model and data sets are used for calibration of the catalogs and specification of fault geometry. Fault slip rates match the UCERF3 geologic slip rates, and catalogs are tuned such that earthquake recurrence matches the UCERF3 model. Utilizing the Blue Waters Supercomputer, we produce a suite of million-year catalogs to investigate the epistemic uncertainty in the physical parameters used in the simulations. In particular, values of the rate- and state-friction parameters a and b, the initial shear and normal stress, as well as the earthquake slip speed, are varied over several simulations. In addition to testing multiple models with homogeneous values of the physical parameters, the parameters a, b, and the normal stress are varied with depth as well as in heterogeneous patterns across the faults. Cross validation of UCERF3 and RSQSim is performed within the SCEC Collaboratory for Interseismic Simulation and Modeling (CISM) to determine the effect of the uncertainties in physical parameters, as observed in the field and measured in the lab, on the uncertainties in probabilistic forecasting. We are particularly interested in the short-term hazards of multi-event sequences due to complex faulting and multi-fault ruptures.
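The role of the varied parameters a and b can be seen from the steady-state rate-and-state friction law. The numbers below are typical laboratory-scale values chosen for illustration, not the study's calibrated parameters:

```python
import math

# Steady-state rate-and-state friction:
#   mu_ss(v) = mu0 + (a - b) * ln(v / v0)
mu0, v0 = 0.6, 1e-6        # reference friction and slip rate, m/s (assumed)
a, b = 0.010, 0.015        # rate and state parameters (assumed; a < b)

def mu_ss(v):
    return mu0 + (a - b) * math.log(v / v0)

# With a - b < 0 friction drops as slip accelerates (velocity weakening),
# the instability that lets a rate-state simulator nucleate earthquakes;
# varying a, b, and the normal stress shifts where and how this occurs.
```

Sweeping a, b, and normal stress across catalogs, as described above, probes how this weakening behavior maps into forecast uncertainty.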
Topology in two dimensions. IV - CDM models with non-Gaussian initial conditions
NASA Astrophysics Data System (ADS)
Coles, Peter; Moscardini, Lauro; Plionis, Manolis; Lucchin, Francesco; Matarrese, Sabino; Messina, Antonio
1993-02-01
The results of N-body simulations with both Gaussian and non-Gaussian initial conditions are used here to generate projected galaxy catalogs with the same selection criteria as the Shane-Wirtanen counts of galaxies. The Euler-Poincare characteristic is used to compare the statistical nature of the projected galaxy clustering in these simulated data sets with that of the observed galaxy catalog. All the models produce a topology dominated by a meatball shift when normalized to the known small-scale clustering properties of galaxies. Models characterized by a positive skewness of the distribution of primordial density perturbations are inconsistent with the Lick data, suggesting problems in reconciling models based on cosmic textures with observations. Gaussian CDM models fit the distribution of cell counts only if they have a rather high normalization but possess too low a coherence length compared with the Lick counts. This suggests that a CDM model with extra large scale power would probably fit the available data.
Individual Colorimetric Observer Model
Asano, Yuta; Fairchild, Mark D.; Blondé, Laurent
2016-01-01
This study proposes a vision model for individual colorimetric observers. The proposed model can be beneficial in many color-critical applications such as color grading and soft proofing to assess ranges of color matches instead of a single average match. We extended the CIE 2006 physiological observer by adding eight additional physiological parameters to model individual color-normal observers. These eight parameters control lens pigment density, macular pigment density, optical densities of L-, M-, and S-cone photopigments, and λmax shifts of L-, M-, and S-cone photopigments. By identifying the variability of each physiological parameter, the model can simulate color matching functions among color-normal populations using Monte Carlo simulation. The variabilities of the eight parameters were identified through two steps. In the first step, extensive reviews of past studies were performed for each of the eight physiological parameters. In the second step, the obtained variabilities were scaled to fit a color matching dataset. The model was validated using three different datasets: traditional color matching, applied color matching, and Rayleigh matches. PMID:26862905
Murad, Havi; Kipnis, Victor; Freedman, Laurence S
2016-10-01
Assessing interactions in linear regression models when covariates have measurement error (ME) is complex. We previously described regression calibration (RC) methods that yield consistent estimators and standard errors for interaction coefficients of normally distributed covariates having classical ME. Here we extend normal-based RC (NBRC) and linear RC (LRC) methods to a non-classical ME model, and describe more efficient versions that combine estimates from the main study and internal sub-study. We apply these methods to data from the Observing Protein and Energy Nutrition (OPEN) study. Using simulations we show that (i) for normally distributed covariates efficient NBRC and LRC were nearly unbiased and performed well with sub-study size ≥200; (ii) efficient NBRC had lower MSE than efficient LRC; (iii) the naïve test for a single interaction had type I error probability close to the nominal significance level, whereas efficient NBRC and LRC were slightly anti-conservative but more powerful; (iv) for markedly non-normal covariates, efficient LRC yielded less biased estimators with smaller variance than efficient NBRC. Our simulations suggest that it is preferable to use: (i) efficient NBRC for estimating and testing interaction effects of normally distributed covariates and (ii) efficient LRC for estimating and testing interactions for markedly non-normal covariates. © The Author(s) 2013.
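The basic regression calibration idea (before the interaction and non-classical extensions above) can be sketched for one covariate with classical ME and an internal sub-study. Sample sizes, variances, and the true coefficient below are assumed illustrative values:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-covariate RC sketch with classical measurement error.
n_main, n_sub = 2000, 300
beta = 0.5                                    # true coefficient (assumed)

x = rng.normal(0.0, 1.0, n_main)              # true covariate
w = x + rng.normal(0.0, 1.0, n_main)          # error-prone measurement
y = beta * x + rng.normal(0.0, 0.5, n_main)   # outcome

# Internal sub-study observes both x and w: fit E[x | w] = g0 + g1*w there.
xs, ws = x[:n_sub], w[:n_sub]
g1 = np.cov(xs, ws)[0, 1] / ws.var(ddof=1)
g0 = xs.mean() - g1 * ws.mean()

# Replace w by the calibrated predictor in the main-study regression.
xhat = g0 + g1 * w
beta_naive = np.cov(y, w)[0, 1] / w.var(ddof=1)      # attenuated
beta_rc = np.cov(y, xhat)[0, 1] / xhat.var(ddof=1)   # ~consistent
```

The naive slope is attenuated toward zero (here by about half, since half of the variance of w is noise), while the RC slope recovers the true coefficient; the paper's efficient NBRC/LRC variants refine how the main-study and sub-study information are combined.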
Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D
2017-01-01
If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
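The recommended transformation can be shown in a few lines: pool C-statistics on the logit scale, then back-transform. The validation values below are hypothetical, not QRISK2 results, and a simple unweighted mean stands in for a full inverse-variance random-effects analysis:

```python
import math

# Hypothetical C-statistics from five external validation studies.
c_stats = [0.72, 0.78, 0.81, 0.69, 0.75]

def logit(p):
    return math.log(p / (1.0 - p))

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

# Pool on the logit scale (where the between-study distribution is
# closer to normal), then back-transform the summary.
z = [logit(c) for c in c_stats]
pooled_logit = sum(z) / len(z)   # a real analysis would use
                                 # inverse-variance random-effects weights
pooled_c = expit(pooled_logit)
```

The same pattern applies to E/O with the log transformation: meta-analyze log(E/O), then exponentiate the pooled value and its interval limits.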
NASA Astrophysics Data System (ADS)
Li, L.; Xu, C.-Y.; Engeland, K.
2012-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used for uncertainty assessment of hydrological models; it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats the uncertainty in extreme flows of hydrological model simulations well. This study proposes a Bayesian modularization approach for uncertainty assessment of conceptual hydrological models that considers the extreme flows. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used with the traditional Bayesian models: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian methods. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
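The Metropolis-Hastings machinery used above can be illustrated with a minimal one-parameter sampler. The standard-normal target is a stand-in for a WASMOD parameter posterior; the proposal scale and chain length are assumed illustrative choices:

```python
import math
import random

random.seed(0)

# Minimal Metropolis-Hastings sketch targeting a standard normal
# log-posterior (stand-in for a hydrological-model parameter posterior).
def log_post(theta):
    return -0.5 * theta * theta

theta, chain = 0.0, []
for _ in range(50_000):
    prop = theta + random.gauss(0.0, 1.0)     # symmetric random-walk proposal
    if math.log(random.random()) < log_post(prop) - log_post(theta):
        theta = prop                           # accept; otherwise keep theta
    chain.append(theta)

burned = chain[5_000:]                         # discard burn-in
mean = sum(burned) / len(burned)
var = sum((t - mean) ** 2 for t in burned) / len(burned)
```

The post-burn-in chain reproduces the target's mean (~0) and variance (~1); in the study the same acceptance rule is applied with the WASMOD likelihoods in place of `log_post`.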
Simulating Limb Formation in the U.S. EPA Virtual Embryo - Risk Assessment Project
The U.S. EPA’s Virtual Embryo project (v-Embryo™) is a computer model simulation of morphogenesis that integrates cell and molecular level data from mechanistic and in vitro assays with knowledge about normal development processes to assess in silico the effects of chemicals on d...
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so we can improve estimation precision when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficient estimators are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
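The two-stage idea can be sketched in one dimension: estimate the variance function from squared OLS residuals, then re-fit by weighted (generalized) least squares. A running-mean smoother stands in for the paper's local polynomial fit, and all data-generating values are assumed illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Heteroscedastic toy data (assumed model: y = 1 + 2x, noise sd grows in x).
n = 2000
x = np.sort(rng.uniform(0.0, 1.0, n))
sigma = 0.2 + 1.5 * x
y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid2 = (y - X @ beta_ols) ** 2

# Stage 1: nonparametric estimate of sigma^2(x) via a running mean of the
# squared residuals (stand-in for the local polynomial smoother).
k = 101
var_hat = np.convolve(resid2, np.ones(k) / k, mode="same")

# Stage 2: generalized least squares with the estimated weights,
# solving (X' W X) beta = X' W y.
w = 1.0 / np.clip(var_hat, 1e-6, None)
WX = X * w[:, None]
beta_gls = np.linalg.solve(X.T @ WX, WX.T @ y)
```

No heteroscedasticity test is needed: if the noise happens to be homoscedastic, the estimated weights are roughly constant and stage 2 reduces to ordinary least squares.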
A Box-Cox normal model for response times.
Klein Entink, R H; van der Linden, W J; Fox, J-P
2009-11-01
The log-transform has been a convenient choice in response time modelling on test items. However, motivated by a dataset from the Medical College Admission Test in which the lognormal model violated the normality assumption, the possibilities of the broader class of Box-Cox transformations for response time modelling are investigated. After an introduction and an outline of a broader framework for analysing responses and response times simultaneously, the performance of a Box-Cox normal model for describing response times is investigated using simulation studies and a real data example. A transformation-invariant implementation of the deviance information criterion (DIC) is developed that allows comparison of model fit between models with different transformation parameters. Showing an enhanced description of the shape of the response time distributions, its application in an educational measurement context is discussed at length.
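The lognormal model is the λ = 0 member of the Box-Cox family, which can be shown by choosing λ to maximize the profile normal log-likelihood on simulated lognormal times. The sample size, lognormal parameters, and grid are assumed illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated response times that truly follow a lognormal model, so the
# selected Box-Cox parameter should land near lambda = 0 (the log).
t = rng.lognormal(mean=0.0, sigma=0.4, size=5000)

def boxcox(t, lam):
    return np.log(t) if lam == 0 else (t ** lam - 1.0) / lam

def profile_loglik(t, lam):
    # Profile normal log-likelihood of the transformed times, including
    # the Jacobian term of the Box-Cox transformation.
    z = boxcox(t, lam)
    n = len(t)
    return -0.5 * n * np.log(z.var()) + (lam - 1.0) * np.log(t).sum()

grid = np.linspace(-1.0, 1.0, 41)
best = max(grid, key=lambda lam: profile_loglik(t, lam))
```

When real response times reject normality under λ = 0, as in the Medical College Admission Test data above, the same profile search moves to a non-zero λ, which is exactly the flexibility the Box-Cox normal model adds.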
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gigley, H.M.
1982-01-01
An artificial intelligence approach to the simulation of neurolinguistically constrained processes in sentence comprehension is developed using control strategies for simulation of cooperative computation in associative networks. The desirability of this control strategy, in contrast to ATN and production system strategies, is explained. A first-pass implementation of HOPE, an artificial intelligence simulation model of sentence comprehension constrained by studies of aphasic performance, psycholinguistics, neurolinguistics, and linguistic theory, is described. Claims that the model could serve as a basis for sentence production simulation and for a model of language acquisition as associative learning are discussed. HOPE is a model that performs in a normal state and includes a lesion simulation facility. HOPE is also a research tool; its modifiability and use as a tool to investigate hypothesized causes of degradation in comprehension performance by aphasic patients are described. Issues of using behavioral constraints in modelling and obtaining appropriate data for simulated process modelling are discussed. Finally, problems of validating the simulation results are raised, and issues of how to interpret clinical results to define the evolution of the model are discussed. Conclusions with respect to the feasibility of artificial intelligence simulation process modelling are drawn based on the current state of research.
Computer simulation of liquid metals
NASA Astrophysics Data System (ADS)
Belashchenko, D. K.
2013-12-01
Methods for and the results of the computer simulation of liquid metals are reviewed. Two basic methods, classical molecular dynamics with known interparticle potentials and the ab initio method, are considered. Most attention is given to the simulated results obtained using the embedded atom model (EAM). The thermodynamic, structural, and diffusion properties of liquid metal models under normal and extreme (shock) pressure conditions are considered. Liquid-metal simulated results for the Groups I - IV elements, a number of transition metals, and some binary systems (Fe - C, Fe - S) are examined. Possibilities for the simulation to account for the thermal contribution of delocalized electrons to energy and pressure are considered. Solidification features of supercooled metals are also discussed.
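The embedded atom model (EAM) referred to above has a standard total-energy form, stated here in its usual notation:

\[
E_{\mathrm{tot}} \;=\; \sum_i F\!\left(\bar\rho_i\right) \;+\; \frac{1}{2}\sum_{i \ne j} \varphi(r_{ij}),
\qquad
\bar\rho_i \;=\; \sum_{j \ne i} \psi(r_{ij}),
\]

where \(F\) is the embedding energy of atom \(i\) in the effective electron density \(\bar\rho_i\), \(\varphi\) is a pair potential, and \(\psi\) is the electron-density contribution of a neighbor at distance \(r_{ij}\). The specific parameterizations of \(F\), \(\varphi\), and \(\psi\) for each liquid metal are what the reviewed simulations fit and compare.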
Helicon normal modes in Proto-MPEX
NASA Astrophysics Data System (ADS)
Piotrowicz, P. A.; Caneses, J. F.; Green, D. L.; Goulding, R. H.; Lau, C.; Caughman, J. B. O.; Rapp, J.; Ruzic, D. N.
2018-05-01
The Proto-MPEX helicon source has been operating in a high electron density 'helicon-mode'. Establishing plasma densities and magnetic field strengths under the antenna that allow for the formation of normal modes of the fast-wave is believed to be responsible for the 'helicon-mode'. A 2D finite-element full-wave model of the helicon antenna on Proto-MPEX is used to identify the fast-wave normal modes responsible for the steady-state electron density profile produced by the source. We also show through the simulation that, in the regions of operation in which core power deposition is maximum, the slow-wave does not deposit significant power except directly under the antenna. In the case of a simulation where a normal mode is not excited, significant edge power is deposited in the mirror region.
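The coupling between density, field strength, and mode formation described above can be illustrated with the simplest helicon dispersion relation, ω = k k_z B0 / (μ0 e n), with the radial wavenumber fixed by the plasma column. The sketch below inverts it for the density that supports a given mode; the numerical values are assumed, Proto-MPEX-like figures, not parameters taken from the paper.

```python
import numpy as np

e_charge = 1.602e-19    # C
mu0 = 4e-7 * np.pi      # H/m

def helicon_density(f_hz, B0, a, kz):
    """Electron density satisfying the simple helicon dispersion
    relation omega = k*kz*B0/(mu0*e*n), with the perpendicular
    wavenumber set by the column radius a (k_perp ~ 3.83/a,
    the first zero of the Bessel function J1)."""
    omega = 2 * np.pi * f_hz
    k_perp = 3.83 / a
    k = np.hypot(k_perp, kz)          # total wavenumber
    return k * kz * B0 / (mu0 * e_charge * omega)

# Illustrative numbers (assumptions, not from the paper):
# 13.56 MHz drive, 0.1 T under the antenna, 6 cm plasma radius,
# axial wavelength of 0.5 m.
n = helicon_density(13.56e6, 0.1, 0.06, 2 * np.pi / 0.5)
```

For these inputs the required density lands in the 10^18 to 10^19 m^-3 range, the regime where helicon sources typically report the high-density mode.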
Kim, K B; Shanyfelt, L M; Hahn, D W
2006-01-01
Dense-medium scattering is explored in the context of providing a quantitative measurement of turbidity, with specific application to corneal haze. A multiple-wavelength scattering technique is proposed to make use of two-color scattering response ratios, thereby providing a means for data normalization. A combination of measurements and simulations is reported to assess this technique, including light-scattering experiments for a range of polystyrene suspensions. Monte Carlo (MC) simulations were performed using a multiple-scattering algorithm based on full Mie scattering theory. The simulations were in excellent agreement with the polystyrene suspension experiments, thereby validating the MC model. The MC model was then used to simulate multiwavelength scattering in a corneal tissue model. Overall, the proposed multiwavelength scattering technique appears to be a feasible approach to quantifying dense-medium scattering such as the manifestation of corneal haze, although more complex modeling of keratocyte scattering, as well as animal studies, will be necessary.
A mathematical and experimental simulation of the hematological response to weightlessness
NASA Technical Reports Server (NTRS)
Kimzey, S. L.; Leonard, J. I.; Johnson, P. C.
1979-01-01
A mathematical model of erythropoiesis control was used to simulate the effects of bedrest and zero-g on the circulating red cell mass. The model incorporates the best current understanding of the dynamics of red cell production and destruction and the associated feedback regulation. Specifically studied were the hematological responses of a 28-day bedrest study devised to simulate Skylab experience. The results support the hypothesis that red cell loss during supine bedrest is a normal physiological feedback process in response to hemoconcentration, enhanced tissue oxygenation, and suppression of red cell production. Model simulation suggested that this period was marked by some combination of increased oxygen-hemoglobin affinity, a small reduction in mean red cell life span, ineffective erythropoiesis, or abnormal reticulocytosis.
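The feedback loop the abstract describes, hemoconcentration raising tissue oxygenation, which suppresses an erythropoietin-like signal and hence red cell production, can be caricatured in a few lines. This is a deliberately minimal sketch with assumed functional forms and gains, not the NASA erythropoiesis model itself.

```python
import numpy as np

def simulate_red_cell_mass(days, gravity_shift=0.0, dt=0.1):
    """Minimal feedback sketch (assumed forms, not the flight model):
    production is driven by an erythropoietin-like signal that falls
    when tissue oxygenation rises; destruction is first-order with a
    ~120-day mean red cell life span."""
    rcm = 1.0                     # normalized red cell mass
    out = []
    for _t in np.arange(0, days, dt):
        oxygenation = rcm + gravity_shift          # hemoconcentration raises O2 delivery
        epo = max(0.0, 1.0 - 2.0 * (oxygenation - 1.0))  # suppressed by high O2
        production = epo / 120.0
        destruction = rcm / 120.0                  # 120-day life span
        rcm += dt * (production - destruction)
        out.append(rcm)
    return np.array(out)

bedrest = simulate_red_cell_mass(28, gravity_shift=0.05)  # supine hemoconcentration
control = simulate_red_cell_mass(28, gravity_shift=0.0)
```

With the assumed 5% oxygenation shift, the simulated red cell mass drifts slowly downward over the 28 days while the control case holds steady, reproducing the qualitative direction of the bedrest result.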
König, Matthias; Holzhütter, Hermann-Georg
2012-01-01
A major problem in the insulin therapy of patients with diabetes type 2 (T2DM) is the increased occurrence of hypoglycemic events which, if left untreated, may cause confusion or fainting and in severe cases seizures, coma, and even death. To elucidate the potential contribution of the liver to hypoglycemia in T2DM we applied a detailed kinetic model of human hepatic glucose metabolism to simulate changes in glycolysis, gluconeogenesis, and glycogen metabolism induced by deviations of the hormones insulin, glucagon, and epinephrine from their normal plasma profiles. Our simulations reveal in line with experimental and clinical data from a multitude of studies in T2DM, (i) significant changes in the relative contribution of glycolysis, gluconeogenesis, and glycogen metabolism to hepatic glucose production and hepatic glucose utilization; (ii) decreased postprandial glycogen storage as well as increased glycogen depletion in overnight fasting and short term fasting; and (iii) a shift of the set point defining the switch between hepatic glucose production and hepatic glucose utilization to elevated plasma glucose levels, respectively, in T2DM relative to normal, healthy subjects. Intriguingly, our model simulations predict a restricted gluconeogenic response of the liver under impaired hormonal signals observed in T2DM, resulting in an increased risk of hypoglycemia. The inability of hepatic glucose metabolism to effectively counterbalance a decline of the blood glucose level becomes even more pronounced in case of tightly controlled insulin treatment. Given this Janus face mode of action of insulin, our model simulations underline the great potential that normalization of the plasma glucagon profile may have for the treatment of T2DM. PMID:22977253
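The set point mentioned in point (iii), the plasma glucose level at which the liver switches from glucose production to glucose utilization, can be illustrated with a one-line stand-in for the full kinetic model. The sigmoid form and the 5.5 and 7.5 mM set points below are assumptions chosen only to show the direction of the shift, not values from the model.

```python
import numpy as np

def net_hepatic_glucose(glc, set_point):
    """Sign convention: positive = hepatic glucose production (HGP),
    negative = hepatic glucose utilization (HGU). A simple sigmoid in
    plasma glucose (mM) with an adjustable set point; an assumed form
    standing in for the detailed kinetic model."""
    return np.tanh(set_point - glc)   # zero crossing at the set point

glc = np.linspace(3, 10, 71)
normal = net_hepatic_glucose(glc, set_point=5.5)   # roughly normal fasting glucose
t2dm = net_hepatic_glucose(glc, set_point=7.5)     # set point shifted upward
```

At 6.5 mM the "healthy" curve is already negative (net uptake) while the shifted T2DM curve is still positive (net production), which is the qualitative consequence of the rightward set-point shift the simulations report.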
Simulation of late inspiratory rise in airway pressure during pressure support ventilation.
Yu, Chun-Hsiang; Su, Po-Lan; Lin, Wei-Chieh; Lin, Sheng-Hsiang; Chen, Chang-Wen
2015-02-01
Late inspiratory rise in airway pressure (LIRAP, Paw/ΔT) caused by inspiratory muscle relaxation or expiratory muscle contraction is frequently seen during pressure support ventilation (PSV), although the modulating factors are unknown. We investigated the effects of respiratory mechanics (normal, obstructive, restrictive, or mixed), inspiratory effort (-2, -8, or -15 cm H2O), flow cycle criteria (5-40% of peak inspiratory flow), and duration of inspiratory muscle relaxation (0.18-0.3 s) on LIRAP during PSV using a lung simulator and 4 types of ventilators. LIRAP occurred with all lung models when inspiratory effort was medium to high and the duration of inspiratory muscle relaxation was short. The normal lung model was associated with the fastest LIRAP, whereas the obstructive lung model was associated with the slowest. Unless lung mechanics were normal or mixed, LIRAP was unlikely to occur when inspiratory effort was low. Different ventilators were also associated with differences in LIRAP speed. Except in the restrictive lung model, changes in flow cycle level did not abolish LIRAP if inspiratory effort was medium to high. Increased duration of inspiratory relaxation also led to the elimination of LIRAP. Simulation of expiratory muscle contraction revealed that LIRAP occurred only when expiratory muscle contraction occurred sometime after the beginning of inspiration. Our simulation study reveals that both respiratory resistance and compliance may affect LIRAP. Except under restrictive lung conditions, LIRAP is unlikely to be abolished by simply lowering flow cycle criteria when inspiratory effort is strong and relaxation time is rapid. LIRAP may also be caused by expiratory muscle contraction occurring during inspiration. Copyright © 2015 by Daedalus Enterprises.
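The mechanism can be seen directly in the single-compartment equation of motion, Paw = R·flow + V/C - Pmus: if inspiratory muscle pressure Pmus is withdrawn abruptly while flow is still being delivered, airway pressure must rise late in the breath. The sketch below uses assumed waveforms and roughly normal adult mechanics, not the study's simulator settings.

```python
import numpy as np

def airway_pressure(flow, volume, pmus, R=10.0, C=0.05):
    """Single-compartment equation of motion of the respiratory system:
    Paw = R*flow + volume/C - Pmus (cm H2O, L/s, L).
    R and C are assumed, roughly normal adult values."""
    return R * flow + volume / C - pmus

t = np.linspace(0, 1.0, 101)
flow = 0.6 * np.exp(-3 * t)              # decelerating inspiratory flow (assumed)
volume = np.cumsum(flow) * (t[1] - t[0]) # inspired volume by integration
# Inspiratory effort that relaxes abruptly at t = 0.6 s
pmus = np.where(t < 0.6, 8.0, 8.0 * np.exp(-(t - 0.6) / 0.05))
paw = airway_pressure(flow, volume, pmus)
```

Before 0.6 s the muscle pressure holds Paw down; once Pmus decays with the short relaxation time, Paw jumps by several cm H2O late in inspiration, which is the LIRAP pattern the study characterizes.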
Rocha, B. M.; Toledo, E. M.; Barra, L. P. S.; dos Santos, R. Weber
2015-01-01
Heart failure is a major and costly public health problem which, in certain cases, may lead to death. The failing heart undergoes a series of electrical and structural changes that provide the underlying basis for disturbances such as arrhythmias. Computer models of the coupled electrical and mechanical activities of the heart can be used to advance our understanding of the complex feedback mechanisms involved. In this context, there is a lack of studies that consider heart failure remodeling using strongly coupled electromechanics. We present a strongly coupled electromechanical model to study the effects of deformation on a human left ventricle wedge considering normal and hypertrophic heart failure conditions. We demonstrate through a series of simulations that when a strongly coupled electromechanical model is used, deformation results in thickening of the ventricular wall, which in turn increases transmural dispersion of repolarization. These effects were analyzed in both normal and failing heart conditions. We also present transmural electrograms obtained from these simulations. Our results suggest that the waveform of electrograms, particularly the T-wave, is influenced by cardiac contraction in both normal and pathological conditions. PMID:26550570
Kinematic analysis of anterior cruciate ligament reconstruction in total knee arthroplasty
Liu, Hua-Wei; Ni, Ming; Zhang, Guo-Qiang; Li, Xiang; Chen, Hui; Zhang, Qiang; Chai, Wei; Zhou, Yong-Gang; Chen, Ji-Ying; Liu, Yu-Liang; Cheng, Cheng-Kung; Wang, Yan
2016-01-01
Background: This study aims to retain normal knee kinematics after knee replacement surgery by reconstructing the anterior cruciate ligament during total knee arthroplasty. Method: We used computational simulation tools to establish four dynamic knee models: a normal knee model, a posterior cruciate ligament-retaining knee model, a posterior cruciate ligament-substituting knee model, and an anterior cruciate ligament-reconstructing knee model. Our proposed method utilizes magnetic resonance images to reconstruct solid bones and the attachments of ligaments, and assembles the femoral and tibial components according to representative literature and operational specifications. Dynamic data on axial tibial rotation and femoral translation from full extension to 135° were measured to analyze the motion of the knee models. Findings: The computational simulation results show that, compared with the posterior cruciate ligament-retained and posterior cruciate ligament-substituted knee models, reconstructing the anterior cruciate ligament improves the posterior movement of the lateral condyle, the medial condyle, and tibial internal rotation through a full range of flexion. The maximum posterior translations of the lateral condyle and medial condyle and the maximum tibial internal rotation of the anterior cruciate ligament-reconstructed knee are 15.3 mm, 4.6 mm, and 20.6° at 135° of flexion. Interpretation: Reconstructing the anterior cruciate ligament in total knee arthroplasty proved to be a more efficient way of maintaining normal knee kinematics compared with posterior cruciate ligament-retained and posterior cruciate ligament-substituted total knee arthroplasty. PMID:27347334
A Hierarchical Poisson Log-Normal Model for Network Inference from RNA Sequencing Data
Gallopin, Mélina; Rau, Andrea; Jaffrézic, Florence
2013-01-01
Gene network inference from transcriptomic data is an important methodological challenge and a key aspect of systems biology. Although several methods have been proposed to infer networks from microarray data, there is a need for inference methods able to model RNA-seq data, which are count-based and highly variable. In this work we propose a hierarchical Poisson log-normal model with a Lasso penalty to infer gene networks from RNA-seq data; this model has the advantage of directly modelling discrete data and accounting for inter-sample variance larger than the sample mean. Using real microRNA-seq data from breast cancer tumors and simulations, we compare this method to a regularized Gaussian graphical model on log-transformed data, and a Poisson log-linear graphical model with a Lasso penalty on power-transformed data. For data simulated with large inter-sample dispersion, the proposed model performs better than the other methods in terms of sensitivity, specificity and area under the ROC curve. These results show the necessity of methods specifically designed for gene network inference from RNA-seq data. PMID:24147011
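The generative structure of the proposed model, a latent multivariate normal layer for log-expression (where the network lives, in the precision matrix) with conditionally Poisson counts on top, is easy to simulate, and doing so makes the claimed overdispersion concrete. The parameter values below are illustrative, not estimates from the breast cancer data.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_poisson_lognormal(n_samples, mean_log, cov_log):
    """Hierarchical Poisson log-normal draw: latent log-expression is
    multivariate normal; observed counts are conditionally Poisson.
    Parameters are illustrative, not fitted to any dataset."""
    z = rng.multivariate_normal(mean_log, cov_log, size=n_samples)
    return rng.poisson(np.exp(z))

# Two "genes" with correlated latent expression
counts = sample_poisson_lognormal(
    5000, mean_log=[2.0, 2.0],
    cov_log=[[0.5, 0.4], [0.4, 0.5]])
```

The marginal variance of each count column greatly exceeds its mean, exactly the "inter-sample variance larger than the sample mean" property of RNA-seq data that a pure Poisson model cannot capture.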
Space-flight simulations of calcium metabolism using a mathematical model of calcium regulation
NASA Technical Reports Server (NTRS)
Brand, S. N.
1985-01-01
The results of a series of simulation studies of the calcium metabolic changes recorded during human exposure to bed rest and space flight are presented. Space flight and bed rest data demonstrate losses of total body calcium during exposure to hypogravic environments. These losses are evidenced by higher than normal rates of urine calcium excretion and by negative calcium balances. In addition, intestinal absorption rates and bone mineral content are assumed to decrease. The bed rest and space flight simulations were executed on a mathematical model of the calcium metabolic system. The purpose of the simulations is to theoretically test hypotheses and predict system responses occurring during a given experimental stress, in this case hypogravity, through the comparison of simulation and experimental data and through the analysis of model structure and system responses. The model reliably simulates the responses of selected bed rest and space flight parameters. When experimental data are available, the simulated skeletal responses and the regulatory factors involved in those responses agree with space flight data collected on rodents. In addition, areas within the model that need improvement are identified.
Creating Simulated Microgravity Patient Models
NASA Technical Reports Server (NTRS)
Hurst, Victor; Doerr, Harold K.; Bacal, Kira
2004-01-01
The Medical Operational Support Team (MOST) has been tasked by the Space and Life Sciences Directorate (SLSD) at the NASA Johnson Space Center (JSC) to integrate medical simulation into 1) medical training for ground and flight crews and into 2) evaluations of medical procedures and equipment for the International Space Station (ISS). To do this, the MOST requires patient models that represent the physiological changes observed during spaceflight. Despite the presence of physiological data collected during spaceflight, there is no defined set of parameters that illustrate or mimic a 'space normal' patient. Methods: The MOST culled space-relevant medical literature and data from clinical studies performed in microgravity environments. The areas of focus for data collection were in the fields of cardiovascular, respiratory and renal physiology. Results: The MOST developed evidence-based patient models that mimic the physiology believed to be induced by human exposure to a microgravity environment. These models have been integrated into space-relevant scenarios using a human patient simulator and ISS medical resources. Discussion: Despite the lack of a set of physiological parameters representing 'space normal,' the MOST developed space-relevant patient models that mimic microgravity-induced changes in terrestrial physiology. These models are used in clinical scenarios that will medically train flight surgeons, biomedical flight controllers (biomedical engineers; BME) and, eventually, astronaut-crew medical officers (CMO).
NASA Astrophysics Data System (ADS)
Skaugen, Thomas; Weltzien, Ingunn H.
2016-09-01
Snow is an important and complicated element in hydrological modelling. The traditional catchment hydrological model, with its many free calibration parameters, also in snow sub-models, is not a well-suited tool for predicting conditions for which it has not been calibrated. Such conditions include prediction in ungauged basins and assessing the hydrological effects of climate change. In this study, a new model for the spatial distribution of snow water equivalent (SWE), parameterized solely from observed spatial variability of precipitation, is compared with the snow distribution model currently used in the operational flood forecasting models in Norway. The former uses a dynamic gamma distribution and is called Snow Distribution_Gamma (SD_G), whereas the latter has a fixed, calibrated coefficient of variation, which parameterizes a log-normal model for snow distribution, and is called Snow Distribution_Log-Normal (SD_LN). The two models are implemented in the parameter-parsimonious rainfall-runoff model Distance Distribution Dynamics (DDD), and their capability for predicting runoff, SWE, and snow-covered area (SCA) is tested and compared for 71 Norwegian catchments. The calibration period is 1985-2000 and the validation period is 2000-2014. Results show that SD_G better simulates SCA when compared with MODIS satellite-derived snow cover. In addition, SWE is simulated more realistically in that seasonal snow melts out, preventing the build-up of "snow towers" and the spurious positive trends in SWE typical of SD_LN. The precision of runoff simulations using SD_G is slightly inferior, with a reduction in the Nash-Sutcliffe and Kling-Gupta efficiency criteria of 0.01, but it is shown that the high precision in runoff prediction using SD_LN is accompanied by erroneous simulations of SWE.
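The SD_LN approach described above is fully specified by the catchment-mean SWE and a calibrated coefficient of variation: those two numbers fix the log-normal parameters. A short sketch of that parameterization, with illustrative values rather than calibrated Norwegian ones:

```python
import numpy as np

def lognormal_swe_params(mean_swe, cv):
    """SD_LN-style parameterization (sketch): a log-normal snow
    distribution fixed by the catchment-mean SWE and a calibrated
    coefficient of variation. Returns (mu, sigma) of log-SWE."""
    sigma2 = np.log(1.0 + cv**2)
    mu = np.log(mean_swe) - 0.5 * sigma2   # so that E[SWE] = mean_swe
    return mu, np.sqrt(sigma2)

mu, sigma = lognormal_swe_params(mean_swe=120.0, cv=0.6)  # mm; illustrative
rng = np.random.default_rng(0)
swe = rng.lognormal(mu, sigma, size=100_000)
```

Because the CV is fixed by calibration rather than driven by precipitation variability, the distribution's shape cannot adapt through the season, which is related to the "snow tower" artifact the study attributes to SD_LN.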
Dwisaptarini, A P; Suebnukarn, S; Rhienmora, P; Haddawy, P; Koontongkaew, S
This work presents a multilayered caries model with a visuo-tactile virtual reality simulator, and a randomized controlled trial protocol to determine the effectiveness of the simulator in training for minimally invasive caries removal. A three-dimensional, multilayered caries model was reconstructed from 10 micro-computed tomography (micro-CT) images of deeply carious extracted human teeth before and after caries removal. Within the full grey scale of 0-255, median grey-scale ranges of 0-9, 10-18, 19-25, 26-52, and 53-80 corresponded to dental pulp, infected carious dentin, affected carious dentin, normal dentin, and normal enamel, respectively. The simulator was connected to two haptic devices, for a handpiece and a mouth mirror. The visuo-tactile feedback during the operation varied depending on the grey scale. Sixth-year dental students underwent a pretraining assessment of caries removal on extracted teeth. The students were then randomly assigned to train on either the simulator (n=16) or conventional extracted teeth (n=16) for 3 days, after which the assessment was repeated. The posttraining performance of caries removal improved compared with pretraining in both groups (Wilcoxon, p<0.05). The equivalence test for proportional differences (two 1-sided t-tests) with a 0.2 margin confirmed that the participants in both groups had identical posttraining performance scores (95% CI=0.92, 1; p=0.00). In conclusion, training on the micro-CT multilayered caries model with the visuo-tactile virtual reality simulator and training on conventional extracted teeth had equivalent effects on improving the performance of minimally invasive caries removal.
A Computer Model of the Cardiovascular System for Effective Learning.
ERIC Educational Resources Information Center
Rothe, Carl F.
1980-01-01
Presents a model of the cardiovascular system which solves a set of interacting, possibly nonlinear, differential equations. Figures present a schematic diagram of the model and printouts that simulate normal conditions, exercise, hemorrhage, and reduced contractility. The nine interacting equations used to describe the system are described in the…
Mundt, Christian; Sventitskiy, Alexander; Cehelsky, Jeffrey E.; Patters, Andrea B.; Tservistas, Markus; Hahn, Michael C.; Juhl, Gerd; DeVincenzo, John P.
2012-01-01
Background. New aerosol drugs for infants may require more efficient delivery systems, including face masks. Maximizing delivery efficiency requires tight-fitting masks with minimal internal mask volumes, which could cause carbon dioxide (CO2) retention. An RNA-interference-based antiviral for treatment of respiratory syncytial virus in populations that may include young children is designed for aerosol administration. CO2 accumulation within inhalation face masks has not been evaluated. Methods. We simulated airflow and CO2 concentrations accumulating over time within a new face mask designed for infants and young children (PARI SMARTMASK® Baby). A one-dimensional model was first examined, followed by 3-dimensional unsteady computational fluid dynamics analyses. Normal infant breathing patterns and respiratory distress were simulated. Results. The maximum average modeled CO2 concentration within the mask reached steady state (3.2% and 3% for normal and distressed breathing patterns, respectively) after approximately the 5th respiratory cycle. After steady state, the mean CO2 concentration inspired into the nostril was 2.24% and 2.26% for normal and distressed breathing patterns, respectively. Conclusion. The mask is predicted to cause minimal CO2 retention and rebreathing. Infants with normal and distressed breathing should tolerate the mask intermittently delivering aerosols over brief time frames. PMID:22792479
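The one-dimensional analysis mentioned in the Methods can be caricatured as a well-mixed mask volume exchanging gas with the breath: exhaled gas at roughly 4% CO2 fills the mask during expiration, and fresh air dilutes it during inspiration. All geometry and breathing numbers below are assumptions for illustration, not the paper's mask or boundary conditions.

```python
import numpy as np

def mask_co2(t_end, vol_mask=60e-6, dt=1e-3):
    """Well-mixed mask-volume sketch (a crude one-dimensional analogue;
    the mask volume, tidal volume, and breathing rate are assumed).
    Square-wave breathing: exhaled gas at 4% CO2 enters the mask during
    expiration; near-ambient air enters during inspiration."""
    f_resp = 40 / 60.0            # infant breaths per second (assumed 40/min)
    vt = 30e-6                    # tidal volume, m^3 (assumed 30 mL)
    c = 0.0                       # CO2 fraction in the mask
    history = []
    for t in np.arange(0, t_end, dt):
        q = vt * 2 * f_resp       # mean flow magnitude over a half cycle
        exhaling = (t * f_resp) % 1.0 < 0.5
        c_in = 0.04 if exhaling else 0.0004   # exhaled vs ambient CO2
        c += dt * q / vol_mask * (c_in - c)   # well-mixed balance
        history.append(c)
    return np.array(history)

c = mask_co2(30.0)
```

The concentration climbs over the first few breaths and then plateaus at a few percent, qualitatively reproducing the "steady state after approximately the 5th respiratory cycle" behavior of the full CFD result.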
3D virtual human atria: A computational platform for studying clinical atrial fibrillation.
Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui
2011-10-01
Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria--the 3D virtual human atria. Primarily, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to mechanisms of the normal rhythm and arrhythmogenesis were investigated. Primarily, the simulations showed that tissue heterogeneity caused the break-down of the normal AP wave-fronts at rapid pacing rates, which initiated a pair of re-entrant spiral waves; and tissue anisotropy resulted in a further break-down of the spiral waves into multiple meandering wavelets characteristic of AF. The 3D virtual atria model itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. 
Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi-scale electrical phenomena during atrial conduction and AF arrhythmogenesis. Results of such simulations can be directly compared with electrophysiological and endocardial mapping data, as well as clinical ECG recordings. The virtual human atria can provide in-depth insights into 3D excitation propagation processes within atrial walls of a whole heart in vivo, which is beyond the current technical capabilities of experimental or clinical set-ups. Copyright © 2011 Elsevier Ltd. All rights reserved.
A normal stress subgrid-scale eddy viscosity model in large eddy simulation
NASA Technical Reports Server (NTRS)
Horiuti, K.; Mansour, N. N.; Kim, John J.
1993-01-01
The Smagorinsky subgrid-scale eddy viscosity model (SGS-EVM) is commonly used in large eddy simulations (LES) to represent the effects of the unresolved scales on the resolved scales. This model is known to be limited because its constant must be optimized in different flows, and it must be modified with a damping function to account for near-wall effects. The recent dynamic model is designed to overcome these limitations but is computationally intensive compared to the traditional SGS-EVM. In a recent study using direct numerical simulation data, Horiuti has shown that these drawbacks are due mainly to the use of an improper velocity scale in the SGS-EVM. He also proposed the use of the subgrid-scale normal stress as a new velocity scale, inspired by a high-order anisotropic representation model. The testing of Horiuti, however, was conducted using DNS data from a low Reynolds number channel flow simulation. Further testing at higher Reynolds numbers, and with flows other than wall-bounded shear flows, was a necessary step toward establishing the validity of the new model, and this is the primary motivation of the present study. The objective is to test the new model using DNS databases of high Reynolds number channel and fully developed turbulent mixing layer flows. The use of both channel (wall-bounded) and mixing layer flows is important for the development of accurate LES models because these two flows encompass many characteristic features of complex turbulent flows.
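For reference, the baseline SGS-EVM being improved upon computes the eddy viscosity as nu_t = (Cs·Δ)² |S| with |S| = sqrt(2 S_ij S_ij). A minimal 2D sketch of that formula (Cs = 0.17 is a commonly quoted value, but, as noted above, in practice the constant must be retuned per flow):

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.17):
    """Classic Smagorinsky SGS eddy viscosity on a 2D velocity-gradient
    sample: nu_t = (Cs*Delta)^2 * |S|, where |S| = sqrt(2 S_ij S_ij)
    and S_ij is the resolved strain-rate tensor."""
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta)**2 * s_mag

# Uniform shear du/dy = 1 on a grid of spacing 0.1: |S| = 1
nu_t = smagorinsky_nu_t(0.0, 1.0, 0.0, 0.0, delta=0.1)
```

Horiuti's proposal replaces the velocity scale Δ|S| implicit in this expression with one built from the subgrid-scale normal stress; the formula above is only the standard model that serves as the point of comparison.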
Wendell, David C.; Samyn, Margaret M.; Cava, Joseph R.; Ellwein, Laura M.; Krolikowski, Mary M.; Gandy, Kimberly L.; Pelech, Andrew N.; Shadden, Shawn C.; LaDisa, John F.
2012-01-01
Computational fluid dynamics (CFD) simulations quantifying thoracic aortic flow patterns have not included disturbances from the aortic valve (AoV). 80% of patients with aortic coarctation (CoA) have a bicuspid aortic valve (BAV) which may cause adverse flow patterns contributing to morbidity. Our objectives were to develop a method to account for the AoV in CFD simulations, and quantify its impact on local hemodynamics. The method developed facilitates segmentation of the AoV, spatiotemporal interpolation of segments, and anatomic positioning of segments at the CFD model inlet. The AoV was included in CFD model examples of a normal (tricuspid AoV) and a post-surgical CoA patient (BAV). Velocity, turbulent kinetic energy (TKE), time-averaged wall shear stress (TAWSS), and oscillatory shear index (OSI) results were compared to equivalent simulations using a plug inlet profile. The plug inlet greatly underestimated TKE for both examples. TAWSS differences extended throughout the thoracic aorta for the CoA BAV, but were limited to the arch for the normal example. OSI differences existed mainly in the ascending aorta for both cases. The impact of AoV can now be included with CFD simulations to identify regions of deleterious hemodynamics thereby advancing simulations of the thoracic aorta one step closer to reality. PMID:22917990
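The two wall-shear metrics compared above have standard definitions over one cardiac cycle: TAWSS = (1/T)∫|wss| dt and OSI = 0.5(1 - |∫wss dt| / ∫|wss| dt). A small sketch of both, applied to synthetic WSS vector histories (the waveforms are invented for illustration, not taken from the simulations):

```python
import numpy as np

def tawss_osi(wss, dt):
    """TAWSS and oscillatory shear index from a WSS vector time
    series of shape (T, 3) sampled over one cardiac cycle:
    TAWSS = (1/T) * integral |wss| dt,
    OSI = 0.5 * (1 - |integral wss dt| / integral |wss| dt)."""
    mag_int = np.sum(np.linalg.norm(wss, axis=1)) * dt
    vec_int = np.linalg.norm(np.sum(wss, axis=0) * dt)
    period = len(wss) * dt
    return mag_int / period, 0.5 * (1.0 - vec_int / mag_int)

t = np.linspace(0, 1, 100, endpoint=False)
# Purely unidirectional WSS: OSI should be 0
uni = np.column_stack([1 + 0.5 * np.sin(2 * np.pi * t),
                       np.zeros(100), np.zeros(100)])
# Fully reversing WSS: OSI should approach the maximum of 0.5
rev = np.column_stack([np.sin(2 * np.pi * t),
                       np.zeros(100), np.zeros(100)])
tawss_u, osi_u = tawss_osi(uni, 0.01)
tawss_r, osi_r = tawss_osi(rev, 0.01)
```

High OSI flags regions where the shear direction flips over the cycle, which is why the ascending aorta, where the valve jet's disturbances concentrate, shows the largest OSI differences between the plug and AoV inlets.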
DOE Office of Scientific and Technical Information (OSTI.GOV)
Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel
2014-01-15
We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octopoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal probability density functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, predicts PDFs of scalar concentration in agreement with numerical and experimental results. This model also indicates that the PDFs of scalar concentration are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.
Turbofan Engine Simulated in a Graphical Simulation Environment
NASA Technical Reports Server (NTRS)
Parker, Khary I.; Guo, Ten-Huei
2004-01-01
Recently, there has been an increase in the development of intelligent engine technology with advanced active component control. The computer engine models used in these control studies are component-level models (CLM): models that link individual component models of state-space and nonlinear algebraic equations, written in a computer language such as Fortran. The difficulty faced in performing control studies on Fortran-based models is that Fortran is not supported by control design and analysis tools, so there is no means for implementing real-time control. It is desirable to have a simulation environment that is straightforward, has modular graphical components, and allows easy access to health, control, and engine parameters through a graphical user interface. Such a tool should also provide the ability to convert a control design into real-time code, helping to make it an extremely powerful tool in control and diagnostic system development. [Figure: simulation time management, showing Mach number, power lever angle, altitude, ambient temperature change, and afterburner fuel flow versus time; controller and actuator dynamics; initial conditions; CAD output; and the component-level model with CLM sensor, CAD input, and model output.] The Controls and Dynamics Technologies Branch at the NASA Glenn Research Center has developed and demonstrated a flexible, generic turbofan engine simulation platform that can meet these objectives, known as the Modular Aero-Propulsion System Simulation (MAPSS). MAPSS is a Simulink-based implementation of a Fortran-based, modern, high pressure ratio, dual-spool, low-bypass, military-type variable-cycle engine with a digital controller. Simulink (The MathWorks, Natick, MA) is a computer-aided control design and simulation package that allows the graphical representation of dynamic systems in block diagram form.
MAPSS is a nonlinear, non-real-time system composed of controller and actuator dynamics (CAD) and component-level model (CLM) modules. The controller in the CAD module emulates the functionality of a digital controller, which has a typical update rate of 50 Hz. The CLM module simulates the dynamics of the engine components and uses an update rate of 2500 Hz, which is needed to iterate to balance mass and energy among system components. The actuators in the CAD module use the same sampling rate as those in the CLM. [Figure: two graphs of normalized spool speed versus time and one graph of normalized average metal temperature versus time.] MAPSS was validated via open-loop and closed-loop comparisons with the Fortran simulation. The preceding plots show the normalized results of a closed-loop comparison examining three states of the model: low-pressure spool speed, high-pressure spool speed, and the average metal temperature measured from the combustor to the high-pressure turbine. In steady state, the error between the simulations is less than 1 percent. During a transient, the difference between the simulations is due to a correction in MAPSS that prevents the gas flow in the bypass duct inlet from flowing forward instead of toward the aft end, as occurs in the Fortran simulation. A comparison between MAPSS and the Fortran model of the bypass duct inlet flow for power lever angles greater than 35 degrees is shown.
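The dual-rate structure described above, a 50 Hz digital controller wrapped around a 2500 Hz plant loop, can be sketched in a few lines. The first-order "engine" and integral controller below are stand-ins chosen for illustration; they are not the CLM or the MAPSS control law.

```python
import numpy as np

def dual_rate_sim(t_end, f_ctrl=50.0, f_plant=2500.0):
    """Sketch of a MAPSS-style dual-rate loop: a digital controller
    updating at f_ctrl Hz commands a plant integrated at f_plant Hz.
    The first-order plant and integral control law are assumptions."""
    dt = 1.0 / f_plant
    steps_per_ctrl = int(f_plant / f_ctrl)   # 50 plant steps per control step
    x, u, setpoint = 0.0, 0.0, 1.0
    tau = 0.5                                # plant time constant, s (assumed)
    xs = []
    for k in range(int(t_end * f_plant)):
        if k % steps_per_ctrl == 0:          # controller runs at 50 Hz
            u = u + 0.1 * (setpoint - x)     # simple integral action
        x += dt * (u - x) / tau              # plant integrates at 2500 Hz
        xs.append(x)
    return np.array(xs)

x = dual_rate_sim(10.0)
```

Holding the control command constant between 50 Hz updates while the plant steps at 2500 Hz is exactly the rate relationship between the CAD and CLM modules; the spool-speed-like state here settles to the setpoint within a few seconds.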
Krüger, Dennis M; Ahmed, Aqeel; Gohlke, Holger
2012-07-01
The NMSim web server implements a three-step approach for multiscale modeling of protein conformational changes. First, the protein structure is coarse-grained using the FIRST software. Second, a rigid cluster normal-mode analysis provides low-frequency normal modes. Third, these modes are used to extend the recently introduced idea of constrained geometric simulations by biasing backbone motions of the protein, whereas side chain motions are biased toward favorable rotamer states (NMSim). The generated structures are iteratively corrected regarding steric clashes and stereochemical constraint violations. The approach allows performing three simulation types: unbiased exploration of conformational space; pathway generation by a targeted simulation; and radius of gyration-guided simulation. On a data set of proteins with experimentally observed conformational changes, the NMSim approach has been shown to be a computationally efficient alternative to molecular dynamics simulations for conformational sampling of proteins. The generated conformations and pathways of conformational transitions can serve as input to docking approaches or more sophisticated sampling techniques. The web server output is a trajectory of generated conformations, Jmol representations of the coarse-graining and a subset of the trajectory and data plots of structural analyses. The NMSim webserver, accessible at http://www.nmsim.de, is free and open to all users with no login requirement.
Seaman, Clara; Akingba, A George; Sucosky, Philippe
2014-04-01
The bicuspid aortic valve (BAV), which forms with two leaflets instead of three as in the normal tricuspid aortic valve (TAV), is associated with a spectrum of secondary valvulopathies and aortopathies potentially triggered by hemodynamic abnormalities. While studies have demonstrated an intrinsic degree of stenosis and the existence of a skewed orifice jet in the BAV, the impact of those abnormalities on BAV hemodynamic performance and energy loss has not been examined. This steady-flow study presents the comparative in vitro assessment of the flow field and energy loss in a TAV and type-I BAV under normal and simulated calcified states. Particle-image velocimetry (PIV) measurements were performed to quantify velocity, vorticity, viscous, and Reynolds shear stress fields in normal and simulated calcified porcine TAV and BAV models at six flow rates spanning the systolic phase. The BAV model was created by suturing the two coronary leaflets of a porcine TAV. Calcification was simulated via deposition of glue beads in the base of the leaflets. Valvular performance was characterized in terms of geometric orifice area (GOA), pressure drop, effective orifice area (EOA), energy loss (EL), and energy loss index (ELI). The BAV generated an elliptical orifice and a jet skewed toward the noncoronary leaflet. In contrast, the TAV featured a circular orifice and a jet aligned along the valve long axis. While the BAV exhibited an intrinsic degree of stenosis (18% increase in maximum jet velocity and 7% decrease in EOA relative to the TAV at the maximum flow rate), it generated only a 3% increase in EL and its average ELI (2.10 cm2/m2) remained above the clinical threshold characterizing severe aortic stenosis. The presence of simulated calcific lesions normalized the alignment of the BAV jet and resulted in the loss of jet axisymmetry in the TAV. 
It also amplified the degree of stenosis in the TAV and BAV, as indicated by the 342% and 404% increase in EL, 70% and 51% reduction in ELI and 48% and 51% decrease in EOA, respectively, relative to the nontreated valve models at the maximum flow rate. This study indicates the ability of the BAV to function as a TAV despite its intrinsic degree of stenosis and suggests the weak dependence of pressure drop on orifice area in calcified valves.
An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.
2017-01-01
The simulation-optimization method entails a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of a MARS model can be improved by bootstrap aggregating, namely bagging. In this paper, the Bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate the groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by the statistical model developed using the BMARS algorithm. The surrogate model, which is fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate the global sensitivities of head outputs to input parameters, which are used to analyze their spatiotemporal importance for the model outputs. Only sensitive parameters are included in the calibration process to further improve the computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between the simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
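The NRMSE objective minimized during calibration can be sketched as follows. The paper does not specify its normalization, so division by the observed range is one common, assumed choice, and the head values are hypothetical:

```python
import math

def nrmse(observed, simulated):
    """Normalized root mean square error between measured and simulated heads.

    Normalized here by the range of the observations; the paper does not
    state its normalization, so this is one common choice."""
    n = len(observed)
    rmse = math.sqrt(sum((o - s) ** 2 for o, s in zip(observed, simulated)) / n)
    return rmse / (max(observed) - min(observed))

# Hypothetical heads (m) at four observation wells
obs = [102.0, 101.5, 100.8, 100.1]
sim = [101.8, 101.6, 100.5, 100.3]
print(round(nrmse(obs, sim), 4))
```

An optimizer would call `nrmse` on each candidate parameter set, with `simulated` produced by the BMARS surrogate rather than the full MODFLOW run.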
NASA Astrophysics Data System (ADS)
Wu, Donghai; Ciais, Philippe; Viovy, Nicolas; Knapp, Alan K.; Wilcox, Kevin; Bahn, Michael; Smith, Melinda D.; Vicca, Sara; Fatichi, Simone; Zscheischler, Jakob; He, Yue; Li, Xiangyi; Ito, Akihiko; Arneth, Almut; Harper, Anna; Ukkola, Anna; Paschalis, Athanasios; Poulter, Benjamin; Peng, Changhui; Ricciuto, Daniel; Reinthaler, David; Chen, Guangsheng; Tian, Hanqin; Genet, Hélène; Mao, Jiafu; Ingrisch, Johannes; Nabel, Julia E. S. M.; Pongratz, Julia; Boysen, Lena R.; Kautz, Markus; Schmitt, Michael; Meir, Patrick; Zhu, Qiuan; Hasibeder, Roland; Sippel, Sebastian; Dangal, Shree R. S.; Sitch, Stephen; Shi, Xiaoying; Wang, Yingping; Luo, Yiqi; Liu, Yongwen; Piao, Shilong
2018-06-01
Field measurements of aboveground net primary productivity (ANPP) in temperate grasslands suggest that both positive and negative asymmetric responses to changes in precipitation (P) may occur. Under a normal range of precipitation variability, wet years typically result in ANPP gains that are larger than the ANPP declines in dry years (positive asymmetry), whereas increases in ANPP are lower in magnitude in extreme wet years than reductions during extreme drought (negative asymmetry). Whether the current generation of ecosystem models with a coupled carbon-water system in grasslands is capable of simulating these asymmetric ANPP responses is an unresolved question. In this study, we evaluated the simulated responses of temperate grassland primary productivity to scenarios of altered precipitation with 14 ecosystem models at three sites: Shortgrass steppe (SGS), Konza Prairie (KNZ) and Stubai Valley meadow (STU), spanning a rainfall gradient from dry to moist. We found that (1) the spatial slopes derived from modeled primary productivity and precipitation across sites were steeper than the temporal slopes obtained from inter-annual variations, which was consistent with empirical data; (2) the asymmetry of the responses of modeled primary productivity under normal inter-annual precipitation variability differed among models, and the mean of the model ensemble suggested a negative asymmetry across the three sites, contrary to empirical evidence based on field observations; (3) the mean sensitivity of modeled productivity to rainfall suggested a greater negative response to reduced precipitation than positive response to increased precipitation under extreme conditions at the three sites; and (4) gross primary productivity (GPP), net primary productivity (NPP), aboveground NPP (ANPP) and belowground NPP (BNPP) all showed concave-down nonlinear responses to altered precipitation in all the models, but with different curvatures and mean values.
Our results indicated that most models overestimate the negative drought effects and/or underestimate the positive effects of increased precipitation on primary productivity under normal climate conditions, highlighting the need for improving eco-hydrological processes in those models in the future.
Logit-normal mixed model for Indian monsoon precipitation
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-09-01
Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described, and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found that GLMM estimation methods were sensitive to tuning parameters and assumptions; we therefore recommend the use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
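A minimal sketch of simulating from a logit-normal mixed model: binary wet/dry outcomes whose logit carries a group-level normal random effect. The parameter values and grouping structure are illustrative, not those fitted to the Indian monsoon data:

```python
import math
import random

def simulate_logit_normal(n_groups=5, n_per_group=100, beta0=-1.0, sigma=0.8, seed=7):
    """Draw binary rainfall indicators from a logit-normal mixed model:
    logit(p_ij) = beta0 + u_i, with group (e.g. station or year) effects
    u_i ~ Normal(0, sigma^2). All parameter values are illustrative."""
    rng = random.Random(seed)
    data = []
    for i in range(n_groups):
        u = rng.gauss(0.0, sigma)                      # group random effect
        p = 1.0 / (1.0 + math.exp(-(beta0 + u)))       # inverse logit
        for _ in range(n_per_group):
            data.append((i, int(rng.random() < p)))    # Bernoulli(p) outcome
    return data

obs = simulate_logit_normal()
print(f"simulated wet-day frequency: {sum(y for _, y in obs) / len(obs):.2f}")
```

Simulations of this kind are how candidate estimation algorithms can be vetted: fit each algorithm to data generated with known beta0 and sigma, then compare the recovered parameters.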
SICONID: a FORTRAN-77 program for conditional simulation in one dimension
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, E.; Chica-Olmo, M.; Delgado-García, J.
1992-07-01
The SICONID program, written in FORTRAN 77 for the conditional simulation of geological variables in one dimension, is presented. The program permits all the necessary steps to obtain a simulated series of the experimental data to be carried out. These steps are: acquisition of the experimental values, modelling of the anamorphosis function, variogram of the normal scores, conditional simulation, and restoration of the experimental histogram. A practical case, the simulation of the evolution of the groundwater level in a survey, is given to show the operation of the program.
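The normal-scores step (the Gaussian anamorphosis) can be sketched as a rank-based transform; SICONID's exact anamorphosis modelling is not reproduced here, and the plotting position used is one common choice:

```python
from statistics import NormalDist

def normal_scores(values):
    """Gaussian anamorphosis by ranks: map each datum to the standard-normal
    quantile of its empirical plotting position. The (rank + 0.5)/n position
    is one common choice; SICONID's fitted anamorphosis is not replicated."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])
    scores = [0.0] * n
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n              # plotting position in (0, 1)
        scores[i] = NormalDist().inv_cdf(p)
    return scores

levels = [12.1, 11.8, 13.4, 12.7, 11.5]   # hypothetical groundwater levels (m)
print([round(z, 3) for z in normal_scores(levels)])
```

The inverse of this mapping is what "restoration of the experimental histogram" performs after simulating in the Gaussian domain.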
Approximate Single-Diode Photovoltaic Model for Efficient I-V Characteristics Estimation
Ting, T. O.; Zhang, Nan; Guan, Sheng-Uei; Wong, Prudence W. H.
2013-01-01
Precise photovoltaic (PV) behavior models are normally described by nonlinear analytical equations. To solve such equations, it is necessary to use iterative procedures. Aiming to make the computation easier, this paper proposes an approximate single-diode PV model that enables high-speed predictions for the electrical characteristics of commercial PV modules. Based on the experimental data, statistical analysis is conducted to validate the approximate model. Simulation results show that the calculated current-voltage (I-V) characteristics fit the measured data with high accuracy. Furthermore, compared with the existing modeling methods, the proposed model reduces the simulation time by approximately 30% in this work. PMID:24298205
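For contrast with the explicit approximation the paper proposes, the implicit single-diode equation can be solved iteratively, for example by fixed-point iteration. All parameter values below are illustrative, not taken from the paper:

```python
import math

def single_diode_current(v, i_ph=8.0, i_0=1e-9, r_s=0.2, r_sh=300.0,
                         n=1.3, cells=60, v_t=0.02585, iters=50):
    """Solve the implicit single-diode equation
        I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    by fixed-point iteration. Parameter values are illustrative; the paper's
    contribution is an explicit approximation that avoids this iteration."""
    i = i_ph                               # start from the photocurrent
    for _ in range(iters):
        vd = v + i * r_s                   # diode junction voltage
        i = i_ph - i_0 * math.expm1(vd / (n * cells * v_t)) - vd / r_sh
    return i

# Short-circuit current is close to the photocurrent
print(round(single_diode_current(0.0), 3))
```

Each I-V point requires its own iteration, which is exactly the per-point cost an explicit approximate model removes.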
Wagar, Brandon M; Thagard, Paul
2004-01-01
The authors present a neurological theory of how cognitive information and emotional information are integrated in the nucleus accumbens during effective decision making. They describe how the nucleus accumbens acts as a gateway to integrate cognitive information from the ventromedial prefrontal cortex and the hippocampus with emotional information from the amygdala. The authors have modeled this integration by a network of spiking artificial neurons organized into separate areas and used this computational model to simulate 2 kinds of cognitive-affective integration. The model simulates successful performance by people with normal cognitive-affective integration. The model also simulates the historical case of Phineas Gage as well as subsequent patients whose ability to make decisions became impeded by damage to the ventromedial prefrontal cortex.
NASA Technical Reports Server (NTRS)
Turon, A.; Davila, C. G.; Camanho, P. P.; Costa, J.
2007-01-01
This paper presents a methodology to determine the parameters to be used in the constitutive equations of Cohesive Zone Models employed in the simulation of delamination in composite materials by means of decohesion finite elements. A closed-form expression is developed to define the stiffness of the cohesive layer. A novel procedure that allows the use of coarser meshes of decohesion elements in large-scale computations is also proposed. The procedure ensures that the energy dissipated by the fracture process is computed correctly. It is shown that coarse-meshed models defined using the approach proposed here yield the same results as the models with finer meshes normally used for the simulation of fracture processes.
Generating Nonnormal Multivariate Data Using Copulas: Applications to SEM.
Mair, Patrick; Satorra, Albert; Bentler, Peter M
2012-07-01
This article develops a procedure based on copulas to simulate multivariate nonnormal data that satisfy a prespecified variance-covariance matrix. The covariance matrix used can comply with a specific moment structure form (e.g., a factor analysis or a general structural equation model). Thus, the method is particularly useful for Monte Carlo evaluation of structural equation models within the context of nonnormal data. The new procedure for nonnormal data simulation is theoretically described and also implemented in the widely used R environment. The quality of the method is assessed by Monte Carlo simulations. A 1-sample test on the observed covariance matrix based on the copula methodology is proposed. This new test for evaluating the quality of a simulation is defined through a particular structural model specification and is robust against normality violations.
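The core copula construction can be sketched as follows: draw correlated standard normals, map them to uniforms through the normal CDF, then push them through nonnormal marginal inverse CDFs. The exponential marginals and correlation value are illustrative; the paper's method additionally calibrates the construction to a prespecified covariance matrix, which this sketch omits:

```python
import math
import random
from statistics import NormalDist

def gaussian_copula_sample(n, rho, seed=0):
    """Generate bivariate nonnormal data with a Gaussian copula: correlated
    standard normals -> uniforms via Phi -> Exponential(1) marginals.
    Marginals and rho are illustrative; no covariance calibration is done."""
    rng = random.Random(seed)
    nd = NormalDist()
    out = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho * rho) * rng.gauss(0, 1)
        u1, u2 = nd.cdf(z1), nd.cdf(z2)
        # inverse CDF of Exponential(1): -log(1 - u)
        out.append((-math.log(1 - u1), -math.log(1 - u2)))
    return out

sample = gaussian_copula_sample(2000, rho=0.8)
print(len(sample))
```

The dependence structure survives the marginal transforms, which is what makes copulas suitable for Monte Carlo studies of SEM estimators under controlled nonnormality.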
Log-normal frailty models fitted as Poisson generalized linear mixed models.
Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver
2016-12-01
The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known for decades. As shown in recent studies, this equivalence carries over to clustered survival data: a frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding in this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in the case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
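The data "explosion" step can be sketched as follows: each subject's follow-up is split into one row per piece of the piecewise-constant baseline hazard, carrying the piece's exposure time (whose log becomes the Poisson offset) and an event indicator. The interval cut points and records are hypothetical:

```python
def explode_survival(records, cut_points):
    """Explode survival records (time, event) into one row per hazard piece,
    as required to fit the frailty model as a Poisson GLMM. Each row holds
    (subject, piece index, exposure time in piece, event indicator); the
    event indicator is 1 only in the piece where an observed event occurs."""
    rows = []
    for subject, (time, event) in enumerate(records):
        lower = 0.0
        for k, upper in enumerate(cut_points):
            if time <= lower:
                break                     # follow-up ended before this piece
            exposure = min(time, upper) - lower
            died_here = 1 if (event == 1 and time <= upper) else 0
            rows.append((subject, k, exposure, died_here))
            lower = upper
    return rows

# Two hypothetical subjects: an event at t=3.5 and censoring at t=1.2
rows = explode_survival([(3.5, 1), (1.2, 0)], cut_points=[2.0, 4.0, 6.0])
for r in rows:
    print(r)
```

Fitting then amounts to a Poisson GLMM on `died_here` with `log(exposure)` as offset, piece as a factor, and a normal random intercept per cluster.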
NASA Astrophysics Data System (ADS)
Salinas, J. L.; Nester, T.; Komma, J.; Bloeschl, G.
2017-12-01
Generation of realistic synthetic spatial rainfall is of pivotal importance for assessing regional hydroclimatic hazard, as it provides the input for long-term rainfall-runoff simulations. The correct reproduction of observed rainfall characteristics, such as regional intensity-duration-frequency (IDF) curves and spatial and temporal correlations, is necessary to adequately model the magnitude and frequency of flood peaks, by reproducing antecedent soil moisture conditions before extreme rainfall events and the joint probability of flood waves at confluences. In this work, we present a modification of the model of Bardossy and Platte (1992), in which precipitation is first modeled on a station basis as a multivariate autoregressive (mAr) model in a Normal space. The spatial and temporal correlation structures are imposed in the Normal space, allowing for a different temporal autocorrelation parameter for each station while simultaneously ensuring the positive-definiteness of the correlation matrix of the mAr errors. The Normal rainfall is then transformed to a Gamma-distributed space, with parameters varying monthly according to a sinusoidal function, in order to reproduce the observed rainfall seasonality. One of the main differences from the original model is the simulation time step, which is reduced from 24 h to 6 h. Because daily rainfall data are more widely available than sub-daily (e.g., hourly) data, the parameters of the Gamma distributions are calibrated to reproduce simultaneously a series of daily rainfall characteristics (mean daily rainfall, standard deviation of daily rainfall, and 24-h IDF curves), as well as other aggregated rainfall measures (mean annual rainfall and monthly rainfall). The spatial and temporal correlation parameters are calibrated such that the catchment-averaged IDF curves aggregated at different temporal scales fit the measured ones.
The rainfall model is used to generate 10,000 years of synthetic precipitation, fed into a rainfall-runoff model to derive the flood frequency in the Tirolean Alps in Austria. Given the number of generated events, the simulation framework is able to generate a large variety of rainfall patterns, as well as reproduce the variograms of relevant extreme rainfall events in the region of interest.
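The Normal-to-Gamma transformation at the heart of the model can be sketched with the Wilson-Hilferty approximation to the Gamma quantile function; the paper does not state which transform it uses, and the sinusoidal seasonality constants below are invented for illustration:

```python
import math

def normal_to_gamma(z, shape, scale):
    """Map a standard-normal deviate (from the mAr model in Normal space)
    to a Gamma-distributed rainfall amount via the Wilson-Hilferty
    approximation to the Gamma quantile (adequate for shape >~ 0.5;
    the paper's exact transform is not specified)."""
    c = 1.0 / (9.0 * shape)
    return shape * scale * max(0.0, 1.0 - c + z * math.sqrt(c)) ** 3

def monthly_shape(month, mean_shape=1.2, amplitude=0.5):
    """Sinusoidal seasonality of the Gamma shape parameter, peaking
    mid-year; functional form and constants are illustrative."""
    return mean_shape + amplitude * math.sin(2.0 * math.pi * (month - 4) / 12.0)

# Median (z = 0) 6-h rainfall for July with an illustrative 3 mm scale
print(round(normal_to_gamma(0.0, monthly_shape(7), 3.0), 2))
```

Applying this mapping pointwise preserves the rank correlations imposed in the Normal space while giving the marginal Gamma distributions their seasonal cycle.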
NASA Astrophysics Data System (ADS)
Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio
2018-03-01
We present a computational model that describes the diffusion of a hard-sphere colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as the log-normal. This study could therefore facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics of the filtrate have been defined.
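A single-plane version of the size-selective filtration idea can be sketched by Monte Carlo: a particle passes if it meets a pore larger than itself, with both sizes drawn from normal distributions as in the study. This geometric rule is a gross simplification of the Brownian dynamics model:

```python
import random

def filtrate_fraction(particle_mu, particle_sigma, pore_mu, pore_sigma,
                      n=20000, seed=1):
    """Monte Carlo sketch of size-selective filtration through one plane:
    a particle passes if its diameter is smaller than that of the pore it
    encounters. Both diameters are normally distributed, as in the study;
    the single-plane rule ignores the multi-plane membrane and Brownian
    motion of the full model."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(n):
        d_particle = rng.gauss(particle_mu, particle_sigma)
        d_pore = rng.gauss(pore_mu, pore_sigma)
        if 0 < d_particle < d_pore:
            passed += 1
    return passed / n

# Feed slightly smaller than the pores on average: most particles pass
print(round(filtrate_fraction(0.8, 0.1, 1.0, 0.1), 3))
```

Repeating the draw for particles that pass would give the (truncated) filtrate size distribution, the quantity the study correlates with the feed and pore distributions.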
Millennial climatic fluctuations are key to the structure of last glacial ecosystems.
Huntley, Brian; Allen, Judy R M; Collingham, Yvonne C; Hickler, Thomas; Lister, Adrian M; Singarayer, Joy; Stuart, Anthony J; Sykes, Martin T; Valdes, Paul J
2013-01-01
Whereas fossil evidence indicates extensive treeless vegetation and diverse grazing megafauna in Europe and northern Asia during the last glacial, experiments combining vegetation models and climate models have to date simulated widespread persistence of trees. Resolving this conflict is key to understanding both last glacial ecosystems and the extinction of most of the mega-herbivores. Using a dynamic vegetation model (DVM) we explored the implications of the differing climatic conditions generated by a general circulation model (GCM) in "normal" and "hosing" experiments. Whilst the former approximate interstadial conditions, the latter, designed to mimic Heinrich Events, approximate stadial conditions. The "hosing" experiments gave simulated European vegetation much closer in composition to that inferred from fossil evidence than did the "normal" experiments. Given the short duration of interstadials, and the rate at which forest cover expanded during the late-glacial and early Holocene, our results demonstrate the importance of millennial variability in determining the character of last glacial ecosystems.
Radiation pattern of a borehole radar antenna
Ellefsen, K.J.; Wright, D.L.
2005-01-01
The finite-difference time-domain method was used to simulate radar waves that were generated by a transmitting antenna inside a borehole. The simulations were of four different models that included features such as a water-filled borehole and an antenna with resistive loading. For each model, radiation patterns for the far-field region were calculated. The radiation patterns show that the amplitude of the radar wave was strongly affected by its frequency, the water-filled borehole, the resistive loading of the antenna, and the external metal parts of the antenna (e.g., the cable head and the battery pack). For the models with a water-filled borehole, their normalized radiation patterns were practically identical to the normalized radiation pattern of a finite-length electric dipole when the wavelength in the formation was significantly greater than the total length of the radiating elements of the model antenna. The minimum wavelength at which this criterion was satisfied depended upon the features of the antenna, especially its external metal parts. © 2005 Society of Exploration Geophysicists. All rights reserved.
Kontis, Angelo L.
1999-01-01
The seaward limit of the fresh ground-water system underlying Kings and Queens Counties on Long Island, N.Y., is at the freshwater-saltwater transition zone. This zone has been conceptualized in transient-state, three-dimensional models of the aquifer system as a sharp interface between freshwater and saltwater, and represented as a stationary, zero-lateral-flow boundary. In this study, a pair of two-dimensional, four-layer ground-water flow models representing a generalized vertical section in Kings County and one in adjacent Queens County were developed to evaluate the validity of the boundary condition used in three-dimensional models of the aquifer system. The two-dimensional simulations used a model code that can simulate the movement of a sharp interface in response to transient stress. Sensitivity of interface movement to four factors was analyzed; these were (1) the method of simulating vertical leakage between freshwater and saltwater; (2) recharge at the normal rate, at 50-percent of the normal rate, and at zero for a prolonged (3-year) period; (3) high, medium, and low pumping rates; and (4) pumping from a hypothetical cluster of wells at two locations. Results indicate that the response of the interfaces to the magnitude and duration of pumping and the location of the hypothetical wells is probably sufficiently slow that the interfaces in three-dimensional models can reasonably be approximated as stationary, zero-lateral-flow boundaries.
Context, Cortex, and Dopamine: A Connectionist Approach to Behavior and Biology in Schizophrenia.
ERIC Educational Resources Information Center
Cohen, Jonathan D.; Servan-Schreiber, David
1992-01-01
Using a connectionist framework, it is possible to develop models exploring effects of biologically relevant variables on behavior. The ability of such models to explain schizophrenic behavior in terms of biological disturbances is considered, and computer models are presented that simulate normal and schizophrenic behavior in an attentional task.…
NASA Technical Reports Server (NTRS)
1979-01-01
The computer model for erythropoietic control was adapted to the mouse system by altering system parameters originally given for the human to those which more realistically represent the mouse. Parameter values were obtained from a variety of literature sources. Using the mouse model, the mouse was studied as a potential experimental model for spaceflight. Simulation studies of dehydration and hypoxia were performed. A comparison of system parameters for the mouse and human models is presented. Aside from the obvious differences expected in fluid volumes, blood flows and metabolic rates, larger differences were observed in the following: erythrocyte life span, erythropoietin half-life, and normal arterial pO2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petersson, N. Anders; Sjogreen, Bjorn
2017-04-18
Here, we develop a numerical method for simultaneously simulating acoustic waves in a realistic moving atmosphere and seismic waves in a heterogeneous earth model, where the motions are coupled across a realistic topography. We model acoustic wave propagation by solving the linearized Euler equations of compressible fluid mechanics. The seismic waves are modeled by the elastic wave equation in a heterogeneous anisotropic material. The motion is coupled by imposing continuity of normal velocity and normal stresses across the topographic interface. Realistic topography is resolved on a curvilinear grid that follows the interface. The governing equations are discretized using high-order accurate finite difference methods that satisfy the principle of summation by parts. We apply the energy method to derive the discrete interface conditions and to show that the coupled discretization is stable. The implementation is verified by numerical experiments, and we demonstrate a simulation of coupled wave propagation in a windy atmosphere and a realistic earth model with non-planar topography.
Ren, Lei; Howard, David; Ren, Luquan; Nester, Chris; Tian, Limei
2010-01-19
The objective of this paper is to develop an analytical framework for representing ankle-foot kinematics by modelling the foot as a rollover rocker, which can not only be used as a generic tool for general gait simulation but also allows for case-specific modelling if required. Previously, the rollover models used in gait simulation have often been based on specific functions of a simple form. In contrast, the analytical model described here is in a general form in which the effective foot rollover shape can be represented by any polar function rho = rho(phi). Furthermore, a normalized generic foot rollover model has been established based on a normative foot rollover shape dataset of 12 normal healthy subjects. To evaluate model accuracy, the predicted ankle motions and the centre of pressure (CoP) were compared with measurement data for both subject-specific and general cases. The results demonstrated that the ankle joint motions in both vertical and horizontal directions (relative RMSE approximately 10%) and the CoP (relative RMSE approximately 15% for most of the subjects) are accurately predicted over most of the stance phase (from 10% to 90% of stance). However, we found that the foot cannot be very accurately represented by a rollover model just after heel strike (HS) and just before toe off (TO), probably due to shear deformation of the foot plantar tissues (ankle motion can occur without any foot rotation). The proposed foot rollover model can be used in both inverse and forward dynamics gait simulation studies and may also find applications in rehabilitation engineering. Copyright 2009 Elsevier Ltd. All rights reserved.
A model of fluid and solute exchange in the human: validation and implications.
Bert, J L; Gyenge, C C; Bowen, B D; Reed, R K; Lund, T
2000-11-01
In order to better understand the complex, dynamic behaviour of the redistribution and exchange of fluid and solutes administered to normal individuals or to those with acute hypovolemia, mathematical models are used in addition to direct experimental investigation. Initial validation of a model developed by our group involved data from animal experiments (Gyenge, C.C., Bowen, B.D., Reed, R.K. & Bert, J.L. 1999b. Am J Physiol 277 (Heart Circ Physiol 46), H1228-H1240). For a first validation involving humans, we compare the results of simulations with a wide range of different types of data from two experimental studies. These studies involved administration of normal saline or hypertonic saline with Dextran to both normal and 10% haemorrhaged subjects. We compared simulations with data including the dynamic changes in plasma and interstitial fluid volumes (VPL and VIT, respectively), plasma and interstitial colloid osmotic pressures (πPL and πIT, respectively), haematocrit (Hct), plasma solute concentrations, and transcapillary flow rates. The model predictions were overall in very good agreement with the wide range of experimental results considered. Based on the conditions investigated, the model was also validated for humans. We used the model both to investigate mechanisms associated with the redistribution and transport of fluid and solutes administered following a mild haemorrhage and to speculate on the relationship between the timing and amount of fluid infusions and subsequent blood volume expansion.
Molecular cloud formation in high-shear, magnetized colliding flows
NASA Astrophysics Data System (ADS)
Fogerty, E.; Frank, A.; Heitsch, F.; Carroll-Nellenback, J.; Haig, C.; Adams, M.
2016-08-01
The colliding flows (CF) model is a well-supported mechanism for generating molecular clouds. However, to date most CF simulations have focused on the formation of clouds in the normal-shock layer between head-on colliding flows. We performed simulations of magnetized colliding flows that instead meet at an oblique-shock layer. Oblique shocks generate shear in the post-shock environment, and this shear creates inhospitable environments for star formation. As the degree of shear increases (i.e., as the obliquity of the shock increases), we find that it takes longer for sink particles to form, they form in lower numbers, and they tend to be less massive. With regard to magnetic fields, we find that even a weak field stalls gravitational collapse within forming clouds. Additionally, an initially oblique collision interface tends to reorient over time in the presence of a magnetic field, so that it becomes normal to the oncoming flows. This was demonstrated by our most oblique shock interface, which became fully normal by the end of the simulation.
Normal modes of weak colloidal gels
NASA Astrophysics Data System (ADS)
Varga, Zsigmond; Swan, James W.
2018-01-01
The normal modes and relaxation rates of weak colloidal gels are investigated in calculations using different models of the hydrodynamic interactions between suspended particles. The relaxation spectrum is computed for freely draining, Rotne-Prager-Yamakawa, and accelerated Stokesian dynamics approximations of the hydrodynamic mobility in a normal mode analysis of a harmonic network representing several colloidal gels. We find that the density of states and spatial structure of the normal modes are fundamentally altered by long-ranged hydrodynamic coupling among the particles. Short-ranged coupling due to hydrodynamic lubrication affects only the relaxation rates of short-wavelength modes. Hydrodynamic models accounting for long-ranged coupling exhibit a microscopic relaxation rate for each normal mode, λ, that scales as l^(-2), where l is the spatial correlation length of the normal mode. For the freely draining approximation, which neglects long-ranged coupling, the microscopic relaxation rate scales as l^(-γ), where γ varies between three and two with increasing particle volume fraction. A simple phenomenological model of the internal elastic response to normal mode fluctuations is developed, which shows that long-ranged hydrodynamic interactions play a central role in the viscoelasticity of the gel network. Dynamic simulations of hard spheres that gel in response to short-ranged depletion attractions are used to test the applicability of the density of states predictions. For particle concentrations up to 30% by volume, the power law decay of the relaxation modulus in simulations accounting for long-ranged hydrodynamic interactions agrees with predictions generated by the density of states of the corresponding harmonic networks as well as experimental measurements. For higher volume fractions, excluded volume interactions dominate the stress response, and the prediction from the harmonic network density of states fails.
Analogous to the Zimm model in polymer physics, our results indicate that long-ranged hydrodynamic interactions play a crucial role in determining the microscopic dynamics and macroscopic properties of weak colloidal gels.
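The mode-wavelength scaling described above can be illustrated with a toy harmonic network. The freely draining one-dimensional chain below is an editorial sketch, not the paper's gel model; the spring constant and drag coefficient are arbitrary. A uniform chain recovers the Rouse-like l^(-2) scaling exactly, whereas the fractal gel structures studied in the paper give freely draining exponents between two and three.

```python
import numpy as np

def chain_relaxation_rates(n, k_spring=1.0, zeta=1.0):
    """Relaxation rates of a freely draining 1D harmonic chain (toy network).

    Rates are the eigenvalues of M @ K, with M = I/zeta the freely draining
    mobility and K the nearest-neighbour spring stiffness matrix (free ends).
    """
    K = np.zeros((n, n))
    for i in range(n - 1):
        K[i, i] += k_spring
        K[i + 1, i + 1] += k_spring
        K[i, i + 1] -= k_spring
        K[i + 1, i] -= k_spring
    return np.sort(np.linalg.eigvalsh(K / zeta))

rates = chain_relaxation_rates(100)
# rates[0] ~ 0 is the rigid translation mode; rates[1] is the slowest
# internal mode, whose wavelength l spans the whole chain, so doubling the
# chain length should reduce it by roughly a factor of four (l**-2 scaling).
```

Doubling `n` and comparing the slowest internal rates verifies the l^(-2) behaviour numerically.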
[Establishment and validation of normal human L1-L5 lumbar three-dimensional finite element model].
Zhu, Zhenqi; Liu, Chenjun; Wang, Jiefu; Wang, Kaifeng; Huang, Zhixin; Wang, Weida; Liu, Haiying
2014-10-14
To create and validate an L1-L5 lumbar three-dimensional finite element model. The L1-L5 lumbar spines of a healthy male volunteer were scanned with computed tomography (CT), and an L1-L5 lumbar three-dimensional finite element model was created with the aid of the software packages Mimics, Geomagic and Ansys. Boundary conditions were set, the element type was determined, the finite element mesh was generated and the model was established for loading and calculation. Average model stiffness under flexion, extension, lateral bending and axial rotation was calculated and compared with the outcomes of previous articles for validation. A normal human L1-L5 lumbar three-dimensional finite element model was established, comprising 459 340 elements and 661 938 nodes. After constraining the inferior endplate of the L5 vertebral body, a 500 N compressive load was distributed evenly over the superior endplate of the L1 vertebral body. Moments of 10 N·m simulating flexion, extension, lateral bending and axial rotation were then imposed on the superior endplate of the L1 vertebral body. The average stiffness in all directions was calculated and found to be similar to the outcomes of previous articles. The L1-L5 lumbar three-dimensional finite element model is thus validated and may be used for biomechanical simulation and analysis of normal or surgical models.
On the role of radiation and dimensionality in predicting flow opposed flame spread over thin fuels
NASA Astrophysics Data System (ADS)
Kumar, Chenthil; Kumar, Amit
2012-06-01
In this work a flame-spread model is formulated in three dimensions to simulate opposed flow flame spread over thin solid fuels. The flame-spread model is coupled to a three-dimensional gas radiation model. The experiments [1] on downward spread and zero gravity quiescent spread over finite width thin fuel are simulated by flame-spread models in both two and three dimensions to assess the role of radiation and the effect of dimensionality on the prediction of the flame-spread phenomena. It is observed that while radiation plays only a minor role in normal gravity downward spread, in zero gravity quiescent spread surface radiation loss holds the key to correct prediction of the low oxygen flame spread rate and the quenching limit. The present three-dimensional simulations show that even in zero gravity gas radiation affects flame spread rate only moderately (as much as 20% at 100% oxygen) as the heat feedback effect exceeds the radiation loss effect only moderately. However, the two-dimensional model with the gas radiation model substantially over-predicts the zero gravity flame spread rate due to underestimation of gas radiation loss to the surroundings. The two-dimensional model was also found to be inadequate for correctly predicting zero gravity flame attributes such as flame length and flame width. A three-dimensional model was found to be indispensable for consistently describing the zero gravity flame-spread experiments [1] (including flame spread rate and flame size), especially at high oxygen levels (>30%). On the other hand, it was observed that for normal gravity downward flame spread at oxygen levels up to 60%, the two-dimensional model was sufficient to predict flame spread rate and flame size reasonably well. Gas radiation is seen to increase the three-dimensional effect especially at elevated oxygen levels (>30% for zero gravity and >60% for normal gravity flames).
Identification of nonlinear normal modes of engineering structures under broadband forcing
NASA Astrophysics Data System (ADS)
Noël, Jean-Philippe; Renson, L.; Grappasonni, C.; Kerschen, G.
2016-06-01
The objective of the present paper is to develop a two-step methodology integrating system identification and numerical continuation for the experimental extraction of nonlinear normal modes (NNMs) under broadband forcing. The first step processes acquired input and output data to derive an experimental state-space model of the structure. The second step converts this state-space model into a model in modal space from which NNMs are computed using shooting and pseudo-arclength continuation. The method is demonstrated using noisy synthetic data simulated on a cantilever beam with a hardening-softening nonlinearity at its free end.
NASA Technical Reports Server (NTRS)
Pham-Van-diep, Gerald C.; Erwin, Daniel A.
1989-01-01
Velocity distribution functions in normal shock waves in argon and helium are calculated using Monte Carlo direct simulation. These are compared with experimental results for argon at M = 7.18 and for helium at M = 1.59 and 20. For both argon and helium, the variable-hard-sphere (VHS) model is used for the elastic scattering cross section, with the velocity dependence derived from a viscosity-temperature power-law relationship in the way normally used by Bird (1976).
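A standard sanity check on such DSMC shock simulations is that the upstream and downstream equilibrium states satisfy the Rankine-Hugoniot jump conditions. The helper below evaluates them for a monatomic ideal gas (argon, γ = 5/3) at the M = 7.18 condition quoted above; it is a textbook relation, not part of the Monte Carlo method itself.

```python
def normal_shock_ratios(mach, gamma=5.0 / 3.0):
    """Rankine-Hugoniot jump conditions across a normal shock in an ideal gas.

    Returns (density, pressure, temperature) ratios, downstream/upstream.
    """
    m2 = mach * mach
    density = (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)
    pressure = (2.0 * gamma * m2 - (gamma - 1.0)) / (gamma + 1.0)
    # T2/T1 = (p2/p1) * (rho1/rho2) by the ideal gas law
    temperature = pressure * ((gamma - 1.0) * m2 + 2.0) / ((gamma + 1.0) * m2)
    return density, pressure, temperature

# Argon at the M = 7.18 condition of the abstract
rho_ratio, p_ratio, t_ratio = normal_shock_ratios(7.18)
```

For a monatomic gas the density ratio approaches but never exceeds the strong-shock limit (γ+1)/(γ-1) = 4.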
Some Aspects of Advanced Tokamak Modeling in DIII-D
NASA Astrophysics Data System (ADS)
St John, H. E.; Petty, C. C.; Murakami, M.; Kinsey, J. E.
2000-10-01
We extend previous work (M. Murakami, et al., General Atomics Report GA-A23310 (1999)) done on time dependent DIII-D advanced tokamak simulations by introducing theoretical confinement models rather than relying on power balance derived transport coefficients. We explore using NBCD and off axis ECCD together with a self-consistent aligned bootstrap current, driven by the internal transport barrier dynamics generated with the GLF23 confinement model, to shape the hollow current profile and to maintain MHD stable conditions. Our theoretical modeling approach uses measured DIII-D initial conditions to start the simulations in a smooth, consistent manner. This mitigates the troublesome long-lived perturbations in the ohmic current profile that are normally caused by inconsistent initial data. To achieve this goal our simulation uses a sequence of time dependent eqdsks generated autonomously by the EFIT MHD equilibrium code in analyzing experimental data to supply the history for the simulation.
ERIC Educational Resources Information Center
Hamaker, Ellen L.; Dolan, Conor V.; Molenaar, Peter C. M.
2003-01-01
Demonstrated, through simulation, that stationary autoregressive moving average (ARMA) models may be fitted readily when T>N, using normal theory raw maximum likelihood structural equation modeling. Also provides some illustrations based on real data. (SLD)
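The T>N setting can be mimicked in a minimal way with a single long simulated series. The AR(1) sketch below uses hypothetical parameters and plain lag-1 OLS rather than the structural equation modeling machinery of the study; it simply shows that the autoregressive coefficient is recovered when T is large.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, t, sigma=1.0):
    """Simulate an AR(1) series x_t = phi * x_{t-1} + e_t of length t."""
    x = np.zeros(t)
    for i in range(1, t):
        x[i] = phi * x[i - 1] + sigma * rng.standard_normal()
    return x

def fit_ar1(x):
    """OLS estimate of phi from the lag-1 regression (no intercept)."""
    return float(np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1]))

x = simulate_ar1(0.6, 2000)   # illustrative: T = 2000 time points, N = 1 subject
phi_hat = fit_ar1(x)
```

With T = 2000 the standard error of the estimate is roughly sqrt((1-φ²)/T) ≈ 0.018, so the recovered coefficient lies close to the true 0.6.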
Liu, Yi-Wen; Neely, Stephen T.
2013-01-01
This paper presents the results of simulating the acoustic suppression of distortion-product otoacoustic emissions (DPOAEs) from a computer model of cochlear mechanics. A tone suppressor was introduced, causing the DPOAE level to decrease, and the decrement was plotted against an increasing suppressor level. Suppression threshold was estimated from the resulting suppression growth functions (SGFs), and suppression tuning curves (STCs) were obtained by plotting the suppression threshold as a function of suppressor frequency. Results show that the slope of SGFs is generally higher for low-frequency suppressors than high-frequency suppressors, resembling those obtained from normal hearing human ears. By comparing responses of normal (100%) vs reduced (50%) outer-hair-cell sensitivities, the model predicts that the tip-to-tail difference of the STCs correlates well with that of intra-cochlear iso-displacement tuning curves. The correlation is poorer, however, between the sharpness of the STCs and that of the intra-cochlear tuning curves. These results agree qualitatively with what was recently reported from normal-hearing and hearing-impaired human subjects, and examination of intra-cochlear model responses can provide the needed insight regarding the interpretation of DPOAE STCs obtained in individual ears. PMID:23363112
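The step from a suppression growth function to a suppression threshold is essentially an interpolation. The helper below is an illustrative sketch with a hypothetical 3-dB criterion and made-up suppressor levels, and it assumes the decrement grows monotonically with suppressor level, which is what the SGFs described above exhibit.

```python
import numpy as np

def suppression_threshold(levels_db, decrements_db, criterion_db=3.0):
    """Suppressor level at which the DPOAE decrement reaches the criterion.

    Linear interpolation along the suppression growth function; assumes the
    decrement increases monotonically with suppressor level.
    """
    levels = np.asarray(levels_db, float)
    decr = np.asarray(decrements_db, float)
    return float(np.interp(criterion_db, decr, levels))

# Hypothetical SGF: decrement (dB) measured at four suppressor levels (dB SPL)
threshold = suppression_threshold([40.0, 50.0, 60.0, 70.0],
                                  [0.0, 2.0, 6.0, 12.0])
```

Repeating this across suppressor frequencies and plotting the thresholds yields a suppression tuning curve of the kind analyzed in the abstract.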
Numerical schemes for anomalous diffusion of single-phase fluids in porous media
NASA Astrophysics Data System (ADS)
Awotunde, Abeeb A.; Ghanam, Ryad A.; Al-Homidan, Suliman S.; Tatar, Nasser-eddine
2016-10-01
Simulation of fluid flow in porous media is an indispensable part of oil and gas reservoir management. Accurate prediction of reservoir performance and profitability of investment rely on our ability to model the flow behavior of reservoir fluids. Over the years, numerical reservoir simulation models have been based mainly on solutions to the normal diffusion of fluids in the porous reservoir. Recently, however, it has been documented that fluid flow in porous media does not always follow strictly the normal diffusion process. Small deviations from normal diffusion, called anomalous diffusion, have been reported in some experimental studies. Such deviations can be caused by different factors such as the viscous state of the fluid, the fractal nature of the porous media and the pressure pulse in the system. In this work, we present explicit and implicit numerical solutions to the anomalous diffusion of single-phase fluids in heterogeneous reservoirs. An analytical solution is used to validate the numerical solution to the simple homogeneous case. The conventional wellbore flow model is modified to account for anomalous behavior. Example applications are used to show the behavior of wellbore and wellblock pressures during the single-phase anomalous flow of fluids in the reservoirs considered.
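An explicit scheme of the kind described can be sketched with Grünwald-Letnikov weights for a Riemann-Liouville-type time-fractional derivative. The one-dimensional code below uses arbitrary parameters and is not the authors' reservoir model; for α = 1 it reduces exactly to the classical forward-time centered-space (FTCS) scheme for normal diffusion.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights g_k via the recursion g_k = g_{k-1}(1-(1+alpha)/k)."""
    g = np.empty(n + 1)
    g[0] = 1.0
    for k in range(1, n + 1):
        g[k] = g[k - 1] * (1.0 - (1.0 + alpha) / k)
    return g

def fractional_diffusion(u0, alpha, D, dx, dt, steps):
    """Explicit scheme for d^alpha u/dt^alpha = D u_xx with zero Dirichlet ends.

    The full solution history enters through the memory sum, which is the
    hallmark of anomalous (non-Markovian) diffusion; alpha = 1 recovers FTCS.
    """
    g = gl_weights(alpha, steps + 1)
    hist = [np.asarray(u0, float).copy()]
    c = D * dt**alpha / dx**2
    for n in range(steps):
        u = hist[-1]
        lap = np.zeros_like(u)
        lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
        memory = sum(g[k] * hist[n + 1 - k] for k in range(1, n + 2))
        new = c * lap - memory
        new[0] = new[-1] = 0.0
        hist.append(new)
    return hist[-1]
```

With α = 1 the decay of a sine mode can be checked against the analytic solution u(x,t) = sin(πx) exp(-Dπ²t), which validates the normal-diffusion limit of the scheme.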
Stender, Michael E; Regueiro, Richard A; Ferguson, Virginia L
2017-02-01
The changes experienced in synovial joints with osteoarthritis involve coupled chemical, biological, and mechanical processes. The aim of this study was to investigate the consequences of increasing permeability in articular cartilage (AC), calcified cartilage (CC), subchondral cortical bone (SCB), and subchondral trabecular bone (STB) as observed with osteoarthritis. Two poroelastic finite element models were developed using a depth-dependent anisotropic model of AC with strain-dependent permeability and poroelastic models of calcified tissues (CC, SCB, and STB). The first model simulated a bone-cartilage unit (BCU) in uniaxial unconfined compression, while the second model simulated spherical indentation of the AC surface. Results indicate that the permeability of AC is the primary determinant of the BCU's poromechanical response while the permeability of calcified tissues exerts no appreciable effect on the force-indentation response of the BCU. In spherical indentation simulations with osteoarthritic permeability properties, fluid velocities were larger in magnitude and distributed over a smaller area compared to normal tissues. In vivo, this phenomenon would likely lead to chondrocyte death, tissue remodeling, alterations in joint lubrication, and the progression of osteoarthritis. For osteoarthritic and normal tissue permeability values, fluid flow was predicted to occur across the osteochondral interface. These results help elucidate the consequences of increases in the permeability of the BCU that occur with osteoarthritis. Furthermore, this study may guide future treatments to counteract osteoarthritis.
NASA Technical Reports Server (NTRS)
Yoshida, Y.; Joiner, J.; Tucker, C.; Berry, J.; Lee, J. -E.; Walker, G.; Reichle, R.; Koster, R.; Lyapustin, A.; Wang, Y.
2015-01-01
We examine satellite-based measurements of chlorophyll solar-induced fluorescence (SIF) over the region impacted by the Russian drought and heat wave of 2010. Like the popular Normalized Difference Vegetation Index (NDVI) that has been used for decades to measure photosynthetic capacity, SIF measurements are sensitive to the fraction of absorbed photosynthetically-active radiation (fPAR). However, in addition, SIF is sensitive to the fluorescence yield that is related to the photosynthetic yield. Both SIF and NDVI from satellite data show drought-related declines early in the growing season in 2010 as compared to other years between 2007 and 2013 for areas dominated by crops and grasslands. This suggests an early manifestation of the dry conditions on fPAR. We also simulated SIF using a global land surface model driven by observation-based meteorological fields. The model provides a reasonable simulation of the drought and heat impacts on SIF in terms of the timing and spatial extents of anomalies, but there are some differences between modeled and observed SIF. The model may potentially be improved through data assimilation or parameter estimation using satellite observations of SIF (as well as NDVI). The model simulations also offer the opportunity to examine separately the different components of the SIF signal and relationships with Gross Primary Productivity (GPP).
Large-scale 3D modeling of projectile impact damage in brittle plates
NASA Astrophysics Data System (ADS)
Seagraves, A.; Radovitzky, R.
2015-10-01
The damage and failure of brittle plates subjected to projectile impact is investigated through large-scale three-dimensional simulation using the DG/CZM approach introduced by Radovitzky et al. [Comput. Methods Appl. Mech. Eng. 2011; 200(1-4), 326-344]. Two standard experimental setups are considered: first, we simulate edge-on impact experiments on Al2O3 tiles by Strassburger and Senf [Technical Report ARL-CR-214, Army Research Laboratory, 1995]. Qualitative and quantitative validation of the simulation results is pursued by direct comparison of simulations with experiments at different loading rates and good agreement is obtained. In the second example considered, we investigate the fracture patterns in normal impact of spheres on thin, unconfined ceramic plates over a wide range of loading rates. For both the edge-on and normal impact configurations, the full field description provided by the simulations is used to interpret the mechanisms underlying the crack propagation patterns and their strong dependence on loading rate.
Pore-scale modeling of saturated permeabilities in random sphere packings.
Pan, C; Hilpert, M; Miller, C T
2001-12-01
We use two pore-scale approaches, lattice-Boltzmann (LB) and pore-network modeling, to simulate single-phase flow in simulated sphere packings that vary in porosity and sphere-size distribution. For both modeling approaches, we determine the size of the representative elementary volume with respect to the permeability. Permeabilities obtained by LB modeling agree well with Rumpf and Gupte's experiments in sphere packings for small Reynolds numbers. The LB simulations agree well with the empirical Ergun equation for intermediate but not for small Reynolds numbers. We suggest a modified form of Ergun's equation to describe both low and intermediate Reynolds number flows. The pore-network simulations agree well with predictions from the effective-medium approximation but underestimate the permeability due to the simplified representation of the porous media. Based on LB simulations in packings with log-normal sphere-size distributions, we suggest a permeability relation with respect to the porosity, as well as the mean and standard deviation of the sphere diameter.
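The viscous term of the Ergun equation implies the familiar Kozeny-Carman permeability, which corresponds to the low-Reynolds-number limit discussed above. The sketch below uses illustrative water-like fluid properties and an arbitrary sphere diameter; it is not the modified Ergun form proposed in the paper.

```python
def ergun_pressure_gradient(v, eps, d, mu=1e-3, rho=1e3):
    """Ergun equation: pressure gradient (Pa/m) at superficial velocity v (m/s).

    eps is porosity, d the sphere diameter (m); mu, rho are water-like defaults.
    """
    viscous = 150.0 * mu * (1.0 - eps) ** 2 / (eps**3 * d**2) * v
    inertial = 1.75 * rho * (1.0 - eps) / (eps**3 * d) * v**2
    return viscous + inertial

def kozeny_carman_permeability(eps, d):
    """Permeability implied by the viscous (low Reynolds number) Ergun term."""
    return eps**3 * d**2 / (150.0 * (1.0 - eps) ** 2)
```

At small velocities the apparent permeability μv/(Δp/L) extracted from the Ergun equation converges to the Kozeny-Carman value, while the quadratic inertial term produces the intermediate-Reynolds-number deviation the abstract describes.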
[Adaptability of APSIM model in Southwestern China: A case study of winter wheat in Chongqing City].
Dai, Tong; Wang, Jing; He, Di; Zhang, Jian-ping; Wang, Na
2015-04-01
Field experimental data of winter wheat and parallel daily meteorological data at four typical stations in Chongqing City were used to calibrate and validate the APSIM-wheat model and determine the genetic parameters for 12 varieties of winter wheat. The results showed that there was good agreement between the simulated and observed growth periods from sowing to emergence, flowering and maturity of wheat. Root mean squared errors (RMSEs) between simulated and observed emergence, flowering and maturity were 0-3, 1-8, and 0-8 d, respectively. Normalized root mean squared errors (NRMSEs) between simulated and observed above-ground biomass were less than 30% for all 12 study varieties. NRMSEs between simulated and observed yields were less than 30% for 10 of the 12 study varieties. The APSIM-wheat model performed well in simulating phenology, above-ground biomass and yield of winter wheat in Chongqing City, and could provide foundational support for assessing the impact of climate change on wheat production in the study area.
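The RMSE and NRMSE criteria used above are straightforward to compute. The sketch below uses hypothetical yield values; NRMSE is normalized by the observed mean and expressed as a percentage, the convention under which the 30% acceptance level is usually stated.

```python
import numpy as np

def rmse(sim, obs):
    """Root mean squared error between simulated and observed values."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nrmse(sim, obs):
    """RMSE normalized by the observed mean, as a percentage."""
    return 100.0 * rmse(sim, obs) / float(np.mean(obs))

# Hypothetical observed vs simulated yields (kg/ha) for one variety
obs = [5200.0, 4800.0, 5100.0]
sim = [5000.0, 5000.0, 5300.0]
```

For these illustrative numbers the RMSE is 200 kg/ha and the NRMSE is about 4%, well inside the 30% threshold used in the study.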
Sunrise/sunset thermal shock disturbance analysis and simulation for the TOPEX satellite
NASA Technical Reports Server (NTRS)
Dennehy, C. J.; Welch, R. V.; Zimbelman, D. F.
1990-01-01
It is shown here that during normal on-orbit operations the TOPEX low-earth orbiting satellite is subjected to an impulsive disturbance torque caused by rapid heating of its solar array when entering and exiting the earth's shadow. Error budgets and simulation results are used to demonstrate that this sunrise/sunset torque disturbance is the dominant Normal Mission Mode (NMM) attitude error source. The detailed thermomechanical modeling, analysis, and simulation of this torque is described, and the predicted on-orbit performance of the NMM attitude control system in the face of the sunrise/sunset disturbance is presented. The disturbance results in temporary attitude perturbations that exceed NMM pointing requirements. However, they are below the maximum allowable pointing error which would cause the radar altimeter to break lock.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, C. D.; Kemp, A. J.; Pérez, F.
2013-05-15
A 2-D multi-stage simulation model incorporating realistic laser conditions and a fully resolved electron distribution handoff has been developed and compared to angularly and spectrally resolved Bremsstrahlung measurements from high-Z planar targets. For near-normal incidence and 0.5-1 × 10^20 W/cm^2 intensity, particle-in-cell (PIC) simulations predict the existence of a high energy electron component consistently directed away from the laser axis, in contrast with previous expectations for oblique irradiation. Measurements of the angular distribution are consistent with a high energy component when directed along the PIC predicted direction, as opposed to between the target normal and laser axis as previously measured.
Simple liquid models with corrected dielectric constants
Fennell, Christopher J.; Li, Libo; Dill, Ken A.
2012-01-01
Molecular simulations often use explicit-solvent models. Sometimes explicit-solvent models can give inaccurate values for basic liquid properties, such as the density, heat capacity, and permittivity, as well as inaccurate values for molecular transfer free energies. Such errors have motivated the development of more complex solvents, such as polarizable models. We describe an alternative here. We give new fixed-charge models of solvents for molecular simulations – water, carbon tetrachloride, chloroform and dichloromethane. Normally, such solvent models are parameterized to agree with experimental values of the neat liquid density and enthalpy of vaporization. Here, in addition to those properties, our parameters are chosen to give the correct dielectric constant. We find that these new parameterizations also happen to give better values for other properties, such as the self-diffusion coefficient. We believe that parameterizing fixed-charge solvent models to fit experimental dielectric constants may provide better and more efficient ways to treat solvents in computer simulations. PMID:22397577
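The dielectric constant targeted in such parameterizations is typically estimated from equilibrium fluctuations of the system's total dipole moment. The sketch below evaluates the standard fluctuation formula on synthetic dipole samples; the volume, temperature, and fluctuation magnitude are illustrative, and conducting (tin-foil) boundary conditions are assumed.

```python
import numpy as np

KB = 1.380649e-23        # Boltzmann constant, J/K
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def dielectric_from_dipoles(M, volume, temperature):
    """Static dielectric constant from total-dipole fluctuations.

    Uses eps_r = 1 + (<M^2> - <M>^2) / (3 eps0 V kB T), valid under
    conducting boundary conditions; M has shape (n_frames, 3) in C*m.
    """
    M = np.asarray(M, float)
    fluct = np.mean(np.sum(M**2, axis=1)) - np.sum(np.mean(M, axis=0) ** 2)
    return 1.0 + fluct / (3.0 * EPS0 * volume * KB * temperature)

rng = np.random.default_rng(1)
V, T = 1.0e-26, 300.0                  # illustrative box volume (m^3) and temperature (K)
s = np.sqrt(77.0 * EPS0 * V * KB * T)  # fluctuation scale chosen so eps_r ~ 78
M_samples = s * rng.standard_normal((100000, 3))
eps_r = dielectric_from_dipoles(M_samples, V, T)
```

In an actual simulation the `M_samples` array would come from trajectory frames of the solvent model; here it is Gaussian noise constructed to reproduce a water-like permittivity.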
NASA Astrophysics Data System (ADS)
Ethier, Marie-Pier; Bussière, Bruno; Broda, Stefan; Aubertin, Michel
2018-01-01
The Manitou Mine sulphidic-tailings storage facility No. 2, near Val D'Or, Canada, was reclaimed in 2009 by elevating the water table and applying a monolayer cover made of tailings from nearby Goldex Mine. Previous studies showed that production of acid mine drainage can be controlled by lowering the oxygen flux through Manitou tailings with a water table maintained at the interface between the cover and reactive tailings. Simulations of different scenarios were performed using numerical hydrogeological modeling to evaluate the capacity of the reclamation works to maintain the phreatic surface at this interface. A large-scale numerical model was constructed and calibrated using 3 years of field measurements. This model reproduced the field measurements, including the existence of a western zone on the site where the phreatic level targeted is not always met during the summer. A sensitivity analysis was performed to assess the response of the model to varying saturated hydraulic conductivities, porosities, and grain-size distributions. Higher variations of the hydraulic heads, with respect to the calibrated scenario results, were observed when simulating a looser or coarser cover material. Long-term responses were simulated using: the normal climatic data, data for a normal climate with a 2-month dry spell, and a simplified climate-change case. Environmental quality targets were reached less frequently during summer for the dry spell simulation as well as for the simplified climate-change scenario. This study illustrates how numerical simulations can be used as a key tool to assess the eventual performance of various mine-site reclamation scenarios.
Hermeling, Evelien; Delhaas, Tammo; Prinzen, Frits W; Kuijpers, Nico H L
2012-01-01
In the ECG, T- and R-wave are concordant during normal sinus rhythm (SR), but discordant after a period of ventricular pacing (VP). Experiments showed that the latter phenomenon, called T-wave memory, is mediated by a mechanical stimulus. By means of a mathematical model, we investigated the hypothesis that slow acting mechano-electrical feedback (MEF) explains T-wave memory. In our model, electromechanical behavior of the left ventricle (LV) was simulated using a series of mechanically and electrically coupled segments. Each segment comprised ionic membrane currents, calcium handling, and excitation-contraction coupling. MEF was incorporated by locally adjusting conductivity of L-type calcium current (g(CaL)) to local external work. In our set-up, g(CaL) could vary up to 25%, 50%, 100% or unlimited amount around its default value. Four consecutive simulations were performed: normal SR (with MEF), acute VP, sustained VP (with MEF), and acutely restored SR. MEF led to T-wave concordance in normal SR and to discordant T-waves acutely after restoring SR. Simulated ECGs with a maximum of 25-50% adaptation closely resembled those during T-wave memory experiments in vivo and also provided the best compromise between optimal systolic and diastolic function. In conclusion, these simulation results indicate that slow acting MEF in the LV can explain a) the relatively small differences in systolic shortening and mechanical work during SR, b) the small dispersion in repolarization time, c) the concordant T-wave during SR, and d) T-wave memory. The physiological distribution in electrophysiological properties, reflected by the concordant T-wave, may serve to optimize cardiac pump function.
NASA Astrophysics Data System (ADS)
Yuan, H. Z.; Chen, Z.; Shu, C.; Wang, Y.; Niu, X. D.; Shu, S.
2017-09-01
In this paper, a free energy-based surface tension force (FESF) model is presented for accurately resolving the surface tension force in numerical simulation of multiphase flows by the level set method. By using the analytical form of the order parameter along the normal direction to the interface in the phase-field method and the free energy principle, the FESF model offers an explicit and analytical formulation for the surface tension force. The only variable in this formulation is the normal distance to the interface, which can be substituted by the distance function solved by the level set method. On one hand, as compared to the conventional continuum surface force (CSF) model in the level set method, the FESF model introduces no regularized delta function, so it suffers less from numerical diffusion and performs better in mass conservation. On the other hand, as compared to the phase-field surface tension force (PFSF) model, the evaluation of the surface tension force in the FESF model is based on an analytical approach rather than numerical approximations of spatial derivatives. Therefore, better numerical stability and higher accuracy can be expected. Various numerical examples are tested to validate the robustness of the proposed FESF model. It turns out that the FESF model performs better than the CSF and PFSF models in terms of accuracy, stability, convergence speed and mass conservation. It is also shown in numerical tests that the FESF model can effectively simulate problems with high density/viscosity ratio, high Reynolds number and severe topological interfacial changes.
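The analytical profile exploited by such free-energy formulations is the equilibrium tanh solution of the phase-field equation along the interface normal. The sketch below is illustrative (the interface width ε and evaluation point are arbitrary, and the paper's exact force expression is not reproduced); it shows how the profile's derivative is available in closed form, with no regularized delta function or numerical differencing.

```python
import numpy as np

def order_parameter(d, eps):
    """Equilibrium phase-field profile phi(d) = tanh(d / (sqrt(2) eps)).

    d is the signed distance to the interface (from the level set function),
    eps the interface thickness parameter.
    """
    return np.tanh(d / (np.sqrt(2.0) * eps))

def d_order_parameter(d, eps):
    """Analytical derivative dphi/dd -- closed form, no finite differences."""
    t = np.tanh(d / (np.sqrt(2.0) * eps))
    return (1.0 - t * t) / (np.sqrt(2.0) * eps)
```

The analytical derivative can be checked against a central finite difference; far from the interface the profile saturates at ±1, i.e. pure bulk phase.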
Fatigue properties on the failure mode of a dental implant in a simulated body environment
NASA Astrophysics Data System (ADS)
Kim, Min Gun
2011-10-01
This study undertook a fatigue test in a simulated body environment reflecting the conditions within a living body, such as the body fluid conditions, the micro-current of cell membranes, and the chewing force. First, the study sought to evaluate the fatigue limit under normal conditions and in a simulated body environment, looking into the governing factors of implant fatigue strength through an observation of the fracture mode. In addition, the crack initiation behavior of a tungsten-carbide-coated abutment screw was examined. The fatigue limit of an implant within the simulated body environment decreased by 19% compared to the limit noted under normal conditions. Several corrosion pits were observed on the abutment screw after the fatigue test in the simulated body environment. For the model used in this study, the implant fracture was mostly governed by the fatigue failure of the abutment screw; accordingly, the influence of the fixture on the fatigue strength of the implant was noted to be low. For the abutment screw coated with tungsten carbide, several times the normal amount of stress was found to be concentrated on the contact part due to the elastic interaction between the coating material and the base material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hardy, A; Bostani, M; McMillan, K
Purpose: The purpose of this work is to estimate effective and lung doses from a low-dose lung cancer screening CT protocol using Tube Current Modulation (TCM) across patient models of different sizes. Methods: Monte Carlo simulation methods were used to estimate effective and lung doses from a low-dose lung cancer screening protocol for a 64-slice CT (Sensation 64, Siemens Healthcare) that used TCM. Scanning parameters were from the AAPM protocols. Ten GSF voxelized patient models were used and had all radiosensitive organs identified to facilitate estimating both organ and effective doses. Predicted TCM schemes for each patient model were generated using a validated method wherein tissue attenuation characteristics and scanner limitations were used to determine the TCM output as a function of table position and source angle. The water equivalent diameter (WED) was determined by estimating the attenuation at the center of the scan volume for each patient model. Monte Carlo simulations were performed using the unique TCM scheme for each patient model. Lung doses were tallied and effective doses were estimated using ICRP 103 tissue weighting factors. Effective and lung dose values were normalized by scan-specific 32 cm CTDIvol values based upon the average tube current across the entire simulated scan. Absolute and normalized doses were reported as a function of WED for each patient. Results: For all ten patients modeled, the effective dose using TCM protocols was below 1.5 mSv. Smaller sized patient models experienced lower absolute doses compared to larger sized patients. Normalized effective and lung doses showed some dependence on patient size (R2 = 0.77 and 0.78, respectively). Conclusion: Effective doses for a low-dose lung screening protocol using TCM were below 1.5 mSv for all patient models used in this study.
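The size-dependence analysis described above amounts to normalizing dose by CTDIvol and regressing the result against water-equivalent diameter. The helpers below are a generic sketch of that post-processing; the R² values quoted in the abstract come from the study's own data, not from this code, and the sample numbers in the test are made up.

```python
import numpy as np

def normalized_dose(dose, ctdi_vol):
    """Dose normalized by the scan-specific CTDIvol (e.g. mSv per mGy)."""
    return dose / ctdi_vol

def r_squared(x, y):
    """Coefficient of determination of the least-squares line y ~ a*x + b."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    return 1.0 - np.sum(resid**2) / np.sum((y - y.mean()) ** 2)
```

Applying `r_squared` to normalized dose versus WED across the patient cohort reproduces the kind of size-dependence statistic reported in the results.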
Institutional research agreement, Siemens Healthcare; Past recipient, research grant support, Siemens Healthcare; Consultant, Toshiba America Medical Systems; Consultant, Samsung Electronics.
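The normalization step described in the abstract above can be sketched in a few lines: divide each simulated dose by the scan-specific 32 cm CTDIvol, then regress the normalized dose against water equivalent diameter. All numbers below are invented for illustration and are not values from the study.

```python
import numpy as np

# Hypothetical inputs (illustrative only, not from the study)
wed_cm = np.array([22.0, 25.0, 28.0, 31.0, 34.0])    # water equivalent diameter
lung_dose_mGy = np.array([4.8, 4.1, 3.5, 3.0, 2.6])  # Monte Carlo lung dose
ctdivol_mGy = np.array([2.0, 2.2, 2.4, 2.6, 2.8])    # scan-specific 32 cm CTDIvol

# Normalize dose by CTDIvol, then fit the size dependence and compute R^2
norm_dose = lung_dose_mGy / ctdivol_mGy
slope, intercept = np.polyfit(wed_cm, norm_dose, 1)
pred = slope * wed_cm + intercept
ss_res = np.sum((norm_dose - pred) ** 2)
ss_tot = np.sum((norm_dose - norm_dose.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(round(r_squared, 2))
```

With these made-up values the normalized dose falls with patient size (negative slope), mirroring the size dependence the study reports.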
Effect of suspension kinematic on 14 DOF vehicle model
NASA Astrophysics Data System (ADS)
Wongpattananukul, T.; Chantharasenawong, C.
2017-12-01
Computer simulations play a major role in shaping modern science and engineering. They reduce time and resource consumption in new studies and designs. Vehicle simulations have been studied extensively to achieve a vehicle model used in minimum lap time solutions. Simulation accuracy depends on the ability of these models to represent the real phenomenon. Vehicle models with 7 degrees of freedom (DOF), 10 DOF, and 14 DOF are normally used in optimal control to solve for minimum lap time. However, suspension kinematics are always neglected in these models. Suspension kinematics are defined as wheel movements with respect to the vehicle body. Tire forces are expressed as a function of wheel slip and wheel position. Therefore, the suspension kinematic relation is appended to the 14 DOF vehicle model to investigate its effects on the accuracy of the simulated trajectory. The classical 14 DOF vehicle model is chosen as the baseline model. Experimental data are collected from formula student style car test runs as baseline data for simulation and for comparison between the baseline model and the model with suspension kinematics. Results show that in a single long turn there is an accumulated trajectory error in the baseline model compared to the model with suspension kinematics, while in short alternating turns the trajectory error is much smaller. These results show that suspension kinematics affect the simulated vehicle trajectory; consequently, optimal control based on the baseline model will yield an inaccurate control scheme.
A Fatigue Crack Size Evaluation Method Based on Lamb Wave Simulation and Limited Experimental Data
He, Jingjing; Ran, Yunmeng; Liu, Bin; Yang, Jinsong; Guan, Xuefei
2017-01-01
This paper presents a systematic and general method for Lamb wave-based crack size quantification using finite element simulations and Bayesian updating. The method consists of construction of a baseline quantification model using finite element simulation data and Bayesian updating with limited Lamb wave data from target structure. The baseline model correlates two proposed damage sensitive features, namely the normalized amplitude and phase change, with the crack length through a response surface model. The two damage sensitive features are extracted from the first received S0 mode wave package. The model parameters of the baseline model are estimated using finite element simulation data. To account for uncertainties from numerical modeling, geometry, material and manufacturing between the baseline model and the target model, Bayesian method is employed to update the baseline model with a few measurements acquired from the actual target structure. A rigorous validation is made using in-situ fatigue testing and Lamb wave data from coupon specimens and realistic lap-joint components. The effectiveness and accuracy of the proposed method is demonstrated under different loading and damage conditions. PMID:28902148
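The core idea of the abstract above, a baseline response-surface model corrected by Bayesian updating with a few target-structure measurements, can be illustrated with a conjugate normal update of an additive bias term. The response-surface coefficients, measurements, and variances below are assumptions for illustration, not the paper's values.

```python
# Hypothetical baseline response surface: crack length (mm) predicted from
# the two damage-sensitive features (normalized amplitude, phase change)
def baseline_crack_length(norm_amp, phase_change):
    return 2.0 + 8.0 * (1.0 - norm_amp) + 0.5 * phase_change

# Limited measurements from the target structure: (features, true length)
measurements = [((0.90, 1.0), 3.4), ((0.80, 2.0), 4.9), ((0.70, 3.0), 6.3)]

# Conjugate normal-normal update of an additive model bias
prior_mean, prior_var, noise_var = 0.0, 1.0, 0.04
errors = [y - baseline_crack_length(*f) for f, y in measurements]
n = len(errors)
post_var = 1.0 / (1.0 / prior_var + n / noise_var)
post_mean = post_var * (prior_mean / prior_var + sum(errors) / noise_var)

def updated_crack_length(norm_amp, phase_change):
    # Baseline prediction shifted by the posterior-mean bias
    return baseline_crack_length(norm_amp, phase_change) + post_mean
```

The paper's actual updating acts on the response-surface parameters themselves; this additive-bias version is a minimal stand-in showing how a handful of measurements pull the simulation-trained model toward the target structure.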
Yaguchi, A; Nagase, K; Ishikawa, M; Iwasaka, T; Odagaki, M; Hosaka, H
2006-01-01
Computer simulation and myocardial cell models were used to evaluate a low-energy defibrillation technique. A generated spiral wave, considered to be a mechanism of fibrillation, and fibrillation were investigated using two myocardial sheet models: a two-dimensional computer simulation model and a two-dimensional experimental model. A new defibrillation technique is desired that minimizes the side effects on cardiac muscle induced by the current passing through the patient's body. The purpose of the present study is to conduct a basic investigation into an efficient defibrillation method. In order to evaluate the defibrillation method, the propagation of excitation in the myocardial sheet is measured during the normal state and during fibrillation, respectively. The advantages of the low-energy defibrillation technique are then discussed based on the stimulation timing.
Modeling of convection phenomena in Bridgman-Stockbarger crystal growth
NASA Technical Reports Server (NTRS)
Carlson, F. M.; Eraslan, A. H.; Sheu, J. Z.
1985-01-01
Thermal convection phenomena in a vertically oriented Bridgman-Stockbarger apparatus were modeled by computer simulations for different gravity conditions, ranging from earth conditions to extremely low gravity, approximate space conditions. The modeling results were obtained by the application of a state-of-the art, transient, multi-dimensional, completely densimetrically coupled, discrete-element computational model which was specifically developed for the simulation of flow, temperature, and species concentration conditions in two-phase (solid-liquid) systems. The computational model was applied to the simulation of the flow and the thermal conditions associated with the convection phenomena in a modified Germanium-Silicon charge enclosed in a stationary fused-silica ampoule. The results clearly indicated that the gravitational field strength influences the characteristics of the coherent vortical flow patterns, interface shape and position, maximum melt velocity, and interfacial normal temperature gradient.
NASA Astrophysics Data System (ADS)
Cai, Gaochao; Vanderborght, Jan; Langensiepen, Matthias; Schnepf, Andrea; Hüging, Hubert; Vereecken, Harry
2018-04-01
How much water can be taken up by roots and how this depends on the root and water distributions in the root zone are important questions that need to be answered to describe water fluxes in the soil-plant-atmosphere system. Physically based root water uptake (RWU) models that relate RWU to transpiration, root density, and water potential distributions have been developed but used or tested far less. This study aims at evaluating the simulated RWU of winter wheat using the empirical Feddes-Jarvis (FJ) model and the physically based Couvreur (C) model for different soil water conditions and soil textures compared to sap flow measurements. Soil water content (SWC), water potential, and root development were monitored noninvasively at six soil depths in two rhizotron facilities that were constructed in two soil textures: stony vs. silty, with each of three water treatments: sheltered, rainfed, and irrigated. Soil and root parameters of the two models were derived from inverse modeling and simulated RWU was compared with sap flow measurements for validation. The different soil types and water treatments resulted in different crop biomass, root densities, and root distributions with depth. The two models simulated the lowest RWU in the sheltered plot of the stony soil where RWU was also lower than the potential RWU. In the silty soil, simulated RWU was equal to the potential uptake for all treatments. The variation of simulated RWU among the different plots agreed well with measured sap flow but the C model predicted the ratios of the transpiration fluxes in the two soil types slightly better than the FJ model. The root hydraulic parameters of the C model could be constrained by the field data but not the water stress parameters of the FJ model. This was attributed to differences in root densities between the different soils and treatments which are accounted for by the C model, whereas the FJ model only considers normalized root densities. 
The impact of differences in root density on RWU could be accounted for directly by the physically based RWU model but not by empirical models that use normalized root density functions.
Steiner, Malte; Claes, Lutz; Ignatius, Anita; Simon, Ulrich; Wehner, Tim
2014-07-01
The outcome of secondary fracture healing processes is strongly influenced by interfragmentary motion. Shear movement is assumed to be more disadvantageous than axial movement; however, experimental results are contradictory. Numerical fracture healing models allow simulation of the fracture healing process with variation of single input parameters and under comparable, normalized mechanical conditions. Thus, a comparison of the influence of different loading directions on the healing process is possible. In this study we simulated fracture healing under several axial compressive, translational shear, and torsional shear movement scenarios, and compared their respective healing times. To this end, we used a calibrated numerical model for fracture healing in sheep. Numerous variations of movement amplitudes and musculoskeletal loads were simulated for the three loading directions. Our results show that isolated axial compression was more beneficial for fracture healing success than both isolated shearing conditions, for load and displacement magnitudes that were identical as well as physiologically different, and even for strain-based, normalized comparable conditions. Additionally, torsional shear movements had less impeding effects than translational shear movements. Therefore, our findings suggest that osteosynthesis implants can be optimized, in particular, to limit translational interfragmentary shear under musculoskeletal loading. © 2014 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
A remote-sensing driven tool for estimating crop stress and yields
USDA-ARS?s Scientific Manuscript database
Biophysical crop simulation models are normally forced with precipitation data recorded with either gages or ground-based radar. However, ground based recording networks are not available at spatial and temporal scales needed to drive the models at many critical places on earth. An alternative would...
Cunningham, C E; Siegel, L S
1987-06-01
Groups of 30 ADD-H boys and 90 normal boys were divided into 30 mixed dyads composed of a normal and an ADD-H boy, and 30 normal dyads composed of 2 normal boys. Dyads were videotaped interacting in 15-minute free-play, 15-minute cooperative task, and 15-minute simulated classroom settings. Mixed dyads engaged in more controlling interaction than normal dyads in both free-play and simulated classroom settings. In the simulated classroom, mixed dyads completed fewer math problems and were less compliant with the commands of peers. ADD-H children spent less simulated classroom time on task and scored lower on drawing tasks than normal peers. Older dyads proved less controlling, more compliant with peer commands, more inclined to play and work independently, less active, and more likely to remain on task during the cooperative task and simulated classroom settings. Results suggest that the ADD-H child prompts a more controlling, less cooperative pattern of responses from normal peers.
On the continuity of mean total normal stress in geometrical multiscale cardiovascular problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blanco, Pablo J., E-mail: pjblanco@lncc.br; INCT-MACC, Instituto Nacional de Ciência e Tecnologia em Medicina Assistida por Computação Científica, Petrópolis; Deparis, Simone, E-mail: simone.deparis@epfl.ch
2013-10-15
In this work an iterative strategy to implicitly couple dimensionally-heterogeneous blood flow models accounting for the continuity of mean total normal stress at interface boundaries is developed. Conservation of mean total normal stress in the coupling of heterogeneous models is mandatory to satisfy energetic consistency between them. Nevertheless, existing methodologies are based on modifications of the Navier–Stokes variational formulation, which are undesired when dealing with fluid–structure interaction or black box codes. The proposed methodology makes it possible to couple one-dimensional and three-dimensional fluid–structure interaction models, enforcing the continuity of mean total normal stress while just imposing flow rate data or even the classical Neumann boundary data to the models. This is accomplished by modifying an existing iterative algorithm, which is also able to account for the continuity of the vessel area, when required. Comparisons are performed to assess differences in the convergence properties of the algorithms when considering the continuity of mean normal stress and the continuity of mean total normal stress for a wide range of flow regimes. Finally, examples in the physiological regime are shown to evaluate the importance, or not, of considering the continuity of mean total normal stress in hemodynamics simulations.
The dynamic simulation of the Progetto Energia combined cycle power plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giglio, R.; Cerabolini, M.; Pisacane, F.
1996-12-31
Over the next four years, the Progetto Energia project is building several cogeneration plants to satisfy the increasing demands of Italy's industrial complex and the country's demand for electrical power. Located at six different sites within Italy's borders, these Combined Cycle Cogeneration Plants will supply a total of 500 MW of electricity and 100 tons/hr of process steam to Italian industries and residences. To ensure project success, a dynamic model of the 50 MW base unit was developed. The goal established for the model was to predict the dynamic behavior of the complex thermodynamic system in order to assess equipment performance and control system effectiveness for normal operation and, more importantly, abrupt load changes. In addition to fulfilling its goals, the dynamic study guided modifications to controller logic that significantly improved steam drum pressure control and bypassed steam de-superheating performance. Simulations of normal and abrupt transient events allowed engineers to define optimum controller gain coefficients. The paper discusses the Combined Cycle plant configuration, its operating modes and control system, the dynamic model representation, the simulation results and project benefits.
Pulsatile flows and wall-shear stresses in models simulating normal and stenosed aortic arches
NASA Astrophysics Data System (ADS)
Huang, Rong Fung; Yang, Ten-Fang; Lan, Y.-K.
2010-03-01
Pulsatile aqueous glycerol solution flows in the models simulating normal and stenosed human aortic arches are measured by means of particle image velocimetry. Three transparent models were used: normal, 25% stenosed, and 50% stenosed aortic arches. The Womersley parameter, Dean number, and time-averaged Reynolds number are 17.31, 725, and 1,081, respectively. The Reynolds numbers based on the peak velocities of the normal, 25% stenosed, and 50% stenosed aortic arches are 2,484, 3,456, and 3,931, respectively. The study presents the temporal/spatial evolution processes of the flow pattern, velocity distribution, and wall-shear stress during the systolic and diastolic phases. It is found that the flow pattern evolving in the central plane of normal and stenosed aortic arches exhibits (1) a separation bubble around the inner arch, (2) a recirculation vortex around the outer arch wall upstream of the junction of the brachiocephalic artery, (3) an accelerated main stream around the outer arch wall near the junctions of the left carotid and the left subclavian arteries, and (4) the vortices around the entrances of the three main branches. The study identifies and discusses the reasons for the flow physics’ contribution to the formation of these features. The oscillating wall-shear stress distributions are closely related to the featured flow structures. On the outer wall of normal and slightly stenosed aortas, large wall-shear stresses appear in the regions upstream of the junction of the brachiocephalic artery as well as the corner near the junctions of the left carotid artery and the left subclavian artery. On the inner wall, the largest wall-shear stress appears in the region where the boundary layer separates.
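The Womersley parameter quoted in the abstract above can be recovered from its definition, α = R√(ωρ/μ). The radius, pulsation frequency, and fluid properties below are plausible assumptions for an aqueous glycerol blood analog, not values reported in the paper, so the result only approximates the stated 17.31.

```python
import math

# Assumed geometry and fluid properties (illustrative, not from the paper)
R = 0.0125            # aortic model radius, m
heart_rate_hz = 1.2   # pulsation frequency, Hz
rho = 1100.0          # density of aqueous glycerol solution, kg/m^3
mu = 0.004            # dynamic viscosity, Pa*s

omega = 2.0 * math.pi * heart_rate_hz
alpha = R * math.sqrt(omega * rho / mu)   # Womersley parameter
print(round(alpha, 1))
```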
Mounts, W M; Liebman, M N
1997-07-01
We have developed a method for representing biological pathways and simulating their behavior based on the use of stochastic activity networks (SANs). SANs, an extension of the original Petri net, have been used traditionally to model flow systems including data-communications networks and manufacturing processes. We apply the methodology to the blood coagulation cascade, a biological flow system, and present the representation method as well as results of simulation studies based on published experimental data. In addition to describing the dynamic model, we also present the results of its utilization to perform simulations of clinical states including hemophilias A and B as well as sensitivity analysis of individual factors and their impact on thrombin production.
Simulation of urban land surface temperature based on sub-pixel land cover in a coastal city
NASA Astrophysics Data System (ADS)
Zhao, Xiaofeng; Deng, Lei; Feng, Huihui; Zhao, Yanchuang
2014-11-01
The sub-pixel urban land cover has been shown to have clear correlations with land surface temperature (LST), yet these relationships have seldom been used to simulate LST. In this study we provide a new approach to urban LST simulation based on sub-pixel land cover modeling. Landsat TM/ETM+ images of Xiamen city, China from January 2002 and January 2007 were used to acquire land cover and then extract the transformation rule using logistic regression. The normalized transformation probability was taken as the class percentage within each pixel. Cellular automata were then used to simulate sub-pixel land cover for 2007 and 2017. In parallel, the correlations between retrieved LST and the sub-pixel land cover obtained by spectral mixture analysis in 2002 were examined and a regression model was built. The regression model was then applied to the simulated 2007 land cover to model the LST of 2007. Finally, the LST of 2017 was simulated for urban planning and management. The results showed that our method is useful in LST simulation. Although the simulation accuracy is not yet satisfactory, it provides an important idea and a good start in the modeling of urban LST.
Lo, Kenneth
2011-01-01
Cluster analysis is the automated search for groups of homogeneous observations in a data set. A popular modeling approach for clustering is based on finite normal mixture models, which assume that each cluster is modeled as a multivariate normal distribution. However, the normality assumption that each component is symmetric is often unrealistic. Furthermore, normal mixture models are not robust against outliers; they often require extra components for modeling outliers and/or give a poor representation of the data. To address these issues, we propose a new class of distributions, multivariate t distributions with the Box-Cox transformation, for mixture modeling. This class of distributions generalizes the normal distribution with the more heavy-tailed t distribution, and introduces skewness via the Box-Cox transformation. As a result, this provides a unified framework to simultaneously handle outlier identification and data transformation, two interrelated issues. We describe an Expectation-Maximization algorithm for parameter estimation along with transformation selection. We demonstrate the proposed methodology with three real data sets and simulation studies. Compared with a wealth of approaches including the skew-t mixture model, the proposed t mixture model with the Box-Cox transformation performs favorably in terms of accuracy in the assignment of observations, robustness against model misspecification, and selection of the number of components. PMID:22125375
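The Box-Cox transformation at the heart of the proposed mixture model is easy to sketch in isolation: λ = 0 gives the log transform, which symmetrizes right-skewed (e.g. log-normal) data. The mixture and EM machinery are omitted here, and the data are synthetic.

```python
import numpy as np

def box_cox(x, lam):
    # Box-Cox transformation: (x**lam - 1)/lam for lam != 0, log(x) for lam == 0
    x = np.asarray(x, dtype=float)
    if lam == 0.0:
        return np.log(x)
    return (x ** lam - 1.0) / lam

def skewness(v):
    # Sample skewness: third central moment over variance^(3/2)
    v = v - v.mean()
    return (v ** 3).mean() / (v ** 2).mean() ** 1.5

rng = np.random.default_rng(2)
x = rng.lognormal(mean=0.0, sigma=0.7, size=10_000)  # right-skewed data

# lam = 0 (log) symmetrizes log-normal data almost exactly
print(round(skewness(x), 1), round(skewness(box_cox(x, 0.0)), 2))
```

In the paper, λ is estimated jointly with the mixture parameters inside the EM algorithm rather than fixed by hand as here.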
Constructing inverse probability weights for continuous exposures: a comparison of methods.
Naimi, Ashley I; Moodie, Erica E M; Auger, Nathalie; Kaufman, Jay S
2014-03-01
Inverse probability-weighted marginal structural models with binary exposures are common in epidemiology. Constructing inverse probability weights for a continuous exposure can be complicated by the presence of outliers, and the need to identify a parametric form for the exposure and account for nonconstant exposure variance. We explored the performance of various methods to construct inverse probability weights for continuous exposures using Monte Carlo simulation. We generated two continuous exposures and binary outcomes using data sampled from a large empirical cohort. The first exposure followed a normal distribution with homoscedastic variance. The second exposure followed a contaminated Poisson distribution, with heteroscedastic variance equal to the conditional mean. We assessed six methods to construct inverse probability weights using: a normal distribution, a normal distribution with heteroscedastic variance, a truncated normal distribution with heteroscedastic variance, a gamma distribution, a t distribution (1, 3, and 5 degrees of freedom), and a quantile binning approach (based on 10, 15, and 20 exposure categories). We estimated the marginal odds ratio for a single-unit increase in each simulated exposure in a regression model weighted by the inverse probability weights constructed using each approach, and then computed the bias and mean squared error for each method. For the homoscedastic exposure, the standard normal, gamma, and quantile binning approaches performed best. For the heteroscedastic exposure, the quantile binning, gamma, and heteroscedastic normal approaches performed best. Our results suggest that the quantile binning approach is a simple and versatile way to construct inverse probability weights for continuous exposures.
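One of the six approaches compared above, the plain normal model, makes for a compact sketch of how stabilized weights are built for a continuous exposure: the marginal density of the exposure divided by its conditional density given the confounder. The data are simulated here, not drawn from the paper's empirical cohort.

```python
import numpy as np

def normal_pdf(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
n = 5000
L = rng.normal(size=n)             # confounder
A = 0.5 * L + rng.normal(size=n)   # continuous exposure, normal and homoscedastic

# Stabilized weight = f(A) / f(A | L), both modeled as normal densities
numerator = normal_pdf(A, A.mean(), A.std())
beta = np.polyfit(L, A, 1)         # linear model for E[A | L]
cond_mean = np.polyval(beta, L)
resid_sd = (A - cond_mean).std()
sw = numerator / normal_pdf(A, cond_mean, resid_sd)

print(round(sw.mean(), 2))         # stabilized weights average near 1
```

The quantile-binning alternative the paper favors replaces both densities with category probabilities from a discretized exposure, trading parametric assumptions for robustness.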
Parametric Model of an Aerospike Rocket Engine
NASA Technical Reports Server (NTRS)
Korte, J. J.
2000-01-01
A suite of computer codes was assembled to simulate the performance of an aerospike engine and to generate the engine input for the Program to Optimize Simulated Trajectories. First an engine simulator module was developed that predicts the aerospike engine performance for a given mixture ratio, power level, thrust vectoring level, and altitude. This module was then used to rapidly generate the aerospike engine performance tables for axial thrust, normal thrust, pitching moment, and specific thrust. Parametric engine geometry was defined for use with the engine simulator module. The parametric model was also integrated into the iSIGHT multidisciplinary framework so that alternate designs could be determined. The computer codes were used to support in-house conceptual studies of reusable launch vehicle designs.
Selgrade, J F; Harris, L A; Pasteur, R D
2009-10-21
This study presents a 13-dimensional system of delayed differential equations which predicts serum concentrations of five hormones important for regulation of the menstrual cycle. Parameters for the system are fit to two different data sets for normally cycling women. For these best fit parameter sets, model simulations agree well with the two different data sets but one model also has an abnormal stable periodic solution, which may represent polycystic ovarian syndrome. This abnormal cycle occurs for the model in which the normal cycle has estradiol levels at the high end of the normal range. Differences in model behavior are explained by studying hysteresis curves in bifurcation diagrams with respect to sensitive model parameters. For instance, one sensitive parameter is indicative of the estradiol concentration that promotes pituitary synthesis of a large amount of luteinizing hormone, which is required for ovulation. Also, it is observed that models with greater early follicular growth rates may have a greater risk of cycling abnormally.
Simplifying BRDF input data for optical signature modeling
NASA Astrophysics Data System (ADS)
Hallberg, Tomas; Pohl, Anna; Fagerström, Jan
2017-05-01
Scene simulations of optical signature properties using signature codes normally require input of various parameterized measurement data for surfaces and coatings in order to achieve realistic scene object features. Some of the most important parameters are used in the model of the Bidirectional Reflectance Distribution Function (BRDF) and are normally determined by surface reflectance and scattering measurements. Reflectance measurements of the spectral Directional Hemispherical Reflectance (DHR) at various incident angles can normally be performed in most spectroscopy labs, while measuring the BRDF is more complicated and may not be available at all in many optical labs. We present a method for deriving the necessary BRDF data for modeling software directly from DHR measurements, using the Sandford-Robertson BRDF model. The accuracy of the method is tested by modeling a test surface and comparing results obtained using estimated and measured BRDF data as input to the model. These results show that using this method gives no significant loss in modeling accuracy.
Prediction and Validation of Mars Pathfinder Hypersonic Aerodynamic Data Base
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Braun, Robert D.; Weilmuenster, K. James; Mitcheltree, Robert A.; Engelund, Walter C.; Powell, Richard W.
1998-01-01
Postflight analysis of the Mars Pathfinder hypersonic, continuum aerodynamic data base is presented. Measured data include accelerations along the body axis and axis normal directions. Comparisons of preflight simulation and measurements show good agreement. The prediction of two static instabilities associated with movement of the sonic line from the shoulder to the nose and back was confirmed by measured normal accelerations. Reconstruction of atmospheric density during entry has an uncertainty directly proportional to the uncertainty in the predicted axial coefficient. The sensitivity of the moment coefficient to freestream density, kinetic models and center-of-gravity location are examined to provide additional consistency checks of the simulation with flight data. The atmospheric density as derived from axial coefficient and measured axial accelerations falls within the range required for sonic line shift and static stability transition as independently determined from normal accelerations.
Mathematical modelling of intra-aortic balloon pump.
Abdolrazaghi, Mona; Navidbakhsh, Mahdi; Hassani, Kamran
2010-10-01
Ischemic heart diseases now afflict thousands of Iranians and are the major cause of death in many industrialised countries. Mathematical modelling of an intra-aortic balloon pump (IABP) could provide a better understanding of its performance and help to represent blood flow and pressure in systemic arteries before and after inserting the pump. A mathematical modelling of the whole cardiovascular system was formulated using MATLAB software. The block diagram of the model consists of 43 compartments. All the anatomical data was extracted from the physiological references. In the next stage, myocardial infarction (MI) was induced in the model by decreasing the contractility of the left ventricle. The IABP was mathematically modelled and inserted in the model in the thoracic aorta I artery just before the descending aorta. The effects of IABP on MI were studied using the mathematical model. The normal operation of the cardiovascular system was studied firstly. The pressure-time graphs of the ventricles, atriums, aorta, pulmonary system, capillaries and arterioles were obtained. The volume-time curve of the left ventricle was also presented. The pressure-time curves of the left ventricle and thoracic aorta I were obtained for normal, MI, and inserted IABP conditions. Model verification was performed by comparing the simulation results with the clinical observations reported in the literature. IABP can be described by a theoretical model. Our model representing the cardiovascular system is capable of showing the effects of different pathologies such as MI and we have shown that MI effects can be reduced using IABP in accordance with the modelling results. The mathematical model should serve as a useful tool to simulate and better understand cardiovascular operation in normal and pathological conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Kaiyu; Yan, Da; Hong, Tianzhen
2014-02-28
Overtime is a common phenomenon around the world. Overtime drives both internal heat gains from occupants, lighting and plug-loads, and HVAC operation during overtime periods. Overtime leads to longer occupancy hours and extended operation of building services systems beyond normal working hours, thus overtime impacts total building energy use. Current literature lacks methods to model overtime occupancy because overtime is stochastic in nature and varies by individual occupants and by time. To address this gap in the literature, this study aims to develop a new stochastic model based on the statistical analysis of measured overtime occupancy data from an office building. A binomial distribution is used to represent the total number of occupants working overtime, while an exponential distribution is used to represent the duration of overtime periods. The overtime model is used to generate overtime occupancy schedules as an input to the energy model of a second office building. The measured and simulated cooling energy use during the overtime period is compared in order to validate the overtime model. A hybrid approach to energy model calibration is proposed and tested, which combines ASHRAE Guideline 14 for the calibration of the energy model during normal working hours, and a proposed KS test for the calibration of the energy model during overtime. The developed stochastic overtime model and the hybrid calibration approach can be used in building energy simulations to improve the accuracy of results, and better understand the characteristics of overtime in office buildings.
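The two distributions named in the abstract above, a binomial for how many occupants work overtime and an exponential for how long each overtime period lasts, can be combined into a one-day sampler. The occupant count, probability, and mean duration below are placeholder assumptions, not the paper's fitted parameters.

```python
import numpy as np

# Assumed parameters (placeholders, not the study's fitted values)
n_occupants = 50       # occupants in the office
p_overtime = 0.2       # probability an occupant works overtime on a given day
mean_duration_h = 1.5  # mean overtime duration, hours

def sample_overtime_day(rng):
    # Number working overtime ~ Binomial; each duration ~ Exponential
    k = rng.binomial(n_occupants, p_overtime)
    durations = rng.exponential(mean_duration_h, size=k)
    return k, durations

rng = np.random.default_rng(1)
k, durations = sample_overtime_day(rng)
print(k, round(durations.sum(), 1))  # headcount and occupant-hours of overtime
```

Schedules sampled this way, one draw per workday, are what the study feeds into the building energy model as overtime occupancy input.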
Subject-specific finite-element modeling of normal aortic valve biomechanics from 3D+t TEE images.
Labrosse, Michel R; Beller, Carsten J; Boodhwani, Munir; Hudson, Christopher; Sohmer, Benjamin
2015-02-01
In the past decades, developments in transesophageal echocardiography (TEE) have opened new horizons in reconstructive surgery of the aortic valve (AV), whereby corrections are made to normalize the geometry and function of the valve, and effectively treat leaks. To the best of our knowledge, we propose the first integrated framework to process subject-specific 3D+t TEE AV data, determine age-matched material properties for the aortic and leaflet tissues, build a finite element model of the unpressurized AV, and simulate the AV function throughout a cardiac cycle. For geometric reconstruction purposes, dedicated software was created to acquire the 3-D coordinates of 21 anatomical landmarks of the AV apparatus in a systematic fashion. Measurements from ten 3D+t TEE datasets of normal AVs were assessed for inter- and intra-observer variability. These tests demonstrated mean measurement errors well within the acceptable range. Simulation of a complete cardiac cycle was successful for all ten valves and validated the novel schemes introduced to evaluate age-matched material properties and iteratively scale the unpressurized dimensions of the valves such that, given the determined material properties, the dimensions measured in vivo closely matched those simulated in late diastole. The leaflet coaptation area, describing the quality of the sealing of the valve, was measured directly from the medical images and was also obtained from the simulations; both approaches correlated well. The mechanical stress values obtained from the simulations may be interpreted in a comparative sense whereby higher values are indicative of higher risk of tearing and/or development of calcification.
DAMS: A Model to Assess Domino Effects by Using Agent-Based Modeling and Simulation.
Zhang, Laobing; Landucci, Gabriele; Reniers, Genserik; Khakzad, Nima; Zhou, Jianfeng
2017-12-19
Historical data analysis shows that escalation accidents, so-called domino effects, play an important role in disastrous accidents in the chemical and process industries. In this study, an agent-based modeling and simulation approach is proposed to study the propagation of domino effects in the chemical and process industries. Different from the analytical or Monte Carlo simulation approaches, which normally study the domino effect at probabilistic network levels, the agent-based modeling technique explains the domino effects from a bottom-up perspective. In this approach, the installations involved in a domino effect are modeled as agents, whereas the interactions among the installations (e.g., by means of heat radiation) are modeled via the basic rules of the agents. Application of the developed model to several case studies demonstrates the ability of the model not only in modeling higher-level domino effects and synergistic effects but also in accounting for temporal dependencies. The model can readily be applied to large-scale complicated cases.
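The bottom-up agent logic described above can be illustrated with a deliberately simplified, deterministic sketch: installations are agents, heat radiation is the interaction, and an agent escalates once its received heat load crosses a threshold. All numbers are invented for illustration:

```python
def simulate_domino(radiation, thresholds, initial_fire, steps=10):
    """Each installation i is an agent; once burning, it radiates
    radiation[i][j] kW/m^2 onto installation j. An intact agent
    escalates when the total heat it receives exceeds its threshold."""
    n = len(thresholds)
    burning = [i == initial_fire for i in range(n)]
    history = [list(burning)]
    for _ in range(steps):
        received = [sum(radiation[i][j] for i in range(n) if burning[i])
                    for j in range(n)]
        new_burning = [burning[j] or received[j] >= thresholds[j]
                       for j in range(n)]
        if new_burning == burning:   # steady state: no further escalation
            break
        burning = new_burning
        history.append(list(burning))
    return history

# Three tanks in a row: tank 0 ignites; its radiation alone cannot reach
# tank 2, but once tank 1 escalates the combined load does (synergy).
radiation = [[0, 20, 5], [20, 0, 20], [5, 20, 0]]
thresholds = [15, 15, 22]
history = simulate_domino(radiation, thresholds, initial_fire=0)
```

The second-order escalation of tank 2, reachable only after tank 1 burns, is a toy version of the higher-level and synergistic effects the abstract mentions; the real model adds probabilistic failure and temporal dependencies.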
Statistically Modeling I-V Characteristics of CNT-FET with LASSO
NASA Astrophysics Data System (ADS)
Ma, Dongsheng; Ye, Zuochang; Wang, Yan
2017-08-01
With the advent of the Internet of Things (IoT), the need for studying new materials and devices for various applications is increasing. Traditionally, compact models for transistors are built on the basis of physics. But physical models are expensive to develop and need a very long time to adjust for non-ideal effects. As the vision for the application of many novel devices is not certain or the manufacturing process is not mature, deriving generalized, accurate physical models for such devices is very strenuous, whereas statistical modeling is becoming a potential alternative because of its data-oriented nature and fast implementation. In this paper, one classical statistical regression method, LASSO, is used to model the I-V characteristics of a CNT-FET, and a pseudo-PMOS inverter simulation based on the trained model is implemented in Cadence. The normalized relative mean-square prediction error of the trained model versus experimental sample data and the simulation results show that the model is acceptable for digital circuit static simulation. Such a modeling methodology can be extended to other devices.
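LASSO itself is standard, so the regression step can be sketched independently of the CNT-FET measurement data; a minimal coordinate-descent implementation on synthetic data (the feature matrix and targets below are invented, not device measurements):

```python
def lasso(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO: minimize 0.5*||y - Xw||^2 + lam*||w||_1.
    X is a list of rows; columns are assumed roughly comparable in scale."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # residual excluding feature j's contribution
            r = [y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            # soft-thresholding update drives weak coefficients to exactly 0
            if rho > lam:
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w

# y depends on feature 0 only; LASSO should zero out feature 1.
X = [[1.0, 0.5], [2.0, -0.3], [3.0, 0.1], [4.0, -0.4], [5.0, 0.2]]
y = [2.1, 4.0, 6.2, 7.9, 10.1]
w = lasso(X, y, lam=1.0)
```

The exact zeroing of irrelevant coefficients is what makes LASSO attractive for selecting a compact feature basis from many candidate terms in an I-V model.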
Summers, Richard L; Pipke, Matt; Wegerich, Stephan; Conkright, Gary; Isom, Kristen C
2014-01-01
Background. Monitoring cardiovascular hemodynamics in the modern clinical setting is a major challenge. Increasing amounts of physiologic data must be analyzed and interpreted in the context of the individual patient's pathology and inherent biologic variability. Certain data-driven analytical methods are currently being explored for smart monitoring of data streams from patients as a first-tier automated detection system for clinical deterioration. As a prelude to human clinical trials, an empirical multivariate machine learning method called Similarity-Based Modeling (SBM) was tested in an in silico experiment using data generated with the aid of a detailed computer simulator of human physiology (Quantitative Circulatory Physiology, or QCP) which contains complex control systems with realistic integrated feedback loops. Methods. SBM is a kernel-based, multivariate machine learning method that uses monitored clinical information to generate an empirical model of a patient's physiologic state. This platform allows for the use of predictive analytic techniques to identify early changes in a patient's condition that are indicative of a state of deterioration or instability. The integrity of the technique was tested through an in silico experiment using QCP in which the output of computer simulations of a slowly evolving cardiac tamponade resulted in a progressive state of cardiovascular decompensation. Simulator outputs for the variables under consideration were generated at a 2-min data rate (0.083 Hz) with the tamponade introduced at a point 420 minutes into the simulation sequence. The functionality of the SBM predictive analytics methodology to identify clinical deterioration was compared to the thresholds used by conventional monitoring methods. Results. The SBM modeling method was found to closely track the normal physiologic variation as simulated by QCP.
With the slow development of the tamponade, the SBM model estimates are seen to diverge from the simulated biosignals in the early stages of physiologic deterioration, while the variables are still within normal ranges. Thus, the SBM system was found to identify pathophysiologic conditions in a timeframe that would not have been detected in a usual clinical monitoring scenario. Conclusion. In this study the functionality of a multivariate machine learning predictive methodology that incorporates commonly monitored clinical information was tested using a computer model of human physiology. SBM and predictive analytics were able to differentiate a state of decompensation while the monitored variables were still within normal clinical ranges. This finding suggests that SBM could provide for early identification of a clinical deterioration using predictive analytic techniques. Keywords: predictive analytics, hemodynamics, monitoring.
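The kernel-based similarity idea at the heart of SBM can be sketched generically: store a memory of normal-operation vectors, form a similarity-weighted estimate of the current state, and flag large residuals. This is a generic sketch of the idea with invented vital-sign numbers, not the proprietary SBM implementation:

```python
import math

def sbm_estimate(memory, x, bandwidth=1.0):
    """Estimate the 'normal' state for observation x as a Gaussian-kernel
    weighted combination of stored normal-operation vectors (the memory);
    the residual norm measures departure from learned normality."""
    weights = [math.exp(-sum((xi - mi) ** 2 for xi, mi in zip(x, m))
                        / (2.0 * bandwidth ** 2)) for m in memory]
    total = sum(weights)
    dim = len(x)
    est = [sum(w * m[d] for w, m in zip(weights, memory)) / total
           for d in range(dim)]
    residual = math.sqrt(sum((xi - ei) ** 2 for xi, ei in zip(x, est)))
    return est, residual

# Memory of normal (heart rate, mean pressure) pairs -- invented numbers.
memory = [[70, 90], [75, 92], [80, 95], [72, 88]]
_, r_normal = sbm_estimate(memory, [74, 91], bandwidth=5.0)
_, r_abnormal = sbm_estimate(memory, [120, 60], bandwidth=5.0)
```

A residual that grows while each individual variable is still inside its population-normal band is precisely the multivariate early-warning behavior the abstract reports.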
Austin, Peter C; Steyerberg, Ewout W
2012-06-20
When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal, or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
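The equal-variance binormal result quoted above can be checked by simulation: with the explanatory variable distributed N(mu0, sigma^2) in controls and N(mu1, sigma^2) in cases, the c-statistic equals Phi((mu1 - mu0) / (sigma * sqrt(2))), where Phi is the standard normal CDF. A quick Monte Carlo check:

```python
import math
import random

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def empirical_c(cases, controls):
    """c-statistic = P(X_case > X_control), counting ties as 1/2."""
    wins = sum(1.0 if x > y else (0.5 if x == y else 0.0)
               for x in cases for y in controls)
    return wins / (len(cases) * len(controls))

rng = random.Random(1)
mu0, mu1, sigma = 0.0, 1.0, 1.0
controls = [rng.gauss(mu0, sigma) for _ in range(500)]
cases = [rng.gauss(mu1, sigma) for _ in range(500)]
c_emp = empirical_c(cases, controls)
c_theory = phi((mu1 - mu0) / (sigma * math.sqrt(2.0)))  # about 0.760
```

With a standardized mean difference of 1 the theoretical c-statistic is about 0.76, which the empirical pairwise estimate reproduces to Monte Carlo accuracy.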
Background Error Correlation Modeling with Diffusion Operators
2013-01-01
Chapter 8: Background error correlation modeling with diffusion operators (Max Yaremchuk). A structure like this simulates enhanced diffusive transport of model errors in the regions of strong currents on the background of …
Tang, Min; Curtis, Sean; Yoon, Sung-Eui; Manocha, Dinesh
2009-01-01
We present an interactive algorithm for continuous collision detection between deformable models. We introduce multiple techniques to improve the culling efficiency and the overall performance of continuous collision detection. First, we present a novel formulation for continuous normal cones and use these normal cones to efficiently cull large regions of the mesh as part of self-collision tests. Second, we introduce the concept of "procedural representative triangles" to remove all redundant elementary tests between nonadjacent triangles. Finally, we exploit the mesh connectivity and introduce the concept of "orphan sets" to eliminate redundant elementary tests between adjacent triangle primitives. In practice, we can reduce the number of elementary tests by two orders of magnitude. These culling techniques have been combined with bounding volume hierarchies and can result in one order of magnitude performance improvement as compared to prior collision detection algorithms for deformable models. We highlight the performance of our algorithm on several benchmarks, including cloth simulations, N-body simulations, and breaking objects.
Seasonal Parameterizations of the Tau-Omega Model Using the ComRAD Ground-Based SMAP Simulator
NASA Technical Reports Server (NTRS)
O'Neill, P.; Joseph, A.; Srivastava, P.; Cosh, M.; Lang, R.
2014-01-01
NASA's Soil Moisture Active Passive (SMAP) mission is scheduled for launch in November 2014. In the prelaunch time frame, the SMAP team has focused on improving retrieval algorithms for the various SMAP baseline data products. The SMAP passive-only soil moisture product depends on accurate parameterization of the tau-omega model to achieve the required accuracy in soil moisture retrieval. During a field experiment (APEX12) conducted in the summer of 2012 under dry conditions in Maryland, the Combined Radar/Radiometer (ComRAD) truck-based SMAP simulator collected active/passive microwave time series data at the SMAP incident angle of 40 degrees over corn and soybeans throughout the crop growth cycle. A similar experiment was conducted only over corn in 2002 under normal moist conditions. Data from these two experiments will be analyzed and compared to evaluate how changes in vegetation conditions throughout the growing season in both a drought and normal year can affect parameterizations in the tau-omega model for more accurate soil moisture retrieval.
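The tau-omega model itself is the standard zeroth-order radiative transfer formulation, so its forward computation can be sketched directly (the parameter values below are illustrative, not ComRAD retrievals):

```python
import math

def tau_omega_tb(tau, omega, soil_reflectivity, t_soil, t_canopy,
                 inc_deg=40.0):
    """Zeroth-order tau-omega brightness temperature, one polarization:
    TB = Ts*(1-r)*gamma + Tc*(1-omega)*(1-gamma)*(1 + r*gamma),
    with canopy transmissivity gamma = exp(-tau / cos(theta))."""
    gamma = math.exp(-tau / math.cos(math.radians(inc_deg)))
    r = soil_reflectivity
    return (t_soil * (1.0 - r) * gamma
            + t_canopy * (1.0 - omega) * (1.0 - gamma) * (1.0 + r * gamma))

# Denser vegetation (larger tau) masks the soil emission signal.
tb_bare = tau_omega_tb(tau=0.0, omega=0.05, soil_reflectivity=0.3,
                       t_soil=295.0, t_canopy=295.0)
tb_corn = tau_omega_tb(tau=0.4, omega=0.05, soil_reflectivity=0.3,
                       t_soil=295.0, t_canopy=295.0)
```

Seasonal parameterization amounts to prescribing how tau and omega evolve with crop growth; the retrieval inverts this forward model for the soil reflectivity, and hence soil moisture.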
Orientation of chain molecules in ionotropic gels: a Brownian dynamics model
NASA Astrophysics Data System (ADS)
Woelki, Stefan; Kohler, Hans-Helmut
2003-09-01
As is known from birefringence measurements, polysaccharide molecules of ionotropic gels are preferentially orientated normal to the direction of gel growth. In this paper the orientation effect is investigated by means of an off-lattice Brownian dynamics model simulating the gel formation process. The model describes the integration of a single coarse grained phantom chain into the growing gel. The equations of motion of the chain are derived. The computer simulations show that, during the process of integration, the chain is contracting normal to the direction of gel growth. A scaling relation is obtained for the degree of contraction as a function of the length parameters of the chain, the velocity of the gel formation front and the rate constant of the crosslinking reaction. It is shown that the scaling relation, if applied to the example of ionotropic copper alginate gel, leads to reasonable predictions of the time course of the degree of contraction of the alginate chains.
Ahmed, Aqeel; Rippmann, Friedrich; Barnickel, Gerhard; Gohlke, Holger
2011-07-25
A three-step approach for multiscale modeling of protein conformational changes is presented that incorporates information about preferred directions of protein motions into a geometric simulation algorithm. The first two steps are based on a rigid cluster normal-mode analysis (RCNMA). Low-frequency normal modes are used in the third step (NMSim) to extend the recently introduced idea of constrained geometric simulations of diffusive motions in proteins by biasing backbone motions of the protein, whereas side-chain motions are biased toward favorable rotamer states. The generated structures are iteratively corrected regarding steric clashes and stereochemical constraint violations. The approach allows performing three simulation types: unbiased exploration of conformational space; pathway generation by a targeted simulation; and radius of gyration-guided simulation. When applied to a data set of proteins with experimentally observed conformational changes, conformational variabilities are reproduced very well for 4 out of 5 proteins that show domain motions, with correlation coefficients r > 0.70 and as high as r = 0.92 in the case of adenylate kinase. In 7 out of 8 cases, NMSim simulations starting from unbound structures are able to sample conformations that are similar (root-mean-square deviation = 1.0-3.1 Å) to ligand bound conformations. An NMSim generated pathway of conformational change of adenylate kinase correctly describes the sequence of domain closing. The NMSim approach is a computationally efficient alternative to molecular dynamics simulations for conformational sampling of proteins. The generated conformations and pathways of conformational transitions can serve as input to docking approaches or as starting points for more sophisticated sampling techniques.
[The three-dimensional simulation of arytenoid cartilage movement].
Zhang, Jun; Wang, Xuefeng
2011-08-01
To explore the characteristics of arytenoid cartilage movement. Using Pro/ENGINEER (Pro/E) software, three-dimensional reconstructions of the cricoid cartilage, arytenoid cartilage, and vocal cords were built, and the trajectory of the arytenoid cartilage was analyzed over the joint surface formed by the cricoid and arytenoid cartilages. The 3D animation vividly showed the normal movement patterns of the vocal cords and the characteristics of vocal cord movement in the case of arytenoid cartilage dislocation. The three-dimensional model has clinical significance for arytenoid cartilage movement disorders.
Logit-normal mixed model for Indian Monsoon rainfall extremes
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-03-01
Describing the nature and variability of Indian monsoon rainfall extremes is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Several GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data procured from the National Climatic Data Center. The logit-normal model was applied with fixed covariates of latitude, longitude, elevation, daily minimum and maximum temperatures with a random intercept by weather station. In general, the estimation methods concurred in their suggestion of a relationship between the El Niño Southern Oscillation (ENSO) and extreme rainfall variability estimates. This work provides a valuable starting point for extending GLMM to incorporate the intricate dependencies in extreme climate events.
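Simulating from a logit-normal mixed model mirrors the structure described above: fixed covariate effects plus a normally distributed random intercept per weather station, pushed through a logistic link. A sketch with invented coefficients:

```python
import math
import random

def simulate_station_extremes(n_stations=20, n_days=100, beta0=-2.0,
                              beta_temp=0.04, sigma_u=0.8, seed=7):
    """Logit-normal mixed model: logit(p_ij) = beta0 + beta_temp*T_ij + u_i,
    with u_i ~ N(0, sigma_u^2) a random intercept for station i;
    y_ij ~ Bernoulli(p_ij) indicates an extreme-rainfall day."""
    rng = random.Random(seed)
    data = []
    for i in range(n_stations):
        u = rng.gauss(0.0, sigma_u)            # station-level random effect
        for _ in range(n_days):
            temp = rng.uniform(20.0, 40.0)     # daily max temperature (C)
            eta = beta0 + beta_temp * temp + u
            p = 1.0 / (1.0 + math.exp(-eta))   # logistic link
            data.append((i, temp, int(rng.random() < p)))
    return data

data = simulate_station_extremes()
rate = sum(y for _, _, y in data) / len(data)
```

Simulated datasets like this are exactly what is used to vet the competing GLMM estimation algorithms before they are applied to the real precipitation records.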
NASA Technical Reports Server (NTRS)
Takacs, L. L.; Kalnay, E.; Navon, I. M.
1985-01-01
A normal modes expansion technique is applied to perform high latitude filtering in the GLAS fourth order global shallow water model with orography. The maximum permissible time step in the solution code is controlled by the frequency of the fastest propagating mode, which can be a gravity wave. Numerical methods are defined for filtering the data to identify the number of gravity modes to be included in the computations in order to obtain the appropriate zonal wavenumbers. The performances of the model with and without the filter, and with a time tendency and a prognostic field filter are tested with simulations of the Northern Hemisphere winter. The normal modes expansion technique is shown to leave the Rossby modes intact and permit 3-5 day predictions, a range not possible with the other high-latitude filters.
Numerical and Qualitative Contrasts of Two Statistical Models ...
Two statistical approaches, weighted regression on time, discharge, and season and generalized additive models, have recently been used to evaluate water quality trends in estuaries. Both models have been used in similar contexts despite differences in statistical foundations and products. This study provided an empirical and qualitative comparison of both models using 29 years of data for two discrete time series of chlorophyll-a (chl-a) in the Patuxent River estuary. Empirical descriptions of each model were based on predictive performance against the observed data, ability to reproduce flow-normalized trends with simulated data, and comparisons of performance with validation datasets. Between-model differences were apparent but minor and both models had comparable abilities to remove flow effects from simulated time series. Both models similarly predicted observations for missing data with different characteristics. Trends from each model revealed distinct mainstem influences of the Chesapeake Bay with both models predicting a roughly 65% increase in chl-a over time in the lower estuary, whereas flow-normalized predictions for the upper estuary showed a more dynamic pattern, with a nearly 100% increase in chl-a in the last 10 years. Qualitative comparisons highlighted important differences in the statistical structure, available products, and characteristics of the data and desired analysis. This manuscript describes a quantitative comparison of two recently-
Geravanchizadeh, Masoud; Fallah, Ali
2015-12-01
A binaural and psychoacoustically motivated intelligibility model, based on a well-known monaural microscopic model is proposed. This model simulates a phoneme recognition task in the presence of spatially distributed speech-shaped noise in anechoic scenarios. In the proposed model, binaural advantage effects are considered by generating a feature vector for a dynamic-time-warping speech recognizer. This vector consists of three subvectors incorporating two monaural subvectors to model the better-ear hearing, and a binaural subvector to simulate the binaural unmasking effect. The binaural unit of the model is based on equalization-cancellation theory. This model operates blindly, which means separate recordings of speech and noise are not required for the predictions. Speech intelligibility tests were conducted with 12 normal hearing listeners by collecting speech reception thresholds (SRTs) in the presence of single and multiple sources of speech-shaped noise. The comparison of the model predictions with the measured binaural SRTs, and with the predictions of a macroscopic binaural model called extended equalization-cancellation, shows that this approach predicts the intelligibility in anechoic scenarios with good precision. The square of the correlation coefficient (r(2)) and the mean-absolute error between the model predictions and the measurements are 0.98 and 0.62 dB, respectively.
NASA Astrophysics Data System (ADS)
Kamal Chowdhury, AFM; Lockart, Natalie; Willgoose, Garry; Kuczera, George; Kiem, Anthony; Parana Manage, Nadeeka
2016-04-01
Stochastic simulation of rainfall is often required in the simulation of streamflow and reservoir levels for water security assessment. As reservoir water levels generally vary on monthly to multi-year timescales, it is important that these rainfall series accurately simulate the multi-year variability. However, the underestimation of multi-year variability is a well-known issue in daily rainfall simulation. Focusing on this issue, we developed a hierarchical Markov Chain (MC) model in a traditional two-part MC-Gamma Distribution modelling structure, but with a new parameterization technique. We used two parameters of first-order MC process (transition probabilities of wet-to-wet and dry-to-dry days) to simulate the wet and dry days, and two parameters of Gamma distribution (mean and standard deviation of wet day rainfall) to simulate wet day rainfall depths. We found that use of deterministic Gamma parameter values results in underestimation of multi-year variability of rainfall depths. Therefore, we calculated the Gamma parameters for each month of each year from the observed data. Then, for each month, we fitted a multi-variate normal distribution to the calculated Gamma parameter values. In the model, we stochastically sampled these two Gamma parameters from the multi-variate normal distribution for each month of each year and used them to generate rainfall depth in wet days using the Gamma distribution. In another study, Mehrotra and Sharma (2007) proposed a semi-parametric Markov model. They also used a first-order MC process for rainfall occurrence simulation. But, the MC parameters were modified by using an additional factor to incorporate the multi-year variability. Generally, the additional factor is analytically derived from the rainfall over a pre-specified past periods (e.g. last 30, 180, or 360 days). They used a non-parametric kernel density process to simulate the wet day rainfall depths. 
In this study, we have compared the performance of our hierarchical MC model with the semi-parametric model in preserving rainfall variability at daily, monthly, and multi-year scales. To calibrate the parameters of both models and assess their ability to preserve observed statistics, we have used ground-based data from 15 raingauge stations around Australia, which cover a wide range of climate zones, including coastal, monsoonal, and arid climate characteristics. In preliminary results, both models show comparable performance in preserving the multi-year variability of rainfall depth and occurrence. However, the semi-parametric model shows a tendency to overestimate the mean rainfall depth, while our model shows a tendency to overestimate the number of wet days. We will discuss further the relative merits of both models for hydrological simulation in the presentation.
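The two-part structure both models share, a first-order Markov chain for occurrence plus Gamma-distributed wet-day depths, can be sketched as follows; all numeric parameters are placeholders rather than calibrated Australian values:

```python
import random

def simulate_month(p_ww, p_dd, mean_mm, sd_mm, n_days=30, rng=random):
    """Two-part daily rainfall model: a first-order Markov chain for
    wet/dry occurrence (transition probabilities wet->wet and dry->dry),
    and Gamma-distributed depths on wet days with the given mean/sd."""
    # Gamma parameterized by mean m and sd s: shape = (m/s)^2, scale = s^2/m
    shape = (mean_mm / sd_mm) ** 2
    scale = sd_mm ** 2 / mean_mm
    wet = False
    series = []
    for _ in range(n_days):
        p_wet = p_ww if wet else 1.0 - p_dd
        wet = rng.random() < p_wet
        series.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return series

rng = random.Random(3)
# In the hierarchical variant, (mean_mm, sd_mm) would themselves be drawn
# from a fitted multivariate normal for each month of each year.
jan = simulate_month(p_ww=0.6, p_dd=0.8, mean_mm=8.0, sd_mm=6.0, rng=rng)
```

Resampling the Gamma mean and standard deviation month by month, instead of fixing them, is what injects the extra multi-year variability that deterministic parameter values underestimate.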
Visual Predictive Check in Models with Time-Varying Input Function.
Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio
2015-11-01
The nonlinear mixed effects models are commonly used modeling techniques in pharmaceutical research, as they enable the characterization of individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools that can evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that helps the user check by visual inspection whether the model is able to reproduce the variability and the main trend of the observed data. However, the simulation from the model is not always trivial, for example, when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause an incorrect profile simulation. We introduce a refinement of the VPC by taking into consideration a distance term (the Mahalanobis or normalized Euclidean distance) that helps associate the correct IF with the individual set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC performs better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
Rollable nano-etched diffractive low-concentration PV sheets for small satellites
NASA Astrophysics Data System (ADS)
Brac-de-la-Perriere, Vincent; Kress, Bernard; Ben-Menahem, Shahar; Ishihara, Abraham K.; Dorais, Greg
2014-09-01
This paper discusses novel, rollable, mass-fabricable, low-concentration photovoltaic sheets for CubeSats, providing them with efficient photoelectric conversion of sunlight and secondary diffuse light. The wrap consists of three thin (of order a millimeter or less), cheap plastic-sheet layers, which can be rolled together in a spiral wrapping configuration when stowed. Preliminary simulations based on the above modeling approaches show that the designs (a) achieve comparable photovoltaic power (area for area) and (b) result in a flat angular response curve which remains flat from normal incidence out to over 35 degrees off normal. The simulations were performed using a ray-tracing simulator built in MATLAB. In addition, we have constructed a demonstrator using quartz wafers based on the optimized design to show the technology. Details of its fabrication are also provided.
Turbulence Hazard Metric Based on Peak Accelerations for Jetliner Passengers
NASA Technical Reports Server (NTRS)
Stewart, Eric C.
2005-01-01
Calculations are made of the approximate hazard due to peak normal accelerations of an airplane flying through a simulated vertical wind field associated with a convective frontal system. The calculations are based on a hazard metric developed from a systematic application of a generic math model to 1-cosine discrete gusts of various amplitudes and gust lengths. The math model simulates the three-degree-of-freedom longitudinal rigid body motion to vertical gusts and includes (1) fuselage flexibility, (2) the lag in the downwash from the wing to the tail, (3) gradual lift effects, (4) a simplified autopilot, and (5) motion of an unrestrained passenger in the rear cabin. Airplane and passenger response contours are calculated for a matrix of gust amplitudes and gust lengths. The airplane response contours are used to develop an approximate hazard metric of peak normal accelerations as a function of gust amplitude and gust length. The hazard metric is then applied to a two-dimensional simulated vertical wind field of a convective frontal system. The variations of the hazard metric with gust length and airplane heading are demonstrated.
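The 1-cosine discrete gust underlying the hazard metric has a standard closed form, w(x) = (A/2)(1 - cos(2*pi*x/L)) over one gust length L; a sketch:

```python
import math

def one_minus_cosine_gust(x, amplitude, gust_length):
    """Vertical gust velocity w(x) = (A/2) * (1 - cos(2*pi*x/L)) for
    0 <= x <= L, zero outside: ramps smoothly from 0 to the peak
    amplitude A at mid-gust and back to 0."""
    if x < 0.0 or x > gust_length:
        return 0.0
    return 0.5 * amplitude * (1.0 - math.cos(2.0 * math.pi * x / gust_length))

# Example: a 10 m/s peak gust over a 200 m gust length, sampled every 10 m.
profile = [one_minus_cosine_gust(x, 10.0, 200.0) for x in range(0, 201, 10)]
```

Sweeping the amplitude and gust length of this profile and recording the peak normal-acceleration response is what produces the response contours described above.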
Mechanical model for simulating the conditioning of air in the respiratory tract.
Bergonse Neto, Nelson; Von Bahten, Luiz Carlos; Moura, Luís Mauro; Coelho, Marlos de Souza; Stori Junior, Wilson de Souza; Bergonse, Gilberto da Fontoura Rey
2007-01-01
To create a mechanical model that could be regulated to simulate the conditioning of inspired and expired air with the same normal values of temperature, pressure, and relative humidity as those of the respiratory system of a healthy young man on mechanical ventilation. Using several types of materials, a mechanical device was built and regulated using normal values of vital capacity, tidal volume, maximal inspiratory pressure, positive end-expiratory pressure, and gas temperature in the system. The device was submitted to mechanical ventilation for a period of 29.8 min. The changes in the temperature of the air circulating in the system were recorded every two seconds. The statistical analysis of the data collected revealed that the device was approximately as efficient in the conditioning of air as is the respiratory system of a human being. By the study endpoint, we had developed a mechanical device capable of simulating the conditioning of air in the respiratory tract. The device mimics the conditions of temperature, pressure, and relative humidity seen in the respiratory system of healthy individuals.
Log-Normal Turbulence Dissipation in Global Ocean Models
NASA Astrophysics Data System (ADS)
Pearson, Brodie; Fox-Kemper, Baylor
2018-03-01
Data from turbulent numerical simulations of the global ocean demonstrate that the dissipation of kinetic energy obeys a nearly log-normal distribution even at large horizontal scales O(10 km). As the horizontal scales of resolved turbulence are larger than the ocean is deep, the Kolmogorov-Yaglom theory for intermittency in 3D homogeneous, isotropic turbulence cannot apply; instead, the down-scale potential enstrophy cascade of quasigeostrophic turbulence should. Yet, energy dissipation obeys approximate log-normality, robustly across depths, seasons, regions, and subgrid schemes. The distribution parameters, skewness and kurtosis, show small systematic departures from log-normality with depth and subgrid friction schemes. Log-normality suggests that a few high-dissipation locations dominate the integrated energy and enstrophy budgets, which should be taken into account when making inferences from simplified models and inferring global energy budgets from sparse observations.
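Approximate log-normality can be tested exactly as described: take logs of the dissipation samples and compare their skewness and kurtosis to the Gaussian values of 0 and 3. A sketch on synthetic log-normal 'dissipation' data (the distribution parameters are invented, not ocean-model output):

```python
import math
import random

def moments_of_log(samples):
    """Skewness and kurtosis of log(samples); for perfectly log-normal
    data these should be approximately 0 and 3, respectively."""
    logs = [math.log(s) for s in samples]
    n = len(logs)
    mean = sum(logs) / n
    var = sum((v - mean) ** 2 for v in logs) / n
    skew = sum((v - mean) ** 3 for v in logs) / n / var ** 1.5
    kurt = sum((v - mean) ** 4 for v in logs) / n / var ** 2
    return skew, kurt

rng = random.Random(11)
# Synthetic 'dissipation' samples spanning several decades in magnitude.
eps = [rng.lognormvariate(-9.0, 2.0) for _ in range(5000)]
skew, kurt = moments_of_log(eps)
```

Systematic departures of these two moments from (0, 3) with depth or subgrid scheme are the diagnostic the abstract reports.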
Model-Based Normalization of a Fractional-Crystal Collimator for Small-Animal PET Imaging
Li, Yusheng; Matej, Samuel; Karp, Joel S.; Metzler, Scott D.
2017-01-01
Previously, we proposed to use a coincidence collimator to achieve fractional-crystal resolution in PET imaging. We have designed and fabricated a collimator prototype for a small-animal PET scanner, A-PET. To compensate for imperfections in the fabricated collimator prototype, collimator normalization, as well as scanner normalization, is required to reconstruct quantitative and artifact-free images. In this study, we develop a normalization method for the collimator prototype based on the A-PET normalization using a uniform cylinder phantom. We performed data acquisition without the collimator for scanner normalization first, and then with the collimator from eight different rotation views for collimator normalization. After a reconstruction without correction, we extracted the cylinder parameters from which we generated expected emission sinograms. Single scatter simulation was used to generate the scattered sinograms. We used the least-squares method to generate the normalization coefficient for each LOR based on measured, expected, and scattered sinograms. The scanner and collimator normalization coefficients were factorized by performing two normalizations separately. The normalization methods were also verified using experimental data acquired from A-PET with and without the collimator. In summary, we developed a model-based collimator normalization that can significantly reduce variance and produce collimator normalization with adequate statistical quality within feasible scan time. PMID:29270539
Normal mode analysis of the IUS/TDRS payload in a payload canister/transporter environment
NASA Technical Reports Server (NTRS)
Meyer, K. A.
1980-01-01
Special modeling techniques were developed to simulate an accurate mathematical model of the transporter/canister/payload system during ground transport of the Inertial Upper Stage/Tracking and Data Relay Satellite (IUS/TDRS) payload. The three finite element models - the transporter, the canister, and the IUS/TDRS payload - were merged into one model and used along with the NASTRAN normal mode analysis. Deficiencies were found in the NASTRAN program that make a total analysis using modal transient response impractical. It was also discovered that inaccuracies may exist for NASTRAN rigid body modes on large models when Given's method for eigenvalue extraction is employed. The deficiencies as well as recommendations for improving the NASTRAN program are discussed.
2009-05-01
… was measured on Mylar cards through fluorometric analysis. Plant health measures, height and normalized difference vegetation index (NDVI), were … plant health data were used to generate dose-response relationships. Dose-response curves relating change in plant height and change in measured NDVI … Held Sensor Model 505, NTech Industries, Inc., Ukiah, California, to measure the normalized difference vegetation index (NDVI), which is directly …
Octree-based Global Earthquake Simulations
NASA Astrophysics Data System (ADS)
Ramirez-Guzman, L.; Juarez, A.; Bielak, J.; Salazar Monroy, E. F.
2017-12-01
Seismological research has motivated recent efforts to construct more accurate three-dimensional (3D) velocity models of the Earth, to perform global simulations of wave propagation to validate the models, and to study the interaction of seismic fields with 3D structures. However, traditional methods for seismogram computation at global scales are limited by computational resources, relying primarily on normal mode summation or two-dimensional numerical methods. We present an octree-based-mesh finite element implementation to perform global earthquake simulations with 3D models, using topography and bathymetry with a staircase approximation, as modeled by the Carnegie Mellon Finite Element Toolchain Hercules (Tu et al., 2006). To verify the implementation, we compared synthetic seismograms computed in a spherical earth against waveforms calculated using normal mode summation for the Preliminary Reference Earth Model (PREM), for a point source representation of the 2014 Mw 7.3 Papanoa, Mexico earthquake. We considered a 3 km-thick ocean layer for stations with predominantly oceanic paths. Eigenfrequencies and eigenfunctions were computed for toroidal, radial, and spheroidal oscillations in the first 20 branches. Simulations are valid at frequencies up to 0.05 Hz. The match between the waveforms computed by the two approaches, especially for long-period surface waves, is excellent. Additionally, we modeled the Mw 9.0 Tohoku-Oki earthquake using the USGS finite fault inversion. Topography and bathymetry from ETOPO1 are included in a mesh with more than 3 billion elements, constrained by the available computational resources. We compared estimated velocity and GPS synthetics against observations at regional and teleseismic stations of the Global Seismographic Network and discuss the differences between observations and synthetics, revealing that heterogeneity, particularly in the crust, needs to be considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, Chantell Lynne-Marie
Traditional nuclear materials accounting does not work well for safeguards when applied to pyroprocessing. Alternate methods such as Signature Based Safeguards (SBS) are being investigated. The goal of SBS is real-time/near-real-time detection of anomalous events in the pyroprocessing facility, as they could indicate loss of special nuclear material. In high-throughput reprocessing facilities, metric tons of separated material are processed that must be accounted for. Even with very low uncertainties of accountancy measurements (<0.1%), the uncertainty of the material balances is still greater than the desired level. Novel contributions of this work are as follows: (1) significant enhancement of SBS development for the salt cleanup process by creating a new gas sparging process model, selecting sensors to monitor normal operation, identifying safeguards-significant off-normal scenarios, and simulating those off-normal events and generating sensor output; (2) further enhancement of SBS development for the electrorefiner by simulating off-normal events caused by changes in salt concentration and identifying which conditions lead to Pu and Cm not tracking throughout the rest of the system; and (3) a new contribution in applying statistical techniques to analyze the signatures gained from these two models to help draw real-time conclusions on anomalous events.
Predicting durations of online collective actions based on Peaks' heights
NASA Astrophysics Data System (ADS)
Lu, Peng; Nie, Shizhao; Wang, Zheng; Jing, Ziwei; Yang, Jianwu; Qi, Zhongxiang; Pujia, Wangmo
2018-02-01
Capturing the whole process of collective actions, the peak model contains four stages: Prepare, Outbreak, Peak, and Vanish. Based on the peak model, this paper further investigates one of its key variables, the ratio between peak heights and spans. Although the durations (spans) and peak heights are highly diversified, the ratio between them appears quite stable. If the regularity of this ratio is discovered, we can predict how long a collective action will last, and when it will end, from its peak height. In this work, we combined mathematical simulations with empirical big data from 148 cases to explore the distribution of the ratio. The simulation results indicate that the ratio has some distributional regularities, and that its distribution is not normal. The big data were collected from 148 online collective actions whose whole participation processes were recorded. The empirical outcomes indicate that the ratio is closer to being log-normally distributed. This rule holds true both for the total sample and for subgroups of the 148 online collective actions. A Q-Q plot is applied to check the normality of the ratio's logarithm, and the logarithm does follow the normal distribution.
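A minimal version of the Q-Q-plot check can be sketched as a correlation between the ordered log-ratios and standard normal quantiles: values near 1 support log-normality. The sample values below are synthetic, not the 148-case data:

```python
import math
import random
from statistics import NormalDist

random.seed(42)
# Hypothetical peak-height/span ratios drawn from a log-normal distribution.
ratios = [math.exp(random.gauss(-1.0, 0.5)) for _ in range(2000)]
logs = sorted(math.log(r) for r in ratios)

# Q-Q check: correlate ordered log-ratios against standard normal quantiles.
n = len(logs)
q = [NormalDist().inv_cdf((i + 0.5) / n) for i in range(n)]
mean_l, mean_q = sum(logs) / n, sum(q) / n
cov = sum((l - mean_l) * (x - mean_q) for l, x in zip(logs, q))
sd_l = math.sqrt(sum((l - mean_l) ** 2 for l in logs))
sd_q = math.sqrt(sum((x - mean_q) ** 2 for x in q))
r_qq = cov / (sd_l * sd_q)
print(f"Q-Q correlation of log-ratio vs normal quantiles: {r_qq:.4f}")
# A correlation near 1 indicates the ratio is close to log-normal;
# applying the same check to the raw ratios would give a lower value.
```

Once log-normality is accepted, the expected duration given an observed peak height follows from the fitted mean and variance of the log-ratio.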
SIMULATING THE 'SLIDING DOORS' EFFECT THROUGH MAGNETIC FLUX EMERGENCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacTaggart, David; Hood, Alan W., E-mail: dm428@st-andrews.ac.u
2010-06-20
Recent Hinode photospheric vector magnetogram observations have shown that the opposite polarities of a long arcade structure move apart and then come together. In addition to this 'sliding doors' effect, the orientations of horizontal magnetic fields along the polarity inversion line on the photosphere evolve from a normal-polarity configuration to an inverse one. To explain this behavior, a simple model by Okamoto et al. suggested that it is the result of the emergence of a twisted flux rope. Here, we model this scenario using a three-dimensional magnetohydrodynamic simulation of a twisted flux rope emerging into a pre-existing overlying arcade. We construct magnetograms from the simulation and compare them with the observations. The model produces the two signatures mentioned above. However, the cause of the 'sliding doors' effect differs from the previous model.
Corrected goodness-of-fit test in covariance structure analysis.
Hayakawa, Kazuhiko
2018-05-17
Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
User's Guide to the Stand Prognosis Model
William R. Wykoff; Nicholas L. Crookston; Albert R. Stage
1982-01-01
The Stand Prognosis Model is a computer program that projects the development of forest stands in the Northern Rocky Mountains. Thinning options allow for simulation of a variety of management strategies. Input consists of a stand inventory, including sample tree records, and a set of option selection instructions. Output includes data normally found in stand, stock,...
Modeling initial contact dynamics during ambulation with dynamic simulation.
Meyer, Andrew R; Wang, Mei; Smith, Peter A; Harris, Gerald F
2007-04-01
Ankle-foot orthoses are frequently used interventions to correct pathological gait. Their effects on the kinematics and kinetics of the proximal joints are of great interest when prescribing ankle-foot orthoses to specific patient groups. The Mathematical Dynamic Model (MADYMO) was developed to simulate motor vehicle crash situations and analyze tissue injuries of the occupants based on multibody dynamics theories. Joint kinetics output from an inverse model were perturbed and input to the forward model to examine the effects of changes in the internal sagittal ankle moment on knee and hip kinematics following heel strike. Increasing the internal ankle moment (augmentation, equivalent to gastroc-soleus contraction) produced less pronounced changes in kinematic results at the hip, knee, and ankle than decreasing the moment (attenuation, equivalent to gastroc-soleus relaxation). Altering the internal ankle moment produced two distinctly different kinematic curve morphologies at the hip. Decreased internal ankle moments increased hip flexion, peaking at roughly 8% of the gait cycle. Increasing internal ankle moments decreased hip flexion to a lesser degree, approaching normal at the same point in the gait cycle. Increasing the internal ankle moment produced relatively small, well-behaved extension-biased kinematic results at the knee. Decreasing the internal ankle moment produced more substantial changes in knee kinematics towards flexion that increased with perturbation magnitude. Curve morphologies were similar to those at the hip. Immediately following heel strike, kinematic results at the ankle showed movement in the direction of the internal moment perturbation. Increased internal moments resulted in kinematic patterns that rapidly approached normal after initial differences. When the internal ankle moment was decreased, differences from normal were much greater and did not rapidly decrease.
This study shows that MADYMO can be successfully applied to accomplish forward dynamic simulations, given kinetic inputs. Future applications include predicting muscle forces and decomposing external kinetics.
The importance of mechano-electrical feedback and inertia in cardiac electromechanics.
Costabal, Francisco Sahli; Concha, Felipe A; Hurtado, Daniel E; Kuhl, Ellen
2017-06-15
In the past years, a number of cardiac electromechanics models have been developed to better understand the excitation-contraction behavior of the heart. However, there is no agreement on whether inertial forces play a role in this system. In this study, we assess the influence of mass in electromechanical simulations, using a fully coupled finite element model. We include the effect of mechano-electrical feedback via stretch-activated currents. We compare five different models: electrophysiology, electromechanics, electromechanics with mechano-electrical feedback, electromechanics with mass, and electromechanics with mass and mechano-electrical feedback. We simulate normal conduction to study conduction velocity and spiral waves to study fibrillation. During normal conduction, mass in conjunction with mechano-electrical feedback increased the conduction velocity by 8.12% in comparison to the plain electrophysiology case. During the generation of a spiral wave, mass and mechano-electrical feedback generated secondary wavefronts, which were not present in any other model. These secondary wavefronts were initiated in tensile stretch regions that induced electrical currents. We expect that this study will help the research community to better understand the importance of mechano-electrical feedback and inertia in cardiac electromechanics.
NASA Astrophysics Data System (ADS)
Liu, Y.; Weisberg, R. H.
2017-12-01
The Lagrangian separation distance between the endpoints of simulated and observed drifter trajectories is often used to assess the performance of numerical particle trajectory models. However, the separation distance fails to indicate relative model performance in weak and strong current regions, such as a continental shelf and its adjacent deep ocean. A skill score is proposed based on the cumulative Lagrangian separation distances normalized by the associated cumulative trajectory lengths. The new metric correctly indicates the relative performance of the Global HYCOM in simulating the strong currents of the Gulf of Mexico Loop Current and the weaker currents of the West Florida Shelf in the eastern Gulf of Mexico. In contrast, the Lagrangian separation distance alone gives a misleading result. Also, the observed drifter position series can be used to reinitialize the trajectory model and evaluate its performance along the observed trajectory, not just at the drifter end position. The proposed dimensionless skill score is particularly useful when the number of drifter trajectories is limited and neither a conventional Eulerian-based velocity nor a Lagrangian-based probability density function can be estimated.
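A skill score built from cumulative separation distances normalized by cumulative trajectory lengths can be sketched as below. The exact averaging and the tolerance threshold are assumptions of this sketch, not necessarily the authors' definition:

```python
def skill_score(sep, seg_len, tolerance=1.0):
    """Trajectory skill score from cumulative separation distances
    normalized by cumulative observed trajectory lengths.

    `sep`: separation between model and observed positions at each step.
    `seg_len`: length of each observed trajectory segment.
    `tolerance`: assumed non-dimensional threshold above which skill is 0.
    """
    cum_sep, cum_len = [], []
    s_tot = l_tot = 0.0
    for d, l in zip(sep, seg_len):
        s_tot += d
        l_tot += l
        cum_sep.append(s_tot)
        cum_len.append(l_tot)
    # Mean normalized cumulative separation along the trajectory.
    c = sum(s / l for s, l in zip(cum_sep, cum_len)) / len(cum_sep)
    return max(0.0, 1.0 - c / tolerance)

# Hypothetical drifter: 10 km observed segments, growing model separation.
seg_len = [10.0] * 5
sep_good = [1.0, 2.0, 3.0, 4.0, 5.0]          # small separations: high skill
sep_poor = [20.0, 40.0, 60.0, 80.0, 100.0]    # separation outgrows the track

print(f"good model skill: {skill_score(sep_good, seg_len):.3f}")  # → 0.800
print(f"poor model skill: {skill_score(sep_poor, seg_len):.3f}")  # → 0.000
```

Because the separation is divided by the track length, a 5 km miss on a fast 100 km track scores better than a 5 km miss on a slow 10 km track, which is exactly the shelf-versus-deep-ocean distinction the endpoint distance misses.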
Biomechanical simulation of thorax deformation using finite element approach.
Zhang, Guangzhi; Chen, Xian; Ohgi, Junji; Miura, Toshiro; Nakamoto, Akira; Matsumura, Chikanori; Sugiura, Seiryo; Hisada, Toshiaki
2016-02-06
The biomechanical simulation of the human respiratory system is expected to be a useful tool for the diagnosis and treatment of respiratory diseases. Because the deformation of the thorax significantly influences airflow in the lungs, we focused on simulating the thorax deformation by introducing contraction of the intercostal muscles and diaphragm, which are the main muscles responsible for the thorax deformation during breathing. We constructed a finite element model of the thorax, including the rib cage, intercostal muscles, and diaphragm. To reproduce the muscle contractions, we introduced the Hill-type transversely isotropic hyperelastic continuum skeletal muscle model, which allows the intercostal muscles and diaphragm to contract along the direction of the fibres with clinically measurable muscle activation and active force-length relationship. The anatomical fibre orientations of the intercostal muscles and diaphragm were introduced. Thorax deformation consists of movements of the ribs and diaphragm. By activating muscles, we were able to reproduce the pump-handle and bucket-handle motions for the ribs and the clinically observed motion for the diaphragm. In order to confirm the effectiveness of this approach, we simulated the thorax deformation during normal quiet breathing and compared the results with four-dimensional computed tomography (4D-CT) images for verification. Thorax deformation can be simulated by modelling the respiratory muscles according to continuum mechanics and by introducing muscle contractions. The reproduction of representative motions of the ribs and diaphragm and the comparison of the thorax deformations during normal quiet breathing with 4D-CT images demonstrated the effectiveness of the proposed approach. This work may provide a platform for establishing a computational mechanics model of the human respiratory system.
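The active part of a Hill-type force-length relation can be sketched as a bell-shaped curve scaled by activation. The Gaussian form and parameter values below are a common textbook choice, not the exact constitutive law used in the thorax model:

```python
import math

def active_force(stretch, activation, f_max=1.0, l_opt=1.0, w=0.4):
    """Hill-type active fibre force (a generic sketch, not the authors'
    formulation): bell-shaped force-length relation scaled by activation.

    `stretch`: fibre stretch ratio; `l_opt`: optimal stretch; `w`: width.
    """
    f_l = math.exp(-(((stretch - l_opt) / w) ** 2))  # peaks at optimal length
    return f_max * activation * f_l

# Force peaks at the optimal fibre stretch and falls off on either side,
# which is the active force-length behaviour the abstract refers to.
for lam in (0.8, 1.0, 1.2):
    print(f"stretch {lam:.1f}: force {active_force(lam, activation=0.5):.3f}")
```

In a continuum implementation this scalar fibre force is applied along the anatomical fibre direction of each intercostal muscle and diaphragm element, driven by the measured activation time course.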
2013-07-09
… through a potential energy surface (PES), such as the simple Lennard-Jones (LJ) PES [23] shown in the inset of Fig. 3, which is given by the following … a normal shock wave. Inset shows a simple Lennard-Jones (LJ) potential energy surface (PES) dictating … model input into such simulations is the potential energy surface (PES) that governs individual atomic interaction forces, developed by chemists and …
NASA Astrophysics Data System (ADS)
Gholampour, S.; Fatouraee, N.; Seddighi, A. S.; Seddighi, A.
2017-05-01
Three-dimensional computational models of cerebrospinal fluid (CSF) flow and brain tissue are presented for evaluating hydrodynamic conditions before and after shunting in seven patients with non-communicating hydrocephalus. One healthy subject is also modeled so that the patients' deviated data can be compared with normal conditions. The fluid-solid interaction simulation shows that the CSF mean pressure and the pressure amplitude (the superior index for evaluating non-communicating hydrocephalus) in patients exceed those of the healthy subject by factors of 5.3 and 2, respectively.
NASA Astrophysics Data System (ADS)
McAfee, S. A.; DeLaFrance, A.
2017-12-01
Investigating the impacts of climate change often entails using projections from inherently imperfect general circulation models (GCMs) to drive models that simulate biophysical or societal systems in great detail. Error or bias in the GCM output is often assessed in relation to observations, and the projections are adjusted so that the output from impacts models can be compared to historical or observed conditions. Uncertainty in the projections is typically accommodated by running more than one future climate trajectory to account for differing emissions scenarios, model simulations, and natural variability. The current methods for dealing with error and uncertainty treat them as separate problems. In places where observed and/or simulated natural variability is large, however, it may not be possible to identify a consistent degree of bias in mean climate, blurring the lines between model error and projection uncertainty. Here we demonstrate substantial instability in mean monthly temperature bias across a suite of GCMs used in CMIP5. This instability is greatest in the highest latitudes during the cool season, where shifts from average temperatures below to above freezing could have profound impacts. In models with the greatest degree of bias instability, the timing of regional shifts from below to above average normal temperatures in a single climate projection can vary by about three decades, depending solely on the degree of bias assessed. This suggests that current bias correction methods based on comparison to 20- or 30-year normals may be inappropriate, particularly in the polar regions.
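The instability of a mean-bias estimate under large natural variability can be illustrated with synthetic data: a model that is uniformly 2 K too warm still yields noticeably different 30-year bias estimates depending on which window of "normals" is used. All numbers here are illustrative assumptions:

```python
import random

random.seed(7)
# Hypothetical monthly-mean temperatures over 90 "years": the model runs
# warm by a fixed 2 K offset, but interannual variability is large (4 K).
obs = [random.gauss(-5.0, 4.0) for _ in range(90)]
mod = [t + 2.0 + random.gauss(0.0, 4.0) for t in obs]

def bias(model, observed, start, length=30):
    """Bias as the difference of `length`-year means starting at `start`."""
    window = slice(start, start + length)
    return (sum(model[window]) - sum(observed[window])) / length

estimates = [bias(mod, obs, s) for s in (0, 30, 60)]
print("30-year bias estimates:", [f"{b:+.2f} K" for b in estimates])
# The spread among the three windows is the bias instability: a correction
# tuned to one 30-year normal period would misplace, e.g., the timing of a
# below-to-above-freezing shift when applied to another period.
```

The larger the internal variability relative to the true offset, the wider this spread, which is why the effect is worst in high-latitude cool seasons.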
NASA Astrophysics Data System (ADS)
Jiang, Zhou; Xia, Zhenhua; Shi, Yipeng; Chen, Shiyi
2018-04-01
A fully developed spanwise rotating turbulent channel flow has been numerically investigated utilizing large-eddy simulation. Our focus is to assess the performances of the dynamic variants of eddy viscosity models, including dynamic Vreman's model (DVM), dynamic wall adapting local eddy viscosity (DWALE) model, dynamic σ (Dσ ) model, and the dynamic volumetric strain-stretching (DVSS) model, in this canonical flow. The results with dynamic Smagorinsky model (DSM) and direct numerical simulations (DNS) are used as references. Our results show that the DVM has a wrong asymptotic behavior in the near wall region, while the other three models can correctly predict it. In the high rotation case, the DWALE can get reliable mean velocity profile, but the turbulence intensities in the wall-normal and spanwise directions show clear deviations from DNS data. DVSS exhibits poor predictions on both the mean velocity profile and turbulence intensities. In all three cases, Dσ performs the best.
NASA Astrophysics Data System (ADS)
Taghavy, Amir; Kim, Ijung; Huh, Chun; DiCarlo, David A.
2018-06-01
A variable-viscosity colloid transport simulator is developed to model the mobility behavior of surface-engineered nanosilica aggregates (nSiO2) under high-salinity conditions. A two-site (2S) filtration approach was incorporated to account for heterogeneous particle-collector surface interactions. The 2S model was then implemented along with the conventional clean bed filtration (CFT) and maximum retention capacity (MRC) particle filtration models to simulate the results of a series of column tests conducted in brine (8% wt. NaCl and 2% wt. CaCl2)-saturated Ottawa sand columns at various pore velocities (7 to 71 m/day). Simulation results reveal the superiority of the MRC and 2S model classes over the CFT model with respect to numerical performance criteria; a general decrease of the normalized sum of squared residuals (ca. 20-90% reduction) and an enhanced degree of normality of model residuals were detected for 2S and MRC over CFT in all simulated experiments. Based on our findings, conformance with theories underpinning colloid deposition in porous media was the ultimate factor that set the 2S and MRC model classes apart in terms of explaining the observed mobility trends. The MRC and 2S models were evaluated based on the scaling of the fitted maximum retention capacity parameter with variation of experimental conditions. Two subclasses of 2S that consider a mix of favorable and unfavorable attachment sites with irreversible attachment to favorable sites (with and without physical straining effects) were found most consistent with filtration theory and shadow-zone predictions, yielding theoretical conformity indices of 0.6 and higher, the highest among all implemented models. An explanation for such irreversible favorable deposition sites on the surface of silica nanoaggregates might be a partial depletion of the stabilizing steric forces that had led to the formation of these aggregates.
Vaez‐zadeh, Mehdi; Masoudi, S. Farhad; Rahmani, Faezeh; Knaup, Courtney; Meigooni, Ali S.
2015-01-01
The effects of gold nanoparticles (GNPs) on 125I brachytherapy dose enhancement in choroidal melanoma are examined using the Monte Carlo simulation technique. Usually, Monte Carlo ophthalmic brachytherapy dosimetry is performed in a water phantom. Here, however, the composition of the human eye has been considered instead of water. Both human eye and water phantoms have been simulated with the MCNP5 code. These simulations were performed for a fully loaded 16 mm COMS eye plaque containing 13 125I seeds. The dose delivered to the tumor and normal tissues has been calculated in both phantoms with and without GNPs. Normally, the radiation therapy of cancer patients is designed to deliver a required dose to the tumor while sparing the surrounding normal tissues. However, because normal and cancerous cells absorb dose in an almost identical fashion, the normal tissue absorbs radiation dose during the treatment time. The use of GNPs in combination with radiotherapy in the treatment of the tumor decreases the dose absorbed by normal tissues. The results indicate that the dose to the tumor in an eyeball implanted with a COMS plaque increases with increasing GNP concentration inside the target. Therefore, the required irradiation time for tumors in the eye is decreased by adding the GNPs prior to treatment. As a result, the dose to normal tissues decreases when the irradiation time is reduced. Furthermore, a comparison between the simulated data in an eye phantom made of water and an eye phantom made of human eye composition, in the presence of GNPs, shows the significance of utilizing the composition of the eye in ophthalmic brachytherapy dosimetry. Also, defining the eye composition instead of water leads to more accurate calculations of GNP radiation effects in ophthalmic brachytherapy dosimetry. PACS numbers: 87.53.Jw, 87.85.Rs, 87.10.Rt PMID:26699318
Rise time of proton cut-off energy in 2D and 3D PIC simulations
NASA Astrophysics Data System (ADS)
Babaei, J.; Gizzi, L. A.; Londrillo, P.; Mirzanejad, S.; Rovelli, T.; Sinigardi, S.; Turchetti, G.
2017-04-01
The Target Normal Sheath Acceleration regime for proton acceleration by laser pulses is experimentally consolidated and fairly well understood. However, uncertainties remain in the analysis of particle-in-cell simulation results. The energy spectrum is exponential with a cut-off, but the maximum energy depends on the simulation time, following different laws in two and three dimensional (2D, 3D) PIC simulations so that the determination of an asymptotic value has some arbitrariness. We propose two empirical laws for the rise time of the cut-off energy in 2D and 3D PIC simulations, suggested by a model in which the proton acceleration is due to a surface charge distribution on the target rear side. The kinetic energy of the protons that we obtain follows two distinct laws, which appear to be nicely satisfied by PIC simulations, for a model target given by a uniform foil plus a contaminant layer that is hydrogen-rich. The laws depend on two parameters: the scaling time, at which the energy starts to rise, and the asymptotic cut-off energy. The values of the cut-off energy, obtained by fitting 2D and 3D simulations for the same target and laser pulse configuration, are comparable. This suggests that parametric scans can be performed with 2D simulations since 3D ones are computationally very expensive, delegating their role only to a correspondence check. In this paper, the simulations are carried out with the PIC code ALaDyn by changing the target thickness L and the incidence angle α, with a fixed a0 = 3. A monotonic dependence, on L for normal incidence and on α for fixed L, is found, as in the experimental results for high temporal contrast pulses.
Climatic Consequences and Agricultural Impact of Regional Nuclear Conflict
NASA Astrophysics Data System (ADS)
Toon, O. B.; Robock, A.; Mills, M. J.; Xia, L.
2013-05-01
A nuclear war between India and Pakistan, with each country using 50 Hiroshima-sized atom bombs as airbursts on urban areas, would inject smoke from the resulting fires into the stratosphere. This could produce climate change unprecedented in recorded human history and global-scale ozone depletion, with enhanced ultraviolet (UV) radiation reaching the surface. Simulations with the Whole Atmosphere Community Climate Model (WACCM), run at higher vertical and horizontal resolution than a previous simulation with the NASA Goddard Institute for Space Studies ModelE, and incorporating ozone chemistry for the first time, show a longer stratospheric residence time for smoke and hence a longer-lasting climate response, with global average surface air temperatures still 1.1 K below normal and global average precipitation 4% below normal after a decade. The erythemal dose from the enhanced UV radiation would greatly increase, in spite of enhanced absorption by the remaining smoke, with the UV index more than 3 units higher in the summer midlatitudes, even after a decade. Scenarios of changes in temperature, precipitation, and downward shortwave radiation from the ModelE and WACCM simulations, applied to the Decision Support System for Agrotechnology Transfer crop model for winter wheat, rice, soybeans, and maize by perturbing observed time series with anomalies from the regional nuclear war simulations, produce decreases of 10-50% in yield averaged over a decade, with larger decreases in the first several years, over the midlatitudes of the Northern Hemisphere. The impact of the nuclear war simulated here, using much less than 1% of the global nuclear arsenal, would be devastating to world agricultural production and trade, possibly sentencing a billion people now living marginal existences to starvation. The continued environmental threat of the use of even a small number of nuclear weapons must be considered in nuclear policy deliberations in Russia, the U.S., and the rest of the world.
Wildfire Risk Mapping over the State of Mississippi: Land Surface Modeling Approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cooke, William H.; Mostovoy, Georgy; Anantharaj, Valentine G
2012-01-01
Three fire risk indexes based on soil moisture estimates were applied to simulate wildfire probability over the southern part of Mississippi using the logistic regression approach. The fire indexes were retrieved from: (1) the accumulated difference between daily precipitation and potential evapotranspiration (P-E); (2) the top 10 cm soil moisture content simulated by the Mosaic land surface model; and (3) the Keetch-Byram drought index (KBDI). The P-E, KBDI, and soil-moisture-based indexes were estimated from gridded atmospheric and Mosaic-simulated soil moisture data available from the North American Land Data Assimilation System (NLDAS-2). Normalized deviations of these indexes from the 31-year mean (1980-2010) were fitted into the logistic regression model describing the probability of wildfire occurrence as a function of the fire index. It was assumed that such normalization provides a more robust and adequate description of the temporal dynamics of soil moisture anomalies than the original (non-normalized) set of indexes. The logistic model parameters were evaluated for 0.25° × 0.25° latitude/longitude cells and for the probability of at least one fire event occurring during 5 consecutive days. A 23-year (1986-2008) forest fire record was used. Two periods were selected and examined (January to mid-June and mid-September to December). The application of the logistic model provides an overall good agreement between empirical/observed and model-fitted fire probabilities over the study area during both seasons. The fire risk indexes based on the top 10 cm soil moisture and KBDI have the largest impact on the wildfire odds (increasing them by almost 2 times in response to each unit change of the corresponding fire risk index during the January to mid-June period and by nearly 1.5 times during mid-September to December) observed over 0.25° × 0.25° cells located along the Mississippi coastline.
This result suggests that the fire risk indexes exert rather strong control over fire occurrence probability in this region.
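The logistic dependence described above (fire odds roughly doubling per unit change of a normalized fire-risk index) can be sketched as follows; the coefficients are illustrative, not the values fitted in the study:

```python
import math

def fire_probability(index_anomaly, b0, b1):
    """Logistic model: probability of at least one fire in a 5-day window
    for a grid cell, driven by a normalized fire-risk-index anomaly."""
    z = b0 + b1 * index_anomaly
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative coefficients: exp(b1) = 2 reproduces the reported
# ~2x increase in fire odds per unit change of the index.
b0, b1 = -3.0, math.log(2.0)

p0 = fire_probability(0.0, b0, b1)
p1 = fire_probability(1.0, b0, b1)
odds = lambda p: p / (1.0 - p)
print(round(odds(p1) / odds(p0), 3))  # odds ratio per unit index -> 2.0
```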
Crop calendars for the US, USSR, and Canada in support of the early warning project
NASA Technical Reports Server (NTRS)
Hodges, T.; Sestak, M. L.; Trenchard, M. H. (Principal Investigator)
1980-01-01
New crop calendars are produced for U.S. regions where several years of periodic growth stage observations are available on a CRD basis. Preexisting crop calendars from the LACIE are also collected as are U.S. crop calendars currently being created for the Foreign Commodities Production Forecast project. For the U.S.S.R. and Canada, no new crop calendars are created because no new data are available. Instead, LACIE crop calendars are compared against simulated normal daily temperatures and against the Robertson wheat and Williams barley phenology models run on the simulated normal temperatures. Severe inconsistencies are noted and discussed. For the U.S.S.R., spring and fall planting dates can probably be estimated accurately from satellite or meteorological data. For the starter model problem, the Feyerherm spring wheat model is recommended for spring planted small grains, and the results of an analysis are presented. For fall planted small grains, use of normal planting dates supplemented by spectral observation of an early stage is recommended. The importance of nonmeteorological factors as they pertain to meteorological factors in determining fall planting is discussed. Crop calendar data available at the Johnson Space Center for the U.S., U.S.S.R., Canada, and other countries are inventoried.
Improving 1D Stellar Models with 3D Atmospheres
NASA Astrophysics Data System (ADS)
Mosumgaard, Jakob Rørsted; Silva Aguirre, Víctor; Weiss, Achim; Christensen-Dalsgaard, Jørgen; Trampedach, Regner
2017-10-01
Stellar evolution codes play a major role in present-day astrophysics, yet they share common issues. In this work we seek to remedy some of those by the use of results from realistic and highly detailed 3D hydrodynamical simulations of stellar atmospheres. We have implemented a new temperature stratification extracted directly from the 3D simulations into the Garching Stellar Evolution Code to replace the simplified atmosphere normally used. Secondly, we have implemented the use of a variable mixing-length parameter, which changes as a function of the stellar surface gravity and temperature - also derived from the 3D simulations. Furthermore, to make our models consistent, we have calculated new opacity tables to match the atmospheric simulations. Here, we present the modified code and initial results on stellar evolution using it.
An energy-limited model of algal biofuel production: Toward the next generation of advanced biofuels
Dunlop, Eric H.; Coaldrake, A. Kimi; Silva, Cory S.; ...
2013-10-22
Algal biofuels are increasingly important as a source of renewable energy. The absence of reliable thermodynamic and other property data, and the large amount of kinetic data that would normally be required have created a major barrier to simulation. Additionally, the absence of a generally accepted flowsheet for biofuel production means that detailed simulation of the wrong approach is a real possibility. This model of algal biofuel production estimates the necessary data and places it into a heuristic model using a commercial simulator that back-calculates the process structure required. Furthermore, complex kinetics can be obviated for now by putting the simulator into energy limitation and forcing it to solve for the missing design variables, such as bioreactor surface area, productivity, and oil content. The model does not attempt to prescribe a particular approach, but provides a guide towards a sound engineering approach to this challenging and important problem.
Fang, Juan; Gong, He; Kong, Lingyan; Zhu, Dong
2013-12-20
Bone can adjust its morphological structure to adapt to changes in its mechanical environment; that is, changes in bone structure are related to mechanical loading. This implies that osteoarthritis may be closely associated with knee joint deformity. The purposes of this paper were to simulate the internal bone mineral density (BMD) change in the three-dimensional (3D) proximal tibia under different mechanical environments, and to explore the relationship between the mechanical environment and bone morphological abnormality. The right proximal tibia was scanned with CT to reconstruct a 3D proximal tibia model in MIMICS, which was then imported into the finite element software ANSYS to establish a 3D finite element model. The internal structure of the 3D proximal tibia of young normal subjects was simulated using quantitative bone remodeling theory in combination with the finite element method. The mechanical loading was then changed according to the pattern of joint contact force on the tibial plateau in valgus knees, and the simulated normal tibia structure was used as the initial structure to simulate the internal structure of the 3D proximal tibia of older subjects with 6° valgus deformity. Four regions of interest (ROIs) were selected in the proximal tibia to quantitatively analyze BMD and compare with clinical measurements. The simulation results showed that the BMD distribution in the 3D proximal tibia was consistent with clinical measurements in normal knees, and that in valgus knees it was consistent with clinical measurements of patients with osteoarthritis. This shows that a change in the mechanical environment is the main cause of the change in subchondral bone structure, and that prolonged exposure to an abnormal mechanical environment may lead to osteoarthritis. Moreover, the simulation method adopted in this paper can accurately simulate the internal structure of the 3D proximal tibia under different mechanical environments.
It helps to better understand the mechanism of osteoarthritis and provides a theoretical basis and computational method for the prevention and treatment of osteoarthritis. It can also serve as a basis for further study of periprosthetic BMD changes after total knee arthroplasty, and provide a theoretical basis for the optimized design of prostheses.
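The "quantitative bone remodeling theory" driving such simulations is often the strain-energy-density rule of Weinans and Huiskes; a minimal sketch with typical literature constants, not the parameters used in this paper:

```python
def remodel_density(rho, sed, b=1.0, k=0.004, dt=1.0,
                    rho_min=0.01, rho_max=1.74):
    """One explicit time step of strain-energy-density bone remodeling
    (Weinans/Huiskes form): d(rho)/dt = B * (U/rho - k), where rho is
    apparent density (g/cm^3) and U the strain energy density.
    Constants are typical literature values, not from this paper."""
    drho = b * (sed / rho - k) * dt
    return min(max(rho + drho, rho_min), rho_max)

# Stimulus above the setpoint k -> density increases toward its ceiling.
rho_new = remodel_density(1.0, sed=0.01)
print(round(rho_new, 3))  # -> 1.006
```

In a finite element loop, each element's density would be updated this way after every load step, with the element stiffness recomputed from the new density.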
2013-01-01
Background: Bone can adjust its morphological structure to adapt to changes in its mechanical environment; that is, changes in bone structure are related to mechanical loading. This implies that osteoarthritis may be closely associated with knee joint deformity. The purposes of this paper were to simulate the internal bone mineral density (BMD) change in the three-dimensional (3D) proximal tibia under different mechanical environments, and to explore the relationship between the mechanical environment and bone morphological abnormality. Methods: The right proximal tibia was scanned with CT to reconstruct a 3D proximal tibia model in MIMICS, which was then imported into the finite element software ANSYS to establish a 3D finite element model. The internal structure of the 3D proximal tibia of young normal subjects was simulated using quantitative bone remodeling theory in combination with the finite element method. The mechanical loading was then changed according to the pattern of joint contact force on the tibial plateau in valgus knees, and the simulated normal tibia structure was used as the initial structure to simulate the internal structure of the 3D proximal tibia of older subjects with 6° valgus deformity. Four regions of interest (ROIs) were selected in the proximal tibia to quantitatively analyze BMD and compare with clinical measurements. Results: The simulation results showed that the BMD distribution in the 3D proximal tibia was consistent with clinical measurements in normal knees, and that in valgus knees it was consistent with clinical measurements of patients with osteoarthritis. Conclusions: A change in the mechanical environment is the main cause of the change in subchondral bone structure, and prolonged exposure to an abnormal mechanical environment may lead to osteoarthritis. Moreover, the simulation method adopted in this paper can accurately simulate the internal structure of the 3D proximal tibia under different mechanical environments.
It helps to better understand the mechanism of osteoarthritis and provides a theoretical basis and computational method for the prevention and treatment of osteoarthritis. It can also serve as a basis for further study of periprosthetic BMD changes after total knee arthroplasty, and provide a theoretical basis for the optimized design of prostheses. PMID:24359345
3D numerical simulations of multiphase continental rifting
NASA Astrophysics Data System (ADS)
Naliboff, J.; Glerum, A.; Brune, S.
2017-12-01
Observations of rifted margin architecture suggest continental breakup occurs through multiple phases of extension with distinct styles of deformation. The initial rifting stages are often characterized by slow extension rates and distributed normal faulting in the upper crust decoupled from deformation in the lower crust and mantle lithosphere. Further rifting marks a transition to higher extension rates and coupling between the crust and mantle lithosphere, with deformation typically focused along large-scale detachment faults. Significantly, recent detailed reconstructions and high-resolution 2D numerical simulations suggest that rather than remaining focused on a single long-lived detachment fault, deformation in this phase may progress toward lithospheric breakup through a complex process of fault interaction and development. The numerical simulations also suggest that an initial phase of distributed normal faulting can play a key role in the development of these complex fault networks and the resulting finite deformation patterns. Motivated by these findings, we will present 3D numerical simulations of continental rifting that examine the role of temporal increases in extension velocity on rifted margin structure. The numerical simulations are developed with the massively parallel finite-element code ASPECT. While originally designed to model mantle convection using advanced solvers and adaptive mesh refinement techniques, ASPECT has been extended to model visco-plastic deformation that combines a Drucker Prager yield criterion with non-linear dislocation and diffusion creep. To promote deformation localization, the internal friction angle and cohesion weaken as a function of accumulated plastic strain. 
Rather than prescribing a single zone of weakness to initiate deformation, an initial random perturbation of the plastic strain field combined with rapid strain weakening produces distributed normal faulting at relatively slow rates of extension in both 2D and 3D simulations. Our presentation will focus on both the numerical assumptions required to produce these results and variations in 3D rifted margin architecture arising from a transition from slow to rapid rates of extension.
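The plastic strain weakening described above is commonly implemented as a linear ramp of friction angle and cohesion between two accumulated-strain bounds; a sketch with illustrative bounds and values, not the study's actual configuration:

```python
def weakened(value_init, value_final, strain,
             strain_start=0.5, strain_end=1.5):
    """Linear strain weakening as used in ASPECT-style visco-plastic
    models: the property ramps from its initial to its weakened value
    as accumulated plastic strain crosses [strain_start, strain_end].
    Bounds and values here are illustrative assumptions."""
    if strain <= strain_start:
        return value_init
    if strain >= strain_end:
        return value_final
    f = (strain - strain_start) / (strain_end - strain_start)
    return value_init + f * (value_final - value_init)

# Friction angle weakening from 30 to 15 degrees:
print(weakened(30.0, 15.0, 1.0))  # halfway through the ramp -> 22.5
```

Seeding the initial plastic strain field with random noise, as the abstract describes, then lets this ramp localize deformation onto self-organized fault networks rather than a prescribed weak zone.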
Effects of Pump-turbine S-shaped Characteristics on Transient Behaviours: Model Setup
NASA Astrophysics Data System (ADS)
Zeng, Wei; Yang, Jiandong; Hu, Jinhong
2017-04-01
Pumped storage stations undergo numerous transition processes, which make the pump turbines pass through the unstable S-shaped region. Hydraulic transients in the S-shaped region have normally been investigated through numerical simulations, since field experiments generally involve high risks and are difficult to perform. In this research, a pumped storage model composed of a piping system, two model units, two electrical control systems, a measurement system and a data collection system was set up to study the transition processes. The model platform can be applied to simulate almost any hydraulic transition process that occurs in real power stations, such as load rejection, startup, frequency control and grid connection.
Single-cell-based computer simulation of the oxygen-dependent tumour response to irradiation
NASA Astrophysics Data System (ADS)
Harting, Christine; Peschke, Peter; Borkenstein, Klaus; Karger, Christian P.
2007-08-01
Optimization of treatment plans in radiotherapy requires the knowledge of tumour control probability (TCP) and normal tissue complication probability (NTCP). Mathematical models may help to obtain quantitative estimates of TCP and NTCP. A single-cell-based computer simulation model is presented, which simulates tumour growth and radiation response on the basis of the response of the constituting cells. The model contains oxic, hypoxic and necrotic tumour cells as well as capillary cells which are considered as sources of a radial oxygen profile. Survival of tumour cells is calculated by the linear quadratic model including the modified response due to the local oxygen concentration. The model additionally includes cell proliferation, hypoxia-induced angiogenesis, apoptosis and resorption of inactivated tumour cells. By selecting different degrees of angiogenesis, the model allows the simulation of oxic as well as hypoxic tumours having distinctly different oxygen distributions. The simulation model showed that poorly oxygenated tumours exhibit an increased radiation tolerance. Inter-tumoural variation of radiosensitivity flattens the dose response curve. This effect is enhanced by proliferation between fractions. Intra-tumoural radiosensitivity variation does not play a significant role. The model may contribute to the mechanistic understanding of the influence of biological tumour parameters on TCP. It can in principle be validated in radiation experiments with experimental tumours.
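The oxygen-dependent cell survival described above can be sketched with the linear-quadratic model and an oxygen enhancement ratio; the Alper-Howard-Flanders OER form and the alpha/beta values below are typical literature assumptions, not this paper's calibrated parameters:

```python
import math

def oer(p_o2, m=3.0, k=3.0):
    """Oxygen enhancement ratio vs oxygen partial pressure (mmHg),
    Alper-Howard-Flanders form; m ~ 3 and k ~ 3 mmHg are typical
    literature values."""
    return (m * p_o2 + k) / (p_o2 + k)

def surviving_fraction(dose, alpha=0.3, beta=0.03, p_o2=40.0):
    """LQ survival exp(-(alpha*d + beta*d^2)) with the dose scaled by
    the oxygen modification factor, normalized so fully oxic cells
    (OER -> m) see the full physical dose. alpha/beta are illustrative."""
    d_eff = dose * oer(p_o2) / 3.0
    return math.exp(-(alpha * d_eff + beta * d_eff ** 2))

# Hypoxic cells survive a 2 Gy fraction better than well-oxygenated ones:
print(surviving_fraction(2.0, p_o2=1.0) > surviving_fraction(2.0, p_o2=40.0))
```

This is the mechanism behind the abstract's finding that poorly oxygenated tumours show increased radiation tolerance: the effective dose per fraction is lower wherever the local oxygen concentration is low.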
Sensitivity of estimated muscle force in forward simulation of normal walking
Xiao, Ming; Higginson, Jill
2009-01-01
Generic muscle parameters are often used in muscle-driven simulations of human movement to estimate individual muscle forces and function. The results may not be valid, since muscle properties vary from subject to subject. This study investigated the effect of using generic parameters in a muscle-driven forward simulation on muscle force estimation. We generated a normal walking simulation in OpenSim and examined the sensitivity of individual muscle forces to perturbations in muscle parameters, including the number of muscles, maximum isometric force, optimal fiber length and tendon slack length. We found that when changing the number of muscles included in the model, only the magnitude of the estimated muscle forces was affected. Our results also suggest it is especially important to use accurate values of tendon slack length and optimal fiber length for the ankle plantarflexors and knee extensors. Changes in force production by one muscle were typically compensated for by changes in force production by muscles in the same functional muscle group, or in the antagonistic muscle group. Conclusions regarding muscle function based on simulations with generic musculoskeletal parameters should be interpreted with caution. PMID:20498485
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fayock, B.; Zank, G. P.; Heerikhuisen, J., E-mail: brian.fayock@gmail.com, E-mail: garyp.zank@gmail.com, E-mail: jacob.heerikhuisen@uah.edu
Observations made by ultraviolet (UV) detectors on board Pioneer 10, Voyager 1, and Voyager 2 can be used to analyze the distribution of neutral hydrogen throughout the heliosphere, including the interaction regions of the solar wind and local interstellar medium. Previous studies of the long-term trend of decreasing intensity with increasing heliocentric distance established the need for more sophisticated heliospheric models. Here we use state-of-the-art three-dimensional (3D) magnetohydrodynamic (MHD) neutral models to simulate Lyman-alpha backscatter as would be seen by the three spacecraft, exploiting a new 3D Monte Carlo radiative transfer code under solar minimum conditions. Both observations and simulations of the UV backscatter intensity are normalized for each spacecraft flight path at ~15 AU, and we focus on the slope of decreasing intensity over increasing heliocentric distance. Comparison of simulations with Voyager 1 Lyman-alpha data results in a very close match, while the Pioneer 10 comparison is similar due to the normalization, but is not considered to be in agreement. The deviations may be influenced by the low resolution of photoionization in the 3D MHD-neutral model, a lack of solar-cycle activity in our simulations, and possibly issues with instrumental sensitivity. Comparing the slope of the Voyager 2 data with the simulated intensities yields an almost identical match. Our results predict a large increase in the Lyman-alpha intensity as the hydrogen wall is approached, which would signal an imminent crossing of the heliopause.
Using color histogram normalization for recovering chromatic illumination-changed images.
Pei, S C; Tseng, C L; Wu, C C
2001-11-01
We propose a novel image-recovery method using the covariance matrix of the red-green-blue (R-G-B) color histogram and tensor theories. The image-recovery method is called the color histogram normalization algorithm. It is known that the color histograms of an image taken under varied illuminations are related by a general affine transformation of the R-G-B coordinates when the illumination is changed. We propose a simplified affine model for application with illumination variation. This simplified affine model considers the effects of only three basic forms of distortion: translation, scaling, and rotation. According to this principle, we can estimate the affine transformation matrix necessary to recover images whose color distributions are varied as a result of illumination changes. We compare the normalized color histogram of the standard image with that of the tested image. By performing some operations of simple linear algebra, we can estimate the matrix of the affine transformation between two images under different illuminations. To demonstrate the performance of the proposed algorithm, we divide the experiments into two parts: computer-simulated images and real images corresponding to illumination changes. Simulation results show that the proposed algorithm is effective for both types of images. We also explain the noise-sensitive skew-rotation estimation that exists in the general affine model and demonstrate that the proposed simplified affine model without the use of skew rotation is better than the general affine model for such applications.
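The affine relation between color histograms under two illuminations can be illustrated with a simple moment-matching estimate; this is a simplified stand-in for the paper's translation/scaling/rotation decomposition, and all data and names below are illustrative:

```python
import numpy as np

def affine_from_moments(x_src, x_dst):
    """Estimate an affine map x -> M x + t aligning the first- and
    second-order moments of two R-G-B point clouds. Moment matching
    recovers M only up to rotation in general; it suffices for the
    translation/scaling case sketched here."""
    mu1, mu2 = x_src.mean(axis=0), x_dst.mean(axis=0)
    c1 = np.cov(x_src, rowvar=False)
    c2 = np.cov(x_dst, rowvar=False)

    def sqrtm(c):  # symmetric matrix square root via eigendecomposition
        w, v = np.linalg.eigh(c)
        return (v * np.sqrt(np.clip(w, 0.0, None))) @ v.T

    m = sqrtm(c2) @ np.linalg.inv(sqrtm(c1))
    t = mu2 - m @ mu1
    return m, t

# Recover a known illumination-like change from synthetic RGB samples.
rng = np.random.default_rng(0)
src = rng.normal(size=(2000, 3))
m_true = np.diag([1.2, 0.9, 1.1])              # per-channel scaling
dst = src @ m_true.T + np.array([10.0, -5.0, 3.0])  # plus translation
m_est, t_est = affine_from_moments(src, dst)
print(np.allclose(m_est, m_true, atol=0.1))
```

Inverting the estimated map then "recovers" the source colors, which is the essence of the histogram normalization algorithm described above.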
Implementation of Combined Feather and Surface-Normal Ice Growth Models in LEWICE/X
NASA Technical Reports Server (NTRS)
Velazquez, M. T.; Hansman, R. J., Jr.
1995-01-01
Experimental observations have shown that discrete rime ice growths called feathers, which grow in approximately the direction of water droplet impingement, play an important role in the growth of ice on accreting surfaces for some thermodynamic conditions. An improved physical model of ice accretion has been implemented in the LEWICE 2D panel-based ice accretion code maintained by the NASA Lewis Research Center. The LEWICE/X model of ice accretion explicitly simulates regions of feather growth within the framework of the LEWICE model. Water droplets impinging on an accreting surface are withheld from the normal LEWICE mass/energy balance and handled in a separate routine; ice growth resulting from these droplets is performed with enhanced convective heat transfer approximately along droplet impingement directions. An independent underlying ice shape is grown along surface normals using the unmodified LEWICE method. The resulting dual-surface ice shape models roughness-induced feather growth observed in icing wind tunnel tests. Experiments indicate that the exact direction of feather growth is dependent on external conditions. Data is presented to support a linear variation of growth direction with temperature and cloud water content. Test runs of LEWICE/X indicate that the sizes of surface regions containing feathers are influenced by initial roughness element height. This suggests that a previous argument that feather region size is determined by boundary layer transition may be incorrect. Simulation results for two typical test cases give improved shape agreement over unmodified LEWICE.
Dietz, Mathias; Hohmann, Volker; Jürgens, Tim
2015-01-01
For normal-hearing listeners, speech intelligibility improves if speech and noise are spatially separated. While this spatial release from masking has already been quantified in normal-hearing listeners in many studies, it is less clear how spatial release from masking changes in cochlear implant listeners with and without access to low-frequency acoustic hearing. Spatial release from masking depends on differences in access to speech cues due to hearing status and hearing device. To investigate the influence of these factors on speech intelligibility, the present study measured speech reception thresholds in spatially separated speech and noise for 10 different listener types. A vocoder was used to simulate cochlear implant processing and low-frequency filtering was used to simulate residual low-frequency hearing. These forms of processing were combined to simulate cochlear implant listening, listening based on low-frequency residual hearing, and combinations thereof. Simulated cochlear implant users with additional low-frequency acoustic hearing showed better speech intelligibility in noise than simulated cochlear implant users without acoustic hearing and had access to more spatial speech cues (e.g., higher binaural squelch). Cochlear implant listener types showed higher spatial release from masking with bilateral access to low-frequency acoustic hearing than without. A binaural speech intelligibility model with normal binaural processing showed overall good agreement with measured speech reception thresholds, spatial release from masking, and spatial speech cues. This indicates that differences in speech cues available to listener types are sufficient to explain the changes of spatial release from masking across these simulated listener types. PMID:26721918
Crevillén-García, D
2018-04-01
Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.
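The idea of reducing a high-dimensional output field before emulation can be sketched with SVD-based reduction plus a plain RBF Gaussian process regressor; the kernel, toy field, and dimensions below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two sets of input points."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def fit_gp(x, y, noise=1e-6):
    """Return alpha = (K + noise*I)^-1 y, reused for all predictions."""
    k = rbf(x, x) + noise * np.eye(len(x))
    return np.linalg.solve(k, y)

def predict_gp(x_train, alpha, x_new):
    return rbf(x_new, x_train) @ alpha

# Toy emulation: a 500-dimensional output field per simulator run,
# reduced by SVD, with one GP per retained principal component.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(40, 2))                  # low-dim inputs
field = np.stack([np.sin(2 * xi[0]) * np.cos(xi[1])
                  * np.linspace(0.0, 1.0, 500) for xi in x])
mean = field.mean(axis=0)
u, s, vt = np.linalg.svd(field - mean, full_matrices=False)
scores = u[:, :2] * s[:2]                             # reduced outputs
alphas = fit_gp(x, scores)                            # GPs in reduced space
recon = predict_gp(x, alphas, x) @ vt[:2] + mean      # back to field space
print(np.corrcoef(recon.ravel(), field.ravel())[0, 1] > 0.99)
```

The emulator is trained on a handful of expensive runs and then predicts whole spatial fields at new inputs for the cost of a kernel evaluation, which is the speed-up the abstract describes.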
NASA Astrophysics Data System (ADS)
Jindal, Sumit Kumar; Mahajan, Ankush; Raghuwanshi, Sanjeev Kumar
2017-10-01
An analytical model and numerical simulation of the performance of MEMS capacitive pressure sensors in both normal and touch modes are required to predict the behavior of the sensor prior to fabrication. Such predictions should be based on a complete analysis of performance parameters such as the deflection of the diaphragm, the change of capacitance as the diaphragm deflects, and the sensitivity of the sensor. In the literature, limited work has been carried out on this issue; moreover, due to the approximation factors of the polynomials, a tolerance error cannot be overlooked. Reliable pre-fabrication forecasting requires exact mathematical calculation of the parameters involved. A second-order polynomial equation is derived mathematically for the key performance parameters of both modes. This eliminates the approximation factor, so exact results can be studied while maintaining high accuracy. The elimination of approximation factors and the approach to exact results are based on a new design parameter (δ) that we propose. The design parameter gives designers an initial hint of how the sensor will behave once it is fabricated. The complete work is supported by extensive mathematical detailing of all the parameters involved. We then verified our claims using MATLAB® simulation. Since MATLAB® adequately supports simulation of this design approach, the more complicated finite element method is not used.
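The normal-mode relationship between diaphragm deflection and capacitance can be sketched numerically using the classic clamped circular-plate deflection profile; the geometry and integration scheme below are illustrative assumptions, not the paper's δ-based formulation:

```python
import math

def capacitance(w0, a=1e-3, gap=5e-6, n=2000, eps0=8.854e-12):
    """Capacitance (F) of a clamped circular diaphragm of radius a (m)
    deflected toward a fixed plate at gap (m), using the small-deflection
    shape w(r) = w0 * (1 - (r/a)^2)^2 and midpoint-rule integration over
    annular rings. Dimensions are illustrative."""
    dr = a / n
    c = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        w = w0 * (1.0 - (r / a) ** 2) ** 2
        c += eps0 / (gap - w) * 2.0 * math.pi * r * dr
    return c

c_flat = capacitance(0.0)          # parallel-plate limit
c_deflected = capacitance(2e-6)    # 2 um centre deflection in a 5 um gap
print(c_deflected > c_flat)        # deflection toward the plate raises C
```

In touch mode the centre of the diaphragm rests on an insulating layer over the fixed plate, so the integrand would instead be split into a touched region of fixed gap and an untouched annulus.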
Dynamics of crystalline acetanilide: Analysis using neutron scattering and computer simulation
NASA Astrophysics Data System (ADS)
Hayward, R. L.; Middendorf, H. D.; Wanderlingh, U.; Smith, J. C.
1995-04-01
The unusual temperature dependence of several optical spectroscopic vibrational bands in crystalline acetanilide has been interpreted as providing evidence for dynamic localization. Here we examine the vibrational dynamics of crystalline acetanilide over a spectral range of ~20-4000 cm⁻¹ using incoherent neutron scattering experiments, phonon normal mode calculations and molecular dynamics simulations. A molecular mechanics energy function is parametrized and used to perform the normal mode analyses in the full configurational space of the crystal, i.e. including the intramolecular and intermolecular degrees of freedom. One- and multiphonon incoherent inelastic neutron scattering intensities are calculated from harmonic analyses in the first Brillouin zone and compared with the experimental data presented here. Phonon dispersion relations and mean-square atomic displacements are derived from the harmonic model and compared with data derived from coherent inelastic neutron scattering and neutron and x-ray diffraction. To examine temperature effects on the vibrations, the full anharmonic potential function is used in molecular dynamics simulations of the crystal at 80, 140, and 300 K. Several, but not all, of the spectral features calculated from the molecular dynamics simulations exhibit temperature-dependent behavior in agreement with experiment. The significance of the results for the interpretation of the optical spectroscopic results and possible improvements to the model are discussed.
NASA Astrophysics Data System (ADS)
Aziz Hashikin, Nurul Ab; Yeong, Chai-Hong; Guatelli, Susanna; Jeet Abdullah, Basri Johan; Ng, Kwan-Hoong; Malaroda, Alessandra; Rosenfeld, Anatoly; Perkins, Alan Christopher
2017-09-01
We aimed to investigate the validity of the partition model (PM) in estimating the absorbed doses to liver tumour (D_T), normal liver tissue (D_NL) and lungs (D_L) when cross-fire irradiation between these compartments is considered. A MIRD-5 phantom incorporating various treatment parameters, i.e. tumour involvement (TI), tumour-to-normal-liver uptake ratio (T/N) and lung shunting (LS), was simulated using the Geant4 Monte Carlo (MC) toolkit. 10^8 track histories were generated for each combination of the three parameters to obtain the absorbed dose per unit activity uptake in each compartment (D_T/A_T, D_NL/A_NL, and D_L/A_L). The administered activities, A, were estimated using the PM so as to achieve either the limiting dose to the normal liver, D_NL^lim, or to the lungs, D_L^lim (70 or 30 Gy, respectively). Using these administered activities, the activity uptake in each compartment (A_T, A_NL, and A_L) was estimated and multiplied by the absorbed dose per unit activity uptake obtained from the MC simulations, to obtain the actual dose received by each compartment. The PM overestimated D_L by 11.7% in all cases, due to particles escaping from the lungs. D_T and D_NL by MC were strongly affected by T/N, which is not considered by the PM because cross-fire at the tumour-normal liver boundary is excluded. This resulted in the PM overestimating D_T by up to 8% and underestimating D_NL by as much as -78%. When D_NL^lim was estimated via the PM, the MC simulations showed significantly higher D_NL for cases with higher T/N and LS ≤ 10%. All D_L and D_T by MC were overestimated by the PM, thus D_L^lim was never exceeded. The PM leads to inaccurate dose estimations due to the exclusion of cross-fire irradiation, i.e. between the tumour and normal liver tissue. Caution should be taken for cases with higher TI and T/N, and lower LS, as these contribute to major underestimation of D_NL.
For D_L, a different correction factor for dose calculation may be used for improved accuracy.
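Without cross-fire, the partition model reduces to splitting the administered activity among the three compartments and applying a per-compartment dose conversion; a sketch for 90Y microspheres using the commonly quoted conversion factor, with all numbers illustrative:

```python
def partition_doses(a_gbq, ls, t_n, m_t, m_nl, m_l=1.0, k=49.67):
    """Partition-model absorbed doses (Gy) for 90Y microspheres.
    k ~ 49.67 Gy.kg/GBq is the commonly quoted 90Y conversion factor,
    ls is the lung shunt fraction, t_n the tumour-to-normal-liver
    uptake ratio per unit mass, and masses are in kg. Cross-fire
    between compartments is ignored, which is exactly the
    simplification the study above tests against Monte Carlo."""
    a_l = a_gbq * ls                               # lung uptake via shunt
    rest = a_gbq - a_l                             # activity in the liver
    a_t = rest * (t_n * m_t) / (t_n * m_t + m_nl)  # tumour share
    a_nl = rest - a_t                              # normal-liver share
    return {"tumour": k * a_t / m_t,
            "normal_liver": k * a_nl / m_nl,
            "lungs": k * a_l / m_l}

# Illustrative case: 2 GBq, 5% lung shunt, T/N = 3, 0.2 kg tumour.
d = partition_doses(a_gbq=2.0, ls=0.05, t_n=3.0, m_t=0.2, m_nl=1.5)
print(round(d["lungs"], 3))  # -> 4.967 Gy, well below a 30 Gy lung limit
```

In practice the model is run in reverse: the administered activity is solved for so that the normal-liver or lung dose equals its limit, which is the D_NL^lim / D_L^lim prescription step described in the abstract.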
Analytical and experimental vibration analysis of a faulty gear system
NASA Astrophysics Data System (ADS)
Choy, F. K.; Braun, M. J.; Polyshchuk, V.; Zakrajsek, J. J.; Townsend, D. P.; Handschuh, R. F.
1994-10-01
A comprehensive analytical procedure was developed for predicting faults in gear transmission systems under normal operating conditions. A gear tooth fault model was developed to simulate the effects of pitting and wear on the vibration signal under normal operating conditions. The model uses changes in the gear mesh stiffness to simulate the effects of gear tooth faults. The overall dynamics of the gear transmission system are evaluated by coupling the dynamics of each individual gear-rotor system through the gear mesh forces generated between the gear-rotor systems and the bearing forces generated between the rotors and the gearbox structure. The predicted results were compared with experimental results obtained from a spiral bevel gear fatigue test rig at NASA Lewis Research Center. The Wigner-Ville distribution (WVD) was used to give a comprehensive comparison of the predicted and experimental results. The WVD method applied to the experimental results was also compared with other fault detection techniques to verify the WVD's ability to detect the pitting damage and to determine its relative performance. Overall results show good correlation between the experimental vibration data of the damaged test gear and the vibration predicted by the model with simulated gear tooth pitting damage. Results also verified that the WVD method can successfully detect and locate gear tooth wear and pitting damage.
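The core fault-modeling idea, a local drop in gear mesh stiffness at the damaged tooth that modulates the mesh vibration once per revolution, can be sketched as follows; tooth counts and the stiffness drop are illustrative, not values from the referenced test rig:

```python
import math

def mesh_signal(n_teeth=28, fault_tooth=5, stiffness_drop=0.3,
                samples_per_mesh=32, revolutions=2):
    """Toy vibration signal: one sinusoid per tooth-mesh event, with the
    mesh stiffness (and hence amplitude) reduced at one pitted tooth.
    Parameters are illustrative assumptions."""
    signal = []
    for _rev in range(revolutions):
        for tooth in range(n_teeth):
            k = 1.0 - (stiffness_drop if tooth == fault_tooth else 0.0)
            for i in range(samples_per_mesh):
                phase = 2.0 * math.pi * i / samples_per_mesh
                signal.append(k * math.sin(phase))
    return signal

sig = mesh_signal()
per_tooth = 32
tooth5 = sig[5 * per_tooth:6 * per_tooth]   # faulty tooth
tooth6 = sig[6 * per_tooth:7 * per_tooth]   # healthy neighbour
print(max(tooth5) < max(tooth6))            # the faulty mesh amplitude dips
```

A time-frequency method such as the Wigner-Ville distribution then localizes this once-per-revolution amplitude dip in both time and frequency, which is how the damaged tooth is detected and located.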
Holm Hansen, Christian; Warner, Pamela; Parker, Richard A; Walker, Brian R; Critchley, Hilary Od; Weir, Christopher J
2017-12-01
It is often unclear what specific adaptive trial design features lead to an efficient design which is also feasible to implement. This article describes the preparatory simulation study for a Bayesian response-adaptive dose-finding trial design. Dexamethasone for Excessive Menstruation aims to assess the efficacy of Dexamethasone in reducing excessive menstrual bleeding and to determine the best dose for further study. To maximise learning about the dose response, patients receive placebo or an active dose with randomisation probabilities adapting based on evidence from patients already recruited. The dose-response relationship is estimated using a flexible Bayesian Normal Dynamic Linear Model. Several competing design options were considered including: number of doses, proportion assigned to placebo, adaptation criterion, and number and timing of adaptations. We performed a fractional factorial study using SAS software to simulate virtual trial data for candidate adaptive designs under a variety of scenarios and to invoke WinBUGS for Bayesian model estimation. We analysed the simulated trial results using Normal linear models to estimate the effects of each design feature on empirical type I error and statistical power. Our readily-implemented approach using widely available statistical software identified a final design which performed robustly across a range of potential trial scenarios.
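The response-adaptive step can be illustrated with a simple posterior-probability-of-best-arm allocation rule; this is a generic sketch with normal posteriors and invented numbers, not the trial's Normal Dynamic Linear Model:

```python
import numpy as np

def allocation_probs(means, sds, ns, draws=20000, seed=1):
    """Randomisation probabilities proportional to the posterior probability
    that each arm has the largest mean response (normal approximation)."""
    rng = np.random.default_rng(seed)
    means, sds, ns = map(np.asarray, (means, sds, ns))
    post = rng.normal(means, sds / np.sqrt(ns), size=(draws, len(means)))
    best = post.argmax(axis=1)
    return np.bincount(best, minlength=len(means)) / draws

# three arms; the third looks clearly best, so it attracts most new patients
p = allocation_probs(means=[0.0, 0.5, 3.0], sds=[1.0, 1.0, 1.0], ns=[10, 10, 10])
```

Simulating many virtual trials under rules like this, across candidate numbers of doses, adaptation times and placebo proportions, is exactly the kind of design-comparison exercise the abstract describes.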
When can ocean acidification impacts be detected from decadal alkalinity measurements?
NASA Astrophysics Data System (ADS)
Carter, B. R.; Frölicher, T. L.; Dunne, J. P.; Rodgers, K. B.; Slater, R. D.; Sarmiento, J. L.
2016-04-01
We use a large initial condition suite of simulations (30 runs) with an Earth system model to assess the detectability of biogeochemical impacts of ocean acidification (OA) on the marine alkalinity distribution from decadally repeated hydrographic measurements such as those produced by the Global Ship-Based Hydrographic Investigations Program (GO-SHIP). Detection of these impacts is complicated by alkalinity changes from variability and long-term trends in freshwater and organic matter cycling and ocean circulation. In our ensemble simulation, variability in freshwater cycling generates large changes in alkalinity that obscure the changes of interest and prevent the attribution of observed alkalinity redistribution to OA. These complications from freshwater cycling can be mostly avoided through salinity normalization of alkalinity. With the salinity-normalized alkalinity, modeled OA impacts are broadly detectable in the surface of the subtropical gyres by 2030. Discrepancies between this finding and the finding of an earlier analysis suggest that these estimates are strongly sensitive to the patterns of calcium carbonate export simulated by the model. OA impacts are detectable later in the subpolar and equatorial regions due to slower responses of alkalinity to OA in these regions and greater seasonal equatorial alkalinity variability. OA impacts are detectable later at depth despite lower variability due to smaller rates of change and consistent measurement uncertainty.
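The salinity normalization used above is a one-line correction; the reference salinity of 35 is the conventional choice (an assumption here, not stated in the abstract):

```python
def salinity_normalized_alkalinity(alk_umol_kg, salinity, s_ref=35.0):
    """Scale total alkalinity to a reference salinity, removing the
    dilution/concentration signal from freshwater cycling."""
    return alk_umol_kg * s_ref / salinity

# freshening from S = 35 to S = 34 dilutes alkalinity; normalisation undoes it
nalk = salinity_normalized_alkalinity(2300.0, 34.0)
```

Removing the freshwater signal this way is what lets the slower, smaller OA-driven alkalinity changes emerge from the variability.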
Velocity field calculation for non-orthogonal numerical grids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flach, G. P.
2015-03-01
Computational grids containing cell faces that do not align with an orthogonal (e.g. Cartesian, cylindrical) coordinate system are routinely encountered in porous-medium numerical simulations. Such grids are referred to in this study as non-orthogonal grids because some cell faces are not orthogonal to a coordinate system plane (e.g. xy, yz or xz plane in Cartesian coordinates). Non-orthogonal grids are routinely encountered at the Savannah River Site in porous-medium flow simulations for Performance Assessments and groundwater flow modeling. Examples include grid lines that conform to the sloping roof of a waste tank or disposal unit in a 2D Performance Assessment simulation, and grid surfaces that conform to undulating stratigraphic surfaces in a 3D groundwater flow model. Particle tracking is routinely performed after a porous-medium numerical flow simulation to better understand the dynamics of the flow field and/or as an approximate indication of the trajectory and timing of advective solute transport. Particle tracks are computed by integrating the velocity field from cell to cell starting from designated seed (starting) positions. An accurate velocity field is required to attain accurate particle tracks. However, many numerical simulation codes report only the volumetric flowrate (e.g. PORFLOW) and/or flux (flowrate divided by area) crossing cell faces. For an orthogonal grid, the normal flux at a cell face is a component of the Darcy velocity vector in the coordinate system, and the pore velocity for particle tracking is attained by dividing by water content. For a non-orthogonal grid, the flux normal to a cell face that lies outside a coordinate plane is not a true component of velocity with respect to the coordinate system. Nonetheless, normal fluxes are often taken as Darcy velocity components, either naively or with accepted approximation.
To enable accurate particle tracking or otherwise present an accurate depiction of the velocity field for a non-orthogonal grid, Darcy velocity components are rigorously derived in this study from fluxes normal to cell faces, which are assumed to be provided by or readily computed from porous-medium simulation code output. The normal fluxes are presumed to satisfy mass balances for every computational cell, and if so, the derived velocity fields are consistent with these mass balances. Derivations are provided for general two-dimensional quadrilateral and three-dimensional hexahedral systems, and for the commonly encountered special cases of perfectly vertical side faces in 2D and 3D and a rectangular footprint in 3D.
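In 2D the reconstruction described above reduces to a small linear solve: the normal fluxes through two non-parallel faces determine both Darcy velocity components, since each flux is the projection of the velocity onto that face's unit normal. A minimal sketch (the unit normals and flux values are illustrative):

```python
import numpy as np

def darcy_velocity_2d(n1, n2, q1, q2):
    """Recover the Darcy velocity vector v from normal fluxes q1, q2 through
    two cell faces with non-parallel unit normals n1, n2, using q_i = n_i . v."""
    A = np.array([n1, n2], dtype=float)
    return np.linalg.solve(A, np.array([q1, q2], dtype=float))

# a face tilted off the coordinate axes: its normal flux mixes both components
v = darcy_velocity_2d(n1=(1.0, 0.0), n2=(0.6, 0.8), q1=1.0, q2=2.2)
```

In 3D the same idea takes three independent face normals and a 3x3 solve; treating a tilted face's normal flux as a coordinate velocity component skips this solve and is exactly the approximation the study corrects.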
Full-Body Musculoskeletal Model for Muscle-Driven Simulation of Human Gait.
Rajagopal, Apoorva; Dembia, Christopher L; DeMers, Matthew S; Delp, Denny D; Hicks, Jennifer L; Delp, Scott L
2016-10-01
Musculoskeletal models provide a non-invasive means to study human movement and predict the effects of interventions on gait. Our goal was to create an open-source 3-D musculoskeletal model with high-fidelity representations of the lower limb musculature of healthy young individuals that can be used to generate accurate simulations of gait. Our model includes bony geometry for the full body, 37 degrees of freedom to define joint kinematics, Hill-type models of 80 muscle-tendon units actuating the lower limbs, and 17 ideal torque actuators driving the upper body. The model's musculotendon parameters are derived from previous anatomical measurements of 21 cadaver specimens and magnetic resonance images of 24 young healthy subjects. We tested the model by evaluating its computational time and accuracy of simulations of healthy walking and running. Generating muscle-driven simulations of normal walking and running took approximately 10 minutes on a typical desktop computer. The differences between our muscle-generated and inverse dynamics joint moments were within 3% (RMSE) of the peak inverse dynamics joint moments in both walking and running, and our simulated muscle activity showed qualitative agreement with salient features from experimental electromyography data. These results suggest that our model is suitable for generating muscle-driven simulations of healthy gait. We encourage other researchers to further validate and apply the model to study other motions of the lower extremity. The model is implemented in the open-source software platform OpenSim. The model and data used to create and test the simulations are freely available at https://simtk.org/home/full_body/, allowing others to reproduce these results and create their own simulations.
NASA Astrophysics Data System (ADS)
Pu, Yang; Chen, Jun; Wang, Wubao
2014-02-01
The scattering coefficient, μs, the anisotropy factor, g, the scattering phase function, p(θ), and the angular scattering intensity distributions of human cancerous and normal prostate tissues were systematically investigated as functions of wavelength, scattering angle and scattering particle size using Mie theory and experimental parameters. Matlab-based codes using Mie theory for both spherical and cylindrical models were developed and applied to study light propagation and the key scattering properties of the prostate tissues. The optical and structural parameters of the tissue, such as the refractive index of the cytoplasm, the size of the nuclei, and the diameter of the nucleoli for cancerous and normal human prostate tissues, obtained from previous biological, biomedical and bio-optical studies, were used for the Mie theory simulation and calculation. The wavelength dependence of the scattering coefficient and anisotropy factor was investigated over the wide spectral range from 300 nm to 1200 nm. The scattering-particle-size dependence of μs, g, and the scattering angular distributions were studied for cancerous and normal prostate tissues. The results show that cancerous prostate tissue, which contains larger scattering particles, contributes more to forward scattering than normal prostate tissue. In addition to the conventional simulation model that approximates the scattering particle as a sphere, the cylinder model, which is more suitable for fiber-like tissue frame components such as collagen and elastin, was used to develop a computation code to study the angular dependence of scattering in prostate tissues. To the best of our knowledge, this is the first study to deal with both spherical and cylindrical scattering particles in prostate tissues.
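A full Mie code is long, but the step from a phase function p(θ) to the anisotropy factor g is just the average of cos(θ) weighted by the phase function. As a sketch of that step, the Henyey-Greenstein function (a common analytic stand-in, not the Mie result used in the study) recovers its own g parameter under numerical quadrature:

```python
import numpy as np

def hg_phase(mu, g):
    """Henyey-Greenstein phase function in mu = cos(theta),
    normalised so its integral over mu in [-1, 1] equals 1."""
    return 0.5 * (1.0 - g**2) / (1.0 + g**2 - 2.0 * g * mu) ** 1.5

def _trapz(y, dx):
    # trapezoidal rule on a uniform grid
    return dx * (y.sum() - 0.5 * (y[0] + y[-1]))

def anisotropy(phase, n=200001):
    """g = <cos(theta)> under the given phase function."""
    mu = np.linspace(-1.0, 1.0, n)
    dmu = mu[1] - mu[0]
    p = phase(mu)
    return _trapz(p * mu, dmu) / _trapz(p, dmu)

g_est = anisotropy(lambda mu: hg_phase(mu, 0.9))  # strongly forward-scattering
```

Larger scatterers push the phase function toward μ = 1 and raise g, which is the numerical signature of the forward-scattering difference reported between cancerous and normal tissue.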
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When the sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model can be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model.
The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining the robustness of traditional meta-analysis results to skewness in the observed treatment effect estimates. Further critical evaluation of the method is needed.
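The transformation at the heart of the model is the standard Box-Cox family. A minimal sketch follows; the shift argument for non-positive values is an assumption added here, since raw treatment effect estimates can be negative:

```python
import numpy as np

def box_cox(y, lam, shift=0.0):
    """Box-Cox transform of (y + shift); the shift must make all values
    positive. lam = 0 is the log limit; lam = 1 recentres but keeps linearity."""
    z = np.asarray(y, dtype=float) + shift
    if np.any(z <= 0):
        raise ValueError("shift must make all values positive")
    if abs(lam) < 1e-12:
        return np.log(z)
    return (z**lam - 1.0) / lam

# lam = 0 turns a right-skewed geometric sample into an evenly spaced one
sym = box_cox([1.0, 2.0, 4.0, 8.0], lam=0.0)
```

In the proposed model, lam is estimated jointly with the other parameters so the degree of transformation tracks the observed departure from normality.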
Accurate Modeling of Dark-Field Scattering Spectra of Plasmonic Nanostructures.
Jiang, Liyong; Yin, Tingting; Dong, Zhaogang; Liao, Mingyi; Tan, Shawn J; Goh, Xiao Ming; Allioux, David; Hu, Hailong; Li, Xiangyin; Yang, Joel K W; Shen, Zexiang
2015-10-27
Dark-field microscopy is a widely used tool for measuring the optical resonance of plasmonic nanostructures. However, current numerical simulations of dark-field scattering spectra are carried out with plane-wave illumination, either at normal incidence or at an oblique angle from one direction. In actual experiments, light is focused onto the sample through an annular ring within a range of glancing angles. In this paper, we present a theoretical model capable of accurately simulating a dark-field light source with an annular ring. Simulations correctly reproduce a counterintuitive blue shift in the scattering spectra from gold nanodisks with diameters beyond 140 nm. We believe that our proposed simulation method can potentially be applied as a general tool for simulating the dark-field scattering spectra of plasmonic nanostructures as well as other dielectric nanostructures with sizes beyond the quasi-static limit.
Particle-in-cell simulation study on halo formation in anisotropic beams
NASA Astrophysics Data System (ADS)
Ikegami, Masanori
2000-11-01
In a recent paper (M. Ikegami, Nucl. Instr. and Meth. A 435 (1999) 284), we investigated halo formation processes in transversely anisotropic beams based on the particle-core model. The effect of simultaneous excitation of two normal modes of core oscillation, i.e., high- and low-frequency modes, was examined. In the present study, self-consistent particle simulations are performed to confirm the results obtained in the particle-core analysis. In these simulations, it is confirmed that the particle-core analysis can predict the halo extent accurately even in anisotropic situations. Furthermore, we find that the halo intensity is enhanced in some cases where two normal modes of core oscillation are simultaneously excited as expected in the particle-core analysis. This result is of practical importance because pure high-frequency mode oscillation has frequently been assumed in preceding halo studies. The dependence of halo intensity on the 2:1 fixed point locations is also discussed.
ERIC Educational Resources Information Center
Sengul Avsar, Asiye; Tavsancil, Ezel
2017-01-01
This study analysed polytomous items' psychometric properties according to nonparametric item response theory (NIRT) models. Thus, simulated datasets, with three different test lengths (10, 20 and 30 items), three sample distributions (normal, right-skewed and left-skewed) and three sample sizes (100, 250 and 500), were generated by conducting 20…
A Novel Temporal Bone Simulation Model Using 3D Printing Techniques.
Mowry, Sarah E; Jammal, Hachem; Myer, Charles; Solares, Clementino Arturo; Weinberger, Paul
2015-09-01
An inexpensive temporal bone model for use in a temporal bone dissection laboratory setting can be made using a commercially available, consumer-grade 3D printer. Several models for a simulated temporal bone have been described but use commercial-grade printers and materials to produce these models. The goal of this project was to produce a plastic simulated temporal bone on an inexpensive 3D printer that recreates the visual and haptic experience associated with drilling a human temporal bone. Images from a high-resolution CT of a normal temporal bone were converted into stereolithography files via commercially available software, with image conversion and print settings adjusted to achieve optimal print quality. The temporal bone model was printed using acrylonitrile butadiene styrene (ABS) plastic filament on a MakerBot 2x 3D printer. Simulated temporal bones were drilled by seven expert temporal bone surgeons, assessing the fidelity of the model as compared with a human cadaveric temporal bone. Using a four-point scale, the simulated bones were assessed for haptic experience and recreation of the temporal bone anatomy. The created model was felt to be an accurate representation of a human temporal bone. All raters felt strongly this would be a good training model for junior residents or to simulate difficult surgical anatomy. Material cost for each model was $1.92. A realistic, inexpensive, and easily reproducible temporal bone model can be created on a consumer-grade desktop 3D printer.
Weibull mixture regression for marginal inference in zero-heavy continuous outcomes.
Gebregziabher, Mulugeta; Voronca, Delia; Teklehaimanot, Abeba; Santa Ana, Elizabeth J
2017-06-01
Continuous outcomes with a preponderance of zero values are ubiquitous in data that arise from biomedical studies, for example studies of addictive disorders. This is known to lead to violation of standard assumptions in parametric inference and enhances the risk of misleading conclusions unless managed properly. Two-part models are commonly used to deal with this problem. However, standard two-part models have limitations with respect to obtaining parameter estimates that have a marginal interpretation of covariate effects, which is important in many biomedical applications. Recently, marginalized two-part models have been proposed, but their development is limited to log-normal and log-skew-normal distributions. Thus, in this paper, we propose a finite mixture approach, with Weibull mixture regression as a special case, to deal with the problem. We use an extensive simulation study to assess the performance of the proposed model in finite samples and to make comparisons with other families of models via statistical information and mean squared error criteria. We demonstrate its application on real data from a randomized controlled trial of addictive disorders. Our results show that a two-component Weibull mixture model is preferred for modeling zero-heavy continuous data when the non-zero part is generated from a Weibull or similar distribution such as the Gamma or truncated Gaussian.
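The zero-heavy density described above combines a point mass at zero with a two-component Weibull mixture for the positive part. A sketch with illustrative parameters (the weights, shapes and scales are invented for the example):

```python
import math

def weibull_pdf(y, shape, scale):
    """Density of a Weibull(shape, scale) distribution at y > 0."""
    return (shape / scale) * (y / scale) ** (shape - 1) * math.exp(-((y / scale) ** shape))

def zero_heavy_mix_pdf(y, p_zero, w, k1, l1, k2, l2):
    """P(Y = 0) = p_zero; given Y > 0, a two-component Weibull
    mixture with weight w on the first component."""
    if y == 0:
        return p_zero
    return (1.0 - p_zero) * (w * weibull_pdf(y, k1, l1)
                             + (1.0 - w) * weibull_pdf(y, k2, l2))

# the continuous part carries probability mass 1 - p_zero (midpoint quadrature)
grid = [i * 0.001 + 0.0005 for i in range(40000)]   # midpoints on (0, 40)
mass = sum(zero_heavy_mix_pdf(y, 0.3, 0.6, 1.5, 2.0, 3.0, 8.0) for y in grid) * 0.001
```

The mixture weight, shape and scale parameters would be estimated jointly with the zero probability, which is what allows marginal covariate effects to be recovered in the full regression model.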
NASA Astrophysics Data System (ADS)
Steinberg, Idan; Harbater, Osnat; Gannot, Israel
2014-07-01
The diffusion approximation is useful for many optical diagnostic modalities, such as near-infrared spectroscopy. However, the simple normal-incidence, semi-infinite-layer model may prove lacking for estimating deep-tissue optical properties such as those required for monitoring cerebral hemodynamics, especially in neonates. To answer this need, we present an analytical multilayered, oblique-incidence diffusion model. Initially, the model equations are derived in vector-matrix form to facilitate fast and simple computation. Then, the spatiotemporal reflectance predicted by the model for a complex neonate head is compared with time-resolved Monte Carlo (TRMC) simulations under a wide range of physiologically feasible parameters. The high accuracy of the multilayer model is demonstrated by its deviation from the TRMC simulations remaining within a few percent even under the most demanding conditions. We then turn to the inverse problem and estimate the oxygen saturation of deep brain tissues from the temporal and spatial behavior of the reflectance. Results indicate that temporal features of the reflectance are more sensitive to deep-layer optical parameters. The estimation is shown to be more accurate and robust than that of the commonly used single-layer diffusion model. Finally, the limitations of such approaches are discussed thoroughly.
Simulating Bone Loss in Microgravity Using Mathematical Formulations of Bone Remodeling
NASA Technical Reports Server (NTRS)
Pennline, James A.
2009-01-01
Most mathematical models of bone remodeling are used to simulate a specific bone disease, by disrupting the steady state or balance in the normal remodeling process, and to simulate a therapeutic strategy. In this work, the ability of a mathematical model of bone remodeling to simulate bone loss as a function of time under the conditions of microgravity is investigated. The model is formed by combining a previously developed set of biochemical, cellular dynamics, and mechanical stimulus equations in the literature with two newly proposed equations; one governing the rate of change of the area of cortical bone tissue in a cross section of a cylindrical section of bone and one governing the rate of change of calcium in the bone fluid. The mechanical stimulus comes from a simple model of stress due to a compressive force on a cylindrical section of bone which can be reduced to zero to mimic the effects of skeletal unloading in microgravity. The complete set of equations formed is a system of first order ordinary differential equations. The results of selected simulations are displayed and discussed. Limitations and deficiencies of the model are also discussed as well as suggestions for further research.
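The structure of such a model, first-order rate equations driven by a mechanical stimulus that unloading sets to zero, can be illustrated with a deliberately simple one-equation sketch. The rate constant, setpoint and time step are invented for illustration and are not the paper's values:

```python
def simulate_bone_area(days, dt=0.01, k=0.001, stimulus=0.0, a0=100.0):
    """Forward-Euler integration of dA/dt = -k * A * (1 - stimulus).
    With stimulus = 1 (normal loading) remodeling is balanced and A holds
    steady; with stimulus = 0 (microgravity unloading) A decays over time."""
    a = a0
    steps = int(days / dt)
    for _ in range(steps):
        a += dt * (-k * a * (1.0 - stimulus))
    return a

loaded = simulate_bone_area(100.0, stimulus=1.0)    # stays at baseline
unloaded = simulate_bone_area(100.0, stimulus=0.0)  # ~ a0 * exp(-k * t)
```

The actual model couples many such equations (cell populations, biochemistry, calcium exchange), but the unloading mechanism enters in the same way: the stimulus term is reduced to zero and the coupled rates drift away from their balanced steady state.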
NASA Astrophysics Data System (ADS)
Gektin, Yu. M.; Egoshkin, N. A.; Eremeev, V. V.; Kuznecov, A. E.; Moskatinyev, I. V.; Smelyanskiy, M. B.
2017-12-01
A set of standardized models and algorithms for the geometric normalization and georeferencing of images from geostationary and highly elliptical Earth observation systems is considered. The algorithms can process information from modern scanning multispectral sensors with two-coordinate scanning and represent normalized images in an optimal projection. Problems of high-precision ground calibration of the imaging equipment using reference objects, as well as issues of in-flight calibration and refinement of the geometric models using absolute and relative reference points, are considered. Practical testing of the models, algorithms, and technologies was performed during the calibration of sensors for spacecraft of the Electro-L series and during the simulation of the prospective Arktika system.
Simulation of realistic retinoscopic measurement
NASA Astrophysics Data System (ADS)
Tan, Bo; Chen, Ying-Ling; Baker, K.; Lewis, J. W.; Swartz, T.; Jiang, Y.; Wang, M.
2007-03-01
Realistic simulation of ophthalmic measurements on normal and diseased eyes is presented. We use clinical data of ametropic and keratoconus patients to construct anatomically accurate three-dimensional eye models and simulate the measurement of a streak retinoscope with all the optical elements. The results show the clinical observations including the anomalous motion in high myopia and the scissors reflex in keratoconus. The demonstrated technique can be applied to other ophthalmic instruments and to other and more extensively abnormal eye conditions. It provides promising features for medical training and for evaluating and developing ocular instruments.
NASA Technical Reports Server (NTRS)
Powers, Bruce G.
1996-01-01
The ability to use flight data to determine an aircraft model with structural dynamic effects suitable for piloted simulation and handling qualities analysis has been developed. This technique was demonstrated using SR-71 flight test data. For the SR-71 aircraft, the most significant structural response is the longitudinal first-bending mode. This mode was modeled as a second-order system, and the other higher-order modes were modeled as a time delay. The distribution of the modal response at various fuselage locations was developed using a uniform beam solution, which can be calibrated using flight data. This approach was compared to the mode shape obtained from the ground vibration test, and the general form of the uniform beam solution was found to be a good representation of the mode shape in the areas of interest. To calibrate the solution, pitch-rate and normal-acceleration instrumentation is required for at least two locations. With the resulting structural model incorporated into the simulation, a good representation of the flight characteristics was provided for handling qualities analysis and piloted simulation.
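The structural representation described, a single second-order mode for the first-bending response, can be sketched as a discrete simulation. The natural frequency and damping below are illustrative numbers, not SR-71 identified values:

```python
import math

def second_order_step(omega_n, zeta, t_end, dt=1e-4):
    """Step response of x'' + 2*zeta*omega_n*x' + omega_n**2*x = omega_n**2*u,
    the standard second-order form used to model a single structural mode,
    integrated with semi-implicit Euler."""
    x, v = 0.0, 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        a = omega_n**2 * (1.0 - x) - 2.0 * zeta * omega_n * v
        v += a * dt
        x += v * dt
    return x

# a well-damped 2 Hz mode settles to the commanded value
x_final = second_order_step(omega_n=2 * math.pi * 2.0, zeta=0.7, t_end=5.0)
```

The residual higher-order modes would then be lumped into a pure delay, y(t) = x(t - tau), applied after this second-order response.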
A model for generating Surface EMG signal of m. Tibialis Anterior.
Siddiqi, Ariba; Kumar, Dinesh; Arjunan, Sridhar P
2014-01-01
A model that simulates the surface electromyogram (sEMG) signal of m. Tibialis Anterior has been developed and tested. It has a firing rate equation based on experimental findings and a recruitment threshold based on an observed statistical distribution. Importantly, it considers both slow and fast fibre types, distinguished by their conduction velocities. The model assumes that the deeper unipennate half of the muscle does not contribute significantly to the potential induced on the surface of the muscle, and approximates the muscle as having a parallel structure. The model was validated by comparing the simulated and experimental sEMG signal recordings. Experiments were conducted on eight subjects who performed isometric dorsiflexion at 10, 20, 30, 50, 75, and 100% maximal voluntary contraction. The normalized root mean square and median frequency of the experimental and simulated EMG signals were computed, and the slopes of their linear relationships with force were statistically analyzed. The gradients were found to be similar (p>0.05) for the experimental and simulated sEMG signals, validating the proposed model.
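The recruitment-threshold and firing-rate structure such models use can be sketched in the style of the classic Fuglevand formulation; the constants below are generic illustrations, not the paper's fitted values:

```python
import math

def recruitment_thresholds(n_units, recruit_range=30.0):
    """Exponentially spaced recruitment thresholds (as fractions of maximal
    drive): many low-threshold units, progressively fewer high-threshold ones."""
    a = math.log(recruit_range)
    return [math.exp(a * i / n_units) / recruit_range for i in range(1, n_units + 1)]

def firing_rate(drive, threshold, fr_min=8.0, fr_max=35.0, gain=40.0):
    """Linear rate increase above threshold, saturating at fr_max (pulses/s);
    a unit below its threshold is silent."""
    if drive < threshold:
        return 0.0
    return min(fr_min + gain * (drive - threshold), fr_max)

thr = recruitment_thresholds(80)
active = sum(1 for t in thr if firing_rate(0.5, t) > 0)  # units recruited at 50% drive
```

Summing the action-potential trains of the recruited units, with type-dependent conduction velocities shaping each waveform, is what produces the simulated sEMG whose RMS grows with force.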
A sEMG model with experimentally based simulation parameters.
Wheeler, Katherine A; Shimada, Hiroshima; Kumar, Dinesh K; Arjunan, Sridhar P
2010-01-01
A differential, time-invariant, surface electromyogram (sEMG) model has been implemented. While it is based on existing EMG models, the novelty of this implementation is that it assigns more accurate distributions of variables to create realistic motor unit (MU) characteristics. Variables such as muscle fibre conduction velocity, jitter (the change in the interpulse interval between subsequent action potential firings) and motor unit size have been considered to follow normal distributions about an experimentally obtained mean. In addition, motor unit firing frequencies have been considered to have non-linear and type based distributions that are in accordance with experimental results. Motor unit recruitment thresholds have been considered to be related to the MU type. The model has been used to simulate single channel differential sEMG signals from voluntary, isometric contractions of the biceps brachii muscle. The model has been experimentally verified by conducting experiments on three subjects. Comparison between simulated signals and experimental recordings shows that the Root Mean Square (RMS) increases linearly with force in both cases. The simulated signals also show similar values and rates of change of RMS to the experimental signals.
Fault detection and diagnosis using neural network approaches
NASA Technical Reports Server (NTRS)
Kramer, Mark A.
1992-01-01
Neural networks can be used to detect and identify abnormalities in real-time process data. Two basic approaches can be used: the first is based on training networks with data representing both normal and abnormal modes of process behavior, and the second on statistical characterization of the normal mode only. Given data representative of process faults, radial basis function networks can effectively identify failures. This approach is often limited by the lack of fault data, but can be facilitated by process simulation. The second approach employs elliptical and radial basis function neural networks and other models to learn the statistical distributions of process observables under normal conditions. Analytical models of failure modes can then be applied in combination with the neural network models to identify faults. Special methods can be applied to compensate for sensor failures, producing real-time estimates of missing or failed sensor values based on the correlations codified in the neural network.
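The second, normal-mode-only approach can be sketched generically: characterize normal-mode data by an elliptical (Gaussian) model and flag observations that fall far outside it. This is an invented illustration using a Mahalanobis-distance test, not the paper's specific network architecture; the data and threshold are assumptions:

```python
import numpy as np

def fit_normal_model(X):
    """Characterize the normal operating mode by its mean and inverse covariance."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    """Distance of an observation from the normal-mode ellipsoid."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# simulated "normal mode" process data: two correlated observables
rng = np.random.default_rng(0)
normal_data = rng.multivariate_normal([10.0, 5.0], [[1.0, 0.6], [0.6, 1.0]], size=2000)
mu, cov_inv = fit_normal_model(normal_data)

threshold = 3.5  # flag points far outside the normal-mode ellipsoid
in_spec = mahalanobis(np.array([10.2, 5.1]), mu, cov_inv)
faulty = mahalanobis(np.array([14.0, 1.0]), mu, cov_inv)
print(in_spec < threshold, faulty > threshold)  # True True
```

In the paper's framing, the elliptical basis function network learns a richer (mixture-like) version of this normal-mode density; the Mahalanobis test above is its single-component special case.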
NASA Astrophysics Data System (ADS)
Anikin, A. S.
2018-06-01
Conditional statistical characteristics of the phase difference are considered depending on the ratio of instantaneous output signal amplitudes of spatially separated weakly directional antennas for the normal field model for paths with radio-wave scattering. The dependences obtained are related to the physical processes on the radio-wave propagation path. The normal model parameters are established at which the statistical characteristics of the phase difference depend on the ratio of the instantaneous amplitudes and hence can be used to measure the phase difference. Using Shannon's formula, the amount of information on the phase difference of signals contained in the ratio of their amplitudes is calculated depending on the parameters of the normal field model. Approaches are suggested to reduce the shift of phase difference measured for paths with radio-wave scattering. A comparison with results of computer simulation by the Monte Carlo method is performed.
[Simulation and data analysis of stereological modeling based on virtual slices].
Wang, Hao; Shen, Hong; Bai, Xiao-yan
2008-05-01
To establish a computer-assisted stereological model simulating the process of slice sectioning, and to evaluate the relationship between the section surface and the estimated three-dimensional structure. The model was designed mathematically and implemented as Win32 software based on MFC, using Microsoft Visual Studio as the IDE, to simulate the infinite process of sectioning and to analyze the data derived from the model. The linearity of fit of the model was evaluated by comparison with the traditional formula. The software allowed random sectioning of particles distributed randomly in an ideal virtual cube. The stereological parameters scored very high (>94.5% and 92%) in homogeneity and independence tests. The density, shape, and size data of the sections were tested and conformed to a normal distribution. The output of the model and that of the image analysis system showed statistical correlation and consistency. The algorithm described can be used for evaluating the stereological parameters of the structure of tissue slices.
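The relationship between random sections and the underlying 3D structure can be illustrated with a classical stereological identity: for planes cutting a sphere of radius R at uniformly random distances from its center, the expected section-circle area is (2/3)*pi*R^2. A Monte Carlo sketch of this idea (a generic illustration, not the paper's Win32 implementation):

```python
import numpy as np

def simulate_sections(R=1.0, n=200_000, seed=1):
    """Cut a sphere of radius R with planes at uniformly random distances
    from the center and collect the resulting section-circle areas."""
    rng = np.random.default_rng(seed)
    h = rng.uniform(0.0, R, size=n)  # plane distance from sphere center
    r2 = R**2 - h**2                 # squared radius of the section circle
    return np.pi * r2                # section areas

areas = simulate_sections()
# classical identity: E[section area] = (2/3) * pi * R^2 (about 2.094 for R = 1)
print(areas.mean())
```

The simulated mean converges to 2*pi/3, showing how repeated virtual sectioning recovers a known property of the 3D structure from 2D profiles.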
Effects of septum and pericardium on heart-lung interactions in a cardiopulmonary simulation model.
Karamolegkos, Nikolaos; Albanese, Antonio; Chbat, Nicolas W
2017-07-01
Mechanical heart-lung interactions are often overlooked in clinical settings. However, their impact on cardiac function can be quite significant. Mechanistic physiology-based models can provide invaluable insights into such cardiorespiratory interactions, which occur not only under external mechanical ventilatory support but in normal physiology as well. In this work, we focus on the cardiac component of a previously developed mathematical model of the human cardiopulmonary system, aiming to improve the model's response to the intrathoracic pressure variations that are associated with the respiratory cycle. Interventricular septum and pericardial membrane are integrated into the existing model. Their effect on the overall cardiac response is explained by means of comparison against simulation results from the original model as well as experimental data from literature.
Li, Simeng; Li, Nianbei
2018-03-28
For one-dimensional (1D) nonlinear atomic lattices, models with on-site nonlinearities, such as the Frenkel-Kontorova (FK) and ϕ4 lattices, exhibit normal energy transport, while models with inter-site nonlinearities, such as the Fermi-Pasta-Ulam-β (FPU-β) lattice, exhibit anomalous energy transport. The 1D Discrete Nonlinear Schrödinger (DNLS) equations with on-site nonlinearities have been studied previously, and normal energy transport was also found. Here, we investigate the energy transport of 1D FPU-like DNLS equations with inter-site nonlinearities. Extending from the FPU-β lattice, the renormalized vibration theory is developed for the FPU-like DNLS models, and the predicted renormalized vibrations are verified by direct numerical simulations, as for the FPU-β lattice. However, when the energy diffusion processes are explored, normal energy transport is observed for the 1D FPU-like DNLS models, unlike their atomic-lattice counterpart, the FPU-β lattice. The reason might be that, whereas nonlinear atomic lattices with on-site nonlinearities have one fewer conserved quantity than models with inter-site nonlinearities, DNLS models with on-site or inter-site nonlinearities have the same number of conserved quantities as a result of gauge transformation.
NASA Astrophysics Data System (ADS)
Tao, Zhu; Shi, Runhe; Zeng, Yuyan; Gao, Wei
2017-09-01
The 3D model is an important part of simulated remote sensing for earth observation. Given the small-scale spatial extent handled by the DART software, both the detail of the model itself and the number of models distributed in the scene have an important impact on the scene canopy Normalized Difference Vegetation Index (NDVI). Taking Phragmites australis in the Yangtze Estuary as an example, and building on previous studies of model precision, this paper studied the effect of the P. australis model on canopy NDVI, mainly with respect to the cell dimension of the DART software and the density distribution of the P. australis model in the scene, as well as the choice of model density against the cost of computer running time in actual simulations. The DART cell dimensions and the density of the scene model were set using the optimal-precision model from existing research results. The simulated NDVI for different model densities under different cell dimensions was examined by error analysis. By studying the relationship between relative error, absolute error, and time cost, we established a density selection method for the P. australis model in simulations of small-scale scenes. Experiments showed that the number of P. australis plants in the simulated scene need not match that of the real environment, owing to the difference between the 3D model and real scenarios. The best simulation results, in both accuracy and visual effect, were obtained by keeping the density at about 40 plants per square meter.
Cheng, Wei; Cornwall, Roger; Crouch, Dustin L; Li, Zhongyu; Saul, Katherine R
2015-06-01
Two potential mechanisms leading to postural and osseous shoulder deformity after brachial plexus birth palsy are muscle imbalance between functioning internal rotators and paralyzed external rotators and impaired longitudinal growth of paralyzed muscles. Our goal was to evaluate the combined and isolated effects of these 2 mechanisms on transverse plane shoulder forces using a computational model of C5-6 brachial plexus injury. We modeled a C5-6 injury using a computational musculoskeletal upper limb model. Muscles expected to be denervated by C5-6 injury were classified as affected, with the remaining shoulder muscles classified as unaffected. To model muscle imbalance, affected muscles were given no resting tone whereas unaffected muscles were given resting tone at 30% of maximal activation. To model impaired growth, affected muscles were reduced in length by 30% compared with normal whereas unaffected muscles remained normal in length. Four scenarios were simulated: normal, muscle imbalance only, impaired growth only, and both muscle imbalance and impaired growth. Passive shoulder rotation range of motion and glenohumeral joint reaction forces were evaluated to assess postural and osseous deformity. All impaired scenarios exhibited restricted range of motion and increased and posteriorly directed compressive glenohumeral joint forces. Individually, impaired muscle growth caused worse restriction in range of motion and higher and more posteriorly directed glenohumeral forces than did muscle imbalance. Combined muscle imbalance and impaired growth caused the most restricted joint range of motion and the highest joint reaction force of all scenarios. Both muscle imbalance and impaired longitudinal growth contributed to range of motion and force changes consistent with clinically observed deformity, although the most substantial effects resulted from impaired muscle growth. 
Simulations suggest that treatment strategies emphasizing treatment of impaired longitudinal growth are warranted for reducing deformity after brachial plexus birth palsy. Copyright © 2015 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Jacob, Richard E.; Kuprat, Andrew P.; Einstein, Daniel R.; Corley, Richard A.
2016-01-01
Context Computational fluid dynamics (CFD) simulations of airflows coupled with physiologically-based pharmacokinetic (PBPK) modeling of respiratory tissue doses of airborne materials have traditionally used either steady-state inhalation or a sinusoidal approximation of the breathing cycle for airflow simulations despite their differences from normal breathing patterns. Objective Evaluate the impact of realistic breathing patterns, including sniffing, on predicted nasal tissue concentrations of a reactive vapor that targets the nose in rats as a case study. Materials and methods Whole-body plethysmography measurements from a free-breathing rat were used to produce profiles of normal breathing, sniffing, and combinations of both as flow inputs to CFD/PBPK simulations of acetaldehyde exposure. Results For the normal measured ventilation profile, modest reductions in time- and tissue depth-dependent areas under the curve (AUC) acetaldehyde concentrations were predicted in the wet squamous, respiratory, and transitional epithelium along the main airflow path, while corresponding increases were predicted in the olfactory epithelium, especially the most distal regions of the ethmoid turbinates, versus the idealized profile. The higher amplitude/frequency sniffing profile produced greater AUC increases over the idealized profile in the olfactory epithelium, especially in the posterior region. Conclusions The differences in tissue AUCs at known lesion-forming regions for acetaldehyde between normal and idealized profiles were minimal, suggesting that sinusoidal profiles may be used for this chemical and exposure concentration. However, depending upon the chemical, exposure system and concentration, and the time spent sniffing, the use of realistic breathing profiles—including sniffing—could become an important modulator for local tissue dose predictions. PMID:26986954
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colby, Sean M.; Kabilan, Senthil; Jacob, Richard E.
Abstract Context: Computational fluid dynamics (CFD) simulations of airflows coupled with physiologically based pharmacokinetic (PBPK) modeling of respiratory tissue doses of airborne materials have traditionally used either steady-state inhalation or a sinusoidal approximation of the breathing cycle for airflow simulations despite their differences from normal breathing patterns. Objective: Evaluate the impact of realistic breathing patterns, including sniffing, on predicted nasal tissue concentrations of a reactive vapor that targets the nose in rats as a case study. Materials and methods: Whole-body plethysmography measurements from a free-breathing rat were used to produce profiles of normal breathing, sniffing and combinations of both as flow inputs to CFD/PBPK simulations of acetaldehyde exposure. Results: For the normal measured ventilation profile, modest reductions in time- and tissue depth-dependent areas under the curve (AUC) acetaldehyde concentrations were predicted in the wet squamous, respiratory and transitional epithelium along the main airflow path, while corresponding increases were predicted in the olfactory epithelium, especially the most distal regions of the ethmoid turbinates, versus the idealized profile. The higher amplitude/frequency sniffing profile produced greater AUC increases over the idealized profile in the olfactory epithelium, especially in the posterior region. Conclusions: The differences in tissue AUCs at known lesion-forming regions for acetaldehyde between normal and idealized profiles were minimal, suggesting that sinusoidal profiles may be used for this chemical and exposure concentration. However, depending upon the chemical, exposure system and concentration and the time spent sniffing, the use of realistic breathing profiles, including sniffing, could become an important modulator for local tissue dose predictions.
NASA Technical Reports Server (NTRS)
Budd, P. A.
1981-01-01
The secondary electron emission coefficient was measured for a charged polymer (FEP-Teflon) with normally and obliquely incident primary electrons. Theories of secondary emission are reviewed and the experimental data is compared to these theories. Results were obtained for angles of incidence up to 60 deg in normal electric fields of 1500 V/mm. Additional measurements in the range from 50 to 70 deg were made in regions where the normal and tangential fields were approximately equal. The initial input angles and measured output point of the electron beam could be analyzed with computer simulations in order to determine the field within the chamber. When the field is known, the trajectories can be calculated for impacting electrons having various energies and angles of incidence. There was close agreement between the experimental results and the commonly assumed theoretical model in the presence of normal electric fields for angles of incidence up to 60 deg. High angle results obtained in the presence of tangential electric fields did not agree with the theoretical models.
Cham, Heining; West, Stephen G.; Ma, Yue; Aiken, Leona S.
2012-01-01
A Monte Carlo simulation was conducted to investigate the robustness of four latent variable interaction modeling approaches (Constrained Product Indicator [CPI], Generalized Appended Product Indicator [GAPI], Unconstrained Product Indicator [UPI], and Latent Moderated Structural Equations [LMS]) under high degrees of non-normality of the observed exogenous variables. Results showed that the CPI and LMS approaches yielded biased estimates of the interaction effect when the exogenous variables were highly non-normal. When the violation of normality was not severe (normal; symmetric with excess kurtosis < 1), the LMS approach yielded the most efficient estimates of the latent interaction effect with the highest statistical power. In highly non-normal conditions, the GAPI and UPI approaches with ML estimation yielded unbiased latent interaction effect estimates, with acceptable actual Type-I error rates for both the Wald and likelihood ratio tests of the interaction effect at N ≥ 500. An empirical example illustrated the use of the four approaches in testing a latent variable interaction between academic self-efficacy and positive family role models in the prediction of academic performance. PMID:23457417
Simulation of keratoconus observation in photorefraction
NASA Astrophysics Data System (ADS)
Chen, Ying-Ling; Tan, B.; Baker, K.; Lewis, J. W. L.; Swartz, T.; Jiang, Y.; Wang, M.
2006-11-01
In recent years, keratoconus (KC) has gained increasing attention due to its treatment options and to the popularity of keratorefractive surgery. This paper investigates the potential for identification of KC using photorefraction (PR), an optical technique that is similar to objective retinoscopy and is commonly used for large-scale ocular screening. Using personalized eye models of both KC and pre-LASIK patients, computer simulations were performed to achieve visualization of this ophthalmic measurement. The simulations were validated by comparing results to two sets of experimental measurements. The PR images show distinguishable differences between KC eyes and eyes that are either normal or ametropic. The simulation technique with personalized modeling can be extended to the development of other ophthalmic instruments, making investigations possible with a minimal number of real human subjects. The application is also of great interest for medical training.
The impacts of precipitation amount simulation on hydrological modeling in Nordic watersheds
NASA Astrophysics Data System (ADS)
Li, Zhi; Brissette, Fancois; Chen, Jie
2013-04-01
Stochastic modeling of daily precipitation is very important for hydrological modeling, especially when no observed data are available. Precipitation is usually modeled with a two-component model: occurrence generation and amount simulation. For occurrence simulation, the most common method is the first-order two-state Markov chain, owing to its simplicity and good performance. However, various probability distributions have been reported for simulating precipitation amounts, and spatiotemporal differences exist in the applicability of different distribution models. Assessing the applicability of different distribution models is therefore necessary in order to provide more accurate precipitation information. Six precipitation probability distributions (exponential, gamma, Weibull, skewed normal, mixed exponential, and hybrid exponential/Pareto) are directly and indirectly evaluated on their ability to reproduce the observed time series of precipitation amounts. Data from 24 weather stations and two watersheds (Chute-du-Diable and Yamaska) in the province of Quebec (Canada) are used for this assessment. Various indices and statistics, such as the mean, variance, frequency distribution, and extreme values, are used to quantify performance in simulating precipitation and discharge. Performance in reproducing key statistics of the precipitation time series is well correlated with the number of parameters of the distribution function, and the three-parameter models outperform the others, with the mixed exponential distribution being the best at simulating daily precipitation. The advantage of using more complex precipitation distributions is not as clear-cut when the simulated time series are used to drive a hydrological model. While the advantage of functions with more parameters is not nearly as obvious, the mixed exponential distribution nonetheless appears to be the best candidate for hydrological modeling.
The implications of choosing a distribution function with respect to hydrological modeling and climate change impact studies are also discussed.
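The two-component structure described above (first-order two-state Markov chain for occurrence, skewed distribution for wet-day amounts) can be sketched generically. Parameter values here are illustrative, not fitted to the Quebec stations:

```python
import numpy as np

def simulate_precip(n_days, p01=0.3, p11=0.6, w=0.7, mu1=2.0, mu2=12.0, seed=2):
    """Two-part daily precipitation generator (a textbook-style sketch, not
    the paper's calibrated model): a first-order two-state Markov chain for
    occurrence (p01 = P(wet|dry), p11 = P(wet|wet)) and a mixed-exponential
    distribution for wet-day amounts (mm)."""
    rng = np.random.default_rng(seed)
    wet = np.zeros(n_days, dtype=bool)
    for t in range(1, n_days):
        p = p11 if wet[t - 1] else p01
        wet[t] = rng.random() < p
    # mixed exponential: with probability w draw mean mu1, else mean mu2
    comp = rng.random(n_days) < w
    amounts = np.where(comp, rng.exponential(mu1, n_days), rng.exponential(mu2, n_days))
    return np.where(wet, amounts, 0.0)

series = simulate_precip(100_000)
wet_days = series[series > 0]
# theoretical wet-day mean: w*mu1 + (1-w)*mu2 = 0.7*2 + 0.3*12 = 5.0 mm
print(round(wet_days.mean(), 1))
```

The three parameters (w, mu1, mu2) of the mixed exponential are what give it the flexibility, noted in the abstract, to fit both the many small events and the heavy upper tail of daily precipitation.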
FLUID-STRUCTURE INTERACTION MODELS OF THE MITRAL VALVE: FUNCTION IN NORMAL AND PATHOLOGIC STATES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kunzelman, K. S.; Einstein, Daniel R.; Cochran, R. P.
2007-08-29
Successful mitral valve repair is dependent upon a full understanding of normal and abnormal mitral valve anatomy and function. Computational analysis is one such method that can be applied to simulate mitral valve function in order to analyze the roles of individual components and evaluate proposed surgical repairs. We developed the first three-dimensional, finite element (FE) computer model of the mitral valve including leaflets and chordae tendineae; however, one critical aspect that has been missing until the last few years was the evaluation of fluid flow as coupled to the function of the mitral valve structure. We present here our latest results for normal function and specific pathologic changes using a fluid-structure interaction (FSI) model. Normal valve function was first assessed, followed by pathologic material changes in collagen fiber volume fraction, fiber stiffness, fiber splay, and isotropic stiffness. Leaflet and chordal stress and strain, and papillary muscle forces, were determined. In addition, transmitral flow, time to leaflet closure, and heart valve sound were assessed. Model predictions in the normal state agreed well with a wide range of available in-vivo and in-vitro data. Further, pathologic material changes that preserved the anisotropy of the valve leaflets were found to preserve valve function. By contrast, material changes that altered the anisotropy of the valve were found to profoundly alter valve function. The addition of blood flow and an experimentally driven microstructural description of mitral tissue represent significant advances in computational studies of the mitral valve, which allow further insight to be gained. This work is another building block in the foundation of a computational framework to aid in the refinement and development of a truly noninvasive diagnostic evaluation of the mitral valve.
Ultimately, it represents the basis for simulation of surgical repair of pathologic valves in a clinical and educational setting.
Development of control systems for space shuttle vehicles. Volume 2: Appendixes
NASA Technical Reports Server (NTRS)
Stone, C. R.; Chase, T. W.; Kiziloz, B. M.; Ward, M. D.
1971-01-01
A launch phase random normal wind model is presented for delta wing, two-stage, space shuttle control system studies. Equations, data, and simulations for conventional launch studies are given as well as pitch and lateral equations and data for covariance analyses of the launch phase of MSFC vehicle B. Lateral equations and data for North American 130G and 134D are also included along with a high-altitude abort simulation.
The zoom lens of attention: Simulating shuffled versus normal text reading using the SWIFT model
Schad, Daniel J.; Engbert, Ralf
2012-01-01
Assumptions on the allocation of attention during reading are crucial for theoretical models of eye guidance. The zoom lens model of attention postulates that attentional deployment can vary from a sharp focus to a broad window. The model is closely related to the foveal load hypothesis, i.e., the assumption that the perceptual span is modulated by the difficulty of the fixated word. However, these important theoretical concepts for cognitive research have not been tested quantitatively in eye movement models. Here we show that the zoom lens model, implemented in the SWIFT model of saccade generation, captures many important patterns of eye movements. We compared the model's performance to experimental data from normal and shuffled text reading. Our results demonstrate that the zoom lens of attention might be an important concept for eye movement control in reading. PMID:22754295
Monte Carlo simulation of ferroelectric domain growth
NASA Astrophysics Data System (ADS)
Li, B. L.; Liu, X. P.; Fang, F.; Zhu, J. L.; Liu, J.-M.
2006-01-01
The kinetics of two-dimensional isothermal domain growth in a quenched ferroelectric system is investigated using Monte Carlo simulation based on a realistic Ginzburg-Landau ferroelectric model with cubic-tetragonal (square-rectangle) phase transitions. The evolution of the domain pattern and domain size with annealing time is simulated, and the stability of trijunctions and tetrajunctions of domain walls is analyzed. It is found that in this more realistic model, with strong dipole alignment anisotropy and long-range Coulomb interaction, the power law for normal domain growth still applies. Towards the late stage of domain growth, both the average domain area and the reciprocal density of domain wall junctions increase linearly with time, and one-parameter dynamic scaling of the domain growth is demonstrated.
Turbulent transport model of wind shear in thunderstorm gust fronts and warm fronts
NASA Technical Reports Server (NTRS)
Lewellen, W. S.; Teske, M. E.; Segur, H. C. O.
1978-01-01
A model of turbulent flow in the atmospheric boundary layer was used to simulate the low-level wind and turbulence profiles associated with both local thunderstorm gust fronts and synoptic-scale warm fronts. Dimensional analyses of both type fronts provided the physical scaling necessary to permit normalized simulations to represent fronts for any temperature jump. The sensitivity of the thunderstorm gust front to five different dimensionless parameters as well as a change from axisymmetric to planar geometry was examined. The sensitivity of the warm front to variations in the Rossby number was examined. Results of the simulations are discussed in terms of the conditions which lead to wind shears which are likely to be most hazardous for aircraft operations.
NASA Astrophysics Data System (ADS)
Devanand, Anjana; Ghosh, Subimal; Paul, Supantha; Karmakar, Subhankar; Niyogi, Dev
2018-06-01
Regional simulations of the seasonal Indian summer monsoon rainfall (ISMR) require an understanding of the model sensitivities to physics and resolution, and their effect on model uncertainties. It is also important to quantify the added value in the sub-regional precipitation characteristics simulated by a regional climate model (RCM) when compared to coarse-resolution rainfall products. This study presents regional model simulations of ISMR at seasonal scale using the Weather Research and Forecasting (WRF) model with synoptic-scale forcing from ERA-Interim reanalysis, for three contrasting monsoon seasons: 1994 (excess), 2002 (deficit) and 2010 (normal). The impact of four cumulus schemes, viz. Kain-Fritsch (KF), Betts-Janjić-Miller, Grell 3D and modified Kain-Fritsch (KFm), and two microphysical parameterization schemes, viz. the WRF Single Moment Class 5 scheme and the Lin et al. scheme (LIN), in eight possible combinations is analyzed. The impact of spectral nudging on model sensitivity is also studied. In WRF simulations using spectral nudging, improvement in model rainfall appears to be consistent in regions with topographic variability such as the Central Northeast and Konkan Western Ghat sub-regions. However, the results also depend on the choice of cumulus scheme, with KF and KFm performing relatively well and the eight-member ensemble mean showing better results for these sub-regions. No consistent improvement is noted in the Northeast and Peninsular Indian monsoon regions. Results indicate that regional simulations using nested domains can provide some improvements in ISMR simulations. Spectral nudging is found to improve the model simulations by reducing the intra-ensemble spread and hence the uncertainty in the model-simulated precipitation.
The results provide important insights regarding the need for further improvements in the regional climate simulations of ISMR for various sub-regions and contribute to the understanding of the added value in seasonal simulations by RCMs.
Numerical Performance Prediction of a Miniature Ramjet at Mach 4
2012-09-01
…with the computational fluid dynamics (CFD) code ANSYS CFX. The nozzle-throat area was varied to increase the backpressure, and this pushed the … normal shock that was sitting within the inlet out to the lip of the inlet cowl. Using the eddy dissipation combustion model in ANSYS CFX, a … improved accuracy in turbulence modeling.
Pisu, Massimo; Concas, Alessandro; Cao, Giacomo
2015-04-01
Cell cycle regulation governs proliferative cell capacity under normal or pathologic conditions and, in general, all in vivo/in vitro cell growth and proliferation processes. Mathematical simulation by means of reliable and predictive models is an important tool for interpreting experimental results, facilitating the definition of optimal operating conditions for in vitro cultivation, and predicting the effect of a specific drug in normal or pathologic mammalian cells. Along these lines, a novel model of cell cycle progression is proposed in this work. Specifically, it is based on a population balance (PB) approach that quantitatively describes cell cycle progression through the different phases experienced by each cell of the entire population during its own life. The transition between two consecutive cell cycle phases is simulated by taking advantage of the biochemical kinetic model developed by Gérard and Goldbeter (2009), which involves cyclin-dependent kinases (CDKs) whose regulation is achieved through a variety of mechanisms including association with cyclins and protein inhibitors, phosphorylation-dephosphorylation, and cyclin synthesis or degradation. This biochemical model properly describes the entire cell cycle of mammalian cells while maintaining a sufficient level of detail to identify checkpoints for transitions and to estimate the phase durations required by the PB. Specific examples are discussed to illustrate the ability of the proposed model to simulate the effect of drugs in in vitro trials of interest in oncology, regenerative medicine, and tissue engineering. Copyright © 2015 Elsevier Ltd. All rights reserved.
Growth and yield models for central hardwoods
Martin E. Dale; Donald E. Hilt
1989-01-01
Over the last 20 years computers have become an efficient tool to estimate growth and yield. Computerized yield estimates vary from simple approximation or interpolation of traditional normal yield tables to highly sophisticated programs that simulate the growth and yield of each individual tree.
NASA Astrophysics Data System (ADS)
Lee, Taesam
2018-05-01
Multisite stochastic simulations of daily precipitation have been widely employed in hydrologic analyses for climate change assessment and agricultural model inputs. Recently, a copula model with a gamma marginal distribution has become one of the common approaches for simulating precipitation at multiple sites. Here, we tested the correlation structure of the copula modeling. The results indicate that there is a significant underestimation of the correlation in the simulated data compared to the observed data. Therefore, we proposed an indirect method for estimating the cross-correlations when simulating precipitation at multiple stations, using the full relationship between the correlation of the observed data and that of the normally transformed data. Although this indirect method offers certain improvements in preserving the cross-correlations between sites in the original domain, the method was not reliable in application. Therefore, we further improved a simulation-based method (SBM) that was developed to model multisite precipitation occurrence. The SBM preserved the cross-correlations of the original domain well, reproducing cross-correlations around 0.2 closer to the observations than the direct method and around 0.1 closer than the indirect method. The three models were applied to the stations in the Nakdong River basin, and the SBM was the best alternative for reproducing the historical cross-correlation. The direct method significantly underestimates the correlations among the observed data, and the indirect method appeared to be unreliable.
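The correlation attenuation that motivates the indirect method can be demonstrated in a few lines: impose a correlation on standard normals, transform them to a skewed marginal, and measure the Pearson correlation again. A lognormal marginal stands in for the gamma marginal here purely to stay dependency-free; the effect is the same in kind:

```python
import math, random

# Correlated standard normals are transformed to a skewed marginal (lognormal
# here as a dependency-free stand-in for gamma); the Pearson correlation in
# the transformed domain falls below the one imposed in the normal domain.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

rng = random.Random(0)
rho = 0.8
z1, z2 = [], []
for _ in range(20000):
    a, b = rng.gauss(0, 1), rng.gauss(0, 1)
    z1.append(a)
    z2.append(rho * a + math.sqrt(1 - rho ** 2) * b)   # correlated normal

rho_normal = pearson(z1, z2)                 # close to the imposed 0.8
rho_skewed = pearson([math.exp(z) for z in z1],
                     [math.exp(z) for z in z2])        # attenuated
```

An indirect method in the spirit of the abstract would invert this mapping: choose a larger normal-domain correlation so that the transformed series attains the target correlation.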
Kim, Seongho; Jang, Hyejeong; Koo, Imhoi; Lee, Joohyoung; Zhang, Xiang
2017-01-01
Compared to other analytical platforms, comprehensive two-dimensional gas chromatography coupled with mass spectrometry (GC×GC-MS) has much increased separation power for analysis of complex samples and thus is increasingly used in metabolomics for biomarker discovery. However, accurate peak detection remains a bottleneck for wide applications of GC×GC-MS. Therefore, the normal-exponential-Bernoulli (NEB) model is generalized by gamma distribution and a new peak detection algorithm using the normal-gamma-Bernoulli (NGB) model is developed. Unlike the NEB model, the NGB model has no closed-form analytical solution, hampering its practical use in peak detection. To circumvent this difficulty, three numerical approaches, which are fast Fourier transform (FFT), the first-order and the second-order delta methods (D1 and D2), are introduced. The applications to simulated data and two real GC×GC-MS data sets show that the NGB-D1 method performs the best in terms of both computational expense and peak detection performance.
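The computational obstacle the abstract describes, that the density of normal noise plus a gamma signal has no closed form, can be illustrated numerically. A discrete convolution of the two densities on a grid produces the normal-gamma density; the FFT approach mentioned in the abstract accelerates exactly this computation. All parameter values below are illustrative:

```python
import math

# The density of X = N + G (independent normal noise plus gamma signal) has
# no closed form, but is the convolution of the two densities; an FFT would
# compute the same result faster than this direct discrete convolution.
def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def gamma_pdf(x, k=2.0, theta=1.5):
    if x <= 0.0:
        return 0.0
    return x ** (k - 1.0) * math.exp(-x / theta) / (math.gamma(k) * theta ** k)

dx = 0.05
grid = [i * dx for i in range(-200, 401)]   # covers noise and signal support
f_n = [normal_pdf(x) for x in grid]
f_g = [gamma_pdf(x) for x in grid]

n = len(grid)
f_x = []                                    # density of noise + signal
for i in range(2 * n - 1):
    lo, hi = max(0, i - n + 1), min(i, n - 1)
    f_x.append(sum(f_n[j] * f_g[i - j] for j in range(lo, hi + 1)) * dx)

total_mass = sum(f_x) * dx                  # should integrate to ~1
```

Peak detection would then score observed chromatographic intensities against this density rather than against a normal-exponential one.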
DOE Office of Scientific and Technical Information (OSTI.GOV)
Magnusdottir, Lilja; Finsterle, Stefan
2015-03-01
Supercritical fluids exist near magmatic heat sources in geothermal reservoirs, and the high enthalpy fluid is becoming more desirable for energy production with advancing technology. In geothermal modeling, the roots of the geothermal systems are normally avoided, but in order to accurately predict the thermal behavior when wells are drilled close to magmatic intrusions, it is necessary to incorporate the heat sources into the modeling scheme. Modeling supercritical conditions poses a variety of challenges due to the large gradients in fluid properties near the critical zone. This work focused on using the iTOUGH2 simulator to model the extreme temperature and pressure conditions in magmatic geothermal systems.
Fault detection and diagnosis of photovoltaic systems
NASA Astrophysics Data System (ADS)
Wu, Xing
The rapid growth of the solar industry over the past several years has expanded the significance of photovoltaic (PV) systems. One of the primary aims of research in building-integrated PV systems is to improve the system's efficiency, availability, and reliability. Although much work has been done on technological design to increase a photovoltaic module's efficiency, there is little research so far on fault diagnosis for PV systems. Faults in a PV system, if not detected, may not only reduce power generation, but also threaten the availability and reliability, effectively the "security" of the whole system. In this paper, first a circuit-based simulation baseline model of a PV system with maximum power point tracking (MPPT) is developed using MATLAB software. MATLAB is one of the most popular tools for integrating computation, visualization and programming in an easy-to-use modeling environment. Second, data collection of a PV system at variable surface temperatures and insolation levels under normal operation is acquired. The developed simulation model of the PV system is then calibrated and improved by comparing modeled I-V and P-V characteristics with measured I-V and P-V characteristics to make sure the simulated curves are close to the values measured in the experiments. Finally, based on the circuit-based simulation model, a PV model of various types of faults will be developed by changing conditions or inputs in the MATLAB model, and the I-V and P-V characteristic curves, and the time-dependent voltage and current characteristics of the fault modalities, will be characterized for each type of fault. These will be developed as benchmark I-V or P-V curves, or prototype transient curves. If a fault occurs in a PV system, polling and comparing actual measured I-V and P-V characteristic curves with both normal operational curves and these baseline fault curves will aid in fault diagnosis.
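The kind of circuit-based baseline the abstract describes can be sketched with an ideal single-diode PV model (series and shunt resistances omitted). The parameter values are illustrative, not the paper's calibrated MATLAB values; in a fuller model the photocurrent would scale with insolation and temperature:

```python
import math

# Ideal single-diode PV model: I(V) = I_PH - I_0 * (exp(V / N_VT) - 1).
# Sweeping V gives the I-V and P-V curves; the P-V maximum is the MPPT target.
I_PH = 5.0      # photocurrent (A), illustrative
I_0 = 1e-9      # diode saturation current (A), illustrative
N_VT = 1.0      # lumped ideality factor x cell count x thermal voltage (V)

def current(v):
    return I_PH - I_0 * (math.exp(v / N_VT) - 1.0)

voltages = [i * 0.01 for i in range(2300)]
powers = [v * current(v) for v in voltages]
v_mpp = voltages[powers.index(max(powers))]   # voltage an MPPT tracker seeks
i_sc = current(0.0)                           # short-circuit current
```

Fault modalities could then be mimicked by perturbing these inputs, for example reducing I_PH for shading, and comparing the resulting curves against this baseline.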
Williamson, Tanja N.; Lant, Jeremiah G.; Claggett, Peter; Nystrom, Elizabeth A.; Milly, Paul C.D.; Nelson, Hugh L.; Hoffman, Scott A.; Colarullo, Susan J.; Fischer, Jeffrey M.
2015-11-18
The Water Availability Tool for Environmental Resources (WATER) is a decision support system for the nontidal part of the Delaware River Basin that provides a consistent and objective method of simulating streamflow under historical, forecasted, and managed conditions. In order to quantify the uncertainty associated with these simulations, however, streamflow and the associated hydroclimatic variables of potential evapotranspiration, actual evapotranspiration, and snow accumulation and snowmelt must be simulated and compared to long-term, daily observations from sites. This report details model development and optimization, statistical evaluation of simulations for 57 basins ranging from 2 to 930 km2 and 11.0 to 99.5 percent forested cover, and how this statistical evaluation of daily streamflow relates to simulating environmental changes and management decisions that are best examined at monthly time steps normalized over multiple decades. The decision support system provides a database of historical spatial and climatic data for simulating streamflow for 2001–11, in addition to land-cover and general circulation model forecasts that focus on 2030 and 2060. WATER integrates geospatial sampling of landscape characteristics, including topographic and soil properties, with a regionally calibrated hillslope-hydrology model, an impervious-surface model, and hydroclimatic models that were parameterized by using three hydrologic response units: forested, agricultural, and developed land cover. This integration enables the regional hydrologic modeling approach used in WATER without requiring site-specific optimization or those stationary conditions inferred when using a statistical model.
Statistical-Dynamical Seasonal Forecasts of Central-Southwest Asian Winter Precipitation.
NASA Astrophysics Data System (ADS)
Tippett, Michael K.; Goddard, Lisa; Barnston, Anthony G.
2005-06-01
Interannual precipitation variability in central-southwest (CSW) Asia has been associated with East Asian jet stream variability and western Pacific tropical convection. However, atmospheric general circulation models (AGCMs) forced by observed sea surface temperature (SST) poorly simulate the region's interannual precipitation variability. The statistical-dynamical approach uses statistical methods to correct systematic deficiencies in the response of AGCMs to SST forcing. Statistical correction methods linking model-simulated Indo-west Pacific precipitation and observed CSW Asia precipitation result in modest, but statistically significant, cross-validated simulation skill in the northeast part of the domain for the period from 1951 to 1998. The statistical-dynamical method is also applied to recent (winter 1998/99 to 2002/03) multimodel, two-tier December-March precipitation forecasts initiated in October. This period includes 4 yr (winter of 1998/99 to 2001/02) of severe drought. Tercile probability forecasts are produced using ensemble-mean forecasts and forecast error estimates. The statistical-dynamical forecasts show enhanced probability of below-normal precipitation for the four drought years and capture the return to normal conditions in part of the region during the winter of 2002/03.
"May Kabul be without gold, but not without snow." (Traditional Afghan proverb)
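The tercile-probability step can be made concrete: with Gaussian forecast errors, the probability of each climatological category is a difference of normal CDFs evaluated at the climatological terciles. The terciles and forecast values below are illustrative, in standardized units:

```python
import math

# Tercile probabilities from an ensemble mean and a forecast error estimate,
# assuming Gaussian forecast errors. Values are illustrative.
def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

clim_lower, clim_upper = -0.43, 0.43   # terciles of a standard normal climatology
fcst_mean, fcst_err = -0.8, 0.7        # e.g. a drought-year precipitation forecast

p_below = norm_cdf(clim_lower, fcst_mean, fcst_err)
p_normal = norm_cdf(clim_upper, fcst_mean, fcst_err) - p_below
p_above = 1.0 - p_below - p_normal     # below-normal category is enhanced here
```

A dry ensemble mean thus translates into a below-normal probability well above the climatological 1/3, which is how the drought years show up in the forecasts.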
NASA Astrophysics Data System (ADS)
Wang, Peitao; Cai, Meifeng; Ren, Fenhua; Li, Changhong; Yang, Tianhong
2017-07-01
This paper develops a numerical approach to determine the mechanical behavior of discrete fracture network (DFN) models based on a digital image processing technique and the particle flow code (PFC2D). A series of direct shear tests of jointed rocks were numerically performed to study the effect of normal stress, friction coefficient and joint bond strength on the mechanical behavior of jointed rock, and to evaluate the influence of micro-parameters on the shear properties of jointed rocks using the proposed approach. The complete shear stress-displacement curve of the DFN model under direct shear tests was presented to evaluate the failure processes of jointed rock. The results show that the peak and residual strength are sensitive to normal stress. A higher normal stress has a greater effect on the initiation and propagation of cracks. Additionally, an increase in the bond strength ratio results in an increase in the number of both shear and normal cracks. The friction coefficient was also found to have a significant influence on the shear strength and shear cracks: increasing the friction coefficient reduced the initiation of normal cracks. The unique contribution of this paper is the proposed modeling technique to simulate the mechanical behavior of jointed rock mass based on particle mechanics approaches.
Plancade, Sandra; Rozenholc, Yves; Lund, Eiliv
2012-12-11
Illumina BeadArray technology includes non-specific negative control features that allow a precise estimation of the background noise. As an alternative to the background subtraction proposed in BeadStudio, which leads to an important loss of information by generating negative values, a background correction method modeling the observed intensities as the sum of an exponentially distributed signal and normally distributed noise has been developed. Nevertheless, Wang and Ye (2012) displayed a kernel-based estimator of the signal distribution on Illumina BeadArrays and suggested that a gamma distribution would represent a better modeling of the signal density. Hence, the normal-exponential modeling may not be appropriate for Illumina data, and background corrections derived from this model may lead to incorrect estimates. We propose a more flexible modeling based on a gamma distributed signal and a normally distributed background noise, and develop the associated background correction, implemented in the R-package NormalGamma. Our model proves to be markedly more accurate for Illumina BeadArrays: on the one hand, it is shown on two types of Illumina BeadChips that this model offers a more correct fit of the observed intensities. On the other hand, the comparison of the operating characteristics of several background correction procedures on spike-in and on normal-gamma simulated data shows high similarities, reinforcing the validation of the normal-gamma modeling. The performance of the background corrections based on the normal-gamma and normal-exponential models are compared on two dilution data sets, through testing procedures which represent various experimental designs. Surprisingly, we observe that the implementation of a more accurate parametrisation in the model-based background correction does not increase the sensitivity.
These results may be explained by the operating characteristics of the estimators: the normal-gamma background correction offers an improvement in terms of bias, but at the cost of a loss in precision. This paper addresses the lack of fit of the usual normal-exponential model by proposing a more flexible parametrisation of the signal distribution as well as the associated background correction. This new model proves to be considerably more accurate for Illumina microarrays, but the improvement in terms of modeling does not lead to a higher sensitivity in differential analysis. Nevertheless, this realistic modeling makes way for future investigations, in particular to examine the characteristics of pre-processing strategies.
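The normal-gamma convolution model can be exercised with a small simulation: generate observed intensities as gamma signal plus normal noise, then recover the gamma parameters by method of moments after subtracting the known noise moments. This is a crude stand-in for the maximum-likelihood background correction in the NormalGamma R-package; all parameter values are made up:

```python
import random, statistics

# Observed intensity = gamma(K, THETA) signal + normal(MU_N, SD_N) background.
# Method of moments: subtract the noise mean/variance (as estimated from
# negative controls) to get the signal moments, then invert the gamma moments.
rng = random.Random(42)
K, THETA = 2.0, 50.0        # true signal shape and scale (illustrative)
MU_N, SD_N = 100.0, 10.0    # background noise, as if estimated from controls

observed = [rng.gammavariate(K, THETA) + rng.gauss(MU_N, SD_N)
            for _ in range(50000)]

mean_s = statistics.fmean(observed) - MU_N          # E[signal]
var_s = statistics.variance(observed) - SD_N ** 2   # Var[signal]
k_hat = mean_s ** 2 / var_s                         # moment estimate of shape
theta_hat = var_s / mean_s                          # moment estimate of scale
```

The bias/precision trade-off the abstract reports would show up here as the spread of these estimates across repeated simulations.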
The impact on midlevel vision of statistically optimal divisive normalization in V1.
Coen-Cagli, Ruben; Schwartz, Odelia
2013-07-15
The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality.
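The canonical normalization referred to above has a compact form: a unit's driven response is divided by a semi-saturation constant plus the pooled activity of its neighbors. A toy version, with illustrative constants and drives, shows the characteristic sublinear growth:

```python
# Canonical divisive normalization for a small model population: response_i =
# drive_i / (SIGMA + W * pooled drive of the other units). Values illustrative.
SIGMA = 1.0   # semi-saturation constant
W = 0.5       # normalization pool weight

def normalize(drives):
    pool = sum(drives)
    return [d / (SIGMA + W * (pool - d)) for d in drives]

weak = normalize([1.0, 1.0, 1.0, 1.0])
strong = normalize([4.0, 4.0, 4.0, 4.0])
ratio = strong[0] / weak[0]   # far less than 4: responses grow sublinearly
```

The statistically optimal extension in the paper goes further by gating the surround pool on inferred image homogeneity, which changes the statistics of the population response that a downstream V2 stage learns from.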
Money-center structures in dynamic banking systems
NASA Astrophysics Data System (ADS)
Li, Shouwei; Zhang, Minghui
2016-10-01
In this paper, we propose a dynamic model for banking systems based on the description of balance sheets. It generates some features identified through empirical analysis. Through simulation analysis of the model, we find that banking systems have the feature of money-center structures, that bank asset distributions are power-law distributions, and that contract size distributions are log-normal distributions.
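The log-normal feature reported for contract sizes is easy to check in simulation: sizes produced by a multiplicative process are log-normal, so their logarithms look normal while the raw sizes are right-skewed, with the mean well above the median. The generating parameters below are illustrative:

```python
import math, random, statistics

# Log-normal sizes: log(size) ~ Normal(4.0, 1.2). Fitting on the log scale
# recovers the parameters; the raw mean/median ratio exposes the skew.
rng = random.Random(7)
sizes = [math.exp(rng.gauss(4.0, 1.2)) for _ in range(20000)]

logs = [math.log(s) for s in sizes]
mu_hat = statistics.fmean(logs)      # ~4.0: log-scale location
sigma_hat = statistics.stdev(logs)   # ~1.2: log-scale spread
skew_ratio = statistics.fmean(sizes) / statistics.median(sizes)  # > 1
```

The same log-transform diagnostic, applied to simulated interbank contract sizes, would distinguish the log-normal from the power-law regime the model predicts for bank assets.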
A simulation model of the oxygen alveolo-capillary exchange in normal and pathological conditions.
Brighenti, Chiara; Gnudi, Gianni; Avanzolini, Guido
2003-05-01
This paper presents a mathematical model of the oxygen alveolo-capillary exchange to provide the capillary oxygen partial pressure profile in normal and pathological conditions. In fact, a thickening of the blood-gas barrier, heavy exercise or a low oxygen partial pressure (PO2) in the alveolar space can reduce the O2 alveolo-capillary exchange. Since the reversible binding between haemoglobin and oxygen makes it impossible to determine the closed form for the mathematical description of the PO2 profile along the pulmonary capillaries, an approximate analytical solution of the capillary PO2 profile is proposed. Simulation results are compared with the capillary PO2 profile obtained by numerical integration and by a piecewise linear interpolation of the oxyhaemoglobin dissociation curve. Finally, the proposed model is evaluated in a large range of physiopathological diffusive conditions. The good fit to numerical solutions in all experimental conditions seems to represent a substantial improvement with respect to the approach based on a linear approximation of the oxyhaemoglobin dissociation curve, and makes this model a candidate to be incorporated into the integrated descriptions of the entire respiratory system, where the datum of primary interest is the value of end capillary PO2.
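The numerical baseline the paper's analytical approximation is compared against can be sketched directly: oxygen flux across the blood-gas barrier is taken proportional to the alveolar-capillary PO2 difference, and a Hill equation models haemoglobin buffering, so each step updates total O2 content and then inverts the content function to recover PO2. All constants are illustrative, not the paper's:

```python
# Euler integration of the capillary PO2 profile with Hill-equation buffering.
P_ALV = 100.0              # alveolar PO2 (mmHg)
P50, N_HILL = 26.0, 2.7    # Hill parameters for oxyhaemoglobin saturation
ALPHA = 0.003              # dissolved O2 solubility (ml O2/dl/mmHg)
CAP_HB = 20.0              # Hb-bound O2 capacity (ml O2/dl)
D = 0.5                    # lumped diffusion constant (illustrative)
DT = 0.001                 # fraction of capillary transit time per step

def saturation(p):
    return p ** N_HILL / (p ** N_HILL + P50 ** N_HILL)

def content(p):            # total O2 content: dissolved + Hb-bound
    return ALPHA * p + CAP_HB * saturation(p)

p = 40.0                   # mixed-venous PO2 entering the capillary
for _ in range(1000):      # one capillary transit
    c = content(p) + D * (P_ALV - p) * DT
    lo, hi = p, P_ALV      # invert content() by bisection to recover PO2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if content(mid) < c:
            lo = mid
        else:
            hi = mid
    p = 0.5 * (lo + hi)
end_capillary_po2 = p      # approaches, but cannot exceed, alveolar PO2
```

Thickening the barrier (smaller D) or lowering alveolar PO2 in this sketch reproduces the pathological case of incomplete end-capillary equilibration.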
Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J
2017-01-01
Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, the test with simple asymptotic distribution has computational advantages compared with permutation-based test for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood based expectation-maximization test, and show the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.
Procedural wound geometry and blood flow generation for medical training simulators
NASA Astrophysics Data System (ADS)
Aras, Rifat; Shen, Yuzhong; Li, Jiang
2012-02-01
Efficient application of wound treatment procedures is vital in both emergency room and battle zone scenes. In order to train first responders for such situations, physical casualty simulation kits, which are composed of tens of individual items, are commonly used. Similar to any other training scenarios, computer simulations can be effective means for wound treatment training purposes. For immersive and high fidelity virtual reality applications, realistic 3D models are key components. However, creation of such models is a labor intensive process. In this paper, we propose a procedural wound geometry generation technique that parameterizes key simulation inputs to establish the variability of the training scenarios without the need of labor intensive remodeling of the 3D geometry. The procedural techniques described in this work are entirely handled by the graphics processing unit (GPU) to enable interactive real-time operation of the simulation and to relieve the CPU for other computational tasks. The visible human dataset is processed and used as a volumetric texture for the internal visualization of the wound geometry. To further enhance the fidelity of the simulation, we also employ a surface flow model for blood visualization. This model is realized as a dynamic texture that is composed of a height field and a normal map and animated at each simulation step on the GPU. The procedural wound geometry and the blood flow model are applied to a thigh model and the efficiency of the technique is demonstrated in a virtual surgery scene.
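The per-frame step of the blood-flow visualization, deriving a tangent-space normal map from a height field, can be sketched with central differences on the surface gradient. The ripple height function is an illustrative stand-in for the animated blood height field (on a GPU this would run per-texel in a shader):

```python
import math

# Normal map from a height field: n = normalize(-dh/dx, -dh/dy, 1),
# with the partial derivatives approximated by central differences.
W_PIX = H_PIX = 16

def height(x, y):          # toy height field (one ripple)
    return 0.2 * math.sin(0.5 * x) * math.cos(0.5 * y)

normal_map = []
for y in range(H_PIX):
    row = []
    for x in range(W_PIX):
        dhdx = 0.5 * (height(x + 1, y) - height(x - 1, y))
        dhdy = 0.5 * (height(x, y + 1) - height(x, y - 1))
        nx, ny, nz = -dhdx, -dhdy, 1.0          # unnormalized surface normal
        inv = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
        row.append((nx * inv, ny * inv, nz * inv))
    normal_map.append(row)
```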
Tang, Shujie; Meng, Xueying
2011-01-01
The restoration of disc space height of fused segment is essential in anterior lumbar interbody fusion, while the disc space height in many cases decreased postoperatively, which may adversely aggravate the adjacent segmental degeneration. However, no literature available focused on the issue. A normal healthy finite element model of L3-5 and four anterior lumbar interbody fusion models with different disc space height of fused segment were developed. 800 N compressive loading plus 10 Nm moments simulating flexion, extension, lateral bending and axial rotation were imposed on L3 superior endplate. The intradiscal pressure, the intersegmental rotation, the tresca stress and contact force of facet joints in L3-4 were investigated. Anterior lumbar interbody fusion with severely decreased disc space height presented with the highest values of the four parameters, and the normal healthy model presented with the lowest values except, under extension, the contact force of facet joints in normal healthy model is higher than that in normal anterior lumbar interbody fusion model. With disc space height decrease, the values of parameters in each anterior lumbar interbody fusion model increase gradually. Anterior lumbar interbody fusion with decreased disc space height aggravate the adjacent segmental degeneration more adversely.
Pattern Adaptation and Normalization Reweighting.
Westrick, Zachary M; Heeger, David J; Landy, Michael S
2016-09-21
Adaptation to an oriented stimulus changes both the gain and preferred orientation of neural responses in V1. Neurons tuned near the adapted orientation are suppressed, and their preferred orientations shift away from the adapter. We propose a model in which weights of divisive normalization are dynamically adjusted to homeostatically maintain response products between pairs of neurons. We demonstrate that this adjustment can be performed by a very simple learning rule. Simulations of this model closely match existing data from visual adaptation experiments. We consider several alternative models, including variants based on homeostatic maintenance of response correlations or covariance, as well as feedforward gain-control models with multiple layers, and we demonstrate that homeostatic maintenance of response products provides the best account of the physiological data. Adaptation is a phenomenon throughout the nervous system in which neural tuning properties change in response to changes in environmental statistics. We developed a model of adaptation that combines normalization (in which a neuron's gain is reduced by the summed responses of its neighbors) and Hebbian learning (in which synaptic strength, in this case divisive normalization, is increased by correlated firing). The model is shown to account for several properties of adaptation in primary visual cortex in response to changes in the statistics of contour orientation. Copyright © 2016 the authors.
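A toy version of the homeostatic reweighting idea can be written down directly: pairwise normalization weights are nudged, Hebbian-style, so that time-averaged response products return to a pre-adaptation target after the input statistics change. This sketch uses a feedforward pool over raw drives, clamps weights at zero, and uses made-up constants throughout; it illustrates the learning rule's direction, not the paper's fitted model:

```python
import random

# Homeostatic reweighting sketch: w[i][j] rises when the product r_i * r_j
# exceeds its pre-adaptation target, strengthening divisive suppression.
N = 4
SIGMA = 1.0
ETA = 0.02
rng = random.Random(3)
w = [[0.5] * N for _ in range(N)]   # divisive normalization weights

def respond(drives):
    return [drives[i] / (SIGMA + sum(w[i][j] * drives[j] for j in range(N) if j != i))
            for i in range(N)]

# target response products under pre-adaptation statistics
pre = [respond([rng.random() for _ in range(N)]) for _ in range(500)]
target = [[sum(r[i] * r[j] for r in pre) / len(pre) for j in range(N)]
          for i in range(N)]

# adaptation: unit 0 is persistently over-driven (the "adapter")
for _ in range(2000):
    drives = [rng.random() for _ in range(N)]
    drives[0] += 1.0
    r = respond(drives)
    for i in range(N):
        for j in range(N):
            if i != j:
                # raise a weight when the product exceeds its target
                w[i][j] = max(0.0, w[i][j] + ETA * (r[i] * r[j] - target[i][j]))

adapter_weight = w[1][0]   # suppression from the adapter should strengthen
```

The strengthened suppression from the over-driven unit is the mechanism the paper uses to explain both the gain reduction and the repulsive tuning shifts after adaptation.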
Modeling the response of normal and ischemic cardiac tissue to electrical stimulation
NASA Astrophysics Data System (ADS)
Kandel, Sunil Mani
Heart disease, the leading cause of death worldwide, is often caused by ventricular fibrillation. A common treatment for this lethal arrhythmia is defibrillation: a strong electrical shock that resets the heart to its normal rhythm. To design better defibrillators, we need a better understanding of both fibrillation and defibrillation. Fundamental mysteries remain regarding the mechanism of how the heart responds to a shock, particularly anodal shocks and the resultant hyperpolarization. Virtual anodes play critical roles in defibrillation, and one cannot build better defibrillators until these mechanisms are understood. We are using mathematical modeling to numerically simulate observed phenomena, and are exploring fundamental mechanisms responsible for the heart's electrical behavior. Such simulations clarify mechanisms and identify key parameters. We investigate how systolic tissue responds to an anodal shock and how refractory tissue reacts to hyperpolarization by studying the dip in the anodal strength-interval curve. This dip is due to electrotonic interaction between regions of depolarization and hyperpolarization following a shock. The dominance of the electrotonic mechanism over calcium interactions implies the importance of the spatial distribution of virtual electrodes. We also investigate the response of localized ischemic tissue to an anodal shock by modeling a regional elevation of extracellular potassium concentration. This heterogeneity leads to action potential instability, 2:1 conduction block (alternans), and reflection-like reentry at the border of the normal and ischemic regions. This kind of reflection (reentry) occurs because the delay between the proximal and distal segments allows re-excitation of the proximal segment. Our numerical simulations are based on the bidomain model, the state-of-the-art mathematical description of how cardiac tissue responds to shocks. The dynamic Luo-Rudy model describes the active properties of the membrane.
To model ischemia, the Luo-Rudy model is modified by adding ischemic-related ion currents and concentrations to mimic conditions during the initial phase of ischemia. The stimulus is applied through a unipolar electrode that induces a complicated spatial distribution of transmembrane potential, including adjacent regions of depolarization and hyperpolarization. This research is significant because it uncovers basic properties of excitation that are fundamental for understanding cardiac pacing and defibrillation.
3D virtual human atria: A computational platform for studying clinical atrial fibrillation
Aslanidi, Oleg V; Colman, Michael A; Stott, Jonathan; Dobrzynski, Halina; Boyett, Mark R; Holden, Arun V; Zhang, Henggui
2011-01-01
Despite a vast amount of experimental and clinical data on the underlying ionic, cellular and tissue substrates, the mechanisms of common atrial arrhythmias (such as atrial fibrillation, AF) arising from the functional interactions at the whole atria level remain unclear. Computational modelling provides a quantitative framework for integrating such multi-scale data and understanding the arrhythmogenic behaviour that emerges from the collective spatio-temporal dynamics in all parts of the heart. In this study, we have developed a multi-scale hierarchy of biophysically detailed computational models for the human atria – 3D virtual human atria. Primarily, diffusion tensor MRI reconstruction of the tissue geometry and fibre orientation in the human sinoatrial node (SAN) and surrounding atrial muscle was integrated into the 3D model of the whole atria dissected from the Visible Human dataset. The anatomical models were combined with the heterogeneous atrial action potential (AP) models, and used to simulate the AP conduction in the human atria under various conditions: SAN pacemaking and atrial activation in the normal rhythm, break-down of regular AP wave-fronts during rapid atrial pacing, and the genesis of multiple re-entrant wavelets characteristic of AF. Contributions of different properties of the tissue to the mechanisms of the normal rhythm and AF arrhythmogenesis are investigated and discussed. The 3D model of the atria itself was incorporated into the torso model to simulate the body surface ECG patterns in the normal and arrhythmic conditions. Therefore, a state-of-the-art computational platform has been developed, which can be used for studying multi-scale electrical phenomena during atrial conduction and arrhythmogenesis. Results of such simulations can be directly compared with experimental electrophysiological and endocardial mapping data, as well as clinical ECG recordings. 
More importantly, the virtual human atria can provide validated means for directly dissecting 3D excitation propagation processes within the atrial walls from an in vivo whole heart, which are beyond the current technical capabilities of experimental or clinical set-ups.
Zhang, Di; Li, Ruiqi; Batchelor, William D; Ju, Hui; Li, Yanming
2018-01-01
The North China Plain is one of the most important grain production regions in China, but is facing serious water shortages. To achieve a balance between water use and the need for food self-sufficiency, new water-efficient irrigation strategies need to be developed that balance water use with farmer net return. The Crop Environment Resource Synthesis Wheat (CERES-Wheat) model was calibrated and evaluated with two years of data consisting of 3-4 irrigation treatments, and the model was used to investigate long-term winter wheat productivity and water use under different irrigation management in the North China Plain. The calibrated model accurately simulated above-ground biomass, grain yield and evapotranspiration of winter wheat in response to irrigation management. The calibrated model was then run using weather data from 1994-2016 in order to evaluate different irrigation strategies. The simulated results using historical weather data showed that grain yield and water use were sensitive to different irrigation strategies, including the amounts and dates of irrigation applications. The model simulated the highest yield when irrigation was applied at jointing (T9) in normal and dry rainfall years, and gave the highest simulated yields for irrigation at double ridge (T8) in wet years. A single simulated irrigation at jointing (T9) produced yields that were 88% of those from a double irrigation treatment at T1 and T9 in wet years, 86% in normal years, and 91% in dry years. A single irrigation at jointing or double ridge produced higher water use efficiency because it obtained higher evapotranspiration. The simulated farmer irrigation practices produced the highest yield and net income. When the cost of water was taken into account, limited irrigation was found to be more profitable based on assumptions about future water costs. In order to increase farmer income, a subsidy will likely be needed to compensate farmers for yield reductions due to water savings.
These results showed that there is a cost to the farmer for water conservation, but limiting irrigation to a single irrigation at jointing would minimize impact on farmer net return in North China Plain.
Computer modeling describes gravity-related adaptation in cell cultures.
Alexandrov, Ludmil B; Alexandrova, Stoyana; Usheva, Anny
2009-12-16
Questions about the changes of biological systems in response to hostile environmental factors are important but not easy to answer. Often, the traditional description with differential equations is difficult due to the overwhelming complexity of the living systems. Another way to describe complex systems is by simulating them with phenomenological models such as the well-known evolutionary agent-based model (EABM). Here we developed an EABM to simulate cell colonies as a multi-agent system that adapts to hyper-gravity in starvation conditions. In the model, the cell's heritable characteristics are generated and transferred randomly to offspring cells. After a qualitative validation of the model at normal gravity, we simulate cellular growth in hyper-gravity conditions. The obtained data are consistent with previously confirmed theoretical and experimental findings for bacterial behavior in environmental changes, including the experimental data from the microgravity Atlantis and the Hypergravity 3000 experiments. Our results demonstrate that it is possible to utilize an EABM with realistic qualitative description to examine the effects of hypergravity and starvation on complex cellular entities.
Modeling of stochastic motion of bacteria propelled spherical microbeads
NASA Astrophysics Data System (ADS)
Arabagi, Veaceslav; Behkam, Bahareh; Cheung, Eugene; Sitti, Metin
2011-06-01
This work proposes a stochastic dynamic model of bacteria-propelled spherical microbeads as potential swimming microrobotic bodies. Small numbers of S. marcescens bacteria are attached by their bodies to the surfaces of spherical microbeads. Average-behavior stochastic models, which are normally adopted when studying such biological systems, are generally not effective for cases in which a small number of agents interact in a complex manner; hence a stochastic model is proposed to simulate the behavior of 8-41 bacteria assembled on a curved surface. Flexibility of the flagellar hook is studied by comparing simulated and experimental results for scenarios of increasing bead size and increasing number of attached bacteria on a bead. Although more experimental data are required to determine an exact flagellar hook stiffness value, the examined results favor a stiffer hook. The stochastic model is intended to be used as a design and simulation tool for future targeted drug delivery and disease diagnosis applications of bacteria-propelled microrobots.
Cold dark matter. 2: Spatial and velocity statistics
NASA Technical Reports Server (NTRS)
Gelb, James M.; Bertschinger, Edmund
1994-01-01
We examine high-resolution gravitational N-body simulations of the omega = 1 cold dark matter (CDM) model in order to determine whether there is any normalization of the initial density fluctuation spectrum that yields acceptable results for galaxy clustering and velocities. Dense dark matter halos in the evolved mass distribution are identified with luminous galaxies; the most massive halos are also considered as sites for galaxy groups, with a range of possibilities explored for the group mass-to-light ratios. We verify the earlier conclusions of White et al. (1987) for the low-amplitude (high-bias) CDM model: the galaxy correlation function is marginally acceptable, but there are too many galaxies. We also show that the peak biasing method does not accurately reproduce the results obtained using dense halos identified in the simulations themselves. The Cosmic Background Explorer (COBE) anisotropy implies a higher normalization, resulting in problems with excessive pairwise galaxy velocity dispersion unless a strong velocity bias is present. Although we confirm the strong velocity bias of halos reported by Couchman & Carlberg (1992), we show that the galaxy motions are still too large on small scales. We find no amplitude for which the CDM model can simultaneously reconcile the galaxy correlation function, the low pairwise velocity dispersion, and the richness distribution of groups and clusters. With the normalization implied by COBE, the CDM spectrum has too much power on small scales if omega = 1.
Testability analysis on a hydraulic system in a certain equipment based on simulation model
NASA Astrophysics Data System (ADS)
Zhang, Rui; Cong, Hua; Liu, Yuanhong; Feng, Fuzhou
2018-03-01
To address the complicated structure of hydraulic systems and the shortage of fault statistics, a multi-valued testability analysis method based on a simulation model is proposed. Using an AMESim simulation model, the method injects simulated faults and records the variation of test parameters, such as pressure and flow rate, at each test point relative to normal conditions. From these records a multi-valued fault-test dependency matrix is established, and the fault detection rate (FDR) and fault isolation rate (FIR) are calculated from it. The testability and fault diagnosis capability of the system are then analyzed and evaluated, reaching only 54% (FDR) and 23% (FIR). To improve the testability of the system, the number and positions of the test points are optimized. Results show that the proposed test placement scheme can address the difficulty, inefficiency, and high cost of system maintenance.
NASA Astrophysics Data System (ADS)
Wu, Sangwook
2016-04-01
Three-transmembrane and four-transmembrane helix models have been proposed for human vitamin K epoxide reductase (VKOR). In this study, we investigate the stability of the human three-transmembrane and four-transmembrane VKOR models by employing coarse-grained normal mode analysis and molecular dynamics simulation. Based on the analysis of the mobility of each transmembrane domain, we suggest that the three-transmembrane human VKOR model is more stable than the four-transmembrane model.
2012-01-01
Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal, or uniform distribution in the combined sample. Results Under the assumption of binormality with equality of variances, the c-statistic is given by a standard normal cumulative distribution function evaluated at a quantity depending on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic is given by a standard normal cumulative distribution function evaluated at the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
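Under binormality the c-statistic has the well-known closed form c = Φ((μ₁ − μ₀)/√(σ₀² + σ₁²)). A small sketch (illustrative parameter values, not the paper's simulation design) comparing the closed form against an empirical pairwise estimate:

```python
import math
import random

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def c_binormal(mu0, s0, mu1, s1):
    """Closed-form c-statistic under binormality (standardized difference)."""
    return phi((mu1 - mu0) / math.sqrt(s0 ** 2 + s1 ** 2))

def c_empirical(mu0, s0, mu1, s1, n=20000, seed=1):
    """Empirical c: probability a random case outscores a random control."""
    rng = random.Random(seed)
    wins = sum(rng.gauss(mu1, s1) > rng.gauss(mu0, s0) for _ in range(n))
    return wins / n

analytic = c_binormal(0.0, 1.0, 1.0, 1.0)  # ~0.760
print(analytic, c_empirical(0.0, 1.0, 1.0, 1.0))
```

The two values agree closely, mirroring the abstract's finding that the binormal expression predicts the empirical c-statistic well.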
Na, Hyuntae; Lee, Seung-Yub; Üstündag, Ersan; ...
2013-01-01
This paper introduces a recent development and application of a noncommercial artificial neural network (ANN) simulator with graphical user interface (GUI) to assist in rapid data modeling and analysis in the engineering diffraction field. The real-time network training/simulation monitoring tool has been customized for the study of constitutive behavior of engineering materials, and it has improved data mining and forecasting capabilities of neural networks. This software has been used to train and simulate the finite element modeling (FEM) data for a fiber composite system, both forward and inverse. The forward neural network simulation precisely reduplicates FEM results several orders of magnitude faster than the slow original FEM. The inverse simulation is more challenging; yet, material parameters can be meaningfully determined with the aid of parameter sensitivity information. The simulator GUI also reveals that output node size for materials parameter and input normalization method for strain data are critical train conditions in inverse network. The successful use of ANN modeling and simulator GUI has been validated through engineering neutron diffraction experimental data by determining constitutive laws of the real fiber composite materials via a mathematically rigorous and physically meaningful parameter search process, once the networks are successfully trained from the FEM database.
The current status of the simulation theory of cognition.
Hesslow, Germund
2012-01-05
It is proposed that thinking is simulated interaction with the environment. Three assumptions underlie this 'simulation' theory of cognitive function. Firstly, behaviour can be simulated in the sense that we can activate motor structures, as during a normal overt action, but suppress its execution. Secondly, perception can be simulated by internal activation of sensory cortex in a way that resembles its normal activation during perception of external stimuli. The third assumption ('anticipation') is that both overt and simulated actions can elicit perceptual simulation of their most probable consequences. A large body of evidence that supports these assumptions, mainly from neuroimaging studies, is briefly reviewed. The theory is ontologically parsimonious and does not rely on standard cognitivist constructs such as internal models or representations. It is argued that the simulation approach can explain the relations between motor, sensory and cognitive functions and the appearance of an inner world. It also unifies and explains important features of a wide variety of cognitive phenomena such as memory and cognitive maps. Novel findings from recent developments in memory research on the similarity of imaging and memory and on the role of both prefrontal cortex and sensory cortex in declarative memory and working memory are predicted by the theory and provide striking support for it. This article is part of a Special Issue entitled "The Cognitive Neuroscience". Copyright © 2011 Elsevier B.V. All rights reserved.
Generating Multivariate Ordinal Data via Entropy Principles.
Lee, Yen; Kaplan, David
2018-03-01
When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust [Formula: see text] and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
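The maximum-entropy principle behind these procedures can be sketched for a single ordinal variable with constraints on its first two moments (the paper's procedures also constrain skewness and kurtosis and handle the multivariate case; this is a much-reduced illustration with made-up targets). The maximum-entropy pmf is an exponential family whose multipliers are found by matching model moments to targets:

```python
import math

def maxent_ordinal(support, targets, lr=0.5, iters=5000):
    """Maximum-entropy pmf over an ordinal support matching target moments of
    the features f(x) = (s, s^2), where s rescales x to [-1, 1] for stability.
    Solved by gradient ascent on the concave dual: lam += lr * (target - model)."""
    lo, hi = min(support), max(support)
    scale = lambda x: 2.0 * (x - lo) / (hi - lo) - 1.0
    feats = [(scale(x), scale(x) ** 2) for x in support]
    lam = [0.0, 0.0]
    for _ in range(iters):
        w = [math.exp(lam[0] * f1 + lam[1] * f2) for f1, f2 in feats]
        z = sum(w)
        p = [wi / z for wi in w]
        model = [sum(pi * f[k] for pi, f in zip(p, feats)) for k in range(2)]
        lam = [lam[k] + lr * (targets[k] - model[k]) for k in range(2)]
    return p

# Illustrative targets for the rescaled feature: mean 0.25, second moment 0.5.
p = maxent_ordinal([1, 2, 3, 4, 5], targets=[0.25, 0.5])
print([round(pi, 3) for pi in p])
```

Adding skewness and kurtosis constraints amounts to appending s³ and s⁴ to the feature list and two more multipliers, which is the direction the paper's procedures take.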
Wildfire risk in the wildland-urban interface: A simulation study in northwestern Wisconsin
Massada, Avi Bar; Radeloff, Volker C.; Stewart, Susan I.; Hawbaker, Todd J.
2009-01-01
The rapid growth of housing in and near the wildland–urban interface (WUI) increases wildfire risk to lives and structures. To reduce fire risk, it is necessary to identify WUI housing areas that are more susceptible to wildfire. This is challenging, because wildfire patterns depend on fire behavior and spread, which in turn depend on ignition locations, weather conditions, the spatial arrangement of fuels, and topography. The goal of our study was to assess wildfire risk to a 60,000 ha WUI area in northwestern Wisconsin while accounting for all of these factors. We conducted 6000 simulations with two dynamic fire models: Fire Area Simulator (FARSITE) and Minimum Travel Time (MTT) in order to map the spatial pattern of burn probabilities. Simulations were run under normal and extreme weather conditions to assess the effect of weather on fire spread, burn probability, and risk to structures. The resulting burn probability maps were intersected with maps of structure locations and land cover types. The simulations revealed clear hotspots of wildfire activity and a large range of wildfire risk to structures in the study area. As expected, the extreme weather conditions yielded higher burn probabilities over the entire landscape, as well as to different land cover classes and individual structures. Moreover, the spatial pattern of risk was significantly different between extreme and normal weather conditions. The results highlight the fact that extreme weather conditions not only produce higher fire risk than normal weather conditions, but also change the fine-scale locations of high risk areas in the landscape, which is of great importance for fire management in WUI areas. In addition, the choice of weather data may limit the potential for comparisons of risk maps for different areas and for extrapolating risk maps to future scenarios where weather conditions are unknown.
Our approach to modeling wildfire risk to structures can aid fire risk reduction management activities by identifying areas with elevated wildfire risk and those most vulnerable under extreme weather conditions.
The impact of truck repositioning on congestion and pollution in the LA basin.
DOT National Transportation Integrated Search
2011-03-01
Pollution and congestion caused by port related truck traffic is usually estimated based on careful transportation modeling and simulation. In these efforts, however, attention is normally focused on trucks on their way from a terminal at the Los Ang...
Simulations of Turbulent Flow Over Complex Terrain Using an Immersed-Boundary Method
NASA Astrophysics Data System (ADS)
DeLeon, Rey; Sandusky, Micah; Senocak, Inanc
2018-02-01
We present an immersed-boundary method to simulate high-Reynolds-number turbulent flow over the complex terrain of Askervein and Bolund Hills under neutrally-stratified conditions. We reconstruct both the velocity and the eddy-viscosity fields in the terrain-normal direction to produce turbulent stresses as would be expected from the application of a surface-parametrization scheme based on Monin-Obukhov similarity theory. We find that it is essential to be consistent in the underlying assumptions for the velocity reconstruction and the eddy-viscosity relation to produce good results. To this end, we reconstruct the tangential component of the velocity field using a logarithmic velocity profile and adopt the mixing-length model in the near-surface turbulence model. We use a linear interpolation to reconstruct the normal component of the velocity to enforce the impermeability condition. Our approach works well for both the Askervein and Bolund Hills when the flow is attached to the surface, but shows slight disagreement in regions of flow recirculation, despite capturing the flow reversal.
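The two reconstructions described above can be sketched in a few lines (a minimal illustration of the idea under the standard logarithmic law of the wall, not the authors' solver; the heights, roughness length, and velocities are made-up values). The tangential component follows the log law anchored to the resolved outer point; the normal component is linearly interpolated to zero at the surface:

```python
import math

KAPPA = 0.4  # von Karman constant

def reconstruct_tangential(u_t_outer, z_outer, z_inner, z0):
    """Log-law reconstruction of the tangential velocity at a near-surface
    point, given the resolved value at an outer point at height z_outer."""
    u_star = KAPPA * u_t_outer / math.log(z_outer / z0)  # friction velocity
    return (u_star / KAPPA) * math.log(z_inner / z0)

def reconstruct_normal(u_n_outer, z_outer, z_inner):
    """Linear interpolation toward zero at the wall enforces impermeability."""
    return u_n_outer * z_inner / z_outer

# Hypothetical values: outer point at 2 m, reconstruction point at 0.5 m.
ut = reconstruct_tangential(u_t_outer=8.0, z_outer=2.0, z_inner=0.5, z0=0.03)
un = reconstruct_normal(u_n_outer=1.0, z_outer=2.0, z_inner=0.5)
```

Keeping the same log-law assumption in the eddy-viscosity (mixing-length) closure is what the abstract identifies as essential for consistent near-surface stresses.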
Linear non-normality as the cause of nonlinear instability in LAPD
NASA Astrophysics Data System (ADS)
Friedman, Brett; Carter, Troy; Umansky, Maxim
2013-10-01
A BOUT++ simulation using a Braginskii fluid model reproduces drift-wave turbulence in LAPD with high qualitative and quantitative agreement. The turbulent fluctuations in the simulation sustain themselves through a nonlinear instability mechanism that injects energy into k|| = 0 fluctuations despite the fact that all of the linear eigenmodes at k|| = 0 are stable. The reason for this is the high non-orthogonality of the eigenmodes caused by the non-normality of the linear operator, which is common in fluid and plasma models that contain equilibrium gradients. While individual stable eigenmodes must decay when acted upon by their linear operator, the sum of the eigenmodes may grow transiently with initial algebraic time dependence. This transient growth can inject energy into the system, and the nonlinearities can remix the eigenmode amplitudes to self-sustain the growth. Such a mechanism also acts in subcritical neutral fluid turbulence, and the self-sustainment process is quite similar, indicating the universality of this nonlinear instability.
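The transient-growth mechanism described above can be seen in a two-dimensional toy system (an illustrative operator of my own choosing, unrelated to the LAPD model): a matrix with strictly negative eigenvalues whose large off-diagonal term makes the eigenvectors nearly parallel, so the solution norm grows substantially before the eventual exponential decay.

```python
import math

# Stable but non-normal operator A = [[-1, 50], [0, -2]]: both eigenvalues
# are negative, yet ||x(t)|| grows transiently for suitable initial states.
def state(t, x0):
    """Exact solution of dx/dt = A x for the upper-triangular A above."""
    x1_0, x2_0 = x0
    x2 = x2_0 * math.exp(-2.0 * t)
    x1 = x1_0 * math.exp(-t) + 50.0 * x2_0 * (math.exp(-t) - math.exp(-2.0 * t))
    return (x1, x2)

def norm(v):
    return math.hypot(*v)

x0 = (0.0, 1.0)
peak = max(norm(state(0.01 * k, x0)) for k in range(1000))
print(norm(state(0.0, x0)), peak, norm(state(10.0, x0)))
# unit initial norm, transient peak near t = ln 2, then decay toward zero
```

In the turbulence context, the nonlinearity keeps refilling the initial conditions that trigger this transient amplification, which is how stable eigenmodes can sustain fluctuations.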
NASA Astrophysics Data System (ADS)
Hunter, Kendall; Zhang, Yanhang; Lanning, Craig
2005-11-01
Insight into the progression of pulmonary hypertension may be obtained from thorough study of vascular flow during reactivity testing, an invasive diagnostic procedure which can dramatically alter vascular hemodynamics. Diagnostic imaging methods, however, are limited in their ability to provide extensive data. Here we present detailed flow and wall deformation results from simulations of pulmonary arteries undergoing this procedure. Patient-specific 3-D geometric reconstructions of the first four branches of the pulmonary vasculature were obtained clinically and meshed for use with computational software. Transient simulations of four such models in normal and reactive states were completed with patient-specific velocity inlet conditions and flow impedance exit conditions. A microstructurally based orthotropic hyperelastic model, which simulates pulmonary artery mechanics under normotensive and hypoxic hypertensive conditions, treated wall constitutive changes due to pressure reactivity and arterial remodeling. Pressure gradients, velocity fields, arterial deformation, and complete topography of shear stress were obtained. These models provide richer detail of hemodynamics than can be obtained from current imaging techniques, and should allow maximum characterization of vascular function in the clinical situation.
Three-dimensional wideband electromagnetic modeling on massively parallel computers
NASA Astrophysics Data System (ADS)
Alumbaugh, David L.; Newman, Gregory A.; Prevost, Lydie; Shadid, John N.
1996-01-01
A method is presented for modeling the wideband, frequency domain electromagnetic (EM) response of a three-dimensional (3-D) earth to dipole sources operating at frequencies where EM diffusion dominates the response (less than 100 kHz) up into the range where propagation dominates (greater than 10 MHz). The scheme employs the modified form of the vector Helmholtz equation for the scattered electric fields to model variations in electrical conductivity, dielectric permittivity and magnetic permeability. The use of the modified form of the Helmholtz equation allows for perfectly matched layer (PML) absorbing boundary conditions to be employed through the use of complex grid stretching. Applying the finite difference operator to the modified Helmholtz equation produces a linear system of equations for which the matrix is sparse and complex symmetric. The solution is obtained using either the biconjugate gradient (BICG) or quasi-minimum residual (QMR) methods with preconditioning; in general we employ the QMR method with Jacobi scaling preconditioning for stability. In order to simulate larger, more realistic models than has been previously possible, the scheme has been modified to run on massively parallel (MP) computer architectures. Execution on the 1840-processor Intel Paragon has indicated a maximum model size of 280 × 260 × 200 cells with a maximum flop rate of 14.7 Gflops. Three different geologic models are simulated to demonstrate the use of the code for frequencies ranging from 100 Hz to 30 MHz and for different source types and polarizations. The simulations show that the scheme is correctly able to model the air-earth interface and the jump in the electric and magnetic fields normal to discontinuities. For frequencies greater than 10 MHz, complex grid stretching must be employed to incorporate absorbing boundaries, while below this, normal (real) grid stretching can be employed.
NASA Astrophysics Data System (ADS)
Swearingen, Michelle E.
2004-04-01
An analytic model, developed in cylindrical coordinates, is described for the scattering of a spherical wave off a semi-infinite right cylinder placed normal to a ground surface. The motivation for the research is to have a model with which one can simulate scattering from a single tree and which can be used as a fundamental element in a model for estimating the attenuation in a forest comprised of multiple tree trunks. Comparisons are made to the plane wave case, the transparent cylinder case, and the rigid and soft ground cases as a method of theoretically verifying the model for the contemplated range of model parameters. Agreement is regarded as excellent for these benchmark cases. Model sensitivity to five parameters is also explored. An experiment was performed to study the scattering from a cylinder normal to a ground surface. The data from the experiment are analyzed with a transfer function method to yield frequency and impulse responses, and calculations based on the analytic model are compared to the experimental data. Thesis advisor: David C. Swanson.
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.(ABSTRACT TRUNCATED AT 250 WORDS)
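A toy version of such a simulation (a sketch of the general idea, not the author's digital realization; threshold and noise parameters are arbitrary) accumulates noisy increments until a firing threshold, records the inter-impulse interval, and resets. Comparing two of the noise shapes from the abstract shows how similar the interval statistics come out:

```python
import random
import statistics

def simulate_isi(noise, n_spikes=2000, threshold=10.0, seed=7):
    """Integrate-and-fire: accumulate noisy increments (mean drift ~1 per
    step) until threshold, record the interval length in steps, reset."""
    rng = random.Random(seed)
    intervals = []
    v, t = 0.0, 0
    while len(intervals) < n_spikes:
        t += 1
        v += noise(rng)
        if v >= threshold:
            intervals.append(t)
            v, t = 0.0, 0
    return intervals

gauss   = simulate_isi(lambda r: r.gauss(1.0, 1.0))    # normokurtic noise
uniform = simulate_isi(lambda r: r.uniform(0.0, 2.0))  # platykurtic noise
print(statistics.mean(gauss), statistics.mean(uniform))
```

Both noise distributions have unit mean, so the mean interval lands near threshold/drift in either case, echoing the abstract's observation that the interval distribution is largely insensitive to the shape of the causative noise.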
Analysis of Highly-Resolved Simulations of 2-D Humps Toward Improvement of Second-Moment Closures
NASA Technical Reports Server (NTRS)
Jeyapaul, Elbert; Rumsey, Christopher
2013-01-01
Fully resolved simulation data of flow separation over 2-D humps has been used to analyze the modeling terms in second-moment closures of the Reynolds-averaged Navier-Stokes equations. Existing models for the pressure-strain and dissipation terms have been analyzed using a priori calculations. All pressure-strain models are incorrect in the high-strain region near separation, although a better match is observed downstream, well into the separated-flow region. Near-wall inhomogeneity causes pressure-strain models to predict incorrect signs for the normal components close to the wall. In a posteriori computations, full Reynolds stress and explicit algebraic Reynolds stress models predict the separation point with varying degrees of success. However, as with one- and two-equation models, the separation bubble size is invariably over-predicted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony; ...
2018-04-20
Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.
Orbital Debris Shape and Orientation Effects on Ballistic Limits
NASA Technical Reports Server (NTRS)
Evans, Steven W.; Williamsen, Joel E.
2005-01-01
The SPHC hydrodynamic code was used to evaluate the effects of orbital debris particle shape and orientation on penetration of a typical spacecraft dual-wall shield. Impacts were simulated at near-normal obliquity at 12 km/sec. Debris cloud characteristics and damage potential are compared with those from impacts by spherical projectiles. Results of these simulations indicate the uncertainties in the predicted ballistic limits due to modeling uncertainty and to uncertainty in the impactor orientation.
A Model of the Traveling Charge
1980-07-01
...also permits a simulation of the blowdown of the tube following the expulsion of the projectile and any unburned propellant. The interface between... N., "Best Approximation Properties of the Spline Fit," J. Math. Mech. 11, 225-234, 1962... 3.2 Transformed Equations: Taking the origin to be at... pressure at muzzle exit is not normally of ballistic interest; it appears from Table 4.3 that 41 mesh points are sufficient in simulations of this type.
Angiogenic Signaling in Living Breast Tumor Models
2010-06-01
...harmonic generation imaging of the diseased state osteogenesis imperfecta: experiment and simulation," Biophys. J. 94(11), 4504–4514 (2008)... biopsies, mouse models of breast cancer, and dermis from mouse models of Osteogenesis Imperfecta (OIM) [1–5,7]. The F/B ratio revealed the length scale of... interest in discriminating skin with Osteogenesis Imperfecta [2] from normal dermis [2], and SHG F/B ratio measurements have been used to help determine...
NASA Astrophysics Data System (ADS)
Yahya, Khairunnisa; He, Jian; Zhang, Yang
2015-12-01
Multiyear applications of an online-coupled meteorology-chemistry model allow an assessment of the variation trends in simulated meteorology, air quality, and their interactions to changes in emissions and meteorology, as well as the impacts of initial and boundary conditions (ICONs/BCONs) on simulated aerosol-cloud-radiation interactions over a period of time. In this work, the Weather Research and Forecasting model with Chemistry version 3.4.1 (WRF/Chem v. 3.4.1) with the 2005 Carbon Bond mechanism coupled with the Volatility Basis Set module for secondary organic aerosol formation (WRF/Chem-CB05-VBS) is applied for multiple years (2001, 2006, and 2010) over the continental U.S. This work also examines the changes in simulated air quality and meteorology due to changes in emissions and meteorology and the model's capability in reproducing the observed variation trends in species concentrations from 2001 to 2010. In addition, the impacts of the chemical ICONs/BCONs on model predictions are analyzed. ICONs/BCONs are downscaled from two global models, the modified Community Earth System Model/Community Atmosphere model version 5.1 (CESM/CAM v5.1) and the Monitoring Atmospheric Composition and Climate model (MACC). The evaluation of WRF/Chem-CB05-VBS simulations with the CESM ICONs/BCONs for 2001, 2006, and 2010 shows that temperature at 2 m (T2) is underpredicted for all three years, likely due to inaccuracies in soil moisture and soil temperature, resulting in biases in surface relative humidity, wind speed, and precipitation. With the exception of cloud fraction, other aerosol-cloud variables including aerosol optical depth, cloud droplet number concentration, and cloud optical thickness are underpredicted for all three years, resulting in overpredictions of radiation variables. The model performs well for O3 and particulate matter with diameter less than or equal to 2.5 μm (PM2.5) for all three years, comparable to other studies in the literature.
The model is able to reproduce observed annual average trends in O3 and PM2.5 concentrations from 2001 to 2006 and from 2006 to 2010, but is less skillful in simulating their observed seasonal trends. The 2006 and 2010 results using CESM and MACC ICONs/BCONs are compared to analyze the impact of ICONs/BCONs on model performance and their feedbacks to aerosols, clouds, and radiation. Compared to the simulations with MACC ICONs/BCONs, the simulations with CESM ICONs/BCONs improve the performance for O3 mixing ratios (e.g., the normalized mean bias for maximum 8 h O3 is reduced from -17% to -1% in 2010), PM2.5 in 2010, and sulfate in 2006 (despite a slightly larger normalized mean bias for PM2.5 in 2006). The impacts of different ICONs/BCONs on simulated aerosol-cloud-radiation variables are not negligible, with larger impacts in 2006 than in 2010.
Comparison of simulation modeling and satellite techniques for monitoring ecological processes
NASA Technical Reports Server (NTRS)
Box, Elgene O.
1988-01-01
In 1985 improvements were made in the world climatic data base for modeling and predictive mapping; in individual process models and the overall carbon-balance models; and in the interface software for mapping the simulation results. Statistical analysis of the data base was begun. In 1986 mapping was shifted to NASA-Goddard. The initial approach involving pattern comparisons was modified to a more statistical approach. A major accomplishment was the expansion and improvement of a global data base of measurements of biomass and primary production, to complement the simulation data. The main accomplishments during 1987 included: production of a master tape with all environmental and satellite data and model results for the 1600 sites; development of a complete mapping system used for the initial color maps comparing annual and monthly patterns of Normalized Difference Vegetation Index (NDVI), actual evapotranspiration, net primary productivity, gross primary productivity, and net ecosystem production; collection of more biosphere measurements for eventual improvement of the biological models; and development of some initial monthly models for primary productivity, based on satellite data.
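The Normalized Difference Vegetation Index mapped above has a standard definition, NDVI = (NIR − RED)/(NIR + RED), computed from near-infrared and red reflectances. A minimal sketch (the reflectance values are illustrative, not from the study):

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index; ranges from -1 to 1,
    with higher values over dense green vegetation."""
    if nir + red == 0:
        raise ValueError("NDVI is undefined when NIR + RED is zero")
    return (nir - red) / (nir + red)

print(ndvi(0.5, 0.08))  # dense vegetation: ~0.72
print(ndvi(0.3, 0.25))  # sparse cover: ~0.09
```

Monthly NDVI maps of this kind are what the study compares against the simulated evapotranspiration and productivity fields.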
Kowalski, K G; Olson, S; Remmers, A E; Hutmacher, M M
2008-06-01
Pharmacokinetic/pharmacodynamic (PK/PD) models were developed and clinical trial simulations were conducted to recommend a study design to test the hypothesis that a dose of SC-75416, a selective cyclooxygenase-2 inhibitor, can be identified that achieves superior pain relief (PR) compared to 400 mg ibuprofen in a post-oral surgery pain model. PK/PD models were developed for SC-75416, rofecoxib, valdecoxib, and ibuprofen relating plasma concentrations to PR scores using a nonlinear logistic-normal model. Clinical trial simulations conducted using these models suggested that 360 mg SC-75416 could achieve superior PR compared to 400 mg ibuprofen. A placebo- and positive-controlled parallel-group post-oral surgery pain study was conducted evaluating placebo, 60, 180, and 360 mg SC-75416 oral solution, and 400 mg ibuprofen. The study results confirmed the hypothesis that 360 mg SC-75416 achieved superior PR relative to 400 mg ibuprofen (DeltaTOTPAR6=3.3, P<0.05) and demonstrated the predictive performance of the PK/PD models.
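The nonlinear logistic-normal structure can be sketched as an Emax drug effect on a latent scale, passed through an inverse-logit link, with a normally distributed subject-level random effect. All parameter values and function names below are hypothetical, not the fitted model:

```python
import math
import random

def pr_probability(conc, e0=-1.0, emax=3.5, ec50=2.0, eta=0.0):
    """Latent effect = E0 + Emax*C/(EC50 + C) + eta (subject random effect);
    probability of pain relief = inverse logit of the latent effect."""
    latent = e0 + emax * conc / (ec50 + conc) + eta
    return 1.0 / (1.0 + math.exp(-latent))

def simulate_trial(concs, n_subjects=100, omega=0.5, seed=1):
    """Crude trial simulation: each subject gets a random concentration and a
    normal random effect; returns the responder fraction."""
    rng = random.Random(seed)
    responders = 0
    for _ in range(n_subjects):
        eta = rng.gauss(0.0, omega)
        c = rng.choice(concs)
        if rng.random() < pr_probability(c, eta=eta):
            responders += 1
    return responders / n_subjects
```

Clinical trial simulation then amounts to repeating `simulate_trial` over candidate designs and dose levels and comparing the predicted responder fractions.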
NASA Astrophysics Data System (ADS)
Šarolić, A.; Živković, Z.; Reilly, J. P.
2016-06-01
The electrostimulation excitation threshold of a nerve depends on temporal and frequency parameters of the stimulus. These dependences were investigated in terms of: (1) strength-duration (SD) curve for a single monophasic rectangular pulse, and (2) frequency dependence of the excitation threshold for a continuous sinusoidal current. Experiments were performed on the single-axon measurement setup based on Lumbricus terrestris having unmyelinated nerve fibers. The simulations were performed using the well-established SENN model for a myelinated nerve. Although the unmyelinated experimental model differs from the myelinated simulation model, both refer to a single axon. Thus we hypothesized that the dependence on temporal and frequency parameters should be very similar. The comparison was made possible by normalizing each set of results to the SD time constant and the rheobase current of each model, yielding the curves that show the temporal and frequency dependencies regardless of the model differences. The results reasonably agree, suggesting that this experimental setup and method of comparison with SENN model can be used for further studies of waveform effect on nerve excitability, including unmyelinated neurons.
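The normalization that makes the two models comparable can be illustrated with the classical Weiss strength-duration law: in rheobase and time-constant units, every such curve collapses onto the same shape. Parameter values below are hypothetical:

```python
def weiss_threshold(t, i_rh, tau):
    """Weiss strength-duration law: I_th(t) = I_rh * (1 + tau / t)."""
    return i_rh * (1.0 + tau / t)

def normalized_point(t, i_th, i_rh, tau):
    """Express a (duration, threshold) point in (t/tau, I/I_rh) units."""
    return t / tau, i_th / i_rh

# two 'models' with different rheobase currents and SD time constants
a = normalized_point(0.5e-3, weiss_threshold(0.5e-3, 2.0, 0.25e-3), 2.0, 0.25e-3)
b = normalized_point(1.0e-3, weiss_threshold(1.0e-3, 5.0, 0.50e-3), 5.0, 0.50e-3)
```

After normalization the two points coincide, which is exactly the property the authors exploit to compare the unmyelinated experimental axon with the myelinated SENN model.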
NASA Technical Reports Server (NTRS)
Ponomarev, Artem; Plante, Ianik; Hada, Megumi; George, Kerry; Wu, Honglu
2015-01-01
The formation of double-strand breaks (DSBs) and chromosomal aberrations (CAs) is of great importance in radiation research and, specifically, in space applications. We present a recently developed model in which chromosomes simulated by NASARTI (NASA Radiation Tracks Image) are combined with nanoscopic dose calculations performed with the Monte-Carlo simulation by RITRACKS (Relativistic Ion Tracks) in a voxelized space. The model produces the number of DSBs as a function of dose for high-energy iron, oxygen, carbon, and helium ions. The combined model calculates yields of radiation-induced CAs and unrejoined chromosome breaks in normal and repair-deficient cells. The merged computational model is calibrated using the relative frequencies and distributions of chromosomal aberrations reported in the literature. The model considers fractionated deposition of energy to approximate dose rates of the space flight environment. The merged model also predicts the yields and sizes of translocations, dicentrics, rings, and more complex-type aberrations formed in the G0/G1 cell cycle phase during the first cell division after irradiation.
Brain tumor modeling: glioma growth and interaction with chemotherapy
NASA Astrophysics Data System (ADS)
Banaem, Hossein Y.; Ahmadian, Alireza; Saberi, Hooshangh; Daneshmehr, Alireza; Khodadad, Davood
2011-10-01
In the last decade, mathematical models of tumor growth have been studied increasingly, particularly for solid tumors whose growth is driven mainly by cellular proliferation. In this paper we propose a modified model to simulate the growth of gliomas at different stages. Glioma growth is modeled by a reaction-advection-diffusion equation. We begin with a model of untreated gliomas and continue with models of polyclonal glioma following chemotherapy. From relatively simple assumptions involving homogeneous brain tissue bounded by a few gross anatomical landmarks (ventricles and skull), the models have been expanded to include heterogeneous brain tissue with different motilities of glioma cells in grey and white matter. Tumor growth is characterized by a dangerous change in the control mechanisms that normally maintain a balance between the rate of proliferation and the rate of apoptosis (controlled cell death). Results show that this model agrees closely with clinical findings and can properly simulate brain tumor behavior.
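The reaction-diffusion core of such glioma models (a Swanson-type equation, dc/dt = D∇²c + ρc(1-c), shown here in 1-D and without the advection term) can be sketched with an explicit finite-difference step. The diffusivity, proliferation rate, and grid below are illustrative only:

```python
def step(c, D, rho, dx, dt):
    """One explicit finite-difference step of dc/dt = D*c_xx + rho*c*(1 - c)
    with zero-flux (skull-like) boundaries."""
    n = len(c)
    new = c[:]
    for i in range(n):
        left = c[i - 1] if i > 0 else c[1]          # reflecting boundary
        right = c[i + 1] if i < n - 1 else c[n - 2]  # reflecting boundary
        lap = (left - 2.0 * c[i] + right) / dx**2
        new[i] = c[i] + dt * (D * lap + rho * c[i] * (1.0 - c[i]))
    return new

c = [0.0] * 51
c[25] = 0.5   # small initial tumor seed in normalized cell density
for _ in range(200):
    c = step(c, D=0.01, rho=0.1, dx=0.1, dt=0.01)
```

The bump both spreads (diffusion, i.e. cell motility) and grows in total mass (proliferation), which is the qualitative behavior the paper's heterogeneous-tissue models refine.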
Modeling extreme hurricane damage in the United States using generalized Pareto distribution
NASA Astrophysics Data System (ADS)
Dey, Asim Kumer
Extreme value distributions are used to understand and model natural calamities, man-made catastrophes, and financial collapses. Extreme value theory has been developed to study the frequency of such events and to construct a predictive model so that one can attempt to forecast the frequency of a disaster and the amount of damage from such a disaster. In this study, hurricane damages in the United States from 1900-2012 have been studied. The aim of the paper is three-fold: first, to normalize hurricane damage and fit an appropriate model to the normalized damage data; second, to predict the maximum economic damage from a future hurricane by using the concept of return period; and finally, to quantify the uncertainty in the inference of extreme return levels of hurricane losses by using a simulated hurricane series generated by bootstrap sampling. Normalized hurricane damage data are found to follow a generalized Pareto distribution. It is demonstrated that the standard deviation and coefficient of variation increase with the return period, which indicates an increase in uncertainty with model extrapolation.
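Under a generalized Pareto model for threshold excesses, the T-year return level has a closed form. A sketch with hypothetical parameters (u = threshold, sigma = scale, xi = shape, rate = exceedances per year):

```python
import math

def gpd_return_level(u, sigma, xi, rate, T):
    """T-year return level: x_T = u + (sigma/xi) * ((rate*T)**xi - 1),
    reducing to u + sigma*log(rate*T) as xi -> 0."""
    if abs(xi) < 1e-12:
        return u + sigma * math.log(rate * T)
    return u + (sigma / xi) * ((rate * T) ** xi - 1.0)

# heavy-tailed case (xi > 0): predicted damage grows quickly with return period
rl_10 = gpd_return_level(u=1.0, sigma=2.0, xi=0.5, rate=2.0, T=10.0)
rl_100 = gpd_return_level(u=1.0, sigma=2.0, xi=0.5, rate=2.0, T=100.0)
```

Bootstrapping the fitted (sigma, xi) pairs and recomputing `gpd_return_level` gives exactly the kind of return-level uncertainty band described in the abstract.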
NASA Technical Reports Server (NTRS)
Keel, Byron M.
1989-01-01
An optimum adaptive clutter rejection filter for use with airborne Doppler weather radar is presented. The radar system is being designed to operate at low-altitudes for the detection of windshear in an airport terminal area where ground clutter returns may mask the weather return. The coefficients of the adaptive clutter rejection filter are obtained using a complex form of a square root normalized recursive least squares lattice estimation algorithm which models the clutter return data as an autoregressive process. The normalized lattice structure implementation of the adaptive modeling process for determining the filter coefficients assures that the resulting coefficients will yield a stable filter and offers possible fixed point implementation. A 10th order FIR clutter rejection filter indexed by geographical location is designed through autoregressive modeling of simulated clutter data. Filtered data, containing simulated dry microburst and clutter return, are analyzed using pulse-pair estimation techniques. To measure the ability of the clutter rejection filters to remove the clutter, results are compared to pulse-pair estimates of windspeed within a simulated dry microburst without clutter. In the filter evaluation process, post-filtered pulse-pair width estimates and power levels are also used to measure the effectiveness of the filters. The results support the use of an adaptive clutter rejection filter for reducing the clutter induced bias in pulse-pair estimates of windspeed.
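The pulse-pair estimate referenced above recovers the mean radial velocity from the phase of the lag-one autocorrelation of the complex radar returns. A sketch on noise-free synthetic samples, with illustrative X-band-like numbers:

```python
import cmath
import math

def pulse_pair_velocity(iq, prt, wavelength):
    """v = -lambda / (4*pi*T) * arg(R1), with R1 the lag-one autocorrelation
    of the complex (I/Q) sample sequence and T the pulse repetition time."""
    r1 = sum(iq[i].conjugate() * iq[i + 1] for i in range(len(iq) - 1))
    r1 /= (len(iq) - 1)
    return -wavelength / (4.0 * math.pi * prt) * cmath.phase(r1)

# noise-free synthetic returns for a 10 m/s target
wl, prt, v_true = 0.1, 1e-3, 10.0
phase_step = -4.0 * math.pi * v_true * prt / wl   # per-pulse phase progression
iq = [cmath.exp(1j * phase_step * i) for i in range(32)]
v_est = pulse_pair_velocity(iq, prt, wl)
```

Clutter adds a near-zero-velocity component to R1 and biases this phase toward zero, which is why the filtered and unfiltered pulse-pair estimates are compared in the evaluation above.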
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lagerlöf, Jakob H., E-mail: Jakob@radfys.gu.se; Kindblom, Jon; Bernhardt, Peter
2014-09-15
Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO{sub 2})]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten's kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO{sub 2}), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO{sub 2} were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO{sub 2} distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. 
For larger tumors, the oxygen distributions became truncated in the lower end, due to anoxia, but smaller tumors showed undisturbed oxygen distributions. The six different models with correlated parameters generated three classes of oxygen distributions. The first was a hypothetical, negative covariance between vessel proximity and pO{sub 2} (VPO-C scenario); the second was a hypothetical positive covariance between vessel proximity and pO{sub 2} (VPO+C scenario); and the third was the hypothesis of no correlation between vessel proximity and pO{sub 2} (UP scenario). The VPO-C scenario produced a distinctly different oxygen distribution than the two other scenarios. The shape of the VPO-C scenario was similar to that of the nonvariable DOC model, and the larger the tumor, the greater the similarity between the two models. For all simulations, the mean oxygen tension decreased and the hypoxic fraction increased with tumor size. The absorbed dose required for definitive tumor control was highest for the VPO+C scenario, followed by the UP and VPO-C scenarios. Conclusions: A novel MC algorithm was presented which simulated oxygen distributions and radiation response for various biological parameter values. The analysis showed that the VPO-C scenario generated a clearly different oxygen distribution from the VPO+C scenario; the former exhibited a lower hypoxic fraction and higher radiosensitivity. In future studies, this modeling approach might be valuable for qualitative analyses of factors that affect oxygen distribution as well as analyses of specific experimental and clinical situations.
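The Michaelis-Menten consumption term used in the model has the standard saturating form; the parameter values below are illustrative, not the paper's fitted ones:

```python
def oxygen_consumption(p_o2, q_max, p50):
    """Michaelis-Menten oxygen consumption rate: q = q_max * p / (p50 + p),
    where p50 is the oxygen tension at half-maximal consumption."""
    return q_max * p_o2 / (p50 + p_o2)

half = oxygen_consumption(2.5, 4.0, 2.5)    # at p = p50, rate is q_max / 2
high = oxygen_consumption(250.0, 4.0, 2.5)  # well-oxygenated tissue: near q_max
```

The saturation means consumption is nearly constant in well-oxygenated tissue but falls off near anoxia, which is what truncates the low end of the simulated distributions in large tumors.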
Traffic Flow Density Distribution Based on FEM
NASA Astrophysics Data System (ADS)
Ma, Jing; Cui, Jianming
In the analysis of normal traffic flow, static or dynamic models based on fluid mechanics are usually used for numerical analysis. However, such treatments involve massive modeling and data-handling problems, and their accuracy is not high. The Finite Element Method (FEM) developed from the combination of modern mathematics, mechanics, and computer technology, and it has been widely applied in various domains such as engineering. Based on existing traffic flow theory, ITS, and the development of FEM, a simulation theory that applies FEM to the problems existing in traffic flow analysis is put forward. Based on this theory, and using existing Finite Element Analysis (FEA) software, the traffic flow is simulated and analyzed with fluid mechanics and dynamics. The massive data-processing problem of manual modeling and numerical analysis is solved, and the authenticity of the simulation is enhanced.
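The fluid-mechanical view of traffic underlying this work is usually the LWR conservation law, rho_t + (rho·v(rho))_x = 0. A minimal Lax-Friedrichs step (a finite-difference stand-in, not the paper's FEM discretization) with the Greenshields speed-density relation illustrates it; all numbers are made up:

```python
def lwr_lax_friedrichs(rho, dx, dt, vmax=30.0, rho_max=0.2):
    """One Lax-Friedrichs step of rho_t + f(rho)_x = 0 on a periodic road,
    with Greenshields flux f(rho) = rho * vmax * (1 - rho / rho_max)."""
    n = len(rho)
    f = [r * vmax * (1.0 - r / rho_max) for r in rho]
    return [0.5 * (rho[i - 1] + rho[(i + 1) % n])
            - dt / (2.0 * dx) * (f[(i + 1) % n] - f[i - 1])
            for i in range(n)]

# a small density bump on an otherwise uniform road (veh/m, m, s)
rho = [0.05] * 12
rho[3] = 0.12
rho_next = lwr_lax_friedrichs(rho, dx=10.0, dt=0.1)
```

The scheme conserves the total vehicle count exactly, which is the physical constraint any FEM or FEA treatment of traffic density must also respect.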
Recovery of atmospheric refractivity profiles from simulated satellite-to-satellite tracking data
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.; Rangaswamy, S.
1975-01-01
Techniques for recovering atmospheric refractivity profiles from simulated satellite-to-satellite tracking data are documented. Examples are given using the geometric configuration of the ATS-6/NIMBUS-6 Tracking Experiment. The underlying refractivity model for the lower atmosphere has the spherically symmetric form N = exp P(s) where P(s) is a polynomial in the normalized height s. For the simulation used, the Herglotz-Wiechert technique recovered values which were 0.4% and 40% different from the input values at the surface and at a height of 33 kilometers, respectively. Using the same input data, the model fitting technique recovered refractivity values 0.05% and 1% different from the input values at the surface and at a height of 50 kilometers, respectively. It is also shown that if ionospheric and water vapor effects can be properly modelled or effectively removed from the data, pressure and temperature distributions can be obtained.
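The refractivity model N = exp P(s) is straightforward to evaluate; the polynomial coefficients below are placeholders, not the recovered profile:

```python
import math

def refractivity(s, coeffs):
    """N(s) = exp(P(s)), where P(s) = sum_k coeffs[k] * s**k and s is the
    normalized height."""
    return math.exp(sum(c * s**k for k, c in enumerate(coeffs)))

# a two-term P(s): surface value of 300 N-units decaying roughly exponentially
coeffs = [math.log(300.0), -1.0]
n_surface = refractivity(0.0, coeffs)
n_top = refractivity(1.0, coeffs)
```

With a first-degree P(s) the model reduces to a pure exponential decay; higher-degree terms let the fit capture departures from that baseline, which is where the model-fitting technique gains over Herglotz-Wiechert at altitude.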
Sumner, Walton; Xu, Jin Zhong
2002-01-01
The American Board of Family Practice is developing a patient simulation program to evaluate diagnostic and management skills. The simulator must give temporally and physiologically reasonable answers to symptom questions such as "Have you been tired?" A three-step process generates symptom histories. In the first step, the simulator determines points in time where it should calculate instantaneous symptom status. In the second step, a Bayesian network implementing a roughly physiologic model of the symptom generates a value on a severity scale at each sampling time. Positive, zero, and negative values represent increased, normal, and decreased status, as applicable. The simulator plots these values over time. In the third step, another Bayesian network inspects this plot and reports how the symptom changed over time. This mechanism handles major trends, multiple and concurrent symptom causes, and gradually effective treatments. Other temporal insights, such as observations about short-term symptom relief, require complementary mechanisms.
Cárdenas-García, Maura; González-Pérez, Pedro Pablo
2013-03-01
Apoptotic cell death plays a crucial role in development and homeostasis. This process is driven by mitochondrial permeabilization and activation of caspases. In this paper we adopt a tuple spaces-based modelling and simulation approach, and show how it can be applied to the simulation of this intracellular signalling pathway. Specifically, we are working to explore and understand the complex interaction patterns of the apoptotic caspases and the role of the mitochondria. As a first approximation, using the tuple spaces-based in silico approach, we model and simulate both the extrinsic and intrinsic apoptotic signalling pathways and the interactions between them. During apoptosis, mitochondrial proteins released from the mitochondria to the cytosol are decisively involved in the process. Once the decision to die is taken, there is normally no return; cancer cells, however, offer resistance to mitochondrial induction.
NASA Technical Reports Server (NTRS)
Joncas, K. P.
1972-01-01
Concepts and techniques for identifying and simulating both the steady state and dynamic characteristics of electrical loads for use during integrated system test and evaluation are discussed. The investigations showed that it is feasible to design and develop interrogation and simulation equipment to perform the desired functions. During the evaluation, actual spacecraft loads were interrogated by stimulating the loads with their normal input voltage and measuring the resultant voltage and current time histories. Elements of the circuits were optimized by an iterative process of selecting element values and comparing the time-domain response of the model with those obtained from the real equipment during interrogation.
Simulations of Ground and Space-Based Oxygen Atom Experiments
NASA Technical Reports Server (NTRS)
Finchum, A. (Technical Monitor); Cline, J. A.; Minton, T. K.; Braunstein, M.
2003-01-01
A low-earth orbit (LEO) materials erosion scenario and the ground-based experiment designed to simulate it are compared using the direct-simulation Monte Carlo (DSMC) method. The DSMC model provides a detailed description of the interactions between the hyperthermal gas flow and a normally oriented flat plate for each case. We find that while the general characteristics of the LEO exposure are represented in the ground-based experiment, multi-collision effects can potentially alter the impact energy and directionality of the impinging molecules in the ground-based experiment. Multi-collision phenomena also affect downstream flux measurements.
All is not lost: deriving a top-down mass budget of plastic at sea
NASA Astrophysics Data System (ADS)
Koelmans, Albert A.; Kooi, Merel; Lavender Law, Kara; van Sebille, Erik
2017-11-01
Understanding the global mass inventory is one of the main challenges in present research on plastic marine debris. The fragmentation and vertical transport of oceanic plastic are especially poorly understood. However, whereas fragmentation rates are unknown, information on plastic emissions, concentrations of plastics in the ocean surface layer (OSL), and fragmentation mechanisms is available. Here, we apply a systems engineering analytical approach and propose a tentative ‘whole ocean’ mass balance model that combines emission data, surface area-normalized plastic fragmentation rates, estimated concentrations in the OSL, and removal from the OSL by sinking. We simulate known plastic abundances in the OSL and calculate an average whole-ocean apparent surface area-normalized plastic fragmentation rate constant, given representative radii for macroplastic and microplastic. Simulations show that 99.8% of the plastic that had entered the ocean since 1950 had settled below the OSL by 2016, with an additional 9.4 million tons settling per year. In 2016, the model predicts that of the 0.309 million tons in the OSL, an estimated 83.7% was macroplastic, 13.8% microplastic, and 2.5% was < 0.335 mm ‘nanoplastic’. A zero-future-emission simulation shows that almost all plastic in the OSL would be removed within three years, implying a fast response time of surface plastic abundance to changes in inputs. The model complements current spatially explicit models, points to future experiments that would inform critical model parameters, and allows for further validation when more experimental and field data become available.
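The mass-balance logic can be caricatured as a one-box model of the ocean surface layer with constant annual emissions and first-order settling. The settling rate below is hypothetical, chosen only to show how a large settled fraction and a fast OSL response time both fall out of such a model:

```python
def simulate_osl(years, emission, k_sink):
    """One-box surface-layer model: each year the OSL receives the annual
    emission and loses a fixed fraction k_sink by settling below the OSL.
    Returns (mass remaining in OSL, cumulative mass settled)."""
    m_osl, m_settled = 0.0, 0.0
    for _ in range(years):
        m_osl += emission
        loss = k_sink * m_osl
        m_osl -= loss
        m_settled += loss
    return m_osl, m_settled

# 66 years of unit annual emissions (1950-2016) with a fast settling rate
m_osl, m_settled = simulate_osl(66, 1.0, 0.9)
```

With fast settling the OSL reaches a small steady-state stock almost immediately, so nearly all historical input ends up below the surface layer, and the OSL stock decays within a few years once emissions stop.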
Sensitivity of Induced Seismic Sequences to Rate-and-State Frictional Processes
NASA Astrophysics Data System (ADS)
Kroll, Kayla A.; Richards-Dinger, Keith B.; Dieterich, James H.
2017-12-01
It is well established that subsurface injection of fluids increases pore fluid pressures that may lead to shear failure along a preexisting fault surface. Concern among oil and gas, geothermal, and carbon storage operators has risen dramatically over the past decade due to the increase in the number and magnitude of induced earthquakes. Efforts to mitigate the risk associated with injection-induced earthquakes include modeling of the interaction between fluids and earthquake faults. Here we investigate this relationship with simulations that couple a geomechanical reservoir model and RSQSim, a physics-based earthquake simulator. RSQSim employs rate- and state-dependent friction (RSF) that enables the investigation of the time-dependent nature of earthquake sequences. We explore the effect of two RSF parameters and normal stress on the spatiotemporal characteristics of injection-induced seismicity. We perform >200 simulations to systematically investigate the effect of these model components on the evolution of induced seismicity sequences and compare the spatiotemporal characteristics of our synthetic catalogs to observations of induced earthquakes. We find that the RSF parameters control the ability of seismicity to migrate away from the injection well, the total number and maximum magnitude of induced events. Additionally, the RSF parameters control the occurrence/absence of premonitory events. Lastly, we find that earthquake stress drops can be modulated by the normal stress and/or the RSF parameters. Insight gained from this study can aid in further development of models that address best practice protocols for injection operations, site-specific models of injection-induced earthquakes, and probabilistic hazard and risk assessments.
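The rate- and state-dependent friction law at the heart of RSQSim is conventionally written mu = mu0 + a·ln(V/V0) + b·ln(V0·theta/Dc), with the aging law d(theta)/dt = 1 - V·theta/Dc. A sketch with illustrative (not the study's) parameter values:

```python
import math

def rsf_mu(v, theta, mu0=0.6, a=0.01, b=0.014, v0=1e-6, dc=1e-5):
    """Rate-and-state friction coefficient (Dieterich formulation)."""
    return mu0 + a * math.log(v / v0) + b * math.log(v0 * theta / dc)

def theta_steady(v, dc=1e-5):
    """Aging-law steady state: d(theta)/dt = 1 - v*theta/dc = 0 -> theta = dc/v."""
    return dc / v

# steady-state velocity dependence: d(mu_ss)/d(ln v) = a - b (< 0 here:
# velocity weakening, the condition for unstable, earthquake-like slip)
mu_slow = rsf_mu(1e-6, theta_steady(1e-6))
mu_fast = rsf_mu(1e-5, theta_steady(1e-5))
```

The sign of a - b, along with the normal stress, is exactly the kind of control parameter varied across the >200 simulations described above.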
Escherichia coli growth under modeled reduced gravity
NASA Technical Reports Server (NTRS)
Baker, Paul W.; Meyer, Michelle L.; Leff, Laura G.
2004-01-01
Bacteria exhibit varying responses to modeled reduced gravity that can be simulated by clino-rotation. When Escherichia coli was subjected to different rotation speeds during clino-rotation, significant differences between modeled reduced gravity and normal gravity controls were observed only at higher speeds (30-50 rpm). There was no apparent effect of removing samples on the results obtained. When E. coli was grown in minimal medium (at 40 rpm), cell size was not affected by modeled reduced gravity and there were few differences in cell numbers. However, in higher nutrient conditions (i.e., dilute nutrient broth), total cell numbers were higher and cells were smaller under reduced gravity compared to normal gravity controls. Overall, the responses to modeled reduced gravity varied with nutrient conditions; larger surface to volume ratios may help compensate for the zone of nutrient depletion around the cells under modeled reduced gravity.
Effects of simulated weightlessness on fish otolith growth: Clinostat versus Rotating-Wall Vessel
NASA Astrophysics Data System (ADS)
Brungs, Sonja; Hauslage, Jens; Hilbig, Reinhard; Hemmersbach, Ruth; Anken, Ralf
2011-09-01
Stimulus dependence is a general feature of developing sensory systems. It has been shown earlier that the growth of inner ear heavy stones (otoliths) of late-stage Cichlid fish ( Oreochromis mossambicus) and Zebrafish ( Danio rerio) is slowed down by hypergravity, whereas microgravity during space flight yields an opposite effect, i.e. larger than 1 g otoliths, in Swordtail ( Xiphophorus helleri) and in Cichlid fish late-stage embryos. These and related studies proposed that otolith growth is actively adjusted via a feedback mechanism to produce a test mass of the appropriate physical capacity. Using ground-based techniques to apply simulated weightlessness, long-term clinorotation (CR; exposure on a fast-rotating Clinostat with one axis of rotation) led to larger than 1 g otoliths in late-stage Cichlid fish. Larger than normal otoliths were also found in early-staged Zebrafish embryos after short-term Wall Vessel Rotation (WVR; also regarded as a method to simulate weightlessness). These results are basically in line with the results obtained on Swordtails from space flight. Thus, the growth of fish inner ear otoliths seems to be an appropriate parameter to assess the quality of "simulated weightlessness" provided by a particular simulation device. Since CR and WVR are in worldwide use to simulate weightlessness conditions on ground using small-sized specimens, we were prompted to directly compare the effects of CR and WVR on otolith growth using developing Cichlids as model organism. Animals were simultaneously subjected to CR and WVR from a point of time when otolith primordia had begun to calcify both within the utricle (gravity perception) and the saccule (hearing); the respective otoliths are the lapilli and the sagittae. Three such runs were subsequently carried out, using three different batches of fish. The runs were discontinued when the animals began to hatch. 
In the course of all three runs performed, CR led to larger than normal lapilli, whereas WVR had no effect on the growth of these otoliths. Regarding sagittae, CR resulted in larger than normal stones in one of the three runs. The other CR runs and all WVR runs had no effect on sagittal growth. These results clearly indicate that CR rather than WVR can be regarded as a device to simulate weightlessness using the Cichlid as model organism. Since WVR has earlier been shown to affect otolith growth in Zebrafish, the lifestyle of an animal (mouth-breeding versus egg-laying) seems to be of considerable importance. Further studies using a variety of simulation techniques (including, e.g. magnetic levitation and random positioning) and various species are needed in order to identify the most appropriate technique to simulate weightlessness regarding a particular model organism.
NASA Astrophysics Data System (ADS)
Lentz, C. L.; Baker, D. N.; Jaynes, A. N.; Dewey, R. M.; Lee, C. O.; Halekas, J. S.; Brain, D. A.
2018-02-01
Normal solar wind flows and intense solar transient events interact directly with the upper Martian atmosphere due to the absence of an intrinsic global planetary magnetic field. Since the launch of the Mars Atmosphere and Volatile EvolutioN (MAVEN) mission, there are now new means to directly observe solar wind parameters at the planet's orbital location for limited time spans. Due to MAVEN's highly elliptical orbit, in situ measurements cannot be taken while MAVEN is inside Mars' magnetosheath. To model solar wind conditions during these atmospheric and magnetospheric passages, this research project utilized the solar wind forecasting capabilities of the WSA-ENLIL+Cone model. The model was used to simulate solar wind parameters, including magnetic field magnitude, plasma particle density, dynamic pressure, proton temperature, and velocity, over a segment lasting four Carrington rotations. An additional simulation that lasted 18 Carrington rotations was then conducted. The precision of each simulation was examined for intervals when MAVEN was in the upstream solar wind, that is, with no exospheric or magnetospheric phenomena altering in situ measurements. It was determined that generalized, extensive simulations have prediction capabilities comparable to those of shorter, more comprehensive simulations. Generally, this study aimed to quantify the loss of detail in long-term simulations and to determine whether extended simulations can provide accurate, continuous upstream solar wind conditions when there is a lack of in situ measurements.
Computational Study of Thrombus Formation and Clotting Factor Effects under Venous Flow Conditions
Govindarajan, Vijay; Rakesh, Vineet; Reifman, Jaques; Mitrophanov, Alexander Y.
2016-01-01
A comprehensive understanding of thrombus formation as a physicochemical process that has evolved to protect the integrity of the human vasculature is critical to our ability to predict and control pathological states caused by a malfunctioning blood coagulation system. Despite numerous investigations, the spatial and temporal details of thrombus growth as a multicomponent process are not fully understood. Here, we used computational modeling to investigate the temporal changes in the spatial distributions of the key enzymatic (i.e., thrombin) and structural (i.e., platelets and fibrin) components within a growing thrombus. Moreover, we investigated the interplay between clot structure and its mechanical properties, such as hydraulic resistance to flow. Our model relied on the coupling of computational fluid dynamics and biochemical kinetics, and was validated using flow-chamber data from a previous experimental study. The model allowed us to identify the distinct patterns characterizing the spatial distributions of thrombin, platelets, and fibrin accumulating within a thrombus. Our modeling results suggested that under the simulated conditions, thrombin kinetics was determined predominantly by prothrombinase. Furthermore, our simulations showed that thrombus resistance imparted by fibrin was ∼30-fold higher than that imparted by platelets. Yet, thrombus-mediated blood flow occlusion was driven primarily by the platelet deposition process, because the height of the platelet accumulation domain was approximately twice that of the fibrin accumulation domain. Fibrinogen supplementation in normal blood resulted in a nonlinear increase in thrombus resistance, and for a supplemented fibrinogen level of 48%, the thrombus resistance increased by ∼2.7-fold.
Finally, our model predicted that restoring the normal levels of clotting factors II, IX, and X while simultaneously restoring fibrinogen (to 88% of its normal level) in diluted blood can restore fibrin generation to ∼78% of its normal level and hence improve clot formation under dilution.
ERIC Educational Resources Information Center
Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.
2017-01-01
Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
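The kind of robustness check described above can be sketched with a small Monte Carlo: draw groups with equal variances (so the null hypothesis is true), apply a variance-homogeneity test, and count rejections. This sketch uses SciPy's Brown-Forsythe variant of Levene's test as a stand-in for the 14 tests in the study; group sizes and replication counts are illustrative, not the study's design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_rep, alpha = 1000, 0.05

def type1_rate(sampler):
    """Empirical Type I error: rejection rate when all group variances are equal."""
    rejections = 0
    for _ in range(n_rep):
        groups = [sampler(20) for _ in range(3)]       # three groups, H0 true
        _, p = stats.levene(*groups, center="median")  # Brown-Forsythe variant
        rejections += p < alpha
    return rejections / n_rep

# Equal variances across groups within each replication; only the shape differs
r_normal = type1_rate(lambda n: rng.normal(size=n))
r_heavy = type1_rate(lambda n: rng.standard_t(df=3, size=n))
print(r_normal, r_heavy)
```

Comparing the empirical rejection rate to the nominal 5% level under normal versus heavy-tailed data is precisely how such simulation studies quantify robustness to non-normality.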
Multiplicative Modeling of Children's Growth and Its Statistical Properties
NASA Astrophysics Data System (ADS)
Kuninaka, Hiroto; Matsushita, Mitsugu
2014-03-01
We develop a numerical growth model that can predict the statistical properties of the height distribution of Japanese children. Our previous studies have clarified that the height distribution of schoolchildren shows a transition from the lognormal distribution to the normal distribution during puberty. In this study, we demonstrate by simulation that the transition occurs owing to the variability of the onset of puberty.
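The lognormal-to-normal idea rests on a standard fact: if growth is multiplicative, log-height is a sum of independent terms and is therefore approximately normal, making height itself approximately lognormal. A minimal sketch with made-up growth factors, not the authors' model:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Each "year" multiplies height by a random factor, so log-height is a sum of
# i.i.d. terms and tends toward normality; height itself tends to be lognormal.
# Growth factors are invented for illustration.
factors = rng.uniform(1.05, 1.15, size=(2000, 12))
height = 50.0 * factors.prod(axis=1)   # start from ~50 cm

_, p_log = stats.normaltest(np.log(height))  # log-heights: close to normal
_, p_raw = stats.normaltest(height)          # raw heights: skewed (lognormal)
print(p_log, p_raw)
```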
ERIC Educational Resources Information Center
Rhemtulla, Mijke; Brosseau-Liard, Patricia E.; Savalei, Victoria
2012-01-01
A simulation study compared the performance of robust normal theory maximum likelihood (ML) and robust categorical least squares (cat-LS) methodology for estimating confirmatory factor analysis models with ordinal variables. Data were generated from 2 models with 2-7 categories, 4 sample sizes, 2 latent distributions, and 5 patterns of category…
NASA Astrophysics Data System (ADS)
Yu, Junliang; Froning, Dieter; Reimer, Uwe; Lehnert, Werner
2018-06-01
The lattice Boltzmann method is adopted to simulate the three-dimensional dynamic process of liquid water breaking through the gas diffusion layer (GDL) in the polymer electrolyte membrane fuel cell. Twenty-two micro-structures of Toray GDL are built based on a stochastic geometry model. It is found that more than one breakthrough location forms randomly on the GDL surface. Breakthrough location distances (BLD) are analyzed statistically in two ways, and the distribution is evaluated with the Lilliefors test. It is concluded that the BLD can be described by a normal distribution with certain statistical characteristics. Information on the shortest neighbor breakthrough location distance can serve as input for cell-scale fuel cell simulations.
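A Lilliefors-style normality check of breakthrough location distances can be sketched as follows. The Lilliefors test is a Kolmogorov-Smirnov test with the normal parameters estimated from the data, so its null distribution can be calibrated by Monte Carlo. The `distances` samples below are synthetic stand-ins, not the paper's data.

```python
import numpy as np
from scipy import stats

def lilliefors_p(x, n_mc=2000, seed=0):
    """Monte Carlo Lilliefors test: KS distance to a normal fitted to the data,
    with the null distribution calibrated by simulating normal samples."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, float)
    d_obs = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).statistic
    d_null = np.empty(n_mc)
    for i in range(n_mc):
        y = rng.normal(size=x.size)
        d_null[i] = stats.kstest(y, "norm", args=(y.mean(), y.std(ddof=1))).statistic
    return float((d_null >= d_obs).mean())

rng = np.random.default_rng(1)
p_norm = lilliefors_p(rng.normal(50.0, 10.0, size=40))  # synthetic BLD-like sample
p_exp = lilliefors_p(rng.exponential(50.0, size=40))    # clearly non-normal sample
print(p_norm, p_exp)
```

A large p-value means the distances are consistent with a normal distribution, which is the conclusion the abstract reports for the BLD.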
Experimental and numerical modeling of heat transfer in directed thermoplates
Khalil, Imane; Hayes, Ryan; Pratt, Quinn; ...
2018-03-20
We present three-dimensional numerical simulations to quantify the design specifications of a directional thermoplate expanded channel heat exchanger, also called dimpleplate. Parametric thermofluidic simulations were performed independently varying the number of spot welds, the diameter of the spot welds, and the thickness of the fluid channel within the laminar flow regime. Results from computational fluid dynamics simulations show an improvement in heat transfer is achieved under a variety of conditions: when the thermoplate has a relatively large cross-sectional area normal to the flow, a ratio of spot weld spacing to channel length of 0.2, and a ratio of the spot weld diameter with respect to channel width of 0.3. Lastly, experimental results performed to validate the model are also presented.
A Near-Wall Reynolds-Stress Closure without Wall Normals
NASA Technical Reports Server (NTRS)
Yuan, S. P.; So, R. M. C.
1997-01-01
With the aid of near-wall asymptotic analysis and results of direct numerical simulation, a new near-wall Reynolds stress model (NNWRS) is formulated based on the SSG high-Reynolds-stress model with wall-independent near-wall corrections. Only one damping function is used for flows with a wide range of Reynolds numbers to ensure that the near-wall modifications diminish away from the walls. The model is able to reproduce complicated flow phenomena induced by complex geometry, such as flow recirculation, reattachment and boundary-layer redevelopment in backward-facing step flow and secondary flow in three-dimensional square duct flow. In simple flows, including fully developed channel/pipe flow, Couette flow and boundary-layer flow, the wall effects are dominant, and the NNWRS model predicts a lesser degree of turbulent anisotropy in the near-wall region compared with the wall-dependent near-wall Reynolds stress model (NWRS) developed by So and colleagues. The comparison of the predictions given by the two models rectifies the misconception that the overshooting of the skin friction coefficient in backward-facing step flow, prevalent in near-wall models with wall normals, is caused by the use of wall normals.
Arecibo pulsar survey using ALFA. III. Precursor survey and population synthesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiggum, J. K.; Lorimer, D. R.; McLaughlin, M. A.
The Pulsar Arecibo L-band Feed Array (PALFA) Survey uses the ALFA 7-beam receiver to search both inner and outer Galactic sectors visible from Arecibo (32° ≲ ℓ ≲ 77° and 168° ≲ ℓ ≲ 214°) close to the Galactic plane (|b| ≲ 5°) for pulsars. The PALFA survey is sensitive to sources fainter and more distant than have previously been seen because of Arecibo's unrivaled sensitivity. In this paper we detail a precursor survey of this region with PALFA, which observed a subset of the full region (slightly more restrictive in ℓ and |b| ≲ 1°) and detected 45 pulsars. Detections included 1 known millisecond pulsar and 11 previously unknown, long-period pulsars. In the surveyed part of the sky that overlaps with the Parkes Multibeam Pulsar Survey (36° ≲ ℓ ≲ 50°), PALFA is probing deeper than the Parkes survey, with four discoveries in this region. For both Galactic millisecond and normal pulsar populations, we compare the survey's detections with simulations to model these populations and, in particular, to estimate the number of observable pulsars in the Galaxy. We place 95% confidence intervals of 82,000 to 143,000 on the number of detectable normal pulsars and 9000 to 100,000 on the number of detectable millisecond pulsars in the Galactic disk. These are consistent with previous estimates. Given the most likely population size in each case (107,000 and 15,000 for normal and millisecond pulsars, respectively), we extend survey detection simulations to predict that, when complete, the full PALFA survey should have detected 1000 (+330/−230) normal pulsars and 30 (+200/−20) millisecond pulsars. Identical estimation techniques predict that 490 (+160/−115) normal pulsars and 12 (+70/−5) millisecond pulsars would be detected by the beginning of 2014; by that time, the PALFA survey had detected 283 normal pulsars and 31 millisecond pulsars.
We attribute the deficiency in normal pulsar detections predominantly to the radio frequency interference environment at Arecibo and perhaps also to scintillation, both effects that are currently not accounted for in population simulation models.
Comparative Investigation of Normal Modes and Molecular Dynamics of Hepatitis C NS5B Protein
NASA Astrophysics Data System (ADS)
Asafi, M. S.; Yildirim, A.; Tekpinar, M.
2016-04-01
Understanding the dynamics of proteins has many practical implications for finding cures for protein-related diseases. Normal mode analysis and molecular dynamics are widely used physics-based computational methods for investigating protein dynamics. In this work, we studied the dynamics of the Hepatitis C NS5B protein with molecular dynamics and normal mode analysis. Principal components obtained from a 100-nanosecond molecular dynamics simulation show good overlap with normal modes calculated with a coarse-grained elastic network model, while coarse-grained normal mode analysis takes at least an order of magnitude less computation time. Encouraged by this good overlap and the short computation times, we further analyzed the low-frequency normal modes of Hepatitis C NS5B. Motion directions and average spatial fluctuations have been analyzed in detail. Finally, the biological implications of these motions for drug design efforts against Hepatitis C infections are elaborated.
Application of remote sensing to hydrology. [for the formulation of watershed behavior models
NASA Technical Reports Server (NTRS)
Ambaruch, R.; Simmons, J. W.
1973-01-01
Streamflow forecasting and hydrologic modelling are considered in a feasibility assessment of using the data produced by remote observation from space and/or aircraft to reduce the time and expense normally involved in achieving the ability to predict the hydrological behavior of an ungaged watershed. Existing watershed models are described, and both stochastic and parametric techniques are discussed towards the selection of a suitable simulation model. Technical progress and applications are reported and recommendations are made for additional research.
Development and Integration of Control System Models
NASA Technical Reports Server (NTRS)
Kim, Young K.
1998-01-01
The computer simulation tool TREETOPS has been upgraded and used at NASA/MSFC to model various complicated mechanical systems and to perform dynamics and control analysis of their pointing control systems. A TREETOPS model of the Advanced X-ray Astrophysics Facility - Imaging (AXAF-I) dynamics and control system was developed to evaluate the AXAF-I pointing performance in Normal Pointing Mode. An optical model of the Shooting Star Experiment (SSE) was also developed, and its optical performance analysis was done using the MACOS software.
NASA Technical Reports Server (NTRS)
Holms, A. G.
1974-01-01
Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudorandom, normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
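The fit-then-delete workflow can be sketched with an ordinary least-squares fit followed by a crude term-deletion rule; the model, threshold, and data here are illustrative, not the deletion strategies evaluated in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# True response surface: y = 1 + 2*x1 + noise; the x2 term is spurious
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(scale=0.5, size=n)

# Fit the full model, then zero out terms with small estimated coefficients
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
keep = np.abs(beta) > 0.2          # crude, illustrative deletion rule
beta_reduced = np.where(keep, beta, 0.0)
print(beta.round(3), beta_reduced.round(3))
```

Comparing predictions from `beta_reduced` against the known population values, over many simulated experiments, is how such studies rank deletion strategies.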
The Monash University Interactive Simple Climate Model
NASA Astrophysics Data System (ADS)
Dommenget, D.
2013-12-01
The Monash University interactive simple climate model is a web-based interface that allows students and the general public to explore the physical simulation of the climate system with a real global climate model. It is based on the Globally Resolved Energy Balance (GREB) model, a climate model published by Dommenget and Floeter [2011] in the international peer-reviewed journal Climate Dynamics. The model simulates most of the main physical processes in the climate system in a very simplistic way and therefore allows very fast and simple climate model simulations on a normal PC. Despite its simplicity, the model simulates the climate response to external forcings, such as a doubling of CO2 concentrations, very realistically (similar to state-of-the-art climate models). The Monash simple climate model web interface allows you to study the results of more than 2000 different model experiments in an interactive way, to work through a number of tutorials on the interactions of physical processes in the climate system, and to solve some puzzles. By switching physical processes off and on you can deconstruct the climate and learn how the different processes interact to generate the observed climate, and how they interact to generate the IPCC-predicted climate change under anthropogenic CO2 increase. The presentation will illustrate how this web-based tool works and what possibilities it offers for teaching students.
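The spirit of a simple energy-balance climate model can be shown in a zero-dimensional sketch: absorbed solar radiation balances outgoing longwave radiation, with a crude greenhouse factor. The numbers are textbook values, not GREB parameters.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # planetary albedo

def equilibrium_temp(greenhouse=0.0):
    """Surface temperature where absorbed solar equals escaping longwave.
    `greenhouse` is the fraction of surface emission trapped by the atmosphere
    (a crude one-parameter stand-in for the real radiative processes)."""
    absorbed = S0 * (1.0 - ALBEDO) / 4.0
    return (absorbed / (SIGMA * (1.0 - greenhouse))) ** 0.25

t_bare = equilibrium_temp(0.0)    # ~255 K without an atmosphere
t_green = equilibrium_temp(0.40)  # ~290 K with a crude greenhouse
print(t_bare, t_green)
```

Switching the greenhouse term off and on mirrors, in miniature, the "deconstruct the climate" exercises the web interface offers.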
Investigations on 3-dimensional temperature distribution in a FLATCON-type CPV module
NASA Astrophysics Data System (ADS)
Wiesenfarth, Maike; Gamisch, Sebastian; Kraus, Harald; Bett, Andreas W.
2013-09-01
The thermal flow in a FLATCON®-type CPV module is investigated theoretically and experimentally. For the simulation a model in the computational fluid dynamics (CFD) software SolidWorks Flow Simulation was established. In order to verify the simulation results the calculated and measured temperatures were compared assuming the same operating conditions (wind speed and direction, direct normal irradiance (DNI) and ambient temperature). Therefore, an experimental module was manufactured and equipped with temperature sensors at defined positions. In addition, the temperature distribution on the back plate of the module was displayed by infrared images. The simulated absolute temperature and the distribution compare well with an average deviation of only 3.3 K to the sensor measurements. Finally, the validated model was used to investigate the influence of the back plate material on the temperature distribution by replacing the glass material by aluminum. The simulation showed that it is important to consider heat dissipation by radiation when designing a CPV module.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naitoh, Masanori; Ujita, Hiroshi; Nagumo, Hiroichi
1997-07-01
The Nuclear Power Engineering Corporation (NUPEC) has initiated a long-term program to develop the simulation system "IMPACT" for analysis of hypothetical severe accidents in nuclear power plants. IMPACT employs advanced methods of physical modeling and numerical computation, and can simulate a wide spectrum of scenarios ranging from normal operation to hypothetical, beyond-design-basis-accident events. Designed as a large-scale system of interconnected, hierarchical modules, IMPACT's distinguishing features include mechanistic models based on first principles and high-speed simulation on parallel processing computers. The present plan is a ten-year program starting from 1993, consisting of an initial year of preparatory work followed by three technical phases: Phase-1 for development of a prototype system; Phase-2 for completion of the simulation system, incorporating new achievements from basic studies; and Phase-3 for refinement through extensive verification and validation against test results and available real plant data.
Multiscale modeling and characterization for performance and safety of lithium-ion batteries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pannala, Sreekanth; Turner, John A.; Allu, Srikanth
Lithium-ion batteries are highly complex electrochemical systems whose performance and safety are governed by coupled nonlinear electrochemical-electrical-thermal-mechanical processes over a range of spatiotemporal scales. In this paper we describe a new, open source computational framework for Lithium-ion battery simulations that is designed to support a variety of model types and formulations. This framework has been used to create three-dimensional cell and battery pack models that explicitly simulate all the battery components (current collectors, electrodes, and separator). The models are used to predict battery performance under normal operations and to study thermal and mechanical safety aspects under adverse conditions. The model development and validation are supported by experimental methods such as IR-imaging, X-ray tomography and micro-Raman mapping.
Applying Multivariate Discrete Distributions to Genetically Informative Count Data.
Kirkpatrick, Robert M; Neale, Michael C
2016-03-01
We present a novel method of conducting biometric analysis of twin data when the phenotypes are integer-valued counts, which often show an L-shaped distribution. Monte Carlo simulation is used to compare five likelihood-based approaches to modeling: our multivariate discrete method, when its distributional assumptions are correct, when they are incorrect, and three other methods in common use. With data simulated from a skewed discrete distribution, recovery of twin correlations and proportions of additive genetic and common environment variance was generally poor for the Normal, Lognormal and Ordinal models, but good for the two discrete models. Sex-separate applications to substance-use data from twins in the Minnesota Twin Family Study showed superior performance of two discrete models. The new methods are implemented using R and OpenMx and are freely available.
A time dependent anatomically detailed model of cardiac conduction
NASA Technical Reports Server (NTRS)
Saxberg, B. E.; Grumbach, M. P.; Cohen, R. J.
1985-01-01
In order to understand the determinants of transitions in cardiac electrical activity from normal patterns to dysrhythmias such as ventricular fibrillation, we are constructing an anatomically and physiologically detailed finite element simulation of myocardial electrical propagation. A healthy human heart embedded in paraffin was sectioned to provide a detailed anatomical substrate for model calculations. The simulation of propagation includes anisotropy in conduction velocity due to fiber orientation as well as gradients in conduction velocities, absolute and relative refractory periods, action potential duration and electrotonic influence of nearest neighbors. The model also includes changes in the behaviour of myocardial tissue as a function of the past local activity. With this model, we can examine the significance of fiber orientation and time dependence of local propagation parameters on dysrhythmogenesis.
Li, Mao; Li, Yan; Wen, Peng Paul
2014-01-01
The biological microenvironment is disrupted when tumour masses are introduced because of the strong competition for oxygen. During the period of avascular growth of tumours, pre-existing capillaries play a crucial role in supplying oxygen to both tumourous and healthy cells. Because the oxygen supply from capillaries is limited, healthy cells have to compete for oxygen with tumourous cells. In this study, an improved Krogh's cylinder model, which is more realistic than the previously reported assumption that oxygen is homogeneously distributed in a microenvironment, is proposed to describe the process of oxygen diffusion from a capillary to its surrounding environment. The capillary wall permeability is also taken into account. The simulation results show that when tumour masses are implanted at the upstream part of a capillary and followed by normal tissues, the whole normal tissue suffers from hypoxia. In contrast, when normal tissues are ahead of tumour masses, their pO2 is sufficient. In both situations, the pO2 in the whole normal tissue drops significantly due to axial diffusion at the interface of normal tissues and tumourous cells. Because axial oxygen diffusion cannot supply the whole tumour mass, only those tumourous cells near the interface are partially supplied and have a small chance to survive.
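The classic steady-state Krogh cylinder solution, without the paper's extensions such as wall permeability and axial diffusion, can be written down directly; all parameter values below are illustrative.

```python
import numpy as np

def krogh_po2(r, p_cap=40.0, rc=3e-4, rt=3e-3, m=1.0e-4, k=9.4e-10):
    """Steady-state pO2 (mmHg) at radius r (cm) in the Krogh tissue cylinder:
    capillary radius rc, tissue radius rt, uniform consumption rate m, and
    Krogh diffusion coefficient k. Parameter values are illustrative only."""
    r = np.asarray(r, float)
    return p_cap - (m / k) * (0.5 * rt**2 * np.log(r / rc) - (r**2 - rc**2) / 4.0)

r = np.linspace(3e-4, 3e-3, 5)   # from the capillary wall to the cylinder edge
po2 = krogh_po2(r)
print(po2)   # falls monotonically with distance from the capillary wall
```

The radial fall-off this formula produces is the baseline against which the paper's axial-diffusion and permeability effects would be added.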
NASA Technical Reports Server (NTRS)
Radin, Shula; Ducheyne, P.; Ayyaswamy, P. S.
2003-01-01
Biomimetically modified bioactive materials with bone-like surface properties are attractive candidates for use as microcarriers for 3-D bone-like tissue engineering under simulated microgravity conditions of NASA-designed rotating wall vessel (RWV) bioreactors. The simulated microgravity environment is attainable under suitable parametric conditions of the RWV bioreactors. Ca-P containing bioactive glass (BG), whose stimulatory effect on bone cell function had been previously demonstrated, was used in the present study. BG surface modification via reactions in solution, the resulting formation of bone-like minerals at the surface, and the adsorption of serum proteins are critical for obtaining the stimulatory effect. In this paper, we report on the major effects of the simulated microgravity conditions of the RWV on the BG surface reactions and protein adsorption in physiological solutions. Control tests at normal gravity were conducted under static and dynamic conditions. The study revealed that simulated microgravity remarkably enhanced the reactions involved in the BG surface modification, including BG dissolution, formation of bone-like minerals at the surface, and adsorption of serum proteins. Simultaneously, numerical models were developed to simulate the mass transport of chemical species to and from the BG surface under normal gravity and simulated microgravity conditions. The numerical results showed excellent agreement with the experimental data at both testing conditions.
Verma, Surendra P; Díaz-González, Lorena; Rosales-Rivera, Mauricio; Quiroz-Ruiz, Alfredo
2014-01-01
Using highly precise and accurate Monte Carlo simulations of 20,000,000 replications and 102 independent simulation experiments with extremely low simulation errors and total uncertainties, we evaluated the performance of four single outlier discordancy tests (Grubbs test N2, Dixon test N8, skewness test N14, and kurtosis test N15) for normal samples of sizes 5 to 20. Statistical contaminations of a single observation resulting from parameters called δ from ±0.1 up to ±20 for modeling the slippage of central tendency or ε from ±1.1 up to ±200 for slippage of dispersion, as well as no contamination (δ = 0 and ε = ±1), were simulated. Because of the use of precise and accurate random and normally distributed simulated data, very large replications, and a large number of independent experiments, this paper presents a novel approach for precise and accurate estimations of power functions of four popular discordancy tests and, therefore, should not be considered as a simple simulation exercise unrelated to probability and statistics. From both criteria of the Power of Test proposed by Hayes and Kinsella and the Test Performance Criterion of Barnett and Lewis, Dixon test N8 performs less well than the other three tests. The overall performance of these four tests could be summarized as N2≅N15 > N14 > N8.
Rose, William J.; Robertson, Dale M.; Mergener, Elizabeth A.
2004-01-01
Simulations using water-quality models within the Wisconsin Lake Model Suite (WiLMS) indicated Pike Lake's response to 13 different phosphorus-loading scenarios. These scenarios included a base 'normal' year (2000) for which lake water quality and loading were known, six different percentage increases or decreases in phosphorus loading from controllable sources, and six different loading scenarios corresponding to specific management actions. Model simulations indicated that a 50-percent reduction in controllable loading sources would be needed to achieve a mesotrophic classification with respect to phosphorus, chlorophyll a, and Secchi depth (an index of water clarity). Model simulations also indicated that short-circuiting of phosphorus from the inlet to the outlet was the main reason the water quality of the lake is good relative to the amount of loading from the Rubicon River and that changes in the percentage of inlet-to-outlet short-circuiting have a significant influence on the water quality of the lake.
Multivariate stochastic simulation with subjective multivariate normal distributions
P. J. Ince; J. Buongiorno
1991-01-01
In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
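A standard way to realize such correlated draws (a generic sketch, not the report's specific algorithm) is to factor the assessed covariance matrix with a Cholesky decomposition and transform independent standard-normal deviates. The means, standard deviations, and correlation below are hypothetical:

```python
import numpy as np

# Hypothetical subjective assessment of two correlated quantities
mean = np.array([10.0, 5.0])
sd = np.array([2.0, 1.0])
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])
cov = corr * np.outer(sd, sd)

rng = np.random.default_rng(42)
L = np.linalg.cholesky(cov)              # cov = L @ L.T
z = rng.standard_normal((100_000, 2))    # independent N(0, 1) deviates
samples = mean + z @ L.T                 # correlated multivariate normal draws

sample_corr = np.corrcoef(samples.T)[0, 1]
```

The sample correlation recovers the assessed value, whereas treating the variables as independent would force it toward zero.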
The impact on midlevel vision of statistically optimal divisive normalization in V1
Coen-Cagli, Ruben; Schwartz, Odelia
2013-01-01
The first two areas of the primate visual cortex (V1, V2) provide a paradigmatic example of hierarchical computation in the brain. However, neither the functional properties of V2 nor the interactions between the two areas are well understood. One key aspect is that the statistics of the inputs received by V2 depend on the nonlinear response properties of V1. Here, we focused on divisive normalization, a canonical nonlinear computation that is observed in many neural areas and modalities. We simulated V1 responses with (and without) different forms of surround normalization derived from statistical models of natural scenes, including canonical normalization and a statistically optimal extension that accounted for image nonhomogeneities. The statistics of the V1 population responses differed markedly across models. We then addressed how V2 receptive fields pool the responses of V1 model units with different tuning. We assumed this is achieved by learning without supervision a linear representation that removes correlations, which could be accomplished with principal component analysis. This approach revealed V2-like feature selectivity when we used the optimal normalization and, to a lesser extent, the canonical one but not in the absence of both. We compared the resulting two-stage models on two perceptual tasks; while models encompassing V1 surround normalization performed better at object recognition, only statistically optimal normalization provided systematic advantages in a task more closely matched to midlevel vision, namely figure/ground judgment. Our results suggest that experiments probing midlevel areas might benefit from using stimuli designed to engage the computations that characterize V1 optimality. PMID:23857950
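A minimal sketch of the canonical divisive-normalization computation referenced above (not the statistically optimal, scene-adaptive variant studied in the paper, and with a made-up semisaturation constant):

```python
import numpy as np

def divisive_normalization(drive, sigma=0.1):
    # Each unit's squared (rectified) drive is divided by the mean squared
    # drive of the normalization pool plus a semisaturation constant sigma;
    # here the pool is simply the whole population.
    drive = np.maximum(np.asarray(drive, dtype=float), 0.0)
    pool = np.sum(drive ** 2) / drive.size
    return drive ** 2 / (sigma ** 2 + pool)

responses = divisive_normalization([1.0, 0.5, 0.1])
```

Because every unit is divided by the same pool term, relative responses are preserved while the population response saturates as overall drive grows, which is the contrast-gain-control behavior the canonical model is meant to capture.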
NASA Astrophysics Data System (ADS)
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin in southeastern Brazil, was selected for the study. We used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classic model [Normal likelihood - r ≈ N (0, σ²)]; and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach was adequate for the proposed objectives and reinforced the importance of assessing the uncertainties associated with hydrological modeling.
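The classic residual model (i) corresponds to an independent-Gaussian likelihood. The sketch below pairs that likelihood with a bare-bones random-walk Metropolis sampler on a toy one-parameter model; DREAM, the multi-chain adaptive scheme actually used in the paper, is considerably more sophisticated, and the forcing series and "hydrologic model" here are invented for illustration.

```python
import numpy as np

def log_likelihood_normal(obs, sim, sigma):
    # Model (i): residuals r = obs - sim assumed i.i.d. N(0, sigma^2)
    r = np.asarray(obs) - np.asarray(sim)
    return (-0.5 * np.sum(r ** 2) / sigma ** 2
            - r.size * np.log(sigma)
            - 0.5 * r.size * np.log(2 * np.pi))

def metropolis(obs, model, theta0, n_iter=5000, step=0.1, sigma=1.0, seed=0):
    # Random-walk Metropolis over a single parameter theta, flat prior
    rng = np.random.default_rng(seed)
    theta = theta0
    ll = log_likelihood_normal(obs, model(theta), sigma)
    chain = []
    for _ in range(n_iter):
        prop = theta + step * rng.standard_normal()
        ll_prop = log_likelihood_normal(obs, model(prop), sigma)
        if np.log(rng.random()) < ll_prop - ll:   # accept/reject
            theta, ll = prop, ll_prop
        chain.append(theta)
    return np.array(chain)

# Toy "model": simulated flow is a linear response to a forcing series
x = np.linspace(0.0, 1.0, 50)
obs = 2.0 * x + 0.1 * np.random.default_rng(1).standard_normal(50)
chain = metropolis(obs, lambda t: t * x, theta0=0.0, sigma=0.1)
```

The spread of the post-burn-in chain is the parameter uncertainty the paper quantifies; checking the residual assumptions, as the authors stress, is what keeps that uncertainty estimate unbiased.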
Nonlinear dynamic mechanism of vocal tremor from voice analysis and model simulations
NASA Astrophysics Data System (ADS)
Zhang, Yu; Jiang, Jack J.
2008-09-01
Nonlinear dynamic analysis and model simulations are used to study the nonlinear dynamic characteristics of vocal folds with vocal tremor, which can typically be characterized by low-frequency modulation and aperiodicity. Tremor voices from patients with disorders such as paresis, Parkinson's disease, hyperfunction, and adductor spasmodic dysphonia show low-dimensional characteristics, differing from random noise. Correlation dimension analysis statistically distinguishes tremor voices from normal voices. Furthermore, a nonlinear tremor model is proposed to study the vibrations of the vocal folds with vocal tremor. Fractal dimensions and positive Lyapunov exponents demonstrate the evidence of chaos in the tremor model, where amplitude and frequency play important roles in governing vocal fold dynamics. Nonlinear dynamic voice analysis and vocal fold modeling may provide a useful set of tools for understanding the dynamic mechanism of vocal tremor in patients with laryngeal diseases.
Computer simulation of the probability that endangered whales will interact with oil spills
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, M.; Jayko, K.; Bowles, A.
1987-03-01
A numerical model system was developed to assess quantitatively the probability that endangered bowhead and gray whales will encounter spilled oil in Alaskan waters. Bowhead and gray whale migration and diving-surfacing models, and an oil-spill trajectory model comprise the system. The migration models were developed from conceptual considerations, then calibrated with and tested against observations. The movement of a whale point is governed by a random walk algorithm which stochastically follows a migratory pathway. The oil-spill model, developed under a series of other contracts, accounts for transport and spreading behavior in open water and in the presence of sea ice. Historical wind records and heavy, normal, or light ice cover data sets are selected at random to provide stochastic oil-spill scenarios for whale-oil interaction simulations.
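The random-walk migration idea can be caricatured as below: each step combines a stochastic displacement with a deterministic pull toward the next waypoint on the migratory pathway. The waypoint coordinates, step sizes, and capture radius are invented for illustration and are not taken from the report.

```python
import numpy as np

def migrate(waypoints, n_steps, step_sd=0.3, pull=0.2, seed=0):
    # Biased random walk: a Gaussian displacement each step plus a fixed-
    # magnitude pull toward the current target waypoint; the target
    # advances once the whale point comes within a capture radius of 1.
    rng = np.random.default_rng(seed)
    pos = np.array(waypoints[0], dtype=float)
    target = 1
    track = [pos.copy()]
    for _ in range(n_steps):
        goal = np.asarray(waypoints[target], dtype=float)
        if np.linalg.norm(goal - pos) < 1.0 and target < len(waypoints) - 1:
            target += 1
            goal = np.asarray(waypoints[target], dtype=float)
        to_goal = goal - pos
        dist = np.linalg.norm(to_goal)
        pos = pos + pull * to_goal / max(dist, 1e-9) \
                  + step_sd * rng.standard_normal(2)
        track.append(pos.copy())
    return np.array(track)

track = migrate([(0.0, 0.0), (6.0, 2.0), (10.0, 0.0)], n_steps=500)
```

Running many such tracks against many stochastic spill trajectories, and counting co-occurrences in space and time, gives the interaction probabilities the model system was built to estimate.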
NASA Astrophysics Data System (ADS)
Wu, Liang-Chun; Li, Chien-Hung; Chan, Pei-Chen; Lin, Ming-Lang
2017-04-01
According to investigations of well-known disastrous earthquakes in recent years, ground deformation induced by faulting is one of the causes of damage to engineered structures, in addition to strong ground motion. Most structures located on a faulting zone are destroyed by fault offset. In the Norcia earthquake in Italy (2016, Mw = 6.2), for example, the highway bridge at Arquata crossing the rupture area of the active normal fault suffered displacement that caused abutment settlement, fractured piers, and other damage. However, the Seismic Design Provisions and Commentary for Highway Bridges in Taiwan state, in the general rule of the first chapter, regarding the design of bridges crossing an active fault: "This specification is not applicable to the design of bridges crossing or near an active fault; such designs require other particular considerations." This indicates that the safety of bridges crossing an active fault depends not only on seismic performance; ground deformation must also be considered. In this research, to understand the failure mechanism and deformation characteristics, we compile cases, at home and abroad, of bridges subjected to faulting. The research proceeds through physical sandbox experiments and numerical simulation with discrete element models (PFC3-D). The normal fault case studied in Taiwan is the Shanchiao Fault. The research thus explores deformation of the overburden soil and the influence of normal faulting on bridge foundations. Once the behavior of the foundations is understood, we separate the bridge superstructures into two types, simple beams and continuous beams, and further investigate the main control variables for bridges subjected to faulting. From the above, we can then give appropriate suggestions on planning considerations and design approaches.
This research presents results from sandbox experiments and 3-D numerical analyses simulating overburden soil and embedded pile foundations subjected to normal faulting. The numerical model is validated by comparison with the sandbox experiments. Since the 3-D numerical analysis corresponds well with the sandbox experiments, the response of pile foundations and the ground deformation induced by normal faulting are discussed. To understand the 3-D behavior of ground deformation and pile foundations, observations such as the triangular shear zone, the width of the primary deformation zone, and the inclination and displacements of the pile foundations are discussed for both experiments and simulations. Furthermore, to assess the safety of bridges crossing a faulting zone, different bridge superstructures, simple beams and continuous beams, are subsequently compared in the simulations.
NASA Astrophysics Data System (ADS)
Máirtín, Éamonn Ó.; Parry, Guillaume; Beltz, Glenn E.; McGarry, J. Patrick
2014-02-01
This paper, the second of two parts, presents three novel finite element case studies to demonstrate the importance of normal-tangential coupling in cohesive zone models (CZMs) for the prediction of mixed-mode interface debonding. Specifically, four new CZMs proposed in Part I of this study are implemented, namely the potential-based MP model and the non-potential-based NP1, NP2 and SMC models. For comparison, simulations are also performed for the well established potential-based Xu-Needleman (XN) model and the non-potential-based model of van den Bosch, Schreurs and Geers (BSG model). Case study 1: Debonding and rebonding of a biological cell from a cyclically deforming silicone substrate is simulated when the mode II work of separation is higher than the mode I work of separation at the cell-substrate interface. An active formulation for the contractility and remodelling of the cell cytoskeleton is implemented. It is demonstrated that when the XN potential function is used at the cell-substrate interface repulsive normal tractions are computed, preventing rebonding of significant regions of the cell to the substrate. In contrast, the proposed MP potential function at the cell-substrate interface results in negligible repulsive normal tractions, allowing for the prediction of experimentally observed patterns of cell cytoskeletal remodelling. Case study 2: Buckling of a coating from the compressive surface of a stent is simulated. It is demonstrated that during expansion of the stent the coating is initially compressed into the stent surface, while simultaneously undergoing tangential (shear) tractions at the coating-stent interface. It is demonstrated that when either the proposed NP1 or NP2 model is implemented at the stent-coating interface mixed-mode over-closure is correctly penalised. Further expansion of the stent results in the prediction of significant buckling of the coating from the stent surface, as observed experimentally. 
In contrast, the BSG model does not correctly penalise mixed-mode over-closure at the stent-coating interface, significantly altering the stress state in the coating and preventing the prediction of buckling. Case study 3: Application of a displacement to the base of a bi-layered composite arch results in a symmetric sinusoidal distribution of normal and tangential traction at the arch interface. The traction defined mode mixity at the interface ranges from pure mode II at the base of the arch to pure mode I at the top of the arch. It is demonstrated that predicted debonding patterns are highly sensitive to normal-tangential coupling terms in a CZM. The NP2, XN, and BSG models exhibit a strong bias towards mode I separation at the top of the arch, while the NP1 model exhibits a bias towards mode II debonding at the base of the arch. Only the SMC model provides mode-independent behaviour in the early stages of debonding. This case study provides a practical example of the importance of the behaviour of CZMs under conditions of traction controlled mode mixity, following from the theoretical analysis presented in Part I of this study.
Simulating the impact of dust cooling on the statistical properties of the intra-cluster medium
NASA Astrophysics Data System (ADS)
Pointecouteau, Etienne; da Silva, Antonio; Catalano, Andrea; Montier, Ludovic; Lanoux, Joseph; Roncarelli, Mauro; Giard, Martin
2009-08-01
From the first stages of star and galaxy formation, non-gravitational processes such as ram pressure stripping, SNe, galactic winds, AGNs, galaxy-galaxy mergers, etc., lead to the enrichment of the IGM in stars, metals, and dust via the ejection of galactic material into the IGM. We now know that these processes shape, side by side with gravitation, the formation and evolution of structures. We present here hydrodynamic simulations of structure formation implementing the effect of cooling by dust on large-scale structure formation. We focus on the scale of galaxy clusters and study their statistical properties. We present our results on the TX-M and LX-M scaling relations, which exhibit changes in both slope and normalization when cooling by dust is added to the standard radiative cooling model. For example, the normalization of the TX-M relation changes only by a maximum of 2% at M = 10^14 M⊙, whereas the normalization of LX-TX changes by as much as 10% at TX = 1 keV for models that include dust cooling. Our study shows that dust is an additional non-gravitational process that contributes to shaping the thermodynamical state of the hot ICM gas.
Fault stability under conditions of variable normal stress
Dieterich, J.H.; Linker, M.F.
1992-01-01
The stability of fault slip under conditions of varying normal stress is modelled as a spring and slider system with rate- and state-dependent friction. Coupling of normal stress to shear stress is achieved by inclining the spring at an angle, α, to the sliding surface. Linear analysis yields two conditions for unstable slip. The first, of a type previously identified for constant normal stress systems, results in instability if stiffness is below a critical value. Critical stiffness depends on normal stress, constitutive parameters, characteristic sliding distance and the spring angle. Instability of the first type is possible only for velocity-weakening friction. The second condition yields instability if the spring angle α < −cot⁻¹(μss), where μss is the steady-state sliding friction coefficient. The second condition can arise under conditions of velocity strengthening or weakening. Stability fields for finite perturbations are investigated by numerical simulation. -Authors
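The two instability conditions lend themselves to a direct numerical check. The sketch below uses the standard constant-normal-stress form of the critical stiffness, k_c = (b − a)σ/D_c, together with the geometric condition on the spring angle; the variable-normal-stress analysis modifies k_c with a spring-angle dependence not modeled here, and all parameter values are arbitrary illustrations.

```python
import math

def critical_stiffness(sigma_n, a, b, d_c):
    # Condition 1 (constant normal stress form): slip can become unstable
    # when spring stiffness k falls below k_c = (b - a) * sigma_n / d_c,
    # which requires velocity weakening (b > a); otherwise k_c <= 0 and
    # no positive stiffness satisfies the condition.
    return (b - a) * sigma_n / d_c

def geometrically_unstable(alpha_deg, mu_ss):
    # Condition 2: instability when the spring inclination alpha is less
    # than -arccot(mu_ss), regardless of velocity weakening/strengthening.
    return math.radians(alpha_deg) < -math.atan2(1.0, mu_ss)

k_c = critical_stiffness(sigma_n=1.0e8, a=0.010, b=0.015, d_c=1.0e-5)
```

For μss = 0.6 the geometric threshold sits near −59°, so a spring inclined at −70° is unstable under condition 2 even for velocity-strengthening friction, while one at −30° is not.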
Generalized Pseudo-Reaction Zone Model for Non-Ideal Explosives
NASA Astrophysics Data System (ADS)
Wescott, B. L.
2007-12-01
The pseudo-reaction zone model was proposed to improve engineering scale simulations with high explosives that have a slow reaction component. In this work an extension of the pseudo-reaction zone model is developed for non-ideal explosives that propagate well below the steady-planar Chapman-Jouguet velocity. A programmed burn method utilizing Detonation Shock Dynamics (DSD) and a detonation velocity dependent pseudo-reaction rate has been developed for non-ideal explosives and applied to the explosive mixture of ammonium nitrate and fuel oil (ANFO). The pseudo-reaction rate is calibrated to the experimentally obtained normal detonation velocity—shock curvature relation. Cylinder test simulations predict the proper expansion to within 1% even though significant reaction occurs as the cylinder expands.
Numerical Simulation of Abandoned Gob Methane Drainage through Surface Vertical Wells
Hu, Guozhong
2015-01-01
The influence of the ventilation system on an abandoned gob weakens over time, so the gas seepage characteristics in an abandoned gob differ significantly from those in a normal mining gob. To address this, this study physically simulated the movement of the overlying rock strata. A spatial distribution function for gob permeability was derived. A numerical model using FLUENT for abandoned-gob methane drainage through surface wells was established, and the derived spatial distribution function for gob permeability was imported into the numerical model. The control range of surface wells, flow patterns, and distribution rules for static pressure in the abandoned gob under different well locations were determined using the calculated results from the numerical model. PMID:25955438
Liu, Y; Allen, R
2002-09-01
The study aimed to model the cerebrovascular system, using a linear ARX model based on data simulated by a comprehensive physiological model, and to assess the range of applicability of linear parametric models. Arterial blood pressure (ABP) and middle cerebral arterial blood flow velocity (MCAV) were measured from 11 subjects non-invasively, following step changes in ABP, using the thigh cuff technique. By optimising parameters associated with autoregulation, using a non-linear optimisation technique, the physiological model showed a good performance (r=0.83+/-0.14) in fitting MCAV. An additional five sets of measured ABP of length 236+/-154 s were acquired from a subject at rest. These were normalised and rescaled to coefficients of variation (CV=SD/mean) of 2% and 10% for model comparisons. Randomly generated Gaussian noise with standard deviation (SD) from 1% to 5% was added to both ABP and physiologically simulated MCAV (SMCAV), with 'normal' and 'impaired' cerebral autoregulation, to simulate the real measurement conditions. ABP and SMCAV were fitted by ARX modelling, and cerebral autoregulation was quantified by a 5 s recovery percentage R5% of the step responses of the ARX models. The study suggests that cerebral autoregulation can be assessed by computing the R5% of the step response of an ARX model of appropriate order, even when measurement noise is considerable.
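A least-squares ARX fit and its step response, the core operations behind the recovery index described above, can be sketched as follows. The data are a toy first-order system, not physiological pressure/velocity records, and the 5-s recovery percentage computation itself (which depends on sampling rate and normalization) is beyond this sketch.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    # Least-squares ARX(na, nb) fit with the sign convention
    # y[t] = sum_i a_i*y[t-i] + sum_j b_j*u[t-j] + e[t]
    n = max(na, nb)
    rows = []
    for t in range(n, len(y)):
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
    phi = np.array(rows)
    theta, *_ = np.linalg.lstsq(phi, y[n:], rcond=None)
    return theta[:na], theta[na:]

def step_response(a, b, n_steps=50):
    # Simulate the fitted model's response to a unit step in u
    na, nb = len(a), len(b)
    y = np.zeros(n_steps)
    u = np.ones(n_steps)
    for t in range(n_steps):
        acc = 0.0
        for i in range(1, na + 1):
            if t - i >= 0:
                acc += a[i - 1] * y[t - i]
        for j in range(1, nb + 1):
            if t - j >= 0:
                acc += b[j - 1] * u[t - j]
        y[t] = acc
    return y

# Toy data from a known first-order system with measurement noise
rng = np.random.default_rng(3)
u = rng.standard_normal(500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.5 * y[t - 1] + 1.0 * u[t - 1] + 0.05 * rng.standard_normal()
a_hat, b_hat = fit_arx(u, y, na=1, nb=1)
```

A recovery index like R5% is then read off the normalized step response: a fast return toward baseline indicates intact autoregulation, a sluggish one indicates impairment.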
Large-amplitude nonlinear normal modes of the discrete sine lattices.
Smirnov, Valeri V; Manevitch, Leonid I
2017-02-01
We present an analytical description of the large-amplitude stationary oscillations of a finite discrete system of harmonically coupled pendulums without any restrictions on their amplitudes (excluding a vicinity of π). Although this model has numerous applications in different fields of physics, it was studied earlier only in the infinite limit. The discrete chain of finite length can be considered a good analytical analogue of the coarse-grained models of flexible polymers used in molecular dynamics simulations. The developed approach allows finding the dispersion relations for arbitrary amplitudes of the nonlinear normal modes. We emphasize that the long-wavelength approximation, which is described by the well-known sine-Gordon equation, leads to an inadequate zone structure for amplitudes of about π/2 even if the chain is long enough. An extremely complex zone structure at large amplitudes corresponds to multiple resonances between nonlinear normal modes, even those with strongly different wave numbers. Due to the complexity of the dispersion relations, modes with shorter wavelengths may have smaller frequencies. The stability of the nonlinear normal modes under conditions of resonant interaction is discussed. It is shown that this interaction of the modes in the vicinity of the long-wavelength edge of the spectrum leads to localization of the oscillations. The thresholds of instability and localization are determined explicitly. Numerical simulation of the dynamics of a finite-length chain is in good agreement with the obtained analytical predictions.
Thiros, Susan A.
2006-01-01
This report evaluates the performance of a numerical model of the ground-water system in northern Utah Valley, Utah, that originally simulated ground-water conditions during 1947-1980 and was updated to include conditions estimated for 1981-2002. Estimates of annual recharge to the ground-water system and discharge from wells in the area were added to the original ground-water flow model of the area. The files used in the original transient-state model of the ground-water flow system in northern Utah Valley were imported into MODFLOW-96, an updated version of MODFLOW. The main model input files modified as part of this effort were the well and recharge files. Discharge from pumping wells in northern Utah Valley was estimated on an annual basis for 1981-2002. Although the amount of average annual withdrawals from wells has not changed much since the previous study, there have been changes in the distribution of well discharge in the area. Discharge estimates for flowing wells during 1981-2002 were assumed to be the same as those used in the last stress period of the original model because of a lack of new data. Variations in annual recharge were assumed to be proportional to changes in total surface-water inflow to northern Utah Valley. Recharge specified in the model during the additional stress periods varied from 255,000 acre-feet in 1986 to 137,000 acre-feet in 1992. The ability of the updated transient-state model to match hydrologic conditions determined for 1981-2002 was evaluated by comparing water-level changes measured in wells to those computed by the model. Water-level measurements made in February, March, or April were available for 39 wells in the modeled area during all or part of 1981-2003. In most cases, the magnitude and direction of annual water-level change from 1981 to 2002 simulated by the updated model reasonably matched the measured change.
The greater-than-normal precipitation that occurred during 1982-84 resulted in period-of-record high water levels measured in many of the observation wells in March 1984. The model-computed water levels at the end of 1982-84 also are among the highest for the period. Both measured and computed water levels decreased during the period representing ground-water conditions from 1999 to 2002. Precipitation was less than normal during 1999-2002. The ability of the model to adequately simulate climatic extremes such as the wetter-than-normal conditions of 1982-84 and the drier-than-normal conditions of 1999-2002 indicates that the annual variation of recharge to the ground-water system based on streamflow entering the valley, which in turn is primarily dependent upon precipitation, is appropriate but can be improved. The updated transient-state model of the ground-water system in northern Utah Valley can be improved by making revisions on the basis of currently available data and information.
Simulation, guidance and navigation of the B-737 for rollout and turnoff using MLS measurements
NASA Technical Reports Server (NTRS)
Pines, S.; Schmidt, S. F.; Mann, F.
1975-01-01
A simulation program is described for the B-737 aircraft in landing approach, touchdown, rollout, and turnoff under normal and CAT III weather conditions. Preliminary results indicate that microwave landing systems can be used in place of instrument-landing-system landing aids and that a single magnetic cable can be used for automated rollout and turnoff. Recommendations are made for further refinement of the model and additional testing to finalize a set of guidance laws for rollout and turnoff.
Determining prescription durations based on the parametric waiting time distribution.
Støvring, Henrik; Pottegård, Anton; Hallas, Jesper
2016-12-01
The purpose of the study is to develop a method to estimate the duration of single prescriptions in pharmacoepidemiological studies when the single prescription duration is not available. We developed an estimation algorithm based on maximum likelihood estimation of a parametric two-component mixture model for the waiting time distribution (WTD). The distribution component for prevalent users estimates the forward recurrence density (FRD), which is related to the distribution of time between subsequent prescription redemptions, the inter-arrival density (IAD), for users in continued treatment. We exploited this to estimate percentiles of the IAD by inversion of the estimated FRD and defined the duration of a prescription as the time within which 80% of current users will have presented themselves again. Statistical properties were examined in simulation studies, and the method was applied to empirical data for four model drugs: non-steroidal anti-inflammatory drugs (NSAIDs), warfarin, bendroflumethiazide, and levothyroxine. Simulation studies found negligible bias when the data-generating model for the IAD coincided with the FRD used in the WTD estimation (Log-Normal). When the IAD consisted of a mixture of two Log-Normal distributions, but was analyzed with a single Log-Normal distribution, relative bias did not exceed 9%. Using a Log-Normal FRD, we estimated prescription durations of 117, 91, 137, and 118 days for NSAIDs, warfarin, bendroflumethiazide, and levothyroxine, respectively. Similar results were found with a Weibull FRD. The algorithm allows valid estimation of single prescription durations, especially when the WTD reliably separates current users from incident users, and may replace ad-hoc decision rules in automated implementations. Copyright © 2016 John Wiley & Sons, Ltd.
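Given a fitted Log-Normal inter-arrival density, the 80%-rule duration is simply its 80th percentile. A stdlib-only sketch with hypothetical parameters (a median redemption gap of 90 days and σ = 0.4, not the paper's estimates, which come from inverting the estimated FRD):

```python
import math
from statistics import NormalDist

# Hypothetical Log-Normal IAD parameters (NOT estimated from the paper's data)
mu, sigma = math.log(90), 0.4        # median redemption gap of 90 days

# Duration = time within which 80% of current users redeem again,
# i.e. the 80th percentile of the inter-arrival density
z80 = NormalDist().inv_cdf(0.80)
duration = math.exp(mu + sigma * z80)   # about 126 days
```

The percentile necessarily exceeds the median gap, which is why durations defined this way are more forgiving than naive "days between refills" rules.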
Radiation Modeling in Shock-Tubes and Entry Flows
2009-09-01
Siebert, Tobias; Leichsenring, Kay; Rode, Christian; Wick, Carolin; Stutzig, Norman; Schubert, Harald; Blickhan, Reinhard; Böl, Markus
2015-01-01
The vastly increasing number of neuro-muscular simulation studies (with increasing numbers of muscles used per simulation) is in sharp contrast to a narrow database of necessary muscle parameters. Simulation results depend heavily on rough parameter estimates often obtained by scaling of one muscle parameter set. However, in vivo muscles differ in their individual properties and architecture. Here we provide a comprehensive dataset of dynamic (n = 6 per muscle) and geometric (three-dimensional architecture, n = 3 per muscle) muscle properties of the rabbit calf muscles gastrocnemius, plantaris, and soleus. For completeness we provide the dynamic muscle properties for further important shank muscles (flexor digitorum longus, extensor digitorum longus, and tibialis anterior; n = 1 per muscle). Maximum shortening velocity (normalized to optimal fiber length) of the gastrocnemius is about twice that of soleus, while plantaris showed an intermediate value. The force-velocity relation is similar for gastrocnemius and plantaris but is much more bent for the soleus. Although the muscles vary greatly in their three-dimensional architecture their mean pennation angle and normalized force-length relationships are almost similar. Forces of the muscles were enhanced in the isometric phase following stretching and were depressed following shortening compared to the corresponding isometric forces. While the enhancement was independent of the ramp velocity, the depression was inversely related to the ramp velocity. The lowest effect strength for soleus supports the idea that these effects adapt to muscle function. The careful acquisition of typical dynamical parameters (e.g. force-length and force-velocity relations, force elongation relations of passive components), enhancement and depression effects, and 3D muscle architecture of calf muscles provides valuable comprehensive datasets for e.g. 
simulations with neuro-muscular models, development of more realistic muscle models, or simulation of muscle packages. PMID:26114955
NASA Astrophysics Data System (ADS)
Alimi, Isiaka; Shahpari, Ali; Ribeiro, Vítor; Sousa, Artur; Monteiro, Paulo; Teixeira, António
2017-05-01
In this paper, we present experimental results on channel characterization of a single-input single-output (SISO) free-space optical (FSO) communication link, based on channel measurements. The histograms of the FSO channel samples and the log-normal distribution fittings are presented along with the measured scintillation index. Furthermore, we extend our studies to diversity schemes and propose a closed-form expression for determining the ergodic channel capacity of multiple-input multiple-output (MIMO) FSO communication systems over atmospheric turbulence fading channels. The proposed empirical model is based on the SISO FSO channel characterization. The scintillation effects on system performance are also analyzed, and results for different turbulence conditions are presented. Moreover, we observed that the histograms of the FSO channel samples collected from a 1548.51 nm link fit log-normal distributions well, and the proposed model for MIMO FSO channel capacity is in conformity with the simulation results in terms of normalized mean-square error (NMSE).
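The log-normal fit and NMSE comparison can be sketched on synthetic data. The samples below are generated, not measured; the log-amplitude spread σ = 0.25 is an arbitrary weak-turbulence illustration, and the moment-based fit stands in for whatever fitting procedure the authors used.

```python
import numpy as np

def nmse(measured, model):
    # Normalized mean-square error between two sample vectors
    measured, model = np.asarray(measured), np.asarray(model)
    return np.sum((measured - model) ** 2) / np.sum(measured ** 2)

rng = np.random.default_rng(7)
# Synthetic irradiance samples with a log-normal weak-turbulence fade
samples = rng.lognormal(mean=0.0, sigma=0.25, size=200_000)
scint_index = samples.var() / samples.mean() ** 2   # scintillation index

# Fit log-normal parameters from log-domain sample moments
mu_hat = np.log(samples).mean()
sigma_hat = np.log(samples).std()

# Compare the sample histogram against the fitted log-normal pdf
edges = np.linspace(0.2, 3.0, 60)
centers = 0.5 * (edges[1:] + edges[:-1])
hist, _ = np.histogram(samples, bins=edges, density=True)
pdf = (np.exp(-(np.log(centers) - mu_hat) ** 2 / (2 * sigma_hat ** 2))
       / (centers * sigma_hat * np.sqrt(2 * np.pi)))
fit_error = nmse(hist, pdf)
```

For a log-normal fade the scintillation index equals exp(σ²) − 1, which the sample estimate recovers; the small NMSE between histogram and fitted pdf is the kind of goodness-of-fit the paper reports.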
NASA Astrophysics Data System (ADS)
Karagiannis, Georgios T.; Grivas, Ioannis; Tsingotjidou, Anastasia; Apostolidis, Georgios K.; Grigoriadou, Ifigeneia; Dori, I.; Poulatsidou, Kyriaki-Nefeli; Doumas, Argyrios; Wesarg, Stefan; Georgoulias, Panagiotis
2015-03-01
Malignant melanoma is a form of skin cancer with increasing incidence worldwide. Early diagnosis is crucial for the prognosis and treatment of the disease. The objective of this study is to develop a novel animal model of melanoma and apply a combination of the non-invasive imaging techniques acoustic microscopy, infrared (IR) spectroscopy, and Raman spectroscopy for the detection of developing tumors. Acoustic microscopy provides information about the 3D structure of the tumor, whereas both spectroscopic modalities give qualitative insight into biochemical changes during melanoma development. In order to efficiently set up the final devices, the propagation of ultrasonic and electromagnetic waves in simulated normal-skin and melanoma structures was studied. Synthetic and grape-extracted melanin (simulating tumors), injected into the dermis, was scanned and compared to normal skin. In both cases acoustic microscopy with central operating frequencies of 110 MHz and 175 MHz was used, resulting in tomographic imaging of the simulated tumor, while with the spectroscopic modalities IR and Raman, differences among spectra of normal and melanin-injected sites were identified through the skin depth. Subsequently, growth of actual tumors in an animal melanoma model was achieved with the use of human malignant melanoma cells. Acoustic microscopy and IR and Raman spectroscopies were also applied. The development of tumors at different time points was displayed using acoustic microscopy. Moreover, changes in the IR and Raman spectra were studied between the melanoma tumors and adjacent healthy skin. The most significant changes between healthy skin and the melanoma area were observed in the ranges of 900-1800 cm-1 and 350-2000 cm-1, respectively.
Ghasemizadeh, Reza; Yu, Xue; Butscher, Christoph; Hellweger, Ferdi; Padilla, Ingrid; Alshawabkeh, Akram
2015-01-01
Karst aquifers have a high degree of heterogeneity and anisotropy in their geologic and hydrogeologic properties, which makes predicting their behavior difficult. This paper evaluates the application of the Equivalent Porous Media (EPM) approach to simulate groundwater hydraulics and contaminant transport in karst aquifers using an example from the North Coast limestone aquifer system in Puerto Rico. The goal is to evaluate if the EPM approach, which approximates the karst features with a conceptualized, equivalent continuous medium, is feasible for an actual project, based on available data and the study scale and purpose. Existing National Oceanic and Atmospheric Administration (NOAA) data and previous hydrogeological U.S. Geological Survey (USGS) studies were used to define the model input parameters. Hydraulic conductivity and specific yield were estimated using measured groundwater heads over the study area and further calibrated against continuous water level data of three USGS observation wells. The water-table fluctuation results indicate that the model can practically reflect the steady-state groundwater hydraulics (normalized RMSE of 12.4%) and long-term variability (normalized RMSE of 3.0%) at regional and intermediate scales and can be applied to predict future water table behavior under different hydrogeological conditions. The application of the EPM approach to simulate transport is limited because it does not directly consider possible irregular conduit flow pathways. However, the results from the present study suggest that the EPM approach is capable of reproducing the spreading of a TCE plume at intermediate scales with sufficient accuracy (normalized RMSE of 8.45%) for groundwater resources management and the planning of contamination mitigation strategies. PMID:26422202
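The normalized RMSE figures quoted above can be reproduced with a small helper; a minimal sketch, assuming the common convention of normalizing the RMSE by the range of the observed values (the paper does not state which normalization it uses, and the head data below are made up for illustration):

```python
import numpy as np

def normalized_rmse(observed, simulated):
    """Root-mean-square error normalized by the observed range,
    expressed as a percentage."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((observed - simulated) ** 2))
    return 100.0 * rmse / (observed.max() - observed.min())

# Example: a small synthetic groundwater-head record (meters)
heads_obs = np.array([10.0, 12.0, 11.5, 13.0, 12.5])
heads_sim = np.array([10.2, 11.8, 11.9, 12.7, 12.4])
print(round(normalized_rmse(heads_obs, heads_sim), 2))
```

Other normalizations (by the mean or the standard deviation of the observations) are also in use, so the choice matters when comparing figures across studies.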
Flores-Alsina, Xavier; Comas, Joaquim; Rodriguez-Roda, Ignasi; Gernaey, Krist V; Rosen, Christian
2009-10-01
The main objective of this paper is to demonstrate how including the occurrence of filamentous bulking sludge in a secondary clarifier model will affect the predicted process performance during the simulation of WWTPs. The IWA Benchmark Simulation Model No. 2 (BSM2) is hereby used as a simulation case study. Practically, the proposed approach includes a risk assessment model based on a knowledge-based decision tree to detect favourable conditions for the development of filamentous bulking sludge. Once such conditions are detected, the settling characteristics of the secondary clarifier model are automatically changed during the simulation by modifying the settling model parameters to mimic the effect of growth of filamentous bacteria. The simulation results demonstrate that including effects of filamentous bulking in the secondary clarifier model results in a more realistic plant performance. Particularly, during the periods when the conditions for the development of filamentous bulking sludge are favourable--leading to poor activated sludge compaction, low return and waste TSS concentrations and difficulties in maintaining the biomass in the aeration basins--a subsequent reduction in overall pollution removal efficiency is observed. Also, a scenario analysis is conducted to examine i) the influence of sludge retention time (SRT), the external recirculation flow rate (Q(r)) and the air flow rate in the bioreactor (modelled as k(L)a) as factors promoting bulking sludge, and ii) the effect on the model predictions when the settling properties are changed due to a possible proliferation of filamentous microorganisms. Finally, the potentially adverse effects of certain operational procedures are highlighted, since such effects are normally not considered by state-of-the-art models that do not include microbiology-related solids separation problems.
Wu, Sheng-Nan
2004-03-31
The purpose of this study was to develop a method to simulate the cardiac action potential using a Microsoft Excel spreadsheet. The mathematical model contained voltage-gated ionic currents that were modeled using either Beeler-Reuter (B-R) or Luo-Rudy (L-R) phase 1 kinetics. The simulation protocol involves the use of in-cell formulas directly typed into a spreadsheet. The capability of spreadsheet iteration was used in these simulations. It does not require any prior knowledge of computer programming, although the use of the macro language can speed up the calculation. The normal configuration of the cardiac ventricular action potential can be well simulated in the B-R model that is defined by four individual ionic currents, each representing the diffusion of ions through channels in the membrane. The contribution of Na+ inward current to the rate of depolarization is reproduced in this model. After removal of Na+ current from the model, a constant current stimulus elicits an oscillatory change in membrane potential. In the L-R phase 1 model where six types of ionic currents were defined, the effect of extracellular K+ concentration on changes both in the time course of repolarization and in the time-independent K+ current can be demonstrated, when the solutions are implemented in Excel. Using the simulation protocols described here, the users can readily study and graphically display the underlying properties of ionic currents to see how changes in these properties determine the behavior of the heart cell. The method employed in these simulation protocols may also be extended or modified to other biological simulation programs.
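The in-cell iteration the paper describes is simply an explicit Euler update repeated row by row; a minimal sketch in Python, with a single illustrative leak current standing in for the four B-R ionic currents (all parameter values here are placeholders, not the published B-R constants):

```python
# Spreadsheet-style explicit iteration: each "row" updates V by
# dV/dt = (I_stim - I_ion)/Cm. A single leak current stands in for
# the four Beeler-Reuter currents; values are illustrative only.
Cm = 1.0          # membrane capacitance, uF/cm^2
g_leak = 0.3      # leak conductance, mS/cm^2 (illustrative)
E_leak = -60.0    # leak reversal potential, mV (illustrative)
dt = 0.01         # time step, ms

V = -84.0         # resting potential, mV
trace = []
for step in range(int(50 / dt)):          # 50 ms of simulated time
    t = step * dt
    I_stim = 30.0 if t < 1.0 else 0.0     # brief current pulse, uA/cm^2
    I_ion = g_leak * (V - E_leak)         # total ionic current
    V += dt * (I_stim - I_ion) / Cm       # explicit Euler, like an in-cell formula
    trace.append(V)

print(round(max(trace), 1), round(trace[-1], 1))
```

In a spreadsheet, the same update is one row of in-cell formulas referencing the row above, with iteration doing the time stepping; the macro language mentioned in the abstract only accelerates the same calculation.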
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Aleksandr I.; Lazarev, Alexander A.; Magas, Taras E.
2010-04-01
This paper presents the advantages of equivalence models (EMs) for neural networks (NNs). EMs are based on vector-matrix procedures with the basic operations of continuous neurologic: the normalized vector operations "equivalence", "nonequivalence", "autoequivalence" and "autononequivalence". The storage capacity of NNs based on EMs and their modifications, including auto- and heteroassociative memories for 2D images, exceeds the number of neurons several times over. Such neuroparadigms are very promising for processing, recognizing and storing large, strongly correlated images. A family of "normalized equivalence-nonequivalence" neuro-fuzzy logic operations is elaborated on the basis of the generalized operations fuzzy negation, t-norm and s-norm. A biologically motivated concept and time-pulse encoding principles of continuous-logic photocurrent reflections and sample-storage devices with pulse-width photoconverters have allowed us to design generalized structures for realizing the family of normalized linear vector "equivalence"-"nonequivalence" operations. Simulation results show that the processing time in such circuits does not exceed a few microseconds. The circuits are simple, have a low supply voltage (1-3 V), low power consumption (milliwatts) and low input signal levels (microwatts), permit integrated construction, and satisfy the requirements of interconnection and cascading.
NASA Astrophysics Data System (ADS)
Srivastava, Priyesh; Sarkar, Kausik
2012-11-01
The shear rheology of moderately concentrated emulsions (5-27% volume fraction) in the presence of inertia is numerically investigated. Typically, an emulsion of viscous drops experiences a positive first normal stress difference (N1) and a negative second normal stress difference (N2), as has also been predicted by perturbative analysis (Choi-Schowalter model) and numerical simulation. However, recently, using single-drop results, we have shown [Li and Sarkar, 2005, J. Rheol., 49, 1377] that the introduction of inertia reverses the signs of the normal stress differences in the dilute limit. Here, we numerically investigate the effects of interactions between drops in a concentrated system. The simulation is validated against the dilute results as well as analytical relations. It also shows the reversal of signs for N1 and N2 for small Capillary numbers above a critical Reynolds number. The physics is explained by the inertia-induced orientation of the individual drops in shear. Increasing the volume fraction increases the critical Reynolds number at which N1 and N2 change sign. The breakdown of the linear dependence on volume fraction with increasing concentration is also analyzed. Partially supported by NSF.
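The sign conventions referred to above follow directly from the stress tensor; a minimal sketch with made-up numbers (the tensor below is illustrative, not taken from the simulations):

```python
import numpy as np

def normal_stress_differences(sigma):
    """First and second normal stress differences from a 3x3 stress
    tensor, with x the flow direction, y the gradient direction and
    z the vorticity direction (the usual shear-flow convention)."""
    N1 = sigma[0, 0] - sigma[1, 1]
    N2 = sigma[1, 1] - sigma[2, 2]
    return N1, N2

# Illustrative stress tensor (Pa) for a viscous emulsion in shear
sigma = np.array([[120.0, 40.0, 0.0],
                  [40.0, 100.0, 0.0],
                  [0.0, 0.0, 105.0]])
N1, N2 = normal_stress_differences(sigma)
print(N1, N2)   # N1 > 0, N2 < 0: the usual Stokes-flow signs
```

The inertia-induced reversal discussed in the abstract corresponds to N1 turning negative and N2 positive above the critical Reynolds number.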
Simulation model of stratified thermal energy storage tank using finite difference method
NASA Astrophysics Data System (ADS)
Waluyo, Joko
2016-06-01
A stratified TES tank is normally used in a cogeneration plant. Stratified TES tanks are simple, low cost, and equal or superior in thermal performance. The advantage of a TES tank is that it enables shifting energy usage from off-peak to on-peak demand. To increase energy utilization in a stratified TES tank, a simulation model is required that can precisely simulate the charging phenomenon in the tank. This paper aims to develop a novel model addressing this problem. The model incorporates a chiller into the charging of the stratified TES tank in a closed system. It is a one-dimensional model that accounts for heat transfer and covers the main factors contributing to degradation of the temperature distribution, namely conduction through the tank wall, conduction between cool and warm water, the mixing effect during the initial charging flow, and heat loss to the surroundings. The simulation model is based on the finite difference method using buffer concept theory and is solved explicitly. The model is validated against observed data from an operating stratified TES tank in a cogeneration plant. The simulated temperature distribution reproduces the S-curve pattern as well as the decrease in charging temperature after the tank reaches the full condition. The coefficient of determination between the observed data and the model is higher than 0.88, showing that the model is capable of simulating the charging phenomenon in the stratified TES tank. The model not only generates temperature distributions but can also be extended to represent transient conditions during charging. This model can be used to address the temperature limitation that occurs when charging the stratified TES tank with an absorption chiller. Further, the stratified TES tank can be charged with cooling energy from an absorption chiller that utilizes waste heat from the gas turbine of the cogeneration plant.
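The explicit finite-difference marching such a model uses can be sketched in a few lines; this toy version keeps only axial conduction between the cool and warm water layers and drops the wall-conduction, mixing and heat-loss terms of the full model (all values illustrative):

```python
import numpy as np

# 1-D explicit finite-difference sketch of axial conduction in a
# stratified tank: nodes stacked vertically, warm water above cool water.
n = 50                      # number of vertical nodes
dz = 0.1                    # node spacing, m
alpha = 1.4e-7              # thermal diffusivity of water, m^2/s
dt = 0.25 * dz**2 / alpha   # step respecting the explicit stability limit

T = np.full(n, 12.0)        # cool (charged) water, deg C
T[: n // 2] = 24.0          # warm water on top

for _ in range(200):        # march in time; the thermocline slowly smears out
    T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

# The S-curve: temperatures stay near 24 and 12 at the ends, with a
# smooth transition across the thermocline.
print(round(T[0], 1), round(T[-1], 1))
```

The explicit scheme is only stable for a Fourier number alpha*dt/dz^2 of at most 0.5, which is why the time step is tied to the node spacing above.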
Robson, Stanley G.
1978-01-01
This study investigated the use of a two-dimensional profile-oriented water-quality model for the simulation of head and water-quality changes through the saturated thickness of an aquifer. The profile model is able to simulate confined or unconfined aquifers with nonhomogeneous anisotropic hydraulic conductivity, nonhomogeneous specific storage and porosity, and nonuniform saturated thickness. An aquifer may be simulated under either steady or nonsteady flow conditions provided that the ground-water flow path along which the longitudinal axis of the model is oriented does not move in the aquifer during the simulation time period. The profile model parameters are more difficult to quantify than are the corresponding parameters for an areal-oriented water-quality model. However, the sensitivity of the profile model to the parameters may be such that the normal error of parameter estimation will not preclude obtaining acceptable model results. Although the profile model has the advantage of being able to simulate vertical flow and water-quality changes in a single- or multiple-aquifer system, the types of problems to which it can be applied are limited by the requirements that (1) the ground-water flow path remain oriented along the longitudinal axis of the model and (2) any subsequent hydrologic factors to be evaluated using the model must be located along the land-surface trace of the model. Simulation of hypothetical ground-water management practices indicates that the profile model is applicable to problem-oriented studies and can provide quantitative results applicable to a variety of management practices. In particular, simulations of the movement and dissolved-solids concentration of a zone of degraded ground-water quality near Barstow, Calif., indicate that halting subsurface disposal of treated sewage effluent in conjunction with pumping a line of fully penetrating wells would be an effective means of controlling the movement of degraded ground water.
van Spengen, W Merlijn; Turq, Viviane; Frenken, Joost W M
2010-01-01
We have replaced the periodic Prandtl-Tomlinson model with an atomic-scale friction model with a random roughness term describing the surface roughness of micro-electromechanical systems (MEMS) devices with sliding surfaces. This new model is shown to exhibit the same features as previously reported experimental MEMS friction loop data. The correlation function of the surface roughness is shown to play a critical role in the modelling. It is experimentally obtained by probing the sidewall surfaces of a MEMS device flipped upright in on-chip hinges with an AFM (atomic force microscope). The addition of a modulation term to the model allows us to also simulate the effect of vibration-induced friction reduction (normal-force modulation), as a function of both vibration amplitude and frequency. The results obtained agree very well with measurement data reported previously.
CFD Modeling of Helium Pressurant Effects on Cryogenic Tank Pressure Rise Rates in Normal Gravity
NASA Technical Reports Server (NTRS)
Grayson, Gary; Lopez, Alfredo; Chandler, Frank; Hastings, Leon; Hedayat, Ali; Brethour, James
2007-01-01
A recently developed computational fluid dynamics modeling capability for cryogenic tanks is used to simulate both self-pressurization from external heating and also depressurization from thermodynamic vent operation. Axisymmetric models using a modified version of the commercially available FLOW-3D software are used to simulate actual physical tests. The models assume an incompressible liquid phase with density that is a function of temperature only. A fully compressible formulation is used for the ullage gas mixture that contains both condensable vapor and a noncondensable gas component. The tests, conducted at the NASA Marshall Space Flight Center, include both liquid hydrogen and nitrogen in tanks with ullage gas mixtures of each liquid's vapor and helium. Pressure and temperature predictions from the model are compared to sensor measurements from the tests and a good agreement is achieved. This further establishes the accuracy of the developed FLOW-3D based modeling approach for cryogenic systems.
Modeling Simple Driving Tasks with a One-Boundary Diffusion Model
Ratcliff, Roger; Strayer, David
2014-01-01
A one-boundary diffusion model was applied to the data from two experiments in which subjects were performing a simple simulated driving task. In the first experiment, the same subjects were tested on two driving tasks using a PC-based driving simulator and the psychomotor vigilance test (PVT). The diffusion model fit the response time (RT) distributions for each task and individual subject well. Model parameters were found to correlate across tasks which suggests common component processes were being tapped in the three tasks. The model was also fit to a distracted driving experiment of Cooper and Strayer (2008). Results showed that distraction altered performance by affecting the rate of evidence accumulation (drift rate) and/or increasing the boundary settings. This provides an interpretation of cognitive distraction whereby conversing on a cell phone diverts attention from the normal accumulation of information in the driving environment. PMID:24297620
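A one-boundary diffusion process is straightforward to simulate directly; a minimal sketch (the drift, boundary and noise values are illustrative, not the fitted parameters from the paper):

```python
import random

def first_passage_time(drift, boundary, sigma=1.0, dt=0.001, t_max=5.0):
    """Simulate one trial of a one-boundary diffusion process: evidence
    accumulates from 0 with mean rate `drift` and noise `sigma` until it
    crosses `boundary`. Returns the crossing time in seconds, or None if
    no response occurs within t_max (a lapse)."""
    x, t = 0.0, 0.0
    while t < t_max:
        x += drift * dt + sigma * random.gauss(0.0, dt ** 0.5)
        t += dt
        if x >= boundary:
            return t
    return None

random.seed(1)
# Distraction modeled as a lower drift rate: RTs shift right and spread out.
rt_attentive = [first_passage_time(drift=2.0, boundary=1.0) for _ in range(500)]
rt_distracted = [first_passage_time(drift=1.0, boundary=1.0) for _ in range(500)]
mean = lambda xs: sum(t for t in xs if t) / sum(1 for t in xs if t)
print(mean(rt_attentive) < mean(rt_distracted))   # slower responses under low drift
```

Raising the boundary instead of lowering the drift produces a similar slowing, which is why the model fits in the paper distinguish the two parameters from the full RT distributions rather than from mean RT alone.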
Software for Brain Network Simulations: A Comparative Study
Tikidji-Hamburyan, Ruben A.; Narayana, Vikram; Bozkus, Zeki; El-Ghazawi, Tarek A.
2017-01-01
Numerical simulations of brain networks are a critical part of our efforts in understanding brain functions under pathological and normal conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database: NEURON, GENESIS, and BRIAN, and perform an independent evaluation of these simulators. In addition, we study NEST, one of the leading simulators of the Human Brain Project. First, we study them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigations on the characteristics of computational architecture and efficiency indicate that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing their computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability to high-performance computing reveals that NEST can almost transparently map an existing model on a cluster or multicore computer, while NEURON requires code modification if a model developed for a single computer has to be mapped on a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators.
Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid any bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators have a bias in the computational performance toward specific types of the brain network models. PMID:28775687
Detection of fallen trees in ALS point clouds using a Normalized Cut approach trained by simulation
NASA Astrophysics Data System (ADS)
Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe
2015-07-01
Downed dead wood is regarded as an important part of forest ecosystems from an ecological perspective, which drives the need for investigating its spatial distribution. Based on several studies, Airborne Laser Scanning (ALS) has proven to be a valuable remote sensing technique for obtaining such information. This paper describes a unified approach to the detection of fallen trees from ALS point clouds based on merging short segments into whole stems using the Normalized Cut algorithm. We introduce a new method of defining the segment similarity function for the clustering procedure, where the attribute weights are learned from labeled data. Based on a relationship between Normalized Cut's similarity function and a class of regression models, we show how to learn the similarity function by training a classifier. Furthermore, we propose using an appearance-based stopping criterion for the graph cut algorithm as an alternative to the standard Normalized Cut threshold approach. We set up a virtual fallen tree generation scheme to simulate complex forest scenarios with multiple overlapping fallen stems. This simulated data is then used as a basis to learn both the similarity function and the stopping criterion for Normalized Cut. We evaluate our approach on 5 plots from the strictly protected mixed mountain forest within the Bavarian Forest National Park using reference data obtained via a manual field inventory. The experimental results show that our method is able to detect up to 90% of fallen stems in plots having 30-40% overstory cover with a correctness exceeding 80%, even in quite complex forest scenes. Moreover, the performance for feature weights trained on simulated data is competitive with the case when the weights are calculated using a grid search on the test data, which indicates that the learned similarity function and stopping criterion can generalize well on new plots.
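The core Normalized Cut step (bipartitioning a graph of segments by pairwise similarity) can be sketched with the standard spectral relaxation; the similarity matrix below is made up for illustration and is not the learned similarity function from the paper:

```python
import numpy as np

# Segments are graph nodes; W holds pairwise similarities. The sign of the
# eigenvector for the second-smallest eigenvalue of the normalized Laplacian
# (the Fiedler vector) bipartitions the graph, approximating Normalized Cut.
W = np.array([[0.0, 0.9, 0.8, 0.1, 0.0],
              [0.9, 0.0, 0.7, 0.0, 0.1],
              [0.8, 0.7, 0.0, 0.1, 0.0],
              [0.1, 0.0, 0.1, 0.0, 0.9],
              [0.0, 0.1, 0.0, 0.9, 0.0]])
d = W.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = np.eye(5) - D_inv_sqrt @ W @ D_inv_sqrt   # normalized Laplacian
vals, vecs = np.linalg.eigh(L_sym)
fiedler = vecs[:, 1]                               # second-smallest eigenvalue
labels = (fiedler > 0).astype(int)
print(labels)
```

With these similarities, the sign pattern separates nodes {0, 1, 2} from {3, 4}, mirroring how short segments with high mutual similarity are merged into one stem; recursing on each side, with the paper's appearance-based stopping criterion, yields the full detection.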
Robust and efficient estimation with weighted composite quantile regression
NASA Astrophysics Data System (ADS)
Jiang, Xuejun; Li, Jingzhi; Xia, Tian; Yan, Wanfeng
2016-09-01
In this paper we introduce a weighted composite quantile regression (CQR) estimation approach and study its application in nonlinear models such as exponential models and ARCH-type models. The weighted CQR is augmented by using a data-driven weighting scheme. With the error distribution unspecified, the proposed estimators share robustness from quantile regression and achieve nearly the same efficiency as the oracle maximum likelihood estimator (MLE) for a variety of error distributions including the normal, mixed-normal, Student's t, Cauchy distributions, etc. We also suggest an algorithm for the fast implementation of the proposed methodology. Simulations are carried out to compare the performance of different estimators, and the proposed approach is used to analyze the daily S&P 500 Composite index, which verifies the effectiveness and efficiency of our theoretical results.
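The composite quantile objective is easy to state concretely; a dependency-free sketch that fits a slope by summing the pinball loss over nine quantile levels with equal weights (the paper's data-driven weighting scheme is omitted, and all data here are synthetic):

```python
import numpy as np

def pinball(u, tau):
    """Check (pinball) loss for quantile level tau."""
    return np.where(u >= 0, tau * u, (tau - 1) * u)

def cqr_slope(x, y, taus=np.arange(1, 10) / 10, grid=None):
    """Composite quantile regression estimate of the slope in
    y = b*x + error: minimize the pinball loss summed over several
    quantile levels. Grid search keeps the sketch dependency-free."""
    if grid is None:
        grid = np.linspace(-5, 5, 2001)
    losses = []
    for b in grid:
        r = y - b * x
        # Each tau gets its own intercept: the tau-quantile of the
        # residuals minimizes that level's pinball loss exactly.
        loss = sum(pinball(r - np.quantile(r, tau), tau).sum() for tau in taus)
        losses.append(loss)
    return grid[int(np.argmin(losses))]

rng = np.random.default_rng(0)
x = rng.normal(size=400)
y = 2.0 * x + rng.standard_t(df=2, size=400)   # heavy-tailed errors
print(round(cqr_slope(x, y), 2))               # close to the true slope 2.0
```

Because the loss never squares residuals, a few extreme heavy-tailed errors barely move the estimate, which is the robustness property the abstract refers to.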
An assessment of gravity model improvements using TOPEX/Poseidon TDRSS observations
NASA Technical Reports Server (NTRS)
Putney, B. H.; Teles, J.; Eddy, W. F.; Klosko, S. M.
1992-01-01
The contribution of TOPEX/Poseidon (T/P) TDRSS data to geopotential model recovery is assessed. Simulated TDRSS one-way and Bilateration Ranging Transponder System (BRTS) observations have been generated and orbitally reduced to form normal equations for geopotential parameters. These normals have been combined with those of the latest prelaunch T/P gravity model solution using data from over 30 satellites. A study of the resulting solution error covariance shows that TDRSS can make important contributions to geopotential recovery, especially for improving T/P specific effects like those arising from orbital resonance. It is argued that future effort is desirable both to establish TDRSS orbit determination limits in a reference frame compatible with that used for the precise laser/DORIS orbits, and the reduction of these TDRSS data for geopotential recovery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makhmalbaf, Atefe; Srivastava, Viraj; Wang, Na
Weather normalization is a crucial task in several applications related to building energy conservation such as retrofit measurements and energy rating. This paper documents preliminary results from an effort to determine a set of weather adjustment coefficients that can be used to smooth out the impacts of weather on the energy use of buildings at the 1020 weather location sites available in the U.S. The U.S. Department of Energy (DOE) commercial reference building models are adopted as hypothetical models with standard operations to deliver consistency in modeling. The correlation between building envelope design, HVAC system design and properties for different building types and the change in heating and cooling energy consumption caused by variations in weather is examined.
NASA Technical Reports Server (NTRS)
Ruhlin, C. L.; Rauch, F. J., Jr.; Waters, C.
1982-01-01
The model was a 1/6.5-size, semispan version of a wing proposed for an executive-jet-transport airplane. The model was tested with a normal wingtip, a wingtip with winglet, and a normal wingtip ballasted to simulate the winglet mass properties. Flutter and aerodynamic data were acquired at Mach numbers (M) from 0.6 to 0.95. The measured transonic flutter speed boundary for each wingtip configuration had roughly the same shape, with a minimum flutter speed near M=0.82. The winglet addition and the wingtip mass ballast decreased the wing flutter speed by about 7 and 5 percent, respectively; thus, the winglet effect on flutter was more a mass effect than an aerodynamic effect.
The Dynamical Behaviors for a Class of Immunogenic Tumor Model with Delay
Muthoni, Mutei Damaris; Pang, Jianhua
2017-01-01
This paper studies the model proposed by Kuznetsov and Taylor in 1994. Inspired by Mayer et al., we introduce a time delay into the general model. The dynamic behaviors of this model are studied, including the existence and stability of the equilibria and the Hopf bifurcation of the model with discrete delays. The properties of the bifurcated periodic solutions are studied using the normal form on the center manifold. Numerical examples and simulations are given to illustrate the bifurcation analysis and the obtained results. PMID:29312457
Interfacial contact stiffness of fractal rough surfaces.
Zhang, Dayi; Xia, Ying; Scarpa, Fabrizio; Hong, Jie; Ma, Yanhong
2017-10-09
In this work we describe a theoretical model that predicts the interfacial contact stiffness of fractal rough surfaces by considering the effects of elastic and plastic deformations of the fractal asperities. We also develop an original test rig that simulates dovetail joints for turbomachinery blades, which can fine-tune the normal contact load existing between the contacting surfaces of the blade root. The interfacial contact stiffness is obtained through an inverse identification method in which finite element simulations are fitted to the experimental results. Excellent agreement is observed between the contact stiffness predicted by the theoretical model and the analogous experimental results. We demonstrate that the contact stiffness is a power law function of the normal contact load with an exponent α over the whole range of fractal dimension D (1 < D < 2). We also show that for 1 < D < 1.5 the Pohrt-Popov behavior (α = 1/(3 - D)) is valid; however, for 1.5 < D < 2, the exponent α is different and equal to 2(D - 1)/D. The difference between the model developed in this work and the Pohrt-Popov model is explained in detail.
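The piecewise exponent reported above is simple to encode; a minimal sketch of the power-law exponent as a function of fractal dimension:

```python
def stiffness_exponent(D):
    """Exponent alpha in the power law k ~ F^alpha relating interfacial
    contact stiffness to normal load, as a function of the fractal
    dimension D of the rough surface (1 < D < 2), per the result above."""
    if not 1.0 < D < 2.0:
        raise ValueError("fractal dimension must satisfy 1 < D < 2")
    if D <= 1.5:
        return 1.0 / (3.0 - D)        # Pohrt-Popov regime
    return 2.0 * (D - 1.0) / D        # regime identified in this study

# The two expressions agree at D = 1.5, where alpha = 2/3.
print(stiffness_exponent(1.2), stiffness_exponent(1.5), stiffness_exponent(1.8))
```

Note that the two branches meet continuously at D = 1.5, so the exponent varies smoothly in value (though not in functional form) across the whole range.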
Ijichi, Shinji; Ijichi, Naomi; Ijichi, Yukina; Imamura, Chikako; Sameshima, Hisami; Kawaike, Yoichi; Morioka, Hirofumi
2018-01-01
The continuing prevalence of a highly heritable and hypo-reproductive extreme tail of a human neurobehavioral quantitative diversity suggests the possibility that the reproductive majority retains the genetic mechanism for the extremes. From the perspective of stochastic epistasis, the effect of an epistatic modifier variant can randomly vary in both phenotypic value and effect direction among carriers depending on the genetic individuality, and modifier carriers are ubiquitous in the population distribution. The neutrality of the mean genetic effect in the carriers warrants the survival of the variant under selection pressures. Functionally or metabolically related modifier variants form an epistatic network module, and dozens of modules may be involved in the phenotype. To assess the significance of stochastic epistasis, a simplified module-based model was employed. The individual repertoire of the modifier variants in a module also participates in the genetic individuality, which determines the genetic contribution of each modifier in the carrier. Because the entire contribution of a module to the phenotypic outcome is consequently unpredictable in the model, the module effect represents the total contribution of the related modifiers as a stochastic unit in the simulations. As a result, the intrinsic compatibility between distributional robustness and quantitative changeability could be simulated mathematically using the model. The artificial normal distribution shape in large-sized simulations was preserved in each generation even if the lowest-fitness tail was un-reproductive. The robustness of normality across generations is analogous to the real situations of human complex diversity, including neurodevelopmental conditions. The repeated regeneration of the un-reproductive extreme tail may be inevitable for the reproductive majority's competence to survive and change, suggesting implications of the extremes for others.
Further model simulations illustrating how the fitness of extreme individuals can remain low across generations may be warranted to increase the credibility of this stochastic epistasis model.
Modeling regulation of cardiac KATP and L-type Ca2+ currents by ATP, ADP, and Mg2+.
Michailova, Anushka; Saucerman, Jeffrey; Belik, Mary Ellen; McCulloch, Andrew D
2005-03-01
Changes in cytosolic free Mg(2+) and adenosine nucleotide phosphates affect cardiac excitability and contractility. To investigate how modulation by Mg(2+), ATP, and ADP of K(ATP) and L-type Ca(2+) channels influences excitation-contraction coupling, we incorporated equations for intracellular ATP and MgADP regulation of the K(ATP) current and MgATP regulation of the L-type Ca(2+) current in an ionic-metabolic model of the canine ventricular myocyte. The new model: 1) quantitatively reproduces a dose-response relationship for the effects of changes in ATP on K(ATP) current, 2) simulates effects of ADP in modulating ATP sensitivity of the K(ATP) channel, 3) predicts activation of Ca(2+) current during rapid increase in MgATP, and 4) demonstrates that decreased ATP/ADP ratio with normal total Mg(2+) or increased free Mg(2+) with normal ATP and ADP activate K(ATP) current, shorten action potential, and alter ionic currents and intracellular Ca(2+) signals. The model predictions are in agreement with experimental data measured under normal and a variety of pathological conditions.
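Dose-response relationships of this kind are commonly written as Hill-type inhibition of the K_ATP open fraction by ATP; a minimal sketch (the half-maximal concentration and Hill coefficient below are illustrative placeholders, not this model's fitted constants, which also shift with MgADP):

```python
def f_atp(atp_mM, k_half=0.25, hill=2.0):
    """Fraction of K_ATP channels not blocked by intracellular ATP,
    modeled as Hill-type inhibition. k_half (mM) and hill are
    illustrative placeholders, not the paper's fitted constants."""
    return 1.0 / (1.0 + (atp_mM / k_half) ** hill)

# Falling ATP (e.g., in ischemia) relieves the inhibition, activating
# I_KATP and shortening the action potential, as the model predicts.
for atp in (10.0, 5.0, 1.0, 0.1):
    print(atp, round(f_atp(atp), 4))
```

In the full model, ADP raises the effective k_half, which is how the ADP modulation of ATP sensitivity listed in point 2) enters the current equation.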
Modeling regulation of cardiac KATP and L-type Ca2+ currents by ATP, ADP, and Mg2+
NASA Technical Reports Server (NTRS)
Michailova, Anushka; Saucerman, Jeffrey; Belik, Mary Ellen; McCulloch, Andrew D.
2005-01-01
Changes in cytosolic free Mg(2+) and adenosine nucleotide phosphates affect cardiac excitability and contractility. To investigate how modulation by Mg(2+), ATP, and ADP of K(ATP) and L-type Ca(2+) channels influences excitation-contraction coupling, we incorporated equations for intracellular ATP and MgADP regulation of the K(ATP) current and MgATP regulation of the L-type Ca(2+) current in an ionic-metabolic model of the canine ventricular myocyte. The new model (1) quantitatively reproduces a dose-response relationship for the effects of changes in ATP on K(ATP) current, (2) simulates effects of ADP in modulating ATP sensitivity of the K(ATP) channel, (3) predicts activation of Ca(2+) current during rapid increase in MgATP, and (4) demonstrates that decreased ATP/ADP ratio with normal total Mg(2+), or increased free Mg(2+) with normal ATP and ADP, activates K(ATP) current, shortens the action potential, and alters ionic currents and intracellular Ca(2+) signals. The model predictions are in agreement with experimental data measured under normal and a variety of pathological conditions.
Fabrication and characterization of diamond-like carbon/Ni bimorph normally closed microcages
NASA Astrophysics Data System (ADS)
Luo, J. K.; He, J. H.; Fu, Y. Q.; Flewitt, A. J.; Spearing, S. M.; Fleck, N. A.; Milne, W. I.
2005-08-01
Normally closed microcages based on highly compressively stressed diamond-like carbon (DLC) and electroplated Ni bimorph structures have been simulated, fabricated and characterized. Finite-element and analytical models were used to simulate the device performance. It was found that the radius of curvature of the bimorph layer can be adjusted by varying the DLC film stress, the total layer thickness and the thickness ratio of the DLC to Ni layers. The angular deflection of the bimorph structures can also be adjusted by varying the finger length. The radius of curvature of the microcage was in the range of 18-50 µm, suitable for capturing and confining micro-objects with sizes of 20-100 µm. The operation of this type of device is very efficient due to the large difference in thermal expansion coefficients of the DLC and the Ni layers. Electrical tests have shown that these microcages can be opened by ~90° utilizing a power smaller than 20 mW. The operating temperatures of the devices under various pulsed currents were extracted through the change in electrical resistance of the devices. The results showed that an average temperature in the range of 400-450 °C is needed to open this type of microcage by ~90°, consistent with the results from analytical simulation and finite-element modelling.
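The dependence of the bimorph's radius of curvature on film stress and layer thicknesses can be sketched with the Stoney thin-film approximation (the paper used finite-element and analytical bimorph models; Stoney is only the thin-film limit, and the material numbers below are illustrative, not the paper's):

```python
import math

def stoney_radius(film_stress_Pa, t_film_m, t_sub_m, E_sub_Pa, nu_sub):
    # Stoney approximation (valid for t_film << t_sub): curvature of an
    # elastic substrate bent by a stressed film on its surface
    kappa = (6.0 * film_stress_Pa * t_film_m * (1.0 - nu_sub)
             / (E_sub_Pa * t_sub_m ** 2))
    return 1.0 / kappa

# Illustrative numbers only: 1 GPa compressive DLC stress, 1 um DLC film
# on a 2 um Ni finger (Ni: E ~ 200 GPa, nu ~ 0.31)
R = stoney_radius(1e9, 1e-6, 2e-6, 200e9, 0.31)
theta_deg = math.degrees(100e-6 / R)   # angular deflection of a 100 um finger
```

The sketch reproduces the qualitative dependencies reported above: larger DLC stress or a thinner substrate tightens the radius, and a longer finger gives a larger angular deflection.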
[Image reconstruction of conductivity on magnetoacoustic tomography with magnetic induction].
Li, Jingyu; Yin, Tao; Liu, Zhipeng; Xu, Guohui
2010-04-01
The electrical characteristics of tissue, such as impedance and conductivity, change when pathological changes occur in biological tissue. The change in electrical characteristics usually takes place before the change in tissue density, and the difference in electrical characteristics such as conductivity between normal tissue and pathological tissue is pronounced. The method of magneto-acoustic tomography with magnetic induction is based on the theory of magnetic eddy current induction and the principles of vibration generation and acoustic transmission to obtain the boundary of the pathological tissue. The pathological change can thus be inspected by electrical-characteristic imaging that is noninvasive to the tissue. In this study, a two-layer concentric spherical model is established to simulate a malignant tumor surrounded by normal tissue; the mutual relations of the magneto-acoustic coupling effect and the coupling equations in the magnetic field are used to derive the algorithms for reconstructing the conductivity. A simulation study is conducted to test the proposed model and validate the performance of the reconstruction algorithms. The result indicates that the signal processing method used in this paper can image the conductivity boundaries of the sample in the scanning cross section. The computer simulation results validate the feasibility of applying magneto-acoustic tomography with magnetic induction to malignant tumor imaging.
Ammonia emission model for whole farm evaluation of dairy production systems.
Rotz, C Alan; Montes, Felipe; Hafner, Sasha D; Heber, Albert J; Grant, Richard H
2014-07-01
Ammonia (NH3) emissions vary considerably among farms as influenced by climate and management. Because emission measurement is difficult and expensive, process-based models provide an alternative for estimating whole farm emissions. A model that simulates the processes of NH3 formation, speciation, aqueous-gas partitioning, and mass transfer was developed and incorporated in a whole farm simulation model (the Integrated Farm System Model). Farm sources included manure on the floor of the housing facility, manure in storage (if used), field-applied manure, and deposits on pasture (if grazing is used). In a comprehensive evaluation of the model, simulated daily, seasonal, and annual emissions compared well with data measured over 2 yr for five free stall barns and two manure storages on dairy farms in the eastern United States. In a further comparison with published data, simulated and measured barn emissions were similar over differing barn designs, protein feeding levels, and seasons of the year. Simulated emissions from manure storage were also highly correlated with published emission data across locations, seasons, and different storage covers. For field-applied manure, the range in simulated annual emissions normally bounded reported mean values for different manure dry matter contents and application methods. Emissions from pastures measured in northern Europe across seasons and fertilization levels were also represented well by the model. After this evaluation, simulations of a representative dairy farm in Pennsylvania illustrated the effects of animal housing and manure management on whole farm emissions and their interactions with greenhouse gas emissions, nitrate leaching, production costs, and farm profitability. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
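The speciation and aqueous-gas partitioning steps named above can be sketched as an NH4+/NH3 equilibrium (using a commonly used pKa(T) correlation), Henry's-law partitioning, and a convective mass-transfer flux. The Henry constant, transfer coefficient, and the assumption of NH3-free ambient air below are illustrative, not the Integrated Farm System Model's values:

```python
def nh3_fraction(pH, temp_K):
    # fraction of total ammoniacal N (TAN) present as free NH3(aq);
    # temperature-dependent dissociation: pKa = 0.09018 + 2729.92 / T
    pKa = 0.09018 + 2729.92 / temp_K
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def nh3_emission_flux(tan_g_m3, pH, temp_K, henry_aq_gas, k_m_s):
    # Henry's-law partitioning of free NH3 to the gas phase at the surface,
    # then convective mass transfer to (assumed NH3-free) ambient air
    nh3_gas = tan_g_m3 * nh3_fraction(pH, temp_K) / henry_aq_gas
    return k_m_s * nh3_gas    # g NH3-N per m^2 per s
```

The sketch shows why emissions rise steeply with manure pH and temperature: both shift the TAN equilibrium toward volatile free NH3.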
Bellin, Alberto; Tonina, Daniele
2007-10-30
Available models of solute transport in heterogeneous formations fail to provide a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to filling this gap of knowledge is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. Our model accounts for dilution, mechanical mixing within the sampling volume, and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Ito Stochastic Differential Equation (SDE) that, under the hypothesis of statistical stationarity, leads to the Beta probability distribution function (pdf) for the solute concentration. This model shows large flexibility in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, the same pieces of information required for standard geostatistical techniques employing Normal or Log-Normal distributions. Additionally, we show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835-848.], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. Furthermore, we demonstrate that the same model, with the spatial moments replacing the statistical moments, can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds.
Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test and to a set of numerical simulations confirms the above findings and shows, for the first time, the superiority of the Beta model to both Normal and Log-Normal models in interpreting field data. Furthermore, we show that assuming a priori that local concentrations are normally or log-normally distributed may result in a severe underestimate of the probability of exceeding large concentrations.
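Because the Beta model is fully characterized by the first two concentration moments, the exceedance probabilities it targets can be sketched directly: fit Beta(a, b) by the method of moments, then integrate the pdf above the threshold. The crude midpoint integration below is only for self-containment; a library routine such as SciPy's regularized incomplete beta function would normally be used.

```python
import math

def beta_params_from_moments(mean, var):
    # method of moments for a Beta(a, b) concentration on [0, 1]
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

def beta_pdf(x, a, b):
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return x ** (a - 1.0) * (1.0 - x) ** (b - 1.0) / B

def prob_exceed(threshold, mean, var, n=20_000):
    # P(C > threshold) by midpoint integration of the Beta pdf
    a, b = beta_params_from_moments(mean, var)
    dx = (1.0 - threshold) / n
    return sum(beta_pdf(threshold + (i + 0.5) * dx, a, b)
               for i in range(n)) * dx
```

This is exactly the risk-analysis quantity the abstract argues Normal or Log-Normal assumptions can severely underestimate.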
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ngirmang, Gregory K., E-mail: ngirmang.1@osu.edu; Orban, Chris; Feister, Scott
We present 3D Particle-in-Cell (PIC) modeling of an ultra-intense laser experiment by the Extreme Light group at the Air Force Research Laboratory using the Large Scale Plasma (LSP) PIC code. This is the first time PIC simulations have been performed in 3D for this experiment, which involves an ultra-intense, short-pulse (30 fs) laser interacting with a water jet target at normal incidence. The laser-energy-to-ejected-electron-energy conversion efficiency observed in 2D(3v) simulations was comparable to the conversion efficiencies seen in the 3D simulations, but the angular distribution of ejected electrons in the 2D(3v) simulations displayed interesting differences with the 3D simulations' angular distribution; the observed differences between the 2D(3v) and 3D simulations were more noticeable for the simulations with higher intensity laser pulses. An analytic plane-wave model is discussed which provides some explanation for the angular distribution and energies of ejected electrons in the 2D(3v) simulations. We also performed a 3D simulation with circularly polarized light and found a significantly higher conversion efficiency and peak electron energy, which is promising for future experiments.
A multilayer approach for price dynamics in financial markets
NASA Astrophysics Data System (ADS)
Biondo, Alessio Emanuele; Pluchino, Alessandro; Rapisarda, Andrea
2017-02-01
We introduce a new Self-Organized Criticality (SOC) model for simulating price evolution in an artificial financial market, based on a multilayer network of traders. The model also implements, in a quite realistic way with respect to previous studies, the order book dynamics, by considering two assets with variable fundamental prices. Fat tails in the probability distributions of normalized returns are observed, together with other features of real financial markets.
NASA Astrophysics Data System (ADS)
Zhao, Shaorong; Takemoto, Shuzo
2000-08-01
The interseismic deformation associated with plate coupling at a subduction zone is commonly simulated by the steady-slip model in which a reverse dip-slip is imposed on the down-dip extension of the locked plate interface, or by the backslip model in which a normal slip is imposed on the locked plate interface. It is found that these two models, although totally different in principle, produce similar patterns for the vertical deformation at a subduction zone. This suggests that it is almost impossible to distinguish between these two models by analysing only the interseismic vertical deformation observed at a subduction zone. The steady-slip model cannot correctly predict the horizontal deformation associated with plate coupling at a subduction zone, a fact that is proved by both the numerical modelling in this study and the GPS (Global Positioning System) observations near the Nankai trough, southwest Japan. It is therefore inadequate to simulate the effect of the plate coupling at a subduction zone by the steady-slip model. It is also revealed that the unphysical assumption inherent in the backslip model of imposing a normal slip on the locked plate interface makes it impossible to predict correctly the horizontal motion of the subducted plate and the stress change within the overthrust zone associated with the plate coupling during interseismic stages. If the analysis made in this work is proved to be correct, some of the previous studies on interpreting the interseismic deformation observed at several subduction zones based on these two models might need substantial revision. On the basis of the investigations on plate interaction at subduction zones made using the finite element method and the kinematic/mechanical conditions of the plate coupling implied by the present plate tectonics, a synthesized model is proposed to simulate the kinematic effect of the plate interaction during interseismic stages. 
A numerical analysis shows that the proposed model, designed to simulate the motion of a subducted slab, can correctly produce the deformation and the main pattern of stress concentration associated with plate coupling at a subduction zone. The validity of the synthesized model is examined and partially verified by analysing the horizontal deformation observed by GPS near the Nankai trough, southwest Japan.
NASA Astrophysics Data System (ADS)
Larsson, David; Spühler, Jeannette H.; Günyeli, Elif; Weinkauf, Tino; Hoffman, Johan; Colarieti-Tosti, Massimiliano; Winter, Reidar; Larsson, Matilda
2017-03-01
Echocardiography is the most commonly used imaging modality in cardiology, assessing several aspects of cardiac viability. The importance of cardiac hemodynamics and 4D blood flow motion has recently been highlighted; however, such assessment is still difficult using routine echo-imaging. Instead, combining imaging with computational fluid dynamics (CFD) simulations has proven valuable, but only a few models have been applied clinically. In the following, patient-specific CFD-simulations from transthoracic dobutamine stress echocardiography have been used to analyze the left ventricular 4D blood flow in three subjects: two with normal and one with reduced left ventricular function. At each stress level, 4D images were acquired using a GE Vivid E9 (4VD, 1.7 MHz/3.3 MHz) and velocity fields simulated using a presented pathway involving endocardial segmentation, valve position identification, and solution of the incompressible Navier-Stokes equations. Flow components defined as direct flow, delayed ejection flow, retained inflow, and residual volume were calculated by particle tracing using 4th-order Runge-Kutta integration. Additionally, systolic and diastolic average velocity fields were generated. Results indicated no major changes in average velocity fields for any of the subjects. For the two subjects with normal left ventricular function, increased direct flow, decreased delayed ejection flow, constant retained inflow, and a considerable drop in residual volume were seen at increasing stress. In contrast, for the subject with reduced left ventricular function, the delayed ejection flow increased whilst the retained inflow decreased at increasing stress levels.
This feasibility study represents one of the first clinical applications of an echo-based patient-specific CFD-model at elevated stress levels, and highlights the potential of using echo-based models to capture highly transient flow events, as well as the ability of using simulation tools to study clinically complex phenomena. With larger patient studies planned for the future, and with the possibility of adding more anatomical features into the model framework, the current work demonstrates the potential of patient-specific CFD-models as a tool for quantifying 4D blood flow in the heart.
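The particle tracing with 4th-order Runge-Kutta integration mentioned above can be sketched generically. In the study, the velocity field comes from the Navier-Stokes solution sampled on the mesh; here an analytic solid-body rotation field stands in so the step can be checked against a known closed orbit:

```python
import math

def rk4_step(pos, t, dt, velocity):
    """One 4th-order Runge-Kutta step of a tracer particle through a
    (possibly time-dependent) velocity field velocity(pos, t) -> vector."""
    k1 = velocity(pos, t)
    k2 = velocity([p + 0.5 * dt * k for p, k in zip(pos, k1)], t + 0.5 * dt)
    k3 = velocity([p + 0.5 * dt * k for p, k in zip(pos, k2)], t + 0.5 * dt)
    k4 = velocity([p + dt * k for p, k in zip(pos, k3)], t + dt)
    return [p + dt / 6.0 * (a + 2.0 * b + 2.0 * c + d)
            for p, a, b, c, d in zip(pos, k1, k2, k3, k4)]

# check: under steady solid-body rotation a tracer returns to its start
# after one full revolution
rotation = lambda p, t: [-p[1], p[0], 0.0]
pos, dt = [1.0, 0.0, 0.0], 2.0 * math.pi / 1000
for i in range(1000):
    pos = rk4_step(pos, i * dt, dt, rotation)
```

Classifying direct flow, delayed ejection flow, retained inflow, and residual volume then amounts to running such traces forward and backward across the cardiac cycle and testing which chamber regions each pathline connects.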
Fox, Aaron S; Carty, Christopher P; Modenese, Luca; Barber, Lee A; Lichtwark, Glen A
2018-03-01
Altered neural control of movement and musculoskeletal deficiencies are common in children with spastic cerebral palsy (SCP), with muscle weakness and contracture commonly experienced. Both neural and musculoskeletal deficiencies are likely to contribute to abnormal gait, such as equinus gait (toe-walking), in children with SCP. However, it is not known whether the musculoskeletal deficiencies prevent normal gait or if neural control could be altered to achieve normal gait. This study examined the effect of simulated muscle weakness and contracture of the major plantarflexor/dorsiflexor muscles on the neuromuscular requirements for achieving normal walking gait in children. Initial muscle-driven simulations of walking with normal musculoskeletal properties by typically developing children were undertaken. Additional simulations with altered musculoskeletal properties were then undertaken; with muscle weakness and contracture simulated by reducing the maximum isometric force and tendon slack length, respectively, of selected muscles. Muscle activations and forces required across all simulations were then compared via waveform analysis. Maintenance of normal gait appeared robust to muscle weakness in isolation, with increased activation of weakened muscles the major compensatory strategy. With muscle contracture, reduced activation of the plantarflexors was required across the mid-portion of stance suggesting a greater contribution from passive forces. Increased activation and force during swing was also required from the tibialis anterior to counteract the increased passive forces from the simulated dorsiflexor muscle contracture. Improvements in plantarflexor and dorsiflexor motor function and muscle strength, concomitant with reductions in plantarflexor muscle stiffness may target the deficits associated with SCP that limit normal gait. Copyright © 2018 Elsevier B.V. All rights reserved.
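The two manipulations the abstract names (weakness via reduced maximum isometric force, contracture via reduced tendon slack length) can be sketched as parameter transforms on a muscle record. The muscle name and numeric values below are illustrative, not constants from the study's musculoskeletal model:

```python
from collections import namedtuple

Muscle = namedtuple("Muscle",
                    ["name", "max_isometric_force_N", "tendon_slack_length_m"])

def weaken(m, strength_fraction):
    # simulated weakness: scale down the maximum isometric force
    return m._replace(
        max_isometric_force_N=m.max_isometric_force_N * strength_fraction)

def contracture(m, slack_fraction):
    # simulated contracture: shorten tendon slack length so passive force
    # develops at shorter muscle-tendon unit lengths
    return m._replace(
        tendon_slack_length_m=m.tendon_slack_length_m * slack_fraction)

# illustrative values only
gastroc = Muscle("gastrocnemius_medialis", 1500.0, 0.40)
weak_contracted = contracture(weaken(gastroc, 0.6), 0.9)
```

Sweeping these fractions and re-running the muscle-driven simulation is the study's design: weakness forces higher activations, while contracture shifts work onto passive forces and loads the tibialis anterior in swing.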
NASA Astrophysics Data System (ADS)
Reisgen, Uwe; Schleser, Markus; Mokrov, Oleg; Zabirov, Alexander
2011-06-01
A two-dimensional transient numerical analysis and computational module for simulation of electrical and thermal characteristics during electrode melting and metal transfer in Gas-Metal-Arc-Welding (GMAW) processes is presented. Solution of the non-linear transient heat transfer equation is carried out using a control-volume finite difference technique. The computational module also includes the controlling and regulation algorithms of industrial welding power sources. The simulation results are the current and voltage waveforms, mean voltage drops at different parts of the circuit, total electric power, cathode, anode and arc powers, and arc length. We describe application of the model to the normal process (constant voltage) and to pulsed processes with U/I and I/I modulation modes. The comparisons with experimental waveforms of current and voltage show that the model predicts current, voltage and electric power with high accuracy. The model is used in the simulation package SimWeld for calculation of the heat flux into the work-piece and the weld seam formation. From the calculated heat flux and weld pool sizes, an equivalent volumetric heat source according to the Goldak model can be generated. The method was implemented and investigated with the simulation software SimWeld developed by the ISF at RWTH Aachen University.
NASA Astrophysics Data System (ADS)
Long, Jeffrey K.
1989-09-01
This thesis developed computer models of two types of amplitude comparison monopulse processors using the Block Oriented System Simulation (BOSS) software package and determined the response of these models to impulsive input signals. This study was an effort to determine the susceptibility of monopulse tracking radars to impulsive jamming signals. Two types of amplitude comparison monopulse receivers were modeled, one using logarithmic amplifiers and the other using automatic gain control (AGC) for signal normalization. Simulations of both types of systems were run under various conditions of gain or frequency imbalance between the two receiver channels. The resulting errors from the imbalanced simulations were compared to the outputs of similar, baseline simulations which had no electrical imbalances. The accuracy of both types of processors was directly affected by gain or frequency imbalances in their receiver channels. In most cases, it was possible to generate both positive and negative angular errors, dependent upon the type and degree of mismatch between the channels. The system most susceptible to induced errors was a frequency-imbalanced processor which used AGC circuitry. Any errors introduced will be a function of the degree of mismatch between the channels and therefore would be difficult to exploit reliably.
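The two normalization schemes the thesis modeled can be sketched as error-signal functions of the two channel amplitudes. The gain parameters below are hypothetical knobs added to show how a channel imbalance biases the angle error even with the target on boresight:

```python
import math

def monopulse_error_log(a, b, gain_a=1.0, gain_b=1.0):
    # log-amplifier processor: the angle error signal is the difference of
    # the log channel outputs; a gain imbalance appears as a constant bias
    return math.log(gain_a * a) - math.log(gain_b * b)

def monopulse_error_agc(a, b, gain_a=1.0, gain_b=1.0):
    # AGC / sum-difference normalization: (A - B) / (A + B)
    a, b = gain_a * a, gain_b * b
    return (a - b) / (a + b)
```

With equal channel amplitudes (on-boresight) and balanced gains both errors are zero; any gain mismatch shifts the indicated angle, which is the susceptibility mechanism the simulations explored.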
Characteristic analysis on UAV-MIMO channel based on normalized correlation matrix.
Gao, Xi jun; Chen, Zi li; Hu, Yong Jiang
2014-01-01
Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicle (UAV), the simple form of UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By the methods of channel matrix decomposition and coefficient normalization, the analytic formula of UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can be applied to describe the changes of UAV-MIMO channel characteristics under different parameter settings comprehensively. This analysis method provides a theoretical basis for improving the transmission performance of UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication.
Characteristic Analysis on UAV-MIMO Channel Based on Normalized Correlation Matrix
Xi jun, Gao; Zi li, Chen; Yong Jiang, Hu
2014-01-01
Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicle (UAV), the simple form of UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By the methods of channel matrix decomposition and coefficient normalization, the analytic formula of UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can be applied to describe the changes of UAV-MIMO channel characteristics under different parameter settings comprehensively. This analysis method provides a theoretical basis for improving the transmission performance of UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication. PMID:24977185
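The link the abstract draws between the normalized correlation matrix, its condition number, and channel capacity can be sketched for the 2x2 case, where the eigenvalues of R = [[1, rho], [rho, 1]] are available in closed form. The equal-power capacity expression below is a standard correlation-eigenvalue proxy, not the paper's derivation, and the SNR value in the test is illustrative:

```python
import math

def capacity_2x2(rho, snr):
    """Condition number and equal-power capacity proxy for a 2x2 MIMO
    channel with normalized correlation matrix [[1, rho], [rho, 1]],
    whose eigenvalues are 1 +/- |rho|."""
    lam_hi, lam_lo = 1.0 + abs(rho), 1.0 - abs(rho)
    cond = lam_hi / lam_lo if lam_lo > 0.0 else math.inf
    cap = sum(math.log2(1.0 + snr / 2.0 * lam) for lam in (lam_hi, lam_lo))
    return cond, cap
```

As |rho| grows (for instance, with a stronger LOS component in the UAV channel) the matrix becomes ill-conditioned and the capacity proxy drops, matching the abstract's use of the correlation matrix to analyze transmission performance.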
NASA Technical Reports Server (NTRS)
Mckissick, B. T.; Ashworth, B. R.; Parrish, R. V.; Martin, D. J., Jr.
1980-01-01
NASA's Langley Research Center conducted a simulation experiment to ascertain the comparative effects of motion cues (combinations of platform motion and g-seat normal acceleration cues) on compensatory tracking performance. In the experiment, a full six-degree-of-freedom YF-16 model was used as the simulated pursuit aircraft. The Langley Visual Motion Simulator (with in-house developed wash-out), and a Langley developed g-seat were principal components of the simulation. The results of the experiment were examined utilizing univariate and multivariate techniques. The statistical analyses demonstrate that the platform motion and g-seat cues provide additional information to the pilot that allows substantial reduction of lateral tracking error. Also, the analyses show that the g-seat cue helps reduce vertical error.
Optical Fiber Illumination System for visual flight simulation
NASA Technical Reports Server (NTRS)
Hollow, R. H.
1981-01-01
An electronically controlled lighting system simulating runway, aircraft carrier, and landing aid lights for flight simulations is described. The various colored lights that would be visible to a pilot by day, at dusk, or at night are duplicated at the distances the lights would normally become visible. Plastic optical fiber illuminators using tungsten halogen lights are distributed behind the model. The tips of the fibers of illuminators simulating runway lights are bevelled in order that they may be seen from long distances and at low angles. Fibers representing taxiway lights are pointed and polished for omni-directional visibility. The electronic intensity controls, which can be operated either manually or remotely, regulate the intensity of the lights to simulate changes in distance. A dichroic mirror and infrared filter system is used to maintain color integrity.
Jackson, M I; Hiley, M J; Yeadon, M R
2011-10-13
In the table contact phase of gymnastics vaulting both dynamic and static friction act. The purpose of this study was to develop a method of simulating Coulomb friction that incorporated both dynamic and static phases and to compare the results with those obtained using a pseudo-Coulomb implementation of friction when applied to the table contact phase of gymnastics vaulting. Kinematic data were obtained from an elite level gymnast performing handspring straight somersault vaults using a Vicon optoelectronic motion capture system. An angle-driven computer model of vaulting that simulated the interaction between a seven segment gymnast and a single segment vaulting table during the table contact phase of the vault was developed. Both dynamic and static friction were incorporated within the model by switching between two implementations of the tangential frictional force. Two vaulting trials were used to determine the model parameters using a genetic algorithm to match simulations to recorded performances. A third independent trial was used to evaluate the model and close agreement was found between the simulation and the recorded performance with an overall difference of 13.5%. The two-state simulation model was found to be capable of replicating performance at take-off and also of replicating key contact phase features such as the normal and tangential motion of the hands. The results of the two-state model were compared to those using a pseudo-Coulomb friction implementation within the simulation model. The two-state model achieved similar overall results to those of the pseudo-Coulomb model but obtained solutions more rapidly. Copyright © 2011 Elsevier Ltd. All rights reserved.
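The switching between static and dynamic friction states described above can be sketched as a single tangential-force function. Only the switching logic mirrors the two-state idea; the coefficients and velocity tolerance are illustrative, not the values fitted by the genetic algorithm:

```python
def friction_force(v_t, normal_force, applied_tangential,
                   mu_s=0.8, mu_d=0.6, v_tol=1e-4):
    """Two-state Coulomb friction: below a small sliding-velocity
    tolerance the contact is static and friction balances the applied
    tangential load up to mu_s * N; otherwise kinetic friction mu_d * N
    opposes the sliding direction. Coefficients are illustrative."""
    if abs(v_t) < v_tol:                        # static state
        limit = mu_s * normal_force
        return -max(-limit, min(limit, applied_tangential))
    # dynamic state
    return -mu_d * normal_force * (1.0 if v_t > 0.0 else -1.0)
```

Unlike a pseudo-Coulomb (regularized) law, the static branch can hold the hands stationary under load without requiring a tiny residual sliding velocity, which is the behavior the two-state model adds.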
Control of thermal therapies with moving power deposition field.
Arora, Dhiraj; Minor, Mark A; Skliar, Mikhail; Roemer, Robert B
2006-03-07
A thermal therapy feedback control approach to control thermal dose using a moving power deposition field is developed and evaluated using simulations. A normal tissue safety objective is incorporated in the controller design by imposing constraints on temperature elevations at selected normal tissue locations. The proposed control technique consists of two stages. The first stage uses a model-based sliding mode controller that dynamically generates an 'ideal' power deposition profile which is generally unrealizable with available heating modalities. Subsequently, in order to approximately realize this spatially distributed idealized power deposition, a constrained quadratic optimizer is implemented to compute intensities and dwell times for a set of pre-selected power deposition fields created by a scanned focused transducer. The dwell times for various power deposition profiles are dynamically generated online as opposed to the commonly employed a priori-decided heating strategies. Dynamic intensity and trajectory generation safeguards the treatment outcome against modelling uncertainties and unknown disturbances. The controller is designed to enforce simultaneous activation of multiple normal tissue temperature constraints by rapidly switching between various power deposition profiles. The hypothesis behind the controller design is that the simultaneous activation of multiple constraints substantially reduces treatment time without compromising normal tissue safety. The controller performance and robustness with respect to parameter uncertainties is evaluated using simulations. The results demonstrate that the proposed controller can successfully deliver the desired thermal dose to the target while maintaining the temperatures at the user-specified normal tissue locations at or below the maximum allowable values. 
Although demonstrated for the case of a scanned focused ultrasound transducer, the developed approach can be extended to other heating modalities with moving deposition fields, such as external and interstitial ultrasound phased arrays, multiple radiofrequency needle applicators and microwave antennae.
Theoretical analysis of evaporative cooling of classic heat stroke patients
NASA Astrophysics Data System (ADS)
Alzeer, Abdulaziz H.; Wissler, E. H.
2018-05-01
Heat stroke is a serious health concern globally and is associated with high mortality. Newer treatments must be designed to improve outcomes. The aim of this study is to evaluate the effect of variations in ambient temperature and wind speed on the rate of cooling in a simulated heat stroke subject using the dynamic model of Wissler. We assume that a 60-year-old 70-kg female suffers classic heat stroke after walking fully exposed to the sun for 4 h while the ambient temperature is 40 °C, relative humidity is 20%, and wind speed is 2.5 m/s. Her esophageal and skin temperatures are 41.9 and 40.7 °C at the time of collapse. Cooling is accomplished by misting with lukewarm water while exposed to forced airflow at a temperature of 20 to 40 °C and a velocity of 0.5 or 1 m/s. Skin blood flow is assumed to be either normal, one-half of normal, or twice normal. At a wind speed of 0.5 m/s and normal skin blood flow, decreasing the air temperature from 40 to 20 °C increased cooling and reduced the time required to reach the desired temperature of 38 °C. This relationship was also maintained in reduced blood flow states. Increasing wind speed to 1 m/s increased cooling and reduced the time to reach optimal temperature in both normal and reduced skin blood flow states. In conclusion, evaporative cooling methods provide an effective method for cooling classic heat stroke patients. The maximum heat dissipation from the simulated Wissler model was recorded when the entire body was misted with lukewarm water and forced air at 1 m/s and a temperature of 20 °C was applied.
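The qualitative trends reported above (cooler air, faster wind, and higher skin blood flow all shorten cooling time) can be reproduced with a lumped one-node caricature. This is far cruder than the multi-element Wissler model: the conductances, the wind-speed exponent, and the evaporative sink offset are all invented for illustration.

```python
import math

def cooling_time_minutes(t_start, t_target, air_temp, wind_speed,
                         blood_flow_factor=1.0, mass=70.0):
    """One-node sketch of whole-body misting plus forced air.
    All coefficients are assumptions chosen only to reproduce the
    qualitative trends, not Wissler-model constants."""
    c_body = 3470.0                                  # J kg^-1 K^-1
    ua_surface = 150.0 * (wind_speed / 0.5) ** 0.6   # W K^-1, grows with wind
    ua_blood = 250.0 * blood_flow_factor             # W K^-1, core-to-skin
    ua = 1.0 / (1.0 / ua_surface + 1.0 / ua_blood)   # series conductances
    tau = mass * c_body / ua                         # time constant, s
    t_sink = air_temp - 5.0   # evaporation cools skin below ambient (assumed)
    return tau / 60.0 * math.log((t_start - t_sink) / (t_target - t_sink))
```

The series-conductance form captures the abstract's blood-flow finding: when skin blood flow is halved, internal transport (not the air side) limits heat loss, so improving surface conditions helps less.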
Numerical simulation model of hyperacute/acute stage white matter infarction.
Sakai, Koji; Yamada, Kei; Oouchi, Hiroyuki; Nishimura, Tsunehiko
2008-01-01
Although previous studies have revealed the mechanisms of changes in diffusivity (apparent diffusion coefficient [ADC]) in acute brain infarction, changes in diffusion anisotropy (fractional anisotropy [FA]) in white matter have not been examined. We hypothesized that membrane permeability as well as axonal swelling play important roles, and we therefore constructed a simulation model using random walk simulation to replicate the diffusion of water molecules. We implemented a numerical diffusion simulation model of normal and infarcted human brains in C++. We constructed this 2-pool model using simple tubes aligned in a single direction. Random walk simulation diffused water. Axon diameters and membrane permeability were then altered in step-wise fashion. To estimate the effects of axonal swelling, axon diameters were changed from 6 to 10 µm. Membrane permeability was altered from 0% to 40%. Finally, both elements were combined to explain increasing FA in the hyperacute stage of white matter infarction. The simulation demonstrated that simple water shift into the intracellular space reduces ADC and increases FA, but not to the extent expected from actual human cases (ADC approximately -50%; FA approximately +20%). Similarly, membrane permeability alone was insufficient to explain this phenomenon. However, a combination of both factors successfully replicated changes in diffusivity indices. Both axonal swelling and reduced membrane permeability appear important in explaining changes in ADC and FA based on eigenvalues in hyperacute-stage white matter infarction.
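The core mechanism (membrane permeability throttling the random walk and lowering the apparent diffusivity) can be sketched in 1-D. The paper's model is a 2-pool tube geometry in C++; this stdlib sketch uses parallel semi-permeable membranes with illustrative spacing, step size, and particle counts:

```python
import random

def random_walk_adc(permeability, n_particles=2000, n_steps=400,
                    step=0.5, spacing=8.0, seed=0):
    """Monte Carlo sketch of 1-D diffusion restricted by parallel
    semi-permeable membranes spaced `spacing` apart (an axon-diameter
    proxy). A walker hitting a membrane crosses with probability
    `permeability`, otherwise it stays put for that step. Returns the
    apparent diffusivity <x^2> / (2 * n_steps) in step-length units."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_particles):
        x0 = x = rng.uniform(0.0, spacing)
        for _ in range(n_steps):
            new = x + (step if rng.random() < 0.5 else -step)
            crossing = int(new // spacing) != int(x // spacing)
            if crossing and rng.random() >= permeability:
                continue    # reflected at the membrane: no net move
            x = new
        total += (x - x0) ** 2
    return total / n_particles / (2.0 * n_steps)
```

With fully permeable membranes the walk recovers the free diffusivity (step^2 / 2 in these units); lowering permeability suppresses the measured ADC across the membranes, which is the direction-dependent restriction that raises FA in the tube model.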
NASA Astrophysics Data System (ADS)
McGovern, S.; Kollet, S. J.; Buerger, C. M.; Schwede, R. L.; Podlaha, O. G.
2017-12-01
In the context of sedimentary basins, we present a model for the simulation of the movement of a geological formation (layers) during the evolution of the basin through sedimentation and compaction processes. Assuming a single-phase saturated porous medium for the sedimentary layers, the model focuses on the tracking of the layer interfaces, through the use of the level set method, as sedimentation drives fluid flow and reduction of pore space by compaction. On the assumption of Terzaghi's effective stress concept, the coupling of the pore fluid pressure to the motion of interfaces in 1-D is presented in McGovern et al. (2017) [1]. The current work extends the spatial domain to 3-D, though we maintain the assumption of vertical effective stress to drive the compaction. The idealized geological evolution is conceptualized as the motion of interfaces between rock layers, whose paths are determined by the magnitude of a speed function in the direction normal to the evolving layer interface. The speeds normal to the interface are dependent on the change in porosity, determined through an effective stress-based compaction law, such as the exponential Athy's law. Provided with the speeds normal to the interface, the level set method uses an advection equation to evolve a potential function, whose zero level set defines the interface. Thus, the moving layer geometry influences the pore pressure distribution, which couples back to the interface speeds. The flexible construction of the speed function allows extension, in the future, to other terms representing different physical processes, analogous to how the compaction rule represents material deformation. The 3-D model is implemented using the generic finite element framework deal.II, which provides tools, building on p4est and interfacing to PETSc, for the massively parallel distributed solution of the model equations [2]. Experiments are being run on the Juelich Supercomputing Center's JURECA cluster. [1] McGovern et al. (2017). Novel basin modelling concept for simulating deformation from mechanical compaction using level sets. Computational Geosciences, SI:ECMOR XV, 1-14. [2] Bangerth et al. (2011). Algorithms and data structures for massively parallel generic adaptive finite element codes. ACM Transactions on Mathematical Software (TOMS), 38(2):14.
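The level-set step described above can be illustrated in 1-D. This is a minimal sketch, assuming a positive, spatially uniform normal speed F and a level-set function that increases monotonically in x, so the upwind gradient reduces to a backward difference; the real model couples F to porosity and pore pressure, which is not reproduced here.

```python
def advect_level_set(phi, speed, dx, dt, n_steps):
    """First-order upwind evolution of phi_t + F |phi_x| = 0 for F > 0
    and monotonically increasing phi; the interface is the zero level set."""
    phi = list(phi)
    for _ in range(n_steps):
        prev = phi[:]
        for i in range(1, len(phi)):
            grad = max((prev[i] - prev[i - 1]) / dx, 0.0)
            phi[i] = prev[i] - dt * speed * grad
    return phi

def zero_crossing(phi, dx):
    """Locate the interface by linear interpolation of the sign change."""
    for i in range(len(phi) - 1):
        if phi[i] <= 0.0 < phi[i + 1]:
            return dx * (i - phi[i] / (phi[i + 1] - phi[i]))
    return None
```

With F = 1 and 40 steps of dt = 0.05, an interface seeded at x = 5 (signed-distance initial data) advects to x = 7, recovered by interpolating the zero crossing.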
Biegert, Edward; Vowinckel, Bernhard; Meiburg, Eckart
2017-03-21
We present a collision model for phase-resolved Direct Numerical Simulations of sediment transport that couple the fluid and particles by the Immersed Boundary Method. Typically, a contact model for these types of simulations comprises a lubrication force for particles in close proximity to another solid object, a normal contact force to prevent particles from overlapping, and a tangential contact force to account for friction. Our model extends the work of previous authors to improve upon the time integration scheme to obtain consistent results for particle-wall collisions. Furthermore, we account for polydisperse spherical particles and introduce new criteria to account for enduring contact, which occurs in many sediment transport situations. This is done without using arbitrary values for physically-defined parameters and by maintaining the full momentum balance of a particle in enduring contact. Lastly, we validate our model against several test cases for binary particle-wall collisions as well as the collective motion of a sediment bed sheared by a viscous flow, yielding satisfactory agreement with experimental data by various authors.
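The normal-contact ingredient of such models can be sketched for a single particle-wall collision. This is a minimal sketch with a linear spring-dashpot force and illustrative stiffness, damping and particle properties (not the paper's IBM-coupled scheme); it returns the effective restitution coefficient, which the damping term pushes below one.

```python
def particle_wall_restitution(z0=0.2, radius=0.05, m=0.01, g=9.81,
                              k=1e5, c=2.0, dt=1e-6, t_max=1.0):
    """Drop a sphere onto a wall at z = 0 and integrate a linear
    spring-dashpot normal contact force with semi-implicit Euler.
    Returns |v_out / v_in| for the first bounce."""
    z, v = z0, 0.0
    v_in = None
    for _ in range(int(t_max / dt)):
        overlap = radius - z
        if overlap > 0.0:
            if v_in is None:
                v_in = v                      # impact velocity (negative)
            f = k * overlap - c * v           # repulsive spring + damper
        else:
            if v_in is not None:
                return abs(v / v_in)          # contact ended: rebound ratio
            f = 0.0
        v += dt * (f / m - g)
        z += dt * v
    return 0.0
```

Increasing the dashpot coefficient dissipates more energy per collision, so the restitution coefficient falls, the basic knob such contact models calibrate against experiments.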
NASA Astrophysics Data System (ADS)
Fen, Cao; XuHai, Yang; ZhiGang, Li; ChuGang, Feng
2016-08-01
The normal consecutive observing model in the Chinese Area Positioning System (CAPS) can supply observations of only one GEO satellite per day from one station, which cannot satisfy the project's need to observe many GEO satellites in 1 day. In order to obtain observations of several GEO satellites in 1 day, as with GPS/GLONASS/Galileo/BeiDou, a time-sharing observing model for GEO satellites in CAPS is needed. The principle of the time-sharing observing model is illustrated by subsequent Precise Orbit Determination (POD) experiments using simulated time-sharing observations from 2005 and real time-sharing observations from 2015. In the time-sharing simulation experiments before 2014, observing 6 GEO satellites every 2 h achieves nearly the same orbit precision as the consecutive observing model. In the POD experiments using the real time-sharing observations, the POD precision for ZX12# and Yatai7# is about 3.234 m and 2.570 m, respectively, which indicates that the time-sharing observing model is appropriate for the CBTR system and can realize observation of many GEO satellites in 1 day.
A study of different modeling choices for simulating platelets within the immersed boundary method
Shankar, Varun; Wright, Grady B.; Fogelson, Aaron L.; Kirby, Robert M.
2012-01-01
The Immersed Boundary (IB) method is a widely used numerical methodology for the simulation of fluid–structure interaction problems. The IB method utilizes an Eulerian discretization for the fluid equations of motion while maintaining a Lagrangian representation of structural objects. Operators are defined for transmitting information (forces and velocities) between these two representations. Most IB simulations represent their structures with piecewise linear approximations and utilize Hookean spring models to approximate structural forces. Our specific motivation is the modeling of platelets in hemodynamic flows. In this paper, we study two alternative representations – radial basis functions (RBFs) and Fourier-based (trigonometric polynomials and spherical harmonics) representations – for the modeling of platelets in two and three dimensions within the IB framework, and compare our results with the traditional piecewise linear approximation methodology. For different representative shapes, we examine the geometric modeling errors (position and normal vectors), force computation errors, and computational cost, and provide an engineering trade-off strategy for when and why one might choose to employ these different representations. PMID:23585704
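A Fourier-based closed-curve representation of the kind compared here can be sketched in 2-D: positions and unit normals follow directly from a truncated trigonometric series and its derivative. The coefficient layout is an assumption for illustration, not the authors' parameterization.

```python
import math

def fourier_curve_point(ax, bx, ay, by, t):
    """Evaluate x(t) = sum_k ax[k] cos(kt) + bx[k] sin(kt) (and likewise
    y(t)), returning the point and the outward unit normal obtained by
    rotating the tangent by -90 degrees (CCW-oriented curve assumed)."""
    x = y = dx = dy = 0.0
    for k in range(len(ax)):
        c, s = math.cos(k * t), math.sin(k * t)
        x += ax[k] * c + bx[k] * s
        y += ay[k] * c + by[k] * s
        dx += k * (bx[k] * c - ax[k] * s)
        dy += k * (by[k] * c - ay[k] * s)
    norm = math.hypot(dx, dy)
    return (x, y), (dy / norm, -dx / norm)
```

For the unit circle (x = cos t, y = sin t) the computed normal is exact everywhere, whereas a piecewise linear polygon of the same sample count carries a geometric error that shrinks only quadratically with the number of nodes; this is the trade-off the paper quantifies.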
A simplified dynamic model of the T700 turboshaft engine
NASA Technical Reports Server (NTRS)
Duyar, Ahmet; Gu, Zhen; Litt, Jonathan S.
1992-01-01
A simplified open-loop dynamic model of the T700 turboshaft engine, valid within the normal operating range of the engine, is developed. This model is obtained by linking linear state-space models identified at different engine operating points. Each linear model is developed from a detailed nonlinear engine simulation using a multivariable system identification and realization method. The simplified model may be used with a model-based real-time diagnostic scheme for fault detection and diagnostics, as well as for open-loop engine dynamics studies and closed-loop control analysis utilizing a user-generated control law.
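The linking of point-wise linear models can be sketched with scalar coefficients: a hypothetical example that interpolates (a, b) pairs between tabulated operating points and integrates the blended model. The values are illustrative, not T700 identification data.

```python
def blend_models(op_points, models, q):
    """Piecewise-linear interpolation of locally identified scalar
    state-space coefficients (a, b) between tabulated operating points."""
    if q <= op_points[0]:
        return models[0]
    for (q0, q1, (a0, b0), (a1, b1)) in zip(op_points, op_points[1:],
                                            models, models[1:]):
        if q <= q1:
            w = (q - q0) / (q1 - q0)
            return ((1 - w) * a0 + w * a1, (1 - w) * b0 + w * b1)
    return models[-1]

def step_response(a, b, u, dt, n):
    """Forward-Euler step response of x' = a x + b u from rest."""
    x = 0.0
    for _ in range(n):
        x += dt * (a * x + b * u)
    return x
```

Halfway between operating points with models (a, b) = (-1, 1) and (-3, 2), the blended model (-2, 1.5) settles to the steady state -b/a · u = 0.75 for a unit step.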
Atomistic minimal model for estimating profile of electrodeposited nanopatterns
NASA Astrophysics Data System (ADS)
Asgharpour Hassankiadeh, Somayeh; Sadeghi, Ali
2018-06-01
We develop a computationally efficient and methodologically simple approach to molecular dynamics simulations of electrodeposition. Our minimal model takes into account the nontrivial electric field due to a sharp electrode tip in order to simulate the controllable coating of a thin layer on a surface with atomic precision. On the atomic scale, highly site-selective electrodeposition of ions and charged particles by means of the sharp tip of a scanning probe microscope is possible. A better understanding of the microscopic process, obtained mainly from atomistic simulations, helps us to enhance the quality of this nanopatterning technique and to make it applicable to the fabrication of nanowires and nanocontacts. In the limit of screened inter-particle interactions, it is feasible to run very fast simulations of the electrodeposition process within the framework of the proposed model and thus to investigate how the shape of the overlayer depends on the tip-sample geometry and dielectric properties, electrolyte viscosity, etc. Our results reveal that the sharpness of the profile of a nanoscale deposited overlayer is dictated by the component of the electric field normal to the sample surface underneath the tip.
A new class of actuator surface models for wind turbines
NASA Astrophysics Data System (ADS)
Yang, Xiaolei; Sotiropoulos, Fotis
2018-05-01
The actuator line model has been widely employed in wind turbine simulations. However, the standard actuator line model does not include a model for the turbine nacelle, which can significantly impact turbine wake characteristics, as shown in the literature. Another disadvantage of the standard actuator line model is that it cannot resolve the geometrical features of turbine blades, even on a finer mesh. To alleviate these disadvantages, we develop a new class of actuator surface models for turbine blades and nacelle that take into account more geometrical details of the blades and include the effect of the nacelle. In the actuator surface model for the blade, the aerodynamic forces calculated using the blade element method are distributed from the surface formed by the foil chords at different radial locations. In the actuator surface model for the nacelle, the forces are distributed from the actual nacelle surface, with the normal force component computed in the same way as in the direct-forcing immersed boundary method and the tangential force component computed using a friction coefficient and a reference velocity of the incoming flow. The actuator surface model for the nacelle is evaluated by simulating the flow over periodically placed nacelles. Both the actuator surface simulation and a wall-resolved large-eddy simulation are carried out. The comparison shows that the actuator surface model gives acceptable results, especially at far-wake locations, on a very coarse mesh. Although this model is employed for the turbine nacelle in this work, it is also applicable to other bluff bodies. The capability of the actuator surface model in predicting turbine wakes is assessed by simulating the flow over the MEXICO (Model Experiments in Controlled Conditions) turbine and a hydrokinetic turbine.
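The blade element force calculation that feeds the actuator surface can be sketched per unit span. This minimal sketch assumes a thin-airfoil lift slope of 2π and a constant drag coefficient (placeholder aerodynamics, not the MEXICO blade polars) and resolves lift and drag into axial (normal) and tangential components through the inflow angle.

```python
import math

def blade_element_forces(u_axial, u_tan, chord, twist_deg,
                         rho=1.225, cl_slope=2.0 * math.pi, cd0=0.01):
    """Per-unit-span blade element forces resolved into axial and
    tangential components via the inflow angle phi."""
    w2 = u_axial ** 2 + u_tan ** 2            # relative speed squared
    phi = math.atan2(u_axial, u_tan)          # inflow angle
    alpha = phi - math.radians(twist_deg)     # angle of attack
    q = 0.5 * rho * w2 * chord                # dynamic pressure times chord
    lift, drag = q * cl_slope * alpha, q * cd0
    f_axial = lift * math.cos(phi) + drag * math.sin(phi)
    f_tan = lift * math.sin(phi) - drag * math.cos(phi)
    return f_axial, f_tan
```

In an actuator surface model these per-span forces would then be distributed over the chord surface rather than concentrated on a line, which is the geometrical refinement the abstract describes.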
Evidence of negative-index refraction in nonlinear chemical waves.
Yuan, Xujin; Wang, Hongli; Ouyang, Qi
2011-05-06
The negative index of refraction of nonlinear chemical waves has become a recent focus in nonlinear dynamics research. Theoretical analysis and computer simulations have predicted that a negative index of refraction can occur at the interface between antiwaves and normal waves in a reaction-diffusion (RD) system. However, no experimental evidence has been found so far. In this Letter, we report our experimental design in searching for such a phenomenon in a chlorite-iodide-malonic acid (CIMA) reaction. Our experimental results demonstrate that competition between waves and antiwaves at their interface determines the fate of the wave interaction. The negative index of refraction was observed only when the oscillation frequency of the normal wave is significantly smaller than that of the antiwave. All experimental results were supported by simulations using the Lengyel-Epstein RD model, which describes the CIMA reaction-diffusion system.
Contact area of rough spheres: Large scale simulations and simple scaling laws
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pastewka, Lars, E-mail: lars.pastewka@kit.edu; Department of Physics and Astronomy, Johns Hopkins University, 3400 North Charles Street, Baltimore, Maryland 21218; Robbins, Mark O., E-mail: mr@pha.jhu.edu
2016-05-30
We use molecular simulations to study the nonadhesive and adhesive atomic-scale contact of rough spheres with radii ranging from nanometers to micrometers over more than ten orders of magnitude in applied normal load. At the lowest loads, the interfacial mechanics is governed by the contact mechanics of the first asperity that touches. The dependence of contact area on normal force becomes linear at intermediate loads and crosses over to Hertzian at the largest loads. By combining theories for the limiting cases of nominally flat rough surfaces and smooth spheres, we provide parameter-free analytical expressions for contact area over the whole range of loads. Our results establish a range of validity for common approximations that neglect curvature or roughness in modeling objects on scales from atomic force microscope tips to ball bearings.
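The large-load Hertzian limit mentioned above has a closed form: contact radius a = (3FR/4E*)^(1/3) and area πa² for smooth elastic spheres. This sketch gives only that limit; the paper's rough-surface crossover expressions are not reproduced here.

```python
import math

def hertz_contact_area(force, radius, e_star):
    """Hertz theory for smooth elastic spheres: contact radius
    a = (3 F R / (4 E*))**(1/3); returns the contact area pi a^2."""
    a = (3.0 * force * radius / (4.0 * e_star)) ** (1.0 / 3.0)
    return math.pi * a * a
```

Multiplying the load by 8 quadruples the area, the F^(2/3) scaling that distinguishes the Hertzian regime from the linear area-load relation at intermediate loads.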
Li, Jingrui; Kondov, Ivan; Wang, Haobin; Thoss, Michael
2015-04-10
A recently developed methodology to simulate photoinduced electron transfer processes at dye-semiconductor interfaces is outlined. The methodology employs a first-principles-based model Hamiltonian and accurate quantum dynamics simulations using the multilayer multiconfiguration time-dependent Hartree approach. This method is applied to study electron injection in the dye-semiconductor system coumarin 343-TiO2. Specifically, the influence of electronic-vibrational coupling is analyzed. Extending previous work, we consider the influence of Duschinsky rotation of the normal modes as well as anharmonicities of the potential energy surfaces on the electron transfer dynamics.
Numerical Simulation of the Flow over a Segment-Conical Body on the Basis of Reynolds Equations
NASA Astrophysics Data System (ADS)
Egorov, I. V.; Novikov, A. V.; Palchekovskaya, N. V.
2018-01-01
Numerical simulation was used to study the 3D supersonic flow over a segment-conical body similar in shape to the ExoMars space vehicle. The nonmonotone behavior of the normal force acting on the body in a supersonic gas flow was analyzed as a function of the angle of attack. The simulation was based on the numerical solution of the unsteady Reynolds-averaged Navier-Stokes equations with a two-parameter differential turbulence model. The solution of the problem was obtained using the in-house solver HSFlow with an efficient parallel algorithm intended for multiprocessor supercomputers.
Gilson, Erik P; Davidson, Ronald C; Efthimion, Philip C; Majeski, Richard
2004-04-16
The results presented here demonstrate that the Paul trap simulator experiment (PTSX) simulates the propagation of intense charged particle beams over distances of many kilometers through magnetic alternating-gradient (AG) transport systems by making use of the similarity between the transverse dynamics of particles in the two systems. Plasmas have been trapped that correspond to normalized intensity parameters s = ω_p²(0)/(2ω_q²)
NASA Technical Reports Server (NTRS)
Gilbert, W. P.; Nguyen, L. T.; Vangunst, R. W.
1976-01-01
A piloted, fixed-base simulation was conducted to study the effectiveness of some automatic control system features designed to improve the stability and control characteristics of fighter airplanes at high angles of attack. These features include an angle-of-attack limiter, a normal-acceleration limiter, an aileron-rudder interconnect, and a stability-axis yaw damper. The study was based on a current lightweight fighter prototype. The aerodynamic data used in the simulation were measured on a 0.15-scale model at low Reynolds number and low subsonic Mach number. The simulation was conducted on the Langley differential maneuvering simulator, and the evaluation involved representative combat maneuvering. Results of the investigation show the fully augmented airplane to be quite stable and maneuverable throughout the operational angle-of-attack range. The angle-of-attack/normal-acceleration limiting feature of the pitch control system is found to be a necessity to avoid angle-of-attack excursions at high angles of attack. The aileron-rudder interconnect system is shown to be very effective in making the airplane departure resistant while the stability-axis yaw damper provided improved high-angle-of-attack roll performance with a minimum of sideslip excursions.
Time-invariant component-based normalization for a simultaneous PET-MR scanner.
Belzunce, M A; Reader, A J
2016-05-07
Component-based normalization is a method used to compensate for the sensitivity of each of the lines of response acquired in positron emission tomography. This method consists of modelling the sensitivity of each line of response as a product of multiple factors, which can be classified as time-invariant, time-variant and acquisition-dependent components. Typical time-variant factors are the intrinsic crystal efficiencies, which need to be updated by a regular normalization scan. Failure to do so would in principle generate artifacts in the reconstructed images due to the use of out-of-date time-variant factors. For this reason, an assessment of the variability and the impact of the crystal efficiencies in the reconstructed images is important to determine the frequency needed for the normalization scans, as well as to estimate the error obtained when an inappropriate normalization is used. Furthermore, if the fluctuations of these components are low enough, they could be neglected and nearly artifact-free reconstructions become achievable without performing a regular normalization scan. In this work, we analyse the impact of the time-variant factors in the component-based normalization used in the Biograph mMR scanner, but the work is applicable to other PET scanners. These factors are the intrinsic crystal efficiencies and the axial factors. For the latter, we propose a new method to obtain fixed axial factors, validated with simulated data. Regarding the crystal efficiencies, we assessed their fluctuations over a period of 230 d and found that they had good stability and low dispersion. We studied the impact of not including the intrinsic crystal efficiencies in the normalization when reconstructing simulated and real data.
Based on this assessment and using the fixed axial factors, we propose the use of a time-invariant normalization that is able to achieve comparable results to the standard, daily updated, normalization factors used in this scanner. Moreover, to extend the analysis to other scanners, we generated distributions of crystal efficiencies with greater fluctuations than those found in the Biograph mMR scanner and evaluated their impact in simulations with a wide variety of noise levels. An important finding of this work is that a regular normalization scan is not needed in scanners with photodetectors with relatively low dispersion in their efficiencies.
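The multiplicative factorization behind component-based normalization can be sketched for a toy scanner: each line of response (LOR) gets a sensitivity equal to the product of its two crystal efficiencies and a geometric factor, and normalization factors are chosen to flatten a uniform acquisition. This is an illustrative two-component model, not the Biograph mMR component set.

```python
def lor_sensitivity(eff, geom):
    """Sensitivity of each LOR (i, j) modelled as eps_i * eps_j * g_ij,
    a toy version of the component product described in the abstract."""
    n = len(eff)
    return {(i, j): eff[i] * eff[j] * geom.get((i, j), 1.0)
            for i in range(n) for j in range(i + 1, n)}

def normalization_factors(sens):
    """Factors that flatten a uniform acquisition: mean sensitivity
    divided by each LOR's sensitivity."""
    mean_s = sum(sens.values()) / len(sens)
    return {lor: mean_s / s for lor, s in sens.items()}
```

Applying the factors to counts from a uniform source makes every corrected LOR equal; if the crystal efficiencies drift, the factors must be re-derived, which is exactly the regular update the authors argue can be skipped when the efficiency dispersion is low.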
Fang, Chao-Hua; Chang, Chia-Ming; Lai, Yu-Shu; Chen, Wen-Chuan; Song, Da-Yong; McClean, Colin J; Kao, Hao-Yuan; Qu, Tie-Bing; Cheng, Cheng-Kung
2015-11-01
Excellent clinical and kinematical performance is commonly reported after medial pivot knee arthroplasty. However, there is conflicting evidence as to whether the posterior cruciate ligament (PCL) should be retained. This study simulated how the posterior cruciate ligament, the post-cam mechanism and the medial tibial insert morphology may affect postoperative kinematics. After the computational intact knee model was validated against the motion of a normal knee, four TKA models were built based on a medial pivot prosthesis: PS type, modified PS type, CR type with the PCL retained, and CR type with the PCL sacrificed. Anteroposterior translation and axial rotation of the femoral condyles on the tibia during 0°-135° knee flexion were analyzed. There was no significant difference in kinematics between the intact knee model and reported data for a normal knee. In all TKA models, normal motion was almost fully restored, except for the CR type with the PCL sacrificed. Sacrificing the PCL produced paradoxical anterior femoral translation and tibial external rotation during full flexion. Either the posterior cruciate ligament or the post-cam mechanism is necessary for medial pivot prostheses to regain normal kinematics after total knee arthroplasty. The morphology of the medial tibial insert was also shown to produce a small but noticeable effect on knee kinematics.
Simulation of the oscillation regimes of bowed bars: a non-linear modal approach
NASA Astrophysics Data System (ADS)
Inácio, Octávio; Henrique, Luís.; Antunes, José
2003-06-01
It is still a challenge to properly simulate the complex stick-slip behavior of multi-degree-of-freedom systems. In the present paper we investigate the self-excited non-linear responses of bowed bars, using a time-domain modal approach coupled with an explicit model for the frictional forces that is able to emulate stick-slip behavior. This computational approach can provide very detailed simulations and is well suited to deal with systems presenting a dispersive behavior. The effects of the bar supporting fixture are included in the model, as well as a velocity-dependent friction coefficient. We present the results of numerical simulations for representative ranges of the bowing velocity and normal force. Computations have been performed for constant-section aluminum bars, as well as for real vibraphone bars, which display a central undercutting intended to help tune the first modes. Our results show limiting values of the normal force F_N and bowing velocity ẏ_bow for which the "musical" self-sustained solutions exist. Beyond this "playability space", period-doubled and even chaotic regimes were found for specific ranges of the input parameters F_N and ẏ_bow. As with bowed strings, the vibration amplitudes of bowed bars increase with the bow velocity. However, in contrast to string instruments, bowed bars "slip" during most of the motion cycle. Another important difference is that, in bowed bars, the self-excited motions are dominated by the system's first mode. Our numerical results are qualitatively supported by preliminary experimental results.
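The stick-slip mechanism can be sketched with a single-degree-of-freedom analogue of the bowed bar: a mass-spring "bar mode" driven by a bow moving at constant velocity, with an exponentially velocity-weakening friction coefficient. The friction law is a common modelling choice and all parameters are illustrative; the paper itself uses a modal model of the full bar.

```python
import math

def bowed_oscillator(v_bow=0.1, f_n=1.0, mu_s=0.4, mu_d=0.2, v0=0.1,
                     m=1.0, k=1.0, dt=1e-3, t_end=50.0):
    """1-DOF stick-slip sketch with mu(v) = mu_d + (mu_s - mu_d) *
    exp(-|v|/v0). Returns (max displacement, whether sticking occurred)."""
    x, v = 0.0, 0.0
    stuck, ever_stuck, x_max = False, False, 0.0
    for _ in range(int(t_end / dt)):
        if stuck and abs(k * x) > mu_s * f_n:
            stuck = False                      # static friction exceeded
        if stuck:
            v = v_bow                          # mass moves with the bow
        else:
            v_rel = v - v_bow
            mu = mu_d + (mu_s - mu_d) * math.exp(-abs(v_rel) / v0)
            if v_rel != 0.0:
                slip_dir = 1.0 if v_rel > 0 else -1.0
            else:
                slip_dir = -1.0 if k * x > 0 else 1.0  # impending slip
            f_fric = -mu * f_n * slip_dir      # opposes relative sliding
            v_new = v + dt * (f_fric - k * x) / m
            if (v - v_bow) * (v_new - v_bow) < 0.0:
                stuck, v = True, v_bow         # captured by the bow
            else:
                v = v_new
        ever_stuck = ever_stuck or stuck
        x += v * dt
        x_max = max(x_max, x)
    return x_max, ever_stuck
```

During the stick phase the spring stretches until static friction is exceeded near x = mu_s·F_N/k, so the peak displacement overshoots the kinetic equilibrium mu_d·F_N/k, the signature of a self-sustained stick-slip cycle.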
Korayem, M H; Shahali, S; Rastegar, Z
2018-06-01
The plasma membrane of most cells is not smooth. The surfaces of both small and large micropermeable cells are folded and corrugated, which gives mammalian cells a larger membrane surface than the idealized model, that is, a smooth sphere of the same volume. Since cancer is an anthropic disease, cancer cells tend to have a larger membrane area than normal cells; they therefore have a higher folding factor and larger radius than normal, healthy cells. The prevalence of breast cancer has also prompted researchers to improve on previously proposed treatment options for the disease. In this paper, the impact of the folding factor of the cell surface is investigated. Since AFM is one of the most effective tools for tests at the micro- and nanoscale, it was used to determine the topography of MCF10 cells, and the resulting images were used to extract the folding factor of the cells experimentally. By applying this factor to the Hertz, DMT and JKR contact models in the elastic and viscoelastic states, these models were modified; simulation of the three models shows that the results are closer to the experimental results when folding is considered in the calculations. Additionally, 3D manipulation was simulated in both elastic and viscoelastic states, with and without consideration of folding. Finally, the results were compared to investigate the effect of folding of the cell surface on the critical force and critical time of sliding and rolling in contact with the substrate and AFM tip in the 3D manipulation model.
Generating classes of 3D virtual mandibles for AR-based medical simulation.
Hippalgaonkar, Neha R; Sider, Alexa D; Hamza-Lup, Felix G; Santhanam, Anand P; Jaganathan, Bala; Imielinska, Celina; Rolland, Jannick P
2008-01-01
Simulation and modeling represent promising tools for several application domains, from engineering to forensic science and medicine. Advances in 3D imaging technology have brought paradigms such as augmented reality (AR) and mixed reality into promising simulation tools for the training industry. Motivated by the requirement to superimpose anatomically correct 3D models on a human patient simulator (HPS) and visualize them in an AR environment, the purpose of this research effort was to develop and validate a method for scaling a source human mandible to a target human mandible within a 2 mm root mean square (RMS) error. Results show that, given the distance between two corresponding landmarks on two different mandibles, a relative scaling factor may be computed. Using this scaling factor, a 3D virtual mandible model can be made morphometrically equivalent to a real target-specific mandible within a 1.30 mm RMS error. The virtual mandible may be further used as a reference target for registering other anatomic models, such as the lungs, on the HPS. Such registration is made possible by the physical constraints between the mandible and the spinal column in the horizontal normal rest position.
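The landmark-based scaling step described above can be sketched directly: scale a source vertex list uniformly about its centroid so that a landmark-pair distance matches the target's. The data layout and names are illustrative, not the authors' pipeline.

```python
def scale_to_target(points, d_source, d_target):
    """Uniformly scale a list of (x, y, z) vertices about the centroid
    so a landmark-pair distance d_source maps to the target's d_target."""
    s = d_target / d_source
    n = float(len(points))
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    cz = sum(p[2] for p in points) / n
    return [(cx + s * (x - cx), cy + s * (y - cy), cz + s * (z - cz))
            for (x, y, z) in points]
```

After scaling, the distance between the two landmark vertices equals the target distance exactly; residual shape differences between subjects are what the reported 1.30 mm RMS error measures.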
Basafa, Ehsan; Murphy, Ryan J; Kutzer, Michael D; Otake, Yoshito; Armand, Mehran
2013-01-01
Femoroplasty is a potential preventive treatment for osteoporotic hip fractures. It involves augmenting the mechanical properties of the femur by injecting polymethylmethacrylate (PMMA) bone cement. To reduce the risks involved and maximize the outcome, however, the procedure needs to be carefully planned and executed. An important part of the planning system is predicting the infiltration of cement into the porous medium of cancellous bone. We used the method of Smoothed Particle Hydrodynamics (SPH) to model the flow of PMMA inside porous media. We modified the standard formulation of SPH to incorporate the extreme viscosities associated with bone cement. Darcy creeping flow of fluids through isotropic porous media was simulated and the results were compared with those reported in the literature. Further validation involved injecting PMMA cement inside porous foam blocks - osteoporotic cancellous bone surrogates - and simulating the injections using our proposed SPH model. Millimeter accuracy was obtained in comparing the simulated and actual cement shapes. Also, strong correlations were found between the simulated and the experimental data for spreading distance (R² = 0.86) and normalized pressure (R² = 0.90). The results suggest that the proposed model is suitable for use in an osteoporotic femoral augmentation planning framework.
Fast ray-tracing of human eye optics on Graphics Processing Units.
Wei, Qi; Patkar, Saket; Pai, Dinesh K
2014-05-01
We present a new technique for simulating retinal image formation by tracing a large number of rays from objects in three dimensions as they pass through the optic apparatus of the eye. Simulating human optics is useful for understanding basic questions of vision science and for studying vision defects and their corrections. Because of the complexity of computing such simulations accurately, most previous efforts used simplified analytical models of the normal eye. This makes them less effective in modeling vision disorders associated with abnormal shapes of the ocular structures, which are hard to represent precisely with analytical surfaces. We have developed a computer simulator that can handle ocular structures of arbitrary shape, for instance represented by polygon meshes. Topographic and geometric measurements of the cornea, lens, and retina from keratometer or medical imaging data can be integrated for individualized examination. We utilize parallel processing on modern Graphics Processing Units (GPUs) to efficiently compute retinal images by tracing millions of rays, and a stable retinal image can be generated within minutes. We simulated depth-of-field, accommodation, chromatic aberrations, as well as astigmatism and its correction. We also show application of the technique to patient-specific vision correction by incorporating geometric models of the orbit reconstructed from clinical medical images.
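The core refraction step of such a ray tracer is Snell's law in vector form, applied at every surface a ray crosses. This is a minimal 2-D sketch; the air-to-aqueous index pair used in the test is chosen for illustration, not taken from the paper's eye model.

```python
import math

def refract(d, n, n1, n2):
    """Vector Snell's law in 2-D: d is the unit incident direction, n the
    unit surface normal pointing toward the incident side. Returns the
    unit refracted direction, or None on total internal reflection."""
    eta = n1 / n2
    cos_i = -(d[0] * n[0] + d[1] * n[1])
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                       # total internal reflection
    cos_t = math.sqrt(1.0 - sin2_t)
    return (eta * d[0] + (eta * cos_i - cos_t) * n[0],
            eta * d[1] + (eta * cos_i - cos_t) * n[1])
```

A ray entering a denser medium bends toward the normal (its transverse component shrinks), while a normally incident ray passes through unchanged; chaining this step over polygon-mesh surfaces is what lets the simulator handle arbitrary ocular shapes.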
A Database of COBE-normalized Cold Dark Matter Simulations
NASA Astrophysics Data System (ADS)
Martel, Hugo; Matzner, Richard
2000-02-01
We have simulated the formation and evolution of large-scale structure in the universe for 68 different COBE-normalized cosmological models. For each cosmological model, we have performed between one and three simulations, for a total of 160 simulations. This constitutes the largest database of cosmological simulations ever assembled, and the largest cosmological parameter space ever covered by such simulations. We are making this database available to the astronomical community. We provide instructions for accessing the database and for converting the data from computational units to physical units. The database includes tilted cold dark matter (TCDM) models, tilted open cold dark matter (TOCDM) models, and tilted Λ cold dark matter (TΛCDM) models. (For several simulations, the primordial exponent n of the power spectrum is near unity, hence these simulations can be considered "untilted.") The simulations cover a four-dimensional cosmological parameter space, the parameters being the present density parameter Ω0, cosmological constant λ0, Hubble constant H0, and rms density fluctuation σ8 at the scale 8 h⁻¹ Mpc. All simulations were performed using a P3M algorithm with 64³ particles on a 128³ mesh, in a cubic volume of comoving size 128 Mpc. Each simulation starts at a redshift of 24 and is carried up to the present. More simulations will be added to the database in the future. We have performed a limited amount of data reduction and analysis of the final states of the simulations. We computed the rms density fluctuation, the two-point correlation function, the velocity moments, and the properties of clusters. Our results are the following: 1. The numerical value σ8(num) of the rms density fluctuation differs from the value σ8(cont) obtained by integrating the power spectrum at early times and extrapolating linearly up to the present. This results from the combined effects of discreteness in the numerical representation of the power spectrum, the presence of a Gaussian factor in the initial conditions, and late-time nonlinear evolution. The first of these three effects is negligible. The second and third are comparable, and can each modify the value of σ8 by up to 10%. Nonlinear effects, however, are important only for models with σ8 > 0.6, and can result in either an increase or a decrease in σ8. 2. The observed galaxy two-point correlation function is well reproduced (assuming an unbiased relation between galaxies and mass) by models with σ8 ~ 0.8, nearly independently of the values of the other parameters Ω0, λ0, and H0. For models with σ8 > 0.8, the correlation function is too large and its slope is too steep. For models with σ8 < 0.8, the correlation function is too small and its slope is too shallow. 3. At small separations, r < 1 Mpc, the velocity moments indicate that small clusters have reached virial equilibrium while still accreting matter from the field. The velocity moments depend essentially upon Ω0 and σ8, and not λ0 and H0. The pairwise particle velocity dispersions are much larger than the observed pairwise galaxy velocity dispersion for nearly all models. Velocity bias between galaxies and dark matter is needed to reconcile the simulations with observations. 4. The cluster multiplicity function is decreasing for models with σ8 ~ 0.3. It has a horizontal plateau for models with σ8 in the range 0.4-0.9. For models with σ8 > 0.9, it has a U shape, which is probably a numerical artifact caused by the finite number of particles used in the simulations. For all models, clusters have densities in the range 100-1000 times the mean background density, spin parameters λ in the range 0.008-0.2 with the median near 0.05, and about 2/3 of the clusters are prolate. Rotationally supported disks do not form in these simulations.
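The two-point correlation function computed for each simulation can be sketched with the natural estimator, ξ(r) ≈ DD/RR − 1, which compares data pair counts to those of an unclustered random catalog. A toy single-bin version with periodic minimum-image distances (bin edges, box size, and point counts are arbitrary choices for illustration):

```python
import math, random

def pair_fraction(pts, box, r_lo, r_hi):
    """Fraction of point pairs whose periodic separation lies in [r_lo, r_hi)."""
    n, hits = len(pts), 0
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a, b in zip(pts[i], pts[j]):
                dx = abs(a - b)
                dx = min(dx, box - dx)  # minimum-image convention
                d2 += dx * dx
            if r_lo <= math.sqrt(d2) < r_hi:
                hits += 1
    return hits / (n * (n - 1) / 2)

def xi_natural(data, box, r_lo, r_hi, n_random=500, seed=1):
    """Natural estimator xi = DD/RR - 1 against a uniform random catalog."""
    rng = random.Random(seed)
    randoms = [tuple(rng.uniform(0.0, box) for _ in range(3))
               for _ in range(n_random)]
    return (pair_fraction(data, box, r_lo, r_hi)
            / pair_fraction(randoms, box, r_lo, r_hi) - 1.0)

# An unclustered (Poisson) point set should give xi ~ 0 at any separation.
rng = random.Random(2)
data = [tuple(rng.uniform(0.0, 100.0) for _ in range(3)) for _ in range(300)]
xi = xi_natural(data, 100.0, 5.0, 10.0)
```

Production analyses use binned separations and more robust estimators, but the structure is the same: clustered particles yield ξ > 0 at small r.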
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polsdofer, E; Crilly, R
Purpose: This study investigates the effect of eye size and eccentricity on doses to critical tissues by simulating doses in the Plaque Simulator (v. 6.3.1) software. Present OHSU plaque brachytherapy treatment focuses on delivering radiation to the tumor as measured with ocular ultrasound, plus a small margin, and assumes the orbit has the dimensions of a "standard eye." Accurately modeling the dimensions of the orbit requires a high-resolution ocular CT. This study quantifies how standard differences in equatorial diameters and eccentricity affect calculated doses to critical structures, in order to assess whether adding the CT scan to the treatment planning process is justified. Methods: Tumors of 10 mm × 10 mm × 5 mm were modeled at the 12:00 hour with a latitude of 45 degrees. Right eyes were modeled at a number of equatorial diameters from 17.5 to 28 mm for each of the standard non-notched COMS plaques with Silastic inserts. The COMS plaques were fully loaded with uniform activity, centered on the tumor, and prescribed to a common tumor dose (85 Gy/100 hours). Variations in the calculated doses to normal structures were examined to see if the changes were significant. Results: The calculated doses to normal structures show a marked dependence on eye geometry. This is exemplified by the fovea dose, which more than doubled in the smaller eyes and nearly halved in the larger model. A significant dependence on plaque size was also found, in spite of all plaques giving the same dose to the prescription point. Conclusion: The variation in dose with eye dimension fully justifies the addition of a high-resolution ocular CT to the planning technique. Additional attention must be paid to plaque size, beyond simply covering the tumor, when considering normal tissue dose.
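The strong sensitivity of fovea dose to eye size has a simple first-order explanation: for a roughly point-like plaque source, dose falls off as the inverse square of the plaque-to-structure distance. A sketch (the chord lengths below are assumptions for illustration, not Plaque Simulator output):

```python
def inverse_square_scale(d_ref_mm, d_new_mm):
    """Dose scale factor when the source-to-point distance changes,
    assuming pure inverse-square falloff."""
    return (d_ref_mm / d_new_mm) ** 2

# Shrinking the plaque-to-fovea chord from ~24.6 mm (a "standard eye"
# diameter) to ~17.5 mm roughly doubles the geometric dose at the fovea.
scale = inverse_square_scale(24.6, 17.5)
```

Attenuation, anisotropy, and the Silastic insert modify this further, but the geometry alone already reproduces the "more than doubled" behavior reported for small eyes.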
Extinction models for cancer stem cell therapy
Sehl, Mary; Zhou, Hua; Sinsheimer, Janet S.; Lange, Kenneth L.
2012-01-01
Cells with stem cell-like properties are now viewed as initiating and sustaining many cancers. This suggests that cancer can be cured by driving these cancer stem cells to extinction. The problem with this strategy is that ordinary stem cells are apt to be killed in the process. This paper sets bounds on the killing differential (difference between death rates of cancer stem cells and normal stem cells) that must exist for the survival of an adequate number of normal stem cells. Our main tools are birth–death Markov chains in continuous time. In this framework, we investigate the extinction times of cancer stem cells and normal stem cells. Application of extreme value theory from mathematical statistics yields an accurate asymptotic distribution and corresponding moments for both extinction times. We compare these distributions for the two cell populations as a function of the killing rates. Perhaps a more telling comparison involves the number of normal stem cells NH at the extinction time of the cancer stem cells. Conditioning on the asymptotic time to extinction of the cancer stem cells allows us to calculate the asymptotic mean and variance of NH. The full distribution of NH can be retrieved by the finite Fourier transform and, in some parameter regimes, by an eigenfunction expansion. Finally, we discuss the impact of quiescence (the resting state) on stem cell dynamics. Quiescence can act as a sanctuary for cancer stem cells and imperils the proposed therapy. We approach the complication of quiescence via multitype branching process models and stochastic simulation. Improvements to the τ-leaping method of stochastic simulation make it a versatile tool in this context. We conclude that the proposed therapy must target quiescent cancer stem cells as well as actively dividing cancer stem cells. The current cancer models demonstrate the virtue of attacking the same quantitative questions from a variety of modeling, mathematical, and computational perspectives. 
PMID:22001354
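The birth-death Markov chains at the core of the extinction analysis can be simulated directly with the Gillespie algorithm; a minimal sketch (the rates and run count are illustrative assumptions, not the paper's parameters):

```python
import math, random

def extinction_time(n0, b, d, rng):
    """Gillespie simulation of a linear birth-death chain: from state n,
    births occur at rate n*b and deaths at rate n*d. Returns the time at
    which the population hits 0 (extinction is certain when d > b)."""
    n, t = n0, 0.0
    while n > 0:
        t += rng.expovariate(n * (b + d))   # exponential waiting time
        if rng.random() < b / (b + d):      # next event is a birth w.p. b/(b+d)
            n += 1
        else:
            n -= 1
    return t

# For n0 = 1 and b < d, the mean extinction time is -ln(1 - b/d) / b,
# a classical result the simulation should reproduce.
rng = random.Random(0)
b, d = 0.5, 1.0
mean_t = sum(extinction_time(1, b, d, rng) for _ in range(4000)) / 4000.0
theory = -math.log(1.0 - b / d) / b
```

The paper's τ-leaping improvements matter when populations are large and exact event-by-event simulation like this becomes too slow.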
Modeling interface roughness scattering in a layered seabed for normal-incident chirp sonar signals.
Tang, Dajun; Hefner, Brian T
2012-04-01
Downward-looking sonar, such as the chirp sonar, is widely used as a sediment survey tool in shallow water environments. Inversion of geo-acoustic parameters from such sonar data depends on the availability of forward models. An exact numerical model is developed to initiate the simulation of the acoustic field produced by such a sonar in the presence of multiple rough interfaces. The sediment layers are assumed to be fluid layers with non-intersecting rough interfaces.
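The smooth-interface baseline on which a rough-interface model builds is the normal-incidence reflection coefficient at a flat fluid-fluid boundary; a minimal sketch (the sediment density and sound speed are assumed, generic sandy-sediment values):

```python
def reflection_coefficient(rho1, c1, rho2, c2):
    """Normal-incidence pressure reflection coefficient at a flat
    fluid-fluid interface: R = (Z2 - Z1) / (Z2 + Z1), with Z = rho * c."""
    z1, z2 = rho1 * c1, rho2 * c2
    return (z2 - z1) / (z2 + z1)

# Water (1000 kg/m^3, 1500 m/s) over an assumed sandy sediment
# (1900 kg/m^3, 1650 m/s).
r = reflection_coefficient(1000.0, 1500.0, 1900.0, 1650.0)
```

Interface roughness scatters energy out of this coherent reflection, which is exactly the effect the exact numerical model is built to capture.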
Normal and hemiparetic walking
NASA Astrophysics Data System (ADS)
Pfeiffer, Friedrich; König, Eberhard
2013-01-01
The idea of a model-based control of rehabilitation for hemiparetic patients requires efficient models of human walking, healthy walking as well as hemiparetic walking. Such models are presented in this paper. They include 42 degrees of freedom and in particular allow the evaluation of kinetic magnitudes, with the goal of deriving measures for the severity of hemiparesis. As far as feasible, the simulations have been compared successfully with measurements, thus improving the confidence level for an application in clinical practice. The paper is mainly based on the dissertation [19].
Direct Numerical Simulation and Theories of Wall Turbulence with a Range of Pressure Gradients
NASA Technical Reports Server (NTRS)
Coleman, G. N.; Garbaruk, A.; Spalart, P. R.
2014-01-01
A new Direct Numerical Simulation (DNS) of Couette-Poiseuille flow at a higher Reynolds number is presented and compared with DNS of other wall-bounded flows. It is analyzed in terms of testing semi-theoretical proposals for universal behavior of the velocity, mixing length, or eddy viscosity in pressure gradients, and in terms of assessing the accuracy of two turbulence models. These models are used in two modes, the traditional one with only a dependence on the wall-normal coordinate y, and a newer one in which a lateral dependence on z is added. For pure Couette flow and the Couette-Poiseuille case considered here, this z-dependence allows some models to generate steady streamwise vortices, which generally improves the agreement with DNS and experiment. On the other hand, it complicates the comparison between DNS and models.
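One of the semi-theoretical closures such DNS data are used to test is Prandtl's mixing-length model for the eddy viscosity; a minimal sketch (the friction velocity and wall distance are illustrative values):

```python
def mixing_length_eddy_viscosity(y, dudy, kappa=0.41):
    """Prandtl mixing-length closure: nu_t = (kappa * y)^2 * |du/dy|,
    with mixing length l = kappa * y near the wall."""
    l = kappa * y
    return l * l * abs(dudy)

# In the log layer du/dy = u_tau / (kappa * y), so nu_t collapses to
# kappa * y * u_tau.
u_tau, y, kappa = 0.05, 0.01, 0.41
nu_t = mixing_length_eddy_viscosity(y, u_tau / (kappa * y), kappa)
```

Pressure gradients break the simple log-layer scaling, which is why proposals for "universal" velocity or mixing-length behavior need testing against DNS of flows like Couette-Poiseuille.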
Edge Detection Based On the Characteristic of Primary Visual Cortex Cells
NASA Astrophysics Data System (ADS)
Zhu, M. M.; Xu, Y. L.; Ma, H. Q.
2018-01-01
Aiming at the problem that it is difficult to balance edge-detection accuracy against noise robustness, and drawing on the dynamic and static perception of primary visual cortex (V1) cells, a V1 cell model is established to perform edge detection. A spatiotemporal filter is adopted to simulate the receptive field of V1 simple cells; the model V1 cell is obtained by integrating the responses of the simple cells through half-wave rectification and normalization. Edges in natural images are then detected using the static perception of the V1 cells. The simulation results show that the V1 model can basically fit the biological data and is biologically general. Moreover, compared with other edge-detection operators, the proposed model is more effective and more robust.
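The static part of this pipeline can be sketched on a toy image: a Gabor function as the spatial receptive field of a V1 simple cell, followed by half-wave rectification and divisive normalization. All constants (sigma, frequency, window size) are assumptions for illustration, and the temporal part of the paper's spatiotemporal filter is omitted:

```python
import math

def gabor(x, y, theta, sigma=1.0, freq=0.5):
    """Even-phase 2-D Gabor kernel oriented by theta, a common model of a
    V1 simple-cell spatial receptive field."""
    xr = x * math.cos(theta) + y * math.sin(theta)
    yr = -x * math.sin(theta) + y * math.cos(theta)
    return (math.exp(-(xr * xr + yr * yr) / (2.0 * sigma * sigma))
            * math.cos(2.0 * math.pi * freq * xr))

def v1_response(img, cx, cy, theta, half=3):
    """Filter the window around (cx, cy), half-wave rectify, and divide by
    the local energy (a simple divisive normalization)."""
    s, energy = 0.0, 1e-6
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            v = img[cy + dy][cx + dx]
            s += gabor(dx, dy, theta) * v
            energy += v * v
    return max(s, 0.0) / math.sqrt(energy)

# A vertical luminance step drives the cell tuned to that orientation far
# more strongly than the orthogonally tuned one.
img = [[0.0] * 8 + [1.0] * 8 for _ in range(16)]
r_pref = v1_response(img, 8, 8, 0.0)           # varies along x: vertical edges
r_orth = v1_response(img, 8, 8, math.pi / 2.0)
```

An edge map is obtained by taking, at each pixel, the maximum normalized response over a bank of orientations.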
X-ray microanalysis of porous materials using Monte Carlo simulations.
Poirier, Dominique; Gauvin, Raynald
2011-01-01
Quantitative X-ray microanalysis models, such as the ZAF or φ(ρz) methods, are normally based on solid, flat-polished specimens. This limits their use in domains where porous materials are studied, such as powder metallurgy, catalysts, and foams. Previous experimental studies have shown that an increase in porosity leads to a deficit in X-ray emission for various materials, such as graphite, Cr₂O₃, CuO, ZnS (Ichinokawa et al., '69), Al₂O₃, and Ag (Lakis et al., '92). However, the mechanisms responsible for this decrease are unclear. Porosity by itself does not explain the loss in intensity; other mechanisms have therefore been proposed, such as extra energy loss through the diffusion of electrons by surface plasmons generated at the pore-solid interfaces, surface roughness, extra charging at the pore-solid interfaces, or carbon diffusion into the pores. However, the exact mechanism is still unclear. To better understand the effects of porosity on quantitative microanalysis, a new approach using Monte Carlo simulations was developed by Gauvin (2005) using a constant pore size. In this new study, the X-ray emission model was modified to include a random log-normal distribution of pore sizes in the simulated materials. After a literature review of previous work on X-ray microanalysis of porous materials, this article presents some of the results obtained with Gauvin's modified model and compares them with experimental results. Copyright © 2011 Wiley Periodicals, Inc.
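The modification described, replacing a constant pore size with a random log-normal distribution, amounts to sampling each pore diameter from a log-normal law; a minimal sketch (the median and geometric standard deviation are assumed for illustration, not the paper's values):

```python
import math, random

def sample_pore_diameters(n, median=1.0, geo_sd=1.6, seed=7):
    """Log-normal pore diameters: ln(d) ~ Normal(ln(median), ln(geo_sd)).
    All samples are positive, matching a physical pore size."""
    rng = random.Random(seed)
    mu, sigma = math.log(median), math.log(geo_sd)
    return [rng.lognormvariate(mu, sigma) for _ in range(n)]

# The sample median should sit near the chosen distribution median.
diam = sample_pore_diameters(20000)
median_est = sorted(diam)[len(diam) // 2]
```

In the Monte Carlo electron-trajectory simulation, each sampled diameter defines a void the electron crosses without energy loss, so the distribution's spread, not just its mean, shapes the predicted X-ray deficit.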
Modelling the heart with the atrioventricular plane as a piston unit.
Maksuti, Elira; Bjällmark, Anna; Broomé, Michael
2015-01-01
Medical imaging and clinical studies have proven that the heart pumps by means of minor outer volume changes and back-and-forth longitudinal movements in the atrioventricular (AV) region. The magnitude of AV-plane displacement has also been shown to be a reliable index for diagnosis of heart failure. Despite this, AV-plane displacement is usually omitted from cardiovascular modelling. We present a lumped-parameter cardiac model in which the heart is described as a displacement pump with the AV plane functioning as a piston unit (AV piston). This unit is constructed of different upper and lower areas analogous with the difference in the atrial and ventricular cross-sections. The model output reproduces normal physiology, with a left ventricular pressure in the range of 8-130 mmHg, an atrial pressure of approximately 9 mmHg, and an arterial pressure varying between 75 mmHg and 130 mmHg. In addition, the model reproduces the direction of the main systolic and diastolic movements of the AV piston with realistic velocity magnitude (∼10 cm/s). Moreover, changes in the simulated systolic ventricular-contraction force influence diastolic filling, emphasizing the coupling between cardiac systolic and diastolic functions. The agreement between the simulation and normal physiology highlights the importance of myocardial longitudinal movements and of atrioventricular interactions in cardiac pumping. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
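Lumped-parameter models of this kind couple compartments through resistances and compliances; the arterial side is often reduced to a two-element windkessel. A minimal sketch (the R and C values are generic textbook-scale assumptions, not the paper's parameters; pressure in mmHg, flow in mL/s, time in s):

```python
def windkessel_pressure(p0, inflow, r, c, dt, steps):
    """Forward-Euler integration of the two-element windkessel
    C * dP/dt = Q_in(t) - P/R; returns the final pressure."""
    p = p0
    for k in range(steps):
        p += dt * (inflow(k * dt) - p / r) / c
    return p

# With zero inflow, pressure decays toward zero with time constant R*C:
# after t = R*C, a 100 mmHg start falls to roughly 100/e ~ 36.8 mmHg.
p_end = windkessel_pressure(100.0, lambda t: 0.0, 1.0, 1.5, 1e-3, 1500)
```

In the full model, the AV-piston displacement supplies the flow terms between atrial and ventricular compartments instead of the zero inflow used in this decay check.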
A Five-Dimensional Mathematical Model for Regional and Global Changes in Cardiac Uptake and Motion
NASA Astrophysics Data System (ADS)
Pretorius, P. H.; King, M. A.; Gifford, H. C.
2004-10-01
The objective of this work was to simultaneously introduce known regional changes in contraction pattern and perfusion to the existing gated Mathematical Cardiac Torso (MCAT) phantom heart model. We derived a simple integral to calculate the fraction of the ellipsoidal volume that makes up the left ventricle (LV), taking into account the stationary apex and the moving base. After calculating the LV myocardium volume of the existing beating heart model, we employed the property of conservation of mass to manipulate the LV ejection fraction to values ranging between 13.5% and 68.9%. Multiple dynamic heart models that differ in degree of LV wall thickening, base-to-apex motion, and ejection fraction are thus available for use with the existing MCAT methodology. To introduce more complex regional LV contraction and perfusion patterns, we used composites of dynamic heart models to create a central region with little or no motion or perfusion, surrounded by a region in which the motion and perfusion gradually revert to normal. To illustrate this methodology, the following gated cardiac acquisitions for different clinical situations were simulated analytically: 1) reduced regional motion and perfusion; 2) the same perfusion as in (1) but without the motion intervention; and 3) washout from the normal and diseased myocardial regions. Both motion and perfusion can change dynamically during a single rotation or multiple rotations of a simulated single-photon emission computed tomography acquisition system.
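The "simple integral" for the LV volume between a fixed apex and a moving base is the volume of an ellipsoid between two planes perpendicular to its long axis; a sketch (the semi-axis lengths are illustrative, not the MCAT phantom's dimensions):

```python
import math

def ellipsoid_slab_volume(a, b, c, z0, z1):
    """Volume of the ellipsoid x^2/a^2 + y^2/b^2 + z^2/c^2 <= 1 between
    the planes z = z0 and z = z1 (with -c <= z0 <= z1 <= c):
    pi*a*b*[z - z^3/(3*c^2)] evaluated from z0 to z1."""
    f = lambda z: z - z ** 3 / (3.0 * c * c)
    return math.pi * a * b * (f(z1) - f(z0))

# Integrating over the full axis recovers the ellipsoid volume 4/3*pi*a*b*c,
# and symmetry gives half the volume on either side of the equator.
v_full = ellipsoid_slab_volume(3.0, 3.0, 5.0, -5.0, 5.0)
v_half = ellipsoid_slab_volume(3.0, 3.0, 5.0, 0.0, 5.0)
```

Moving the base plane z1 while holding the apex fixed, and conserving myocardial mass, is what lets the ejection fraction be dialed across the 13.5-68.9% range.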
Element soil behaviour during pile installation simulated by 2D-DEM
NASA Astrophysics Data System (ADS)
Ji, Xiaohui; Cheng, Yi Pik; Liu, Junwei
2017-06-01
The estimation of the skin friction of onshore or offshore piles in sand is still a difficult problem for geotechnical engineers. It has been accepted by many researchers that the mechanism of driving piles into the soil shares some similarities with that of an element shear test under the constant normal stiffness (CNS) condition. This paper describes the behaviour of an element of soil next to a pile during pile penetration into dense fine sand, using 2D-DEM numerical simulation software. A new CNS servo was added to the horizontal boundary while the vertical stress was held constant. This simulates the soil in a manner similar to a CNS pile-soil interface shear test, but allows the vertical stress to remain constant, which is more realistic for the field situation. The shear behaviours observed in these simulations were very similar to results from previous researchers' laboratory shearing tests. With the normal stress and shear stress obtained from the virtual models, the friction angle and the shaft friction factor β of the API-2007 offshore pile design guideline were calculated and compared with the API recommended values.
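The CNS condition the new boundary servo enforces can be stated in one line: the target normal stress tracks the wall displacement with a fixed stiffness, dσn/du = K. A minimal sketch (stiffness and stress values are illustrative, not those of the simulations):

```python
def cns_target_stress(sigma0, stiffness, wall_displacement):
    """Constant-normal-stiffness condition: d(sigma_n)/du = K, integrated
    from the initial stress sigma_0, so sigma_n = sigma_0 + K * u."""
    return sigma0 + stiffness * wall_displacement

# A dilating sample that pushes the wall outward by 0.2 mm under
# K = 400 kPa/mm raises the target normal stress from 100 kPa to 180 kPa.
sigma_n = cns_target_stress(100.0, 400.0, 0.2)
```

In the DEM run, the servo moves the horizontal boundary each step until the measured wall stress matches this target, mimicking the confinement a pile shaft feels from the surrounding sand.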
Zhang, Di; Li, Ruiqi; Batchelor, William D.; Ju, Hui
2018-01-01
The North China Plain is one of the most important grain production regions in China, but it faces serious water shortages. To achieve a balance between water use and the need for food self-sufficiency, new water-efficient irrigation strategies need to be developed that balance water use with farmer net return. The Crop Environment Resource Synthesis Wheat (CERES-Wheat) model was calibrated and evaluated with two years of data consisting of 3-4 irrigation treatments, and the model was used to investigate long-term winter wheat productivity and water use under different irrigation management in the North China Plain. The calibrated model accurately simulated above-ground biomass, grain yield, and evapotranspiration of winter wheat in response to irrigation management. The calibrated model was then run using weather data from 1994-2016 in order to evaluate different irrigation strategies. The simulated results using historical weather data showed that grain yield and water use were sensitive to different irrigation strategies, including the amounts and dates of irrigation applications. The model simulated the highest yield when irrigation was applied at jointing (T9) in normal and dry rainfall years, and gave the highest simulated yields for irrigation at double ridge (T8) in wet years. A single simulated irrigation at jointing (T9) produced yields that were 88% of those obtained using a double irrigation treatment at T1 and T9 in wet years, 86% in normal years, and 91% in dry years. A single irrigation at jointing or double ridge produced higher water use efficiency because it achieved a higher ratio of yield to evapotranspiration. The simulated farmer irrigation practices produced the highest yield and net income. When the cost of water was taken into account, limited irrigation was found to be more profitable based on assumptions about future water costs. In order to increase farmer income, a subsidy will likely be needed to compensate farmers for yield reductions due to water savings. These results showed that there is a cost to the farmer for water conservation, but limiting irrigation to a single irrigation at jointing would minimize the impact on farmer net return in the North China Plain. PMID:29370186
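The yield-versus-water trade-off behind the profitability conclusion can be sketched as net return equal to grain revenue minus irrigation cost. Every number below (yield, price, water volumes, water cost) is a made-up illustration, not data from the study:

```python
def net_return(yield_kg_ha, grain_price, irrigation_mm, water_cost_per_mm):
    """Net return per hectare = grain revenue - irrigation water cost."""
    return yield_kg_ha * grain_price - irrigation_mm * water_cost_per_mm

# A single irrigation giving 88% of the double-irrigation yield, with half
# the water, can become the more profitable choice once water is priced
# highly enough (all values hypothetical).
double_irr = net_return(7500.0, 0.3, 150.0, 5.0)
single_irr = net_return(0.88 * 7500.0, 0.3, 75.0, 5.0)
```

At a low water price the ranking flips back toward the double irrigation, which is why the abstract's conclusion hinges on assumptions about future water costs.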
THREE-DIMENSIONAL MODELING OF THE DYNAMICS OF THERAPEUTIC ULTRASOUND CONTRAST AGENTS
Hsiao, Chao-Tsung; Lu, Xiaozhen; Chahine, Georges
2010-01-01
A 3-D thick-shell contrast agent dynamics model was developed by coupling a finite volume Navier-Stokes solver with a potential-flow boundary element method solver to simulate the dynamics of thick-shelled contrast agents subjected to pressure waves. The 3-D model was validated against a spherical thick-shell model that had itself been validated by experimental observations. We then used this model to study shell break-up during nonspherical deformations resulting from the interaction of multiple contrast agents or the presence of a nearby solid wall. Our simulations indicate that the thick viscous shell prevents the contrast agent from forming a re-entrant jet, as is normally observed for an air bubble oscillating near a solid wall. Instead, the shell thickness varies significantly from location to location during the dynamics, and this could lead to shell break-up caused by local shell thinning and stretching. PMID:20950929
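The free (unshelled) gas bubble that serves as the classical baseline for such models obeys the Rayleigh-Plesset equation; a minimal one-step integrator (all physical constants are generic water/air values, assumed for illustration):

```python
def rayleigh_plesset_step(r, rdot, p_inf, dt, p0=101325.0, r0=2e-6,
                          rho=1000.0, mu=1e-3, sigma=0.072, gamma=1.4):
    """One explicit-Euler step of the Rayleigh-Plesset equation
    rho*(R*R'' + 1.5*R'^2) = p_gas - p_inf - 2*sigma/R - 4*mu*R'/R,
    with the gas following a polytropic law of exponent gamma."""
    p_gas = (p0 + 2.0 * sigma / r0) * (r0 / r) ** (3.0 * gamma)
    rddot = (p_gas - p_inf - 2.0 * sigma / r - 4.0 * mu * rdot / r
             - 1.5 * rho * rdot * rdot) / (rho * r)
    return r + dt * rdot, rdot + dt * rddot

# At equilibrium (R = R0, R' = 0, p_inf = p0) the bubble stays put.
r_new, rdot_new = rayleigh_plesset_step(2e-6, 0.0, 101325.0, 1e-9)
```

The thick-shell model replaces the simple surface-tension and liquid-viscosity terms with stresses computed through the finite-volume shell, which is what suppresses the re-entrant jet seen for this free-bubble baseline near a wall.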