Li, Chen; Nagasaki, Masao; Ueno, Kazuko; Miyano, Satoru
2009-04-27
Model checking approaches have been applied to biological pathway validation since around 2003. Recently, Fisher et al. demonstrated the importance of the model checking approach by inferring new regulation of signaling crosstalk in C. elegans and confirming the regulation with biological experiments. They took a discrete, state-based approach to explore all possible states of the system underlying vulval precursor cell (VPC) fate specification for desired properties. However, since both discrete and continuous features appear to be indispensable parts of biological processes, it is more appropriate to use quantitative models to capture the dynamics of biological systems. The key motivation of this paper is to establish a quantitative methodology to model and analyze in silico models incorporating the model checking approach. A novel method of modeling and simulating biological systems with model checking is proposed, based on hybrid functional Petri net with extension (HFPNe) as a framework that handles both discrete and continuous events. First, we construct a quantitative VPC fate model with 1761 components using HFPNe. Second, we apply two major biological fate determination rules - Rule I and Rule II - to the VPC fate model. We then conduct 10,000 simulations for each of 48 sets of different genotypes, investigate variations of cell fate patterns under each genotype, and validate the two rules by comparing three simulation targets consisting of fate patterns obtained from in silico and in vivo experiments. In particular, an evaluation was successfully performed using our VPC fate model to investigate one target derived from biological experiments involving hybrid lineage observations. Such hybrid lineages are hard to capture with a discrete model because they occur when the system comes close to certain thresholds, as discussed by Sternberg and Horvitz in 1986. Our simulation results suggest that Rule I, which cannot be applied with qualitative model checking, is more reasonable than Rule II owing to its higher coverage of predicted fate patterns (except for the lin-15ko; lin-12ko double-mutant genotype). Further insights are also suggested. The quantitative, simulation-based model checking approach is a useful means of providing valuable biological insights and a better understanding of biological systems and observational data that may be hard to capture with the qualitative approach.
Toward improved design of check dam systems: A case study in the Loess Plateau, China
NASA Astrophysics Data System (ADS)
Pal, Debasish; Galelli, Stefano; Tang, Honglei; Ran, Qihua
2018-04-01
Check dams are one of the most common strategies for controlling sediment transport in erosion prone areas, along with soil and water conservation measures. However, existing mathematical models that simulate sediment production and delivery are often unable to simulate how the storage capacity of check dams varies with time. To explicitly account for this process-and to support the design of check dam systems-we developed a modelling framework consisting of two components, namely (1) the spatially distributed Soil Erosion and Sediment Delivery Model (WaTEM/SEDEM), and (2) a network-based model of check dam storage dynamics. The two models are run sequentially, with the second model receiving the initial sediment input to check dams from WaTEM/SEDEM. The framework is first applied to Shejiagou catchment, a 4.26 km2 area located in the Loess Plateau, China, where we study the effect of the existing check dam system on sediment dynamics. Results show that the deployment of check dams altered significantly the sediment delivery ratio of the catchment. Furthermore, the network-based model reveals a large variability in the life expectancy of check dams and abrupt changes in their filling rates. The application of the framework to six alternative check dam deployment scenarios is then used to illustrate its usefulness for planning purposes, and to derive some insights on the effect of key decision variables, such as the number, size, and site location of check dams. Simulation results suggest that better performance-in terms of life expectancy and sediment delivery ratio-could have been achieved with an alternative deployment strategy.
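As an illustration of the second component, the sketch below (hypothetical code, not the authors' implementation; dam capacities, inputs, and the trap rule are assumed for illustration) routes annual sediment loads through a small network of check dams, each trapping sediment until its capacity is exhausted, which is the kind of bookkeeping from which life expectancy and filling rates can be derived.

```python
# Hedged sketch (not the paper's code): a minimal network model of check dam
# storage dynamics. Each dam traps incoming sediment until its capacity is
# exhausted; excess sediment is routed to the downstream dam (or the outlet).
# All names, values, and the trap-efficiency rule are illustrative assumptions.

def route_sediment(dams, downstream, local_input, years):
    """dams: {name: capacity_m3}; downstream: {name: next_dam_or_None};
    local_input: {name: annual sediment from its own sub-catchment, m3/yr}."""
    storage = {d: 0.0 for d in dams}
    filled_year = {}
    for year in range(1, years + 1):
        inflow = dict(local_input)               # sediment arriving at each dam this year
        # process dams from headwaters to outlet (dict assumed topologically ordered)
        for d in dams:
            remaining = dams[d] - storage[d]
            trapped = min(inflow.get(d, 0.0), remaining)
            storage[d] += trapped
            if remaining > 0 and storage[d] >= dams[d]:
                filled_year[d] = year            # life expectancy of this dam
            passed = inflow.get(d, 0.0) - trapped
            nxt = downstream[d]
            if nxt is not None:
                inflow[nxt] = inflow.get(nxt, 0.0) + passed
    return storage, filled_year

# toy deployment: two headwater dams draining into one downstream dam
dams = {"d1": 5000.0, "d2": 8000.0, "d3": 20000.0}
downstream = {"d1": "d3", "d2": "d3", "d3": None}
local_input = {"d1": 900.0, "d2": 600.0, "d3": 1200.0}
print(route_sediment(dams, downstream, local_input, years=30))
```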
Simulation-based MDP verification for leading-edge masks
NASA Astrophysics Data System (ADS)
Su, Bo; Syrel, Oleg; Pomerantsev, Michael; Hagiwara, Kazuyuki; Pearman, Ryan; Pang, Leo; Fujimura, Aki
2017-07-01
For IC design starts below the 20nm technology node, the assist features on photomasks shrink well below 60nm and the printed patterns of those features on masks written by VSB eBeam writers start to show a large deviation from the mask designs. Traditional geometry-based fracturing starts to show large errors for those small features. As a result, other mask data preparation (MDP) methods have become available and adopted, such as rule-based Mask Process Correction (MPC), model-based MPC and eventually model-based MDP. The new MDP methods may place shot edges slightly differently from target to compensate for mask process effects, so that the final patterns on a mask are much closer to the design (which can be viewed as the ideal mask), especially for those assist features. Such an alteration generally produces better masks that are closer to the intended mask design. Traditional XOR-based MDP verification cannot detect problems caused by eBeam effects. Much like model-based OPC verification which became a necessity for OPC a decade ago, we see the same trend in MDP today. Simulation-based MDP verification solution requires a GPU-accelerated computational geometry engine with simulation capabilities. To have a meaningful simulation-based mask check, a good mask process model is needed. The TrueModel® system is a field tested physical mask model developed by D2S. The GPU-accelerated D2S Computational Design Platform (CDP) is used to run simulation-based mask check, as well as model-based MDP. In addition to simulation-based checks such as mask EPE or dose margin, geometry-based rules are also available to detect quality issues such as slivers or CD splits. Dose margin related hotspots can also be detected by setting a correct detection threshold. In this paper, we will demonstrate GPU-acceleration for geometry processing, and give examples of mask check results and performance data. GPU-acceleration is necessary to make simulation-based mask MDP verification acceptable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delgoshaei, Parastoo; Austin, Mark A.; Pertzborn, Amanda J.
State-of-the-art building simulation control methods incorporate physical constraints into their mathematical models, but omit implicit constraints associated with policies of operation and dependency relationships among rules representing those constraints. To overcome these shortcomings, there is a recent trend in enabling the control strategies with inference-based rule checking capabilities. One solution is to exploit semantic web technologies in building simulation control. Such approaches provide the tools for semantic modeling of domains, and the ability to deduce new information based on the models through use of Description Logic (DL). In a step toward enabling this capability, this paper presents a cross-disciplinary data-driven control strategy for building energy management simulation that integrates semantic modeling and formal rule checking mechanisms into a Model Predictive Control (MPC) formulation. The results show that MPC provides superior levels of performance when initial conditions and inputs are derived from inference-based rules.
Razzaq, Misbah; Ahmad, Jamil
2015-01-01
Internet worms are analogous to biological viruses since they can infect a host and have the ability to propagate through a chosen medium. To prevent the spread of a worm or to grasp how to regulate a prevailing worm, compartmental models are commonly used as a means to examine and understand the patterns and mechanisms of a worm spread. However, one of the greatest challenges is to produce methods to verify and validate the behavioural properties of a compartmental model. This is why, in this study, we suggest a framework based on Petri Nets and Model Checking through which we can meticulously examine and validate these models. We investigate the Susceptible-Exposed-Infectious-Recovered (SEIR) model and propose a new model, Susceptible-Exposed-Infectious-Recovered-Delayed-Quarantined (Susceptible/Recovered) (SEIDQR(S/I)), along with a hybrid quarantine strategy, which is then constructed and analysed using Stochastic Petri Nets and Continuous Time Markov Chains. The analysis shows that the hybrid quarantine strategy is extremely effective in reducing the risk of propagating the worm. Through Model Checking, we gained insight into the functionality of compartmental models. The Model Checking results agree well with the simulation results, which fully supports the proposed framework. PMID:26713449
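For intuition about how such a compartmental worm model can be executed stochastically, the following sketch (not the authors' code; rates and population size are illustrative) simulates the basic SEIR compartments as a continuous-time Markov chain with Gillespie's algorithm, which is one standard way to execute a stochastic Petri net.

```python
# Hedged sketch: the SEIR worm-propagation compartments simulated as a
# continuous-time Markov chain with Gillespie's algorithm. Rates and the
# population size are illustrative only.
import random

def gillespie_seir(N=1000, I0=5, beta=0.4, sigma=0.2, gamma=0.1, t_end=200.0):
    S, E, I, R = N - I0, 0, I0, 0
    t, trace = 0.0, [(0.0, N - I0, 0, I0, 0)]
    while t < t_end and (E + I) > 0:
        rates = [beta * S * I / N,   # S -> E  (exposure)
                 sigma * E,          # E -> I  (becomes infectious)
                 gamma * I]          # I -> R  (recovery / patched)
        total = sum(rates)
        t += random.expovariate(total)
        r = random.uniform(0.0, total)
        if r < rates[0]:
            S, E = S - 1, E + 1
        elif r < rates[0] + rates[1]:
            E, I = E - 1, I + 1
        else:
            I, R = I - 1, R + 1
        trace.append((t, S, E, I, R))
    return trace

peak_infectious = max(I for (_, _, _, I, _) in gillespie_seir())
print("peak number of infectious hosts:", peak_infectious)
```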
Aviation Safety: Modeling and Analyzing Complex Interactions between Humans and Automated Systems
NASA Technical Reports Server (NTRS)
Rungta, Neha; Brat, Guillaume; Clancey, William J.; Linde, Charlotte; Raimondi, Franco; Seah, Chin; Shafto, Michael
2013-01-01
The on-going transformation from the current US Air Traffic System (ATS) to the Next Generation Air Traffic System (NextGen) will force the introduction of new automated systems and will most likely cause automation to migrate from ground to air. This will yield new function allocations between humans and automation and therefore change the roles and responsibilities in the ATS. Yet, safety in NextGen is required to be at least as good as in the current system. We therefore need techniques to evaluate the safety of the interactions between humans and automation. We think that current human factors studies and simulation-based techniques will fall short in the face of ATS complexity, and that we need to add more automated techniques to simulations, such as model checking, which offers exhaustive coverage of the non-deterministic behaviors in nominal and off-nominal scenarios. In this work, we present a verification approach based both on simulations and on model checking for evaluating the roles and responsibilities of humans and automation. Models are created using Brahms (a multi-agent framework) and we show that the traditional Brahms simulations can be integrated with automated exploration techniques based on model checking, thus offering a complete exploration of the behavioral space of the scenario. Our formal analysis supports the notion of beliefs and probabilities to reason about human behavior. We demonstrate the technique with the Ueberlingen accident since it exemplifies authority problems when receiving conflicting advice from human and automated systems.
Hussain, Faraz; Jha, Sumit K; Jha, Susmit; Langmead, Christopher J
2014-01-01
Stochastic models are increasingly used to study the behaviour of biochemical systems. While the structure of such models is often readily available from first principles, unknown quantitative features of the model are incorporated into the model as parameters. Algorithmic discovery of parameter values from experimentally observed facts remains a challenge for the computational systems biology community. We present a new parameter discovery algorithm that uses simulated annealing, sequential hypothesis testing, and statistical model checking to learn the parameters in a stochastic model. We apply our technique to a model of glucose and insulin metabolism used for in-silico validation of artificial pancreata and demonstrate its effectiveness by developing a parallel CUDA-based implementation for parameter synthesis in this model.
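The sketch below illustrates the overall search scheme only, under stated assumptions: a placeholder stochastic model and property stand in for the glucose-insulin model, and a fixed-size Monte Carlo estimate stands in for sequential hypothesis testing; it is not the authors' algorithm.

```python
# Hedged sketch of the overall scheme: simulated annealing over parameters,
# scoring each candidate by the estimated probability that stochastic
# simulations satisfy an observed property. simulate() and the property are
# placeholders, and plain Monte Carlo replaces sequential hypothesis testing.
import math, random

def simulate(params):
    # placeholder stochastic model: returns a noisy response for a parameter set
    return params["k"] * 10.0 + random.gauss(0.0, 1.0)

def satisfies_property(value):
    # placeholder behavioural property extracted from observations
    return 8.0 <= value <= 12.0

def score(params, n_runs=200):
    # estimated probability that simulated behaviour satisfies the property
    hits = sum(satisfies_property(simulate(params)) for _ in range(n_runs))
    return hits / n_runs

def anneal(n_iters=500, temp0=1.0):
    current = {"k": random.uniform(0.0, 3.0)}
    cur_s = score(current)
    best, best_s = dict(current), cur_s
    for i in range(n_iters):
        temp = temp0 * (1.0 - i / n_iters) + 1e-6
        cand = {"k": min(3.0, max(0.0, current["k"] + random.gauss(0.0, 0.1)))}
        cand_s = score(cand)
        if cand_s > cur_s or random.random() < math.exp((cand_s - cur_s) / temp):
            current, cur_s = cand, cand_s
            if cur_s > best_s:
                best, best_s = dict(current), cur_s
    return best, best_s

print(anneal())   # should find k near 1.0, where the property holds most often
```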
Test of Hadronic Interaction Models with the KASCADE Hadron Calorimeter
NASA Astrophysics Data System (ADS)
Milke, J.; KASCADE Collaboration
The interpretation of extensive air shower (EAS) measurements often requires the comparison with EAS simulations based on high-energy hadronic interaction models. These interaction models have to extrapolate into kinematical regions and energy ranges beyond the limit of present accelerators. Therefore, it is necessary to test whether these models are able to describe the EAS development in a consistent way. By measuring simultaneously the hadronic, electromagnetic, and muonic part of an EAS the experiment KASCADE offers best facilities for checking the models. For the EAS simulations the program CORSIKA with several hadronic event generators implemented is used. Different hadronic observables, e.g. hadron number, energy spectrum, lateral distribution, are investigated, as well as their correlations with the electromagnetic and muonic shower size. By comparing measurements and simulations the consistency of the description of the EAS development is checked. First results with the new interaction model NEXUS and the version II.5 of the model DPMJET, recently included in CORSIKA, are presented and compared with QGSJET simulations.
Litho hotspots fixing using model based algorithm
NASA Astrophysics Data System (ADS)
Zhang, Meili; Yu, Shirui; Mao, Zhibiao; Shafee, Marwa; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe; Hu, Xinyi; Wan, Qijian; Du, Chunshan
2017-04-01
As technology advances, IC designs are getting more sophisticated, thus it becomes more critical and challenging to fix printability issues in the design flow. Running lithography checks before tapeout is now mandatory for designers, which creates a need for more advanced and easy-to-use techniques for fixing hotspots found after lithographic simulation without creating a new design rule checking (DRC) violation or generating a new hotspot. This paper presents a new methodology for fixing hotspots on layouts while using the same engine currently used to detect the hotspots. The fix is achieved by applying minimum movement of edges causing the hotspot, with consideration of DRC constraints. The fix is internally simulated by the lithographic simulation engine to verify that the hotspot is eliminated and that no new hotspot is generated by the new edge locations. Hotspot fix checking is enhanced by adding DRC checks to the litho-friendly design (LFD) rule file to guarantee that any fix options that violate DRC checks are removed from the output hint file. This extra checking eliminates the need to re-run both DRC and LFD checks to ensure the change successfully fixed the hotspot, which saves time and simplifies the designer's workflow. This methodology is demonstrated on industrial designs, where the fixing rate of single and dual layer hotspots is reported.
Numerical modeling and performance analysis of zinc oxide (ZnO) thin-film based gas sensor
NASA Astrophysics Data System (ADS)
Punetha, Deepak; Ranjan, Rashmi; Pandey, Saurabh Kumar
2018-05-01
This manuscript describes the modeling and analysis of a zinc oxide thin-film based gas sensor. The conductance and sensitivity of the sensing layer have been described as functions of temperature and gas concentration. The analysis has been done for reducing and oxidizing agents. Simulation results revealed the change in resistance and sensitivity of the sensor with respect to temperature and different gas concentrations. To check the feasibility of the model, all simulated results have been compared against different experimentally reported works. Wolkenstein theory has been used to model the proposed sensor, and the simulation results have been obtained using device simulation software.
Model Checking Temporal Logic Formulas Using Sticker Automata
Feng, Changwei; Wu, Huanmei
2017-01-01
As an important complex problem, the temporal logic model checking problem is still far from being fully resolved in the setting of DNA computing, especially for Computation Tree Logic (CTL), Interval Temporal Logic (ITL), and Projection Temporal Logic (PTL), because there is still a lack of approaches for DNA model checking. To address this challenge, a model checking method is proposed for checking the basic formulas in the above three temporal logic types with DNA molecules. First, one type of single-stranded DNA molecule is employed to encode the Finite State Automaton (FSA) model of the given basic formula so that a sticker automaton is obtained. On the other hand, other single-stranded DNA molecules are employed to encode the given system model so that the input strings of the sticker automaton are obtained. Next, a series of biochemical reactions are conducted between the above two types of single-stranded DNA molecules. It can then be decided whether the system satisfies the formula or not. As a result, we have developed a DNA-based approach for checking all the basic formulas of CTL, ITL, and PTL. The simulated results demonstrate the effectiveness of the new method. PMID:29119114
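Setting the DNA encoding aside, the in silico idea being encoded can be illustrated as follows (hypothetical automaton and trace): run the formula's finite state automaton over a system trace and accept if it ends in an accepting state.

```python
# Hedged sketch of the underlying check (ignoring the DNA encoding): a finite
# state automaton for a simple property is run over a system trace, and the
# trace is accepted if the automaton ends in an accepting state. The automaton
# and traces below are illustrative only.
def fsa_accepts(transitions, start, accepting, word):
    state = start
    for symbol in word:
        if (state, symbol) not in transitions:
            return False                      # no move defined: reject
        state = transitions[(state, symbol)]
    return state in accepting

# property: "eventually b occurs" over the alphabet {a, b}
transitions = {("q0", "a"): "q0", ("q0", "b"): "q1",
               ("q1", "a"): "q1", ("q1", "b"): "q1"}
print(fsa_accepts(transitions, "q0", {"q1"}, ["a", "a", "b", "a"]))   # True
print(fsa_accepts(transitions, "q0", {"q1"}, ["a", "a", "a"]))        # False
```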
NASA Astrophysics Data System (ADS)
Song, Yunquan; Lin, Lu; Jian, Ling
2016-07-01
Single-index varying-coefficient model is an important mathematical modeling method to model nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in the finite samples, our proposed variable selection method is more robust than the ones based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
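For readers unfamiliar with it, the check (pinball) loss behind the robustness claim can be illustrated as follows (illustrative values only); it grows linearly rather than quadratically in the residual, so outliers are down-weighted relative to least squares.

```python
# Hedged illustration of the check (pinball) loss used in quantile-type criteria.
# tau is the quantile level; tau = 0.5 gives a scaled absolute loss (median
# regression), which is more robust to outliers than squared-error loss.
import numpy as np

def check_loss(residuals, tau=0.5):
    r = np.asarray(residuals, dtype=float)
    return np.where(r >= 0, tau * r, (tau - 1.0) * r)

residuals = np.array([-2.0, -0.5, 0.0, 0.5, 10.0])     # 10.0 plays the outlier
print(check_loss(residuals, tau=0.5))    # [1.   0.25 0.   0.25 5.  ]
print(residuals ** 2)                    # squared loss reaches 100 on the outlier
```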
Temporal Precedence Checking for Switched Models and its Application to a Parallel Landing Protocol
NASA Technical Reports Server (NTRS)
Duggirala, Parasara Sridhar; Wang, Le; Mitra, Sayan; Viswanathan, Mahesh; Munoz, Cesar A.
2014-01-01
This paper presents an algorithm for checking temporal precedence properties of nonlinear switched systems. This class of properties subsumes bounded safety and captures requirements about visiting a sequence of predicates within given time intervals. The algorithm handles nonlinear predicates that arise from dynamics-based predictions used in alerting protocols for state-of-the-art transportation systems. It is sound and complete for nonlinear switched systems that robustly satisfy the given property. The algorithm is implemented in the Compare Execute Check Engine (C2E2) using validated simulations. As a case study, a simplified model of an alerting system for closely spaced parallel runways is considered. The proposed approach is applied to this model to check safety properties of the alerting logic for different operating conditions such as initial velocities, bank angles, aircraft longitudinal separation, and runway separation.
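The following sketch conveys the trace-level intuition of a temporal precedence check only (C2E2 itself reasons over validated reach sets of the switched system, not single traces; the predicates, windows, and trace here are illustrative).

```python
# Hedged, trace-level sketch of a bounded temporal precedence check: predicate P
# must hold at some time inside one window, and predicate Q must hold later
# inside another window. Not the C2E2 algorithm; everything here is illustrative.
def check_precedence(trace, pred_p, window_p, pred_q, window_q):
    """trace: list of (t, state); windows: (t_min, t_max) tuples."""
    t_p = next((t for t, x in trace
                if window_p[0] <= t <= window_p[1] and pred_p(x)), None)
    if t_p is None:
        return False
    return any(t > t_p and window_q[0] <= t <= window_q[1] and pred_q(x)
               for t, x in trace)

# toy trace of (time, separation) samples
trace = [(i * 0.5, 400.0 - 30.0 * i) for i in range(20)]
alert   = lambda sep: sep < 300.0          # P: alert condition reached
resolve = lambda sep: sep < 150.0          # Q: resolution condition reached
print(check_precedence(trace, alert, (0.0, 5.0), resolve, (0.0, 10.0)))   # True
```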
Wu, Hua'an; Zeng, Bo; Zhou, Meng
2017-11-15
High accuracy in water demand predictions is an important basis for the rational allocation of city water resources and forms the basis for sustainable urban development. The shortage of water resources in Chongqing, the youngest central municipality in Southwest China, has significantly increased with population growth and rapid economic development. In this paper, a new grey water-forecasting model (GWFM) was built based on the data characteristics of water consumption. The parameter estimation and error checking methods of the GWFM model were investigated. Then, the GWFM model was employed to simulate the water demands of Chongqing from 2009 to 2015 and forecast them in 2016. The simulation and prediction errors of the GWFM model were checked, and the results show that the GWFM model exhibits better simulation and prediction precision than the classical grey model with one variable and a single-order equation (GM(1,1) for short) and the frequently used discrete grey model with one variable and a single-order equation (DGM(1,1) for short). Finally, the water demand in Chongqing from 2017 to 2022 was forecasted, and some corresponding control measures and recommendations were provided based on the prediction results to ensure a viable water supply and promote the sustainable development of the Chongqing economy.
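For reference, the classical GM(1,1) baseline mentioned above can be sketched as follows (the GWFM itself is a modified grey model whose exact form is not reproduced here; the input series is illustrative).

```python
# Hedged sketch of the baseline GM(1,1) grey model referred to in the abstract.
# The paper's GWFM is a modified grey model and is not reproduced here; the
# input series below is illustrative only.
import numpy as np

def gm11_forecast(x0, n_ahead=1):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                   # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                        # mean-generated series
    B = np.column_stack((-z1, np.ones(len(z1))))
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]     # development / grey input coefficients
    def x1_hat(k):                                       # fitted accumulated value
        return (x0[0] - b / a) * np.exp(-a * k) + b / a
    fitted = [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, len(x0) + n_ahead)]
    return np.array(fitted)

water_demand = [31.2, 32.0, 33.1, 34.5, 35.2, 36.8, 37.9]   # illustrative values
print(gm11_forecast(water_demand, n_ahead=2))
```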
Intra-Urban Human Mobility and Activity Transition: Evidence from Social Media Check-In Data
Wu, Lun; Zhi, Ye; Sui, Zhengwei; Liu, Yu
2014-01-01
Most existing human mobility literature focuses on exterior characteristics of movements but neglects activities, the driving force that underlies human movements. In this research, we combine activity-based analysis with a movement-based approach to model the intra-urban human mobility observed from about 15 million check-in records during a yearlong period in Shanghai, China. The proposed model is activity-based and includes two parts: the transition of travel demands during a specific time period and the movement between locations. For the first part, we find the transition probability between activities varies over time, and then we construct a temporal transition probability matrix to represent the transition probability of travel demands during a time interval. For the second part, we suggest that the travel demands can be divided into two classes, locationally mandatory activity (LMA) and locationally stochastic activity (LSA), according to whether the demand is associated with fixed location or not. By judging the combination of predecessor activity type and successor activity type we determine three trip patterns, each associated with a different decay parameter. To validate the model, we adopt the mechanism of an agent-based model and compare the simulated results with the observed pattern from the displacement distance distribution, the spatio-temporal distribution of activities, and the temporal distribution of travel demand transitions. The results show that the simulated patterns fit the observed data well, indicating that these findings open new directions for combining activity-based analysis with a movement-based approach using social media check-in data. PMID:24824892
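A minimal sketch of the first component, under assumed data, might look as follows (activities, time slots, and check-in sequences are illustrative, not the Shanghai dataset): estimate a time-dependent activity transition matrix and sample the next travel demand from it.

```python
# Hedged sketch of the first model component only: a time-dependent activity
# transition matrix estimated from check-in sequences, used to sample the next
# travel demand. All activities and data are illustrative.
import random
from collections import defaultdict

def estimate_transition_matrices(sequences, n_slots=24):
    """sequences: lists of (hour, activity); returns P[slot][a][b] = Pr(next = b | current = a, slot)."""
    counts = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
    for seq in sequences:
        for (hour, a), (_, b) in zip(seq, seq[1:]):
            counts[hour % n_slots][a][b] += 1
    P = {}
    for slot, row in counts.items():
        P[slot] = {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
                   for a, nxt in row.items()}
    return P

def sample_next(P, slot, activity):
    dist = P[slot][activity]
    return random.choices(list(dist), weights=list(dist.values()))[0]

# illustrative check-in sequences: (hour of day, activity)
checkins = [[(8, "home"), (9, "work"), (12, "dining"), (13, "work"), (19, "home")],
            [(8, "home"), (9, "work"), (12, "work"), (18, "shopping"), (20, "home")]]
P = estimate_transition_matrices(checkins)
print(sample_next(P, 9, "work"))   # "dining" or "work", each with probability 0.5
```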
Li, Chen; Nagasaki, Masao; Koh, Chuan Hock; Miyano, Satoru
2011-05-01
Mathematical modeling and simulation studies are playing an increasingly important role in helping researchers elucidate how living organisms function in cells. In systems biology, researchers typically tune many parameters manually to achieve simulation results that are consistent with biological knowledge. This severely limits the size and complexity of the simulation models built. In order to break this limitation, we propose a computational framework to automatically estimate kinetic parameters for a given network structure. We utilized an online (on-the-fly) model checking technique (which saves resources compared to the offline approach), with a quantitative modeling and simulation architecture named hybrid functional Petri net with extension (HFPNe). We demonstrate the applicability of this framework by the analysis of the underlying model for the neuronal cell fate decision model (ASE fate model) in Caenorhabditis elegans. First, we built a quantitative ASE fate model containing 3327 components emulating nine genetic conditions. Then, using our efficient online model checker, MIRACH 1.0, together with parameter estimation, we performed 20 million simulation runs and were able to locate 57 parameter sets for 23 parameters in the model that are consistent with 45 biological rules extracted from published biological articles, without much manual intervention. To evaluate the robustness of these 57 parameter sets, we ran another 20 million simulation runs using different magnitudes of noise. Our simulation results indicate that, among these models, one model is the most reasonable and robust owing to its high stability against these stochastic noises. Our simulation results provide interesting biological findings which could be used for future wet-lab experiments.
Examining Passenger Flow Choke Points at Airports Using Discrete Event Simulation
NASA Technical Reports Server (NTRS)
Brown, Jeremy R.; Madhavan, Poornima
2011-01-01
The movement of passengers through an airport quickly, safely, and efficiently is the main function of the various checkpoints (check-in, security, etc.) found in airports. Human error combined with other breakdowns in the complex system of the airport can disrupt passenger flow through the airport, leading to lengthy waiting times, missing luggage and missed flights. In this paper we present a model of passenger flow through an airport using discrete event simulation that provides a closer look into the possible reasons for breakdowns and their implications for passenger flow. The simulation is based on data collected at Norfolk International Airport (ORF). The primary goal of this simulation is to present ways to optimize the work force to keep passenger flow smooth even during peak travel times and for emergency preparedness at ORF in case of adverse events. In this simulation we ran three different scenarios: real world, increased check-in stations, and multiple waiting lines. Increased check-in stations increased waiting time and instantaneous utilization, while the multiple waiting lines decreased both the waiting time and instantaneous utilization. This simulation was able to show how different changes affected the passenger flow through the airport.
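A minimal discrete event simulation of a single check-in queue, in the style described (but not the ORF model itself; arrival and service rates and the number of counters are assumed), could look like this.

```python
# Hedged sketch: a minimal discrete event simulation of a check-in counter queue,
# illustrating the modeling style described. It is not the ORF model; the rates
# and the number of counters are illustrative assumptions.
import heapq, random

def simulate_checkin(n_counters=3, arrival_rate=1.0, service_rate=0.4, n_pax=500):
    random.seed(1)
    free, queue, waits = n_counters, [], []
    events = [(random.expovariate(arrival_rate), "arrival")]
    arrivals = 1
    while events:
        t, kind = heapq.heappop(events)
        if kind == "arrival":
            queue.append(t)
            if arrivals < n_pax:
                heapq.heappush(events, (t + random.expovariate(arrival_rate), "arrival"))
                arrivals += 1
        else:                                   # a counter finished serving
            free += 1
        while free and queue:
            waits.append(t - queue.pop(0))      # passenger starts service now
            free -= 1
            heapq.heappush(events, (t + random.expovariate(service_rate), "departure"))
    return sum(waits) / len(waits), max(waits)

print("mean / max wait:", simulate_checkin())
```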
Symbolic discrete event system specification
NASA Technical Reports Server (NTRS)
Zeigler, Bernard P.; Chi, Sungdo
1992-01-01
Extending discrete event modeling formalisms to facilitate greater symbol manipulation capabilities is important to further their use in intelligent control and design of high autonomy systems. An extension to the DEVS formalism that facilitates symbolic expression of event times by extending the time base from the real numbers to the field of linear polynomials over the reals is defined. A simulation algorithm is developed to generate the branching trajectories resulting from the underlying nondeterminism. To efficiently manage symbolic constraints, a consistency checking algorithm for linear polynomial constraints based on feasibility checking algorithms borrowed from linear programming has been developed. The extended formalism offers a convenient means to conduct multiple, simultaneous explorations of model behaviors. Examples of application are given with concentration on fault model analysis.
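In the spirit of the consistency check described, feasibility of a set of linear constraints over symbolic time coefficients can be decided with a linear programming solver, as in this sketch (the constraint system is illustrative).

```python
# Hedged sketch, in the spirit of the consistency check described: decide whether
# {x : A x <= b}, with x the symbolic time coefficients, is non-empty by asking
# an LP solver to find any feasible point. The constraints below are illustrative.
import numpy as np
from scipy.optimize import linprog

def is_consistent(A, b):
    """Feasibility of {x : A x <= b} via an LP with a zero objective."""
    A = np.asarray(A, dtype=float)
    res = linprog(c=np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1], method="highs")
    return res.status == 0          # 0 = feasible optimum found, 2 = infeasible

# constraints on symbolic event times t1, t2:  t1 >= 1,  t2 - t1 >= 2,  t2 <= 10
A = [[-1, 0], [1, -1], [0, 1]]
b = [-1, -2, 10]
print(is_consistent(A, b))                       # True
print(is_consistent(A + [[0, -1]], b + [-20]))   # adding t2 >= 20 -> False
```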
Method and system to perform energy-extraction based active noise control
NASA Technical Reports Server (NTRS)
Kelkar, Atul (Inventor); Joshi, Suresh M. (Inventor)
2009-01-01
A method to provide active noise control to reduce noise and vibration in reverberant acoustic enclosures such as aircraft, vehicles, appliances, instruments, industrial equipment and the like is presented. A continuous-time multi-input multi-output (MIMO) state space mathematical model of the plant is obtained via analytical modeling and system identification. Compensation is designed to render the mathematical model passive in the sense of mathematical system theory. The compensated system is checked to ensure robustness of the passive property of the plant. The check ensures that the passivity is preserved if the mathematical model parameters are perturbed from nominal values. A passivity-based controller is designed and verified using numerical simulations and then tested. The controller is designed so that the resulting closed-loop response shows the desired noise reduction.
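As a rough illustration of a passivity check (not the paper's procedure, and only a sampled necessary-condition test rather than a proof; a rigorous certificate would use a KYP-lemma LMI), one can verify numerically that G(jw) + G(jw)^H is positive semidefinite on a frequency grid.

```python
# Hedged numerical sketch (not the paper's method, and only a sampled check, not
# a proof): test positive-realness of G(s) = C (sI - A)^(-1) B + D at a grid of
# frequencies by checking that G(jw) + G(jw)^H is positive semidefinite. The
# toy state-space plant below is illustrative.
import numpy as np

def sampled_positive_real(A, B, C, D, freqs):
    n = A.shape[0]
    for w in freqs:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D
        eigs = np.linalg.eigvalsh(G + G.conj().T)
        if np.min(eigs) < -1e-9:
            return False, w          # passivity condition violated near this frequency
    return True, None

A = np.array([[0.0, 1.0], [-4.0, -1.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[0.0, 1.0]])
D = np.array([[0.1]])
print(sampled_positive_real(A, B, C, D, np.logspace(-2, 3, 400)))
```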
Full implementation of a distributed hydrological model based on check dam trapped sediment volumes
NASA Astrophysics Data System (ADS)
Bussi, Gianbattista; Francés, Félix
2014-05-01
Lack of hydrometeorological data is one of the most compelling limitations to the implementation of distributed environmental models. Mediterranean catchments, in particular, are characterised by high spatial variability of meteorological phenomena and soil characteristics, which may prevent transferring model calibrations from a fully gauged catchment to a totally or partially ungauged one. For this reason, new sources of data are required in order to extend the use of distributed models to non-monitored or low-monitored areas. An important source of information regarding the hydrological and sediment cycle is represented by sediment deposits accumulated at the bottom of reservoirs. Since the 1960s, reservoir sedimentation volumes have been used as proxy data for the estimation of inter-annual total sediment yield rates or, in more recent years, as a reference measure of sediment transport for sediment model calibration and validation. Nevertheless, the possibility of using such data for constraining the calibration of a hydrological model has not been exhaustively investigated so far. In this study, the use of nine check dam reservoir sedimentation volumes for hydrological and sedimentological model calibration and spatio-temporal validation was examined. Check dams are common structures in Mediterranean areas, and are a potential source of spatially distributed information regarding both the hydrological and the sediment cycle. In this case study, the TETIS hydrological and sediment model was implemented in a medium-size Mediterranean catchment (Rambla del Poyo, Spain) by taking advantage of the sediment deposits accumulated behind the check dams located in the catchment headwaters. Reservoir trap efficiency was taken into account by coupling the TETIS model with a pond trap efficiency model. The model was calibrated by adjusting some of its parameters in order to reproduce the total sediment volume accumulated behind a check dam. Then, the model was spatially validated by obtaining the simulated sedimentation volume at the other eight check dams and comparing it to the observed sedimentation volumes. Lastly, the simulated water discharge at the catchment outlet was compared with observed water discharge records in order to check the hydrological sub-model behaviour. Model results provided highly valuable information concerning the spatial distribution of soil erosion and sediment transport. Spatial validation of the sediment sub-model provided very good results at seven check dams out of nine. This study shows that check dams can also be a useful tool for constraining hydrological model calibration, as model results agree with water discharge observations. In fact, the hydrological model validation at a downstream water flow gauge obtained a Nash-Sutcliffe efficiency of 0.8. This technique is applicable to all catchments with check dams, and only requires rainfall and temperature data and soil characteristics maps.
Model Checking Satellite Operational Procedures
NASA Astrophysics Data System (ADS)
Cavaliere, Federico; Mari, Federico; Melatti, Igor; Minei, Giovanni; Salvo, Ivano; Tronci, Enrico; Verzino, Giovanni; Yushtein, Yuri
2011-08-01
We present a model checking approach for the automatic verification of satellite operational procedures (OPs). Building a model for a complex system such as a satellite is a hard task. We overcome this obstacle by using a suitable simulator (SIMSAT) for the satellite. Our approach aims at improving OP quality assurance by automatic exhaustive exploration of all possible simulation scenarios. Moreover, our solution decreases OP verification costs by using a model checker (CMurphi) to automatically drive the simulator. We model OPs as user-executed programs observing the simulator telemetries and sending telecommands to the simulator. In order to assess the feasibility of our approach we present experimental results on a simple meaningful scenario. Our results show that we can save up to 90% of verification time.
Posterior Predictive Model Checking in Bayesian Networks
ERIC Educational Resources Information Center
Crawford, Aaron
2014-01-01
This simulation study compared the utility of various discrepancy measures within a posterior predictive model checking (PPMC) framework for detecting different types of data-model misfit in multidimensional Bayesian network (BN) models. The investigated conditions were motivated by an applied research program utilizing an operational complex…
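The generic PPMC workflow being compared can be sketched as follows, with a simple beta-binomial model standing in for a Bayesian network (all data and the discrepancy measure are illustrative): draw parameters from the posterior, simulate replicated data, and compare discrepancies.

```python
# Hedged generic sketch of the PPMC workflow (a simple beta-binomial model stands
# in for a Bayesian network): draw parameters from the posterior, simulate
# replicated data, evaluate a discrepancy on observed and replicated data, and
# summarize the comparison as a posterior predictive p-value.
import numpy as np

rng = np.random.default_rng(0)
observed = rng.binomial(1, 0.7, size=200)        # illustrative observed responses

# conjugate posterior for the success probability under a Beta(1, 1) prior
post_a, post_b = 1 + observed.sum(), 1 + (1 - observed).sum()

def discrepancy(data, theta):
    # discrepancy measure: standardized difference between the data mean and theta
    return abs(data.mean() - theta) / np.sqrt(theta * (1 - theta) / len(data))

exceed, n_draws = 0, 2000
for _ in range(n_draws):
    theta = rng.beta(post_a, post_b)             # posterior draw
    replicated = rng.binomial(1, theta, size=len(observed))
    if discrepancy(replicated, theta) >= discrepancy(observed, theta):
        exceed += 1

print("posterior predictive p-value:", exceed / n_draws)   # ~0.5 indicates adequate fit
```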
Blakes, Jonathan; Twycross, Jamie; Romero-Campero, Francisco Jose; Krasnogor, Natalio
2011-12-01
The Infobiotics Workbench is an integrated software suite incorporating model specification, simulation, parameter optimization and model checking for Systems and Synthetic Biology. A modular model specification allows for straightforward creation of large-scale models containing many compartments and reactions. Models are simulated either using stochastic simulation or numerical integration, and visualized in time and space. Model parameters and structure can be optimized with evolutionary algorithms, and model properties calculated using probabilistic model checking. Source code and binaries for Linux, Mac and Windows are available at http://www.infobiotics.org/infobiotics-workbench/; released under the GNU General Public License (GPL) version 3. Natalio.Krasnogor@nottingham.ac.uk.
Design of Installing Check Dam Using RAMMS Model in Seorak National Park of South Korea
NASA Astrophysics Data System (ADS)
Jun, K.; Tak, W.; JUN, B. H.; Lee, H. J.; KIM, S. D.
2016-12-01
As more than 64% of the land in South Korea is mountainous, many regions are exposed to the danger of landslides and debris flow, so it is important to understand the behavior of debris flow in mountainous terrain; various methods and models based on mathematical concepts are being presented and developed for this purpose. The purpose of this study is to investigate regions that experienced debris flow due to the typhoon Ewiniar and to perform numerical modeling for the design and layout of check dams intended to reduce the damage caused by debris flow. For the numerical modeling, on-site measurements of the research area were conducted, including topographic investigation, a survey of the bridges downstream, and precision LiDAR 3D scanning to compose the basic data for the numerical model. The numerical simulation was performed using the RAMMS (Rapid Mass Movements Simulation) model for the analysis of the debris flow. The model was applied to check dam configurations installed in the upstream, midstream, and downstream sections. Considering the reduction effect on debris flow, the expansion of debris flow, and the influence on the bridges downstream, a proper location for the check dam was designated. The numerical results showed that when the check dam was installed in the downstream section, 50 m above the bridge, the reduction effect on the debris flow was higher than when check dams were installed in other sections. Key words: debris flow, LiDAR, check dam, RAMMS. Acknowledgements: This research was supported by a grant [MPSS-NH-2014-74] through the Disaster and Safety Management Institute funded by the Ministry of Public Safety and Security of the Korean government.
Model-checking techniques based on cumulative residuals.
Lin, D Y; Wei, L J; Ying, Z
2002-03-01
Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
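The core construction can be sketched for a simple linear model as follows; for brevity the multiplier realizations omit the correction term for parameter estimation used in the full procedure, so this is an illustration of the idea rather than the exact method.

```python
# Hedged sketch of the core idea: the observed cumulative-residual process over a
# covariate is compared with zero-mean Gaussian multiplier realizations. The
# correction for parameter estimation used in the full construction is omitted,
# so this illustrates the idea rather than the exact procedure.
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(0, 3, n)
y = 1.0 + 0.5 * x**2 + rng.normal(0, 0.5, n)      # true relationship is quadratic

X = np.column_stack((np.ones(n), x))              # working model: linear in x
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

order = np.argsort(x)
W_obs = np.cumsum(resid[order]) / np.sqrt(n)      # observed cumulative-residual process

n_real = 1000
sup_null = np.empty(n_real)
for k in range(n_real):
    g = rng.normal(size=n)                        # Gaussian multipliers
    sup_null[k] = np.max(np.abs(np.cumsum((resid * g)[order]) / np.sqrt(n)))

p_value = np.mean(sup_null >= np.max(np.abs(W_obs)))
print("supremum statistic:", np.max(np.abs(W_obs)), "p-value:", p_value)
```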
2009-01-01
Background The study of biological networks has led to the development of increasingly large and detailed models. Computer tools are essential for the simulation of the dynamical behavior of the networks from the model. However, as the size of the models grows, it becomes infeasible to manually verify the predictions against experimental data or identify interesting features in a large number of simulation traces. Formal verification based on temporal logic and model checking provides promising methods to automate and scale the analysis of the models. However, a framework that tightly integrates modeling and simulation tools with model checkers is currently missing, on both the conceptual and the implementational level. Results We have developed a generic and modular web service, based on a service-oriented architecture, for integrating the modeling and formal verification of genetic regulatory networks. The architecture has been implemented in the context of the qualitative modeling and simulation tool GNA and the model checkers NUSMV and CADP. GNA has been extended with a verification module for the specification and checking of biological properties. The verification module also allows the display and visual inspection of the verification results. Conclusions The practical use of the proposed web service is illustrated by means of a scenario involving the analysis of a qualitative model of the carbon starvation response in E. coli. The service-oriented architecture allows modelers to define the model and proceed with the specification and formal verification of the biological properties by means of a unified graphical user interface. This guarantees a transparent access to formal verification technology for modelers of genetic regulatory networks. PMID:20042075
Visual Predictive Check in Models with Time-Varying Input Function.
Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio
2015-11-01
Nonlinear mixed effects models are commonly used modeling techniques in pharmaceutical research as they enable the characterization of individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools that are able to evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that helps the user check by visual inspection whether the model is able to reproduce the variability and the main trend of the observed data. However, simulation from the model is not always trivial, for example when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause an incorrect profile simulation. We introduce a refinement of the VPC by taking into consideration a correlation term (the Mahalanobis or normalized Euclidean distance) that helps associate the correct IF with the individual set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC performs better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
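The matching step itself can be sketched as follows (parameter names and values are illustrative, and the surrounding VPC workflow is not reproduced): each simulated parameter set is paired with the input function of the individual whose estimated parameters are closest in normalized Euclidean distance.

```python
# Hedged sketch of the matching step only: each simulated parameter set is paired
# with the input function (IF) of the individual whose estimated parameters lie
# closest in normalized Euclidean distance. Parameter names and values are
# illustrative; the full VPC workflow around this step is not reproduced.
import numpy as np

def match_input_functions(simulated, individual, scale=None):
    """simulated: (n_sim, p) parameter sets; individual: (n_id, p) estimates.
    Returns, for each simulated set, the index of the individual whose IF to use."""
    sim = np.asarray(simulated, dtype=float)
    ind = np.asarray(individual, dtype=float)
    if scale is None:
        scale = ind.std(axis=0, ddof=1)           # per-parameter normalization
    d = np.linalg.norm((sim[:, None, :] - ind[None, :, :]) / scale, axis=2)
    return d.argmin(axis=1)

individual_params = np.array([[0.8, 12.0], [1.1, 15.0], [1.4, 9.0]])   # e.g. CL, V
simulated_params = np.array([[1.05, 14.2], [0.85, 11.5]])
print(match_input_functions(simulated_params, individual_params))       # -> [1 0]
```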
NASA Astrophysics Data System (ADS)
Rousseau, A. N.; Álvarez; Yu, X.; Savary, S.; Duffy, C.
2015-12-01
Most physically-based hydrological models simulate, to various extents, the relevant watershed processes occurring at different spatiotemporal scales. These models use different physical domain representations (e.g., hydrological response units, discretized control volumes) and numerical solution techniques (e.g., finite difference method, finite element method) as well as a variety of approximations for representing the physical processes. Despite the fact that several models have been developed so far, very few inter-comparison studies have been conducted to check, beyond streamflows, whether different modeling approaches simulate the other processes at the watershed scale in a similar fashion. In this study, PIHM (Qu and Duffy, 2007), a fully coupled, distributed model, and HYDROTEL (Fortin et al., 2001; Turcotte et al., 2003, 2007), a pseudo-coupled, semi-distributed model, were compared to check whether the models could corroborate observed streamflows while equally representing other processes such as evapotranspiration, snow accumulation/melt or infiltration. For this study, the Young Womans Creek watershed, PA, was used to compare: streamflows (channel routing), actual evapotranspiration, snow water equivalent (snow accumulation and melt), infiltration, recharge, shallow water depth above the soil surface (surface flow), lateral flow into the river (surface and subsurface flow) and height of the saturated soil column (subsurface flow). Despite a lack of observed data for contrasting most of the simulated processes, it can be said that the two models can be used as simulation tools for streamflows, actual evapotranspiration, infiltration, lateral flows into the river, and height of the saturated soil column. However, each process presents particular differences as a result of the physical parameters and the modeling approaches used by each model. These differences should be the object of further analyses to definitively confirm or reject modeling hypotheses.
The NASA Lewis integrated propulsion and flight control simulator
NASA Technical Reports Server (NTRS)
Bright, Michelle M.; Simon, Donald L.
1991-01-01
A new flight simulation facility has been developed at NASA Lewis to allow integrated propulsion-control and flight-control algorithm development and evaluation in real time. As a preliminary check of the simulator facility and the correct integration of its components, the control design and physics models for an STOVL fighter aircraft model have been demonstrated, with their associated system integration and architecture, pilot vehicle interfaces, and display symbology. The results show that this fixed-based flight simulator can provide real-time feedback and display of both airframe and propulsion variables for validation of integrated systems and testing of control design methodologies and cockpit mechanizations.
León, Larry F; Cai, Tianxi
2012-04-01
In this paper we develop model checking techniques for assessing functional form specifications of covariates in censored linear regression models. These procedures are based on a censored data analog to taking cumulative sums of "robust" residuals over the space of the covariate under investigation. These cumulative sums are formed by integrating certain Kaplan-Meier estimators and may be viewed as "robust" censored data analogs to the processes considered by Lin, Wei & Ying (2002). The null distributions of these stochastic processes can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be generated by computer simulation. Each observed process can then be graphically compared with a few realizations from the Gaussian process. We also develop formal test statistics for numerical comparison. Such comparisons enable one to assess objectively whether an apparent trend seen in a residual plot reflects model misspecification or natural variation. We illustrate the methods with a well known dataset. In addition, we examine the finite sample performance of the proposed test statistics in simulation experiments. In our simulation experiments, the proposed test statistics have good power for detecting misspecification while at the same time controlling the size of the test.
NASA Technical Reports Server (NTRS)
Izygon, Michel
1992-01-01
This report summarizes the findings and lessons learned from the development of an intelligent user interface for a space flight planning simulation program, in the specific area related to constraint-checking. The different functionalities of the Graphical User Interface part and of the rule-based part of the system have been identified. Their respective domain of applicability for error prevention and error checking have been specified.
Building Single-Cell Models of Planktonic Metabolism Using PSAMM
NASA Astrophysics Data System (ADS)
Dufault-Thompson, K.; Zhang, Y.; Steffensen, J. L.
2016-02-01
Genome-scale models (GEMs) of metabolic networks simulate the metabolic activities of individual cells by integrating omics data with biochemical and physiological measurements. GEMs have been applied in the simulation of various photo-, chemo-, and heterotrophic organisms and provide significant insights into the function and evolution of planktonic cells. Despite the quick accumulation of GEMs, challenges remain in assembling the individual cell-based models into community-level models. Among various problems, the lack of consistency in model representation and model quality checking has hindered the integration of individual GEMs and can lead to erroneous conclusions in the development of new modeling algorithms. Here, we present a Portable System for the Analysis of Metabolic Models (PSAMM). Along with the software, a novel format of model representation was developed to enhance the readability of model files and permit the inclusion of heterogeneous, model-specific annotation information. A number of quality checking procedures were also implemented in PSAMM to ensure stoichiometric balance and to identify unused reactions. Using a case study of Shewanella piezotolerans WP3, we demonstrated the application of PSAMM in simulating the coupling of carbon utilization and energy production pathways under low-temperature and high-pressure stress. Applying PSAMM, we have also analyzed over 50 GEMs in the current literature and released an updated collection of the models with corrections of a number of common inconsistencies. Overall, PSAMM opens up new opportunities for integrating individual GEMs for the construction and mathematical simulation of community-level models in the scope of entire ecosystems.
NASA Astrophysics Data System (ADS)
Liu, Lei; Huang, Chuanhui; Yu, Ping; Zhang, Lei
2017-10-01
To improve the dynamic and cavitation characteristics of a large-flow pilot operated check valve, the pilot poppet is taken as the research object: the working principle is analyzed and three different kinds of pilot poppets are designed. Their vibration and impact characteristics are analyzed. A simulation model is established with flow-field simulation software, and the cavitation characteristics of the large-flow pilot operated check valve are studied and discussed. On this basis, a high-pressure large-flow impact experimental system is used for impact experiments and the cavitation index is discussed; the optimal structure is then obtained. Simulation results indicate that increasing the pilot poppet half cone angle can effectively reduce the cavitation area, reducing the generation of cavitation. Experimental results show that the pressure impact does not decrease with increasing pilot poppet half cone angle during unloading, but the unloading capacity and response speed are positively correlated with the pilot poppet half cone angle. The 60° pilot poppet shows good impact characteristics and a lower cavitation index, which indicates that the 60° pilot poppet is the optimal structure; the experimental and theoretical results are basically in agreement.
Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization
NASA Astrophysics Data System (ADS)
Lee, Kyungbook; Song, Seok Goo
2017-09-01
Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying the nonparametric co-regionalization, which was proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and, therefore, enables us to simulate numerous rupture scenarios, including large events ( M > 7.0). It also gives us an opportunity to check the shape of true input correlation models in stochastic modeling after being deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.
Check-Cases for Verification of 6-Degree-of-Freedom Flight Vehicle Simulations
NASA Technical Reports Server (NTRS)
Murri, Daniel G.; Jackson, E. Bruce; Shelton, Robert O.
2015-01-01
The rise of innovative unmanned aeronautical systems and the emergence of commercial space activities have resulted in a number of relatively new aerospace organizations that are designing innovative systems and solutions. These organizations use a variety of commercial off-the-shelf and in-house-developed simulation and analysis tools, including 6-degree-of-freedom (6-DOF) flight simulation tools. The increased affordability of computing capability has made high-fidelity flight simulation practical for all participants. Verification of the tools' equations-of-motion and environment models (e.g., atmosphere, gravitation, and geodesy) is desirable to assure accuracy of results. However, aside from simple textbook examples, minimal verification data exists in the open literature for 6-DOF flight simulation problems. This assessment compared multiple solution trajectories to a set of verification check-cases that covered atmospheric and exo-atmospheric (i.e., orbital) flight. Each scenario consisted of predefined flight vehicles, initial conditions, and maneuvers. These scenarios were implemented and executed in a variety of analytical and real-time simulation tools. This tool-set included simulation tools in a variety of programming languages based on modified flat-Earth, round-Earth, and rotating oblate spheroidal Earth geodesy and gravitation models, and independently derived equations-of-motion and propagation techniques. The resulting simulated parameter trajectories were compared by over-plotting and difference-plotting to yield a family of solutions. In total, seven simulation tools were exercised.
ALC: automated reduction of rule-based models
Koschorreck, Markus; Gilles, Ernst Dieter
2008-01-01
Background Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction since the association of a few proteins can give rise to an enormous amount of feasible protein complexes. The layer-based approach is an approximative, but accurate method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results ALC (Automated Layer Construction) is a computer program that highly simplifies the building of reduced modular models, according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files. PMID:18973705
NASA Astrophysics Data System (ADS)
Chen, Yang; Wang, Huasheng; Xia, Jixia; Cai, Guobiao; Zhang, Zhenpeng
2017-04-01
For the pressure reducing regulator and check valve double-valve combined test system in an integral bipropellant propulsion system, a system model is established with modular models of various typical components. The simulation research is conducted on the whole working process of an experiment of 9 MPa working condition from startup to rated working condition and finally to shutdown. Comparison of simulation results with test data shows: five working conditions including standby, startup, rated pressurization, shutdown and halt and nine stages of the combined test system are comprehensively disclosed; valve-spool opening and closing details of the regulator and two check valves are accurately revealed; the simulation also clarifies two phenomena which test data are unable to clarify, one is the critical opening state in which the check valve spools slightly open and close alternately in their own fully closed positions, the other is the obvious effects of flow-field temperature drop and temperature rise in pipeline network with helium gas flowing. Moreover, simulation results with consideration of component wall heat transfer are closer to the test data than those under the adiabatic-wall condition, and more able to reveal the dynamic characteristics of the system in various working stages.
The Automation of Nowcast Model Assessment Processes
2016-09-01
that will automate real-time WRE-N model simulations, collect and quality-control check weather observations for assimilation and verification, and...domains centered near White Sands Missile Range, New Mexico, where the Meteorological Sensor Array (MSA) will be located. The MSA will provide...observations and performing quality-control checks for the pre-forecast data assimilation period. 2. Run the WRE-N model to generate model forecast data
Modelling and study of active vibration control for off-road vehicle
NASA Astrophysics Data System (ADS)
Zhang, Junwei; Chen, Sizhong
2014-05-01
Owing to their special working characteristics and structure, engineering machines typically do not have a conventional suspension system. Consequently, operators have to endure severe vibrations that are detrimental both to their health and to the productivity of the loader. Based on displacement control, an active damping method is developed for a skid-steer loader. In this paper, the whole hydraulic system for the active damping method is modelled, including a swash plate dynamics model, proportional valve model, piston accumulator model, pilot-operated check valve model, relief valve model, pump loss model, and cylinder model. A new road excitation model is developed specifically for the skid-steer loader. The response of chassis vibration acceleration to road excitation is verified through simulation. The simulation result for passive accumulator damping is compared with measurements, and the comparison shows that they are close. On this basis, a parallel PID controller and a tracking PID controller with acceleration feedback are brought into the simulation model, and the simulation results are compared with passive accumulator damping. The results show that the active damping methods with PID controllers are better at reducing chassis vibration acceleration and pitch movement. Finally, experimental testing of the active damping method is proposed as future work.
The NASA Lewis integrated propulsion and flight control simulator
NASA Technical Reports Server (NTRS)
Bright, Michelle M.; Simon, Donald L.
1991-01-01
A new flight simulation facility was developed at NASA-Lewis. The purpose of this flight simulator is to allow integrated propulsion control and flight control algorithm development and evaluation in real time. As a preliminary check of the simulator facility capabilities and of the correct integration of its components, the control design and physics models for a short take-off and vertical landing fighter aircraft model are described, along with their associated system integration and architecture, pilot-vehicle interfaces, and display symbology. The initial testing and evaluation results show that this fixed-base flight simulator can provide real-time feedback and display of both airframe and propulsion variables for validation of integrated flight and propulsion control systems. Additionally, through the use of this flight simulator, various control design methodologies and cockpit mechanizations can be tested and evaluated in a real-time environment.
Symbolic LTL Compilation for Model Checking: Extended Abstract
NASA Technical Reports Server (NTRS)
Rozier, Kristin Y.; Vardi, Moshe Y.
2007-01-01
In Linear Temporal Logic (LTL) model checking, we check LTL formulas representing desired behaviors against a formal model of the system designed to exhibit these behaviors. To accomplish this task, the LTL formulas must be translated into automata [21]. We focus on LTL compilation by investigating LTL satisfiability checking via a reduction to model checking. Having shown that symbolic LTL compilation algorithms are superior to explicit automata construction algorithms for this task [16], we concentrate here on seeking a better symbolic algorithm. We present experimental data comparing algorithmic variations such as normal forms, encoding methods, and variable ordering and examine their effects on performance metrics including processing time and scalability. Safety critical systems, such as air traffic control, life support systems, hazardous environment controls, and automotive control systems, pervade our daily lives, yet testing and simulation alone cannot adequately verify their reliability [3]. Model checking is a promising approach to formal verification for safety critical systems which involves creating a formal mathematical model of the system and translating desired safety properties into a formal specification for this model. The complement of the specification is then checked against the system model. When the model does not satisfy the specification, model-checking tools accompany this negative answer with a counterexample, which points to an inconsistency between the system and the desired behaviors and aids debugging efforts.
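The satisfiability-via-model-checking reduction mentioned above can be summarized in a few lines. The sketch below is only illustrative: model_check and universal_model are hypothetical stand-ins for a real symbolic checker and for a model that permits every behaviour over the formula's atomic propositions.

```python
# Illustrative sketch of the reduction: a formula f is satisfiable iff the
# "universal" model (which allows every behaviour over f's atomic propositions)
# does NOT satisfy the negation of f. `model_check` and `universal_model` are
# hypothetical interfaces standing in for a real symbolic model checker.
def is_satisfiable(formula, atomic_propositions, model_check, universal_model):
    """Return True iff `formula` has at least one satisfying infinite trace."""
    negated = f"!({formula})"
    # If the universal model violates !f, the counterexample trace satisfies f.
    holds = model_check(universal_model(atomic_propositions), negated)
    return not holds
```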
Microscopic analysis and simulation of check-mark stain on the galvanized steel strip
NASA Astrophysics Data System (ADS)
So, Hongyun; Yoon, Hyun Gi; Chung, Myung Kyoon
2010-11-01
When galvanized steel strip is produced through a continuous hot-dip galvanizing process, the thickness of the adhered zinc film is controlled by a plane impinging air jet referred to as the "air-knife system". In such a gas-jet wiping process, a stain of check-mark or sag-line shape frequently appears. The check-mark defect consists of non-uniform zinc coating with oblique patterns such as "W", "V" or "X" on the coated surface. The present paper presents a cause and analysis of the check-mark formation and a numerical simulation of sag lines using data produced by Large Eddy Simulation (LES) of the three-dimensional compressible turbulent flow field around the air-knife system. It was found that there are alternating plane-wise vortices near the impinging stagnation region and that such alternating vortices move almost periodically to the right and to the left along the stagnation line due to jet flow instability. Meanwhile, in order to simulate the check-mark formation, a novel perturbation model has been developed to predict the variation of coating thickness along the transverse direction. Finally, the three-dimensional zinc coating surface was obtained with the present perturbation model. It was found that the sag-line formation is determined by the combination of the instantaneous coating-thickness distribution along the transverse direction near the stagnation line and the feed speed of the steel strip.
14 CFR 135.337 - Qualifications: Check airmen (aircraft) and check airmen (simulator).
Code of Federal Regulations, 2011 CFR
2011-01-01
... who is qualified to conduct flight checks in an aircraft, in a flight simulator, or in a flight... to conduct flight checks, but only in a flight simulator, in a flight training device, or both, for a... the 12-month preceding the performance of any check airman duty in a flight simulator; or (2...
14 CFR 135.337 - Qualifications: Check airmen (aircraft) and check airmen (simulator).
Code of Federal Regulations, 2014 CFR
2014-01-01
... who is qualified to conduct flight checks in an aircraft, in a flight simulator, or in a flight... to conduct flight checks, but only in a flight simulator, in a flight training device, or both, for a... the 12-month preceding the performance of any check airman duty in a flight simulator; or (2...
14 CFR 135.337 - Qualifications: Check airmen (aircraft) and check airmen (simulator).
Code of Federal Regulations, 2012 CFR
2012-01-01
... who is qualified to conduct flight checks in an aircraft, in a flight simulator, or in a flight... to conduct flight checks, but only in a flight simulator, in a flight training device, or both, for a... the 12-month preceding the performance of any check airman duty in a flight simulator; or (2...
14 CFR 135.337 - Qualifications: Check airmen (aircraft) and check airmen (simulator).
Code of Federal Regulations, 2013 CFR
2013-01-01
... who is qualified to conduct flight checks in an aircraft, in a flight simulator, or in a flight... to conduct flight checks, but only in a flight simulator, in a flight training device, or both, for a... the 12-month preceding the performance of any check airman duty in a flight simulator; or (2...
NASA Astrophysics Data System (ADS)
Curcó, David; Casanovas, Jordi; Roca, Marc; Alemán, Carlos
2005-07-01
A method for generating atomistic models of dense amorphous polymers is presented. The method is organized as a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, a relaxation algorithm is applied to minimize the non-bonding interactions. Two alternative relaxation methods, based on simple minimization and on Concerted Rotation techniques, have been implemented. The performance of the method has been checked by simulating polyethylene, polypropylene, nylon 6, poly(L,D-lactic acid) and polyglycolic acid.
An experimental method to verify soil conservation by check dams on the Loess Plateau, China.
Xu, X Z; Zhang, H W; Wang, G Q; Chen, S C; Dang, W Q
2009-12-01
A successful experiment with a physical model requires the necessary conditions of similarity. This study presents an experimental method using a semi-scale physical model. The model is used to monitor and verify soil conservation by check dams in a small watershed on the Loess Plateau of China. During the experiments, the model-prototype ratio of geomorphic variables was kept constant for each rainfall event. Consequently, the experimental data are available for verification of soil erosion processes in the field and for predicting soil loss in a model watershed with check dams, so the amount of soil loss in a catchment can be predicted. This study also proposes four similarity criteria: watershed geometry, grain size and bare land, the Froude number (Fr) for rainfall events, and soil erosion in downscaled models. The efficacy of the proposed method was confirmed using these criteria in two different downscaled model experiments. The B-Model, a large-scale model, simulates the watershed prototype. The two small-scale models, D(a) and D(b), have different erosion rates but the same size; they simulate the hydraulic processes in the B-Model. Experimental results show that when soil loss in the small-scale models was converted by multiplying by the soil-loss scale number, it was very close to that of the B-Model. Thus, with a semi-scale physical model, experiments can be used to verify and predict soil loss in a small watershed with a check dam system on the Loess Plateau, China.
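Since the study above relies on matching the Froude number between the downscaled models and the B-Model, a short worked note (illustrative only; the study's exact scaling relations are not reproduced here) makes the implied velocity and time scaling explicit:

```latex
% Illustrative Froude-similarity scaling; lambda is the model/prototype length ratio.
\[
  \mathrm{Fr} = \frac{v}{\sqrt{g\,L}}, \qquad
  \mathrm{Fr}_m = \mathrm{Fr}_p
  \;\Longrightarrow\;
  \frac{v_m}{v_p} = \sqrt{\frac{L_m}{L_p}} = \sqrt{\lambda},
  \qquad
  \frac{t_m}{t_p} = \frac{L_m/v_m}{L_p/v_p} = \sqrt{\lambda}.
\]
```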
14 CFR 91.1089 - Qualifications: Check pilots (aircraft) and check pilots (simulator).
Code of Federal Regulations, 2013 CFR
2013-01-01
... simulator, or in a flight training device for a particular type aircraft. (2) A check pilot (simulator) is a person who is qualified to conduct flight checks, but only in a flight simulator, in a flight training... (simulator) must accomplish the following— (1) Fly at least two flight segments as a required crewmember for...
14 CFR 91.1089 - Qualifications: Check pilots (aircraft) and check pilots (simulator).
Code of Federal Regulations, 2014 CFR
2014-01-01
... simulator, or in a flight training device for a particular type aircraft. (2) A check pilot (simulator) is a person who is qualified to conduct flight checks, but only in a flight simulator, in a flight training... (simulator) must accomplish the following— (1) Fly at least two flight segments as a required crewmember for...
14 CFR 91.1089 - Qualifications: Check pilots (aircraft) and check pilots (simulator).
Code of Federal Regulations, 2012 CFR
2012-01-01
... simulator, or in a flight training device for a particular type aircraft. (2) A check pilot (simulator) is a person who is qualified to conduct flight checks, but only in a flight simulator, in a flight training... (simulator) must accomplish the following— (1) Fly at least two flight segments as a required crewmember for...
Code of Federal Regulations, 2012 CFR
2012-01-01
... observation check may be accomplished in part or in full in an aircraft, in a flight simulator, or in a flight... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for a check pilot (simulator) must include the...
Code of Federal Regulations, 2011 CFR
2011-01-01
... observation check may be accomplished in part or in full in an aircraft, in a flight simulator, or in a flight... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for a check pilot (simulator) must include the...
Code of Federal Regulations, 2010 CFR
2010-01-01
... observation check may be accomplished in part or in full in an aircraft, in a flight simulator, or in a flight... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for a check pilot (simulator) must include the...
Code of Federal Regulations, 2013 CFR
2013-01-01
... observation check may be accomplished in part or in full in an aircraft, in a flight simulator, or in a flight... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for a check pilot (simulator) must include the...
Code of Federal Regulations, 2014 CFR
2014-01-01
... observation check may be accomplished in part or in full in an aircraft, in a flight simulator, or in a flight... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for a check pilot (simulator) must include the...
The impact of joint responses of devices in an airport security system.
Nie, Xiaofeng; Batta, Rajan; Drury, Colin G; Lin, Li
2009-02-01
In this article, we consider a model for an airport security system in which the declaration of a threat is based on the joint responses of inspection devices. This is in contrast to the typical system in which each check station independently declares a passenger as having a threat or not having a threat. In our framework the declaration of threat/no-threat is based upon the passenger scores at the check stations he/she goes through. To do this we use concepts from classification theory in the field of multivariate statistics analysis and focus on the main objective of minimizing the expected cost of misclassification. The corresponding correct classification and misclassification probabilities can be obtained by using a simulation-based method. After computing the overall false alarm and false clear probabilities, we compare our joint response system with two other independently operated systems. A model that groups passengers in a manner that minimizes the false alarm probability while maintaining the false clear probability within specifications set by a security authority is considered. We also analyze the staffing needs at each check station for such an inspection scheme. An illustrative example is provided along with sensitivity analysis on key model parameters. A discussion is provided on some implementation issues, on the various assumptions made in the analysis, and on potential drawbacks of the approach.
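A minimal Monte Carlo sketch of the contrast described above, between a joint-response declaration rule and independently operated check stations, is given below; the score distributions, correlation, and thresholds are hypothetical illustrations, not the paper's parameters.

```python
# Minimal simulation sketch: two check stations produce correlated scores; the
# threat declaration is based on the joint scores rather than on independent
# per-station thresholds. All parameters below are hypothetical illustrations.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

mean_ok, mean_threat = np.array([0.0, 0.0]), np.array([2.0, 2.0])
cov = np.array([[1.0, 0.3], [0.3, 1.0]])

scores_ok = rng.multivariate_normal(mean_ok, cov, n)
scores_threat = rng.multivariate_normal(mean_threat, cov, n)

# Joint rule: declare a threat if the summed score exceeds a single threshold.
joint_threshold = 2.5
false_alarm_joint = np.mean(scores_ok.sum(axis=1) > joint_threshold)
false_clear_joint = np.mean(scores_threat.sum(axis=1) <= joint_threshold)

# Independent rule: declare a threat if either station exceeds its own threshold.
station_threshold = 1.8
false_alarm_indep = np.mean((scores_ok > station_threshold).any(axis=1))
false_clear_indep = np.mean(~(scores_threat > station_threshold).any(axis=1))

print(f"joint:       FA={false_alarm_joint:.4f}  FC={false_clear_joint:.4f}")
print(f"independent: FA={false_alarm_indep:.4f}  FC={false_clear_indep:.4f}")
```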
Simulation-Based Model Checking for Nondeterministic Systems and Rare Events
2016-03-24
year, we have investigated AO* search and Monte Carlo Tree Search algorithms to complement and enhance CMU's SMCMDP. 1 Final Report, March 14... tree, so we can use it to find the probability of reachability for a property in PRISM's Probabilistic LTL. By finding the maximum probability of...savings, particularly when handling very large models. 2.3 Monte Carlo Tree Search The Monte Carlo sampling process in SMCMDP can take a long time to
An object-oriented software for fate and exposure assessments.
Scheil, S; Baumgarten, G; Reiter, B; Schwartz, S; Wagner, J O; Trapp, S; Matthies, M
1995-07-01
The model system CemoS(1) (Chemical Exposure Model System) was developed for exposure prediction of hazardous chemicals released to the environment. Eight different models were implemented, covering chemical fate simulation in air, water, soil and plants after continuous or single emissions from point and diffuse sources. Scenario studies are supported by a substance database and an environmental database. All input data are checked for plausibility. Substance and environmental process estimation functions facilitate generic model calculations. CemoS is implemented in a modular structure using object-oriented programming.
PyNN: A Common Interface for Neuronal Network Simulators.
Davison, Andrew P; Brüderle, Daniel; Eppler, Jochen; Kremkow, Jens; Muller, Eilif; Pecevski, Dejan; Perrinet, Laurent; Yger, Pierre
2008-01-01
Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN.
PyNN: A Common Interface for Neuronal Network Simulators
Davison, Andrew P.; Brüderle, Daniel; Eppler, Jochen; Kremkow, Jens; Muller, Eilif; Pecevski, Dejan; Perrinet, Laurent; Yger, Pierre
2008-01-01
Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or configuration language, leading to considerable difficulty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. On the other hand, simulation results can be cross-checked between different simulators, giving greater confidence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefits. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modification on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN. PMID:19194529
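A minimal script in the spirit of the PyNN interface described above is sketched below; parameter values are illustrative, and exact API details depend on the installed PyNN version and backend.

```python
# Minimal sketch: the same PyNN script can target different backends by changing
# one import line. Parameter values are illustrative placeholders.
import pyNN.nest as sim   # could equally be pyNN.neuron, etc.

sim.setup(timestep=0.1)

excitatory = sim.Population(100, sim.IF_cond_exp(tau_m=20.0), label="exc")
stimulus = sim.Population(100, sim.SpikeSourcePoisson(rate=20.0), label="noise")

sim.Projection(stimulus, excitatory,
               sim.OneToOneConnector(),
               sim.StaticSynapse(weight=0.01, delay=1.0))

excitatory.record("spikes")
sim.run(1000.0)                     # simulate 1 s of biological time

spikes = excitatory.get_data("spikes")
sim.end()
```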
Accurate Simulation of MPPT Methods Performance When Applied to Commercial Photovoltaic Panels
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions. PMID:25874262
Accurate simulation of MPPT methods performance when applied to commercial photovoltaic panels.
Cubas, Javier; Pindado, Santiago; Sanz-Andrés, Ángel
2015-01-01
A new, simple, and quick-calculation methodology to obtain a solar panel model, based on the manufacturers' datasheet, to perform MPPT simulations, is described. The method takes into account variations on the ambient conditions (sun irradiation and solar cells temperature) and allows fast MPPT methods comparison or their performance prediction when applied to a particular solar panel. The feasibility of the described methodology is checked with four different MPPT methods applied to a commercial solar panel, within a day, and under realistic ambient conditions.
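To make the datasheet-based panel modelling and MPPT simulation concrete, here is a minimal sketch assuming an idealized single-diode panel model driven by a basic perturb-and-observe loop; all panel parameters are hypothetical placeholders rather than values from the paper.

```python
# Illustrative sketch (not the paper's method): an idealized single-diode panel
# model and a perturb-and-observe (P&O) MPPT loop. Panel parameters are
# hypothetical placeholder values.
import numpy as np

K_B, Q = 1.380649e-23, 1.602176634e-19

def panel_current(v, i_ph=8.0, i_0=1e-9, n=1.3, n_cells=60, temp_k=298.15):
    """Ideal single-diode model (series/shunt resistances neglected)."""
    v_t = n * n_cells * K_B * temp_k / Q
    return i_ph - i_0 * np.expm1(v / v_t)

def perturb_and_observe(v0=20.0, step=0.2, iterations=200):
    v, direction = v0, +1.0
    p_prev = v * panel_current(v)
    for _ in range(iterations):
        v += direction * step
        p = v * panel_current(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"estimated MPP: V = {v_mpp:.2f} V, P = {p_mpp:.1f} W")
```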
Code of Federal Regulations, 2011 CFR
2011-01-01
... an aircraft, in a flight simulator, or in a flight training device. This paragraph applies after... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for check airmen (simulator) must include the...
Code of Federal Regulations, 2010 CFR
2010-01-01
... an aircraft, in a flight simulator, or in a flight training device. This paragraph applies after... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for check airmen (simulator) must include the...
Code of Federal Regulations, 2014 CFR
2014-01-01
... an aircraft, in a flight simulator, or in a flight training device. This paragraph applies after... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for check airmen (simulator) must include the...
Code of Federal Regulations, 2013 CFR
2013-01-01
... an aircraft, in a flight simulator, or in a flight training device. This paragraph applies after... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for check airmen (simulator) must include the...
Code of Federal Regulations, 2012 CFR
2012-01-01
... an aircraft, in a flight simulator, or in a flight training device. This paragraph applies after... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for check airmen (simulator) must include the...
An approach to checking case-crossover analyses based on equivalence with time-series methods.
Lu, Yun; Symons, James Morel; Geyh, Alison S; Zeger, Scott L
2008-03-01
The case-crossover design has been increasingly applied to epidemiologic investigations of acute adverse health effects associated with ambient air pollution. The correspondence of the design to that of matched case-control studies makes it inferentially appealing for epidemiologic studies. Case-crossover analyses generally use conditional logistic regression modeling. This technique is equivalent to time-series log-linear regression models when there is a common exposure across individuals, as in air pollution studies. Previous methods for obtaining unbiased estimates for case-crossover analyses have assumed that time-varying risk factors are constant within reference windows. In this paper, we rely on the connection between case-crossover and time-series methods to illustrate model-checking procedures from log-linear model diagnostics for time-stratified case-crossover analyses. Additionally, we compare the relative performance of the time-stratified case-crossover approach to time-series methods under 3 simulated scenarios representing different temporal patterns of daily mortality associated with air pollution in Chicago, Illinois, during 1995 and 1996. Whenever a model-be it time-series or case-crossover-fails to account appropriately for fluctuations in time that confound the exposure, the effect estimate will be biased. It is therefore important to perform model-checking in time-stratified case-crossover analyses rather than assume the estimator is unbiased.
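A minimal sketch of the time-series side of the equivalence discussed above is shown below: a log-linear (Poisson) regression with a smooth time term, followed by a simple residual check; the data file, column names, and spline degrees of freedom are hypothetical.

```python
# Minimal sketch: Poisson (log-linear) time-series regression of daily counts on
# a common exposure plus a smooth function of time, with a simple residual check.
# Data file, columns, and spline df are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import patsy

df = pd.read_csv("daily_mortality.csv")   # hypothetical columns: deaths, pm10, day

# Model matrix: pollutant exposure plus a natural cubic spline in time to absorb
# seasonality and long-term trend (the time-varying confounding discussed above).
y, X = patsy.dmatrices("deaths ~ pm10 + cr(day, df=12)", df, return_type="dataframe")

fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print(fit.summary())

# Model checking: deviance residuals should show no remaining temporal structure.
resid = np.asarray(fit.resid_deviance)
print("autocorrelation at lag 1:", np.corrcoef(resid[:-1], resid[1:])[0, 1])
```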
DOE Office of Scientific and Technical Information (OSTI.GOV)
Potter, Kristin C; Brunhart-Lupo, Nicholas J; Bush, Brian W
We have developed a framework for the exploration, design, and planning of energy systems that combines interactive visualization with machine-learning based approximations of simulations through a general purpose dataflow API. Our system provides a visual interface allowing users to explore an ensemble of energy simulations representing a subset of the complex input parameter space, and spawn new simulations to 'fill in' input regions corresponding to new energy system scenarios. Unfortunately, many energy simulations are far too slow to provide interactive responses. To support interactive feedback, we are developing reduced-form models via machine learning techniques, which provide statistically sound estimates of the full simulations at a fraction of the computational cost and which are used as proxies for the full-form models. Fast computation and an agile dataflow enhance engagement with energy simulations, and allow researchers to better allocate computational resources to capture informative relationships within the system and provide a low-cost method for validating and quality-checking large-scale modeling efforts.
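The reduced-form (surrogate) idea described above can be sketched as follows, assuming a generic regression surrogate trained on an ensemble of completed runs; the ensemble file, input columns, and model choice are hypothetical illustrations rather than the framework's actual dataflow API.

```python
# Minimal sketch: fit a fast statistical surrogate to an ensemble of expensive
# simulation runs and use it for interactive queries. File and columns are
# hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

ensemble = pd.read_csv("simulation_ensemble.csv")      # one row per completed run
inputs = ensemble[["capacity_gw", "fuel_price", "demand_growth"]]
output = ensemble["system_cost"]

X_train, X_test, y_train, y_test = train_test_split(inputs, output, random_state=0)

surrogate = RandomForestRegressor(n_estimators=300, random_state=0)
surrogate.fit(X_train, y_train)

# Quality check of the proxy before trusting it for interactive exploration.
print("held-out R^2:", surrogate.score(X_test, y_test))

# Interactive query: milliseconds instead of a full simulation run.
scenario = pd.DataFrame([{"capacity_gw": 12.0, "fuel_price": 3.1, "demand_growth": 0.02}])
print("estimated system cost:", surrogate.predict(scenario)[0])
```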
Safety Verification of a Fault Tolerant Reconfigurable Autonomous Goal-Based Robotic Control System
NASA Technical Reports Server (NTRS)
Braman, Julia M. B.; Murray, Richard M; Wagner, David A.
2007-01-01
Fault tolerance and safety verification of control systems are essential for the success of autonomous robotic systems. A control architecture called Mission Data System (MDS), developed at the Jet Propulsion Laboratory, takes a goal-based control approach. In this paper, a method for converting goal network control programs into linear hybrid systems is developed. The linear hybrid system can then be verified for safety in the presence of failures using existing symbolic model checkers. An example task is simulated in MDS and successfully verified using HyTech, a symbolic model checking software for linear hybrid systems.
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
A mathematical model of a high performance airplane capable of vertical attitude takeoff and landing (VATOL) was developed. An off-line digital simulation program incorporating this model was developed to provide trim conditions and dynamic check runs for the piloted simulation studies and to support dynamic analyses of proposed VATOL configurations and flight control concepts. Development details for the various simulation component models, and the application of the off-line simulation program, Vertical Attitude Take-Off and Landing Simulation (VATLAS), to develop a baseline control system for the Vought SF-121 VATOL airplane concept, are described.
14 CFR 121.411 - Qualifications: Check airmen (airplane) and check airmen (simulator).
Code of Federal Regulations, 2014 CFR
2014-01-01
... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Qualifications: Check airmen (airplane) and... § 121.411 Qualifications: Check airmen (airplane) and check airmen (simulator)... airman (airplane) is a person who is qualified, and permitted, to conduct flight checks or instruction in...
Qin, Heng; Zuo, Yong; Zhang, Dong; Li, Yinghui; Wu, Jian
2017-03-06
Through a slight modification of typical photomultiplier tube (PMT) receiver output statistics, a generalized received-response model considering both scattered propagation and random detection is presented to investigate the impact of inter-symbol interference (ISI) on the link data rate of short-range non-line-of-sight (NLOS) ultraviolet communication. Good agreement with experimental results is shown by numerical simulation. Based on the received-response characteristics, a heuristic check-matrix construction algorithm for low-density parity-check (LDPC) codes is further proposed to approach the data-rate bound derived for a delayed-sampling (DS) binary pulse position modulation (PPM) system. Compared to conventional LDPC coding methods, a better bit error ratio (BER), below 1E-05, is achieved for short-range NLOS UVC systems operating at a data rate of 2 Mbps.
NASA Astrophysics Data System (ADS)
Torghabeh, A. A.; Tousi, A. M.
2007-08-01
This paper presents a fuzzy logic and neural network approach to gas turbine fuel schedules. Modeling of the non-linear system with feed-forward artificial neural networks, using data generated by a simulated gas turbine program, is introduced. Two artificial neural networks are used, depicting the non-linear relationships between gas generator speed and fuel flow, and between turbine inlet temperature and fuel flow, respectively. Off-line fast simulations are used for engine controller design for a turbojet engine based on repeated simulation. The Mamdani and Sugeno models are used to express the fuzzy system. The linguistic fuzzy rules and membership functions are presented, and a fuzzy controller is proposed to provide open-loop control of the gas turbine engine during acceleration and deceleration. MATLAB Simulink was used to apply the fuzzy logic and neural network analysis. Both systems were able to approximate the functions characterizing the acceleration and deceleration schedules. Surge and flame-out avoidance during the acceleration and deceleration phases are then checked. Turbine inlet temperature is also checked and controlled by the neural network controller. The outputs of the fuzzy logic and neural network controllers are validated and evaluated with GSP software. The validation results are used to evaluate the generalization ability of these artificial neural network and fuzzy logic controllers.
Towards Symbolic Model Checking for Multi-Agent Systems via OBDDs
NASA Technical Reports Server (NTRS)
Raimondi, Franco; Lomunscio, Alessio
2004-01-01
We present an algorithm for model checking temporal-epistemic properties of multi-agent systems, expressed in the formalism of interpreted systems. We first introduce a technique for the translation of interpreted systems into boolean formulae, and then present a model-checking algorithm based on this translation. The algorithm is based on OBDD's, as they offer a compact and efficient representation for boolean formulae.
Center for Advanced Modeling and Simulation Intern
Gertman, Vanessa
2017-12-13
Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.
Center for Advanced Modeling and Simulation Intern
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gertman, Vanessa
Some interns just copy papers and seal envelopes. Not at INL! Check out how Vanessa Gertman, an INL intern working at the Center for Advanced Modeling and Simulation, spent her summer working with some intense visualization software. Lots more content like this is available at INL's facebook page http://www.facebook.com/idahonationallaboratory.
14 CFR 121.411 - Qualifications: Check airmen (airplane) and check airmen (simulator).
Code of Federal Regulations, 2012 CFR
2012-01-01
... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Qualifications: Check airmen (airplane) and... § 121.411 Qualifications: Check airmen (airplane) and check airmen (simulator). (a) For the purposes of this section and § 121.413: (1) A check airman (airplane) is a person who is qualified, and permitted...
14 CFR 121.411 - Qualifications: Check airmen (airplane) and check airmen (simulator).
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Qualifications: Check airmen (airplane) and... § 121.411 Qualifications: Check airmen (airplane) and check airmen (simulator). (a) For the purposes of this section and § 121.413: (1) A check airman (airplane) is a person who is qualified, and permitted...
14 CFR 121.411 - Qualifications: Check airmen (airplane) and check airmen (simulator).
Code of Federal Regulations, 2013 CFR
2013-01-01
... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Qualifications: Check airmen (airplane) and... § 121.411 Qualifications: Check airmen (airplane) and check airmen (simulator). (a) For the purposes of this section and § 121.413: (1) A check airman (airplane) is a person who is qualified, and permitted...
14 CFR 121.411 - Qualifications: Check airmen (airplane) and check airmen (simulator).
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Qualifications: Check airmen (airplane) and... § 121.411 Qualifications: Check airmen (airplane) and check airmen (simulator). (a) For the purposes of this section and § 121.413: (1) A check airman (airplane) is a person who is qualified, and permitted...
14 CFR 91.1089 - Qualifications: Check pilots (aircraft) and check pilots (simulator).
Code of Federal Regulations, 2011 CFR
2011-01-01
... 14 Aeronautics and Space 2 2011-01-01 2011-01-01 false Qualifications: Check pilots (aircraft) and check pilots (simulator). 91.1089 Section 91.1089 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION... RULES Fractional Ownership Operations Program Management § 91.1089 Qualifications: Check pilots...
14 CFR 91.1089 - Qualifications: Check pilots (aircraft) and check pilots (simulator).
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Qualifications: Check pilots (aircraft) and check pilots (simulator). 91.1089 Section 91.1089 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION... RULES Fractional Ownership Operations Program Management § 91.1089 Qualifications: Check pilots...
Code of Federal Regulations, 2012 CFR
2012-01-01
... accomplished in part or in full in an airplane, in a flight simulator, or in a flight training device. This... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for check airmen (simulator) must include the...
Code of Federal Regulations, 2013 CFR
2013-01-01
... accomplished in part or in full in an airplane, in a flight simulator, or in a flight training device. This... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for check airmen (simulator) must include the...
Code of Federal Regulations, 2010 CFR
2010-01-01
... accomplished in part or in full in an airplane, in a flight simulator, or in a flight training device. This... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for check airmen (simulator) must include the...
Code of Federal Regulations, 2011 CFR
2011-01-01
... accomplished in part or in full in an airplane, in a flight simulator, or in a flight training device. This... accomplished in full or in part in flight, in a flight simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training for check airmen (simulator) must include the...
Verifying Multi-Agent Systems via Unbounded Model Checking
NASA Technical Reports Server (NTRS)
Kacprzak, M.; Lomuscio, A.; Lasica, T.; Penczek, W.; Szreter, M.
2004-01-01
We present an approach to the problem of verification of epistemic properties in multi-agent systems by means of symbolic model checking. In particular, it is shown how to extend the technique of unbounded model checking from a purely temporal setting to a temporal-epistemic one. In order to achieve this, we base our discussion on interpreted systems semantics, a popular semantics used in multi-agent systems literature. We give details of the technique and show how it can be applied to the well known train, gate and controller problem. Keywords: model checking, unbounded model checking, multi-agent systems
Heartbeat-based error diagnosis framework for distributed embedded systems
NASA Astrophysics Data System (ADS)
Mishra, Swagat; Khilar, Pabitra Mohan
2012-01-01
Distributed embedded systems have significant applications in the automobile industry as steer-by-wire, fly-by-wire and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real-time system. We use heartbeat monitoring, checkpointing and model-based redundancy to design a scalable framework that takes care of task scheduling, temperature control and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosing and shutting down faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message passing system.
Heartbeat-based error diagnosis framework for distributed embedded systems
NASA Astrophysics Data System (ADS)
Mishra, Swagat; Khilar, Pabitra Mohan
2011-12-01
Distributed embedded systems have significant applications in the automobile industry as steer-by-wire, fly-by-wire and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real-time system. We use heartbeat monitoring, checkpointing and model-based redundancy to design a scalable framework that takes care of task scheduling, temperature control and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosing and shutting down faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message passing system.
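A minimal sketch of the heartbeat-monitoring component described in the two entries above is given below; the node names and timeout value are hypothetical.

```python
# Minimal sketch: each node periodically reports a heartbeat, and a monitor flags
# nodes whose last heartbeat is older than a timeout. Constants are hypothetical.
import time

HEARTBEAT_TIMEOUT = 0.5   # seconds; would be tuned to the bus/network period

class HeartbeatMonitor:
    def __init__(self, node_ids):
        now = time.monotonic()
        self.last_seen = {node: now for node in node_ids}

    def record_heartbeat(self, node_id):
        self.last_seen[node_id] = time.monotonic()

    def faulty_nodes(self):
        """Nodes whose heartbeat has not arrived within the timeout."""
        now = time.monotonic()
        return [n for n, t in self.last_seen.items() if now - t > HEARTBEAT_TIMEOUT]

monitor = HeartbeatMonitor(["steer_ecu", "brake_ecu", "throttle_ecu"])
monitor.record_heartbeat("steer_ecu")
time.sleep(0.6)                      # simulate a silent period
monitor.record_heartbeat("brake_ecu")
print("suspected faulty:", monitor.faulty_nodes())   # steer_ecu and throttle_ecu
```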
PSAMM: A Portable System for the Analysis of Metabolic Models
Steffensen, Jon Lund; Dufault-Thompson, Keith; Zhang, Ying
2016-01-01
The genome-scale models of metabolic networks have been broadly applied in phenotype prediction, evolutionary reconstruction, community functional analysis, and metabolic engineering. Despite the development of tools that support individual steps along the modeling procedure, it is still difficult to associate mathematical simulation results with the annotation and biological interpretation of metabolic models. In order to solve this problem, here we developed a Portable System for the Analysis of Metabolic Models (PSAMM), a new open-source software package that supports the integration of heterogeneous metadata in model annotations and provides a user-friendly interface for the analysis of metabolic models. PSAMM is independent of paid software environments like MATLAB, and all its dependencies are freely available for academic users. Compared to existing tools, PSAMM significantly reduced the running time of constraint-based analysis and enabled flexible settings of simulation parameters using simple one-line commands. The integration of heterogeneous, model-specific annotation information in PSAMM is achieved with a novel format of YAML-based model representation, which has several advantages, such as providing a modular organization of model components and simulation settings, enabling model version tracking, and permitting the integration of multiple simulation problems. PSAMM also includes a number of quality checking procedures to examine stoichiometric balance and to identify blocked reactions. Applying PSAMM to 57 models collected from current literature, we demonstrated how the software can be used for managing and simulating metabolic models. We identified a number of common inconsistencies in existing models and constructed an updated model repository to document the resolution of these inconsistencies. PMID:26828591
Efficient model checking of network authentication protocol based on SPIN
NASA Astrophysics Data System (ADS)
Tan, Zhi-hua; Zhang, Da-fang; Miao, Li; Zhao, Dan
2013-03-01
Model checking is a very useful technique for verifying network authentication protocols. To improve the efficiency of modeling and verifying such protocols with model checking technology, this paper first proposes a universal formal description method for protocols. Combined with the model checker SPIN, the method can conveniently verify the properties of a protocol. With several model-simplification strategies, the paper models several protocols efficiently and reduces the state space of the models. Compared with the previous literature, this paper achieves a higher degree of automation and better verification efficiency. Finally, based on the described method, we model and verify the Privacy and Key Management (PKM) authentication protocol. The experimental results show that the model checking method is effective and is useful for other authentication protocols.
NASA Technical Reports Server (NTRS)
Lee, Sangsan; Lele, Sanjiva K.; Moin, Parviz
1992-01-01
For the numerical simulation of inhomogeneous turbulent flows, a method is developed for generating stochastic inflow boundary conditions with a prescribed power spectrum. Turbulence statistics from spatial simulations using this method with a low fluctuation Mach number are in excellent agreement with the experimental data, which validates the procedure. Turbulence statistics from spatial simulations are also compared to those from temporal simulations using Taylor's hypothesis. Statistics such as turbulence intensity, vorticity, and velocity derivative skewness compare favorably with the temporal simulation. However, the statistics of dilatation show a significant departure from those obtained in the temporal simulation. To directly check the applicability of Taylor's hypothesis, space-time correlations of fluctuations in velocity, vorticity, and dilatation are investigated. Convection velocities based on vorticity and velocity fluctuations are computed as functions of the spatial and temporal separations. The profile of the space-time correlation of dilatation fluctuations is explained via a wave propagation model.
Monte Carlo simulations of lattice models for single polymer systems
NASA Astrophysics Data System (ADS)
Hsu, Hsiao-Ping
2014-10-01
Single linear polymer chains in dilute solutions under good solvent conditions are studied by Monte Carlo simulations with the pruned-enriched Rosenbluth method up to the chain length N ∼ O(10^4). Based on the standard simple cubic lattice model (SCLM) with fixed bond length and the bond fluctuation model (BFM) with bond lengths in a range between 2 and √10, we investigate the conformations of polymer chains described by self-avoiding walks on the simple cubic lattice, and by random walks and non-reversible random walks in the absence of excluded volume interactions. In addition to flexible chains, we also extend our study to semiflexible chains for different stiffness controlled by a bending potential. The persistence lengths of chains extracted from the orientational correlations are estimated for all cases. We show that chains based on the BFM are more flexible than those based on the SCLM for a fixed bending energy. The microscopic differences between these two lattice models are discussed and the theoretical predictions of scaling laws given in the literature are checked and verified. Our simulations clarify that a different mapping ratio between the coarse-grained models and the atomistically realistic description of polymers is required in a coarse-graining approach due to the different crossovers to the asymptotic behavior.
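As an illustration of the chain-growth sampling underlying the pruned-enriched Rosenbluth method mentioned above, the following is a minimal Rosenbluth-weighting sketch for self-avoiding walks on the simple cubic lattice; the pruning and enrichment steps of full PERM are omitted.

```python
# Minimal sketch: Rosenbluth chain growth for self-avoiding walks on the simple
# cubic lattice (pruning/enrichment omitted).
import random

NEIGHBORS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def grow_saw(n_steps):
    """Grow one walk; return (sites, Rosenbluth weight) or None if trapped."""
    sites = [(0, 0, 0)]
    occupied = {(0, 0, 0)}
    weight = 1.0
    for _ in range(n_steps):
        x, y, z = sites[-1]
        free = [(x + dx, y + dy, z + dz) for dx, dy, dz in NEIGHBORS
                if (x + dx, y + dy, z + dz) not in occupied]
        if not free:
            return None              # walk is trapped: attrition
        weight *= len(free)          # Rosenbluth weight corrects the selection bias
        nxt = random.choice(free)
        sites.append(nxt)
        occupied.add(nxt)
    return sites, weight

# Weighted estimate of the mean squared end-to-end distance for N = 50 steps.
num, den = 0.0, 0.0
for _ in range(2000):
    result = grow_saw(50)
    if result is None:
        continue
    sites, w = result
    r2 = sum((a - b) ** 2 for a, b in zip(sites[-1], sites[0]))
    num += w * r2
    den += w
print("<R^2> estimate:", num / den)
```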
Code of Federal Regulations, 2014 CFR
2014-01-01
... flight simulator, or in a flight training device. This paragraph applies after March 19, 1997. (b) The... simulator, or in a flight training device, as appropriate. (g) The initial and transition flight training... simulator or in a flight training device. (2) Training in the operation of flight simulators or flight...
Methodology for balancing design and process tradeoffs for deep-subwavelength technologies
NASA Astrophysics Data System (ADS)
Graur, Ioana; Wagner, Tina; Ryan, Deborah; Chidambarrao, Dureseti; Kumaraswamy, Anand; Bickford, Jeanne; Styduhar, Mark; Wang, Lee
2011-04-01
For process development of deep-subwavelength technologies, it has become accepted practice to use model-based simulation to predict systematic and parametric failures. Increasingly, these techniques are being used by designers to ensure layout manufacturability, as an alternative to, or complement to, restrictive design rules. The benefit of model-based simulation tools in the design environment is that manufacturability problems are addressed in a design-aware way by making appropriate trade-offs, e.g., between overall chip density and manufacturing cost and yield. The paper shows how library elements and the full ASIC design flow benefit from eliminating hot spots and improving design robustness early in the design cycle. It demonstrates a path to yield optimization and first time right designs implemented in leading edge technologies. The approach described herein identifies those areas in the design that could benefit from being fixed early, leading to design updates and avoiding later design churn by careful selection of design sensitivities. This paper shows how to achieve this goal by using simulation tools incorporating various models from sparse to rigorously physical, pattern detection and pattern matching, checking and validating failure thresholds.
NASA Astrophysics Data System (ADS)
Javed, M. U.; Hens, K.; Martinez, M.; Kubistin, D.; Novelli, A.; Beygi, Z. H.; Axinte, R.; Nölscher, A. C.; Sinha, V.; Song, W.; Johnson, A. M.; Auld, J.; Bohn, B.; Sander, R.; Taraborrelli, D.; Williams, J.; Fischer, H.; Lelieveld, J.; Harder, H.
2016-12-01
Peroxy radicals play a key role in ozone (O3) production and hydroxyl (OH) recycling influencing the self-cleansing capacity and air quality. Organic peroxy radical (RO2) concentrations are estimated by three different approaches for a boreal forest, based on the field campaign HUMPPA-COPEC 2010 in Southern Finland. RO2 concentrations were simulated by a box model constrained by the comprehensive dataset from the campaign and cross-checked against the photostationary state (PSS) of NOx [= nitric oxide (NO) + nitrogen dioxide (NO2)] calculations. The model simulated RO2 concentrations appear too low to explain the measured PSS of NOx. As the atmospheric RO2 production is proportional to OH loss, the total OH loss rate frequency (total OH reactivity) in the model is underestimated compared to the measurements. The total OH reactivity of the model is tuned to match the observed total OH reactivity by increasing the biogenic volatile organic compound (BVOCs) concentrations for the model simulations. The new-found simulated RO2 concentrations based on the tuned OH reactivity explain the measured PSS of NOx reasonably well. Furthermore, the sensitivity of the NOx lifetime and the catalytic efficiency of NOx (CE) in O3 production, in the context of organic alkyl nitrate (RONO2) formation, was also investigated. Based on the campaign data, it was found that the lifetime of NOx and the CE are reduced and are sensitive to the RONO2 formation under low-NOx conditions, which matches a previous model-based study.
Towards a supported common NEAMS software stack
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cormac Garvey
2012-04-01
The NEAMS IPSCs are developing multidimensional, multiphysics, multiscale simulation codes, based on first principles, that will be capable of predicting all aspects of current and future nuclear reactor systems. This new breed of simulation codes will include rigorous verification, validation and uncertainty quantification checks to quantify the accuracy and quality of the simulation results. The resulting NEAMS IPSC simulation codes will be an invaluable tool in designing the next generation of nuclear reactors and will also contribute to a speedier process for acquiring licenses from the NRC for new reactor designs. Due to the high resolution of the models, the complexity of the physics, and the added computational resources needed to quantify the accuracy and quality of the results, the NEAMS IPSC codes will require large HPC resources to carry out the production simulation runs.
Schmidt, Paul; Schmid, Volker J; Gaser, Christian; Buck, Dorothea; Bührlen, Susanne; Förschler, Annette; Mühlau, Mark
2013-01-01
Aiming at iron-related T2-hypointensity, which is related to normal aging and neurodegenerative processes, we here present two practicable approaches, based on Bayesian inference, for preprocessing and statistical analysis of a complex set of structural MRI data. In particular, Markov Chain Monte Carlo methods were used to simulate posterior distributions. First, we rendered a segmentation algorithm that uses outlier detection based on model checking techniques within a Bayesian mixture model. Second, we rendered an analytical tool comprising a Bayesian regression model with smoothness priors (in the form of Gaussian Markov random fields) mitigating the necessity to smooth data prior to statistical analysis. For validation, we used simulated data and MRI data of 27 healthy controls (age: [Formula: see text]; range, [Formula: see text]). We first observed robust segmentation of both simulated T2-hypointensities and gray-matter regions known to be T2-hypointense. Second, simulated data and images of segmented T2-hypointensity were analyzed. We found not only robust identification of simulated effects but also a biologically plausible age-related increase of T2-hypointensity primarily within the dentate nucleus but also within the globus pallidus, substantia nigra, and red nucleus. Our results indicate that fully Bayesian inference can successfully be applied for preprocessing and statistical analysis of structural MRI data.
John, Shalini; Thangapandian, Sundarapandian; Lee, Keun Woo
2012-01-01
Human pancreatic cholesterol esterase (hCEase) is one of the lipases found to be involved in the digestion of a large and broad spectrum of substrates including triglycerides, phospholipids, cholesteryl esters, etc. The presence of bile salts is found to be very important for the activation of hCEase. Molecular dynamics simulations were performed for the apo form and the bile salt complexed form of hCEase using the co-ordinates of two bile salts from bovine CEase. The stability of the systems throughout the simulation time was checked, and two representative structures from the highly populated regions were selected using cluster analysis. These two representative structures were used in pharmacophore model generation. The generated pharmacophore models were validated and used in database screening. The screened hits were refined for their drug-like properties based on Lipinski's rule of five and ADMET properties. The drug-like compounds were further refined by molecular docking simulation using the GOLD program based on the GOLD fitness score, mode of binding, and molecular interactions with the active site amino acids. Finally, three hits of novel scaffolds were selected as potential leads to be used in novel and potent hCEase inhibitor design. The stability of the binding modes and molecular interactions of these final hits was re-assured by molecular dynamics simulations.
A Data Stream Model For Runoff Simulation In A Changing Environment
NASA Astrophysics Data System (ADS)
Yang, Q.; Shao, J.; Zhang, H.; Wang, G.
2017-12-01
Runoff simulation is of great significance for water engineering design, water disaster control, and water resources planning and management in a catchment or region. A large number of methods, including concept-based process-driven models and statistic-based data-driven models, have been proposed and widely used worldwide during the past decades. Most existing models assume that the relationship among runoff and its impacting factors is stationary. However, in a changing environment (e.g., climate change, human disturbance), their relationship usually evolves over time. In this study, we propose a data stream model for runoff simulation in a changing environment. Specifically, the proposed model works in three steps: learning a rule set, expanding a rule, and simulation. The first step is to initialize a rule set. When a new observation arrives, the model checks which rule covers it and then uses that rule for simulation. Meanwhile, the Page-Hinckley (PH) change detection test is used to monitor the online simulation error of each rule. If a change is detected, the corresponding rule is removed from the rule set. In the second step, for each rule, if it covers more than a given number of instances, the rule is expanded. In the third step, a simulation model for each leaf node is learnt with a perceptron without an activation function, and is updated as each new observation arrives. Taking the Fuxi River catchment as a case study, we applied the model to simulate the monthly runoff in the catchment. Results show that an abrupt change is detected in 1997 by the Page-Hinckley change detection test, which is consistent with the historic record of flooding. In addition, the model achieves good simulation results with an RMSE of 13.326, and outperforms many established methods. The findings demonstrate that the proposed data stream model provides a promising way to simulate runoff in a changing environment.
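The Page-Hinckley monitoring step described above can be sketched in a few lines; the tolerance and threshold values are hypothetical tuning choices.

```python
# Minimal sketch of the Page-Hinckley change-detection test used to monitor the
# online simulation error of each rule. delta and lam are hypothetical tunings.
def page_hinckley(errors, delta=0.005, lam=5.0):
    """Return the index at which a change is signalled, or None."""
    mean, cumulative, minimum = 0.0, 0.0, 0.0
    for t, x in enumerate(errors, start=1):
        mean += (x - mean) / t                 # running mean of the error stream
        cumulative += x - mean - delta
        minimum = min(minimum, cumulative)
        if cumulative - minimum > lam:         # drift above tolerance: signal change
            return t
    return None

# Usage: a stable error level followed by a jump (e.g., after an abrupt change
# such as the 1997 flooding mentioned above) triggers the detector.
stream = [0.1] * 50 + [1.5] * 20
print("change signalled at observation:", page_hinckley(stream))
```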
1985-04-01
and equipment whose operation can be verified with a visual or aural check. The sequence of outputs shall be cyclic, with provisions to stop the...private memory. The decision to provide spare, expansion capability, or a combination of both shall be based on life cycle cost (to the best extent...Computational System should be determined in conjunction with a computer expert (if possible). In any event, it is best to postpone completing - this
Multi-Nozzle Base Flow Model in the 10- by 10-Foot Supersonic Wind Tunnel
1964-02-21
Researchers check the setup of a multi-nozzle base flow model in the 10- by 10-Foot Supersonic Wind Tunnel at the National Aeronautics and Space Administration (NASA) Lewis Research Center. NASA researchers were struggling to understand the complex flow phenomena resulting from the use of multiple rocket engines. Robert Wasko and Theodore Cover of the Advanced Development and Evaluation Division’s analysis and operations sections conducted a set of tests in the 10- by 10 tunnel to further understand the flow issues. The Lewis researchers studied four and five-nozzle configurations in the 10- by 10 at simulated altitudes from 60,000 to 200,000 feet. The nozzles were gimbaled during some of the test runs to simulate steering. The flow field for the four-nozzle clusters was surveyed in the center and the lateral areas between the nozzles, whereas the five-nozzle cluster was surveyed in the lateral area only.
Comparing Macroscale and Microscale Simulations of Porous Battery Electrodes
Higa, Kenneth; Wu, Shao-Ling; Parkinson, Dilworth Y.; ...
2017-06-22
This article describes a vertically-integrated exploration of NMC electrode rate limitations, combining experiments with corresponding macroscale (macro-homogeneous) and microscale models. Parameters common to both models were obtained from experiments or based on published results. Positive electrode tortuosity was the sole fitting parameter used in the macroscale model, while the microscale model used no fitting parameters, instead relying on microstructural domains generated from X-ray microtomography of pristine electrode material held under compression while immersed in electrolyte solution (additionally providing novel observations of electrode wetting). Macroscale simulations showed that the capacity decrease observed at higher rates resulted primarily from solution-phase diffusion resistance. This ability to provide such qualitative insights at low computational costs is a strength of macroscale models, made possible by neglecting electrode spatial details. To explore the consequences of such simplification, the corresponding, computationally-expensive microscale model was constructed. This was found to have limitations preventing quantitatively accurate predictions, for reasons that are discussed in the hope of guiding future work. Nevertheless, the microscale simulation results complement those of the macroscale model by providing a reality-check based on microstructural information; in particular, this novel comparison of the two approaches suggests a reexamination of salt diffusivity measurements.
Model checking for linear temporal logic: An efficient implementation
NASA Technical Reports Server (NTRS)
Sherman, Rivi; Pnueli, Amir
1990-01-01
This report provides evidence to support the claim that model checking for linear temporal logic (LTL) is practically efficient. Two implementations of a linear temporal logic model checker are described. One is based on transforming the model checking problem into a satisfiability problem; the other checks an LTL formula for a finite model by computing the cross-product of the finite state transition graph of the program with a structure containing all possible models of the property. An experiment was done with a set of mutual exclusion algorithms, testing safety and liveness under fairness for these algorithms.
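To make the exhaustive-exploration idea concrete, here is a minimal Python sketch that checks an invariant (the LTL safety property G ¬(crit0 ∧ crit1)) on a toy two-process mutual-exclusion model by breadth-first reachability. The model and property are assumptions made for the example; the report's general construction for arbitrary LTL formulas (tableau or cross-product with a structure of all models of the property) is not reproduced here.

```python
from collections import deque

# Tiny two-process mutual-exclusion model (illustrative, not from the report).
# A state is (pc0, pc1, turn) with pc in {"idle", "wait", "crit"}.
def successors(state):
    pc, turn = list(state[:2]), state[2]
    for i in (0, 1):
        p, t = list(pc), turn
        if p[i] == "idle":
            p[i] = "wait"
        elif p[i] == "wait" and turn == i:
            p[i] = "crit"
        elif p[i] == "crit":
            p[i], t = "idle", 1 - i   # release the critical section and pass the turn
        else:
            continue
        yield (p[0], p[1], t)

def check_invariant(initial, safe):
    """Exhaustively explore reachable states; return a counterexample path or None."""
    frontier, parent = deque([initial]), {initial: None}
    while frontier:
        s = frontier.popleft()
        if not safe(s):
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for nxt in successors(s):
            if nxt not in parent:
                parent[nxt] = s
                frontier.append(nxt)
    return None

mutual_exclusion = lambda s: not (s[0] == "crit" and s[1] == "crit")
cex = check_invariant(("idle", "idle", 0), mutual_exclusion)
print("safety holds" if cex is None else f"violated along {cex}")
```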
Analysis about modeling MEC7000 excitation system of nuclear power unit
NASA Astrophysics Data System (ADS)
Liu, Guangshi; Sun, Zhiyuan; Dou, Qian; Liu, Mosi; Zhang, Yihui; Wang, Xiaoming
2018-02-01
Given the importance of accurately modeling excitation systems in the stability calculations of inland nuclear power plants, and the lack of research on modeling the MEC7000 excitation system, this paper summarizes a general method for modeling and simulating the MEC7000 excitation system. The method also solves the key issues of computing the I/O interface parameters and converting the measured excitation system model to a BPA simulation model. The simulation modeling of the MEC7000 excitation system is thereby completed for the first time domestically. No-load small-disturbance checks demonstrate that the proposed model and algorithm are correct and efficient.
Bresso, Emmanuel; Togawa, Roberto; Hammond-Kosack, Kim; Urban, Martin; Maigret, Bernard; Martins, Natalia Florencio
2016-12-15
Fusarium graminearum (FG) is one of the major cereal-infecting pathogens, causing high economic losses worldwide and adverse effects on human and animal health. Therefore, the development of new fungicides against FG is an important issue for reducing cereal infection and economic impact. In the strategy for developing new fungicides, a critical step is the identification of new targets against which innovative chemical weapons can be designed. As several G-protein coupled receptors (GPCRs) are implicated in signaling pathways critical for fungal development and survival, such proteins could be valuable and efficient targets to reduce Fusarium growth and therefore prevent food contamination. In this study, GPCRs were predicted in the FG proteome using a manually curated pipeline dedicated to the identification of GPCRs. Based on several successive filters, the most appropriate GPCR candidate target for developing new fungicides was selected. Searching for new compounds blocking this particular target requires knowledge of its 3D structure. As no experimental X-ray structure of the selected protein was available, a 3D model was built by homology modeling. The model quality and stability were checked by 100 ns of molecular dynamics simulations prior to the virtual screening step. Two stable conformations representative of the conformational families of the protein were extracted from the 100 ns simulation and used for an ensemble docking campaign. The virtual screening step comprised the exploration of a chemical library of 11,000 compounds that were docked to the GPCR model. Among these compounds, we selected the ten top-ranked nontoxic molecules, which are proposed for experimental testing to validate the in silico simulation. This study provides an integrated process merging genomics, structural bioinformatics and drug design for proposing innovative solutions to a worldwide threat to grain producers and consumers.
LISA: a java API for performing simulations of trajectories for all types of balloons
NASA Astrophysics Data System (ADS)
Conessa, Huguette
2016-07-01
LISA (LIbrarie de Simulation pour les Aerostats) is a Java API for performing simulations of trajectories for all types of balloons (zero pressure balloons, pressurized balloons, infrared Montgolfier balloons), and for all phases of flight (ascent, ceiling, descent). The goals of this library are to establish a reliable repository of balloon flight physics models, to capitalize on developments, and to control the models used in different tools. It is already used in flight physics study software at CNES to understand and reproduce the behavior of balloons observed during real flights. It will be used operationally for the ground segment of the STRATEOLE2 mission. It was developed under the quality rules for "critical software." It is based on fundamental generic concepts, linking the simulation state variables to interchangeable calculation models. Each LISA model defines how to calculate a consistent set of state variables, combined with validity checks. To perform a simulation for a type of balloon and a phase of flight, it is necessary to select or create a macro-model, that is to say, a consistent set of models chosen from among those offered by LISA, defining the behavior of the environment and the balloon. The purpose of this presentation is to introduce the main concepts of LISA and the new perspectives offered by this library.
Compositional schedulability analysis of real-time actor-based systems.
Jaghoori, Mohammad Mahdi; de Boer, Frank; Longuet, Delphine; Chothia, Tom; Sirjani, Marjan
2017-01-01
We present an extension of the actor model with real time, including deadlines associated with messages and explicit application-level scheduling policies (e.g., "earliest deadline first") that can be associated with individual actors. Schedulability analysis in this setting amounts to checking whether, given a scheduling policy for each actor, every task is processed within its designated deadline. To check schedulability, we introduce a compositional automata-theoretic approach based on maximal use of model checking combined with testing. Behavioral interfaces define what an actor expects from the environment, and the deadlines for messages given these assumptions. We use model checking to verify that actors match their behavioral interfaces. We extend timed automata refinement with the notion of deadlines and use it to define compatibility of actor environments with the behavioral interfaces. Model checking of compatibility is computationally hard, so we propose a special testing process. We show that the analyses are decidable and automate the process using the Uppaal model checker.
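As a loose illustration of the schedulability question (not of the paper's compositional automata-theoretic analysis), the Python sketch below simulates a single actor serving its message queue non-preemptively under an earliest-deadline-first policy and reports any deadline miss; the message set is a made-up example.

```python
import heapq

def edf_deadline_misses(messages):
    """Simulate one actor serving messages with Earliest-Deadline-First.
    Each message is (arrival, processing_time, deadline); service is
    non-preemptive, matching run-to-completion actor semantics.
    Returns a list of (arrival, deadline, completion) for missed deadlines."""
    messages = sorted(messages)                 # order by arrival time
    ready, clock, i, misses = [], 0.0, 0, []
    while i < len(messages) or ready:
        if not ready and clock < messages[i][0]:
            clock = messages[i][0]              # idle until the next arrival
        while i < len(messages) and messages[i][0] <= clock:
            arrival, cost, deadline = messages[i]
            heapq.heappush(ready, (deadline, arrival, cost))
            i += 1
        deadline, arrival, cost = heapq.heappop(ready)
        clock += cost                           # run the most urgent ready message
        if clock > deadline:
            misses.append((arrival, deadline, clock))
    return misses

# The second message misses its deadline because a less urgent one is already running.
print(edf_deadline_misses([(0.0, 2.0, 10.0), (1.0, 2.0, 3.0), (5.0, 1.0, 7.0)]))
```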
Bayesian models based on test statistics for multiple hypothesis testing problems.
Ji, Yuan; Lu, Yiling; Mills, Gordon B
2008-04-01
We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both the null and alternative hypotheses. We substantially reduce the complexity of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check whether our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. Finally, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
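The following Python sketch illustrates the general flavor of controlling a Bayesian FDR from modeled test statistics: z-statistics are drawn from a two-component mixture, posterior null probabilities are computed, and hypotheses are rejected while the running average of posterior null probabilities stays below the target level. The mixture weights and densities are assumed known here purely for illustration; the paper instead fits the distributions of the test statistics within a full Bayesian model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulated test statistics: 90% from the null N(0,1), 10% from an alternative N(3,1).
is_alt = rng.random(2000) < 0.10
z = np.where(is_alt, rng.normal(3, 1, 2000), rng.normal(0, 1, 2000))

# Posterior probability of the null for each statistic (mixture taken as known here).
pi0, f0, f1 = 0.90, norm.pdf(z, 0, 1), norm.pdf(z, 3, 1)
post_null = pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)

# Bayesian FDR control: reject the hypotheses with the smallest posterior null
# probabilities while their running average stays below the target level.
order = np.argsort(post_null)
running_fdr = np.cumsum(post_null[order]) / np.arange(1, len(z) + 1)
n_reject = int(np.sum(running_fdr <= 0.05))
print(f"rejected {n_reject} hypotheses at Bayesian FDR 0.05")
```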
Upadhyay, S K; Mukherjee, Bhaswati; Gupta, Ashutosh
2009-09-01
Several models for studies related to the tensile strength of materials have been proposed in the literature, where the size or length component is taken to be an important factor in studying specimens' failure behaviour. An important model, developed on the basis of the cumulative damage approach, is the three-parameter extension of the Birnbaum-Saunders fatigue model that incorporates the size of the specimen as an additional variable. This model is a strong competitor of the commonly used Weibull model and performs better than the traditional models, which do not incorporate the size effect. The paper considers two such cumulative damage models, checks their compatibility with a real dataset, compares them with some of the recent toolkits, and finally recommends the model that appears most appropriate. Throughout, the study is Bayesian, based on Markov chain Monte Carlo simulation.
Experimental validation of numerical simulations on a cerebral aneurysm phantom model
Seshadhri, Santhosh; Janiga, Gábor; Skalej, Martin; Thévenin, Dominique
2012-01-01
The treatment of cerebral aneurysms, found in roughly 5% of the population and associated, in case of rupture, with a high mortality rate, is a major challenge for neurosurgery and neuroradiology due to the complexity of the intervention and the resulting high hazard ratio. Improvements are possible but require a better understanding of the associated unsteady blood flow patterns in complex 3D geometries. It would be very useful to carry out such studies using suitable numerical models, if it is proven that they reproduce the real conditions accurately enough. This validation step is classically based on comparisons with measured data. Since in vivo measurements are extremely difficult and therefore of limited accuracy, complementary model-based investigations considering realistic configurations are essential. In the present study, simulations based on computational fluid dynamics (CFD) have been compared with in situ laser-Doppler velocimetry (LDV) measurements in a phantom model of a cerebral aneurysm. The employed 1:1 model is made from transparent silicone. A liquid mixture composed of water, glycerin, xanthan gum and sodium chloride has been specifically adapted for the present investigation. It shows physical flow properties similar to real blood and leads to a refraction index perfectly matched to that of the silicone model, allowing accurate optical measurements of the flow velocity. For both experiments and simulations, complex pulsatile flow waveforms and flow rates were accounted for. This finally allows a direct, quantitative comparison between measurements and simulations. In this manner, the accuracy of the employed computational model can be checked. PMID:24265876
The Priority Inversion Problem and Real-Time Symbolic Model Checking
1993-04-23
real time systems unpredictable in subtle ways. This makes it more difficult to implement and debug such systems. Our work discusses this problem and presents one possible solution. The solution is formalized and verified using temporal logic model checking techniques. In order to perform the verification, the BDD-based symbolic model checking algorithm given in previous works was extended to handle real-time properties using the bounded until operator. We believe that this algorithm, which is based on discrete time, is able to handle many real-time properties
The method of a joint intraday security check system based on cloud computing
NASA Astrophysics Data System (ADS)
Dong, Wei; Feng, Changyou; Zhou, Caiqi; Cai, Zhi; Dan, Xu; Dai, Sai; Zhang, Chuancheng
2017-01-01
The intraday security check is the core application in the dispatching control system. The existing security check calculation only uses the dispatch center's local model and data as the functional margin. This paper introduces the design of an all-grid intraday joint security check system based on cloud computing and its implementation. To reduce the effect of subarea bad data on the all-grid security check, a new power flow algorithm based on comparison and adjustment with the inter-provincial tie-line plan is presented. A numerical example illustrates the effectiveness and feasibility of the proposed method.
Improvement on a simplified model for protein folding simulation.
Zhang, Ming; Chen, Changjun; He, Yi; Xiao, Yi
2005-11-01
Improvements were made on a simplified protein model--the Ramachandran model--to achieve better computer simulation of protein folding. To check the validity of these improvements, we chose the ultrafast folding protein Engrailed Homeodomain as an example and explored several aspects of its folding. The Engrailed Homeodomain is a mainly alpha-helical protein of 61 residues from Drosophila melanogaster. We found that the simplified model of Engrailed Homeodomain can fold into a global minimum state with a tertiary structure in good agreement with its native structure.
Norman, Laura M.; Niraula, Rewati
2016-01-01
The objective of this study was to evaluate the effect of check dam infrastructure on soil and water conservation at the catchment scale using the Soil and Water Assessment Tool (SWAT). This paired watershed study includes a watershed treated with over 2000 check dams and a Control watershed with none, in the West Turkey Creek watershed, Southeast Arizona, USA. SWAT was calibrated for streamflow using discharge documented during the summer of 2013 at the Control site. Model results show the necessity of eliminating lateral flow from SWAT models of aridland environments, the urgency of standardizing geospatial soils data, and the care with which modelers must document parameter alterations when presenting findings. Performance was assessed using the percent bias (PBIAS), with values of ±2.34%. The calibrated model was then used to examine the impacts of check dams at the Treated watershed. Approximately 630 tons of sediment is estimated to be stored behind check dams in the Treated watershed over the 3-year simulation, improving water quality for fish habitat. A minimum precipitation event of 15 mm was necessary to instigate the detachment of soil, sediments, or rock from the study area, which occurred 2% of the time. The resulting watershed model is useful as a predictive framework and decision-support tool to consider long-term impacts of restoration and potential for future restoration.
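As a small side note on the performance metric quoted above, the Python sketch below computes PBIAS from paired observed and simulated discharge series; the discharge values are invented for the example and are not data from the study.

```python
import numpy as np

def pbias(observed, simulated):
    """Percent bias (PBIAS): positive values indicate model underestimation
    (the usual convention); values near zero indicate a good fit."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 100.0 * np.sum(observed - simulated) / np.sum(observed)

# Illustrative daily discharge values (m3/s), not data from the study.
obs = [0.12, 0.40, 1.30, 0.75, 0.22, 0.10]
sim = [0.10, 0.45, 1.20, 0.80, 0.20, 0.11]
print(f"PBIAS = {pbias(obs, sim):+.2f}%")
```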
Architecture and inherent robustness of a bacterial cell-cycle control system.
Shen, Xiling; Collier, Justine; Dill, David; Shapiro, Lucy; Horowitz, Mark; McAdams, Harley H
2008-08-12
A closed-loop control system drives progression of the coupled stalked and swarmer cell cycles of the bacterium Caulobacter crescentus in a near-mechanical step-like fashion. The cell-cycle control has a cyclical genetic circuit composed of four regulatory proteins with tight coupling to processive chromosome replication and cell division subsystems. We report a hybrid simulation of the coupled cell-cycle control system, including asymmetric cell division and responses to external starvation signals, that replicates mRNA and protein concentration patterns and is consistent with observed mutant phenotypes. An asynchronous sequential digital circuit model equivalent to the validated simulation model was created. Formal model-checking analysis of the digital circuit showed that the cell-cycle control is robust to intrinsic stochastic variations in reaction rates and nutrient supply, and that it reliably stops and restarts to accommodate nutrient starvation. Model checking also showed that mechanisms involving methylation-state changes in regulatory promoter regions during DNA replication increase the robustness of the cell-cycle control. The hybrid cell-cycle simulation implementation is inherently extensible and provides a promising approach for development of whole-cell behavioral models that can replicate the observed functionality of the cell and its responses to changing environmental conditions.
Attitude control of the space construction base: A modular approach
NASA Technical Reports Server (NTRS)
Oconnor, D. A.
1982-01-01
A planar model of a space base and one module is considered. For this simplified system, a feedback controller which is compatible with the modular construction method is described. The system dynamics are decomposed into two parts corresponding to the base and the module. The information structure of the problem is non-classical in that not all system information is supplied to each controller. The base controller is designed to accommodate structural changes that occur as the module is added, and the module controller is designed to regulate its own states and follow commands from the base. Overall stability of the system is checked by Liapunov analysis and controller effectiveness is verified by computer simulation.
What happens to full-f gyrokinetic transport and turbulence in a toroidal wedge simulation?
Kim, Kyuho; Chang, C. S.; Seo, Janghoon; ...
2017-01-24
Here, in order to save the computing time or to fit the simulation size into a limited computing hardware in a gyrokinetic turbulence simulation of a tokamak plasma, a toroidal wedge simulation may be utilized in which only a partial toroidal section is modeled with a periodic boundary condition in the toroidal direction. The most severe restriction in the wedge simulation is expected to be in the longest wavelength turbulence, i.e., ion temperature gradient (ITG) driven turbulence. The global full-f gyrokinetic code XGC1 is used to compare the transport and turbulence properties from a toroidal wedge simulation against the full torus simulation in an ITG unstable plasma in a model toroidal geometry. It is found that (1) the convergence study in the wedge number needs to be conducted all the way down to the full torus in order to avoid a false convergence, (2) a reasonably accurate simulation can be performed if the correct wedge number N can be identified, (3) the validity of a wedge simulation may be checked by performing a wave-number spectral analysis of the turbulence amplitude |δΦ| and assuring that the variation of δΦ between the discrete kθ values is less than 25% compared to the peak |δΦ|, and (4) a frequency spectrum may not be used for the validity check of a wedge simulation.
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Madden, Michael M.; Shelton, Robert; Jackson, A. A.; Castro, Manuel P.; Noble, Deleena M.; Zimmerman, Curtis J.; Shidner, Jeremy D.; White, Joseph P.; Dutta, Doumyo;
2015-01-01
This follow-on paper describes the principal methods of implementing, and documents the results of exercising, a set of six-degree-of-freedom rigid-body equations of motion and planetary geodetic, gravitation and atmospheric models for simple vehicles in a variety of endo- and exo-atmospheric conditions with various NASA, and one popular open-source, engineering simulation tools. This effort is intended to provide an additional means of verification of flight simulations. The models used in this comparison, as well as the resulting time-history trajectory data, are available electronically for persons and organizations wishing to compare their flight simulation implementations of the same models.
NASA Technical Reports Server (NTRS)
Lindsey, Tony; Pecheur, Charles
2004-01-01
Livingstone PathFinder (LPF) is a simulation-based computer program for verifying autonomous diagnostic software. LPF is designed especially to be applied to NASA's Livingstone computer program, which implements a qualitative-model-based algorithm that diagnoses faults in a complex automated system (e.g., an exploratory robot, spacecraft, or aircraft). LPF forms a software test bed containing a Livingstone diagnosis engine, embedded in a simulated operating environment consisting of a simulator of the system to be diagnosed by Livingstone and a driver program that issues commands and faults according to a nondeterministic scenario provided by the user. LPF runs the test bed through all executions allowed by the scenario, checking for various selectable error conditions after each step. All components of the test bed are instrumented, so that execution can be single-stepped both backward and forward. The architecture of LPF is modular and includes generic interfaces to facilitate substitution of alternative versions of its different parts. Altogether, LPF provides a flexible, extensible framework for simulation-based analysis of diagnostic software; these characteristics also render it amenable to application to diagnostic programs other than Livingstone.
Incremental checking of Master Data Management model based on contextual graphs
NASA Astrophysics Data System (ADS)
Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan
2015-10-01
The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, which is used to develop a Master Data Management (MDM) field represented by XML Schema models. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise the model errors and to optimise the process of model checking. Therefore, the notion of validation contexts is introduced, allowing the verification of data model views. Description logics specify constraints that the models have to check. An experimentation of the approach is presented through an application developed in the ArgoUML IDE.
[Model for unplanned self extubation of ICU patients using system dynamics approach].
Song, Yu Gil; Yun, Eun Kyoung
2015-04-01
In this study a system dynamics methodology was used to identify correlations and nonlinear feedback structure among factors affecting unplanned extubation (UE) of ICU patients and to construct and verify a simulation model. Factors affecting UE were identified through a theoretical background established by reviewing the literature and preceding studies and by referencing various statistical data. Related variables were decided through verification of content validity by an expert group. A causal loop diagram (CLD) was made based on the variables. Stock & Flow modeling using Vensim PLE Plus Version 6.0b was performed to establish a model for UE. Based on the literature review and expert verification, 18 variables associated with UE were identified and the CLD was prepared. From the prepared CLD, a model was developed by converting it to a Stock & Flow diagram. Results of the simulation showed that patient stress, patient agitation, restraint application, patient movability, and individual intensive nursing were the variables with the greatest effect on UE probability. To verify agreement of the UE model with real situations, simulation with 5 cases was performed. An equation check and sensitivity analysis on TIME STEP were executed to validate model integrity. Results show that identification of a proper model enables prediction of UE probability. This prediction allows for adjustment of related factors and provides basic data to develop nursing interventions to decrease UE.
NASA Astrophysics Data System (ADS)
Hilgert, Toralf; Hennig, Heiko
2017-03-01
Groundwater heads were mapped for the entire State of Mecklenburg-Western Pomerania by applying a Detrended Kriging method based on a numerical geohydraulic model. The general groundwater flow system (trend surface) was represented by a two-dimensional horizontal flow model. Thus deviations of observed groundwater heads from simulated groundwater heads are no longer subject to a regional trend and can be interpolated by means of Ordinary Kriging. Subsequently, the groundwater heads were obtained from the sum of the simulated trend surface and interpolated residuals. Furthermore, the described procedure allowed a plausibility check of observed groundwater heads by comparing them to results of the hydraulic model. If significant deviations were seen, the observation wells could be allocated to different aquifers. The final results are two hydraulically established groundwater head distributions - one for the regional main aquifer and one for the upper aquifer which may differ locally from the main aquifer.
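A minimal sketch of the detrended-interpolation idea is given below in Python, with inverse-distance weighting standing in for the Ordinary Kriging step to keep the example short; the well coordinates, observed heads, and trend-surface values are hypothetical.

```python
import numpy as np

def idw(xy_obs, values, xy_grid, power=2.0, eps=1e-12):
    """Inverse-distance weighting; used here only as a compact stand-in for the
    Ordinary Kriging interpolation of residuals described in the abstract."""
    d = np.linalg.norm(xy_grid[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ values) / w.sum(axis=1)

# Hypothetical observation wells: coordinates, measured heads, and heads simulated
# by the 2-D trend-surface flow model at the same locations.
xy_obs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
head_obs = np.array([10.2, 9.6, 10.9, 10.1])
head_trend_at_obs = np.array([10.0, 9.8, 10.7, 10.3])

# Grid points where the trend-surface model provides simulated heads.
xy_grid = np.array([[0.5, 0.5], [0.25, 0.75]])
head_trend_grid = np.array([10.2, 10.5])

# Detrended interpolation: interpolate the residuals, then add back the trend.
residuals = head_obs - head_trend_at_obs
head_mapped = head_trend_grid + idw(xy_obs, residuals, xy_grid)
print(head_mapped)
```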
The KATE shell: An implementation of model-based control, monitor and diagnosis
NASA Technical Reports Server (NTRS)
Cornell, Matthew
1987-01-01
The conventional control and monitor software currently used by the Space Center for Space Shuttle processing has many limitations, such as high maintenance costs, limited diagnostic capabilities and limited simulation support. These limitations prompted the development of a knowledge-based (or model-based) shell to generically control and monitor electro-mechanical systems. The knowledge base describes the system's structure and function and is used by a software shell to do real-time constraint checking, low-level control of components, diagnosis of detected faults, sensor validation, automatic generation of schematic diagrams, and automatic recovery from failures. This approach is more versatile and more powerful than the conventional hard-coded approach and offers many advantages over it, although, for systems which require high-speed reaction times or aren't well understood, knowledge-based control and monitor systems may not be appropriate.
Query Language for Location-Based Services: A Model Checking Approach
NASA Astrophysics Data System (ADS)
Hoareau, Christian; Satoh, Ichiro
We present a model checking approach to the rationale, implementation, and applications of a query language for location-based services. Such query mechanisms are necessary so that users, objects, and/or services can effectively benefit from the location-awareness of their surrounding environment. The underlying data model is founded on a symbolic model of space organized in a tree structure. Once extended to a semantic model for modal logic, we regard location query processing as a model checking problem, and thus define location queries as hybrid logic-based formulas. Our approach is unique with respect to existing research because it explores the connection between location models and query processing in ubiquitous computing systems, relies on a sound theoretical basis, and provides modal logic-based query mechanisms for expressive searches over a decentralized data structure. A prototype implementation is also presented and discussed.
Topologically Consistent Models for Efficient Big Geo-Spatio Data Distribution
NASA Astrophysics Data System (ADS)
Jahn, M. W.; Bradley, P. E.; Doori, M. Al; Breunig, M.
2017-10-01
Geo-spatio-temporal topology models are likely to become a key concept to check the consistency of 3D (spatial space) and 4D (spatial + temporal space) models for emerging GIS applications such as subsurface reservoir modelling or the simulation of energy and water supply of mega or smart cities. Furthermore, the data management for complex models consisting of big geo-spatial data is a challenge for GIS and geo-database research. General challenges, concepts, and techniques of big geo-spatial data management are presented. In this paper we introduce a sound mathematical approach for a topologically consistent geo-spatio-temporal model based on the concept of the incidence graph. We redesign DB4GeO, our service-based geo-spatio-temporal database architecture, on the way to the parallel management of massive geo-spatial data. Approaches for a new geo-spatio-temporal and object model of DB4GeO meeting the requirements of big geo-spatial data are discussed in detail. Finally, a conclusion and outlook on our future research are given on the way to support the processing of geo-analytics and -simulations in a parallel and distributed system environment.
Using chemical organization theory for model checking
Kaleta, Christoph; Richter, Stephan; Dittrich, Peter
2009-01-01
Motivation: The increasing number and complexity of biomodels makes automatic procedures for checking the models' properties and quality necessary. Approaches like elementary mode analysis, flux balance analysis, deficiency analysis and chemical organization theory (OT) require only the stoichiometric structure of the reaction network for derivation of valuable information. In formalisms like Systems Biology Markup Language (SBML), however, information about the stoichiometric coefficients required for an analysis of chemical organizations can be hidden in kinetic laws. Results: First, we introduce an algorithm that uncovers stoichiometric information that might be hidden in the kinetic laws of a reaction network. This allows us to apply OT to SBML models using modifiers. Second, using the new algorithm, we performed a large-scale analysis of the 185 models contained in the manually curated BioModels Database. We found that for 41 models (22%) the set of organizations changes when modifiers are considered correctly. We discuss one of these models in detail (BIOMD149, a combined model of the ERK- and Wnt-signaling pathways), whose set of organizations drastically changes when modifiers are considered. Third, we found inconsistencies in 5 models (3%) and identified their characteristics. Compared with flux-based methods, OT is able to identify those species and reactions more accurately [in 26 cases (14%)] that can be present in a long-term simulation of the model. We conclude that our approach is a valuable tool that helps to improve the consistency of biomodels and their repositories. Availability: All data and a JAVA applet to check SBML-models is available from http://www.minet.uni-jena.de/csb/prj/ot/tools Contact: dittrich@minet.uni-jena.de Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19468053
Specific spice modeling of microcrystalline silicon TFTs
NASA Astrophysics Data System (ADS)
Moustapha, O.; Bui, V. D.; Bonnassieux, Y.; Parey, J. Y.
2008-03-01
In this paper we present a specific spice static and dynamic model of microcrystalline silicon (μc-Si) thin film transistors (TFTs) taking into account the access resistances and the capacitors contributions. The previously existing models of amorphous silicon and polysilicon TFTs were not completely suited, so we combined them to build a new specific model of μc-Si TFTs. The reliability of the model is then checked by the comparison of experimental measurements to simulations and by simulating the characteristics of some electronic devices (OLED pixels, inverters, and so on).
STS-26 Commander Hauck in fixed based (FB) shuttle mission simulator (SMS)
NASA Technical Reports Server (NTRS)
1988-01-01
STS-26 Discovery, Orbiter Vehicle (OV) 103, Commander Frederick H. Hauck, wearing a communications kit assembly headset, checks control panel data while seated in the commander's seat on the forward flight deck. A flight data file (FDF) notebook rests on his lap. A portable computer (laptop) is positioned on the center console. The STS-26 crew is training in the fixed base (FB) shuttle mission simulator (SMS) located in the JSC Mission Simulation and Training Facility, Bldg 5.
Pharmacist and Technician Perceptions of Tech-Check-Tech in Community Pharmacy Practice Settings.
Frost, Timothy P; Adams, Alex J
2018-04-01
Tech-check-tech (TCT) is a practice model in which pharmacy technicians with advanced training can perform final verification of prescriptions that have been previously reviewed for appropriateness by a pharmacist. Few states have adopted TCT in part because of the common view that this model is controversial among members of the profession. This article aims to summarize the existing research on pharmacist and technician perceptions of community pharmacy-based TCT. A literature review was conducted using MEDLINE (January 1990 to August 2016) and Google Scholar (January 1990 to August 2016) using the terms "tech* and check," "tech-check-tech," "checking technician," and "accuracy checking tech*." Of the 7 studies identified we found general agreement among both pharmacists and technicians that TCT in community pharmacy settings can be safely performed. This agreement persisted in studies of theoretical TCT models and in studies assessing participants in actual community-based TCT models. Pharmacists who had previously worked with a checking technician were generally more favorable toward TCT. Both pharmacists and technicians in community pharmacy settings generally perceived TCT to be safe, in both theoretical surveys and in surveys following actual TCT demonstration projects. These perceptions of safety align well with the actual outcomes achieved from community pharmacy TCT studies.
Implication of correlations among some common stability statistics - a Monte Carlo simulations.
Piepho, H P
1995-03-01
Stability analysis of multilocation trials is often based on a mixed two-way model. Two stability measures in frequent use are the environmental variance (S_i^2) and the ecovalence (W_i). Under the two-way model the rank orders of the expected values of these two statistics are identical for a given set of genotypes. By contrast, empirical rank correlations among these measures are consistently low. This suggests that the two-way mixed model may not be appropriate for describing real data. To check this hypothesis, a Monte Carlo simulation was conducted. It revealed that the low empirical rank correlation among S_i^2 and W_i is most likely due to sampling errors. It is concluded that the observed low rank correlation does not invalidate the two-way model. The paper also discusses tests for homogeneity of S_i^2 as well as implications of the two-way model for the classification of stability statistics.
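The Monte Carlo comparison can be sketched in a few lines of Python: simulate a two-way genotype-by-environment table, compute the environmental variance S_i^2 and Wricke's ecovalence W_i per genotype, and look at their rank correlation. The variance components and sample sizes are illustrative assumptions, not those used in the paper.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_gen, n_env = 20, 8

# Two-way mixed model: genotype and environment main effects, G x E interaction, error.
gen = rng.normal(0, 1.0, (n_gen, 1))
env = rng.normal(0, 1.0, (1, n_env))
gxe = rng.normal(0, 0.5, (n_gen, n_env))
x = 5.0 + gen + env + gxe + rng.normal(0, 0.3, (n_gen, n_env))

# Environmental variance S_i^2: variance of genotype i across environments.
s2 = x.var(axis=1, ddof=1)

# Wricke's ecovalence W_i: genotype contribution to the G x E interaction sum of squares.
dev = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + x.mean()
w = (dev ** 2).sum(axis=1)

rho, _ = spearmanr(s2, w)
print(f"Spearman rank correlation between S_i^2 and W_i: {rho:.2f}")
```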
Weichert, Veronika; Sellmann, Timur; Wetzchewald, Dietmar; Gasch, Bernd; Hunziker, Sabina; Marsch, Stephan
2015-11-01
While the 2005 cardiopulmonary resuscitation (CPR) guidelines recommended providing CPR for five cycles before the next cardiac rhythm check, the current 2010 guidelines recommend providing CPR for 2 min. Our aim was to compare adherence to both targets in a simulator-based randomized trial. 119 teams, consisting of three to four physicians each, were randomized to receive a graphical display of the simplified circular adult BLS algorithm with the instruction to perform CPR for either 2 min or five cycles of 30:2. Subsequently, teams had to treat a simulated unwitnessed cardiac arrest. Data analysis was performed using video recordings obtained during the simulations. The primary endpoint was adherence, defined as being within ±20% of the instructed target (i.e. 96-144 s in the "2 min" teams and 4-6 cycles in the "five × 30:2" teams). 22/62 (35%) of the "two minutes" teams and 48/57 (84%) of the "five × 30:2" teams provided CPR within a range of ±20% of their instructed target (P<0.0001). The median time of CPR prior to rhythm check was 91 s and 87 s, respectively (P=0.59), with a significantly larger variance (P=0.023) in the "two minutes" group. This randomized simulator-based trial found better adherence and less variance with an instruction to continue CPR for five cycles before the next cardiac rhythm check compared to continuing CPR for 2 min. Avoiding temporal targets whenever possible in guidelines relating to stressful events appears advisable. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Power quality analysis of DC arc furnace operation using the Bowman model for electric arc
NASA Astrophysics Data System (ADS)
Gherman, P. L.
2018-01-01
This work addresses a relatively new domain. The DC electric arc is superior to the AC electric arc and is not yet used in Romania. We therefore analyzed the operation of these furnaces by simulation and by model checking of the simulation results. The conclusions are favorable; the next step is to develop a real-time control system for the steel elaboration process.
14 CFR 121.441 - Proficiency checks.
Code of Federal Regulations, 2010 CFR
2010-01-01
... certificate holder may use any person nor may any person serve as a required pilot flight crewmember unless that person has satisfactorily completed either a proficiency check, or an approved simulator course of... check or the simulator training. (2) For all other pilots— (i) Within the preceding 24 calendar months...
14 CFR 121.441 - Proficiency checks.
Code of Federal Regulations, 2012 CFR
2012-01-01
... certificate holder may use any person nor may any person serve as a required pilot flight crewmember unless that person has satisfactorily completed either a proficiency check, or an approved simulator course of... check or the simulator training. (2) For all other pilots— (i) Within the preceding 24 calendar months...
Check-Cases for Verification of 6-Degree-of-Freedom Flight Vehicle Simulations. Volume 2; Appendices
NASA Technical Reports Server (NTRS)
Murri, Daniel G.; Jackson, E. Bruce; Shelton, Robert O.
2015-01-01
This NASA Engineering and Safety Center (NESC) assessment was established to develop a set of time histories for the flight behavior of increasingly complex example aerospacecraft that could be used to partially validate various simulation frameworks. The assessment was conducted by representatives from several NASA Centers and an open-source simulation project. This document contains details on models, implementation, and results.
Phi29 Connector-DNA Interactions Govern DNA Crunching and Rotation, Supporting the Check-Valve Model
Kumar, Rajendra; Grubmüller, Helmut
2016-01-01
During replication of the ϕ29 bacteriophage inside a bacterial host cell, a DNA packaging motor transports the viral DNA into the procapsid against a pressure difference of up to 40 ± 20 atm. Several models have been proposed for the underlying molecular mechanism. Here we have used molecular dynamics simulations to examine the role of the connector part of the motor, and specifically the one-way revolution and the push-roll models. We have focused on the structure and intermolecular interactions between the DNA and the connector, for which a near-complete structure is available. The connector is found to induce considerable DNA deformations with respect to its canonical B-form. We further assessed by force-probe simulations to what extent the connector is able to prevent DNA leakage and found that the connector can act as a partial one-way valve by a check-valve mechanism via its mobile loops. Analysis of the geometry, flexibility, and energetics of channel lysine residues suggested that this arrangement of residues is incompatible with the observed DNA packaging step-size of ∼2.5 bp, such that the step-size is probably determined by the other components of the motor. Previously proposed DNA revolution and rolling motions inside the connector channel are both found implausible due to structural entanglement between the DNA and connector loops that has not been resolved in the crystal structure. Rather, in the simulations, the connector facilitates minor DNA rotation during the packaging process, compatible with recent optical-tweezers experiments. Combined with the available experimental data, our simulation results suggest that the connector acts as a check-valve that prevents DNA leakage and induces DNA compression and rotation during DNA packaging. PMID:26789768
Simulation Experiments: Better Data, Not Just Big Data
2014-12-01
Modeling and Computer Simulation 22 (4): 20:1–20:17. Hogan, Joe. 2014, June 9. "So Far, Big Data is Small Potatoes". Scientific American Blog Network... Available via http://blogs.scientificamerican.com/cross-check/2014/06/09/so-far-big-data-is-small-potatoes/. IBM. 2014. "Big Data at the Speed of Business".
Chambert, Thierry; Rotella, Jay J; Higgs, Megan D
2014-01-01
The investigation of individual heterogeneity in vital rates has recently received growing attention among population ecologists. Individual heterogeneity in wild animal populations has been accounted for and quantified by including individually varying effects in models for mark–recapture data, but the real need for underlying individual effects to account for observed levels of individual variation has recently been questioned by the work of Tuljapurkar et al. (Ecology Letters, 12, 93, 2009) on dynamic heterogeneity. Model-selection approaches based on information criteria or Bayes factors have been used to address this question. Here, we suggest that, in addition to model-selection, model-checking methods can provide additional important insights to tackle this issue, as they allow one to evaluate a model's misfit in terms of ecologically meaningful measures. Specifically, we propose the use of posterior predictive checks to explicitly assess discrepancies between a model and the data, and we explain how to incorporate model checking into the inferential process used to assess the practical implications of ignoring individual heterogeneity. Posterior predictive checking is a straightforward and flexible approach for performing model checks in a Bayesian framework that is based on comparisons of observed data to model-generated replications of the data, where parameter uncertainty is incorporated through use of the posterior distribution. If discrepancy measures are chosen carefully and are relevant to the scientific context, posterior predictive checks can provide important information allowing for more efficient model refinement. We illustrate this approach using analyses of vital rates with long-term mark–recapture data for Weddell seals and emphasize its utility for identifying shortfalls or successes of a model at representing a biological process or pattern of interest. We show how posterior predictive checks can be used to strengthen inferences in ecological studies. We demonstrate the application of this method on analyses dealing with the question of individual reproductive heterogeneity in a population of Antarctic pinnipeds. PMID:24834335
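A stripped-down Python sketch of a posterior predictive check is given below: a model that ignores individual heterogeneity is fit to heterogeneous count data, replicated datasets are drawn from the posterior, and a discrepancy measure (the variance of individual totals) is compared between observed and replicated data. The data-generating values, the conjugate Gamma-Poisson model, and the discrepancy measure are assumptions chosen for illustration, not the mark-recapture models of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: pup counts for 50 females over 8 seasons (hypothetical values standing
# in for the reproductive histories analysed in the paper).
n_females, n_seasons = 50, 8
true_rates = rng.gamma(shape=2.0, scale=1.0, size=n_females)      # heterogeneous females
pups = rng.poisson(true_rates[:, None], size=(n_females, n_seasons))

# The fitted model deliberately ignores heterogeneity: one Poisson rate with a
# Gamma(a, b) prior, so the posterior is Gamma(a + total count, 1 / (b + exposure)).
a, b = 1.0, 1.0
post_draws = rng.gamma(a + pups.sum(), 1.0 / (b + pups.size), size=2000)

# Discrepancy measure: variance of female totals (sensitive to individual heterogeneity).
observed_T = pups.sum(axis=1).var(ddof=1)
replicated_T = np.empty(post_draws.size)
for k, lam in enumerate(post_draws):
    rep = rng.poisson(lam, size=(n_females, n_seasons))
    replicated_T[k] = rep.sum(axis=1).var(ddof=1)

# Posterior predictive p-value: extreme values flag a model-data discrepancy.
p_value = np.mean(replicated_T >= observed_T)
print(f"posterior predictive p-value = {p_value:.3f}")
```

A posterior predictive p-value near 0 or 1 for such a heterogeneity-sensitive discrepancy would flag the homogeneous model as inadequate, which is the kind of ecologically interpretable misfit the authors advocate examining.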
Prediction and Validation of Mars Pathfinder Hypersonic Aerodynamic Data Base
NASA Technical Reports Server (NTRS)
Gnoffo, Peter A.; Braun, Robert D.; Weilmuenster, K. James; Mitcheltree, Robert A.; Engelund, Walter C.; Powell, Richard W.
1998-01-01
Postflight analysis of the Mars Pathfinder hypersonic, continuum aerodynamic data base is presented. Measured data include accelerations along the body axis and axis normal directions. Comparisons of preflight simulation and measurements show good agreement. The prediction of two static instabilities associated with movement of the sonic line from the shoulder to the nose and back was confirmed by measured normal accelerations. Reconstruction of atmospheric density during entry has an uncertainty directly proportional to the uncertainty in the predicted axial coefficient. The sensitivity of the moment coefficient to freestream density, kinetic models and center-of-gravity location are examined to provide additional consistency checks of the simulation with flight data. The atmospheric density as derived from axial coefficient and measured axial accelerations falls within the range required for sonic line shift and static stability transition as independently determined from normal accelerations.
Low energy electron transport in furfural
NASA Astrophysics Data System (ADS)
Lozano, Ana I.; Krupa, Kateryna; Ferreira da Silva, Filipe; Limão-Vieira, Paulo; Blanco, Francisco; Muñoz, Antonio; Jones, Darryl B.; Brunger, Michael J.; García, Gustavo
2017-09-01
We report on an initial investigation into the transport of electrons through a gas cell containing 1 mTorr of gaseous furfural. Results from our Monte Carlo simulation are implicitly checked against those from a corresponding electron transmission measurement. To enable this simulation a self-consistent cross section data base was constructed. This data base is benchmarked through new total cross section measurements which are also described here. In addition, again to facilitate the simulation, our preferred energy loss distribution function is presented and discussed.
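As a crude illustration of how a transmission measurement can serve as an implicit check on a cross-section data base, the Python sketch below samples electron free paths in the gas cell and compares the unscattered transmission against the Beer-Lambert prediction; the total cross section, cell length, and temperature are assumed values for the example, and angular scattering (handled in the full Monte Carlo code) is ignored.

```python
import numpy as np

k_B = 1.380649e-23                      # Boltzmann constant, J/K
pressure = 1e-3 * 133.322               # 1 mTorr in Pa
temperature = 300.0                     # K (assumed)
n_gas = pressure / (k_B * temperature)  # number density, m^-3

sigma_total = 4.0e-19                   # assumed total cross section, m^2 (illustrative)
cell_length = 0.05                      # assumed gas-cell length, m
mean_free_path = 1.0 / (n_gas * sigma_total)

# Monte Carlo: sample free paths; an electron is "transmitted" unscattered if its
# first collision would occur beyond the cell exit.
rng = np.random.default_rng(5)
free_paths = rng.exponential(mean_free_path, size=200_000)
transmission_mc = np.mean(free_paths > cell_length)

# Beer-Lambert attenuation for the same parameters, as a consistency check.
transmission_analytic = np.exp(-n_gas * sigma_total * cell_length)
print(f"MC {transmission_mc:.4f} vs analytic {transmission_analytic:.4f}")
```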
Ertefaie, Ashkan; Shortreed, Susan; Chakraborty, Bibhas
2016-06-15
Q-learning is a regression-based approach that uses longitudinal data to construct dynamic treatment regimes, which are sequences of decision rules that use patient information to inform future treatment decisions. An optimal dynamic treatment regime is composed of a sequence of decision rules that indicate how to optimally individualize treatment using the patients' baseline and time-varying characteristics to optimize the final outcome. Constructing optimal dynamic regimes using Q-learning depends heavily on the assumption that regression models at each decision point are correctly specified; yet model checking in the context of Q-learning has been largely overlooked in the current literature. In this article, we show that residual plots obtained from standard Q-learning models may fail to adequately check the quality of the model fit. We present a modified Q-learning procedure that accommodates residual analyses using standard tools. We present simulation studies showing the advantage of the proposed modification over standard Q-learning. We illustrate this new Q-learning approach using data collected from a sequential multiple assignment randomized trial of patients with schizophrenia. Copyright © 2016 John Wiley & Sons, Ltd.
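A minimal Python sketch of standard two-stage Q-learning with linear working models is shown below, including the residuals one would examine at each stage; the data-generating model and variable names are invented for the example, and the paper's modified procedure for more informative residual analyses is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Hypothetical two-stage data: baseline covariate x1, stage-1 treatment a1,
# intermediate covariate x2, stage-2 treatment a2, final outcome y.
x1 = rng.normal(size=n)
a1 = rng.choice([-1, 1], size=n)
x2 = 0.5 * x1 + 0.3 * a1 + rng.normal(scale=0.5, size=n)
a2 = rng.choice([-1, 1], size=n)
y = 1.0 + x2 + a2 * (0.6 * x2 - 0.2) + rng.normal(scale=0.5, size=n)

def fit(design, target):
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    return beta

# Stage 2: regress y on history and treatment (working model with an interaction).
H2 = np.column_stack([np.ones(n), x2, a2, a2 * x2])
beta2 = fit(H2, y)

# Residual check of the stage-2 model (the diagnostic the paper notes is often skipped).
residuals2 = y - H2 @ beta2
print("stage-2 residual mean/SD:", residuals2.mean().round(3), residuals2.std().round(3))

# Pseudo-outcome: predicted value under the best stage-2 decision.
best_a2 = np.sign(beta2[2] + beta2[3] * x2)
pseudo = np.column_stack([np.ones(n), x2, best_a2, best_a2 * x2]) @ beta2

# Stage 1: regress the pseudo-outcome on baseline history and stage-1 treatment.
H1 = np.column_stack([np.ones(n), x1, a1, a1 * x1])
beta1 = fit(H1, pseudo)
residuals1 = pseudo - H1 @ beta1
print("stage-1 residual mean/SD:", residuals1.mean().round(3), residuals1.std().round(3))
```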
NASA Astrophysics Data System (ADS)
Löwe, P.; Hammitzsch, M.; Babeyko, A.; Wächter, J.
2012-04-01
The development of new Tsunami Early Warning Systems (TEWS) requires the modelling of the spatio-temporal spreading of tsunami waves, both recorded from past events and for hypothetical future cases. The model results are maintained in digital repositories for use in TEWS command and control units for situation assessment once a real tsunami occurs. The simulation results must therefore be absolutely trustworthy, in the sense that the quality of these datasets is assured. This is a prerequisite, as solid decision making during a crisis event and the dissemination of dependable warning messages to communities under risk will be based on them. This requires data format validity, but even more the integrity and information value of the content, which is a value-added product derived from raw tsunami model output. Quality checking of simulation result products can be done in multiple ways, yet the visual verification of both temporal and spatial spreading characteristics for each simulation remains important. The eye of the human observer still remains an unmatched tool for the detection of irregularities. This requires the availability of convenient, human-accessible mappings of each simulation. The improvement of tsunami models necessitates changes in many variables, including simulation end-parameters. Whenever new improved iterations of the general models or underlying spatial data are evaluated, hundreds to thousands of tsunami model results must be generated for each model iteration, each one having distinct initial parameter settings. The use of a Compute Cluster Environment (CCE) of sufficient size allows the automated generation of all tsunami results within a model iteration in little time. This is a significant improvement over linear processing on dedicated desktop machines or servers. It allows for faster visual quality-checking iterations, which in turn can feed back positively into the overall model improvement. An approach to set up and utilize the CCE has been implemented by the project Collaborative, Complex, and Critical Decision Processes in Evolving Crises (TRIDEC), funded under the European Union's FP7. TRIDEC focuses on real-time intelligent information management in Earth management. The addressed challenges include the design and implementation of a robust and scalable service infrastructure supporting the integration and utilisation of existing resources with accelerated generation of large volumes of data. These include sensor systems, geo-information repositories, simulations and data fusion tools. Additionally, TRIDEC adopts enhancements of Service Oriented Architecture (SOA) principles in terms of Event Driven Architecture (EDA) design. As a next step, the implemented CCE's services to generate derived and customized simulation products are foreseen to be provided via an EDA service for on-demand processing for specific threat parameters and to accommodate model improvements.
Model Checking, Abstraction, and Compositional Verification
1993-07-01
the Galois connections used by Bensalem et al. [6], and also has some relation to Kurshan's automata homomorphisms [62]. (Actually, we can impose a...multiprocessor simulation model. ACM Transactions on Computer Systems, 4(4):273-298, November 1986. [4] D. L. Beatty, R. E. Bryant, and C.-J. Seger
Ng'andu, N H
1997-03-30
In the analysis of survival data using the Cox proportional hazard (PH) model, it is important to verify that the explanatory variables analysed satisfy the proportional hazard assumption of the model. This paper presents results of a simulation study that compares five test statistics to check the proportional hazard assumption of Cox's model. The test statistics were evaluated under proportional hazards and the following types of departures from the proportional hazard assumption: increasing relative hazards; decreasing relative hazards; crossing hazards; diverging hazards, and non-monotonic hazards. The test statistics compared include those based on partitioning of failure time and those that do not require partitioning of failure time. The simulation results demonstrate that the time-dependent covariate test, the weighted residuals score test and the linear correlation test have equally good power for detection of non-proportionality in the varieties of non-proportional hazards studied. Using illustrative data from the literature, these test statistics performed similarly.
Experiences in integrating auto-translated state-chart designs for model checking
NASA Technical Reports Server (NTRS)
Pingree, P. J.; Benowitz, E. G.
2003-01-01
In the complex environment of JPL's flight missions with increasing dependency on advanced software designs, traditional software validation methods of simulation and testing are being stretched to adequately cover the needs of software development.
NASA Astrophysics Data System (ADS)
Ramezani, Zeinab; Orouji, Ali A.
2017-08-01
This paper proposes and investigates a double-gate (DG) MOSFET that emulates a tunnel field effect transistor (M-TFET). We combine this novel concept in a double-gate MOSFET, which behaves as a tunneling field effect transistor through work function engineering. In the proposed structure, in addition to the main gate, we utilize another gate over the source region, with zero applied voltage and a proper work function, to convert the source region from N+ to P+. We examine the impact of varying the source-gate work function and the source doping on the device parameters. The simulation results indicate that the M-TFET is well suited for switching applications. We also present a two-dimensional analytical potential model of the proposed structure, obtained by solving Poisson's equation in the x and y directions; the electric field is then obtained by differentiating the potential profile. To validate the present model, we use the SILVACO ATLAS device simulator and compare the analytical results with it.
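As a small numerical illustration of the last step, the Python sketch below obtains the electric field as the negative gradient of a potential surface sampled on a grid; the potential function, channel length, and film thickness are synthetic stand-ins, not the analytical M-TFET solution.

```python
import numpy as np

# Potential phi(x, y) sampled on a regular grid (a synthetic stand-in for the
# analytical channel potential of the M-TFET model).
x = np.linspace(0.0, 50e-9, 101)           # channel length coordinate, m
y = np.linspace(0.0, 10e-9, 21)            # silicon film thickness coordinate, m
X, Y = np.meshgrid(x, y, indexing="ij")
phi = 0.4 * np.sin(np.pi * X / x[-1]) * np.exp(-Y / 5e-9)   # illustrative surface, volts

# Electric field E = -grad(phi), obtained by numerical differentiation of the potential.
dphi_dx, dphi_dy = np.gradient(phi, x, y)
Ex, Ey = -dphi_dx, -dphi_dy
print("peak |E| =", float(np.max(np.hypot(Ex, Ey))), "V/m")
```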
[Accuracy Check of Monte Carlo Simulation in Particle Therapy Using Gel Dosimeters].
Furuta, Takuya
2017-01-01
Gel dosimeters are a three-dimensional imaging tool for dose distributions induced by radiation. They can be used to check the accuracy of Monte Carlo simulations in particle therapy. An application is reviewed in this article. An inhomogeneous biological sample with a gel dosimeter placed behind it was irradiated by a carbon beam. The recorded dose distribution in the gel dosimeter reflected the inhomogeneity of the biological sample. A Monte Carlo simulation was conducted by reconstructing the biological sample from its CT image. The accuracy of the particle transport in the Monte Carlo simulation was checked by comparing the dose distribution in the gel dosimeter between simulation and experiment.
14 CFR 91.1073 - Training program: General.
Code of Federal Regulations, 2012 CFR
2012-01-01
...) Provide enough flight instructors, check pilots, and simulator instructors to conduct required flight training and flight checks, and simulator training courses allowed under this subpart. (b) Whenever a... ensure that each pilot annually completes at least one flight training session in an approved simulator...
14 CFR 91.1073 - Training program: General.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) Provide enough flight instructors, check pilots, and simulator instructors to conduct required flight training and flight checks, and simulator training courses allowed under this subpart. (b) Whenever a... ensure that each pilot annually completes at least one flight training session in an approved simulator...
14 CFR 91.1073 - Training program: General.
Code of Federal Regulations, 2013 CFR
2013-01-01
...) Provide enough flight instructors, check pilots, and simulator instructors to conduct required flight training and flight checks, and simulator training courses allowed under this subpart. (b) Whenever a... ensure that each pilot annually completes at least one flight training session in an approved simulator...
14 CFR 91.1073 - Training program: General.
Code of Federal Regulations, 2011 CFR
2011-01-01
...) Provide enough flight instructors, check pilots, and simulator instructors to conduct required flight training and flight checks, and simulator training courses allowed under this subpart. (b) Whenever a... ensure that each pilot annually completes at least one flight training session in an approved simulator...
Proceedings of the Second NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Munoz, Cesar (Editor)
2010-01-01
This publication contains the proceedings of the Second NASA Formal Methods Symposium sponsored by the National Aeronautics and Space Administration and held in Washington D.C. April 13-15, 2010. Topics covered include: Decision Engines for Software Analysis using Satisfiability Modulo Theories Solvers; Verification and Validation of Flight-Critical Systems; Formal Methods at Intel -- An Overview; Automatic Review of Abstract State Machines by Meta Property Verification; Hardware-independent Proofs of Numerical Programs; Slice-based Formal Specification Measures -- Mapping Coupling and Cohesion Measures to Formal Z; How Formal Methods Impels Discovery: A Short History of an Air Traffic Management Project; A Machine-Checked Proof of A State-Space Construction Algorithm; Automated Assume-Guarantee Reasoning for Omega-Regular Systems and Specifications; Modeling Regular Replacement for String Constraint Solving; Using Integer Clocks to Verify the Timing-Sync Sensor Network Protocol; Can Regulatory Bodies Expect Efficient Help from Formal Methods?; Synthesis of Greedy Algorithms Using Dominance Relations; A New Method for Incremental Testing of Finite State Machines; Verification of Faulty Message Passing Systems with Continuous State Space in PVS; Phase Two Feasibility Study for Software Safety Requirements Analysis Using Model Checking; A Prototype Embedding of Bluespec System Verilog in the PVS Theorem Prover; SimCheck: An Expressive Type System for Simulink; Coverage Metrics for Requirements-Based Testing: Evaluation of Effectiveness; Software Model Checking of ARINC-653 Flight Code with MCP; Evaluation of a Guideline by Formal Modelling of Cruise Control System in Event-B; Formal Verification of Large Software Systems; Symbolic Computation of Strongly Connected Components Using Saturation; Towards the Formal Verification of a Distributed Real-Time Automotive System; Slicing AADL Specifications for Model Checking; Model Checking with Edge-valued Decision Diagrams; and Data-flow based Model Analysis.
Analysis of DIRAC's behavior using model checking with process algebra
NASA Astrophysics Data System (ADS)
Remenska, Daniela; Templon, Jeff; Willemse, Tim; Bal, Henri; Verstoep, Kees; Fokkink, Wan; Charpentier, Philippe; Graciani Diaz, Ricardo; Lanciotti, Elisa; Roiser, Stefan; Ciba, Krzysztof
2012-12-01
DIRAC is the grid solution developed to support LHCb production activities as well as user data analysis. It consists of distributed services and agents delivering the workload to the grid resources. Services maintain database back-ends to store dynamic state information of entities such as jobs, queues, staging requests, etc. Agents use polling to check and possibly react to changes in the system state. Each agent's logic is relatively simple; the main complexity lies in their cooperation. Agents run concurrently, and collaborate using the databases as shared memory. The databases can be accessed directly by the agents if running locally or through a DIRAC service interface if necessary. This shared-memory model causes entities to occasionally get into inconsistent states. Tracing and fixing such problems becomes formidable due to the inherent parallelism present. We propose more rigorous methods to cope with this. Model checking is one such technique for analysis of an abstract model of a system. Unlike conventional testing, it allows full control over the parallel processes execution, and supports exhaustive state-space exploration. We used the mCRL2 language and toolset to model the behavior of two related DIRAC subsystems: the workload and storage management system. Based on process algebra, mCRL2 allows defining custom data types as well as functions over these. This makes it suitable for modeling the data manipulations made by DIRAC's agents. By visualizing the state space and replaying scenarios with the toolkit's simulator, we have detected race-conditions and deadlocks in these systems, which, in several cases, were confirmed to occur in the reality. Several properties of interest were formulated and verified with the tool. Our future direction is automating the translation from DIRAC to a formal model.
A computational study of liposome logic: towards cellular computing from the bottom up
Smaldon, James; Romero-Campero, Francisco J.; Fernández Trillo, Francisco; Gheorghe, Marian; Alexander, Cameron
2010-01-01
In this paper we propose a new bottom-up approach to cellular computing, in which computational chemical processes are encapsulated within liposomes. This “liposome logic” approach (also called vesicle computing) makes use of supra-molecular chemistry constructs, e.g. protocells, chells, etc. as minimal cellular platforms to which logical functionality can be added. Modeling and simulations feature prominently in “top-down” synthetic biology, particularly in the specification, design and implementation of logic circuits through bacterial genome reengineering. The second contribution in this paper is the demonstration of a novel set of tools for the specification, modelling and analysis of “bottom-up” liposome logic. In particular, simulation and modelling techniques are used to analyse some example liposome logic designs, ranging from relatively simple NOT gates and NAND gates to SR-Latches, D Flip-Flops all the way to 3 bit ripple counters. The approach we propose consists of specifying, by means of P systems, gene regulatory network-like systems operating inside proto-membranes. This P systems specification can be automatically translated and executed through a multiscaled pipeline composed of dissipative particle dynamics (DPD) simulator and Gillespie’s stochastic simulation algorithm (SSA). Finally, model selection and analysis can be performed through a model checking phase. This is the first paper we are aware of that brings to bear formal specifications, DPD, SSA and model checking to the problem of modeling target computational functionality in protocells. Potential chemical routes for the laboratory implementation of these simulations are also discussed thus for the first time suggesting a potentially realistic physiochemical implementation for membrane computing from the bottom-up. PMID:21886681
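Gillespie's stochastic simulation algorithm (SSA) is one stage of the multiscale pipeline described above. As a minimal, hedged sketch of that stage alone (not of the P systems specification or the DPD step), the following Python code runs the SSA on a toy transcription/translation reaction set; the species names and rate constants are invented for illustration.

import random
import math

def gillespie_ssa(x, rates, stoich, propensity, t_end):
    # x: dict of species counts; returns a trajectory of (time, state) pairs
    t, traj = 0.0, [(0.0, dict(x))]
    while t < t_end:
        a = [propensity(i, x, rates) for i in range(len(stoich))]
        a0 = sum(a)
        if a0 == 0:
            break
        t += -math.log(random.random()) / a0        # time to next reaction
        r, s = random.random() * a0, 0.0
        for i, ai in enumerate(a):                   # pick reaction i with probability a_i / a0
            s += ai
            if r <= s:
                for sp, d in stoich[i].items():
                    x[sp] += d
                break
        traj.append((t, dict(x)))
    return traj

# Toy system: gene -> gene + mRNA, mRNA -> mRNA + protein, mRNA -> 0, protein -> 0
rates = [0.5, 2.0, 0.1, 0.05]                        # illustrative rate constants
stoich = [{"mRNA": +1}, {"protein": +1}, {"mRNA": -1}, {"protein": -1}]
def propensity(i, x, k):
    return [k[0], k[1] * x["mRNA"], k[2] * x["mRNA"], k[3] * x["protein"]][i]

traj = gillespie_ssa({"mRNA": 0, "protein": 0}, rates, stoich, propensity, 200.0)
print(traj[-1])

In the full pipeline such SSA runs would be fed by species counts and volumes coming from the DPD step, and the resulting trajectories passed on to a model-checking phase.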
NASA Astrophysics Data System (ADS)
Göschl, Daniel
2018-03-01
We discuss simulation strategies for the massless lattice Schwinger model with a topological term and finite chemical potential. The simulation is done in a dual representation where the complex action problem is solved and the partition function is a sum over fermion loops, fermion dimers and plaquette-occupation numbers. We explore strategies to update the fermion loops coupled to the gauge degrees of freedom and check our results with conventional simulations (without topological term and at zero chemical potential), as well as with exact summation on small volumes. Some physical implications of the results are discussed.
Jackson, Christopher; Steinacher, Arno; Goodman, Anna; Langenberg, Claudia; Griffin, Simon
2018-01-01
Background The National Health Service (NHS) Health Check programme was introduced in 2009 in England to systematically assess all adults in midlife for cardiovascular disease risk factors. However, its current benefit and impact on health inequalities are unknown. It is also unclear whether feasible changes in how it is delivered could result in increased benefits. It is one of the first such programmes in the world. We sought to estimate the health benefits and effect on inequalities of the current NHS Health Check programme and the impact of making feasible changes to its implementation. Methods and findings We developed a microsimulation model to estimate the health benefits (incident ischaemic heart disease, stroke, dementia, and lung cancer) of the NHS Health Check programme in England. We simulated a population of adults in England aged 40–45 years and followed until age 100 years, using data from the Health Survey of England (2009–2012) and the English Longitudinal Study of Aging (1998–2012), to simulate changes in risk factors for simulated individuals over time. We used recent programme data to describe uptake of NHS Health Checks and of 4 associated interventions (statin medication, antihypertensive medication, smoking cessation, and weight management). Estimates of treatment efficacy and adherence were based on trial data. We estimated the benefits of the current NHS Health Check programme compared to a healthcare system without systematic health checks. This counterfactual scenario models the detection and treatment of risk factors that occur within ‘routine’ primary care. We also explored the impact of making feasible changes to implementation of the programme concerning eligibility, uptake of NHS Health Checks, and uptake of treatments offered through the programme. We estimate that the NHS Health Check programme prevents 390 (95% credible interval 290 to 500) premature deaths before 80 years of age and results in an additional 1,370 (95% credible interval 1,100 to 1,690) people being free of disease (ischaemic heart disease, stroke, dementia, and lung cancer) at age 80 years per million people aged 40–45 years at baseline. Over the life of the cohort (i.e., followed from 40–45 years to 100 years), the changes result in an additional 10,000 (95% credible interval 8,200 to 13,000) quality-adjusted life years (QALYs) and an additional 9,000 (6,900 to 11,300) years of life. This equates to approximately 300 fewer premature deaths and 1,000 more people living free of these diseases each year in England. We estimate that the current programme is increasing QALYs by 3.8 days (95% credible interval 3.0–4.7) per head of population and increasing survival by 3.3 days (2.5–4.1) per head of population over the 60 years of follow-up. The current programme has a greater absolute impact on health for those living in the most deprived areas compared to those living in the least deprived areas (4.4 [2.7–6.5] days of additional quality-adjusted life per head of population versus 2.8 [1.7–4.0] days; 5.1 [3.4–7.1] additional days lived per head of population versus 3.3 [2.1–4.5] days). Making feasible changes to the delivery of the existing programme could result in a sizable increase in the benefit. 
For example, a strategy that combines extending eligibility to those with preexisting hypertension, extending the upper age of eligibility to 79 years, increasing uptake of health checks by 30%, and increasing treatment rates 2.5-fold amongst eligible patients (i.e., ‘maximum potential’ scenario) results in at least a 3-fold increase in benefits compared to the current programme (1,360 premature deaths versus 390; 5,100 people free of 1 of the 4 diseases versus 1,370; 37,000 additional QALYs versus 10,000; 33,000 additional years of life versus 9,000). Ensuring those who are assessed and eligible for statins receive statins is a particularly important strategy to increase benefits. Estimates of overall benefit are based on current incidence and management, and future declines in disease incidence or improvements in treatment could alter the actual benefits observed in the long run. We have focused on the cardiovascular element of the NHS Health Check programme. Some important noncardiovascular health outcomes (e.g., chronic obstructive pulmonary disease [COPD] prevention from smoking cessation and cancer prevention from weight loss) and other parts of the programme (e.g., brief interventions to reduce harmful alcohol consumption) have not been modelled. Conclusions Our model indicates that the current NHS Health Check programme is contributing to improvements in health and reducing health inequalities. Feasible changes in the organisation of the programme could result in more than a 3-fold increase in health benefits. PMID:29509767
Mytton, Oliver T; Jackson, Christopher; Steinacher, Arno; Goodman, Anna; Langenberg, Claudia; Griffin, Simon; Wareham, Nick; Woodcock, James
2018-03-01
The National Health Service (NHS) Health Check programme was introduced in 2009 in England to systematically assess all adults in midlife for cardiovascular disease risk factors. However, its current benefit and impact on health inequalities are unknown. It is also unclear whether feasible changes in how it is delivered could result in increased benefits. It is one of the first such programmes in the world. We sought to estimate the health benefits and effect on inequalities of the current NHS Health Check programme and the impact of making feasible changes to its implementation. We developed a microsimulation model to estimate the health benefits (incident ischaemic heart disease, stroke, dementia, and lung cancer) of the NHS Health Check programme in England. We simulated a population of adults in England aged 40-45 years and followed until age 100 years, using data from the Health Survey of England (2009-2012) and the English Longitudinal Study of Aging (1998-2012), to simulate changes in risk factors for simulated individuals over time. We used recent programme data to describe uptake of NHS Health Checks and of 4 associated interventions (statin medication, antihypertensive medication, smoking cessation, and weight management). Estimates of treatment efficacy and adherence were based on trial data. We estimated the benefits of the current NHS Health Check programme compared to a healthcare system without systematic health checks. This counterfactual scenario models the detection and treatment of risk factors that occur within 'routine' primary care. We also explored the impact of making feasible changes to implementation of the programme concerning eligibility, uptake of NHS Health Checks, and uptake of treatments offered through the programme. We estimate that the NHS Health Check programme prevents 390 (95% credible interval 290 to 500) premature deaths before 80 years of age and results in an additional 1,370 (95% credible interval 1,100 to 1,690) people being free of disease (ischaemic heart disease, stroke, dementia, and lung cancer) at age 80 years per million people aged 40-45 years at baseline. Over the life of the cohort (i.e., followed from 40-45 years to 100 years), the changes result in an additional 10,000 (95% credible interval 8,200 to 13,000) quality-adjusted life years (QALYs) and an additional 9,000 (6,900 to 11,300) years of life. This equates to approximately 300 fewer premature deaths and 1,000 more people living free of these diseases each year in England. We estimate that the current programme is increasing QALYs by 3.8 days (95% credible interval 3.0-4.7) per head of population and increasing survival by 3.3 days (2.5-4.1) per head of population over the 60 years of follow-up. The current programme has a greater absolute impact on health for those living in the most deprived areas compared to those living in the least deprived areas (4.4 [2.7-6.5] days of additional quality-adjusted life per head of population versus 2.8 [1.7-4.0] days; 5.1 [3.4-7.1] additional days lived per head of population versus 3.3 [2.1-4.5] days). Making feasible changes to the delivery of the existing programme could result in a sizable increase in the benefit. 
For example, a strategy that combines extending eligibility to those with preexisting hypertension, extending the upper age of eligibility to 79 years, increasing uptake of health checks by 30%, and increasing treatment rates 2.5-fold amongst eligible patients (i.e., 'maximum potential' scenario) results in at least a 3-fold increase in benefits compared to the current programme (1,360 premature deaths versus 390; 5,100 people free of 1 of the 4 diseases versus 1,370; 37,000 additional QALYs versus 10,000; 33,000 additional years of life versus 9,000). Ensuring those who are assessed and eligible for statins receive statins is a particularly important strategy to increase benefits. Estimates of overall benefit are based on current incidence and management, and future declines in disease incidence or improvements in treatment could alter the actual benefits observed in the long run. We have focused on the cardiovascular element of the NHS Health Check programme. Some important noncardiovascular health outcomes (e.g., chronic obstructive pulmonary disease [COPD] prevention from smoking cessation and cancer prevention from weight loss) and other parts of the programme (e.g., brief interventions to reduce harmful alcohol consumption) have not been modelled. Our model indicates that the current NHS Health Check programme is contributing to improvements in health and reducing health inequalities. Feasible changes in the organisation of the programme could result in more than a 3-fold increase in health benefits.
Modeling and analysis of cell membrane systems with probabilistic model checking
2011-01-01
Background Recently there has been a growing interest in the application of Probabilistic Model Checking (PMC) for the formal specification of biological systems. PMC is able to exhaustively explore all states of a stochastic model and can provide valuable insight into its behavior that is more difficult to see using only traditional methods for system analysis such as deterministic and stochastic simulation. In this work we propose a stochastic model for the description and analysis of the sodium-potassium exchange pump. The sodium-potassium pump is a membrane transport system present in all animal cells and capable of moving sodium and potassium ions against their concentration gradients. Results We present a quantitative formal specification of the pump mechanism in the PRISM language, taking into consideration a discrete chemistry approach and the Law of Mass Action. We also present an analysis of the system using quantitative properties in order to verify the pump reversibility and understand the pump behavior using trend labels for the transition rates of the pump reactions. Conclusions Probabilistic model checking can be used along with other well-established approaches such as simulation and differential equations to better understand pump behavior. Using PMC we can determine whether specific events happen, for example whether the potassium outside the cell is depleted in all model traces. We can also gain a more detailed perspective on its behavior, such as determining its reversibility and why its normal operation becomes slow over time. This knowledge can be used to direct experimental research and make it more efficient, leading to faster and more accurate scientific discoveries. PMID:22369714
Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems
NASA Technical Reports Server (NTRS)
Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris
2010-01-01
Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.
NASA Astrophysics Data System (ADS)
Balog, Ivan; Tarjus, Gilles; Tissier, Matthieu
2018-03-01
We show that, contrary to previous suggestions based on computer simulations or erroneous theoretical treatments, the critical points of the random-field Ising model out of equilibrium, when quasistatically changing the applied source at zero temperature, and in equilibrium are not in the same universality class below some critical dimension d_DR ≈ 5.1. We demonstrate this by implementing a nonperturbative functional renormalization group for the associated dynamical field theory. Above d_DR, the avalanches, which characterize the evolution of the system at zero temperature, become irrelevant at large distance, and hysteresis and equilibrium critical points are then controlled by the same fixed point. We explain how to use computer simulation and finite-size scaling to check the correspondence between in and out of equilibrium criticality in a far less ambiguous way than done so far.
Helicopter simulation: Making it work
NASA Technical Reports Server (NTRS)
Payne, Barry
1992-01-01
The opportunities for improved training and checking by using helicopter simulators are greater than they are for airplane pilot training. Simulators permit the safe creation of training environments that are conducive to the development of pilot decision-making, situational awareness, and cockpit management. This paper defines specific attributes required in a simulator to meet a typical helicopter operator's training and checking objectives.
Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V
2016-08-12
Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
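A hedged sketch of this kind of simulation study in Python: draw between-subject pharmacokinetic parameters from lognormal distributions, generate noisy concentration-time profiles from a one-compartment model with first-order absorption, and summarise the shape of log(AUC) and log(Cmax). The model structure, parameter values, and error structure are illustrative assumptions, not those used in the paper.

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.25, 24, 20)                      # sampling times (h), illustrative

def profile(dose, ka, ke, V):
    # One-compartment model with first-order absorption (ka != ke assumed)
    return dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

n = 5000
log_auc, log_cmax = [], []
for _ in range(n):
    ka = rng.lognormal(np.log(1.0), 0.4)           # between-subject variability
    ke = rng.lognormal(np.log(0.2), 0.3)
    V  = rng.lognormal(np.log(30.0), 0.3)
    c  = profile(100.0, ka, ke, V)
    c  = c * np.exp(rng.normal(0, 0.15, c.size))   # proportional measurement error
    auc = float(np.sum((c[1:] + c[:-1]) / 2 * np.diff(t)))   # trapezoidal AUC
    log_auc.append(np.log(auc))
    log_cmax.append(np.log(float(c.max())))

for name, v in [("log(AUC)", np.array(log_auc)), ("log(Cmax)", np.array(log_cmax))]:
    z = (v - v.mean()) / v.std()
    print(name, "skewness:", round(float((z ** 3).mean()), 3),
          "excess kurtosis:", round(float((z ** 4).mean() - 3), 3))

Non-zero skewness or excess kurtosis in such summaries is the kind of evidence the authors use to question the routine normality assumption.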
NASA Astrophysics Data System (ADS)
Fang, Kaizheng; Mu, Daobin; Chen, Shi; Wu, Borong; Wu, Feng
2012-06-01
In this study, a prediction model based on an artificial neural network is constructed for surface temperature simulation of a nickel-metal hydride battery. The model is developed from a back-propagation network which is trained by the Levenberg-Marquardt algorithm. Under each ambient temperature of 10 °C, 20 °C, 30 °C and 40 °C, an 8 Ah cylindrical Ni-MH battery is charged at rates of 1 C, 3 C and 5 C to an SOC of 110% in order to provide data for the model training. A linear regression method, together with the mean square error and absolute error, is adopted to check the quality of the model training. The constructed model is shown to be well trained, which guarantees prediction accuracy. The surface temperature of the battery during charging is then predicted by the model under ambient temperatures of 50 °C, 60 °C and 70 °C. The results are validated in good agreement with experimental data. The battery surface temperature is calculated to exceed 90 °C under an ambient temperature of 60 °C if the battery is overcharged at 5 C, which might cause battery safety issues.
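A hedged, minimal analogue of this workflow in Python: fit a small feed-forward network to synthetic (ambient temperature, charge rate, state of charge) → surface temperature data and check the fit with a linear regression of predicted versus observed values. scikit-learn is assumed available; it offers L-BFGS rather than the Levenberg-Marquardt training used in the study, and the data-generating function below is invented for illustration.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([rng.uniform(10, 40, n),           # ambient temperature (deg C)
                     rng.choice([1.0, 3.0, 5.0], n),   # charge rate (C)
                     rng.uniform(0, 1.1, n)])          # state of charge
# Invented surrogate for surface temperature: heating grows with rate and SOC
y = X[:, 0] + 4.0 * X[:, 1] * X[:, 2] ** 2 + rng.normal(0, 0.5, n)

net = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs", max_iter=5000,
                   random_state=0).fit(X[:1500], y[:1500])
pred = net.predict(X[1500:])

# Training-quality checks: linear regression of predicted vs observed, MSE, max abs error
lr = LinearRegression().fit(pred.reshape(-1, 1), y[1500:])
print("slope", round(float(lr.coef_[0]), 3), "intercept", round(float(lr.intercept_), 3))
print("MSE", round(float(np.mean((pred - y[1500:]) ** 2)), 3),
      "max abs error", round(float(np.max(np.abs(pred - y[1500:]))), 3))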
The Influence of the Number of Different Stocks on the Levy-Levy-Solomon Model
NASA Astrophysics Data System (ADS)
Kohl, R.
The stock market model of Levy, Levy and Solomon is simulated with more than one stock to analyze its behavior for a large number of investors. Small markets can lead to realistic-looking prices for one or more stocks. With a large number of investors, prices evolve in a semi-regular fashion when a single stock is simulated. With many stocks, three of the stocks behave semi-regularly and dominate, while the rest are chaotic. In addition, we changed the utility function and checked the results.
A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting
NASA Astrophysics Data System (ADS)
Soltani-Mohammadi, Saeed; Safa, Mohammad
2016-09-01
Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies because if the variogram model parameters are tainted with uncertainty, the latter will spread in the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases use is made of the automatic fitting method on the basis of putting together the geostatistical principles and optimization techniques to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty in the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) in the automatic fitting. Also, since the variogram model function (γ) and number of structures (m) also affect the model quality, a program has been provided in the MATLAB software that can present optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, use has been made of the cross-validation method, and the best model has been introduced to the user as the output. In order to check the capability of the proposed objective function and the procedure, 3 case studies have been presented.
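As a hedged sketch of the idea (not the authors' MATLAB program), the Python snippet below fits a single spherical variogram model to an experimental variogram by simulated annealing, using a weighted least-squares objective with weights proportional to the number of pairs per lag; all numbers are illustrative.

import math, random

def spherical(h, c0, c, a):
    # Nugget c0, partial sill c, range a
    if h >= a:
        return c0 + c
    return c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)

def wls(params, lags, gamma_exp, npairs):
    c0, c, a = params
    return sum(n * (gamma_exp[i] - spherical(lags[i], c0, c, a)) ** 2
               for i, n in enumerate(npairs))

def anneal(lags, gamma_exp, npairs, t0=1.0, cooling=0.995, iters=20000):
    cur = [0.1, 1.0, max(lags) / 2]                 # initial nugget, sill, range
    best, f_cur = list(cur), wls(cur, lags, gamma_exp, npairs)
    f_best, temp = f_cur, t0
    for _ in range(iters):
        cand = [max(1e-6, p * (1 + random.uniform(-0.1, 0.1))) for p in cur]
        f_cand = wls(cand, lags, gamma_exp, npairs)
        if f_cand < f_cur or random.random() < math.exp((f_cur - f_cand) / temp):
            cur, f_cur = cand, f_cand               # Metropolis acceptance
            if f_cur < f_best:
                best, f_best = list(cur), f_cur
        temp *= cooling
    return best, f_best

# Illustrative experimental variogram: lags, gamma values, and pairs per lag
lags      = [10, 20, 30, 40, 50, 60]
gamma_exp = [0.35, 0.62, 0.80, 0.92, 0.97, 1.01]
npairs    = [120, 250, 300, 280, 220, 150]
print(anneal(lags, gamma_exp, npairs))

A nested (multi-structure) fit would simply sum several such component models inside wls and let the annealer perturb all of their parameters.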
Propel: Tools and Methods for Practical Source Code Model Checking
NASA Technical Reports Server (NTRS)
Mansouri-Samani, Massoud; Mehlitz, Peter; Markosian, Lawrence; OMalley, Owen; Martin, Dale; Moore, Lantz; Penix, John; Visser, Willem
2003-01-01
The work reported here is an overview and snapshot of a project to develop practical model checking tools for in-the-loop verification of NASA s mission-critical, multithreaded programs in Java and C++. Our strategy is to develop and evaluate both a design concept that enables the application of model checking technology to C++ and Java, and a model checking toolset for C++ and Java. The design concept and the associated model checking toolset is called Propel. It builds upon the Java PathFinder (JPF) tool, an explicit state model checker for Java applications developed by the Automated Software Engineering group at NASA Ames Research Center. The design concept that we are developing is Design for Verification (D4V). This is an adaption of existing best design practices that has the desired side-effect of enhancing verifiability by improving modularity and decreasing accidental complexity. D4V, we believe, enhances the applicability of a variety of V&V approaches; we are developing the concept in the context of model checking. The model checking toolset, Propel, is based on extending JPF to handle C++. Our principal tasks in developing the toolset are to build a translator from C++ to Java, productize JPF, and evaluate the toolset in the context of D4V. Through all these tasks we are testing Propel capabilities on customer applications.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-24
... the decision making process. ADDRESSES: Copies of the ROD will be available in an electronic format... at the Savage River Check Station. The 160-vehicle limit is derived from traffic model simulation...
NASA Astrophysics Data System (ADS)
Wegehenkel, M.
In this paper, long-term effects of different afforestation scenarios on landscape water balance will be analyzed taking into account the results of a regional case study. This analysis is based on using a GIS-coupled simulation model for the spatially distributed calculation of water balance. For this purpose, the modelling system THESEUS with a simple GIS-interface will be used. To take into account the special case of change in forest cover proportion, THESEUS was enhanced with a simple forest growth model. In the regional case study, model runs will be performed using a detailed spatial data set from North-East Germany. This data set covers a mesoscale catchment located in the moraine landscape of North-East Germany. Based on this data set, the influence of the actual landuse and of different landuse change scenarios on water balance dynamics will be investigated taking into account the spatially distributed modelling results from THESEUS. The model was tested using different experimental data sets from field plots as well as observed catchment discharge. In addition to such conventional validation techniques, remote sensing data were used to check the simulated regional distribution of water balance components like evapotranspiration in the catchment.
Review: Modelling chemical kinetics and convective heating in giant planet entries
NASA Astrophysics Data System (ADS)
Reynier, Philippe; D'Ammando, Giuliano; Bruno, Domenico
2018-01-01
A review of the existing chemical kinetics models for H2 / He mixtures and related transport and thermodynamic properties is presented as a pre-requisite towards the development of innovative models based on the state-to-state approach. A survey of the available results obtained during the mission preparation and post-flight analyses of the Galileo mission has been undertaken and a computational matrix has been derived. Different chemical kinetics schemes for hydrogen/helium mixtures have been applied to numerical simulations of the selected points along the entry trajectory. First, a reacting scheme, based on literature data, has been set up for computing the flow-field around the probe at high altitude and comparisons with existing numerical predictions are performed. Then, a macroscopic model derived from a state-to-state model has been constructed and incorporated into a CFD code. Comparisons with existing numerical results from the literature have been performed as well as cross-check comparisons between the predictions provided by the different models in order to evaluate the potential of innovative chemical kinetics models based on the state-to-state approach.
The Worm Propagation Model with Dual Dynamic Quarantine Strategy
NASA Astrophysics Data System (ADS)
Yao, Yu; Xie, Xiao-Wu; Guo, Hao; Gao, Fu-Xiang; Yu, Ge
Internet worms are becoming more and more harmful with the rapid development of the Internet. Due to the extremely fast spread and great destructive power of network worms, strong dynamic quarantine strategies are necessary. Inspired by the real-world approach to the prevention and treatment of infectious diseases, this paper proposes a quarantine strategy based on a dynamic worm propagation model: the SIQRV dual quarantine model. This strategy uses a dynamic quarantine method to quarantine both vulnerable and infected hosts, and then releases them after a certain period of time, regardless of whether the security of the quarantined hosts has been checked. Through mathematical modeling, it has been found that when the basic reproduction number R0 is less than a critical value, the system will stabilize at the disease-free equilibrium, that is, in theory, the infected hosts will be completely immune. Finally, by comparing the simulation results and numerical analysis, the basic agreement between the two curves supports the validity of the mathematical model. Our future work will focus on taking both the delay and the double-quarantine strategy into account and further expanding the scale of our simulation work.
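As a hedged illustration of the modelling style (a generic SIR-type compartmental system with quarantine of both susceptible and infected hosts, not the paper's exact SIQRV equations), the following Python/scipy sketch integrates the ODEs and compares two quarantine rates; all parameter values are invented.

import numpy as np
from scipy.integrate import odeint

def model(y, t, beta, q_s, q_i, release, recover):
    # S, I, Qs, Qi, R: susceptible, infected, quarantined-susceptible/-infected, removed
    S, I, Qs, Qi, R = y
    dS  = -beta * S * I - q_s * S + release * Qs
    dI  =  beta * S * I - q_i * I - recover * I
    dQs =  q_s * S - release * Qs
    dQi =  q_i * I - release * Qi      # released infected hosts assumed patched in this toy version
    dR  =  recover * I + release * Qi
    return [dS, dI, dQs, dQi, dR]

t = np.linspace(0, 200, 2000)
y0 = [0.999, 0.001, 0.0, 0.0, 0.0]
for q in (0.0, 0.05):                                # without / with dynamic quarantine
    sol = odeint(model, y0, t, args=(0.5, q, q, 0.02, 0.05))
    print(f"quarantine rate {q}: peak infected fraction = {sol[:, 1].max():.3f}")

The qualitative point mirrors the abstract: raising the quarantine rates lowers the effective reproduction number, and below the critical value the infected fraction decays to the disease-free equilibrium.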
Foundations of the Bandera Abstraction Tools
NASA Technical Reports Server (NTRS)
Hatcliff, John; Dwyer, Matthew B.; Pasareanu, Corina S.; Robby
2003-01-01
Current research is demonstrating that model-checking and other forms of automated finite-state verification can be effective for checking properties of software systems. Due to the exponential costs associated with model-checking, multiple forms of abstraction are often necessary to obtain system models that are tractable for automated checking. The Bandera Tool Set provides multiple forms of automated support for compiling concurrent Java software systems to models that can be supplied to several different model-checking tools. In this paper, we describe the foundations of Bandera's data abstraction mechanism which is used to reduce the cardinality (and the program's state-space) of data domains in software to be model-checked. From a technical standpoint, the form of data abstraction used in Bandera is simple, and it is based on classical presentations of abstract interpretation. We describe the mechanisms that Bandera provides for declaring abstractions, for attaching abstractions to programs, and for generating abstracted programs and properties. The contributions of this work are the design and implementation of various forms of tool support required for effective application of data abstraction to software components written in a programming language like Java which has a rich set of linguistic features.
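A hedged toy illustration of the underlying idea (classical sign abstraction from abstract interpretation, not Bandera's actual tooling): integers are mapped to the abstract domain {NEG, ZERO, POS, TOP}, and arithmetic is re-interpreted over that domain, shrinking the state space a checker must explore. Names and operations are illustrative.

import itertools

NEG, ZERO, POS, TOP = "NEG", "ZERO", "POS", "TOP"

def alpha(n):
    # Abstraction function: concrete int -> abstract sign value
    return ZERO if n == 0 else (POS if n > 0 else NEG)

def abs_add(a, b):
    # Abstract addition over signs; TOP means "could be anything"
    if ZERO in (a, b):
        return b if a == ZERO else a
    if a == b:
        return a            # POS + POS = POS, NEG + NEG = NEG
    return TOP              # mixed signs (or TOP operands) are unknown

def abs_mul(a, b):
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

# Soundness spot-check: the abstract result covers the abstraction of the concrete result
for x, y in itertools.product([-3, -1, 0, 2, 7], repeat=2):
    assert abs_add(alpha(x), alpha(y)) in (alpha(x + y), TOP)
    assert abs_mul(alpha(x), alpha(y)) in (alpha(x * y), TOP)
print("sign abstraction is sound on the sampled values")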
Model based systems engineering for astronomical projects
NASA Astrophysics Data System (ADS)
Karban, R.; Andolfato, L.; Bristow, P.; Chiozzi, G.; Esselborn, M.; Schilling, M.; Schmid, C.; Sommer, H.; Zamparelli, M.
2014-08-01
Model Based Systems Engineering (MBSE) is an emerging field of systems engineering for which the System Modeling Language (SysML) is a key enabler for descriptive, prescriptive and predictive models. This paper surveys some of the capabilities, expectations and peculiarities of tools-assisted MBSE experienced in real-life astronomical projects. The examples range in depth and scope across a wide spectrum of applications (for example documentation, requirements, analysis, trade studies) and purposes (addressing a particular development need, or accompanying a project throughout many - if not all - its lifecycle phases, fostering reuse and minimizing ambiguity). From the beginnings of the Active Phasing Experiment, through VLT instrumentation, VLTI infrastructure, Telescope Control System for the E-ELT, until Wavefront Control for the E-ELT, we show how stepwise refinements of tools, processes and methods have provided tangible benefits to customary system engineering activities like requirement flow-down, design trade studies, interfaces definition, and validation, by means of a variety of approaches (like Model Checking, Simulation, Model Transformation) and methodologies (like OOSEM, State Analysis)
Extensions to the visual predictive check to facilitate model performance evaluation.
Post, Teun M; Freijer, Jan I; Ploeger, Bart A; Danhof, Meindert
2008-04-01
The Visual Predictive Check (VPC) is a valuable and supportive instrument for evaluating model performance. However in its most commonly applied form, the method largely depends on a subjective comparison of the distribution of the simulated data with the observed data, without explicitly quantifying and relating the information in both. In recent adaptations to the VPC this drawback is taken into consideration by presenting the observed and predicted data as percentiles. In addition, in some of these adaptations the uncertainty in the predictions is represented visually. However, it is not assessed whether the expected random distribution of the observations around the predicted median trend is realised in relation to the number of observations. Moreover the influence of and the information residing in missing data at each time point is not taken into consideration. Therefore, in this investigation the VPC is extended with two methods to support a less subjective and thereby more adequate evaluation of model performance: (i) the Quantified Visual Predictive Check (QVPC) and (ii) the Bootstrap Visual Predictive Check (BVPC). The QVPC presents the distribution of the observations as a percentage, thus regardless the density of the data, above and below the predicted median at each time point, while also visualising the percentage of unavailable data. The BVPC weighs the predicted median against the 5th, 50th and 95th percentiles resulting from a bootstrap of the observed data median at each time point, while accounting for the number and the theoretical position of unavailable data. The proposed extensions to the VPC are illustrated by a pharmacokinetic simulation example and applied to a pharmacodynamic disease progression example.
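A hedged sketch of the two quantities in Python (not the authors' implementation): for each time point, the fraction of available observations above and below the model-predicted median (QVPC-like), and a bootstrap band for the observed median against which the predicted median is compared (BVPC-like). The data and the predicted median are simulated for illustration, and the handling of missing data is omitted.

import numpy as np

rng = np.random.default_rng(2)
times = np.arange(1, 9)
pred_median = 10 * np.exp(-0.2 * times)                 # illustrative model prediction

obs = {t: 10 * np.exp(-0.22 * t) * np.exp(rng.normal(0, 0.2, size=rng.integers(8, 20)))
       for t in times}                                   # unequal numbers of observations per time

for t in times:
    y = obs[t]
    above = float(np.mean(y > pred_median[t - 1])) * 100
    below = 100 - above
    # Bootstrap the observed median and compare the predicted median with its 5th-95th band
    boot = np.array([np.median(rng.choice(y, size=y.size, replace=True))
                     for _ in range(1000)])
    lo, mid, hi = np.percentile(boot, [5, 50, 95])
    flag = "inside" if lo <= pred_median[t - 1] <= hi else "OUTSIDE"
    print(f"t={t}: {above:.0f}% above / {below:.0f}% below predicted median; "
          f"bootstrap median band [{lo:.2f}, {hi:.2f}] -> prediction {flag}")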
Astronauts Koichi Wakata (left) and Daniel T. Barry check the settings on a 35mm camera during an
NASA Technical Reports Server (NTRS)
1996-01-01
STS-72 TRAINING VIEW --- Astronauts Koichi Wakata (left) and Daniel T. Barry check the settings on a 35mm camera during an STS-72 training session. Wakata is a mission specialist, representing Japan's National Space Development Agency (NASDA) and Barry is a United States astronaut assigned as mission specialist for the same mission. The two are on the aft flight deck of the fixed base Shuttle Mission Simulator (SMS) at the Johnson Space Center (JSC).
A verification library for multibody simulation software
NASA Technical Reports Server (NTRS)
Kim, Sung-Soo; Haug, Edward J.; Frisch, Harold P.
1989-01-01
A multibody dynamics verification library that maintains and manages test and validation data is proposed, based on RRC Robot arm and CASE backhoe validation and a comparative study of DADS, DISCOS, and CONTOPS, which are existing public-domain and commercial multibody dynamics simulation programs. Using simple representative problems, simulation results from each program are cross-checked, and the validation results are presented. Functionalities of the verification library are defined in order to automate the validation procedure.
A jazz-based approach for optimal setting of pressure reducing valves in water distribution networks
NASA Astrophysics Data System (ADS)
De Paola, Francesco; Galdiero, Enzo; Giugni, Maurizio
2016-05-01
This study presents a model for valve setting in water distribution networks (WDNs), with the aim of reducing the level of leakage. The approach is based on the harmony search (HS) optimization algorithm. The HS mimics a jazz improvisation process able to find the best solutions, in this case corresponding to valve settings in a WDN. The model also interfaces with the improved version of a popular hydraulic simulator, EPANET 2.0, to check the hydraulic constraints and to evaluate the performances of the solutions. Penalties are introduced in the objective function in case of violation of the hydraulic constraints. The model is applied to two case studies, and the obtained results in terms of pressure reductions are comparable with those of competitive metaheuristic algorithms (e.g. genetic algorithms). The results demonstrate the suitability of the HS algorithm for water network management and optimization.
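A hedged, minimal harmony search in Python, applied to a toy constrained minimisation rather than a real EPANET-coupled network (the hydraulic simulation and the actual leakage objective are not reproduced here); the parameter names follow common HS terminology and the numbers are illustrative.

import random

def objective(x):
    # Toy stand-in for "leakage": sum of squares, with a penalty if the
    # "pressure constraint" sum(x) >= 3 is violated (mimics the feasibility checks
    # that would otherwise come from a hydraulic simulator)
    f = sum(v * v for v in x)
    penalty = 1e3 * max(0.0, 3.0 - sum(x)) ** 2
    return f + penalty

def harmony_search(dim=4, hms=20, hmcr=0.9, par=0.3, bw=0.2,
                   bounds=(0.0, 5.0), iters=5000):
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                       # memory consideration
                v = random.choice(memory)[d]
                if random.random() < par:                    # pitch adjustment
                    v += random.uniform(-bw, bw)
            else:                                            # random selection
                v = random.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))
        f_new = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if f_new < scores[worst]:                            # replace worst harmony
            memory[worst], scores[worst] = new, f_new
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

print(harmony_search())

In the paper's setting each decision variable would be a valve setting, and objective() would call the hydraulic solver and add penalties when pressure constraints are violated.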
Rotolo, Federico; Paoletti, Xavier; Michiels, Stefan
2018-03-01
Surrogate endpoints are attractive for use in clinical trials instead of well-established endpoints because of practical convenience. To validate a surrogate endpoint, two important measures can be estimated in a meta-analytic context when individual patient data are available: the R²_indiv or the Kendall's τ at the individual level, and the R²_trial at the trial level. We aimed at providing an R implementation of classical and well-established as well as more recent statistical methods for surrogacy assessment with failure time endpoints. We also intended to incorporate utilities for model checking and visualization and data generating methods described in the literature to date. In the case of failure time endpoints, the classical approach is based on two steps. First, a Kendall's τ is estimated as measure of individual level surrogacy using a copula model. Then, the R²_trial is computed via a linear regression of the estimated treatment effects; at this second step, the estimation uncertainty can be accounted for via measurement-error model or via weights. In addition to the classical approach, we recently developed an approach based on bivariate auxiliary Poisson models with individual random effects to measure the Kendall's τ and treatment-by-trial interactions to measure the R²_trial. The most common data simulation models described in the literature are based on: copula models, mixed proportional hazard models, and mixture of half-normal and exponential random variables. The R package surrosurv implements the classical two-step method with Clayton, Plackett, and Hougaard copulas. It also allows optionally adjusting the second-step linear regression for measurement-error. The mixed Poisson approach is implemented with different reduced models in addition to the full model. We present the package functions for estimating the surrogacy models, for checking their convergence, for performing leave-one-trial-out cross-validation, and for plotting the results. We illustrate their use in practice on individual patient data from a meta-analysis of 4069 patients with advanced gastric cancer from 20 trials of chemotherapy. The surrosurv package provides an R implementation of classical and recent statistical methods for surrogacy assessment of failure time endpoints. Flexible simulation functions are available to generate data according to the methods described in the literature. Copyright © 2017 Elsevier B.V. All rights reserved.
A MATLAB/Simulink based GUI for the CERES Simulator
NASA Technical Reports Server (NTRS)
Valencia, Luis H.
1995-01-01
The Clouds and The Earth's Radiant Energy System (CERES) simulator will allow flight operational familiarity with the CERES instrument prior to launch. It will provide a CERES instrument simulation facility for NASA Langley Research Center, NASA Goddard Space Flight Center, and TRW. One of the objectives of building this simulator is to provide a testbed for functionality checking of atypical memory uploads and for anomaly investigation. For instance, instrument malfunction due to memory damage requires troubleshooting on a simulator to determine the nature of the problem and to find a solution.
STS-72 crew trains in Fixed Base (FB) Shuttle Mission Simulator (SMS)
1995-06-07
S95-12706 (May 1995) --- Astronaut Koichi Wakata, representing Japan's National Space Development Agency (NASDA) and assigned as mission specialist for the STS-72 mission, checks over a copy of the flight plan. Wakata is on the flight deck of the fixed base Shuttle Mission Simulator (SMS) at the Johnson Space Center (JSC). He will join five NASA astronauts aboard Endeavour for a scheduled nine-day mission, now set for the winter of this year.
Stochastic Game Analysis and Latency Awareness for Self-Adaptation
2014-01-01
In this paper, we introduce a formal analysis technique based on model checking of stochastic multiplayer games (SMGs) that enables us to quantify the ... Key words and phrases: proactive adaptation, stochastic multiplayer games, latency ... When planning how to adapt, self-adaptive ... The contribution of this paper is twofold: (1) a novel analysis technique based on model checking of stochastic multiplayer games (SMGs) that enables us to ...
NASA Astrophysics Data System (ADS)
Ambroglini, Filippo; Jerome Burger, William; Battiston, Roberto; Vitale, Vincenzo; Zhang, Yu
2014-05-01
During the last decades, a few space experiments revealed anomalous bursts of charged particles, mainly electrons with energy larger than a few MeV. A possible source of these bursts is low-frequency seismo-electromagnetic emissions, which can cause the precipitation of electrons from the lower boundary of the inner radiation belt. Studies of these bursts also reported a short-term pre-seismic excess. Starting from simulation tools traditionally used in high energy physics, we developed a dedicated application, SEPS (Space Perturbation Earthquake Simulation), based on the Geant4 tool and the PLANETOCOSMICS program, able to model and simulate the electromagnetic interaction between an earthquake and the particles trapped in the inner Van Allen belt. With SEPS one can study the transport of particles trapped in the Van Allen belts through the Earth's magnetic field, also taking into account possible interactions with the Earth's atmosphere. SEPS makes it possible to test different models of interaction between electromagnetic waves and trapped particles, to define the mechanism of interaction as well as the area in which it takes place, to assess the effects of perturbations in the magnetic field on particle paths, to perform back-tracking analysis, and to model the interaction with electric fields. SEPS is at an advanced development stage, so that it can already be exploited to test in detail the results of correlation analyses between particle bursts and earthquakes based on NOAA and SAMPEX data. The test was performed both with a full simulation analysis (tracing from the position of the earthquake to see whether there were paths compatible with the detected burst) and with a back-tracking analysis (tracing from the burst detection point and checking compatibility with the position of the associated earthquake).
Galvanin, Federico; Ballan, Carlo C; Barolo, Massimiliano; Bezzo, Fabrizio
2013-08-01
The use of pharmacokinetic (PK) and pharmacodynamic (PD) models is a common and widespread practice in the preliminary stages of drug development. However, PK-PD models may be affected by structural identifiability issues intrinsically related to their mathematical formulation. A preliminary structural identifiability analysis is usually carried out to check if the set of model parameters can be uniquely determined from experimental observations under the ideal assumptions of noise-free data and no model uncertainty. However, even for structurally identifiable models, real-life experimental conditions and model uncertainty may strongly affect the practical possibility to estimate the model parameters in a statistically sound way. A systematic procedure coupling the numerical assessment of structural identifiability with advanced model-based design of experiments formulations is presented in this paper. The objective is to propose a general approach to design experiments in an optimal way, detecting a proper set of experimental settings that ensure the practical identifiability of PK-PD models. Two simulated case studies based on in vitro bacterial growth and killing models are presented to demonstrate the applicability and generality of the methodology to tackle model identifiability issues effectively, through the design of feasible and highly informative experiments.
Building Program Verifiers from Compilers and Theorem Provers
2015-05-14
Checking with SMT: UFO is an LLVM-based front-end (partially reused in SeaHorn) that combines abstract interpretation with interpolation-based model checking ... assertions: counter-examples are long, and it is hard to determine (from main) what is relevant ...
[Research on Kalman interpolation prediction model based on micro-region PM2.5 concentration].
Wang, Wei; Zheng, Bin; Chen, Binlin; An, Yaoming; Jiang, Xiaoming; Li, Zhangyong
2018-02-01
In recent years, the pollution problem of particulate matter, especially PM2.5, has become increasingly serious and has attracted attention worldwide. In this paper, a Kalman prediction model combined with cubic spline interpolation is proposed and applied to predict the concentration of PM2.5 in the micro-regional environment of a campus, to produce interpolated concentration maps of PM2.5, and to simulate its spatial distribution. The experimental data are drawn from the environmental information monitoring system set up by our laboratory. The predicted and actual values of PM2.5 concentration were compared using the Wilcoxon signed-rank test; the two-sided asymptotic significance probability was 0.527, which is much greater than the significance level α = 0.05. The mean absolute error (MAE) of the Kalman prediction model was 1.8 μg/m³, the average relative error (MER) was 6%, and the correlation coefficient R was 0.87. Thus, the Kalman prediction model predicts the concentration of PM2.5 better than back-propagation (BP) and support vector machine (SVM) prediction. In addition, by combining the Kalman prediction model with the spline interpolation method, the spatial distribution and local pollution characteristics of PM2.5 can be simulated.
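A hedged illustration of the two ingredients in Python: a scalar random-walk Kalman filter for one-step-ahead prediction of PM2.5 at a monitoring point, and cubic spline interpolation between monitoring points. The noise variances, site positions, and data are invented, and this is not the authors' model.

import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(3)
truth = 40 + np.cumsum(rng.normal(0, 1.0, 100))       # latent PM2.5 level (ug/m3)
meas = truth + rng.normal(0, 3.0, 100)                # noisy sensor readings

# Scalar Kalman filter with random-walk state model: x_k = x_{k-1} + w_k
q, r = 1.0, 9.0                                       # process / measurement variances (assumed)
x, p, preds = meas[0], 1.0, []
for z in meas[1:]:
    x_pred, p_pred = x, p + q                         # predict
    preds.append(x_pred)                              # one-step-ahead prediction
    k = p_pred / (p_pred + r)                         # Kalman gain
    x = x_pred + k * (z - x_pred)                     # update
    p = (1 - k) * p_pred

mae = np.mean(np.abs(np.array(preds) - truth[1:]))
print(f"mean absolute prediction error: {mae:.2f} ug/m3")

# Spatial interpolation between monitoring points along a transect (positions in metres)
sites = np.array([0.0, 50.0, 120.0, 200.0])
levels = np.array([38.0, 44.0, 41.0, 35.0])           # illustrative simultaneous readings
grid = CubicSpline(sites, levels)(np.linspace(0, 200, 9))
print(np.round(grid, 1))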
Formal Verification of Quasi-Synchronous Systems
2015-07-01
Decision Engines for Software Analysis Using Satisfiability Modulo Theories Solvers
NASA Technical Reports Server (NTRS)
Bjorner, Nikolaj
2010-01-01
The area of software analysis, testing and verification is now undergoing a revolution thanks to the use of automated and scalable support for logical methods. A well-recognized premise is that at the core of software analysis engines is invariably a component using logical formulas for describing states and transformations between system states. The process of using this information for discovering and checking program properties (including such important properties as safety and security) amounts to automatic theorem proving. In particular, theorem provers that directly support common software constructs offer a compelling basis. Such provers are commonly called satisfiability modulo theories (SMT) solvers. Z3 is a state-of-the-art SMT solver. It is developed at Microsoft Research. It can be used to check the satisfiability of logical formulas over one or more theories such as arithmetic, bit-vectors, lists, records and arrays. The talk describes some of the technology behind modern SMT solvers, including the solver Z3. Z3 is currently mainly targeted at solving problems that arise in software analysis and verification. It has been applied to various contexts, such as systems for dynamic symbolic simulation (Pex, SAGE, Vigilante), for program verification and extended static checking (Spec#/Boogie, VCC, HAVOC), for software model checking (Yogi, SLAM), model-based design (FORMULA), security protocol code (F7), program run-time analysis and invariant generation (VS3). We will describe how it integrates support for a variety of theories that arise naturally in the context of the applications. There are several new promising avenues and the talk will touch on some of these and the challenges related to SMT solvers.
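A minimal example of the kind of satisfiability query such a solver handles, using Z3's Python bindings (the z3-solver package is assumed to be installed); the constraints themselves are illustrative.

from z3 import Ints, Solver, sat

x, y = Ints("x y")
s = Solver()
# Ask whether there exist integers with x + 2*y == 7, x > 0, y > 0, and x < y
s.add(x + 2 * y == 7, x > 0, y > 0, x < y)
if s.check() == sat:
    print("satisfiable:", s.model())      # e.g. x = 1, y = 3
else:
    print("unsatisfiable")

Software analysis engines generate formulas like this (but over richer theories such as bit-vectors and arrays) to encode path conditions and verification conditions, and interpret a satisfying model as a concrete witness or counterexample.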
South Asian summer monsoon breaks: Process-based diagnostics in HIRHAM5
NASA Astrophysics Data System (ADS)
Hanf, Franziska S.; Annamalai, H.; Rinke, Annette; Dethloff, Klaus
2017-05-01
This study assesses the ability of a high-resolution downscaling simulation with the regional climate model (RCM) HIRHAM5 in capturing the monsoon basic state and boreal summer intraseasonal variability (BSISV) over South Asia with focus on moist and radiative processes during 1979-2012. A process-based vertically integrated moist static energy (MSE) budget is performed to understand the model's fidelity in representing leading processes that govern the monsoon breaks over continental India. In the climatology (June-September) HIRHAM5 simulates a dry bias over central India in association with descent throughout the free troposphere. Sources of dry bias are interpreted as (i) near-equatorial Rossby wave response forced by excess rainfall over the southern Bay of Bengal promotes anomalous descent to its northwest and (ii) excessive rainfall over near-equatorial Arabian Sea and Bay of Bengal anchor a "local Hadley-type" circulation with descent anomalies over continental India. Compared with observations HIRHAM5 captures the leading processes that account for breaks, although with generally reduced amplitudes over central India. In the model too, anomalous dry advection and net radiative cooling are responsible for the initiation and maintenance of breaks, respectively. However, weaker contributions of all adiabatic MSE budget terms, and an inconsistent relationship between negative rainfall anomalies and radiative cooling reveals shortcomings in HIRHAM5's moisture-radiation interaction. Our study directly implies that process-based budget diagnostics are necessary, apart from just checking the northward propagation feature to examine RCM's fidelity to simulate BSISV.
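For reference, a commonly used form of the vertically integrated moist static energy budget underlying such diagnostics is sketched below in LaTeX; this is the textbook column-integrated form, and the paper's exact notation or term grouping may differ.

\[
\left\langle \frac{\partial h}{\partial t} \right\rangle
= -\left\langle \mathbf{v}\cdot\nabla h \right\rangle
  - \left\langle \omega \frac{\partial h}{\partial p} \right\rangle
  + F_{\mathrm{net}},
\qquad
h = c_p T + g z + L_v q,
\]

where \(\langle\cdot\rangle\) denotes a mass-weighted vertical integral over the troposphere, the first two right-hand terms are horizontal and vertical advection of moist static energy, and \(F_{\mathrm{net}}\) collects the net surface turbulent (latent and sensible heat) and radiative fluxes into the column. Anomalous dry advection and net radiative cooling, as diagnosed in the study, correspond to anomalies in the advection terms and in the radiative part of \(F_{\mathrm{net}}\), respectively.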
Detection of IMRT delivery errors based on a simple constancy check of transit dose by using an EPID
NASA Astrophysics Data System (ADS)
Baek, Tae Seong; Chung, Eun Ji; Son, Jaeman; Yoon, Myonggeun
2015-11-01
Beam delivery errors during intensity modulated radiotherapy (IMRT) were detected based on a simple constancy check of the transit dose by using an electronic portal imaging device (EPID). Twenty-one IMRT plans were selected from various treatment sites, and the transit doses during treatment were measured by using an EPID. Transit doses were measured 11 times for each course of treatment, and the constancy check was based on gamma index (3%/3 mm) comparisons between a reference dose map (the first measured transit dose) and test dose maps (the following ten measured dose maps). In a simulation using an anthropomorphic phantom, the average passing rate of the tested transit dose was 100% for three representative treatment sites (head & neck, chest, and pelvis), indicating that IMRT was highly constant for normal beam delivery. The average passing rate of the transit dose for 1224 IMRT fields from 21 actual patients was 97.6% ± 2.5%, with the lower rate possibly being due to inaccuracies of patient positioning or anatomic changes. An EPID-based simple constancy check may provide information about IMRT beam delivery errors during treatment.
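A hedged, simplified global gamma index (3%/3 mm) pass-rate computation in Python for two 2-D dose maps on a regular grid; real EPID analyses involve calibration, resampling, and low-dose thresholds that are omitted here, and the test maps below are synthetic.

import numpy as np

def gamma_pass_rate(ref, test, pixel_mm=1.0, dose_crit=0.03, dist_crit=3.0):
    # Global gamma: dose difference normalised to the reference maximum
    dmax = ref.max()
    search = int(np.ceil(dist_crit / pixel_mm))           # search radius in pixels
    ny, nx = ref.shape
    passed = 0
    for i in range(ny):
        for j in range(nx):
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < ny and 0 <= jj < nx:
                        dd = (test[ii, jj] - ref[i, j]) / (dose_crit * dmax)
                        rr = pixel_mm * np.hypot(di, dj) / dist_crit
                        best = min(best, dd * dd + rr * rr)
            passed += best <= 1.0                         # gamma^2 <= 1 means pass
    return 100.0 * passed / ref.size

rng = np.random.default_rng(4)
ref = np.outer(np.hanning(40), np.hanning(40)) * 200.0    # synthetic reference dose map
test = ref * (1 + rng.normal(0, 0.01, ref.shape))         # test map with 1% noise
print(f"gamma (3%/3 mm) passing rate: {gamma_pass_rate(ref, test):.1f}%")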
STS-72 crew trains in Fixed Base (FB) Shuttle Mission Simulator (SMS)
1995-06-07
S95-12703 (May 1995) --- Astronauts Koichi Wakata (left) and Daniel T. Barry check the settings on a 35mm camera during an STS-72 training session. Wakata is a mission specialist, representing Japan's National Space Development Agency (NASDA) and Barry is a United States astronaut assigned as mission specialist for the same mission. The two are on the aft flight deck of the fixed base Shuttle Mission Simulator (SMS) at the Johnson Space Center (JSC).
14 CFR 135.337 - Qualifications: Check airmen (aircraft) and check airmen (simulator).
Code of Federal Regulations, 2010 CFR
2010-01-01
... satisfactorily completed the training phases for the aircraft, including recurrent training, that are required to... under this part; (2) Has satisfactorily completed the appropriate training phases for the aircraft... ON BOARD SUCH AIRCRAFT Training § 135.337 Qualifications: Check airmen (aircraft) and check airmen...
K+-nucleus scattering using K → μν decays as a normalization check
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michael, R.; Hicks, K.; Bart, S.
1995-04-01
Elastic scattering of 720 and 620 MeV/c positive kaons from targets of ¹²C and ⁶Li has been measured up to laboratory angles of 42°. Since the magnitude of the cross sections is sensitive to nuclear medium effects, the K → μν decay mode has been used to check the normalization. GEANT has been used to mimic the kaon decays over a path length of 12 cm, with a correlated beam structure matching the experimental kaon beam. The corresponding muon distribution has been passed through Monte Carlo simulations of the Moby Dick spectrometer. The results are compared with the experimental number of decay muons with good agreement. These results also agree with the normalization found using p-p elastic scattering. The normalized K⁺ elastic data are compared to recent optical model predictions based on both Klein-Gordon and KDP equations in the impulse approximation.
Aerodynamic Drag Analysis of 3-DOF Flex-Gimbal GyroWheel System in the Sense of Ground Test
Huo, Xin; Feng, Sizhao; Liu, Kangzhi; Wang, Libin; Chen, Weishan
2016-01-01
GyroWheel is an innovative device that combines the actuating capabilities of a control moment gyro with the rate sensing capabilities of a tuned rotor gyro by using a spinning flex-gimbal system. However, in the process of the ground test, the existence of aerodynamic disturbance is inevitable, which hinders the improvement of the specification performance and control accuracy. A vacuum tank test is a possible candidate but is sometimes unrealistic due to the substantial increase in costs and complexity involved. In this paper, the aerodynamic drag problem with respect to the 3-DOF flex-gimbal GyroWheel system is investigated by simulation analysis and experimental verification. Concretely, the angular momentum envelope property of the spinning rotor system is studied and its integral dynamical model is deduced based on the physical configuration of the GyroWheel system with an appropriately defined coordinate system. In the sequel, the fluid numerical model is established and the model geometries are checked with FLUENT software. According to the diversity and time-varying properties of the rotor motions in three-dimensions, the airflow field around the GyroWheel rotor is analyzed by simulation with respect to its varying angular velocity and tilt angle. The IPC-based experimental platform is introduced, and the properties of aerodynamic drag in the ground test condition are obtained through comparing the simulation with experimental results. PMID:27941602
Sediment measurement and transport modeling: impact of riparian and filter strip buffers.
Moriasi, Daniel N; Steiner, Jean L; Arnold, Jeffrey G
2011-01-01
Well-calibrated models are cost-effective tools to quantify the environmental benefits of conservation practices, but lack of data for parameterization and evaluation remains a weakness of modeling. Research was conducted in southwestern Oklahoma within the Cobb Creek subwatershed (CCSW) to develop cost-effective methods to collect stream channel parameterization and evaluation data for modeling in watersheds with sparse data. Specifically, (i) simple stream channel observations obtained by rapid geomorphic assessment (RGA) were used to parameterize the Soil and Water Assessment Tool (SWAT) model stream channel variables before calibrating SWAT for streamflow and sediment, and (ii) the average annual reservoir sedimentation rate, measured at Crowder Lake using the acoustic profiling system (APS), was used to cross-check the Crowder Lake sediment accumulation rate simulated by SWAT. Additionally, the calibrated and cross-checked SWAT model was used to simulate the impacts of riparian forest buffer (RF) and bermudagrass [Cynodon dactylon (L.) Pers.] filter strip buffer (BFS) on sediment yield and concentration in the CCSW. The measured average annual sedimentation rate was between 1.7 and 3.5 t ha⁻¹ yr⁻¹, compared with a simulated sediment rate of 2.4 t ha⁻¹ yr⁻¹. Application of BFS across cropped fields resulted in a 72% reduction of sediment delivery to the stream, while the RF and the combined RF and BFS reduced the suspended sediment concentration at the CCSW outlet by 68 and 73%, respectively. Effective riparian practices have the potential to increase reservoir life. These results indicate promise for using the RGA and APS methods to obtain data to improve water quality simulations in ungauged watersheds.
Sampling ARG of multiple populations under complex configurations of subdivision and admixture.
Carrieri, Anna Paola; Utro, Filippo; Parida, Laxmi
2016-04-01
Simulating complex evolution scenarios of multiple populations is an important task for answering many basic questions relating to population genomics. Apart from the population samples, the underlying Ancestral Recombination Graph (ARG) is an additional important means in hypothesis checking and reconstruction studies. Furthermore, complex simulations require a plethora of interdependent parameters, making even the scenario specification highly non-trivial. We present an algorithm, SimRA, that simulates a generic multiple-population evolution model with admixture. It is based on random graphs and improves dramatically on the time and space requirements of the classical algorithm for single populations. Using the underlying random graph model, we also derive closed forms of the expected values of the ARG characteristics, i.e., height of the graph, number of recombinations, number of mutations, and population diversity, in terms of its defining parameters. This is crucial in aiding the user to specify meaningful parameters for complex scenario simulations, not through trial-and-error based on raw compute power but through intelligent parameter estimation. To the best of our knowledge, this is the first time closed-form expressions have been computed for the ARG properties. We show that the expected values closely match the empirical values obtained through simulations. Finally, we demonstrate through extensive experiments that SimRA produces the ARG in compact form without compromising accuracy. SimRA (Simulation based on Random graph Algorithms) source, executable, user manual and sample input-output sets are available for download at https://github.com/ComputationalGenomics/SimRA CONTACT: parida@us.ibm.com. Supplementary data are available at Bioinformatics online.
Fall 2014 SEI Research Review Probabilistic Analysis of Time Sensitive Systems
2014-10-28
Osmosis SMC Tool — Osmosis is a tool for Statistical Model Checking (SMC) with Semantic Importance Sampling. The input model is written in a subset of C, with ASSERT() statements in the model indicating conditions that must hold, and input probability distributions are defined by the user. Osmosis returns its result after reaching either a target relative error or a set number of simulations. (See also http://dreal.cs.cmu.edu/.)
A novel approach for connecting temporal-ontologies with blood flow simulations.
Weichert, F; Mertens, C; Walczak, L; Kern-Isberner, G; Wagner, M
2013-06-01
In this paper an approach for developing a temporal domain ontology for biomedical simulations is introduced. The ideas are presented in the context of simulations of blood flow in aneurysms using the Lattice Boltzmann Method. The advantages of using ontologies are manifold: on the one hand, ontologies have proven able to provide specialized medical knowledge, e.g., key parameters for simulations. On the other hand, based on a set of rules and the use of a reasoner, a system for checking the plausibility as well as tracking the outcome of medical simulations can be constructed. Likewise, results of simulations, including data derived from them, can be stored and communicated in a way that can be understood by computers, and this set of results can be analyzed later on. At the same time, the ontologies provide a way to exchange knowledge between researchers. Lastly, this approach can be seen as a black-box abstraction of the internals of the simulation for the biomedical researcher as well. The approach is able to provide the complete parameter sets for simulations, part of the corresponding results and part of their analysis, as well as, e.g., geometry and boundary conditions. These inputs can be transferred to different simulation methods for comparison, and variations on the provided parameters can be automatically used to drive these simulations. Using a rule base, unphysical inputs or outputs of the simulation can be detected and communicated to the physician in a suitable and familiar way. An example of an instantiation of the blood flow simulation ontology and exemplary rules for plausibility checking are given.
Assessment of a "stress" responsive-set in the Composite Mood Adjective Check List.
DOT National Transportation Integrated Search
1971-04-01
The effects of response sets that emphasize the appearance of stress in Composite Mood Adjective Check List (CMACL) records were investigated. Responses of 79 subjects asked to simulate stress, and 80 subjects asked to simulate stress in a subtle manner...
Effects of and preference for pay for performance: an analogue analysis.
Long, Robert D; Wilder, David A; Betz, Alison; Dutta, Ami
2012-01-01
We examined the effects of 2 payment systems on the rate of check processing and time spent on task by participants in a simulated work setting. Three participants experienced individual pay-for-performance (PFP) without base pay and pay-for-time (PFT) conditions. In the last phase, we asked participants to choose which system they preferred. For all participants, the PFP condition produced higher rates of check processing and more time spent on task than did the PFT condition, but choice of payment system varied both within and across participants.
Crosstalk eliminating and low-density parity-check codes for photochromic dual-wavelength storage
NASA Astrophysics Data System (ADS)
Wang, Meicong; Xiong, Jianping; Jian, Jiqi; Jia, Huibo
2005-01-01
Multi-wavelength storage is an approach to increase memory density, with the problem of crosstalk to be dealt with. We apply Low-Density Parity-Check (LDPC) codes as error-correcting codes in photochromic dual-wavelength optical storage, based on an investigation of LDPC codes in optical data storage. A proper method is applied to reduce the crosstalk, and simulation results show that this operation is useful for improving Bit Error Rate (BER) performance. At the same time, we can conclude that LDPC codes outperform RS codes in the crosstalk channel.
Simulation and fitting of complex reaction network TPR: The key is the objective function
Savara, Aditya Ashi
2016-07-07
In this research, a method has been developed for finding improved fits during simulation and fitting of data from complex reaction network temperature programmed reactions (CRN-TPR). It was found that simulation and fitting of CRN-TPR presents additional challenges relative to simulation and fitting of simpler TPR systems. The method used here can enable checking the plausibility of proposed chemical mechanisms and kinetic models. The most important finding was that, when choosing an objective function, an objective function based on integrated production provides more utility in finding improved fits than an objective function based on the rate of production. The response surface produced by using the integrated production is monotonic, suppresses effects from experimental noise, requires fewer points to capture the response behavior, and can be simulated numerically with smaller errors. For CRN-TPR, there is increased importance (relative to simple reaction network TPR) in resolving peaks prior to fitting, as well as in weighting of experimental data points. Using an implicit ordinary differential equation solver was found to be inadequate for simulating CRN-TPR. Lastly, the method employed here was capable of attaining improved fits in simulation and fitting of CRN-TPR when starting with a postulated mechanism and physically realistic initial guesses for the kinetic parameters.
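To make the objective-function point concrete, the hedged Python sketch below contrasts a residual computed on the raw rate signal with one computed on cumulative (integrated) production; the toy TPR peak, the noise level, and the shifted "model" curve are all invented for illustration and are not the paper's data or code.

```python
import numpy as np

# Toy comparison of two fit metrics for a TPR-like signal (all values assumed):
# a rate-based sum of squared residuals versus one built on the integrated
# (cumulative) production, which the paper reports to be smoother and less
# sensitive to experimental noise.

def rate_objective(rate_sim, rate_exp):
    """Sum of squared residuals on the raw rate signal."""
    return np.sum((rate_sim - rate_exp) ** 2)

def cumulative(rate, temps):
    """Trapezoidal cumulative production as a function of temperature."""
    inc = np.diff(temps) * 0.5 * (rate[1:] + rate[:-1])
    return np.concatenate(([0.0], np.cumsum(inc)))

def integral_objective(rate_sim, rate_exp, temps):
    """Sum of squared residuals on cumulative production."""
    return np.sum((cumulative(rate_sim, temps) - cumulative(rate_exp, temps)) ** 2)

temps = np.linspace(300.0, 700.0, 401)                      # temperature ramp, K
true_rate = np.exp(-0.5 * ((temps - 480.0) / 25.0) ** 2)    # an invented TPR peak
noisy_exp = true_rate + np.random.default_rng(0).normal(0.0, 0.05, temps.size)
shifted_sim = np.exp(-0.5 * ((temps - 490.0) / 25.0) ** 2)  # a slightly wrong model

print("rate-based objective:       ", round(rate_objective(shifted_sim, noisy_exp), 3))
print("integration-based objective:", round(integral_objective(shifted_sim, noisy_exp, temps), 3))
```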
Simulation of an epidemic model with vector transmission
NASA Astrophysics Data System (ADS)
Dickman, Adriana G.; Dickman, Ronald
2015-03-01
We study a lattice model for vector-mediated transmission of a disease in a population consisting of two species, A and B, which contract the disease from one another. Individuals of species A are sedentary, while those of species B (the vector) diffuse in space. Examples of such diseases are malaria, dengue fever, and Pierce's disease in vineyards. The model exhibits a phase transition between an absorbing (infection free) phase and an active one as parameters such as infection rates and vector density are varied. We study the static and dynamic critical behavior of the model using initial spreading, initial decay, and quasistationary simulations. Simulations are checked against mean-field analysis. Although phase transitions to an absorbing state fall generically in the directed percolation universality class, this appears not to be the case for the present model.
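The abstract gives no code, but a rough SIS-style sketch of the ingredients it describes (sedentary hosts, diffusing vectors, cross-species transmission on shared sites) may help readers picture the lattice dynamics; the lattice size, rates, initial seed, and update order below are assumptions for illustration, not the authors' model.

```python
import numpy as np

# Minimal two-species lattice sketch: sedentary hosts A on an L x L lattice,
# n_vec diffusing vectors B, infection passed only between species at shared
# sites, with simple per-step recovery (all parameter values are assumed).
rng = np.random.default_rng(0)
L, n_vec = 64, 2 * 64 * 64
lam, rec = 0.4, 0.1                               # transmission / recovery probabilities

A_inf = np.zeros((L, L), dtype=bool)              # infection state of hosts
pos = rng.integers(0, L, size=(n_vec, 2))         # vector positions
B_inf = np.zeros(n_vec, dtype=bool)
B_inf[: n_vec // 100] = True                      # small initial infected seed

for t in range(2000):
    pos = (pos + rng.integers(-1, 2, size=(n_vec, 2))) % L   # random walk, periodic
    x, y = pos[:, 0], pos[:, 1]
    B_inf |= A_inf[x, y] & (rng.random(n_vec) < lam)          # host -> vector
    newly = B_inf & (rng.random(n_vec) < lam)
    A_inf[x[newly], y[newly]] = True                          # vector -> host
    A_inf &= rng.random((L, L)) >= rec                        # recoveries
    B_inf &= rng.random(n_vec) >= rec

print("host prevalence:", A_inf.mean(), " vector prevalence:", B_inf.mean())
```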
Pisu, Massimo; Concas, Alessandro; Cao, Giacomo
2015-04-01
Cell cycle regulates proliferative cell capacity under normal or pathologic conditions, and in general it governs all in vivo/in vitro cell growth and proliferation processes. Mathematical simulation by means of reliable and predictive models represents an important tool to interpret experimental results, to facilitate the definition of optimal operating conditions for in vitro cultivation, or to predict the effect of a specific drug in normal/pathologic mammalian cells. Along these lines, a novel model of cell cycle progression is proposed in this work. Specifically, it is based on a population balance (PB) approach that allows one to quantitatively describe cell cycle progression through the different phases experienced by each cell of the entire population during its own life. The transition between two consecutive cell cycle phases is simulated by taking advantage of the biochemical kinetic model developed by Gérard and Goldbeter (2009), which involves cyclin-dependent kinases (CDKs) whose regulation is achieved through a variety of mechanisms that include association with cyclins and protein inhibitors, phosphorylation-dephosphorylation, and cyclin synthesis or degradation. This biochemical model properly describes the entire cell cycle of mammalian cells while maintaining a sufficient level of detail to identify checkpoints for phase transitions and to estimate the phase durations required by the PB model. Specific examples are discussed to illustrate the ability of the proposed model to simulate the effect of drugs for in vitro trials of interest in oncology, regenerative medicine and tissue engineering.
Stress analysis of 27% scale model of AH-64 main rotor hub
NASA Technical Reports Server (NTRS)
Hodges, R. V.
1985-01-01
Stress analysis of an AH-64 27% scale model rotor hub was performed. Component loads and stresses were calculated based upon blade root loads and motions. The static and fatigue analysis indicates positive margins of safety in all components checked. Using the format developed here, the hub can be stress checked for future application.
Determination of MLC model parameters for Monaco using commercial diode arrays.
Kinsella, Paul; Shields, Laura; McCavana, Patrick; McClean, Brendan; Langan, Brian
2016-07-08
Multileaf collimators (MLCs) need to be characterized accurately in treatment planning systems to facilitate accurate intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). The aim of this study was to examine the use of MapCHECK 2 and ArcCHECK diode arrays for optimizing MLC parameters in Monaco X-ray voxel Monte Carlo (XVMC) dose calculation algorithm. A series of radiation test beams designed to evaluate MLC model parameters were delivered to MapCHECK 2, ArcCHECK, and EBT3 Gafchromic film for comparison. Initial comparison of the calculated and ArcCHECK-measured dose distributions revealed it was unclear how to change the MLC parameters to gain agreement. This ambiguity arose due to an insufficient sampling of the test field dose distributions and unexpected discrepancies in the open parts of some test fields. Consequently, the XVMC MLC parameters were optimized based on MapCHECK 2 measurements. Gafchromic EBT3 film was used to verify the accuracy of MapCHECK 2 measured dose distributions. It was found that adjustment of the MLC parameters from their default values resulted in improved global gamma analysis pass rates for MapCHECK 2 measurements versus calculated dose. The lowest pass rate of any MLC-modulated test beam improved from 68.5% to 93.5% with 3% and 2 mm gamma criteria. Given the close agreement of the optimized model to both MapCHECK 2 and film, the optimized model was used as a benchmark to highlight the relatively large discrepancies in some of the test field dose distributions found with ArcCHECK. Comparison between the optimized model-calculated dose and ArcCHECK-measured dose resulted in global gamma pass rates which ranged from 70.0%-97.9% for gamma criteria of 3% and 2 mm. The simple square fields yielded high pass rates. The lower gamma pass rates were attributed to the ArcCHECK overestimating the dose in-field for the rectangular test fields whose long axis was parallel to the long axis of the ArcCHECK. Considering ArcCHECK measurement issues and the lower gamma pass rates for the MLC-modulated test beams, it was concluded that MapCHECK 2 was a more suitable detector than ArcCHECK for the optimization process. © 2016 The Authors
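For readers unfamiliar with the gamma criterion quoted above (3%, 2 mm), the following Python sketch evaluates a 1D gamma index on invented dose profiles; clinical gamma analysis is performed in 2D/3D by the measurement software, so this is only an illustration of the metric, not of the study's workflow.

```python
import numpy as np

# 1D gamma index: for each evaluated point, the minimum over reference points of
# sqrt((dose difference / dose criterion)^2 + (distance / distance criterion)^2);
# a point passes when gamma <= 1.  Profiles and criteria below are illustrative.
def gamma_1d(x_eval, d_eval, x_ref, d_ref, dose_crit=0.03, dist_crit=2.0):
    gammas = []
    norm = dose_crit * d_ref.max()                 # global dose normalisation
    for xe, de in zip(x_eval, d_eval):
        dd = (d_ref - de) / norm
        dx = (x_ref - xe) / dist_crit
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    return np.array(gammas)

x = np.linspace(-50.0, 50.0, 201)                            # position, mm
ref = 1.0 / (1.0 + np.exp((np.abs(x) - 30.0) / 2.0))         # toy reference profile
meas = 0.98 / (1.0 + np.exp((np.abs(x) - 30.5) / 2.0))       # toy measured profile
g = gamma_1d(x, meas, x, ref)
print("gamma pass rate (3%/2 mm):", round(100.0 * np.mean(g <= 1.0), 1), "%")
```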
Initiative towards more affordable flight simulators for U.S. commuter airline training
DOT National Transportation Integrated Search
1996-03-15
Recent regulatory action, coupled to a policy of encouraging commuter airlines to conduct all pilot training and checking activities in ground based equipment, has created an impetus to consider how best to ameliorate the conditions which have discou...
Yu, Jia; Yu, Zhichao; Tang, Chenlong
2016-07-04
The harsh thermal environment of electronic components in the instrument cabin of a spacecraft was investigated, and a new thermal protection structure, namely graphite carbon foam impregnated with a phase-transition material, was adopted to provide thermal control of the electronic components. We used the optimized parameters obtained from ANSYS to conduct 2D optimization, 3D modeling and simulation, as well as the strength check. Finally, the optimization results were verified by experiments. The results showed that, after optimization, the structured carbon-based energy-storing composite material could reduce mass and achieve thermal control of the electronic components. This phase-transition composite material still possesses excellent temperature control performance after repeated melting and solidifying.
Model-Driven Safety Analysis of Closed-Loop Medical Systems
Pajic, Miroslav; Mangharam, Rahul; Sokolsky, Oleg; Arney, David; Goldman, Julian; Lee, Insup
2013-01-01
In modern hospitals, patients are treated using a wide array of medical devices that are increasingly interacting with each other over the network, thus offering a perfect example of a cyber-physical system. We study the safety of a medical device system for the physiologic closed-loop control of drug infusion. The main contribution of the paper is the verification approach for the safety properties of closed-loop medical device systems. We demonstrate, using a case study, that the approach can be applied to a system of clinical importance. Our method combines simulation-based analysis of a detailed model of the system that contains continuous patient dynamics with model checking of a more abstract timed automata model. We show that the relationship between the two models preserves the crucial aspect of the timing behavior that ensures the conservativeness of the safety analysis. We also describe system design that can provide open-loop safety under network failure. PMID:24177176
Multi-Source Cooperative Data Collection with a Mobile Sink for the Wireless Sensor Network
Han, Changcai; Yang, Jinsheng
2017-01-01
The multi-source cooperation integrating distributed low-density parity-check codes is investigated to jointly collect data from multiple sensor nodes to the mobile sink in the wireless sensor network. The one-round and two-round cooperative data collection schemes are proposed according to the moving trajectories of the sink node. Specifically, two sparse cooperation models are firstly formed based on geographical locations of sensor source nodes, the impairment of inter-node wireless channels and moving trajectories of the mobile sink. Then, distributed low-density parity-check codes are devised to match the directed graphs and cooperation matrices related with the cooperation models. In the proposed schemes, each source node has quite low complexity attributed to the sparse cooperation and the distributed processing. Simulation results reveal that the proposed cooperative data collection schemes obtain significant bit error rate performance and the two-round cooperation exhibits better performance compared with the one-round scheme. The performance can be further improved when more source nodes participate in the sparse cooperation. For the two-round data collection schemes, the performance is evaluated for the wireless sensor networks with different moving trajectories and the variant data sizes. PMID:29084155
Collection of Calibration and Validation Data for an Airport Landside Dynamic Simulation Model.
1980-04-01
movements. The volume of skiers passing through Denver is sufficiently large to warrant the installation of special check-in counters for passengers with...Terminal, only seven sectors were used. Training Procedures MIA was the first of the three airports surveyed. A substantial amount of knowledge and
NASA Astrophysics Data System (ADS)
Nili, Samaun; Park, Chanyoung; Haftka, Raphael T.; Kim, Nam H.; Balachandar, S.
2017-11-01
Point particle methods are extensively used in simulating Euler-Lagrange multiphase dispersed flow. When particles are much smaller than the Eulerian grid the point particle model is on firm theoretical ground. However, this standard approach of evaluating the gas-particle coupling at the particle center fails to converge as the Eulerian grid is reduced below particle size. We present an approach to model the interaction between particles and fluid for finite size particles that permits convergence. We use the generalized Faxen form to compute the force on a particle and compare the results against traditional point particle method. We apportion the different force components on the particle to fluid cells based on the fraction of particle volume or surface in the cell. The application is to a one-dimensional model of shock propagation through a particle-laden field at moderate volume fraction, where the convergence is achieved for a well-formulated force model and back coupling for finite size particles. Comparison with 3D direct fully resolved numerical simulations will be used to check if the approach also improves accuracy compared to the point particle model. Work supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.
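The apportioning idea described above can be pictured with a small 1D sketch: the particle's total coupling force is distributed over the Eulerian cells in proportion to the fraction of the particle's extent overlapping each cell, rather than being deposited entirely in the cell containing the particle centre. All geometry and force values below are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

# 1D illustration of volume-fraction weighting for a particle larger than a cell.
dx = 0.1                                   # Eulerian cell size
x_faces = np.arange(0.0, 1.0 + dx, dx)     # cell faces on [0, 1]
xp, rp = 0.37, 0.12                        # particle centre and radius (assumed)
F_total = 2.5                              # total coupling force (arbitrary units)

left, right = xp - rp, xp + rp
overlap = np.clip(np.minimum(x_faces[1:], right) - np.maximum(x_faces[:-1], left),
                  0.0, None)               # particle length falling in each cell
weights = overlap / overlap.sum()
F_cells = F_total * weights                # force returned to the fluid, per cell

print(np.round(F_cells, 3), " sum =", round(F_cells.sum(), 3))
```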
Formal Validation of Fault Management Design Solutions
NASA Technical Reports Server (NTRS)
Gibson, Corrina; Karban, Robert; Andolfato, Luigi; Day, John
2013-01-01
The work presented in this paper describes an approach used to develop SysML modeling patterns to express the behavior of fault protection, test the model's logic by performing fault injection simulations, and verify the fault protection system's logical design via model checking. A representative example, using a subset of the fault protection design for the Soil Moisture Active-Passive (SMAP) system, was modeled with SysML State Machines and JavaScript as Action Language. The SysML model captures interactions between relevant system components and system behavior abstractions (mode managers, error monitors, fault protection engine, and devices/switches). Development of a method to implement verifiable and lightweight executable fault protection models enables future missions to have access to larger fault test domains and verifiable design patterns. A tool-chain to transform the SysML model to jpf-Statechart compliant Java code and then verify the generated code via model checking was established. Conclusions and lessons learned from this work are also described, as well as potential avenues for further research and development.
NASA Technical Reports Server (NTRS)
Grubb, Matt
2016-01-01
The NASA Operational Simulator for Small Satellites (NOS3) is a suite of tools to aid in areas such as software development, integration test (IT), mission operations training, verification and validation (V&V), and software systems check-out. NOS3 provides a software development environment, a multi-target build system, an operator interface/ground station, dynamics and environment simulations, and software-based hardware models. NOS3 enables the development of flight software (FSW) early in the project life cycle, when access to hardware is typically not available. For small satellites there are extensive lead times on many of the commercial-off-the-shelf (COTS) components as well as limited funding for engineering test units (ETU). Considering the difficulty of providing a hardware test-bed to each developer and tester, hardware models are built based upon characteristic data or manufacturers' data sheets for each individual component. The fidelity of each hardware model is such that FSW executes unaware that physical hardware is not present. This allows binaries to be compiled for both the simulation environment and the flight computer without changing the FSW source code. For hardware models that provide data dependent on the environment, such as a GPS receiver or magnetometer, an open-source tool from NASA GSFC (42 Spacecraft Simulation) is used to provide the necessary data. The underlying infrastructure used to transfer messages between FSW and the hardware models can also be used to monitor, intercept, and inject messages, which has proven to be beneficial for V&V of larger missions such as the James Webb Space Telescope (JWST). As hardware is procured, drivers can be added to the environment to enable hardware-in-the-loop (HWIL) testing. When strict time synchronization is not vital, any number of combinations of hardware components and software-based models can be tested. The open-source operator interface used in NOS3 is COSMOS from Ball Aerospace. For testing, plug-ins are implemented in COSMOS to control the NOS3 simulations, while the command and telemetry tools available in COSMOS are used to communicate with FSW. NOS3 is actively being used for FSW development and component testing of the Simulation-to-Flight 1 (STF-1) CubeSat. As NOS3 matures, hardware models have been added for common CubeSat components such as Novatel GPS receivers, ClydeSpace electrical power systems and batteries, ISISpace antenna systems, etc. In the future, NASA IV&V plans to distribute NOS3 to other CubeSat developers and release the suite to the open-source community.
Trajectory Design to Mitigate Risk on the Transiting Exoplanet Survey Satellite (TESS) Mission
NASA Technical Reports Server (NTRS)
Dichmann, Donald
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will employ a highly eccentric Earth orbit, in 2:1 lunar resonance, reached with a lunar flyby preceded by 3.5 phasing loops. The TESS mission has limited propellant and several orbit constraints. Based on analysis and simulation, we have designed the phasing loops to reduce delta-V and to mitigate risk due to maneuver execution errors. We have automated the trajectory design process and use distributed processing to generate and to optimize nominal trajectories, check constraint satisfaction, and finally model the effects of maneuver errors to identify trajectories that best meet the mission requirements.
NASA Astrophysics Data System (ADS)
Orra, Kashfull; Choudhury, Sounak K.
2016-12-01
The purpose of this paper is to build an adaptive feedback linear control system that monitors the variation of the cutting force signal to improve tool life. The paper discusses the use of a transfer function approach to improve the mathematical modelling and to adaptively control the process dynamics of the turning operation. The experimental results are in agreement with the simulation model, with an error of less than 3%. The state space model used in this paper successfully checks the adequacy of the control system through the controllability and observability test matrices, and the system can be transferred from one state to another by an appropriate input control in a finite time. The proposed system can be applied to other machining processes under a wide range of cutting conditions to improve the efficiency and observability of the system.
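As a reminder of what the controllability and observability tests involve, the short sketch below checks the rank conditions for a generic linear state-space system; the matrices are invented examples, not the turning-process model from the paper.

```python
import numpy as np

# For x' = Ax + Bu, y = Cx, the system is controllable (observable) when the
# controllability (observability) matrix has rank n.  Example matrices assumed.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

n = A.shape[0]
ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

print("controllable:", np.linalg.matrix_rank(ctrb) == n)
print("observable:  ", np.linalg.matrix_rank(obsv) == n)
```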
ERIC Educational Resources Information Center
Wang, Yan; Rodríguez de Gil, Patricia; Chen, Yi-Hsin; Kromrey, Jeffrey D.; Kim, Eun Sook; Pham, Thanh; Nguyen, Diep; Romano, Jeanine L.
2017-01-01
Various tests to check the homogeneity of variance assumption have been proposed in the literature, yet there is no consensus as to their robustness when the assumption of normality does not hold. This simulation study evaluated the performance of 14 tests for the homogeneity of variance assumption in one-way ANOVA models in terms of Type I error…
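The kind of simulation the abstract describes can be sketched in a few lines: generate groups with equal variances but non-normal data, apply one homogeneity-of-variance test, and count rejections. The settings below (test, sample sizes, distribution, replication count) are assumptions chosen for illustration, not the study's design.

```python
import numpy as np
from scipy import stats

# Monte Carlo estimate of the Type I error rate of Levene's test (median-centred,
# i.e. the Brown-Forsythe variant) under skewed but equal-variance groups.
rng = np.random.default_rng(1)
n_rep, n_per_group, k, alpha = 2000, 20, 3, 0.05
rejections = 0
for _ in range(n_rep):
    groups = [rng.exponential(scale=1.0, size=n_per_group) for _ in range(k)]
    _, p = stats.levene(*groups, center="median")
    rejections += p < alpha

print("estimated Type I error rate:", rejections / n_rep)
```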
ERIC Educational Resources Information Center
Lee, Young-Sun; Wollack, James A.; Douglas, Jeffrey
2009-01-01
The purpose of this study was to assess the model fit of a 2PL through comparison with nonparametric item characteristic curve (ICC) estimation procedures. Results indicate that the three nonparametric procedures implemented produced ICCs similar to those of the 2PL for items simulated to fit the 2PL. However for misfitting items,…
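For context, the 2PL item characteristic curve being compared against the nonparametric estimates is P(correct | θ) = 1 / (1 + exp(−a(θ − b))). The toy sketch below simulates responses from one 2PL item and places a simple binned proportion-correct curve (a crude nonparametric ICC) next to the parametric one; the item parameters and binning are assumptions for illustration only.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """2PL item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

rng = np.random.default_rng(0)
a_true, b_true = 1.2, 0.3                       # assumed item parameters
theta = rng.normal(size=5000)
responses = rng.random(theta.size) < icc_2pl(theta, a_true, b_true)

bins = np.linspace(-3.0, 3.0, 13)               # ability bins for the empirical curve
mid = 0.5 * (bins[:-1] + bins[1:])
idx = np.digitize(theta, bins) - 1
for i, m in enumerate(mid):
    mask = idx == i
    if mask.any():
        print(f"theta={m:+.2f}  2PL={icc_2pl(m, a_true, b_true):.2f}  "
              f"empirical={responses[mask].mean():.2f}")
```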
SimCheck: An Expressive Type System for Simulink
NASA Technical Reports Server (NTRS)
Roy, Pritam; Shankar, Natarajan
2010-01-01
MATLAB Simulink is a member of a class of visual languages that are used for modeling and simulating physical and cyber-physical systems. A Simulink model consists of blocks with input and output ports connected using links that carry signals. We extend the type system of Simulink with annotations and dimensions/units associated with ports and links. These types can capture invariants on signals as well as relations between signals. We define a type checker that checks the well-formedness of Simulink blocks with respect to these type annotations. The type checker generates proof obligations that are solved by SRI's Yices solver for satisfiability modulo theories (SMT). This translation can be used to detect type errors, demonstrate counterexamples, generate test cases, or prove the absence of type errors. Our work is an initial step toward the symbolic analysis of MATLAB Simulink models.
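The flavor of the dimensional checks described above can be conveyed with a toy example (this is not SimCheck or Yices, just an illustration of the idea): represent a signal's units as exponents over base units and verify that a block's declared output units follow from its inputs.

```python
from collections import Counter

def mul_units(a, b):
    """Units of a product: exponents add (e.g. m * s^-1 times s gives m)."""
    out = Counter(a)
    out.update(b)
    return {k: v for k, v in out.items() if v != 0}

def check_gain_block(in_units, gain_units, declared_out_units):
    """A gain block multiplies its input by a constant with units of its own;
    the declared output units must equal the inferred product units."""
    return mul_units(in_units, gain_units) == declared_out_units

velocity = {"m": 1, "s": -1}      # assumed example signals
seconds = {"s": 1}
position = {"m": 1}
print(check_gain_block(velocity, seconds, position))    # True: consistent
print(check_gain_block(velocity, seconds, {"m": 2}))    # False: a type error
```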
Bifurcation analysis of parametrically excited bipolar disorder model
NASA Astrophysics Data System (ADS)
Nana, Laurent
2009-02-01
Bipolar II disorder is characterized by alternating hypomanic and major depressive episodes. We model the periodic mood variations of a bipolar II patient with a negatively damped harmonic oscillator. The medications administered to the patient are modeled via a forcing function that is capable of stabilizing the mood variations and of varying their amplitude. We analyze analytically, using a perturbation method, the amplitude and stability of the limit cycles, and check this analysis with numerical simulations.
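A minimal numerical version of this model class can be written down directly; the sketch below integrates a negatively damped oscillator with a periodic forcing term standing in for the medication. The damping, frequency, and forcing values are assumptions for illustration, not the paper's fitted parameters.

```python
import numpy as np
from scipy.integrate import solve_ivp

# x'' - 2*mu*x' + w0^2 * x = F(t), with mu > 0 giving negative damping
# (growing oscillations) and F(t) a periodic "medication" forcing (assumed form).
mu, w0 = 0.05, 1.0
amp, wf = 0.8, 1.0

def rhs(t, y):
    x, v = y
    return [v, 2.0 * mu * v - w0 ** 2 * x + amp * np.cos(wf * t)]

sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.0], max_step=0.05)
print("late-time mood amplitude ~", round(float(np.abs(sol.y[0][-2000:]).max()), 3))
```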
A Simulation Testbed for Adaptive Modulation and Coding in Airborne Telemetry
2014-05-29
The testbed uses ... as its modulation waveforms and low-density parity-check (LDPC) codes with tunable code rates for its forward error correction (FEC) codes. It also uses several sets of published telemetry channel sounding data as its channel models; both static and dynamic telemetry channel models are included. In an effort to maximize the...
Bender, Anne Mette; Kawachi, Ichiro; Jørgensen, Torben; Pisinger, Charlotta
2015-07-22
Participation in population-based preventive health checks has declined over the past decades, and more research is needed to determine the factors that enhance participation. The objective of this study was to examine the association between two measures of neighborhood-level social capital and participation in the health check phase of a population-based lifestyle intervention. The study population comprised 12,568 residents of 73 Danish neighborhoods in the intervention group of a large population-based lifestyle intervention study, the Inter99. Two measures of social capital were applied: informal socializing and voting turnout. In a multilevel analysis adjusting only for age and sex, a higher level of neighborhood social capital was associated with a higher probability of participating in the health check. Inclusion of both individual socioeconomic position and neighborhood deprivation in the model attenuated the coefficients for informal socializing, while voting turnout became non-significant. A higher level of neighborhood social capital was associated with a higher probability of participating in the health check phase of a population-based lifestyle intervention. Most of the association between neighborhood social capital and participation in preventive health checks can be explained by differences in individual socioeconomic position and level of neighborhood deprivation. Nonetheless, there seems to be some residual association between social capital and health check participation, suggesting that activating social relations in the community may be an avenue for boosting participation rates in population-based health checks. ClinicalTrials.gov (registration no. NCT00289237).
Full-chip level MEEF analysis using model based lithography verification
NASA Astrophysics Data System (ADS)
Kim, Juhwan; Wang, Lantian; Zhang, Daniel; Tang, Zongwu
2005-11-01
MEEF (Mask Error Enhancement Factor) has become a critical factor in CD uniformity control since the optical lithography process moved into the sub-resolution era. A lot of studies have been done quantifying the impact of mask CD (Critical Dimension) errors on wafer CD errors [1, 2]. However, the benefits of those studies were restricted to small pattern areas of the full-chip data due to long simulation times. As fast turnaround can be achieved for complicated verifications on very large data sets by linearly scalable distributed processing technology, model-based lithography verification becomes feasible for various types of applications, such as post-mask-synthesis data sign-off for mask tape-out in production and lithography process development with full-chip data [3-5]. In this study, we introduce two useful methodologies for full-chip level verification of the mask error impact on the wafer lithography patterning process. One methodology is to check the MEEF distribution in addition to the CD distribution through the process window, which can be used for RET/OPC optimization at the R&D stage. The other is to check mask error sensitivity on potential pinch and bridge hotspots through lithography process variation, where the outputs can be passed on to mask CD metrology to add CD measurements at those hotspot locations. Two different OPC data sets were compared using the two methodologies in this study.
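As background for the first methodology, MEEF is commonly estimated as the finite-difference ratio of the change in printed (wafer) CD to the change in mask CD quoted at wafer scale. The sketch below shows that calculation with a hypothetical stand-in for the lithography simulator; the function `simulate_wafer_cd` and all numbers are invented for illustration and are not part of any real verification tool's API.

```python
# Finite-difference MEEF estimate around a nominal mask CD (wafer-scale units).
def simulate_wafer_cd(mask_cd_nm):
    """Hypothetical placeholder for a lithography simulation of printed CD."""
    return 0.6 * mask_cd_nm + 0.004 * (mask_cd_nm - 60.0) ** 2 + 20.0

def meef(mask_cd_nm, delta=0.5):
    cd_plus = simulate_wafer_cd(mask_cd_nm + delta)
    cd_minus = simulate_wafer_cd(mask_cd_nm - delta)
    return (cd_plus - cd_minus) / (2.0 * delta)

print("MEEF at a 60 nm mask CD:", round(meef(60.0), 3))
```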
XMI2USE: A Tool for Transforming XMI to USE Specifications
NASA Astrophysics Data System (ADS)
Sun, Wuliang; Song, Eunjee; Grabow, Paul C.; Simmonds, Devon M.
The UML-based Specification Environment (USE) tool supports syntactic analysis, type checking, consistency checking, and dynamic validation of invariants and pre-/post conditions specified in the Object Constraint Language (OCL). Due to its animation and analysis power, it is useful when checking critical non-functional properties such as security policies. However, the USE tool requires one to specify (i.e., "write") a model using its own textual language and does not allow one to import any model specification files created by other UML modeling tools. Hence, to make the best use of existing UML tools, we often create a model with OCL constraints using a modeling tool such as the IBM Rational Software Architect (RSA) and then use the USE tool for model validation. This approach, however, requires a manual transformation between the specifications of two different tool formats, which is error-prone and diminishes the benefit of automated model-level validations. In this paper, we describe our own implementation of a specification transformation engine that is based on the Model Driven Architecture (MDA) framework and currently supports automatic tool-level transformations from RSA to USE.
Tempo: A Toolkit for the Timed Input/Output Automata Formalism
2008-01-30
Tempo supports the generation of distributed code from specifications written in the Timed Input/Output Automata formalism. Many distributed systems involve nondeterministic choices, and modelers may require the simulator to check assertions after every single simulation step; the Tempo simulator addresses this by putting the modeler in charge of resolving the nondeterminism.
Bayesian model checking: A comparison of tests
NASA Astrophysics Data System (ADS)
Lucy, L. B.
2018-06-01
Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
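For readers who want to see how a posterior predictive p-value is computed in practice, the sketch below works through a toy Gaussian example: draw parameters from the posterior, generate replicate data sets, and compare a discrepancy statistic on the replicates with the same statistic on the observed data. The prior, data, and discrepancy choice are assumptions made for illustration, not those of the paper's Hubble-expansion test problem.

```python
import numpy as np

rng = np.random.default_rng(2)
y_obs = rng.normal(loc=0.3, scale=1.0, size=50)     # stand-in "observed" data
n, sigma = y_obs.size, 1.0

# Posterior for the mean under a flat prior with known sigma: N(ybar, sigma^2/n).
post_mu = rng.normal(y_obs.mean(), sigma / np.sqrt(n), size=4000)

def discrepancy(y):
    """An arbitrary test statistic; any model-relevant discrepancy can be used."""
    return np.max(np.abs(y - y.mean()))

count = 0
for mu in post_mu:
    y_rep = rng.normal(mu, sigma, size=n)           # replicate data set
    count += discrepancy(y_rep) >= discrepancy(y_obs)

print("posterior predictive p-value:", count / post_mu.size)
```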
NASA Astrophysics Data System (ADS)
Vijay Singh, Ran; Agilandeeswari, L.
2017-11-01
To handle the large amount of client data in an open cloud, many security issues need to be addressed. A client's private data should not be disclosed to other group members without the data owner's permission. Sometimes clients are also prevented from accessing open cloud servers because of various restrictions. To overcome these security issues and restrictions related to storage, data sharing in an inter-domain network, and privacy checking, we propose a model based on identity-based cryptography for data transmission, an intermediate entity that holds the client's identity reference and controls data transmission in the open cloud environment, and an extended remote privacy checking technique that operates on the administrator side. On behalf of the data owner's authority, the proposed model offers options for secure cryptography in data transmission and for remote privacy checking in private, public, or instructed modes. The hardness of the Computational Diffie-Hellman assumption underlying the key exchange makes the proposed model more secure than existing models used for the public cloud environment.
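Since the scheme's security argument rests on the Computational Diffie-Hellman assumption, a toy reminder of the underlying exchange may be useful; the tiny prime and generator below are purely illustrative (real systems use standardized large groups or elliptic curves), and this is not the paper's identity-based construction.

```python
import secrets

# Classic Diffie-Hellman: both parties derive g^(ab) mod p from the public
# values g^a and g^b; recovering it from those values alone is the CDH problem.
p = 2 ** 127 - 1          # a prime far too small for real use (illustration only)
g = 3

a = secrets.randbelow(p - 2) + 1      # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1      # Bob's secret exponent
A = pow(g, a, p)                      # exchanged in the clear
B = pow(g, b, p)

shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob     # both sides obtain the same shared secret
print("shared secret established")
```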
Validation of Aquarius Measurements Using Radiative Transfer Models at L-Band
NASA Technical Reports Server (NTRS)
Dinnat, E.; LeVine, David M.; Abraham, S.; DeMattheis, P.; Utku, C.
2012-01-01
Aquarius/SAC-D was launched in June 2011 by NASA and CONAE (Argentine space agency). Aquarius includes three L-band (1.4 GHz) radiometers dedicated to measuring sea surface salinity. We report detailed comparisons of Aquarius measurements with radiative transfer model predictions. These comparisons were used as part of the initial assessment of Aquarius data. In particular, they were used successfully to estimate the radiometer calibration bias and stability. Further comparisons are being performed to assess the performance of models in the retrieval algorithm for correcting the effect of sources of geophysical "noise" (e.g. the galactic background, atmospheric attenuation and reflected signal from the Sun). Such corrections are critical in bringing the error in retrieved salinity down to the required 0.2 practical salinity unit (psu) on monthly global maps at 150 km by 150 km resolution. The forward models making up the Aquarius simulator have been very useful for preparatory studies in the years leading to Aquarius' launch. The simulator includes various components to compute effects of the following processes on the measured signal: 1) emission from Earth surfaces (ocean, land, ice), 2) atmospheric emission and absorption, 3) emission from the Sun, Moon and celestial Sky (directly through the antenna sidelobes or after reflection/scattering at the Earth surface), 4) Faraday rotation, and 5) convolution of the scene by the antenna gain patterns. Since the Aquarius radiometers' turn-on in late July 2011, the simulator has been used to perform a first-order validation of the data. This included checking the order of magnitude of the signal over ocean, land and ice surfaces, checking the relative amplitude of the signal at different polarizations, and checking the variation with incidence angle. The comparisons were also used to assess calibration bias and monitor instrument calibration drift. The simulator is also being used in the salinity retrieval. For example, initial assessments of the salinity retrieved from Aquarius data showed degradation in accuracy at locations where glint from the galactic sky background was important. This was traced to an inaccurate correction for the Sky glint. We present comparisons of the simulator prediction to the Aquarius data in order to assess the performance of the models of various physical processes impacting the measurements, such as the effect of sea surface roughness, the impact of the celestial Sky and the Sun emission scattered at the rough ocean surface. We discuss which components of the simulator appear reliable and which ones need improvements. Improved knowledge of the radiative transfer models at L-band will not only lead to better salinity retrieved from Aquarius data, it will also be beneficial for SMOS and the upcoming SMAP mission.
Experimental investigation of a new method for advanced fast reactor shutdown cooling
NASA Astrophysics Data System (ADS)
Pakholkov, V. V.; Kandaurov, A. A.; Potseluev, A. I.; Rogozhkin, S. A.; Sergeev, D. A.; Troitskaya, Yu. I.; Shepelev, S. F.
2017-07-01
We consider a new method for fast reactor shutdown cooling using a decay heat removal system (DHRS) with a check valve. In this method, a coolant from the decay heat exchanger (DHX) immersed into the reactor upper plenum is supplied to the high-pressure plenum and, then, inside the fuel subassemblies (SAs). A check valve installed at the DHX outlet opens by the force of gravity after primary pumps (PP-1) are shut down. Experimental studies of the new and alternative methods of shutdown cooling were performed at the TISEY test facility at OKBM. The velocity fields in the upper plenum of the reactor model were obtained using the optical particle image velocimetry developed at the Institute of Applied Physics (Russian Academy of Sciences). The study considers the process of development of natural circulation in the reactor and the DHRS models and the corresponding evolution of the temperature and velocity fields. A considerable influence of the valve position in the displacer of the primary pump on the natural circulation of water in the reactor through the DHX was discovered (in some modes, circulation reversal through the DHX was obtained). Alternative DHRS designs without a shell at the DHX outlet with open and closed check valve are also studied. For an open check valve, in spite of the absence of a shell, part of the flow is supplied through the DHX pipeline and then inside the SA simulators. When simulating power modes of the reactor operation, temperature stratification of the liquid was observed, which increased in the cooling mode via the DHRS. These data qualitatively agree with the results of tests at BN-600 and BN-800 reactors.
Automation for System Safety Analysis
NASA Technical Reports Server (NTRS)
Malin, Jane T.; Fleming, Land; Throop, David; Thronesbery, Carroll; Flores, Joshua; Bennett, Ted; Wennberg, Paul
2009-01-01
This presentation describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
NASA Technical Reports Server (NTRS)
Schmidt, J. M.; Cairns, Iver H.; Xie, Hong; St. Cyr, O. C.; Gopalswamy, N.
2016-01-01
Coronal mass ejections (CMEs) are major transient phenomena in the solar corona that are observed with ground-based and spacecraft-based coronagraphs in white light or with in situ measurements by spacecraft. CMEs transport mass and momentum and often drive shocks. In order to derive the CME and shock trajectories with high precision, we apply the graduated cylindrical shell (GCS) model to fit a flux rope to the CME directed toward STEREO A after about 19:00 UT on 29 November 2013 and check the quality of the heliocentric distance-time evaluations by carrying out a three-dimensional magnetohydrodynamic (MHD) simulation of the same CME with the Block Adaptive Tree Solar-Wind Roe Upwind Scheme (BATS-R-US) code. Heliocentric distances of the CME and shock leading edges are determined from the simulated white light images and magnetic field strength data. We find very good agreement between the predicted and observed heliocentric distances, showing that the GCS model and the BATS-R-US simulation approach work very well and are consistent. In order to assess the validity of CME and shock identification criteria in coronagraph images, we also compute synthetic white light images of the CME and shock. We find that the outer edge of a cloud-like illuminated area in the observed and predicted images in fact coincides with the leading edge of the CME flux rope and that the outer edge of a faint illuminated band in front of the CME leading edge coincides with the CME-driven shock front.
Real-time simulation model of the HL-20 lifting body
NASA Technical Reports Server (NTRS)
Jackson, E. Bruce; Cruz, Christopher I.; Ragsdale, W. A.
1992-01-01
A proposed manned spacecraft design, designated the HL-20, has been under investigation at Langley Research Center. Included in that investigation are flight control design and flying qualities studies utilizing a man-in-the-loop real-time simulator. This report documents the current real-time simulation model of the HL-20 lifting body vehicle, known as version 2.0, presently in use at NASA Langley Research Center. Included are data on vehicle aerodynamics, inertias, geometries, guidance and control laws, and cockpit displays and controllers. In addition, trim case and dynamic check case data is provided. The intent of this document is to provide the reader with sufficient information to develop and validate an equivalent simulation of the HL-20 for use in real-time or analytical studies.
STS-72 crew trains in Fixed Base (FB) Shuttle Mission Simulator (SMS)
1995-06-07
S95-12725 (May 1995) --- Astronaut Koichi Wakata, representing Japan's National Space Development Agency (NASDA) and assigned as mission specialist for the STS-72 mission, checks over a copy of the flight plan. Wakata is on the flight deck of the fixed base Shuttle Mission Simulator (SMS) at the Johnson Space Center (JSC). In the background is astronaut Brent W. Jett, pilot. The two will join four NASA astronauts aboard Space Shuttle Endeavour for a scheduled nine-day mission, now set for the winter of this year.
A rule-based approach to model checking of UML state machines
NASA Astrophysics Data System (ADS)
Grobelna, Iwona; Grobelny, Michał; Stefanowicz, Łukasz
2016-12-01
In the paper a new approach to formal verification of control process specifications expressed by means of UML state machines in version 2.x is proposed. In contrast to other approaches from the literature, we use an abstract and universal rule-based logical model suitable both for model checking (using the nuXmv model checker) and for logical synthesis in the form of rapid prototyping. Hence, a prototype implementation in the hardware description language VHDL can be obtained that fully reflects the primary, already formally verified specification in the form of UML state machines. The presented approach increases the assurance that the implemented system meets the user-defined requirements.
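To give a feel for what checking a safety property over a state-machine specification involves, here is a tiny explicit-state sketch: enumerate all reachable states of a small controller and confirm that no forbidden state can be reached. The state machine and property are invented examples, and the code is unrelated to the paper's rule-based logical model or to nuXmv.

```python
from collections import deque

# Breadth-first reachability over an explicit transition relation, followed by a
# simple safety check ("a bad state is never reachable").
transitions = {
    "idle": ["heating"],
    "heating": ["idle", "overheat_guard"],
    "overheat_guard": ["idle"],
}
bad_states = {"overheat_unprotected"}            # assumed forbidden state

def reachable(initial):
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

states = reachable("idle")
print("safety property holds:", not (states & bad_states))
```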
Leveraging OpenStudio's Application Programming Interfaces: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, N.; Ball, B.; Goldwasser, D.
2013-11-01
OpenStudio development efforts have been focused on providing Application Programming Interfaces (APIs) through which users are able to extend OpenStudio without the need to compile the open-source libraries. This paper will discuss the basic purposes and functionalities of the core libraries that have been wrapped with APIs, including the Building Model, Results Processing, Advanced Analysis, Uncertainty Quantification, and Data Interoperability through Translators. Several building energy modeling applications have been produced using OpenStudio's API and Software Development Kits (SDK), including the United States Department of Energy's Asset Score Calculator, a mobile-based audit tool, an energy design assistance reporting protocol, and a portfolio-scale incentive optimization analysis methodology. Each of these software applications will be discussed briefly, describing how the APIs were leveraged for various uses including high-level modeling, data transformations from detailed building audits, error checking/quality assurance of models, and use of high-performance computing for mass simulations.
FUX-Sim: Implementation of a fast universal simulation/reconstruction framework for X-ray systems.
Abella, Monica; Serrano, Estefania; Garcia-Blas, Javier; García, Ines; de Molina, Claudia; Carretero, Jesus; Desco, Manuel
2017-01-01
The availability of digital X-ray detectors, together with advances in reconstruction algorithms, creates an opportunity for bringing 3D capabilities to conventional radiology systems. The downside is that reconstruction algorithms for non-standard acquisition protocols are generally based on iterative approaches that involve a high computational burden. The development of new flexible X-ray systems could benefit from computer simulations, which may enable performance to be checked before expensive real systems are implemented. The development of simulation/reconstruction algorithms in this context poses three main difficulties. First, the algorithms deal with large data volumes and are computationally expensive, thus leading to the need for hardware and software optimizations. Second, these optimizations are limited by the high flexibility required to explore new scanning geometries, including fully configurable positioning of source and detector elements. And third, the evolution of the various hardware setups increases the effort required for maintaining and adapting the implementations to current and future programming models. Previous works lack support for completely flexible geometries and/or compatibility with multiple programming models and platforms. In this paper, we present FUX-Sim, a novel X-ray simulation/reconstruction framework that was designed to be flexible and fast. Optimized implementation for different families of GPUs (CUDA and OpenCL) and multi-core CPUs was achieved thanks to a modularized approach based on a layered architecture and parallel implementation of the algorithms for both architectures. A detailed performance evaluation demonstrates that for different system configurations and hardware platforms, FUX-Sim maximizes performance with the CUDA programming model (5 times faster than other state-of-the-art implementations). Furthermore, the CPU and OpenCL programming models allow FUX-Sim to be executed over a wide range of hardware platforms.
Silverman, Barry G; Hanrahan, Nancy; Bharathy, Gnana; Gordon, Kim; Johnson, Dan
2015-02-01
Explore whether agent-based modeling and simulation can help healthcare administrators discover interventions that increase population wellness and quality of care while, simultaneously, decreasing costs. Since important dynamics often lie in the social determinants outside the health facilities that provide services, this study models the problem at three levels (individuals, organizations, and society). The study explores the utility of translating existing (prize-winning) software for modeling complex societal systems and agents' daily life activities (a Sim City style of software) into the desired decision support system. A case study tests whether the three-level system modeling approach is feasible, valid, and useful. The case study involves an urban population with serious mental health conditions, and Philadelphia's Medicaid population (n=527,056) in particular. Section 3 explains the models using data from the case study and thereby establishes the feasibility of the approach for modeling a real system. The models were trained and tuned using national epidemiologic datasets and various domain expert inputs. To avoid co-mingling of training and testing data, the simulations were then run and compared (Section 4.1) to an analysis of 250,000 Philadelphia patient hospital admissions for the year 2010 in terms of re-hospitalization rate, number of doctor visits, and days in hospital. Based on the Student t-test, deviations between simulated and real-world outcomes are not statistically significant. Validity is thus established for the 2008-2010 timeframe. We computed models of various types of interventions that were ineffective, as well as four categories of interventions (e.g., reduced per-nurse caseload, increased check-ins and stays, etc.) that result in improvement in well-being and cost. The three-level approach appears to be useful in helping health administrators sort through system complexities to find effective interventions at lower costs.
Morita, Shigemichi; Takahashi, Toshiya; Yoshida, Yasushi; Yokota, Naohisa
2016-04-01
Hydroxychloroquine (HCQ) is an effective treatment for patients with cutaneous lupus erythematosus (CLE) or systemic lupus erythematosus (SLE) and has been used for these patients in more than 70 nations. However, in Japan, HCQ has not been approved for CLE or SLE. To establish an appropriate therapeutic regimen and to clarify the pharmacokinetics (PK) of HCQ in Japanese patients with CLE with or without SLE (CLE/SLE), a population pharmacokinetic (PopPK) analysis was performed. In a clinical study of Japanese patients with a diagnosis of CLE, irrespective of the presence of SLE, blood and plasma drug concentration-time data from patients receiving multiple oral doses of HCQ sulfate (200-400 mg daily) were analyzed using nonlinear mixed-effects model software. The blood and plasma concentrations of HCQ were measured using a high-performance liquid chromatography tandem mass spectrometry method. Model evaluation and validation were performed using goodness-of-fit (GOF) plots, a visual predictive check, and a bootstrap. The PopPK of HCQ in the blood and plasma of 90 Japanese patients with CLE/SLE was well described by a one-compartment model with first-order absorption and an absorption lag time. Body weight was a significant (P < 0.001) covariate of the oral clearance of HCQ. The final model was assessed using GOF plots, a bootstrap, and a visual predictive check, and the model was found to be appropriate. Simulations based on the final model suggested that the recommended daily doses of HCQ sulfate (200-400 mg), based on ideal body weight, give similar concentration ranges in Japanese patients with CLE/SLE. The PopPK models derived from both blood and plasma HCQ concentrations of Japanese patients with CLE/SLE were developed and validated. Based on this study, the dosage regimens of HCQ sulfate for Japanese patients with CLE/SLE should be calculated using the individual ideal body weight.
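The model class named in the abstract (one compartment, first-order absorption, absorption lag time) has a standard closed-form concentration profile after a single oral dose, sketched below. Every numeric value (dose, ka, CL, V, tlag, bioavailability) is an illustrative assumption, not a fitted HCQ parameter from the study.

```python
import numpy as np

def conc_1cmt_oral(t, dose, ka, CL, V, tlag, F=1.0):
    """C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*(t-tlag)) - exp(-ka*(t-tlag)))."""
    ke = CL / V
    tt = np.clip(t - tlag, 0.0, None)        # concentration is zero before the lag
    return (F * dose * ka) / (V * (ka - ke)) * (np.exp(-ke * tt) - np.exp(-ka * tt))

t = np.linspace(0.0, 24.0, 9)                # hours after a single dose
c = conc_1cmt_oral(t, dose=200.0, ka=0.8, CL=10.0, V=600.0, tlag=0.5)
for ti, ci in zip(t, c):
    print(f"t = {ti:4.1f} h   C = {ci:6.3f} mg/L")
```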
Code of Federal Regulations, 2013 CFR
2013-01-01
Fragment of 14 CFR § 91.1103, "Pilots: Initial, transition, upgrade, requalification, and differences flight training" (Aeronautics and Space; Federal Aviation Administration; General Operating and Flight Rules; Fractional Ownership Operations Program Management), referencing "(2) A flight check in the aircraft or a check in the simulator or training device."
14 CFR Appendix H to Part 121 - Advanced Simulation
Code of Federal Regulations, 2010 CFR
2010-01-01
Fragment of 14 CFR Part 121, Appendix H, "Advanced Simulation" (Aeronautics and Space; Operating Requirements: Domestic, Flag, and Supplemental Operations), referencing the requirement to "ensure that all instructors and check airmen used in appendix H training and checking are highly…"
Warburton, Bruce; Gormley, Andrew M
2015-01-01
Internationally, invasive vertebrate species pose a significant threat to biodiversity, agricultural production and human health. To manage these species a wide range of tools, including traps, are used. In New Zealand, brushtail possums (Trichosurus vulpecula), stoats (Mustela erminea), and ship rats (Rattus rattus) are invasive and there is an ongoing demand for cost-effective non-toxic methods for controlling these pests. Recently, traps with multiple-capture capability have been developed which, because they do not require regular operator-checking, are purported to be more cost-effective than traditional single-capture traps. However, when pest populations are being maintained at low densities (as is typical of orchestrated pest management programmes) it remains uncertain whether it is more cost-effective to use fewer multiple-capture traps or more single-capture traps. To address this uncertainty, we used an individual-based spatially explicit modelling approach to determine the likely maximum animal-captures per trap, given stated pest densities and defined times that traps are left between checks. In the simulation, single- or multiple-capture traps were spaced according to best practice pest-control guidelines. For possums with maintenance densities set at the lowest level (i.e. 0.5/ha), 98% of all simulated possums were captured with only a single-capacity trap set at each site. When possum density was increased to moderate levels of 3/ha, having a capacity of three captures per trap caught 97% of all simulated possums. Results were similar for stoats, although only two potential captures per site were sufficient to capture 99% of simulated stoats. For rats, which were simulated at their typically higher densities, even a six-capture capacity per trap site only resulted in an 80% kill. Depending on target species, prevailing density and extent of immigration, the most cost-effective strategy for pest control in New Zealand might be to deploy several single-capture traps rather than investing in fewer, but more expensive, multiple-capture traps.
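As a rough illustration of the kind of question the spatially explicit model answers (how many captures per trap site are enough at a given density), the toy Monte Carlo below draws the number of animals encountering each trap site from a Poisson distribution and counts the fraction captured given a per-site capture capacity. It is a deliberately simplified stand-in, with no spatial behaviour, home ranges, or best-practice spacing, and invented parameters.

```python
# Toy Monte Carlo: fraction of animals captured given a per-site capture capacity.
# This ignores spatial behaviour and trap spacing; parameters are invented.
import numpy as np

def fraction_captured(mean_animals_per_site, capacity, n_sites=10_000, seed=0):
    rng = np.random.default_rng(seed)
    arrivals = rng.poisson(mean_animals_per_site, size=n_sites)
    captured = np.minimum(arrivals, capacity)   # a trap holds at most `capacity` animals
    return captured.sum() / max(arrivals.sum(), 1)

for capacity in (1, 2, 3, 6):
    frac = fraction_captured(mean_animals_per_site=1.5, capacity=capacity)
    print(f"capacity {capacity}: {frac:.0%} of animals captured")
```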
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakaguchi, Yuji, E-mail: nkgc2003@yahoo.co.jp; Ono, Takeshi; Onitsuka, Ryota
COMPASS system (IBA Dosimetry, Schwarzenbruck, Germany) and ArcCHECK with 3DVH software (Sun Nuclear Corp., Melbourne, FL) are commercial quasi-3-dimensional (3D) dosimetry arrays. Cross-validation to compare them under the same conditions, such as a treatment plan, allows for clear evaluation of such measurement devices. In this study, we evaluated the accuracy of reconstructed dose distributions from the COMPASS system and ArcCHECK with 3DVH software using Monte Carlo simulation (MC) for multi-leaf collimator (MLC) test patterns and clinical VMAT plans. In a phantom study, ArcCHECK 3DVH showed clear differences from COMPASS, measurement and MC due to the detector resolution and the dose reconstruction method. In particular, ArcCHECK 3DVH showed a 7% difference from MC for the heterogeneous phantom. ArcCHECK 3DVH only corrects the 3D dose distribution of the treatment planning system (TPS) using the ArcCHECK measurement, and therefore the accuracy of ArcCHECK 3DVH depends on the TPS. In contrast, COMPASS showed good agreement with MC for all cases. However, the COMPASS system requires many complicated installation procedures such as beam modeling, and appropriate commissioning is needed. In terms of clinical cases, there were no large differences between the QA devices. The accuracy of the COMPASS and ArcCHECK 3DVH systems for phantoms and clinical cases was compared. Both systems have advantages and disadvantages for clinical use, and consideration of the operating environment is important. The choice of QA system depends on the purpose and workflow of each hospital.
Li, Qinwei; Xiao, Xia; Wang, Liang; Song, Hang; Kono, Hayato; Liu, Peifang; Lu, Hong; Kikkawa, Takamaro
2015-10-01
A direct extraction method of the tumor response based on ensemble empirical mode decomposition (EEMD) is proposed for early breast cancer detection by ultra-wide band (UWB) microwave imaging. With this approach, image reconstruction for tumor detection can be realized using only the signals extracted from the as-detected waveforms. The calibration process used in previous research to obtain reference waveforms, which represent signals detected from a tumor-free model, is not required. The correctness of the method is demonstrated by successfully detecting a 4 mm tumor located inside the glandular region in one breast model and, in a second model, at the interface between the gland and the fat. The reliability of the method is checked by distinguishing a tumor buried in glandular tissue whose dielectric constant is 35. The feasibility of the method is confirmed by showing the correct tumor information in both simulation results and experimental results for a realistic 3-D printed breast phantom.
Verification and Planning Based on Coinductive Logic Programming
NASA Technical Reports Server (NTRS)
Bansal, Ajay; Min, Richard; Simon, Luke; Mallya, Ajay; Gupta, Gopal
2008-01-01
Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations [6]. Where induction corresponds to least fixed point semantics, coinduction corresponds to greatest fixed point semantics. Recently coinduction has been incorporated into logic programming and an elegant operational semantics developed for it [11, 12]. This operational semantics is the greatest fixed point counterpart of SLD resolution (SLD resolution imparts operational semantics to least fixed point based computations) and is termed co-SLD resolution. In co-SLD resolution, a predicate goal p(t) succeeds if it unifies with one of its ancestor calls. In addition, rational infinite terms are allowed as arguments of predicates. Infinite terms are represented as solutions to unification equations and the occurs check is omitted during the unification process. Coinductive Logic Programming (Co-LP) and co-SLD resolution can be used to elegantly perform model checking and planning. A combined SLD and co-SLD resolution based LP system forms the common basis for planning, scheduling, verification, model checking, and constraint solving [9, 4]. This is achieved by amalgamating SLD resolution, co-SLD resolution, and constraint logic programming [13] in a single logic programming system. Given that parallelism in logic programs can be implicitly exploited [8], complex, compute-intensive applications (planning, scheduling, model checking, etc.) can be executed in parallel on multi-core machines. Parallel execution can result in speed-ups as well as in larger instances of the problems being solved. In the remainder we elaborate on (i) how planning can be elegantly and efficiently performed under real-time constraints, (ii) how real-time systems can be elegantly and efficiently model-checked, as well as (iii) how hybrid systems can be verified in a combined system with both co-SLD and SLD resolution. Implementations of co-SLD resolution as well as preliminary implementations of the planning and verification applications have been developed [4]. Co-LP and Model Checking: The vast majority of properties that are to be verified can be classified into safety properties and liveness properties. It is well known within model checking that safety properties can be verified by reachability analysis, i.e., if a counter-example to the property exists, it can be finitely determined by enumerating all the reachable states of the Kripke structure.
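The closing remark, that safety properties can be verified by reachability analysis over the Kripke structure, can be illustrated with a tiny explicit-state search. The sketch below (plain Python, not co-SLD resolution) enumerates all reachable states of a toy transition system and reports a counter-example path if any reachable state violates the property; the transition system itself is a made-up example.

```python
# Minimal explicit-state reachability check of a safety property on a toy
# Kripke structure. This illustrates the idea only; it is not co-SLD resolution.
from collections import deque

# Toy transition system: states are (x, y) counters, hypothetical example.
def successors(state):
    x, y = state
    return [((x + 1) % 4, y), (x, (y + 1) % 4)]

def safe(state):
    x, y = state
    return not (x == 3 and y == 3)   # safety property: never reach (3, 3)

def check_safety(initial):
    """Breadth-first reachability; returns a counter-example path or None."""
    frontier = deque([(initial, [initial])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if not safe(state):
            return path                      # finite counter-example found
        for nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None                              # property holds on all reachable states

print(check_safety((0, 0)))
```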
Computer simulation in mechanical spectroscopy
NASA Astrophysics Data System (ADS)
Blanter, M. S.
2012-09-01
Several examples are given for use of computer simulation in mechanical spectroscopy. On one hand simulation makes it possible to study relaxation mechanisms, and on the other hand to use the colossal accumulation of experimental material to study metals and alloys. The following examples are considered: the effect of Al atom ordering on the Snoek carbon peak in alloys of the system Fe - Al - C; the effect of plastic strain on Finkel'shtein - Rozin relaxation in Fe - Ni - C austenitic steel; checking the adequacy of energy interactions of interstitial atoms, calculated on the basis of a first-principle model by simulation of the concentration dependence of Snoek relaxation parameters in Nb - O.
NASA Astrophysics Data System (ADS)
Langella, Giuliano; Basile, Angelo; Bonfante, Antonello; De Mascellis, Roberto; Manna, Piero; Terribile, Fabio
2016-04-01
WeatherProg is a computer program for the semi-automatic handling of data measured at ground stations within a climatic network. The program performs a set of tasks ranging from gathering raw point-based sensor measurements to the production of digital climatic maps. Originally the program was developed as the baseline asynchronous engine for weather records management within the SOILCONSWEB Project (LIFE08 ENV/IT/000408), in which daily and hourly data were used to run water balance models in the soil-plant-atmosphere continuum or pest simulation models. WeatherProg can be configured to automatically perform the following main operations: 1) data retrieval; 2) data decoding and ingestion into a database (e.g. SQL based); 3) data checking to recognize missing and anomalous values (using a set of differently combined checks including logical, climatological, spatial, temporal and persistence checks); 4) infilling of data flagged as missing or anomalous (deterministic or statistical methods); 5) spatial interpolation based on alternative/comparative methods such as inverse distance weighting, iterative regression kriging, and a weighted least squares regression (based on physiography), using an approach similar to PRISM; 6) data ingestion into a geodatabase (e.g. PostgreSQL+PostGIS or rasdaman). There is an increasing demand for digital climatic maps both for research and development (there is a gap between the majority of scientific modelling approaches, which require digital climate maps, and the gauged measurements) and for practical applications (e.g. the need to improve the management of weather records, which in turn improves the support provided to farmers). The demand is particularly burdensome considering the requirement to handle climatic data at the daily (e.g. in soil hydrological modelling) or even at the hourly time step (e.g. risk modelling in phytopathology). The key advantage of WeatherProg is the ability to perform all the required operations and calculations in an automatic fashion, except for the need for human interaction on specific issues (such as the decision whether a measurement is an anomaly or not according to the detected temporal and spatial variations with respect to contiguous points). The presented computer program runs from the command line and is well suited to cascade modelling in different contexts spanning agriculture, phytopathology and the environment. In particular, it can be a powerful tool to set up cutting-edge regional web services based on weather information. Indeed, it can support territorial agencies in charge of meteorological and phytopathological bulletins.
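Two of the simpler checks listed in step 3, a climatological range check and a persistence check, can be sketched in a few lines. The thresholds below are arbitrary placeholders, not the settings used by WeatherProg.

```python
# Sketch of two simple quality-control checks on a station temperature series.
# Thresholds are arbitrary placeholders, not WeatherProg's actual settings.
import numpy as np

def range_check(values, lo=-40.0, hi=50.0):
    """Flag values outside a plausible climatological range."""
    values = np.asarray(values, dtype=float)
    return (values < lo) | (values > hi)

def persistence_check(values, window=6, tol=1e-6):
    """Flag runs where the sensor reports an (almost) constant value."""
    values = np.asarray(values, dtype=float)
    flags = np.zeros(values.shape, dtype=bool)
    for i in range(len(values) - window + 1):
        chunk = values[i:i + window]
        if np.nanmax(chunk) - np.nanmin(chunk) < tol:
            flags[i:i + window] = True
    return flags

hourly_temp = [12.1, 12.3, 12.3, 12.3, 12.3, 12.3, 12.3, 55.0, 13.0]
print(range_check(hourly_temp))
print(persistence_check(hourly_temp, window=5))
```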
Dynamics of a Class of HIV Infection Models with Cure of Infected Cells in Eclipse Stage.
Maziane, Mehdi; Lotfi, El Mehdi; Hattaf, Khalid; Yousfi, Noura
2015-12-01
In this paper, we propose two HIV infection models with specific nonlinear incidence rate by including a class of infected cells in the eclipse phase. The first model is described by ordinary differential equations (ODEs) and generalizes a set of previously existing models and their results. The second model extends our ODE model by taking into account the diffusion of virus. Furthermore, the global stability of both models is investigated by constructing suitable Lyapunov functionals. Finally, we check our theoretical results with numerical simulations.
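A minimal numerical companion to the ODE version of such a model (target cells, cells in the eclipse phase, productively infected cells, free virus) can be integrated with SciPy. The incidence term below is plain mass action rather than the paper's specific nonlinear incidence rate, the cure term simply returns eclipse-stage cells to the target pool, and all parameter values are illustrative only.

```python
# Illustrative 4-compartment HIV model with an eclipse class, integrated with SciPy.
# Mass-action incidence and invented parameters; not the paper's exact model.
import numpy as np
from scipy.integrate import solve_ivp

def hiv_eclipse(t, y, lam, d, beta, theta, delta, mu, k, c):
    x, e, i, v = y                           # target cells, eclipse, infected, virus
    dx = lam - d * x - beta * x * v + theta * e   # theta: cure of eclipse-stage cells
    de = beta * x * v - (theta + delta) * e
    di = delta * e - mu * i
    dv = k * i - c * v
    return [dx, de, di, dv]

params = dict(lam=10.0, d=0.01, beta=2e-5, theta=0.1, delta=0.5,
              mu=0.5, k=100.0, c=5.0)
sol = solve_ivp(hiv_eclipse, (0, 200), [1000.0, 0.0, 0.0, 1e-3],
                args=tuple(params.values()))
print("final state (x, e, i, v):", sol.y[:, -1])
```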
Efficient Translation of LTL Formulae into Buchi Automata
NASA Technical Reports Server (NTRS)
Giannakopoulou, Dimitra; Lerda, Flavio
2001-01-01
Model checking is a fully automated technique for checking that a system satisfies a set of required properties. With explicit-state model checkers, properties are typically defined in linear-time temporal logic (LTL), and are translated into Büchi automata in order to be checked. This report presents how we have combined and improved existing techniques to obtain an efficient LTL to Büchi automata translator. In particular, we optimize the core of existing tableau-based approaches to generate significantly smaller automata. Our approach has been implemented and is being released as part of the Java PathFinder software (JPF), an explicit-state model checker under development at the NASA Ames Research Center.
On the transport coefficients of hydrogen in the inertial confinement fusion regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lambert, Flavien; Recoules, Vanina; Decoster, Alain
2011-05-15
Ab initio molecular dynamics is used to compute the thermal and electrical conductivities of hydrogen from 10 to 160 g cm⁻³ and temperatures up to 800 eV, i.e., thermodynamical conditions relevant to inertial confinement fusion (ICF). The ionic structure is obtained using molecular dynamics simulations based on an orbital-free treatment for the electrons. The transport properties were computed using ab initio simulations in the DFT/LDA approximation. The thermal and electrical conductivities are evaluated using the Kubo-Greenwood formulation. Particular attention is paid to the convergence of electronic transport properties with respect to the number of bands and atoms. These calculations are then used to check various analytical models (Hubbard's, Lee-More's and Ichimaru's) widely used in hydrodynamics simulations of ICF capsule implosions. The Lorenz number, which is the ratio between thermal and electrical conductivities, is also computed and compared to the well-known Wiedemann-Franz law in different regimes ranging from the highly degenerate to the kinetic one. This allows us to deduce electrical conductivity from thermal conductivity for analytical models. We find that the coupling of the Hubbard and Spitzer models gives a correct description of the behavior of electrical and thermal conductivities in the whole thermodynamic regime.
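The Wiedemann-Franz comparison mentioned above amounts to checking the Lorenz number L = kappa / (sigma * T) against the Sommerfeld value (pi^2 / 3)(k_B / e)^2. A few lines suffice to compute it from tabulated conductivities; the conductivity and temperature values in the example are invented, not taken from the paper.

```python
# Lorenz number L = kappa / (sigma * T) compared with the Sommerfeld value
# L0 = (pi^2 / 3) * (k_B / e)^2. Conductivity values below are invented examples.
import math

K_B = 1.380649e-23           # J/K
E_CHARGE = 1.602176634e-19   # C
L0 = (math.pi ** 2 / 3.0) * (K_B / E_CHARGE) ** 2   # about 2.44e-8 W Ohm / K^2

def lorenz_number(kappa_w_per_m_k, sigma_s_per_m, temperature_k):
    return kappa_w_per_m_k / (sigma_s_per_m * temperature_k)

L = lorenz_number(kappa_w_per_m_k=5.0e4, sigma_s_per_m=2.0e6, temperature_k=1.0e6)
print(f"L  = {L:.3e} W Ohm / K^2")
print(f"L0 = {L0:.3e} W Ohm / K^2 (ratio L/L0 = {L / L0:.2f})")
```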
Tagore, Somnath; De, Rajat K.
2013-01-01
Disease Systems Biology is an area of the life sciences which is not very well understood to date. Analyzing infections and their spread in healthy metabolite networks can be one of the focus areas in this regard. We have proposed a theory based on the classical forest fire model for analyzing the path of infection spread in healthy metabolic pathways. The theory suggests that when fire erupts in a forest, it spreads, and the surrounding trees also catch fire. Similarly, when we consider a metabolic network, the infection caused in the metabolites of the network spreads like a fire. We have constructed a simulation model which is used to study the infection caused in metabolic networks from the start of infection, to its spread, and ultimately to combating it. For implementation, we have used two approaches: the first based on quantitative strategies using ordinary differential equations, and the second using graph-theory-based properties. Furthermore, we use certain probabilistic scores to complete this task and to interpret the harm caused to the network, given by a ‘critical value’ that indicates whether the infection can be cured or not. We have tested our simulation model on metabolic pathways involved in Type I Diabetes mellitus in Homo sapiens. To validate our results biologically, we have used sensitivity analysis, both local and global, as well as analyses identifying the role of feedbacks in spreading infection in metabolic pathways. Moreover, information from the literature has also been used to validate the results. The metabolic network datasets have been collected from the Kyoto Encyclopedia of Genes and Genomes (KEGG). PMID:24039701
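The forest-fire analogy in the graph-theoretic variant can be pictured with a short stochastic spreading simulation on a toy metabolite graph, where each infected node infects each healthy neighbour with some probability per step. The graph, spread probability, and "critical value" threshold below are all invented for illustration and are not taken from the paper.

```python
# Toy forest-fire-style infection spread on a small metabolite graph.
# Graph, spread probability and critical threshold are invented placeholders.
import random

# Hypothetical metabolite adjacency list
GRAPH = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D", "E"],
    "D": ["B", "C", "F"], "E": ["C", "F"], "F": ["D", "E"],
}

def spread(seed_nodes, p_spread=0.4, steps=10, rng=random.Random(1)):
    infected = set(seed_nodes)
    for _ in range(steps):
        newly = {nbr for node in infected for nbr in GRAPH[node]
                 if nbr not in infected and rng.random() < p_spread}
        if not newly:
            break
        infected |= newly
    return infected

infected = spread(["A"])
fraction = len(infected) / len(GRAPH)
CRITICAL_VALUE = 0.5   # hypothetical threshold above which the damage is "incurable"
print(f"infected fraction = {fraction:.2f}",
      "-> beyond critical value" if fraction > CRITICAL_VALUE else "-> curable")
```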
Kleijn, Huub J; Zollinger, Daniel P; van den Heuvel, Michiel W; Kerbusch, Thomas
2011-01-01
AIMS An integrated population pharmacokinetic–pharmacodynamic model was developed with the following aims: to simultaneously describe pharmacokinetic behaviour of sugammadex and rocuronium; to establish the pharmacokinetic–pharmacodynamic model for rocuronium-induced neuromuscular blockade and reversal by sugammadex; to evaluate covariate effects; and to explore, by simulation, typical covariate effects on reversal time. METHODS Data (n = 446) from eight sugammadex clinical studies covering men, women, non-Asians, Asians, paediatrics, adults and the elderly, with various degrees of renal impairment, were used. Modelling and simulation techniques based on physiological principles were applied to capture rocuronium and sugammadex pharmacokinetics and pharmacodynamics and to identify and quantify covariate effects. RESULTS Sugammadex pharmacokinetics were affected by renal function, bodyweight and race, and rocuronium pharmacokinetics were affected by age, renal function and race. Sevoflurane potentiated rocuronium-induced neuromuscular blockade. Posterior predictive checks and bootstrapping illustrated the accuracy and robustness of the model. External validation showed concordance between observed and predicted reversal times, but interindividual variability in reversal time was pronounced. Simulated reversal times in typical adults were 0.8, 1.5 and 1.4 min upon reversal with sugammadex 16 mg kg−1 3 min after rocuronium, sugammadex 4 mg kg−1 during deep neuromuscular blockade and sugammadex 2 mg kg−1 during moderate blockade, respectively. Simulations indicated that reversal times were faster in paediatric patients and slightly slower in elderly patients compared with adults. Renal function did not affect reversal time. CONCLUSIONS Simulations of the therapeutic dosing regimens demonstrated limited impact of age, renal function and sevoflurane use, as predicted reversal time in typical subjects was always <2 min. PMID:21535448
Predicting overload-affected fatigue crack growth in steels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skorupa, M.; Skorupa, A.; Ladecki, B.
1996-12-01
The ability of semi-empirical crack closure models to predict the effect of overloads on fatigue crack growth in low-alloy steels has been investigated. For this purpose, the CORPUS model developed for aircraft metals and spectra was checked first through comparisons between the simulated and observed results for a low-alloy steel. The CORPUS predictions of crack growth under several types of simple load histories containing overloads appeared generally unconservative, which prompted the authors to formulate a new model more suitable for steels. With the latter approach, the assumed evolution of the crack opening stress during the delayed retardation stage has been based on experimental results reported for various steels. For all the load sequences considered, the predictions from the proposed model appeared to be far more accurate than those from CORPUS. Based on the analysis results, the capability of semi-empirical prediction concepts to cover experimentally observed trends that have been reported for sequences with overloads is discussed. Finally, possibilities of improving the model performance are considered.
Martelli, F; Contini, D; Taddeucci, A; Zaccanti, G
1997-07-01
In our companion paper we presented a model to describe photon migration through a diffusing slab. The model, developed for a homogeneous slab, is based on the diffusion approximation and is able to take into account reflection at the boundaries resulting from the refractive index mismatch. In this paper the predictions of the model are compared with solutions of the radiative transfer equation obtained by Monte Carlo simulations in order to determine the applicability limits of the approximated theory in different physical conditions. A fitting procedure, carried out with the optical properties as fitting parameters, is used to check the application of the model to the inverse problem. The results show that significant errors can be made if the effect of the refractive index mismatch is not properly taken into account. Errors are more important when measurements of transmittance are used. The effects of using a receiver with a limited angular field of view and the angular distribution of the radiation that emerges from the slab have also been investigated.
NASA Technical Reports Server (NTRS)
Dougherty, N. S.; Johnson, S. L.
1993-01-01
Multiple rocket exhaust plume interactions at high altitudes can produce base flow recirculation with attendant alteration of the base pressure coefficient and increased base heating. A search for a good wind tunnel benchmark problem to check grid clustering technique and turbulence modeling turned up the experiment done at AEDC in 1961 by Goethert and Matz on a 4.25-in. diameter domed missile base model with four rocket nozzles. This wind tunnel model with varied external bleed air flow for the base flow wake produced measured p/p_ref at the center of the base as high as 3.3 due to plume flow recirculation back onto the base. At that time in 1961, relatively inexpensive experimentation with air at gamma = 1.4 and nozzle A_e/A of 10.6 and theta_n = 7.55 deg with P_c = 155 psia simulated a LO2/LH2 rocket exhaust plume with gamma = 1.20, A_e/A of 78 and P_c about 1,000 psia. An array of base pressure taps on the aft dome gave a clear measurement of the plume recirculation effects at p(infinity) = 4.76 psfa corresponding to 145,000 ft altitude. Our CFD computations of the flow field with direct comparison of computed-versus-measured base pressure distribution (across the dome) provide detailed information on velocities and particle traces as well as eddy viscosity in the base and nozzle region. The solution was obtained using a six-zone mesh with 284,000 grid points for one quadrant, taking advantage of symmetry. Results are compared using a zero-equation algebraic and a one-equation pointwise R_t turbulence model (work in progress). Good agreement with the experimental pressure data was obtained with both; and this benchmark showed the importance of: (1) proper grid clustering and (2) proper choice of turbulence modeling for rocket plume problems/recirculation at high altitude.
STS-30 crewmembers train on JSC shuttle mission simulator (SMS) flight deck
NASA Technical Reports Server (NTRS)
1988-01-01
Wearing headsets, Mission Specialist (MS) Mark C. Lee (left), MS Mary L. Cleave (center), and MS Norman E. Thagard pose on aft flight deck in JSC's fixed base (FB) shuttle mission simulator (SMS). In background, Commander David M. Walker and Pilot Ronald J. Grabe check data on forward flight deck CRT monitors. FB-SMS is located in JSC's Mission Simulation and Training Facility Bldg 5. Crewmembers are scheduled to fly aboard Atlantis, Orbiter Vehicle (OV) 104, in April 1989 for NASA mission STS-30.
Predicting durations of online collective actions based on Peaks' heights
NASA Astrophysics Data System (ADS)
Lu, Peng; Nie, Shizhao; Wang, Zheng; Jing, Ziwei; Yang, Jianwu; Qi, Zhongxiang; Pujia, Wangmo
2018-02-01
Capturing the whole process of collective actions, the peak model contains four stages: Prepare, Outbreak, Peak, and Vanish. Based on the peak model, one of its key quantities, the ratio between peak heights and spans, is further investigated in this paper. Although durations (spans) and peak heights are highly diversified, the ratio between them appears to be quite stable. If the ratio's regularity is discovered, we can predict how long a collective action lasts and when it ends based on the peak's height. In this work, we combined mathematical simulations and empirical big data of 148 cases to explore the regularity of the ratio's distribution. The simulation results indicate that the ratio has some distributional regularities, and that its distribution is not normal. The big data were collected from 148 online collective actions, with the whole process of participation recorded. The outcomes from the empirical big data indicate that the ratio is closer to being log-normally distributed. This rule holds true for both the total set of cases and subgroups of the 148 online collective actions. The Q-Q plot is applied to check the normal distribution of the ratio's logarithm, and the ratio's logarithm does follow the normal distribution.
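The distributional check described here (log-transform the peak-to-span ratio, then inspect a Q-Q plot against the normal) can be reproduced with SciPy on synthetic data. The log-normal parameters below are invented, and the Shapiro-Wilk test is a generic stand-in for the paper's Q-Q inspection.

```python
# Checking whether a ratio is approximately log-normal: take logs, then compare
# against a normal distribution (Q-Q plot / normality test). Synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ratio = rng.lognormal(mean=-1.0, sigma=0.4, size=148)   # synthetic peak/span ratios

log_ratio = np.log(ratio)
(osm, osr), (slope, intercept, r) = stats.probplot(log_ratio, dist="norm")
stat, p_value = stats.shapiro(log_ratio)                 # generic normality test

print(f"Q-Q correlation of log(ratio) with normal quantiles: r = {r:.3f}")
print(f"Shapiro-Wilk on log(ratio): p = {p_value:.3f} "
      "(large p is consistent with log-normality)")
```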
Construction of Protograph LDPC Codes with Linear Minimum Distance
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Sam; Jones, Christopher
2006-01-01
A construction method for protograph-based LDPC codes that simultaneously achieve low iterative decoding threshold and linear minimum distance is proposed. We start with a high-rate protograph LDPC code with variable node degrees of at least 3. Lower rate codes are obtained by splitting check nodes and connecting them by degree-2 nodes. This guarantees the linear minimum distance property for the lower-rate codes. Excluding checks connected to degree-1 nodes, we show that the number of degree-2 nodes should be at most one less than the number of checks for the protograph LDPC code to have linear minimum distance. Iterative decoding thresholds are obtained by using the reciprocal channel approximation. Thresholds are lowered by using either precoding or at least one very high-degree node in the base protograph. A family of high- to low-rate codes with minimum distance linearly increasing in block size and with capacity-approaching performance thresholds is presented. FPGA simulation results for a few example codes show that the proposed codes perform as predicted.
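As background for readers less familiar with LDPC terminology, checking a candidate codeword against a parity-check matrix is just a syndrome computation over GF(2). The tiny matrix below is an arbitrary toy example, not one of the proposed protograph codes.

```python
# Syndrome check of a candidate codeword against a toy parity-check matrix H
# over GF(2). H is an arbitrary small example, not a protograph LDPC code.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)

def is_codeword(word):
    """A word is a codeword iff every parity check is satisfied: H @ c = 0 (mod 2)."""
    syndrome = H @ np.asarray(word, dtype=np.uint8) % 2
    return not syndrome.any()

print(is_codeword([0, 0, 0, 0, 0, 0]))   # True: the all-zero word is always valid
print(is_codeword([1, 0, 0, 0, 0, 0]))   # False: a single set bit violates two checks
```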
RAYLEIGH–TAYLOR UNSTABLE FLAMES—FAST OR FASTER?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hicks, E. P., E-mail: eph2001@columbia.edu
2015-04-20
Rayleigh–Taylor (RT) unstable flames play a key role in the explosions of supernovae Ia. However, the dynamics of these flames are still not well understood. RT unstable flames are affected by both the RT instability of the flame front and by RT-generated turbulence. The coexistence of these factors complicates the choice of flame speed subgrid models for full-star Type Ia simulations. Both processes can stretch and wrinkle the flame surface, increasing its area and, therefore, the burning rate. In past research, subgrid models have been based on either the RT instability or turbulence setting the flame speed. We evaluate both models, checking their assumptions and their ability to correctly predict the turbulent flame speed. Specifically, we analyze a large parameter study of 3D direct numerical simulations of RT unstable model flames. This study varies both the simulation domain width and the gravity in order to probe a wide range of flame behaviors. We show that RT unstable flames are different from traditional turbulent flames: they are thinner rather than thicker when turbulence is stronger. We also show that none of the several different types of turbulent flame speed models accurately predicts measured flame speeds. In addition, we find that the RT flame speed model only correctly predicts the measured flame speed in a certain parameter regime. Finally, we propose that the formation of cusps may be the factor causing the flame to propagate more quickly than predicted by the RT model.
Rayleigh-Taylor Unstable Flames -- Fast or Faster?
NASA Astrophysics Data System (ADS)
Hicks, E. P.
2015-04-01
Rayleigh-Taylor (RT) unstable flames play a key role in the explosions of supernovae Ia. However, the dynamics of these flames are still not well understood. RT unstable flames are affected by both the RT instability of the flame front and by RT-generated turbulence. The coexistence of these factors complicates the choice of flame speed subgrid models for full-star Type Ia simulations. Both processes can stretch and wrinkle the flame surface, increasing its area and, therefore, the burning rate. In past research, subgrid models have been based on either the RT instability or turbulence setting the flame speed. We evaluate both models, checking their assumptions and their ability to correctly predict the turbulent flame speed. Specifically, we analyze a large parameter study of 3D direct numerical simulations of RT unstable model flames. This study varies both the simulation domain width and the gravity in order to probe a wide range of flame behaviors. We show that RT unstable flames are different from traditional turbulent flames: they are thinner rather than thicker when turbulence is stronger. We also show that none of the several different types of turbulent flame speed models accurately predicts measured flame speeds. In addition, we find that the RT flame speed model only correctly predicts the measured flame speed in a certain parameter regime. Finally, we propose that the formation of cusps may be the factor causing the flame to propagate more quickly than predicted by the RT model.
Generalized Symbolic Execution for Model Checking and Testing
NASA Technical Reports Server (NTRS)
Khurshid, Sarfraz; Pasareanu, Corina; Visser, Willem; Kofmeyer, David (Technical Monitor)
2003-01-01
Modern software systems, which often are concurrent and manipulate complex data structures, must be extremely reliable. We present a novel framework, based on symbolic execution, for automated checking of such systems. We provide a two-fold generalization of traditional symbolic execution based approaches: one, we define a program instrumentation, which enables standard model checkers to perform symbolic execution; two, we give a novel symbolic execution algorithm that handles dynamically allocated structures (e.g., lists and trees), method preconditions (e.g., acyclicity of lists), data (e.g., integers and strings) and concurrency. The program instrumentation enables a model checker to automatically explore program heap configurations (using a systematic treatment of aliasing) and manipulate logical formulae on program data values (using a decision procedure). We illustrate two applications of our framework: checking correctness of multi-threaded programs that take inputs from unbounded domains with complex structure and generation of non-isomorphic test inputs that satisfy a testing criterion. Our implementation for Java uses the Java PathFinder model checker.
A Computer-Aided Exercise for Checking Novices' Understanding of Market Equilibrium Changes.
ERIC Educational Resources Information Center
Katz, Arnold
1999-01-01
Describes a computer-aided supplement to the introductory microeconomics course that enhances students' understanding with simulation-based tools for reviewing what they have learned from lectures and conventional textbooks about comparing market equilibria. Includes a discussion of students' learning progressions and retention after using the…
LoRa Scalability: A Simulation Model Based on Interference Measurements
Haxhibeqiri, Jetmir; Van den Abeele, Floris; Moerman, Ingrid; Hoebeke, Jeroen
2017-01-01
LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single-gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data. PMID:28545239
LoRa Scalability: A Simulation Model Based on Interference Measurements.
Haxhibeqiri, Jetmir; Van den Abeele, Floris; Moerman, Ingrid; Hoebeke, Jeroen
2017-05-23
LoRa is a long-range, low power, low bit rate and single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability in terms of the number of end devices per gateway of single-gateway LoRaWAN deployments. First, we determine the intra-technology interference behavior with two physical end nodes, by checking the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single-gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, the losses will be up to 32%. In such a case, pure Aloha will have around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.
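The pure-Aloha comparison in the abstract can be reproduced from the classical collision formula: with normalized offered load G, the probability that a given frame escapes collision is exp(-2G), so the loss fraction is 1 - exp(-2G). The sketch below applies this under simplified assumptions (identical packet airtime and Poisson arrivals for all nodes); the airtime and reporting interval are invented, not the measured LoRa parameters.

```python
# Pure-Aloha loss estimate for a single-gateway deployment, assuming identical
# packet airtime and Poisson arrivals. Airtime/interval values are invented.
import math

def pure_aloha_loss(n_nodes, packets_per_hour_per_node, airtime_s):
    """Loss fraction 1 - exp(-2G), with G the normalized offered load."""
    packet_rate = n_nodes * packets_per_hour_per_node / 3600.0   # packets per second
    offered_load = packet_rate * airtime_s                       # G
    return 1.0 - math.exp(-2.0 * offered_load)

for n in (100, 500, 1000):
    loss = pure_aloha_loss(n_nodes=n, packets_per_hour_per_node=6, airtime_s=0.7)
    print(f"{n:4d} nodes: estimated pure-Aloha loss = {loss:.0%}")
```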
NASA Astrophysics Data System (ADS)
Jöckel, Patrick; Tost, Holger; Pozzer, Andrea; Kunze, Markus; Kirner, Oliver; Brenninkmeijer, Carl A. M.; Brinkop, Sabine; Cai, Duy S.; Dyroff, Christoph; Eckstein, Johannes; Frank, Franziska; Garny, Hella; Gottschaldt, Klaus-Dirk; Graf, Phoebe; Grewe, Volker; Kerkweg, Astrid; Kern, Bastian; Matthes, Sigrun; Mertens, Mariano; Meul, Stefanie; Neumaier, Marco; Nützel, Matthias; Oberländer-Hayn, Sophie; Ruhnke, Roland; Runde, Theresa; Sander, Rolf; Scharffe, Dieter; Zahn, Andreas
2016-03-01
Three types of reference simulations, as recommended by the Chemistry-Climate Model Initiative (CCMI), have been performed with version 2.51 of the European Centre for Medium-Range Weather Forecasts - Hamburg (ECHAM)/Modular Earth Submodel System (MESSy) Atmospheric Chemistry (EMAC) model: hindcast simulations (1950-2011), hindcast simulations with specified dynamics (1979-2013), i.e. nudged towards ERA-Interim reanalysis data, and combined hindcast and projection simulations (1950-2100). The manuscript summarizes the updates of the model system and details the different model set-ups used, including the on-line calculated diagnostics. Simulations have been performed with two different nudging set-ups, with and without interactive tropospheric aerosol, and with and without a coupled ocean model. Two different vertical resolutions have been applied. The on-line calculated sources and sinks of reactive species are quantified and a first evaluation of the simulation results from a global perspective is provided as a quality check of the data. The focus is on the intercomparison of the different model set-ups. The simulation data will become publicly available via CCMI and the Climate and Environmental Retrieval and Archive (CERA) database of the German Climate Computing Centre (DKRZ). This manuscript is intended to serve as an extensive reference for further analyses of the Earth System Chemistry integrated Modelling (ESCiMo) simulations.
NASA Astrophysics Data System (ADS)
Mendes, Isabel; Proença, Isabel
2011-11-01
In this article, we apply count-data travel-cost methods to a truncated sample of visitors to estimate the Peneda-Gerês National Park (PGNP) average consumer surplus (CS) for each day of visit. The measurement of recreation demand is highly specific because it is calculated by number of days of stay per visit. We therefore propose the application of altered truncated count-data models or truncated count-data models on grouped data to estimate a single, on-site individual recreation demand function, with the price (cost) of each recreation day per trip equal to out-of-pocket and time travel plus out-of-pocket and on-site time costs. We further check the sensitivity of coefficient estimations to alternative models and analyse the welfare measure precision by using the delta and simulation methods by Creel and Loomis. With simulated limits, CS is estimated to be €194 (range €116 to €448). This information is of use in the quest to improve government policy and PGNP management and conservation as well as promote nature-based tourism. To our knowledge, this is the first attempt to measure the average recreation net benefits of each day of stay generated by a national park by using truncated altered and truncated grouped count-data travel-cost models based on observing the individual number of days of stay.
Study of Magnetic Damping Effect on Convection and Solidification Under G-Jitter Conditions
NASA Technical Reports Server (NTRS)
Li, Ben Q.; deGroh, H. C., III
1999-01-01
As shown by NASA resources dedicated to measuring residual gravity (SAMS and OARE systems), g-jitter is a critical issue affecting space experiments on solidification processing of materials. This study aims to provide, through extensive numerical simulations and ground based experiments, an assessment of the use of magnetic fields in combination with microgravity to reduce the g-jitter induced convective flows in space processing systems. We have so far completed asymptotic analyses based on the analytical solutions for g-jitter driven flow and magnetic field damping effects for a simple one-dimensional parallel plate configuration, and developed both 2-D and 3-D numerical models for g-jitter driven flows in simple solidification systems with and without presence of an applied magnetic field. Numerical models have been checked with the analytical solutions and have been applied to simulate the convective flows and mass transfer using both synthetic g-jitter functions and the g-jitter data taken from space flight. Some useful findings have been obtained from the analyses and the modeling results. Some key points may be summarized as follows: (1) the amplitude of the oscillating velocity decreases at a rate inversely proportional to the g-jitter frequency and with an increase in the applied magnetic field; (2) the induced flow approximately oscillates at the same frequency as the affecting g-jitter, but out of a phase angle; (3) the phase angle is a complicated function of geometry, applied magnetic field, temperature gradient and frequency; (4) g-jitter driven flows exhibit a complex fluid flow pattern evolving in time; (5) the damping effect is more effective for low frequency flows; and (6) the applied magnetic field helps to reduce the variation of solutal distribution along the solid-liquid interface. Work in progress includes numerical simulations and ground-based measurements. Both 2-D and 3-D numerical simulations are being continued to obtain further information on g-jitter driven flows and magnetic field effects. A physical model for ground-based measurements is completed and some measurements of the oscillating convection are being taken on the physical model. The comparison of the measurements with numerical simulations is in progress. Additional work planned in the project will also involve extending the 2-D numerical model to include the solidification phenomena with the presence of both g-jitter and magnetic fields.
Resin Film Infusion (RFI) Process Modeling for Large Transport Aircraft Wing Structures
NASA Technical Reports Server (NTRS)
Loos, Alfred C.; Caba, Aaron C.; Furrow, Keith W.
2000-01-01
This investigation completed the verification of a three-dimensional resin transfer molding/resin film infusion (RTM/RFI) process simulation model. The model incorporates resin flow through an anisotropic carbon fiber preform, cure kinetics of the resin, and heat transfer within the preform/tool assembly. The computer model can predict the flow front location, resin pressure distribution, and thermal profiles in the modeled part. The formulation for the flow model is given using the finite element/control volume (FE/CV) technique based on Darcy's Law of creeping flow through a porous media. The FE/CV technique is a numerically efficient method for finding the flow front location and the fluid pressure. The heat transfer model is based on the three-dimensional, transient heat conduction equation, including heat generation. Boundary conditions include specified temperature and convection. The code was designed with a modular approach so the flow and/or the thermal module may be turned on or off as desired. Both models are solved sequentially in a quasi-steady state fashion. A mesh refinement study was completed on a one-element thick model to determine the recommended size of elements that would result in a converged model for a typical RFI analysis. Guidelines are established for checking the convergence of a model, and the recommended element sizes are listed. Several experiments were conducted and computer simulations of the experiments were run to verify the simulation model. Isothermal, non-reacting flow in a T-stiffened section was simulated to verify the flow module. Predicted infiltration times were within 12-20% of measured times. The predicted pressures were approximately 50% of the measured pressures. A study was performed to attempt to explain the difference in pressures. Non-isothermal experiments with a reactive resin were modeled to verify the thermal module and the resin model. Two panels were manufactured using the RFI process. One was a stepped panel and the other was a panel with two 'T' stiffeners. The difference between the predicted infiltration times and the experimental times was 4% to 23%.
Effects of Changing Emissions on Ozone and Particulates in the Northeastern United States
NASA Astrophysics Data System (ADS)
Frost, G. J.; McKeen, S.; Trainer, M.; Ryerson, T.; Holloway, J.; Brock, C.; Middlebrook, A.; Wollny, A.; Matthew, B.; Williams, E.; Lerner, B.; Fortin, T.; Sueper, D.; Parrish, D.; Fehsenfeld, F.; Peckham, S.; Grell, G.; Peltier, R.; Weber, R.; Quinn, P.; Bates, T.
2004-12-01
Emissions of nitrogen oxides (NOx) from electric power generation have decreased in recent years due to changes in burner technology and fuels used. Mobile NOx emissions assessments are less certain, since they must account for increases in vehicle miles traveled, changes in the proportion of diesel and gasoline vehicles, and more stringent controls on engines and fuels. The impact of these complicated emission changes on a particular region's air quality must be diagnosed by a combination of observation and model simulation. The New England Air Quality Study - Intercontinental Transport and Chemical Transformation 2004 (NEAQS-ITCT 2004) program provides an opportunity to test the effects of changes in emissions of NOx and other precursors on air quality in the northeastern United States. An array of ground, marine, and airborne observation platforms deployed during the study offer checks on emission inventories and air quality model simulations, like those of the Weather Research and Forecasting model coupled with online chemistry (WRF-Chem). Retrospective WRF-Chem runs are carried out with two EPA inventories, one compiled for base year 1999 and an update for 2004 incorporating projected and known changes in emissions during the past 5 years. Differences in model predictions of ozone, particulates, and other tracers using the two inventories are investigated. The inventories themselves and the model simulations are compared with the extensive observations available during NEAQS-ITCT 2004. Preliminary insights regarding the sensitivity of the model to NOx emission changes are discussed.
Posterior Predictive Checks for Conditional Independence between Response Time and Accuracy
ERIC Educational Resources Information Center
Bolsinova, Maria; Tijmstra, Jesper
2016-01-01
Conditional independence (CI) between response time and response accuracy is a fundamental assumption of many joint models for time and accuracy used in educational measurement. In this study, posterior predictive checks (PPCs) are proposed for testing this assumption. These PPCs are based on three discrepancy measures reflecting different…
Trajectory Design Enhancements to Mitigate Risk for the Transiting Exoplanet Survey Satellite (TESS)
NASA Technical Reports Server (NTRS)
Dichmann, Donald; Parker, Joel; Nickel, Craig; Lutz, Stephen
2016-01-01
The Transiting Exoplanet Survey Satellite (TESS) will employ a highly eccentric Earth orbit, in 2:1 lunar resonance, which will be reached with a lunar flyby preceded by 3.5 phasing loops. The TESS mission has limited propellant and several constraints on the science orbit and on the phasing loops. Based on analysis and simulation, we have designed the phasing loops to reduce delta-V (DV) and to mitigate risk due to maneuver execution errors. We have automated the trajectory design process and use distributed processing to generate optimal nominal trajectories, to check constraint satisfaction, and finally to model the effects of maneuver errors to identify trajectories that best meet the mission requirements.
Absolute magnitude calibration using trigonometric parallax - Incomplete, spectroscopic samples
NASA Technical Reports Server (NTRS)
Ratnatunga, Kavan U.; Casertano, Stefano
1991-01-01
A new numerical algorithm is used to calibrate the absolute magnitude of spectroscopically selected stars from their observed trigonometric parallax. This procedure, based on maximum-likelihood estimation, can retrieve unbiased estimates of the intrinsic absolute magnitude and its dispersion even from incomplete samples suffering from selection biases in apparent magnitude and color. It can also make full use of low accuracy and negative parallaxes and incorporate censorship on reported parallax values. Accurate error estimates are derived for each of the fitted parameters. The algorithm allows an a posteriori check of whether the fitted model gives a good representation of the observations. The procedure is described in general and applied to both real and simulated data.
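One way to picture the kind of estimator described above is a Gaussian likelihood written in parallax space, which is what lets low-accuracy and even negative measured parallaxes contribute: for an assumed absolute magnitude, each star's predicted parallax follows from its apparent magnitude, and the likelihood compares that prediction with the measured parallax and its error. The sketch below is a bare-bones stand-in on synthetic data, with no selection-bias terms and no intrinsic dispersion, so it is not the published algorithm.

```python
# Bare-bones maximum-likelihood calibration of a mean absolute magnitude from
# trigonometric parallaxes, written in parallax space so that negative measured
# parallaxes can still be used. Synthetic data; no selection-bias correction.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Synthetic "catalog": true M = 5.0 mag, distances 20-200 pc, noisy parallaxes
true_M = 5.0
dist_pc = rng.uniform(20, 200, size=300)
app_mag = true_M + 5 * np.log10(dist_pc) - 5
plx_err = np.full(dist_pc.size, 5e-3)                  # 5 mas errors (in arcsec)
plx_obs = 1.0 / dist_pc + rng.normal(0, plx_err)       # some may scatter negative

def neg_log_like(params):
    M = params[0]
    # Predicted parallax (arcsec) if the absolute magnitude were exactly M,
    # from m - M = 5 log10(d) - 5 and p = 1/d
    plx_pred = 10 ** (0.2 * (M - app_mag) - 1.0)
    resid = (plx_obs - plx_pred) / plx_err
    return 0.5 * np.sum(resid ** 2)

fit = minimize(neg_log_like, x0=[4.0], method="Nelder-Mead")
print(f"recovered M = {fit.x[0]:.2f} mag (true value {true_M})")
```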
Wu, Binxin
2010-12-01
In this paper, 12 turbulence models for single-phase non-Newtonian fluid flow in a pipe are evaluated by comparing the frictional pressure drops obtained from computational fluid dynamics (CFD) with those from three friction factor correlations. The turbulence models studied are (1) three high-Reynolds-number k-ε models, (2) six low-Reynolds-number k-ε models, (3) two k-ω models, and (4) the Reynolds stress model. The simulation results indicate that the Chang-Hsieh-Chen version of the low-Reynolds-number k-ε model performs better than the other models in predicting the frictional pressure drops while the standard k-ω model has an acceptable accuracy and a low computing cost. In the model applications, CFD simulation of mixing in a full-scale anaerobic digester with pumped circulation is performed to propose an improvement in the effective mixing standards recommended by the U.S. EPA based on the effect of rheology on the flow fields. Characterization of the velocity gradient is conducted to quantify the growth or breakage of an assumed floc size. Placement of two discharge nozzles in the digester is analyzed to show that spacing two nozzles 180° apart with each one discharging at an angle of 45° off the wall is the most efficient. Moreover, the similarity rules of geometry and mixing energy are checked for scaling up the digester.
Avionics System Architecture Tool
NASA Technical Reports Server (NTRS)
Chau, Savio; Hall, Ronald; Traylor, Marcus; Whitfield, Adrian
2005-01-01
Avionics System Architecture Tool (ASAT) is a computer program intended for use during the avionics-system-architecture-design phase of the process of designing a spacecraft for a specific mission. ASAT enables simulation of the dynamics of the command-and-data-handling functions of the spacecraft avionics in the scenarios in which the spacecraft is expected to operate. ASAT is built upon I-Logix Statemate MAGNUM, providing a complement of dynamic system modeling tools, including a graphical user interface (GUI), model checking capabilities, and a simulation engine. ASAT augments this with a library of predefined avionics components and additional software to support building and analyzing avionics hardware architectures using these components.
Prediction Interval Development for Wind-Tunnel Balance Check-Loading
NASA Technical Reports Server (NTRS)
Landman, Drew; Toro, Kenneth G.; Commo, Sean A.; Lynn, Keith C.
2014-01-01
Results from the Facility Analysis Verification and Operational Reliability project revealed a critical gap in capability in ground-based aeronautics research applications. Without a standardized process for check-loading the wind-tunnel balance or the model system, the quality of the aerodynamic force data collected varied significantly between facilities. A prediction interval is required in order to confirm a check-loading. The prediction interval provides an expected upper and lower bound on the balance load prediction at a given confidence level. A method has been developed which accounts for sources of variability due to calibration and check-load application. The prediction interval calculation method and a case study demonstrating its use are provided. Validation of the methods is demonstrated for the case study based on the probability of capture of confirmation points.
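For context, a prediction interval for a new check-load response from a fitted regression (calibration) model takes the familiar form yhat0 ± t(alpha/2, n-p) * s * sqrt(1 + x0' (X'X)^-1 x0). The sketch below computes it for a toy one-variable calibration, which is a simplification of the multi-component balance case; the data are invented.

```python
# Prediction interval for a new observation from a simple linear calibration fit:
# yhat0 ± t * s * sqrt(1 + x0' (X'X)^-1 x0). Toy one-variable example only.
import numpy as np
from scipy import stats

# Toy calibration data: applied load vs. balance response (invented numbers)
applied = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
response = np.array([0.1, 49.6, 100.4, 149.2, 200.8, 249.9])

X = np.column_stack([np.ones_like(applied), applied])     # design matrix [1, x]
beta, *_ = np.linalg.lstsq(X, response, rcond=None)
resid = response - X @ beta
n, p = X.shape
s = np.sqrt(resid @ resid / (n - p))                       # residual std. error
XtX_inv = np.linalg.inv(X.T @ X)

def prediction_interval(x_new, confidence=0.95):
    x0 = np.array([1.0, x_new])
    yhat = x0 @ beta
    half = (stats.t.ppf(0.5 + confidence / 2, n - p)
            * s * np.sqrt(1.0 + x0 @ XtX_inv @ x0))
    return yhat - half, yhat + half

lo, hi = prediction_interval(125.0)
print(f"95% prediction interval at 125: [{lo:.2f}, {hi:.2f}]")
```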
2014-06-19
urgent and compelling. Recent efforts in this area automate program analysis techniques using model checking and symbolic execution [2, 5–7]. These … bounded model checking tool for x86 binary programs developed at the Air Force Institute of Technology (AFIT). Jiseki creates a bit-vector logic model based … assume there are n different paths through the function foo. The program could potentially call the function foo a bounded number of times, resulting in n
NASA Astrophysics Data System (ADS)
Zhang, Liangjing; Dahle, Christoph; Neumayer, Karl-Hans; Dobslaw, Henryk; Flechtner, Frank; Thomas, Maik
2016-04-01
Terrestrial water storage (TWS) variations obtained from GRACE play an increasingly important role in various hydrological and hydro-meteorological applications. Since monthly-mean gravity fields are contaminated by errors caused by a number of sources with distinct spatial correlation structures, filtering is needed to remove in particular high frequency noise. Subsequently, bias and leakage caused by the filtering need to be corrected before the final results are interpreted as GRACE-based observations of TWS. Knowledge about the reliability and performance of different post-processing methods is highly important for the GRACE users. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-like gravity field time-series based on realistic orbits and instrument error assumptions as well as background error assumptions out of the updated ESA Earth System Model. Two non-isotropic filter methods from Kusche (2007) and Swenson and Wahr (2006) are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-like TWS estimates to correct the bias and leakage. Since TWS anomalies out of the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment and will subsequently recommend a processing strategy that shall also be applied to planned GRACE and GRACE-FO Level-3 products for hydrological applications provided by GFZ. Kusche, J. (2007): Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81 (11), 733-749, doi:10.1007/s00190-007-0143-3. Swenson, S. and Wahr, J. (2006): Post-processing removal of correlated errors in GRACE data. Geophysical Research Letters, 33(8):L08402.
Assessment of rockfall susceptibility by integrating statistical and physically-based approaches
NASA Astrophysics Data System (ADS)
Frattini, Paolo; Crosta, Giovanni; Carrara, Alberto; Agliardi, Federico
In Val di Fassa (Dolomites, Eastern Italian Alps) rockfalls constitute the most significant gravity-induced natural disaster that threatens both the inhabitants of the valley, who are few, and the thousands of tourists who populate the area in summer and winter. To assess rockfall susceptibility, we developed an integrated statistical and physically-based approach that aimed to predict both the susceptibility to onset and the probability that rockfalls will attain specific reaches. Through field checks and multi-temporal aerial photo-interpretation, we prepared a detailed inventory of both rockfall source areas and associated scree-slope deposits. Using an innovative technique based on GIS tools and a 3D rockfall simulation code, grid cells pertaining to the rockfall source-area polygons were classified as active or inactive, based on the state of activity of the associated scree-slope deposits. The simulation code allows one to link each source grid cell with scree deposit polygons by calculating the trajectory of each simulated launch of blocks. By means of discriminant analysis, we then identified the mix of environmental variables that best identifies grid cells with low or high susceptibility to rockfalls. Among these variables, structural setting, land use, and morphology were the most important factors that led to the initiation of rockfalls. We developed 3D simulation models of the runout distance, intensity and frequency of rockfalls, whose source grid cells corresponded either to the geomorphologically-defined source polygons ( geomorphological scenario) or to study area grid cells with slope angle greater than an empirically-defined value of 37° ( empirical scenario). For each scenario, we assigned to the source grid cells an either fixed or variable onset susceptibility; the latter was derived from the discriminant model group (active/inactive) membership probabilities. Comparison of these four models indicates that the geomorphological scenario with variable onset susceptibility appears to be the most realistic model. Nevertheless, political and legal issues seem to guide local administrators, who tend to select the more conservative empirically-based scenario as a land-planning tool.
Wu, Menglong; Han, Dahai; Zhang, Xiang; Zhang, Feng; Zhang, Min; Yue, Guangxin
2014-03-10
We have implemented a modified Low-Density Parity-Check (LDPC) codec algorithm in an ultraviolet (UV) communication system. Simulations are conducted with measured parameters to evaluate the LDPC-based UV system performance. Moreover, LDPC (960, 480) and RS (18, 10) codes are implemented and tested on a non-line-of-sight (NLOS) UV test bed. The experimental results are in agreement with the simulation and suggest that, for the given power and a bit error rate (BER) of 10⁻³, the average communication distance increases by 32% with the RS code and by 78% with the LDPC code in comparison with an uncoded system.
Madrasi, Kumpal; Chaturvedula, Ayyappa; Haberer, Jessica E; Sale, Mark; Fossler, Michael J; Bangsberg, David; Baeten, Jared M; Celum, Connie; Hendrix, Craig W
2017-05-01
Adherence is a major factor in the effectiveness of preexposure prophylaxis (PrEP) for HIV prevention. Modeling patterns of adherence helps to identify influential covariates of different types of adherence as well as to enable clinical trial simulation so that appropriate interventions can be developed. We developed a Markov mixed-effects model to understand the covariates influencing adherence patterns to daily oral PrEP. Electronic adherence records (date and time of medication bottle cap opening) from the Partners PrEP ancillary adherence study with a total of 1147 subjects were used. This study included once-daily dosing regimens of placebo, oral tenofovir disoproxil fumarate (TDF), and TDF in combination with emtricitabine (FTC), administered to HIV-uninfected members of serodiscordant couples. One-coin and first- to third-order Markov models were fit to the data using NONMEM® 7.2. Model selection criteria included objective function value (OFV), Akaike information criterion (AIC), visual predictive checks, and posterior predictive checks. Covariates were included based on forward addition (α = 0.05) and backward elimination (α = 0.001). Markov models better described the data than one-coin models. A third-order Markov model gave the lowest OFV and AIC, but the simpler first-order model was used for covariate model building because no additional benefit on prediction of target measures was observed for higher-order models. Female sex and older age had a positive impact on adherence, whereas Sundays, sexual abstinence, and sex with a partner other than the study partner had a negative impact on adherence. Our findings suggest adherence interventions should consider the role of these factors. © 2016, The American College of Clinical Pharmacology.
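A minimal sketch of what a first-order Markov adherence model implies for simulation: the probability of taking today's dose depends only on yesterday's outcome. The transition probabilities below are hypothetical, and no covariate or random-effect structure from the NONMEM model is reproduced.

```python
import numpy as np

def simulate_adherence(n_days, p_take_after_take, p_take_after_miss, rng):
    """First-order Markov chain for daily dosing: today's outcome depends only on yesterday's."""
    taken = np.empty(n_days, dtype=int)
    taken[0] = rng.random() < p_take_after_take      # arbitrary start
    for t in range(1, n_days):
        p = p_take_after_take if taken[t - 1] else p_take_after_miss
        taken[t] = rng.random() < p
    return taken

rng = np.random.default_rng(2)
profile = simulate_adherence(365, p_take_after_take=0.95, p_take_after_miss=0.70, rng=rng)
print("overall adherence:", profile.mean())
```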
Development of a paediatric population-based model of the pharmacokinetics of rivaroxaban.
Willmann, Stefan; Becker, Corina; Burghaus, Rolf; Coboeken, Katrin; Edginton, Andrea; Lippert, Jörg; Siegmund, Hans-Ulrich; Thelen, Kirstin; Mück, Wolfgang
2014-01-01
Venous thromboembolism has been increasingly recognised as a clinical problem in the paediatric population. Guideline recommendations for antithrombotic therapy in paediatric patients are based mainly on extrapolation from adult clinical trial data, owing to the limited number of clinical trials in paediatric populations. The oral, direct Factor Xa inhibitor rivaroxaban has been approved in adult patients for several thromboembolic disorders, and its well-defined pharmacokinetic and pharmacodynamic characteristics and efficacy and safety profiles in adults warrant further investigation of this agent in the paediatric population. The objective of this study was to develop and qualify a physiologically based pharmacokinetic (PBPK) model for rivaroxaban doses of 10 and 20 mg in adults and to scale this model to the paediatric population (0-18 years) to inform the dosing regimen for a clinical study of rivaroxaban in paediatric patients. Experimental data sets from phase I studies supported the development and qualification of an adult PBPK model. This adult PBPK model was then scaled to the paediatric population by including anthropometric and physiological information, age-dependent clearance and age-dependent protein binding. The pharmacokinetic properties of rivaroxaban in virtual populations of children were simulated for two body weight-related dosing regimens equivalent to 10 and 20 mg once daily in adults. The quality of the model was judged by means of a visual predictive check. Subsequently, paediatric simulations of the area under the plasma concentration-time curve (AUC), maximum (peak) plasma drug concentration (Cmax) and concentration in plasma after 24 h (C24h) were compared with the adult reference simulations. Simulations for AUC, Cmax and C24h throughout the investigated age range largely overlapped with values obtained for the corresponding dose in the adult reference simulation for both body weight-related dosing regimens. However, pharmacokinetic values in infants and preschool children (body weight <40 kg) were lower than the 90% confidence interval threshold of the adult reference model and, therefore, indicated that doses in these groups may need to be increased to achieve the same plasma levels as in adults. For children with body weight between 40 and 70 kg, simulated plasma pharmacokinetic parameters (Cmax, C24h and AUC) overlapped with the values obtained in the corresponding adult reference simulation, indicating that body weight-related exposure was similar between these children and adults. In adolescents of >70 kg body weight, the simulated 90% prediction interval values of AUC and C24h were much higher than the 90% confidence interval of the adult reference population, owing to the weight-based simulation approach, but for these patients rivaroxaban would be administered at adult fixed doses of 10 and 20 mg. The paediatric PBPK model developed here allowed an exploratory analysis of the pharmacokinetics of rivaroxaban in children to inform the dosing regimen for a clinical study in paediatric patients.
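A toy sketch of the scaling idea, assuming simple allometric exponents (0.75 for clearance, 1 for volume) and illustrative adult parameter values; the published PBPK model additionally includes maturation and protein-binding terms that are not reproduced here.

```python
def scale_to_child(cl_adult_l_h, v_adult_l, weight_kg,
                   cl_exponent=0.75, v_exponent=1.0):
    """Simple allometric scaling of clearance and volume from a 70-kg adult."""
    factor = weight_kg / 70.0
    return cl_adult_l_h * factor ** cl_exponent, v_adult_l * factor ** v_exponent

cl_child, v_child = scale_to_child(cl_adult_l_h=10.0, v_adult_l=50.0, weight_kg=20.0)
# A body-weight-proportional dose keeps dose/CL (i.e. AUC) lower in small children,
# consistent with the under-exposure reported for <40 kg body weight:
dose_adult = 20.0                                   # mg (adult reference)
dose_child = dose_adult * 20.0 / 70.0               # weight-based dose for a 20-kg child
auc_adult = dose_adult / 10.0
auc_child = dose_child / cl_child
print(auc_child / auc_adult)                        # < 1, i.e. lower exposure than the adult
```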
Piha, Kustaa; Sumanen, Hilla; Lahelma, Eero; Rahkonen, Ossi
2017-04-01
There is contradictory evidence on the association between health check-ups and future morbidity. Among the general population, those with a high socioeconomic position participate more often in health check-ups. The main aims of this study were to analyse whether attendance at health check-ups is socioeconomically patterned and affects sickness absence over a 10-year follow-up. This register-based follow-up study included municipal employees of the City of Helsinki. 13 037 employees were invited to an age-based health check-up during 2000-2002, with a 62% attendance rate. Education, occupational class and individual income were used to measure socioeconomic position. Medically certified sickness absence of 4 days or more was measured and controlled for at baseline and used as an outcome over follow-up. The mean follow-up time was 7.5 years. Poisson regression was used. Men and employees with lower socioeconomic position participated more actively in health check-ups. Among women, non-attendance at the health check-up predicted higher sickness absence during follow-up (relative risk = 1.26, 95% CI 1.17 to 1.37) in the fully adjusted model. Health check-ups were not effective in reducing socioeconomic differences in sickness absence. Age-based health check-ups reduced subsequent sickness absence and should be promoted. Attendance at health check-ups should be as high as possible. Contextual factors need to be taken into account when applying the results in interventions in other settings. Published by the BMJ Publishing Group Limited.
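For illustration, a hedged sketch of a Poisson regression of sickness-absence spell counts with follow-up time as exposure, on entirely simulated data; variable names and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "non_attender": rng.integers(0, 2, n),            # 1 = did not attend the health check
    "age":          rng.uniform(40, 60, n),
    "female":       rng.integers(0, 2, n),
    "followup_yrs": rng.uniform(5, 10, n),
})
# Hypothetical sickness-absence spell counts
rate = np.exp(-1.0 + 0.23 * df["non_attender"] + 0.01 * (df["age"] - 50))
df["spells"] = rng.poisson(rate * df["followup_yrs"])

X = sm.add_constant(df[["non_attender", "age", "female"]])
model = sm.GLM(df["spells"], X, family=sm.families.Poisson(),
               exposure=df["followup_yrs"]).fit()
print(np.exp(model.params["non_attender"]))           # rate ratio for non-attendance
```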
Concrete Model Checking with Abstract Matching and Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Pelanek, Radek; Visser, Willem
2005-01-01
We propose an abstraction-based model checking method which relies on refinement of an under-approximation of the feasible behaviors of the system under analysis. The method preserves errors to safety properties, since all analyzed behaviors are feasible by definition. The method does not require an abstract transition relation to be generated, but instead executes the concrete transitions while storing abstract versions of the concrete states, as specified by a set of abstraction predicates. For each explored transition, the method checks, with the help of a theorem prover, whether there is any loss of precision introduced by abstraction. The results of these checks are used to decide termination or to refine the abstraction, by generating new abstraction predicates. If the (possibly infinite) concrete system under analysis has a finite bisimulation quotient, then the method is guaranteed to eventually explore an equivalent finite bisimilar structure. We illustrate the application of the approach for checking concurrent programs. We also show how a lightweight variant can be used for efficient software testing.
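A toy sketch of the core idea, executing concrete transitions while matching states through a set of abstraction predicates: with a coarse predicate set the under-approximation misses the error, and adding a predicate (the refinement step, here done by hand) recovers it. The theorem-prover-based precision check is omitted.

```python
def abstract(state, predicates):
    """Abstract version of a concrete state: the tuple of predicate values."""
    return tuple(p(state) for p in predicates)

def search(initial, successors, is_error, predicates):
    """Explore concrete transitions but match (prune) on abstract states, so any
    error found corresponds to a feasible concrete execution."""
    visited = {abstract(initial, predicates)}
    stack = [initial]
    while stack:
        s = stack.pop()
        if is_error(s):
            return s
        for t in successors(s):
            a = abstract(t, predicates)
            if a not in visited:
                visited.add(a)
                stack.append(t)
    return None

succ = lambda s: [s + 1, s + 2] if s < 10 else []      # toy counter system
err = lambda s: s == 7                                 # the error state to reach
coarse = [lambda s: s % 2 == 0, lambda s: s > 5]
print(search(0, succ, err, coarse))                    # None: abstraction too coarse, error missed
print(search(0, succ, err, coarse + [lambda s: s % 3]))  # 7: refined predicates recover the error
```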
Quasi-Geostrophic Diagnosis of Mixed-Layer Dynamics Embedded in a Mesoscale Turbulent Field
NASA Astrophysics Data System (ADS)
Chavanne, C. P.; Klein, P.
2016-02-01
A new quasi-geostrophic model has been developed to diagnose the three-dimensional circulation, including the vertical velocity, in the upper ocean from high-resolution observations of sea surface height and buoyancy. The formulation for the adiabatic component departs from the classical surface quasi-geostrophic framework considered before, since it takes into account the stratification within the surface mixed layer, which is usually much weaker than that in the ocean interior. To achieve this, the model approximates the ocean with two constant-stratification layers: a finite-thickness surface layer (or mixed layer) and an infinitely deep interior layer. It is shown that the leading-order adiabatic circulation is entirely determined if both the surface streamfunction and buoyancy anomalies are considered. The surface layer further includes a diabatic dynamical contribution. The parameterization of diabatic vertical velocities is based on their restoring impact on the thermal-wind balance, which is perturbed by turbulent vertical mixing of momentum and buoyancy. The model's skill in reproducing the three-dimensional circulation in the upper ocean from surface data is checked against the output of a high-resolution primitive-equation numerical simulation. Correlations between simulated and diagnosed vertical velocities in the mixed layer are significantly improved for the new model compared with the classical surface quasi-geostrophic model, reaching 0.9 near the surface.
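As a minimal illustration of diagnosing flow from sea surface height alone, the sketch below computes the geostrophic streamfunction and velocities from a synthetic SSH anomaly; the mixed-layer buoyancy contribution and the diabatic parameterization of the paper are not reproduced.

```python
import numpy as np

g, f = 9.81, 1.0e-4                       # gravity (m/s^2), Coriolis parameter (1/s)
dx = dy = 2.0e3                           # grid spacing (m), hypothetical 2-km SSH field

y, x = np.meshgrid(np.arange(0, 200e3, dy), np.arange(0, 200e3, dx), indexing="ij")
eta = 0.1 * np.exp(-((x - 100e3) ** 2 + (y - 100e3) ** 2) / (2 * (30e3) ** 2))  # SSH anomaly (m)

psi = g * eta / f                          # geostrophic streamfunction
dpsi_dy, dpsi_dx = np.gradient(psi, dy, dx)
u = -dpsi_dy                               # zonal geostrophic velocity
v = dpsi_dx                                # meridional geostrophic velocity
zeta = np.gradient(v, dx, axis=1) - np.gradient(u, dy, axis=0)   # relative vorticity
```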
Diagnostics for generalized linear hierarchical models in network meta-analysis.
Zhao, Hong; Hodges, James S; Carlin, Bradley P
2017-09-01
Network meta-analysis (NMA) combines direct and indirect evidence comparing more than 2 treatments. Inconsistency arises when these 2 information sources differ. Previous work focuses on inconsistency detection, but little has been done on how to proceed after identifying inconsistency. The key issue is whether inconsistency changes an NMA's substantive conclusions. In this paper, we examine such discrepancies from a diagnostic point of view. Our methods seek to detect influential and outlying observations in NMA at a trial-by-arm level. These observations may have a large effect on the parameter estimates in NMA, or they may deviate markedly from other observations. We develop formal diagnostics for a Bayesian hierarchical model to check the effect of deleting any observation. Diagnostics are specified for generalized linear hierarchical NMA models and investigated for both published and simulated datasets. Results from our example dataset using either contrast- or arm-based models and from the simulated datasets indicate that the sources of inconsistency in NMA tend not to be influential, though results from the example dataset suggest that they are likely to be outliers. This mimics a familiar result from linear model theory, in which outliers with low leverage are not influential. Future extensions include incorporating baseline covariates and individual-level patient data. Copyright © 2017 John Wiley & Sons, Ltd.
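A toy analogue of the deletion diagnostics, assuming a simple inverse-variance (fixed-effect) pooling rather than the Bayesian hierarchical NMA model; effect sizes are hypothetical.

```python
import numpy as np

def pooled(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate."""
    w = 1.0 / variances
    return np.sum(w * effects) / np.sum(w)

def deletion_influence(effects, variances):
    """Change in the pooled estimate when each observation is deleted in turn,
    a simple analogue of trial-by-arm deletion diagnostics."""
    full = pooled(effects, variances)
    out = []
    for i in range(len(effects)):
        keep = np.arange(len(effects)) != i
        out.append(full - pooled(effects[keep], variances[keep]))
    return np.array(out)

effects = np.array([0.20, 0.25, 0.15, 0.90])     # log odds ratios (last one a possible outlier)
variances = np.array([0.02, 0.03, 0.025, 0.04])
print(deletion_influence(effects, variances))
```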
Sequence dependency of canonical base pair opening in the DNA double helix
Villa, Alessandra
2017-01-01
The flipping-out of a DNA base from the double helical structure is a key step of many cellular processes, such as DNA replication, modification and repair. Base pair opening is the first step of base flipping and the exact mechanism is still not well understood. We investigate sequence effects on base pair opening using extensive classical molecular dynamics simulations targeting the opening of 11 different canonical base pairs in two DNA sequences. Two popular biomolecular force fields are applied. To enhance sampling and calculate free energies, we bias the simulation along a simple distance coordinate using a newly developed adaptive sampling algorithm. The simulation is guided back and forth along the coordinate, allowing for multiple opening pathways. We compare the calculated free energies with those from an NMR study and check assumptions of the model used for interpreting the NMR data. Our results further show that the neighboring sequence is an important factor for the opening free energy, but also indicate that other sequence effects may play a role. All base pairs are observed to have a propensity for opening toward the major groove. The preferred opening base is cytosine for GC base pairs, while for AT there is sequence dependent competition between the two bases. For AT opening, we identify two non-canonical base pair interactions contributing to a local minimum in the free energy profile. For both AT and CG we observe long-lived interactions with water and with sodium ions at specific sites on the open base pair. PMID:28369121
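A minimal sketch of turning sampled values of the opening-distance coordinate into a free-energy profile via F(x) = -kT ln P(x); the samples are synthetic and the reweighting of the adaptive bias (e.g. WHAM-style) is omitted.

```python
import numpy as np

kT = 2.494                                     # kJ/mol at ~300 K
rng = np.random.default_rng(4)
# Hypothetical (unbiased/reweighted) samples of the base-opening distance coordinate (nm)
samples = np.concatenate([rng.normal(0.30, 0.02, 80000),     # closed state
                          rng.normal(0.80, 0.10, 2000)])      # rare open state

hist, edges = np.histogram(samples, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0
free_energy = -kT * np.log(hist[mask])          # F(x) = -kT ln P(x)
free_energy -= free_energy.min()                # set the closed-state minimum to zero
```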
Structured Low-Density Parity-Check Codes with Bandwidth Efficient Modulation
NASA Technical Reports Server (NTRS)
Cheng, Michael K.; Divsalar, Dariush; Duy, Stephanie
2009-01-01
In this work, we study the performance of structured Low-Density Parity-Check (LDPC) Codes together with bandwidth efficient modulations. We consider protograph-based LDPC codes that facilitate high-speed hardware implementations and have minimum distances that grow linearly with block sizes. We cover various higher-order modulations such as 8-PSK, 16-APSK, and 16-QAM. During demodulation, a demapper transforms the received in-phase and quadrature samples into reliability information that feeds the binary LDPC decoder. We will compare various low-complexity demappers and provide simulation results for assorted coded-modulation combinations on the additive white Gaussian noise and independent Rayleigh fading channels.
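A small sketch of an exact soft demapper that converts a received I/Q sample into bit LLRs for the binary LDPC decoder, shown here with Gray-labelled QPSK for brevity; the same routine applies to 8-PSK, 16-APSK or 16-QAM given their constellations and labels, but it is not one of the specific low-complexity demappers compared in the paper.

```python
import numpy as np

def bit_llrs(r, constellation, labels, noise_var):
    """Exact bit LLRs for one received complex sample r, given a constellation
    (complex points), their bit labels (shape [M, m]) and the AWGN variance."""
    metrics = np.exp(-np.abs(r - constellation) ** 2 / noise_var)   # proportional to p(r | s)
    llrs = []
    for k in range(labels.shape[1]):
        p0 = metrics[labels[:, k] == 0].sum()
        p1 = metrics[labels[:, k] == 1].sum()
        llrs.append(np.log(p0 / p1))            # reliability fed to the binary LDPC decoder
    return np.array(llrs)

# Gray-labelled QPSK constellation as a small example
constellation = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])
print(bit_llrs(0.9 + 0.8j, constellation, labels, noise_var=0.5))
```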
A Weakest Precondition Approach to Robustness
NASA Astrophysics Data System (ADS)
Balliu, Musard; Mastroeni, Isabella
With the increasing complexity of information management computer systems, security becomes a real concern. E-government, web-based financial transactions, and military and health care information systems are only a few examples where large amounts of information can reside on different hosts distributed worldwide. It is clear that any disclosure or corruption of confidential information in these contexts can have fatal consequences. Information flow controls constitute an appealing and promising technology to protect both data confidentiality and data integrity. The certification of the security degree of a program that runs in untrusted environments still remains an open problem in the area of language-based security. Robustness asserts that an active attacker, who can modify program code in some fixed points (holes), is unable to disclose more private information than a passive attacker, who merely observes unclassified data. In this paper, we extend a method recently proposed for checking declassified non-interference in the presence of passive attackers only, in order to check robustness by means of weakest precondition semantics. In particular, this semantics simulates the kind of analysis that can be performed by an attacker, i.e., from public output towards private input. The choice of semantics allows us to distinguish between different attack models and to characterize the security of applications in different scenarios.
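A toy sketch of weakest precondition computation for assignments and sequences, the backwards public-output-to-private-input direction that the semantics exploits; it does not encode the declassification or robustness checks themselves.

```python
import sympy as sp

x, y = sp.symbols("x y")

def wp_assign(var, expr, post):
    """wp(var := expr, Q) = Q[expr / var] for a simple assignment statement."""
    return post.subs(var, expr)

def wp_seq(statements, post):
    """wp(S1; S2; ..., Q) computed backwards through the sequence."""
    for var, expr in reversed(statements):
        post = wp_assign(var, expr, post)
    return post

# Program: y := x + 1; x := 2*y   with postcondition x > 10
pre = wp_seq([(y, x + 1), (x, 2 * y)], sp.Gt(x, 10))
print(sp.simplify(pre))     # 2*(x + 1) > 10, i.e. x > 4
```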
Geochemical Reaction Mechanism Discovery from Molecular Simulation
Stack, Andrew G.; Kent, Paul R. C.
2014-11-10
Methods to explore reactions using computer simulation are becoming increasingly quantitative, versatile, and robust. In this review, a rationale for how molecular simulation can help build better geochemical kinetics models is first given. We summarize some common methods that geochemists use to simulate reaction mechanisms, specifically classical molecular dynamics and quantum chemical methods, and discuss their strengths and weaknesses. Useful tools such as umbrella sampling and metadynamics that enable one to explore reactions are discussed. Several case studies wherein geochemists have used these tools to understand reaction mechanisms are presented, including water exchange and sorption on aqueous species and mineral surfaces, surface charging, crystal growth and dissolution, and electron transfer. The impact that molecular simulation has had on our understanding of geochemical reactivity is highlighted in each case. In the future, it is anticipated that molecular simulation of geochemical reaction mechanisms will become more commonplace as a tool to validate and interpret experimental data, and provide a check on the plausibility of geochemical kinetic models.
Stable lattice Boltzmann model for Maxwell equations in media
NASA Astrophysics Data System (ADS)
Hauser, A.; Verhey, J. L.
2017-12-01
The present work shows a method for stable simulations via the lattice Boltzmann (LB) model for electromagnetic (EM) waves transiting homogeneous media. LB models for such media were already presented in the literature, but they suffer from numerical instability when the media transitions are sharp. We use one of these models in the limit of pure vacuum, derived from Liu and Yan [Appl. Math. Model. 38, 1710 (2014), 10.1016/j.apm.2013.09.009], and apply an extension that treats the effects of polarization and magnetization separately. We show simulations of simple examples in which EM waves travel into media to quantify error scaling, stability, accuracy, and time scaling. For conductive media, we use Strang splitting and check the accuracy of the simulations using the example of the skin effect. As for pure EM propagation, the error for the static limits, which are constructed with a current density added in a first-order scheme, can be less than 1%. The presented method is an easily implemented alternative for stabilizing simulations of EM waves propagating in media with spatially complex structure and arbitrary transitions.
Simulation study of pedestrian flow in a station hall during the Spring Festival travel rush
NASA Astrophysics Data System (ADS)
Wang, Lei; Zhang, Qian; Cai, Yun; Zhang, Jianlin; Ma, Qingguo
2013-05-01
The Spring Festival is the most important festival in China. How can passengers go home smoothly and quickly during the Spring Festival travel rush, especially when emergencies such as severe winter weather happen? By modifying the social force model, we simulated the pedestrian flow in a station hall. The simulation revealed that casualties occurred when passengers escaped in panic induced by crowd turbulence. The results suggest that passenger numbers, ticket checking patterns, baggage volumes, and anxiety can affect the speed of passing through the waiting corridor. Our approach is useful for understanding the features of crowd movement and can serve to reproduce mass events. Therefore, it not only provides realistic modeling of pedestrian flow but is also important for better preparation of emergency management.
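A minimal Helbing-style social force sketch, with a driving term toward a goal (e.g. a ticket-checking gate) and pairwise repulsion; walls, baggage and the panic-related modifications of the paper are omitted, and all parameter values are generic defaults.

```python
import numpy as np

def social_forces(pos, vel, goals, v0=1.3, tau=0.5, A=2.0, B=0.3, radius=0.3):
    """Driving force toward the desired velocity plus exponential pedestrian-pedestrian repulsion."""
    n = len(pos)
    desired_dir = goals - pos
    desired_dir /= np.linalg.norm(desired_dir, axis=1, keepdims=True)
    force = (v0 * desired_dir - vel) / tau                 # relaxation toward the desired velocity
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d_vec = pos[i] - pos[j]
            d = np.linalg.norm(d_vec)
            force[i] += A * np.exp((2 * radius - d) / B) * d_vec / d
    return force

rng = np.random.default_rng(5)
pos = rng.uniform(0, 10, size=(50, 2))
vel = np.zeros((50, 2))
goals = np.tile([20.0, 5.0], (50, 1))                      # e.g. the ticket-checking gate
dt = 0.05
for _ in range(200):                                       # explicit Euler time stepping
    vel += dt * social_forces(pos, vel, goals)
    pos += dt * vel
```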
NASA Astrophysics Data System (ADS)
Zhang, Liangjing; Dobslaw, Henryk; Dahle, Christoph; Thomas, Maik; Neumayer, Karl-Hans; Flechtner, Frank
2017-04-01
Having operated for more than a decade now, the GRACE satellite mission provides valuable information on total water storage (TWS) for hydrological and hydro-meteorological applications. The increasing interest in the use of GRACE-based TWS requires an in-depth assessment of the reliability of the outputs and also of their uncertainties. Through years of development, different post-processing methods have been suggested for TWS estimation. However, since GRACE offers a unique way to provide TWS at high spatial and temporal scales, no global ground-truth data are available to fully validate the results. In this contribution, we re-assess a number of commonly used post-processing methods using a simulated GRACE-type gravity field time-series based on realistic orbits and instrument error assumptions as well as background error assumptions out of the updated ESA Earth System Model. Three non-isotropic filter methods from Kusche (2007) and a combined filter from DDK1 and DDK3 based on the ground tracks are tested. Rescaling factors estimated from five different hydrological models and the ensemble median are applied to the post-processed simulated GRACE-type TWS estimates to correct the bias and leakage. Time-variant rescaling factors, i.e. monthly scaling factors and separate scaling factors for seasonal and long-term variations, are investigated as well. Since TWS anomalies out of the post-processed simulation results can be readily compared to the time-variable Earth System Model initially used as "truth" during the forward simulation step, we are able to thoroughly check the plausibility of our error estimation assessment (Zhang et al., 2016) and will subsequently recommend a processing strategy that shall also be applied to planned GRACE and GRACE-FO Level-3 products for terrestrial applications provided by GFZ. Kusche, J. (2007): Approximate decorrelation and non-isotropic smoothing of time-variable GRACE-type gravity field models. J. Geodesy, 81 (11), 733-749, doi:10.1007/s00190-007-0143-3. Zhang, L., Dobslaw, H. and Thomas, M. (2016): Globally gridded terrestrial water storage variations from GRACE satellite gravimetry for hydrometeorological applications. Geophysical Journal International, 206(1):368-378, doi:10.1093/gji/ggw153.
NASA Astrophysics Data System (ADS)
Bai, Cheng-lin; Cheng, Zhi-hui
2016-09-01
In order to further improve the carrier synchronization estimation range and accuracy at low signal-to-noise ratio (SNR), this paper proposes a code-aided carrier synchronization algorithm based on improved nonbinary low-density parity-check (NB-LDPC) codes to study the polarization-division-multiplexing coherent optical orthogonal frequency division multiplexing (PDM-CO-OFDM) system performance in the cases of quadrature phase shift keying (QPSK) and 16 quadrature amplitude modulation (16-QAM) modes. The simulation results indicate that this algorithm can enlarge the frequency and phase offset estimation ranges and greatly enhance the accuracy of the system, and the bit error rate (BER) performance of the system is improved effectively compared with that of a system employing the traditional NB-LDPC code-aided carrier synchronization algorithm.
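For context, a sketch of a classical blind carrier-phase estimator for QPSK (Viterbi and Viterbi fourth-power method), which code-aided algorithms of this kind are designed to outperform at low SNR; the signal model and noise level are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 4096
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))   # QPSK symbols
phase_offset = 0.3                                                        # rad, unknown to the receiver
r = symbols * np.exp(1j * phase_offset) + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# Raising to the 4th power removes the QPSK modulation, leaving 4*phase_offset
# in the averaged argument (the estimate is ambiguous modulo pi/2).
est = np.angle(np.mean(r ** 4)) / 4 - np.pi / 4
print(phase_offset, est % (np.pi / 2))
```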
A mathematical function for the description of nutrient-response curve
Ahmadi, Hamed
2017-01-01
Several mathematical equations have been proposed for modeling nutrient-response curves for animals and humans, justified by goodness of fit and/or by the underlying biological mechanism. In this paper, a functional form of a generalized quantitative model based on the Rayleigh distribution principle for the description of nutrient-response phenomena is derived. The three parameters governing the curve (a) have biological interpretations, (b) may be used to calculate reliable estimates of nutrient-response relationships, and (c) provide the basis for deriving relationships between nutrient and physiological responses. The new function was successfully applied to fit nutritional data obtained from 6 experiments covering a wide range of nutrients and responses. An evaluation and comparison were also done based on simulated data sets to check the suitability of the new model and the four-parameter logistic model for describing nutrient responses. This study indicates the usefulness and wide applicability of the newly introduced, simple and flexible model when applied as a quantitative approach to characterizing nutrient-response curves. This new mathematical way to describe nutrient-response data, with some useful biological interpretations, has the potential to be used as an alternative approach in modeling nutrient-response curves to estimate nutrient efficiency and requirements. PMID:29161271
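A minimal sketch of fitting the four-parameter logistic comparison model to hypothetical dose-response data with scipy; the Rayleigh-based function proposed in the paper is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(x, a, b, c, d):
    """Four-parameter logistic nutrient-response curve: a = low asymptote,
    d = high asymptote, c = inflection dose, b = slope factor."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical dose-response data: nutrient intake vs. weight gain
dose = np.array([0.1, 0.2, 0.4, 0.8, 1.2, 1.6, 2.0, 2.5])
resp = np.array([12.0, 15.0, 22.0, 31.0, 36.0, 38.5, 39.5, 40.0])
params, _ = curve_fit(logistic4, dose, resp, p0=[11.0, 2.0, 0.7, 41.0])
print(dict(zip("abcd", params)))
```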
Takeuchi, Masato; Yano, Ikuko; Ito, Satoko; Sugimoto, Mitsuhiro; Yamamoto, Shota; Yonezawa, Atsushi; Ikeda, Akio; Matsubara, Kazuo
2017-04-01
Topiramate is a second-generation antiepileptic drug used as monotherapy and adjunctive therapy in adults and children with partial seizures. A population pharmacokinetic (PPK) analysis was performed to improve topiramate dosage adjustment for individualized treatment. Patients whose steady-state serum concentration of topiramate was routinely monitored at Kyoto University Hospital from April 2012 to March 2013 were included in the model-building data. A nonlinear mixed effects modeling program was used to evaluate the influence of covariates on topiramate pharmacokinetics. The obtained PPK model was evaluated by internal model validations, including goodness-of-fit plots and prediction-corrected visual predictive checks, and was externally confirmed using validation data from January 2015 to December 2015. A total of 177 steady-state serum concentrations from 93 patients were used for the model-building analysis. The patients' age ranged from 2 to 68 years, and body weight ranged from 8.6 to 105 kg. The median serum concentration of topiramate was 1.7 mcg/mL, and half of the patients received carbamazepine coadministration. Based on a one-compartment model with first-order absorption and elimination, the apparent volume of distribution was 105 L/70 kg, and the apparent clearance was allometrically related to body weight, being 2.25 L/h per 70 kg without carbamazepine or phenytoin. Combination treatment with carbamazepine or phenytoin increased the apparent clearance to 3.51 L/h per 70 kg. Goodness-of-fit plots, prediction-corrected visual predictive checks, and external validation using the validation data from 43 patients confirmed the appropriateness of the final model. Simulations based on the final model showed that dosage adjustments allometrically scaled to body weight can equalize the serum concentrations in children of various ages and adults. The PPK model, using power scaling of body weight, effectively described the topiramate serum concentration profile from pediatric to adult patients. Dosage adjustments based on body weight and concomitant antiepileptic drugs help to obtain the topiramate dosage necessary to reach an effective concentration in each individual.
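A rough sketch of a one-compartment model with first-order absorption and elimination, using the reported typical values for clearance and volume with allometric weight scaling; the absorption rate constant, dose and dosing interval are assumptions, not values from the abstract.

```python
import numpy as np

def conc_oral_1cmt(t, dose, ka, cl, v, f=1.0):
    """Concentration-time profile for a one-compartment model with first-order absorption."""
    ke = cl / v
    return f * dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

weight = 25.0                                        # kg (child)
cl = 2.25 * (weight / 70.0) ** 0.75                  # L/h, allometric scaling, no carbamazepine
v = 105.0 * (weight / 70.0)                          # L
t = np.arange(0, 24 * 7, 0.5)                        # one week of once-daily dosing by superposition
dose, ka = 50.0, 1.5                                 # mg, 1/h (dose and ka assumed)
c = sum(conc_oral_1cmt(np.clip(t - 24 * k, 0, None), dose, ka, cl, v) for k in range(7))
print(c.max())
```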
Construction of type-II QC-LDPC codes with fast encoding based on perfect cyclic difference sets
NASA Astrophysics Data System (ADS)
Li, Ling-xiang; Li, Hai-bing; Li, Ji-bi; Jiang, Hua
2017-09-01
In view of the problems that the encoding complexity of quasi-cyclic low-density parity-check (QC-LDPC) codes is high and the minimum distance is not large enough, which leads to degradation of the error-correction performance, new irregular type-II QC-LDPC codes based on perfect cyclic difference sets (CDSs) are constructed. The parity-check matrices of these type-II QC-LDPC codes consist of zero matrices with weight 0, circulant permutation matrices (CPMs) with weight 1, and circulant matrices with weight 2 (W2CMs). The introduction of W2CMs in the parity-check matrices makes it possible to achieve a larger minimum distance, which can improve the error-correction performance of the codes. The Tanner graphs of these codes contain no girth-4 cycles, thus they have excellent decoding convergence characteristics. In addition, because the parity-check matrices have a quasi-dual-diagonal structure, the fast encoding algorithm can reduce the encoding complexity effectively. Simulation results show that the new type-II QC-LDPC codes achieve excellent error-correction performance and have no error floor phenomenon over the additive white Gaussian noise (AWGN) channel with sum-product algorithm (SPA) iterative decoding.
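A toy sketch of assembling a block parity-check matrix from circulants: one cyclic shift gives a CPM (weight 1), two shifts give a W2CM (weight 2). The shift values are illustrative and do not follow the perfect cyclic difference set construction of the paper.

```python
import numpy as np

def circulant(size, shifts):
    """Sum of identity matrices cyclically shifted by the given offsets:
    one shift gives a CPM (weight 1), two distinct shifts give a W2CM (weight 2)."""
    m = np.zeros((size, size), dtype=int)
    for s in shifts:
        m += np.roll(np.eye(size, dtype=int), s, axis=1)
    return m

p = 7
cpm = circulant(p, [3])            # circulant permutation matrix, column weight 1
w2cm = circulant(p, [1, 5])        # weight-2 circulant matrix
zero = np.zeros((p, p), dtype=int)
# A toy 2x3 block parity-check matrix assembled from these circulants
H = np.block([[cpm, w2cm, zero],
              [zero, cpm, circulant(p, [2])]])
print(H.shape, H.sum(axis=0)[:p])  # column weights of the first block column
```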
Path integral pricing of Wasabi option in the Black-Scholes model
NASA Astrophysics Data System (ADS)
Cassagnes, Aurelien; Chen, Yu; Ohashi, Hirotada
2014-11-01
In this paper, using path integral techniques, we derive a formula for a propagator arising in the study of occupation time derivatives. Using this result we derive a fair price for the case of the cumulative Parisian option. After confirming the validity of the derived result using Monte Carlo simulation, a new type of heavily path-dependent derivative product is investigated. We derive an approximation for the fair price of our so-called Wasabi option and check the accuracy of our result with a Monte Carlo simulation.
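A hedged Monte Carlo sketch of an occupation-time payoff in the Black-Scholes model: the call pays only if the path spends more than a given fraction of the year above a barrier. The contract parameters are illustrative and the Wasabi payoff itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
s0, k, r, sigma, T = 100.0, 100.0, 0.02, 0.25, 1.0
barrier, alpha = 105.0, 0.25          # payoff requires >25% of the year above the barrier (assumed)
n_paths, n_steps = 20_000, 252
dt = T / n_steps

z = rng.standard_normal((n_paths, n_steps))
log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1)
s = np.exp(log_s)

occupation = (s > barrier).mean(axis=1) * T          # time spent above the barrier
payoff = np.where(occupation >= alpha * T, np.maximum(s[:, -1] - k, 0.0), 0.0)
price = np.exp(-r * T) * payoff.mean()
print(price)
```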
NASA Astrophysics Data System (ADS)
Fan, Zuhui
2000-01-01
The linear bias of the dark halos from a model under the Zeldovich approximation is derived and compared with the fitting formula of simulation results. While qualitatively similar to the Press-Schechter formula, this model gives a better description for the linear bias around the turnaround point. This advantage, however, may be compromised by the large uncertainty of the actual behavior of the linear bias near the turnaround point. For a broad class of structure formation models in the cold dark matter framework, a general relation exists between the number density and the linear bias of dark halos. This relation can be readily tested by numerical simulations. Thus, instead of laboriously checking these models one by one, numerical simulation studies can falsify a whole category of models. The general validity of this relation is important in identifying key physical processes responsible for the large-scale structure formation in the universe.
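For reference, the Press-Schechter linear bias relation mentioned above, in the Mo and White (1996) form b = 1 + (nu^2 - 1)/delta_c with nu = delta_c/sigma(M); the sigma(M) values below are illustrative.

```python
import numpy as np

def ps_linear_bias(sigma_m, delta_c=1.686):
    """Press-Schechter linear halo bias b(M) = 1 + (nu^2 - 1)/delta_c,
    with nu = delta_c / sigma(M)."""
    nu = delta_c / np.asarray(sigma_m)
    return 1.0 + (nu**2 - 1.0) / delta_c

# sigma(M) decreases with halo mass: massive, rare halos (nu > 1) are positively biased
sigma_m = np.array([2.0, 1.0, 0.5, 0.25])
print(ps_linear_bias(sigma_m))
```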
Verification and Validation of Autonomy Software at NASA
NASA Technical Reports Server (NTRS)
Pecheur, Charles
2000-01-01
Autonomous software holds the promise of new operation possibilities, easier design and development and lower operating costs. However, as those systems close control loops and arbitrate resources on board with specialized reasoning, the range of possible situations becomes very large and uncontrollable from the outside, making conventional scenario-based testing very inefficient. Analytic verification and validation (V&V) techniques, and model checking in particular, can provide significant help for designing autonomous systems in a more efficient and reliable manner, by providing better coverage and allowing early error detection. This article discusses the general issue of V&V of autonomy software, with an emphasis towards model-based autonomy, model-checking techniques and concrete experiments at NASA.
Site investigation and modelling at "La Maina" landslide (Carnian Alps, Italy)
NASA Astrophysics Data System (ADS)
Marcato, G.; Mantovani, M.; Pasuto, A.; Silvano, S.; Tagliavini, F.; Zabuski, L.; Zannoni, A.
2006-01-01
The Sauris reservoir is a hydroelectric basin closed downstream by a 136 m high, double-arch concrete dam. The dam is firmly anchored to a consistent rock (Dolomia dello Schlern), but the Lower Triassic clayey formations, cropping out especially in the lower part of the slopes, have made the whole catchment basin increasingly prone to landslides. In recent years, the "La Maina" landslide has opened up several joints over a surface of about 100 000 m2, displacing about 1 500 000 m3 of material. Particular attention is now being given to the evolution of the instability area, as the reservoir is located at the foot of the landslide. Under commission from the Regional Authority for Civil Protection, a numerical modelling simulation of the slope in a pseudo-time condition was developed, in order to understand the risk for transport infrastructures, for some houses and for the reservoir, and to take urgent measures to stabilize the slope. A monitoring system consisting of four inclinometers, three wire extensometers and ten GPS bench-mark pillars was immediately set up to check on surface and deep displacements. The collected data and the geological and geomorphological evidence were used to carry out a numerical simulation. The reliability of the results was checked by comparing the model with the morphological evidence of the movement. The mitigation measures were designed and realised following the indications provided by the model.
Information Extraction for System-Software Safety Analysis: Calendar Year 2008 Year-End Report
NASA Technical Reports Server (NTRS)
Malin, Jane T.
2009-01-01
This annual report describes work to integrate a set of tools to support early model-based analysis of failures and hazards due to system-software interactions. The tools perform and assist analysts in the following tasks: 1) extract model parts from text for architecture and safety/hazard models; 2) combine the parts with library information to develop the models for visualization and analysis; 3) perform graph analysis and simulation to identify and evaluate possible paths from hazard sources to vulnerable entities and functions, in nominal and anomalous system-software configurations and scenarios; and 4) identify resulting candidate scenarios for software integration testing. There has been significant technical progress in model extraction from Orion program text sources, architecture model derivation (components and connections) and documentation of extraction sources. Models have been derived from Internal Interface Requirements Documents (IIRDs) and FMEA documents. Linguistic text processing is used to extract model parts and relationships, and the Aerospace Ontology also aids automated model development from the extracted information. Visualizations of these models assist analysts in requirements overview and in checking consistency and completeness.
Monte Carlo modeling of the Siemens Optifocus multileaf collimator.
Laliena, Victor; García-Romero, Alejandro
2015-05-01
We have developed a new component module for the BEAMnrc software package, called SMLC, which models the tongue-and-groove structure of the Siemens Optifocus multileaf collimator. The ultimate goal is to perform accurate Monte Carlo simulations of the IMRT treatments carried out with Optifocus. SMLC has been validated by direct geometry checks and by comparing quantitatively the results of simulations performed with it and with the component module VARMLC. Measurements and Monte Carlo simulations of absorbed dose distributions of radiation fields sensitive to the tongue-and-groove effect have been performed to tune the free parameters of SMLC. The measurements cannot be accurately reproduced with VARMLC. Finally, simulations of a typical IMRT field showed that SMLC improves the agreement with experimental measurements with respect to VARMLC in clinically relevant cases. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Kolyaie, S.; Yaghooti, M.; Majidi, G.
2011-12-01
This paper is part of an ongoing research effort to examine the capability of geostatistical analysis for mobile network coverage prediction, simulation and tuning. Mobile network coverage predictions are used to find network coverage gaps and areas with poor serviceability. They are essential data for engineering and management in order to make better decisions regarding rollout, planning and optimisation of mobile networks. The objective of this research is to evaluate different interpolation techniques in coverage prediction. In the method presented here, raw data collected from drive testing a sample of roads in the study area are analysed and various continuous surfaces are created using different interpolation methods. Two general interpolation methods are used in this paper with different variables; first, Inverse Distance Weighting (IDW) with various powers and numbers of neighbours and, second, ordinary kriging with Gaussian, spherical, circular and exponential semivariogram models with different numbers of neighbours. For comparison of the results, we used check points from the same drive-test data. Predicted values at the check points are extracted from each surface and the differences from the actual values are computed. The output of this research helps in finding an optimised and accurate model for coverage prediction.
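A minimal IDW sketch on synthetic drive-test data, predicting the signal level at check points and reporting the RMSE used for comparing interpolators; coordinates, the propagation trend and the noise level are hypothetical.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, n_neighbours=8):
    """Inverse Distance Weighting prediction at query points from drive-test samples."""
    preds = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        idx = np.argsort(d)[:n_neighbours]
        d_sel = np.maximum(d[idx], 1e-12)          # avoid division by zero at coincident points
        w = 1.0 / d_sel ** power
        preds[i] = np.sum(w * values[idx]) / np.sum(w)
    return preds

rng = np.random.default_rng(8)
drive_xy = rng.uniform(0, 1000, size=(500, 2))               # sampled road locations (m)
rsrp = -70 - 0.02 * drive_xy[:, 0] + rng.normal(0, 2, 500)   # hypothetical signal level (dBm)
check_xy = rng.uniform(0, 1000, size=(50, 2))
errors = idw(drive_xy, rsrp, check_xy) - (-70 - 0.02 * check_xy[:, 0])
print(np.sqrt(np.mean(errors**2)))                           # RMSE at the check points
```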
Bender, Anne Mette; Kawachi, Ichiro; Jørgensen, Torben; Pisinger, Charlotta
2015-01-01
We sought to examine whether neighborhood deprivation is associated with participation in a large population-based health check. Such analyses will help answer the question whether health checks, which are designed to meet the needs of residents in deprived neighborhoods, may increase participation and prove to be more effective in preventing disease. In Europe, no study has previously looked at the association between neighborhood deprivation and participation in a population-based health check. The study population comprised 12,768 persons invited for a health check including screening for ischemic heart disease and lifestyle counseling. The study population was randomly drawn from a population of 179,097 persons living in 73 neighborhoods in Denmark. Data on neighborhood deprivation (percentage with basic education, with low income and not in work) and individual socioeconomic position were retrieved from national administrative registers. Multilevel regression analyses with log links and binary distributions were conducted to obtain relative risks, intraclass correlation coefficients and proportional change in variance. Large differences between neighborhoods existed in both deprivation levels and neighborhood health check participation rate (mean 53%; range 35-84%). In multilevel analyses adjusted for age and sex, higher levels of all three indicators of neighborhood deprivation and a deprivation score were associated with lower participation in a dose-response fashion. Persons living in the most deprived neighborhoods had up to 37% decreased probability of participating compared to those living in the least deprived neighborhoods. Inclusion of individual socioeconomic position in the model attenuated the neighborhood deprivation coefficients, but all except for income deprivation remained statistically significant. Neighborhood deprivation was associated with participation in a population-based health check in a dose-response manner, in which increasing neighborhood deprivation was associated with decreasing participation. This suggests the need to develop preventive health checks tailored to deprived neighborhoods.
HyDE Framework for Stochastic and Hybrid Model-Based Diagnosis
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Brownston, Lee
2012-01-01
Hybrid Diagnosis Engine (HyDE) is a general framework for stochastic and hybrid model-based diagnosis that offers flexibility to the diagnosis application designer. The HyDE architecture supports the use of multiple modeling paradigms at the component and system level. Several alternative algorithms are available for the various steps in diagnostic reasoning. This approach is extensible, with support for the addition of new modeling paradigms as well as diagnostic reasoning algorithms for existing or new modeling paradigms. HyDE is a general framework for stochastic hybrid model-based diagnosis of discrete faults; that is, spontaneous changes in operating modes of components. HyDE combines ideas from consistency-based and stochastic approaches to model- based diagnosis using discrete and continuous models to create a flexible and extensible architecture for stochastic and hybrid diagnosis. HyDE supports the use of multiple paradigms and is extensible to support new paradigms. HyDE generates candidate diagnoses and checks them for consistency with the observations. It uses hybrid models built by the users and sensor data from the system to deduce the state of the system over time, including changes in state indicative of faults. At each time step when observations are available, HyDE checks each existing candidate for continued consistency with the new observations. If the candidate is consistent, it continues to remain in the candidate set. If it is not consistent, then the information about the inconsistency is used to generate successor candidates while discarding the candidate that was inconsistent. The models used by HyDE are similar to simulation models. They describe the expected behavior of the system under nominal and fault conditions. The model can be constructed in modular and hierarchical fashion by building component/subsystem models (which may themselves contain component/ subsystem models) and linking them through shared variables/parameters. The component model is expressed as operating modes of the component and conditions for transitions between these various modes. Faults are modeled as transitions whose conditions for transitions are unknown (and have to be inferred through the reasoning process). Finally, the behavior of the components is expressed as a set of variables/ parameters and relations governing the interaction between the variables. The hybrid nature of the systems being modeled is captured by a combination of the above transitional model and behavioral model. Stochasticity is captured as probabilities associated with transitions (indicating the likelihood of that transition being taken), as well as noise on the sensed variables.
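A highly simplified sketch of the candidate-tracking loop: candidates inconsistent with a new observation spawn successors that hypothesise a fault transition. HyDE's hybrid behavioural models, probabilities and time handling are not represented.

```python
def consistent(candidate, observation, behaviour):
    """A candidate (assignment of component modes) is kept only if the behaviour
    it predicts matches the new observation."""
    return behaviour(candidate) == observation

def update_candidates(candidates, observation, behaviour, fault_modes):
    survivors = [c for c in candidates if consistent(c, observation, behaviour)]
    for c in candidates:
        if not consistent(c, observation, behaviour):
            # generate successors by hypothesising a fault transition in one component
            for comp, mode in fault_modes:
                succ = dict(c, **{comp: mode})
                if consistent(succ, observation, behaviour) and succ not in survivors:
                    survivors.append(succ)
    return survivors

# Toy system: a valve whose flow is 1.0 when "open" and 0.0 when "stuck_closed"
behaviour = lambda c: 1.0 if c["valve"] == "open" else 0.0
candidates = [{"valve": "open"}]
candidates = update_candidates(candidates, 0.0, behaviour, fault_modes=[("valve", "stuck_closed")])
print(candidates)   # the stuck-closed fault candidate explains the zero-flow observation
```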
Automated Assume-Guarantee Reasoning by Abstraction Refinement
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Giannakopoulou, Dimitra
2008-01-01
Current automated approaches for compositional model checking in the assume-guarantee style are based on learning of assumptions as deterministic automata. We propose an alternative approach based on abstraction refinement. Our new method computes the assumptions for the assume-guarantee rules as conservative and not necessarily deterministic abstractions of some of the components, and refines those abstractions using counterexamples obtained from model checking them together with the other components. Our approach also exploits the alphabets of the interfaces between components and performs iterative refinement of those alphabets as well as of the abstractions. We show experimentally that our preliminary implementation of the proposed alternative achieves performance similar to or better than a previous learning-based implementation.
Probabilistic Priority Message Checking Modeling Based on Controller Area Networks
NASA Astrophysics Data System (ADS)
Lin, Cheng-Min
Although the probabilistic model checking tool PRISM has been applied to many communication systems, such as wireless local area networks, Bluetooth, and ZigBee, the technique has not been used for the controller area network (CAN). In this paper, we use PRISM to model the priority-message mechanism of CAN, because this mechanism has allowed CAN to become the leader in serial communication for automotive and industrial control. Modeling CAN makes it easy to analyze its characteristics and thereby further improve the security and efficiency of automobiles. The Markov chain model helps us to model the behaviour of priority messages.
A dynamic motion simulator for future European docking systems
NASA Technical Reports Server (NTRS)
Brondino, G.; Marchal, PH.; Grimbert, D.; Noirault, P.
1990-01-01
Europe's first confrontation with docking in space will require extensive testing to verify design and performance and to qualify hardware. For this purpose, a Docking Dynamics Test Facility (DDTF) was developed. It allows reproduction on the ground of the same impact loads and relative motion dynamics which would occur in space during docking. It uses a 9-degree-of-freedom servo-motion system, controlled by a real-time computer, which simulates the docking spacecraft in a zero-g environment. The test technique involves an active loop based on six-axis force and torque detection, a mathematical simulation of the individual spacecraft dynamics, and a 9-degree-of-freedom servo motion, of which 3 DOFs allow extension of the kinematic range to 5 m. The configuration was checked out by closed-loop tests involving spacecraft control models and real sensor hardware. The test facility at present has an extensive configuration that allows evaluation of both proximity control and docking systems. It provides a versatile tool to verify system design, hardware items and performance capabilities in the ongoing HERMES and COLUMBUS programs. The test system is described and its capabilities are summarized.
Automated System Checkout to Support Predictive Maintenance for the Reusable Launch Vehicle
NASA Technical Reports Server (NTRS)
Patterson-Hine, Ann; Deb, Somnath; Kulkarni, Deepak; Wang, Yao; Lau, Sonie (Technical Monitor)
1998-01-01
The Propulsion Checkout and Control System (PCCS) is a predictive maintenance software system. The real-time checkout procedures and diagnostics are designed to detect components that need maintenance based on their condition, rather than using more conventional approaches such as scheduled or reliability-centered maintenance. Predictive maintenance can reduce turn-around time and cost and increase safety as compared to conventional maintenance approaches. Real-time sensor validation, limit checking, statistical anomaly detection, and failure prediction based on simulation models are employed. Multi-signal models, useful for testability analysis during system design, are used during the operational phase to detect and isolate degraded or failed components. The TEAMS-RT real-time diagnostic engine, developed by Qualtech Systems, Inc., utilizes the multi-signal models. The capability of predicting maintenance condition was successfully demonstrated with a variety of data, from simulation to actual operation on the Integrated Propulsion Technology Demonstrator (IPTD) at Marshall Space Flight Center (MSFC). Playback of IPTD valve actuations for feature recognition updates identified an otherwise undetectable degradation of a Main Propulsion System 12-inch prevalve. The algorithms were loaded into the Propulsion Checkout and Control System for further development and are the first known application of predictive Integrated Vehicle Health Management to an operational cryogenic testbed. The software performed successfully in real time, meeting the required performance goal of a 1-second cycle time.
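A toy sketch of two of the listed checkout functions, limit checking and statistical anomaly detection against a trailing window, run on synthetic telemetry with a slow drift; thresholds and sensor values are hypothetical.

```python
import numpy as np

def limit_check(x, low, high):
    """Flag samples outside engineering limits (simple sensor validation)."""
    return (x < low) | (x > high)

def rolling_zscore_anomalies(x, window=50, threshold=4.0):
    """Flag statistical anomalies relative to a trailing window of nominal data."""
    flags = np.zeros(len(x), dtype=bool)
    for i in range(window, len(x)):
        ref = x[i - window:i]
        sd = ref.std()
        if sd > 0 and abs(x[i] - ref.mean()) > threshold * sd:
            flags[i] = True
    return flags

rng = np.random.default_rng(9)
pressure = rng.normal(300.0, 2.0, 1000)     # hypothetical tank-pressure telemetry (psi)
pressure[700:] -= np.linspace(0, 25, 300)   # slow degradation, e.g. a leaking prevalve
print(limit_check(pressure, 290.0, 310.0).sum(), rolling_zscore_anomalies(pressure).sum())
```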
Automated Verification of Specifications with Typestates and Access Permissions
NASA Technical Reports Server (NTRS)
Siminiceanu, Radu I.; Catano, Nestor
2011-01-01
We propose an approach to formally verify Plural specifications based on access permissions and typestates, by model-checking automatically generated abstract state-machines. Our exhaustive approach captures all the possible behaviors of abstract concurrent programs implementing the specification. We describe the formal methodology employed by our technique and provide an example as proof of concept for the state-machine construction rules. The implementation of a fully automated algorithm to generate and verify models, currently underway, provides model checking support for the Plural tool, which currently supports only program verification via data flow analysis (DFA).
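A toy typestate check for a file open/read/close protocol, expressed as a plain transition table; this only illustrates what the generated abstract state machines verify and is not the Plural specification syntax or the access-permission part.

```python
# Minimal typestate check for a File protocol: open -> (read)* -> close.
TRANSITIONS = {
    ("closed", "open"): "opened",
    ("opened", "read"): "opened",
    ("opened", "close"): "closed",
}

def check_trace(events, state="closed"):
    for e in events:
        nxt = TRANSITIONS.get((state, e))
        if nxt is None:
            return False, f"protocol violation: '{e}' in state '{state}'"
        state = nxt
    return state == "closed", "ok" if state == "closed" else "file left open"

print(check_trace(["open", "read", "read", "close"]))   # (True, 'ok')
print(check_trace(["open", "close", "read"]))           # violation: read after close
```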
SeaWiFS technical report series. Volume 10: Modeling of the SeaWiFS solar and lunar observations
NASA Technical Reports Server (NTRS)
Woodward, Robert H.; Barnes, Robert A.; Mcclain, Charles R.; Esaias, Wayne E.; Barnes, William L.; Mecherikunnel, Ann T.; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor)
1993-01-01
Post-launch stability monitoring of the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) will include periodic sweeps of both an onboard solar diffuser plate and the moon. The diffuser views will provide short-term checks and the lunar views will monitor long-term trends in the instrument's radiometric stability. Models of the expected sensor response to these observations were created on the SeaWiFS computer at the National Aeronautics and Space Administration's (NASA) Goddard Space Flight Center (GSFC) using the Interactive Data Language (IDL) utility with a graphical user interface (GUI). The solar model uses the area of intersecting circles to simulate the ramping of sensor response while viewing the diffuser. This model is compared with preflight laboratory scans of the solar diffuser. The lunar model reads a high-resolution lunar image as input. The observations of the moon are simulated with a bright target recovery algorithm that includes ramping and ringing functions. Tests using the lunar model indicate that the integrated radiance of the entire lunar surface provides a more stable quantity than the mean of radiances from centralized pixels. The lunar model is compared to ground-based scans by the SeaWiFS instrument of a full moon in December 1992. Quality assurance and trend analyses routines for calibration and for telemetry data are also discussed.
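A small sketch of the intersecting-circles geometry used to mimic the ramping of sensor response: the standard circle-circle overlap area as a function of centre distance; radii and distances are illustrative.

```python
import math

def circle_overlap_area(r1, r2, d):
    """Area of intersection of two circles with radii r1, r2 and centre distance d."""
    if d >= r1 + r2:
        return 0.0
    if d <= abs(r1 - r2):
        return math.pi * min(r1, r2) ** 2
    a1 = r1**2 * math.acos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    a2 = r2**2 * math.acos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    a3 = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - a3

# Ramp of the normalised response as one circle sweeps across the other
for d in [2.0, 1.5, 1.0, 0.5, 0.0]:
    print(d, circle_overlap_area(1.0, 1.0, d) / math.pi)
```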
NASA Astrophysics Data System (ADS)
Massimo Rossa, Andrea; Laudanna Del Guerra, Franco; Borga, Marco; Zanon, Francesco; Settin, Tommaso; Leuenberger, Daniel
2010-05-01
Space and time scales of flash floods are such that flash flood forecasting and warning systems depend upon the accurate real-time provision of rainfall information, high-resolution numerical weather prediction (NWP) forecasts and the use of hydrological models. Currently available high-resolution NWP models can potentially provide warning forecasters with information on the future evolution of storms and their internal structure, thereby increasing convective-scale warning lead times. However, it is essential that the model be started with a very accurate representation of on-going convection, which calls for assimilation of high-resolution rainfall data. This study aims to assess the feasibility of using carefully checked radar-derived quantitative precipitation estimates (QPE) for assimilation into NWP and hydrological models. The hydrometeorological modeling chain includes the convection-permitting NWP model COSMO-2 and a hydrologic-hydraulic model built upon the concept of geomorphological transport. Radar rainfall observations are assimilated into the NWP model via the latent heat nudging method. The study is focused on the 26 September 2007 extreme flash flood event which impacted the coastal area of north-eastern Italy around Venice. The hydro-meteorological modeling system is implemented over the Dese river, a 90 km2 catchment flowing into the Venice lagoon. The radar rainfall observations are carefully checked for artifacts, including beam attenuation, by means of physics-based correction procedures and comparison with a dense network of raingauges. The impact of the radar QPE in the assimilation cycle of the NWP model is very significant, in that the main individual organized convective systems were successfully introduced into the model state, both in terms of timing and localization. Also, incorrectly localized precipitation in the model reference run without rainfall assimilation was correctly reduced to about the observed levels. On the other hand, the highest rainfall intensities were underestimated by 20% at a scale of 1000 km2, and the local peaks by 50%. The positive impact of the assimilated radar rainfall was carried over into the free forecast for about 2-5 hours, depending on when this forecast was started, and was larger when the main mesoscale convective system was present in the initial conditions. The improvements of the meteorological model simulations were directly propagated to the river flow simulations, with an extension of the warning lead time of up to three hours.
Stability analysis for a multi-camera photogrammetric system.
Habib, Ayman; Detchev, Ivan; Kwak, Eunju
2014-08-18
Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, it explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
Quantifying geomorphic change at ephemeral stream restoration sites using a coupled-model approach
Norman, Laura M.; Sankey, Joel B.; Dean, David; Caster, Joshua J.; DeLong, Stephen B.; Henderson-DeLong, Whitney; Pelletier, Jon D.
2017-01-01
Rock-detention structures are used as restoration treatments to engineer ephemeral stream channels of southeast Arizona, USA, to reduce streamflow velocity, limit erosion, retain sediment, and promote surface-water infiltration. Structures are intended to aggrade incised stream channels, yet little quantified evidence of their efficacy is available. The goal of this 3-year study was to characterize the geomorphic impacts of rock-detention structures used as a restoration strategy and to develop a methodology to predict the associated changes. We studied reaches of two ephemeral streams with different watershed management histories: one where thousands of loose-rock check dams were installed 30 years prior to our study, and one with structures constructed at the beginning of our study. The methods used included runoff, sediment transport, and geomorphic modelling, and repeat terrestrial laser scanner (TLS) surveys to map landscape change. Where discharge data were not available, event-based runoff was estimated using KINEROS2, a one-dimensional kinematic-wave runoff and erosion model. Discharge measurements and estimates were used as input to a two-dimensional unsteady flow-and-sedimentation model (Nays2DH) that combined a gridded flow, transport, and bed and bank simulation with geomorphic change. Through comparison with consecutive DEMs, the potential of using uncalibrated models to analyze stream restoration is introduced. We demonstrate a new approach to assess hydraulics and associated patterns of aggradation and degradation resulting from the construction of check dams and other transverse structures. Notably, we find that stream restoration using rock-detention structures is effective across vastly different timescales.
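A minimal DEM-of-difference sketch in the spirit of the repeat TLS analysis: difference two surfaces, suppress changes below a level of detection, and report deposited and eroded volumes; the grids and the detection threshold are synthetic.

```python
import numpy as np

def dod_volumes(dem_before, dem_after, cell_area, lod=0.05):
    """DEM-of-difference analysis between repeat surveys: deposited and eroded volumes,
    ignoring changes below a level-of-detection threshold (m)."""
    dz = dem_after - dem_before
    dz = np.where(np.abs(dz) < lod, 0.0, dz)
    deposition = dz[dz > 0].sum() * cell_area
    erosion = -dz[dz < 0].sum() * cell_area
    return deposition, erosion

rng = np.random.default_rng(10)
before = rng.normal(100.0, 0.5, (200, 200))
after = before + rng.normal(0.0, 0.02, (200, 200))
after[80:120, 80:120] += 0.3          # hypothetical aggradation upstream of a structure
print(dod_volumes(before, after, cell_area=0.25))   # 0.5 m grid cells
```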
Uncertainty of GHz-band Whole-body Average SARs in Infants based on their Kaup Indices
NASA Astrophysics Data System (ADS)
Miwa, Hironobu; Hirata, Akimasa; Fujiwara, Osamu; Nagaoka, Tomoaki; Watanabe, Soichi
We previously showed that a strong correlation exists between the absorption cross section and the body surface area of a human for 0.3-2 GHz far-field exposure, and proposed a formula for estimating whole-body-average specific absorption rates (WBA-SARs) in terms of height and weight. In this study, to evaluate variability in the WBA-SARs in infants based on their physique, we derived a new formula including the Kaup indices of infants, which are used to check their growth, and thereby estimated the WBA-SARs in infants with respect to their age from 0 months to three years. As a result, we found that, for the same height/weight, the smaller the Kaup indices are, the larger the WBA-SARs become, and that the variability in the WBA-SARs is around 15% at the same age. To validate these findings, using the FDTD method, we simulated the GHz-band WBA-SARs in numerical human models corresponding to infants with ages of 0, 1, 3, 6 and 9 months, which were obtained by scaling down the anatomically based Japanese three-year-old child model developed by NICT (National Institute of Information and Communications Technology). Results show that the FDTD-simulated WBA-SARs are smaller by 20% compared to those estimated for infants having the median height and the 0.5-percentile Kaup index, which provides conservative WBA-SARs.
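The Kaup index used above is, numerically, the body-mass index applied to infants (weight over height squared). A minimal sketch of computing it follows (Python; the heights and weights are illustrative placeholders, and the authors' WBA-SAR formula itself is not reproduced since the abstract does not state it).

    # Minimal sketch: Kaup index (numerically equal to BMI, kg/m^2) for infants.
    # The heights/weights below are illustrative placeholders, not data from the study.

    def kaup_index(weight_kg, height_m):
        """Kaup index = weight [kg] / height [m]^2."""
        return weight_kg / height_m ** 2

    infants = [
        ("0 months", 3.0, 0.50),
        ("6 months", 7.8, 0.67),
        ("3 years", 14.0, 0.95),
    ]

    for label, w, h in infants:
        print(f"{label}: Kaup index = {kaup_index(w, h):.1f}")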
Dickinson, Jesse; Hanson, R.T.; Mehl, Steffen W.; Hill, Mary C.
2011-01-01
The computer program described in this report, MODPATH-LGR, is designed to allow simulation of particle tracking in locally refined grids. The locally refined grids are simulated by using MODFLOW-LGR, which is based on MODFLOW-2005, the three-dimensional groundwater-flow model published by the U.S. Geological Survey. The documentation includes brief descriptions of the methods used and detailed descriptions of the required input files and how the output files are typically used. The code for this model is available for downloading from the World Wide Web from a U.S. Geological Survey software repository. The repository is accessible from the U.S. Geological Survey Water Resources Information Web page at http://water.usgs.gov/software/ground_water.html. The performance of the MODPATH-LGR program has been tested in a variety of applications. Future applications, however, might reveal errors that were not detected in the test simulations. Users are requested to notify the U.S. Geological Survey of any errors found in this document or the computer program by using the email address available on the Web site. Updates might occasionally be made to this document and to the MODPATH-LGR program, and users should check the Web site periodically.
Bearing tester data compilation, analysis, and reporting and bearing math modeling
NASA Technical Reports Server (NTRS)
1986-01-01
A test condition data base was developed for the Bearing and Seal Materials Tester (BSMT) program which permits rapid retrieval of test data for trend analysis and evaluation. A model was developed for the Space shuttle Main Engine (SSME) Liquid Oxygen (LOX) turbopump shaft/bearing system. The model was used to perform parametric analyses to determine the sensitivity of bearing operating characteristics and temperatures to variations in: axial preload, contact friction, coolant flow and subcooling, heat transfer coefficients, outer race misalignments, and outer race to isolator clearances. The bearing program ADORE (Advanced Dynamics of Rolling Elements) was installed on the UNIVAC 1100/80 computer system and is operational. ADORE is an advanced FORTRAN computer program for the real time simulation of the dynamic performance of rolling bearings. A model of the 57 mm turbine-end bearing is currently being checked out. Analyses were conducted to estimate flow work energy for several flow diverter configurations and coolant flow rates for the LOX BSMT.
NASA Astrophysics Data System (ADS)
Frolov, S. V.; Potlov, A. Yu.; Petrov, D. A.; Proskurin, S. G.
2017-03-01
A method of optical coherence tomography (OCT) structural image reconstruction using Monte Carlo simulations is described. The biological object is considered as a set of 3D elements, which allows simulation of media whose structure cannot be described analytically. Each voxel is characterized by its refractive index, anisotropy parameter, and scattering and absorption coefficients. B-scans of the inner structure are used to reconstruct a simulated image instead of an analytical representation of the boundary geometry. The Henyey-Greenstein scattering function, the Beer-Lambert-Bouguer law, and the Fresnel equations are used to describe photon transport. The efficiency of the described technique is checked by comparison of the simulated and experimentally acquired A-scans.
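One ingredient of such a photon-transport simulation can be sketched directly: sampling the scattering angle from the Henyey-Greenstein phase function via the standard inverse-CDF formula. The anisotropy value below is an illustrative assumption (Python).

    import random

    def sample_hg_cos_theta(g, rng=random):
        """Sample cos(theta) from the Henyey-Greenstein phase function."""
        xi = rng.random()
        if abs(g) < 1e-6:          # isotropic limit
            return 2.0 * xi - 1.0
        frac = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
        return (1.0 + g * g - frac * frac) / (2.0 * g)

    # Illustrative anisotropy typical of soft tissue (assumed value, not from the paper).
    g = 0.9
    samples = [sample_hg_cos_theta(g) for _ in range(100000)]
    print("mean cos(theta) =", sum(samples) / len(samples))  # should be close to g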
Probabilistic evaluation of on-line checks in fault-tolerant multiprocessor systems
NASA Technical Reports Server (NTRS)
Nair, V. S. S.; Hoskote, Yatin V.; Abraham, Jacob A.
1992-01-01
The analysis of fault-tolerant multiprocessor systems that use concurrent error detection (CED) schemes is much more difficult than the analysis of conventional fault-tolerant architectures. Various analytical techniques have been proposed to evaluate CED schemes deterministically. However, these approaches are based on worst-case assumptions related to the failure of system components. Often, the evaluation results do not reflect the actual fault tolerance capabilities of the system. A probabilistic approach to evaluate the fault detecting and locating capabilities of on-line checks in a system is developed. The various probabilities associated with the checking schemes are identified and used in the framework of the matrix-based model. Based on these probabilistic matrices, estimates for the fault tolerance capabilities of various systems are derived analytically.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lazkoz, Ruth; Escamilla-Rivera, Celia; Salzano, Vincenzo
Cosmography provides a model-independent way to map the expansion history of the Universe. In this paper we simulate a Euclid-like survey and explore cosmographic constraints from future Baryonic Acoustic Oscillations (BAO) observations. We derive general expressions for the BAO transverse and radial modes and discuss the optimal order of the cosmographic expansion that provides reliable cosmological constraints. Through constraints on the deceleration and jerk parameters, we show that future BAO data have the potential to provide a model-independent check of the cosmic acceleration as well as a discrimination between the standard ΛCDM model and alternative mechanisms of cosmic acceleration.
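For reference, a common form of the cosmographic series in a spatially flat universe, written in terms of the deceleration parameter q_0 and jerk j_0 discussed above (a low-redshift sketch, not the exact radial and transverse BAO expressions derived in the paper), is

    d_L(z) \simeq \frac{c\,z}{H_0}\left[ 1 + \tfrac{1}{2}(1 - q_0)\,z - \tfrac{1}{6}\bigl(1 - q_0 - 3q_0^{2} + j_0\bigr)\,z^{2} \right]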
Designing Crop Simulation Web Service with Service Oriented Architecture Principle
NASA Astrophysics Data System (ADS)
Chinnachodteeranun, R.; Hung, N. D.; Honda, K.
2015-12-01
Crop simulation models are efficient tools for simulating crop growth processes and yield. Running crop models requires data from various sources as well as time-consuming data processing, such as data quality checking and data formatting, before those data can be input to the model. This limits the use of crop models to crop modelers only. We aim to make running crop models convenient for various users so that the utilization of crop models will be expanded, which will directly improve agricultural applications. As the first step, we developed a prototype that runs DSSAT on the Web, called Tomorrow's Rice (v. 1). It predicts rice yield based on planting date, rice variety, and soil characteristics using the DSSAT crop model. A user only needs to select a planting location on the Web GUI; the system then queries historical weather data from available sources and the expected yield is returned. Currently, we are working on weather data connection via the Sensor Observation Service (SOS) interface defined by the Open Geospatial Consortium (OGC). Weather data can be automatically connected to a weather generator for generating weather scenarios for running the crop model. In order to expand these services further, we are designing a web service framework consisting of layers of web services to support compositions and executions for running crop simulations. This framework allows a third-party application to call and cascade each service as needed for data preparation and for running the DSSAT model using a dynamic web service mechanism. The framework has a module to manage data format conversion, which means users do not need to spend their time curating the data inputs. Dynamic linking of data sources and services is implemented using the Service Component Architecture (SCA). This agricultural web service platform demonstrates interoperability of weather data using the SOS interface, convenient connections between weather data sources and a weather generator, and the connection of various services for running crop models for decision support.
NASA Astrophysics Data System (ADS)
Rossa, Andrea M.; Laudanna Del Guerra, Franco; Borga, Marco; Zanon, Francesco; Settin, Tommaso; Leuenberger, Daniel
2010-11-01
This study aims to assess the feasibility of assimilating carefully checked radar rainfall estimates into a numerical weather prediction (NWP) model to extend the forecasting lead time for an extreme flash flood. The hydro-meteorological modeling chain includes the convection-permitting NWP model COSMO-2 and a coupled hydrological-hydraulic model. Radar rainfall estimates are assimilated into the NWP model via the latent heat nudging method. The study focuses on the 26 September 2007 extreme flash flood which impacted the coastal area of North-eastern Italy around Venice. The hydro-meteorological modeling system is implemented over the 90 km2 Dese river basin draining to the Venice Lagoon. The radar rainfall observations are carefully checked for artifacts, including rain-induced signal attenuation, by means of physics-based correction procedures and comparison with a dense network of raingauges. The impact of the radar rainfall estimates in the assimilation cycle of the NWP model is very significant. The main individual organized convective systems are successfully introduced into the model state, both in terms of timing and localization. Also, high-intensity incorrectly localized precipitation is correctly reduced to about the observed levels. On the other hand, the highest rainfall intensities computed after assimilation underestimate the observed values by 20% and 50% at a scale of 20 km and 5 km, respectively. The positive impact of assimilating radar rainfall estimates is carried over into the free forecast for about 2-5 h, depending on when the forecast was started. The positive impact is larger when the main mesoscale convective system is present in the initial conditions. The improvements in the precipitation forecasts are propagated to the river flow simulations, with an extension of the forecasting lead time up to 3 h.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montano, Joshua Daniel
2015-03-23
Coordinate Measuring Machines (CMM) are widely used in industry, throughout the Nuclear Weapons Complex and at Los Alamos National Laboratory (LANL) to verify part conformance to design definition. Calibration cycles for CMMs at LANL are predominantly one year in length. Unfortunately, several nonconformance reports have been generated to document the discovery of a certified machine found out of tolerance during a calibration closeout. In an effort to reduce risk to product quality two solutions were proposed – shorten the calibration cycle which could be costly, or perform an interim check to monitor the machine’s performance between cycles. The CMM interim check discussed makes use of Renishaw’s Machine Checking Gauge. This off-the-shelf product simulates a large sphere within a CMM’s measurement volume and allows for error estimation. Data was gathered, analyzed, and simulated from seven machines in seventeen different configurations to create statistical process control run charts for on-the-floor monitoring.
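A minimal sketch of the statistical process control run-chart limits mentioned above, using the common rule of centre line plus or minus three standard deviations (Python; the measurements are synthetic placeholders, not Renishaw gauge data or LANL limits).

    import statistics

    # Synthetic interim-check error estimates (mm); placeholders, not gauge data.
    measurements = [0.0021, 0.0019, 0.0023, 0.0020, 0.0022, 0.0018, 0.0024, 0.0021]

    centre = statistics.mean(measurements)
    sigma = statistics.stdev(measurements)
    ucl, lcl = centre + 3 * sigma, centre - 3 * sigma  # common 3-sigma control limits

    print(f"centre line = {centre:.4f} mm, UCL = {ucl:.4f} mm, LCL = {lcl:.4f} mm")
    for i, x in enumerate(measurements, 1):
        flag = "OUT OF CONTROL" if not (lcl <= x <= ucl) else "ok"
        print(f"check {i}: {x:.4f} mm  {flag}")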
Parallel Proximity Detection for Computer Simulation
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S. (Inventor); Wieland, Frederick P. (Inventor)
1997-01-01
The present invention discloses a system for performing proximity detection in computer simulations on parallel processing architectures utilizing a distribution list which includes movers and sensor coverages which check in and out of grids. Each mover maintains a list of sensors that detect the mover's motion as the mover and sensor coverages check in and out of the grids. Fuzzy grids are included by fuzzy resolution parameters to allow movers and sensor coverages to check in and out of grids without computing exact grid crossings. The movers check in and out of grids while moving sensors periodically inform the grids of their coverage. In addition, a lookahead function is also included for providing a generalized capability without making any limiting assumptions about the particular application to which it is applied. The lookahead function is initiated so that risk-free synchronization strategies never roll back grid events. The lookahead function adds fixed delays as events are scheduled for objects on other nodes.
Parallel Proximity Detection for Computer Simulations
NASA Technical Reports Server (NTRS)
Steinman, Jeffrey S. (Inventor); Wieland, Frederick P. (Inventor)
1998-01-01
The present invention discloses a system for performing proximity detection in computer simulations on parallel processing architectures utilizing a distribution list which includes movers and sensor coverages which check in and out of grids. Each mover maintains a list of sensors that detect the mover's motion as the mover and sensor coverages check in and out of the grids. Fuzzy grids are included by fuzzy resolution parameters to allow movers and sensor coverages to check in and out of grids without computing exact grid crossings. The movers check in and out of grids while moving sensors periodically inform the grids of their coverage. In addition, a lookahead function is also included for providing a generalized capability without making any limiting assumptions about the particular application to which it is applied. The lookahead function is initiated so that risk-free synchronization strategies never roll back grid events. The lookahead function adds fixed delays as events are scheduled for objects on other nodes.
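A minimal single-process sketch of the grid bookkeeping described in these two abstracts: sensor coverages inform grid cells of their coverage, movers check in to cells, and a fuzzy margin avoids computing exact grid crossings. Cell size, margin, and the listed objects are illustrative assumptions (Python).

    from collections import defaultdict

    CELL = 10.0     # grid cell size (assumed)
    FUZZY = 1.0     # fuzzy resolution margin (assumed)

    def cells_covering(x, y, radius):
        """All grid cells overlapped by a disc of given radius, padded by the fuzzy margin."""
        r = radius + FUZZY
        x0, x1 = int((x - r) // CELL), int((x + r) // CELL)
        y0, y1 = int((y - r) // CELL), int((y + r) // CELL)
        return {(i, j) for i in range(x0, x1 + 1) for j in range(y0, y1 + 1)}

    # Sensors inform the grids of their coverage; movers check in to the cell they occupy.
    sensors = {"sensor_A": (15.0, 15.0, 12.0), "sensor_B": (60.0, 5.0, 8.0)}
    movers = {"mover_1": (18.0, 22.0), "mover_2": (90.0, 90.0)}

    grid = defaultdict(set)
    for name, (sx, sy, rad) in sensors.items():
        for cell in cells_covering(sx, sy, rad):
            grid[cell].add(name)

    for name, (mx, my) in movers.items():
        cell = (int(mx // CELL), int(my // CELL))
        candidates = grid.get(cell, set())   # only these sensors need exact range checks
        print(name, "candidate detecting sensors:", sorted(candidates) or "none")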
Huri, Emre; Skolarikos, Andreas; Tatar, İlkan; Binbay, Murat; Sofikerim, Mustafa; Yuruk, Emrah; Karakan, Tolga; Sargon, Mustafa; Demiryurek, Deniz; Miano, Roberto; Bagcioglu, Murat; Ezer, Mehmet; Cracco, Cecilia Maria; Scoffone, Cesare Marco
2016-05-01
The aim of the current study was to evaluate the use of fresh-frozen concurrently with embalmed cadavers as initial training models for flexible ureteroscopy (fURS) in a group of urologists who were inexperienced in retrograde intrarenal surgery (RIRS). Twelve urologists involved in a cadaveric fURS training course were enrolled into this prospective study. All the participants were inexperienced in fURS. Theoretical lectures and step-by-step tips and tricks video presentations on fURS were used to incorporate the technical background of the procedure to the hands-on-training course and to standardize the operating steps of the procedure. An 8-item survey was administered to the participants upon initiation and at the end of the course. Pre- and post-training scores were similar for each question. All the participants successfully completed the hands-on-training tasks. Mean pre-training duration [3.56 ± 2.0 min (range 1.21-7.46)] was significantly higher than mean post-training duration [1.76 ± 1.54 min (range 1.00-6.34)] (p = 0.008). At the end of the day, the trainers checked the integrity of the collecting system both by endoscopy and by fluoroscopy and could not detect any injury of the upper ureteral wall or pelvicalyceal structures. The functionality of the scopes was also checked, and no scope injury (including a reduction in the deflection capacity) was noted. The fURS simulation training model using soft human cadavers has the unique advantage of perfectly mimicking the living human tissues. This similarity makes this model one of the best if not the perfect simulator for an effective endourologic training.
A distributed fault-detection and diagnosis system using on-line parameter estimation
NASA Technical Reports Server (NTRS)
Guo, T.-H.; Merrill, W.; Duyar, A.
1991-01-01
The development of a model-based fault-detection and diagnosis system (FDD) is reviewed. The system can be used as an integral part of an intelligent control system. It determines the faults of a system from comparison of the measurements of the system with a priori information represented by the model of the system. The method of modeling a complex system is described and a description of diagnosis models which include process faults is presented. There are three distinct classes of fault modes covered by the system performance model equation: actuator faults, sensor faults, and performance degradation. A system equation for a complete model that describes all three classes of faults is given. The strategy for detecting the fault and estimating the fault parameters using a distributed on-line parameter identification scheme is presented. A two-step approach is proposed. The first step is composed of a group of hypothesis testing modules (HTM) operating in parallel to test each class of faults. The second step is the fault diagnosis module which checks all the information obtained from the HTM level, isolates the fault, and determines its magnitude. The proposed FDD system was demonstrated by applying it to detect actuator and sensor faults added to a simulation of the Space Shuttle Main Engine. The simulation results show that the proposed FDD system can adequately detect the faults and estimate their magnitudes.
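The hypothesis-testing idea above can be illustrated with a residual check: compare each measurement against a model prediction and flag a fault hypothesis when the normalized residual exceeds a threshold. The channels, model values, and threshold below are illustrative assumptions, not Space Shuttle Main Engine values (Python).

    def residual_test(measured, predicted, sigma, threshold=3.0):
        """Flag a fault hypothesis when the normalized residual exceeds the threshold."""
        r = (measured - predicted) / sigma
        return abs(r) > threshold, r

    # Hypothetical sensor channels: (measurement, model prediction, noise std dev).
    channels = {
        "pump_speed":   (1012.0, 1000.0, 5.0),
        "chamber_pres": (205.0,  200.0,  4.0),
        "fuel_flow":    (48.0,   50.0,   0.5),   # large deviation, likely a fault
    }

    for name, (z, zhat, sigma) in channels.items():
        faulty, r = residual_test(z, zhat, sigma)
        print(f"{name}: residual = {r:+.2f} sigma -> {'FAULT' if faulty else 'nominal'}")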
Energy Minimization of Molecular Features Observed on the (110) Face of Lysozyme Crystals
NASA Technical Reports Server (NTRS)
Perozzo, Mary A.; Konnert, John H.; Li, Huayu; Nadarajah, Arunan; Pusey, Marc
1999-01-01
Molecular dynamics and energy minimization have been carried out using the program XPLOR to check the plausibility of a model lysozyme crystal surface. The molecular features of the (110) face of lysozyme were observed using atomic force microscopy (AFM). A model of the crystal surface was constructed using the PDB file 193L, and was used to simulate an AFM image. Molecule translations, van der Waals radii, and assumed AFM tip shape were adjusted to maximize the correlation coefficient between the experimental and simulated images. The highest degree of correlation (0.92) was obtained with the molecules displaced over 6 Å from their positions within the bulk of the crystal. The quality of this starting model, the extent of energy minimization, and the correlation coefficient between the final model and the experimental data will be discussed.
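The image-matching score used above, the correlation coefficient between experimental and simulated images, can be computed as the Pearson correlation of the two pixel arrays; a minimal sketch with synthetic images follows (Python).

    import numpy as np

    def image_correlation(img_a, img_b):
        """Pearson correlation coefficient between two images of identical shape."""
        return float(np.corrcoef(img_a.ravel(), img_b.ravel())[0, 1])

    # Synthetic stand-ins for an experimental AFM image and a simulated surface.
    rng = np.random.default_rng(0)
    experimental = rng.normal(size=(64, 64))
    simulated = 0.9 * experimental + 0.1 * rng.normal(size=(64, 64))  # correlated by construction

    print(f"correlation = {image_correlation(experimental, simulated):.3f}")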
Simulation of dense amorphous polymers by generating representative atomistic models
NASA Astrophysics Data System (ADS)
Curcó, David; Alemán, Carlos
2003-08-01
A method for generating atomistic models of dense amorphous polymers is presented. The generated models can be used as starting structures of Monte Carlo and molecular dynamics simulations, but are also suitable for the direct evaluation of physical properties. The method is organized in a two-step procedure. First, structures are generated using an algorithm that minimizes the torsional strain. After this, an iterative algorithm is applied to relax the nonbonding interactions. In order to check the performance of the method, we examined structure-dependent properties for three polymeric systems: polyethylene (ρ=0.85 g/cm3), poly(L,D-lactic) acid (ρ=1.25 g/cm3), and polyglycolic acid (ρ=1.50 g/cm3). The method successfully generated representative packings for such dense systems using minimum computational resources.
ERIC Educational Resources Information Center
Barrett, Jeffrey E.; Sarama, Julie; Clements, Douglas H.; Cullen, Craig; McCool, Jenni; Witkowski-Rumsey, Chepina; Klanderman, David
2012-01-01
We examined children's development of strategic and conceptual knowledge for linear measurement. We conducted teaching experiments with eight students in grades 2 and 3, based on our hypothetical learning trajectory for length to check its coherence and to strengthen the domain-specific model for learning and teaching. We checked the hierarchical…
14 CFR 60.35 - Specific full flight simulator compliance requirements.
Code of Federal Regulations, 2011 CFR
2011-01-01
... TRANSPORTATION (CONTINUED) AIRMEN FLIGHT SIMULATION TRAINING DEVICE INITIAL AND CONTINUING QUALIFICATION AND USE... the extent necessary for the training, testing, and/or checking that comprise the simulation portion...
14 CFR 60.35 - Specific full flight simulator compliance requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... TRANSPORTATION (CONTINUED) AIRMEN FLIGHT SIMULATION TRAINING DEVICE INITIAL AND CONTINUING QUALIFICATION AND USE... the extent necessary for the training, testing, and/or checking that comprise the simulation portion...
14 CFR 60.35 - Specific full flight simulator compliance requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... TRANSPORTATION (CONTINUED) AIRMEN FLIGHT SIMULATION TRAINING DEVICE INITIAL AND CONTINUING QUALIFICATION AND USE... the extent necessary for the training, testing, and/or checking that comprise the simulation portion...
14 CFR 60.35 - Specific full flight simulator compliance requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... TRANSPORTATION (CONTINUED) AIRMEN FLIGHT SIMULATION TRAINING DEVICE INITIAL AND CONTINUING QUALIFICATION AND USE... the extent necessary for the training, testing, and/or checking that comprise the simulation portion...
14 CFR 60.35 - Specific full flight simulator compliance requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... TRANSPORTATION (CONTINUED) AIRMEN FLIGHT SIMULATION TRAINING DEVICE INITIAL AND CONTINUING QUALIFICATION AND USE... the extent necessary for the training, testing, and/or checking that comprise the simulation portion...
NASA Astrophysics Data System (ADS)
Stockhause, M.; Höck, H.; Toussaint, F.; Weigel, T.; Lautenschlager, M.
2012-12-01
We present the publication process for the CMIP5 (Coupled Model Intercomparison Project Phase 5) data with special emphasis on the current role of identifiers and the potential future role of PIDs in such distributed technical infrastructures. The DataCite data publication with DOI assignment finalizes the three-level quality control procedure for CMIP5 data (Stockhause et al., 2012). WDCC utilizes the Assistant System Atarrabi to support the publication process. Atarrabi is a web-based workflow system for metadata reviews by data creators and Publication Agents (PAs). Within the quality checks for level 3, all available information in the different infrastructure components is cross-checked for consistency by the DataCite PA. This information includes: metadata on data, metadata in the long-term archive of the Publication Agency, quality information, and external metadata on model and simulation (CIM). For these consistency checks, metadata related to the data publication has to be identified. The Data Reference Syntax (DRS) convention functions as a global identifier for data. Since the DRS structures the data hierarchically, it can be used to identify data collections like DataCite publication units, i.e. all data belonging to a CMIP5 simulation. Every technical component of the infrastructure uses the DRS or maps to it, but there is no central repository storing DRS_ids; thus they occasionally have to be mapped. Additional local identifiers are used within the different technical infrastructure components. Identification of related pieces of information in their repositories is cumbersome and tricky for the PA. How could PIDs improve the situation? To establish a reliable distributed data and metadata infrastructure, PIDs for all objects are needed, as well as relations between them. An ideal data publication scenario for federated community projects within Earth System Sciences, e.g. CMIP, would be: 1. Data creators at the modeling centers define their simulation, related metadata, and software, which are assigned PIDs. 2. During ESGF data publication the data entities are assigned PIDs with references to the PIDs of step 1. Since we deal with different hierarchical levels, the definition of collections on these levels is advantageous. A possible implementation concept using Handles is described by Weigel et al. (2012). 3. Quality results are assigned PID(s) and a reference to the data. A quality PID is added as a reference to the data collection PID. 4. The PA accesses the PID on the data collection to get the data and all related information for cross-checking. The presented example of the technical infrastructure for the CMIP5 data distribution shows the importance of PIDs, especially as the data is distributed over multiple repositories world-wide and additional separate pieces of data-related information are independently collected from the data. References: Stockhause, M., Höck, H., Toussaint, F., Lautenschlager, M. (2012): 'Quality assessment concept of the World Data Center for Climate and its application to CMIP5 data', Geosci. Model Dev. Discuss., 5, 781-802, doi:10.5194/gmdd-5-781-2012. Weigel, T., et al. (2012): 'Structural Elements in a Persistent Identifier Infrastructure and Resulting Benefits for the Earth Science Community', submitted to AGU 2012 Session IN009.
NASA Astrophysics Data System (ADS)
Silva, F. E. O. E.; Naghettini, M. D. C.; Fernandes, W.
2014-12-01
This paper evaluated the uncertainties associated with the estimation of the parameters of a conceptual rainfall-runoff model, through the use of Bayesian inference techniques by Monte Carlo simulation. The Pará River sub-basin, located in the upper São Francisco river basin, in southeastern Brazil, was selected for the study. We used the Rio Grande conceptual hydrologic model (EHR/UFMG, 2001) and the Markov Chain Monte Carlo simulation method named DREAM (VRUGT, 2008a). Two probabilistic models for the residuals were analyzed: (i) the classical Normal likelihood, r ~ N(0, σ²); and (ii) a generalized likelihood (SCHOUPS & VRUGT, 2010), in which it is assumed that the differences between observed and simulated flows are correlated, non-stationary, and distributed as a Skew Exponential Power density. The assumptions made for both models were checked to ensure that the estimation of uncertainties in the parameters was not biased. The results showed that the Bayesian approach proved adequate for the proposed objectives, reinforcing the importance of assessing the uncertainties associated with hydrological modeling.
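For reference, the classical residual model mentioned above corresponds to the Gaussian log-likelihood (standard form, with n observed flows and σ either fixed or inferred along with the model parameters θ):

    \ln L(\theta, \sigma \mid Q^{\mathrm{obs}}) = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{t=1}^{n}\bigl[Q^{\mathrm{obs}}_t - Q^{\mathrm{sim}}_t(\theta)\bigr]^2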
NASA Astrophysics Data System (ADS)
Banda, Gourinath; Gallagher, John P.
Abstract interpretation provides a practical approach to verifying properties of infinite-state systems. We apply the framework of abstract interpretation to derive an abstract semantic function for the modal μ-calculus, which is the basis for abstract model checking. The abstract semantic function is constructed directly from the standard concrete semantics together with a Galois connection between the concrete state-space and an abstract domain. There is no need for mixed or modal transition systems to abstract arbitrary temporal properties, as in previous work in the area of abstract model checking. Using the modal μ-calculus to implement CTL, the abstract semantics gives an over-approximation of the set of states in which an arbitrary CTL formula holds. Then we show that this leads directly to an effective implementation of an abstract model checking algorithm for CTL using abstract domains based on linear constraints. The implementation of the abstract semantic function makes use of an SMT solver. We describe an implemented system for proving properties of linear hybrid automata and give some experimental results.
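To make the fixpoint flavour of μ-calculus/CTL model checking concrete, the sketch below computes the states satisfying EF p over a small explicit transition system as the least fixpoint of X = p ∪ pre(X). This is the standard concrete semantics on a toy system, shown only to suggest the kind of semantic function the paper abstracts over linear-constraint domains (Python).

    def pre(transitions, target):
        """States with at least one successor in `target` (the EX pre-image)."""
        return {s for (s, t) in transitions if t in target}

    def ef(transitions, prop):
        """Least fixpoint: states from which some path reaches a state satisfying `prop`."""
        x = set(prop)
        while True:
            nxt = x | pre(transitions, x)
            if nxt == x:
                return x
            x = nxt

    # Toy transition system (illustrative, not a hybrid automaton).
    transitions = {("s0", "s1"), ("s1", "s2"), ("s2", "s2"), ("s3", "s3")}
    print(sorted(ef(transitions, {"s2"})))   # -> ['s0', 's1', 's2']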
Launch Vehicle Sizing Benefits Utilizing Main Propulsion System Crossfeed and Project Status
NASA Technical Reports Server (NTRS)
Chandler, Frank; Scheiern, M.; Champion, R.; Mazurkivich, P.; Lyles, Garry (Technical Monitor)
2002-01-01
To meet the goals for a next generation Reusable Launch Vehicle (RLV), a unique propulsion feed system concept was identified using crossfeed between the booster and orbiter stages that could reduce the Two-Stage-to-Orbit (TSTO) vehicle weight and Design, Development, Test and Evaluation (DDT&E) costs by approximately 25%, while increasing safety and reliability. The Main Propulsion System (MPS) crossfeed water demonstration test program addresses all activities required to reduce the risks for the MPS crossfeed system from a Technology Readiness Level (TRL) of 2 to 4 by the completion of testing and analysis by June 2003. During the initial period, that ended in March 2002, a subscale water flow test article was defined. Procurement of a subscale crossfeed check valve was initiated and the specifications for the various components were developed. The fluid transient and pressurization analytical models were developed separately and successfully integrated. The test matrix for the water flow test was developed to correlate the integrated model. A computational fluid dynamics (CFD) model of the crossfeed check valve was developed to assess flow disturbances and internal flow dynamics. Based on the results, the passive crossfeed system concept was very feasible and offered a safe system to be used in an RLV architecture. A water flow test article was designed to accommodate a wide range of flows simulating a number of different types of propellant systems. During the follow-on period, the crossfeed system model will be further refined, the test article will be completed, the water flow test will be performed, and finally the crossfeed system model will be correlated with the test data. This validated computer model will be used to predict the full-scale vehicle crossfeed system performance.
Effects of and Preference for Pay for Performance: An Analogue Analysis
ERIC Educational Resources Information Center
Long, Robert D., III; Wilder, David A.; Betz, Alison; Dutta, Ami
2012-01-01
We examined the effects of 2 payment systems on the rate of check processing and time spent on task by participants in a simulated work setting. Three participants experienced individual pay-for-performance (PFP) without base pay and pay-for-time (PFT) conditions. In the last phase, we asked participants to choose which system they preferred. For…
NASA Astrophysics Data System (ADS)
San Juan, M.; de la Iglesia, J. M.; Martín, O.; Santos, F. J.
2009-11-01
Despite the important progress achieved in the understanding of cutting processes, the study of certain aspects is still constrained by the limitations of the experimental means: temperature gradients, friction, contact, etc. Therefore, the development of numerical models is a valid tool as a first approach to the study of those problems. In the present work, a calculation model under the Abaqus Explicit code is developed to represent the orthogonal cutting of AISI 4140 steel. A bidimensional simulation under plane strain conditions, which is considered adiabatic due to the high speed of the material flow, is chosen. The chip separation is defined by means of a fracture law that allows complex simulations of tool penetration into the workpiece. The strong influence of friction on cutting is proved; therefore, a very good definition of the material behaviour laws can be obtained, but an erroneous value of the friction coefficient could notably reduce the reliability. Considering the difficulty of checking the friction models used in the simulation with the tests usually carried out, the most effective way to characterize friction would be to combine simulation models with cutting tests.
Addressing Dynamic Issues of Program Model Checking
NASA Technical Reports Server (NTRS)
Lerda, Flavio; Visser, Willem
2001-01-01
Model checking real programs has recently become an active research area. Programs however exhibit two characteristics that make model checking difficult: the complexity of their state and the dynamic nature of many programs. Here we address both these issues within the context of the Java PathFinder (JPF) model checker. Firstly, we will show how the state of a Java program can be encoded efficiently and how this encoding can be exploited to improve model checking. Next we show how to use symmetry reductions to alleviate some of the problems introduced by the dynamic nature of Java programs. Lastly, we show how distributed model checking of a dynamic program can be achieved, and furthermore, how dynamic partitions of the state space can improve model checking. We support all our findings with results from applying these techniques within the JPF model checker.
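The efficient state encoding and revisit detection mentioned above can be pictured, in miniature, as explicit-state search that stores a hashable encoding of each visited state; the toy two-counter state below stands in for real Java heap and thread state (Python, illustrative only).

    from collections import deque

    def successors(state):
        """Toy transition relation over (x, y) counter states bounded at 3."""
        x, y = state
        return [s for s in ((x + 1, y), (x, y + 1)) if max(s) <= 3]

    def explore(initial):
        """Breadth-first explicit-state exploration; `visited` stores hashed state encodings."""
        visited = {initial}
        frontier = deque([initial])
        while frontier:
            state = frontier.popleft()
            for nxt in successors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append(nxt)
        return visited

    print(len(explore((0, 0))), "distinct states reached")   # 16 states for the 4x4 grid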
FISHER'S GEOMETRIC MODEL WITH A MOVING OPTIMUM
Matuszewski, Sebastian; Hermisson, Joachim; Kopp, Michael
2014-01-01
Fisher's geometric model has been widely used to study the effects of pleiotropy and organismic complexity on phenotypic adaptation. Here, we study a version of Fisher's model in which a population adapts to a gradually moving optimum. Key parameters are the rate of environmental change, the dimensionality of phenotype space, and the patterns of mutational and selectional correlations. We focus on the distribution of adaptive substitutions, that is, the multivariate distribution of the phenotypic effects of fixed beneficial mutations. Our main results are based on an “adaptive-walk approximation,” which is checked against individual-based simulations. We find that (1) the distribution of adaptive substitutions is strongly affected by the ecological dynamics and largely depends on a single composite parameter γ, which scales the rate of environmental change by the “adaptive potential” of the population; (2) the distribution of adaptive substitutions reflects the shape of the fitness landscape if the environment changes slowly, whereas it mirrors the distribution of new mutations if the environment changes fast; (3) in contrast to classical models of adaptation assuming a constant optimum, with a moving optimum, more complex organisms evolve via larger adaptive steps.
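A minimal sketch of the adaptive-walk picture used above: an n-dimensional phenotype chases an optimum moving at constant speed, mutations are drawn isotropically, and beneficial mutations fix with probability approximately 2s. All parameter values are illustrative assumptions, not those of the paper (Python).

    import random
    import math

    n, steps = 3, 2000          # phenotype dimension, number of mutation proposals
    v, mut_sd = 0.001, 0.05     # optimum speed along axis 0, mutational step size (assumed)
    rng = random.Random(1)

    def fitness(z, opt):
        d2 = sum((zi - oi) ** 2 for zi, oi in zip(z, opt))
        return math.exp(-d2 / 2.0)          # Gaussian fitness peak

    z = [0.0] * n
    opt = [0.0] * n
    substitutions = []

    for t in range(steps):
        opt[0] += v                          # optimum moves along one axis
        proposal = [zi + rng.gauss(0.0, mut_sd) for zi in z]
        s = fitness(proposal, opt) / fitness(z, opt) - 1.0   # selection coefficient
        if s > 0 and rng.random() < 2.0 * s:                 # weak-mutation fixation prob ~ 2s
            substitutions.append(s)
            z = proposal

    print(len(substitutions), "adaptive substitutions; mean effect",
          sum(substitutions) / max(len(substitutions), 1))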
NASA Astrophysics Data System (ADS)
Ciampalini, Rossano; Kendon, Elizabeth; Constantine, José Antonio; Schindewolf, Marcus; Hall, Ian
2016-04-01
Twenty-first century climate change simulations for Great Britain reveal an increase in heavy precipitation that may lead to widespread soil loss and reduced soil carbon stores by increasing the likelihood of surface runoff. We find that the quality and resolution of the simulated rainfall used to drive the soil loss modelling can strongly influence the results. Hourly high-definition rainfall simulations from a 1.5 km resolution regional climate model are used to examine the soil erosion response in two UK catchments with different sensitivities to soil erosion. The "Rother" in West Sussex, England, reports some of the most erosive events observed during the last 50 years in the UK. The "Conwy" in North Wales is resilient to soil erosion because of the abundant natural vegetation cover and very limited agricultural practices. We used Erosion3D to check variations in soil erosion as influenced by climate variations for the periods 1996-2009 and 2086-2099. Our results indicate the Rother catchment is the more erosive, while the Conwy catchment is confirmed as the more resilient to soil erosion. The values for the reference-base period are consistent with those observed locally in the previous decades. A soil erosion comparison for the two periods shows an increase in sediment production (off-site erosion) by the end of the century of about 27% in the Rother catchment and about 50% in the Conwy catchment. The results, enabled by high-definition rainfall predictions, shed some light on the effects of climate change in Great Britain.
On the validity of time-dependent AUC estimators.
Schmid, Matthias; Kestler, Hans A; Potapov, Sergej
2015-01-01
Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied.
Testing for detailed balance in a financial market
NASA Astrophysics Data System (ADS)
Fiebig, H. R.; Musgrove, D. P.
2015-06-01
We test a historical price-time series in a financial market (the NASDAQ 100 index) for a statistical property known as detailed balance. The presence of detailed balance would imply that the market can be modeled by a stochastic process based on a Markov chain, thus leading to equilibrium. In economic terms, a positive outcome of the test would support the efficient market hypothesis, a cornerstone of neo-classical economic theory. In contrast to the usage in prevalent economic theory the term equilibrium here is tied to the returns, rather than the price-time series. The test is based on an action functional S constructed from the elements of the detailed balance condition and the historical data set, and then analyzing S by means of simulated annealing. Checks are performed to verify the validity of the analysis method. We discuss the outcome of this analysis.
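The detailed-balance condition being tested can be checked crudely on a discretized return series: estimate an empirical transition matrix P and stationary distribution pi, then compare pi_i P_ij with pi_j P_ji. The sketch below uses a synthetic sequence rather than NASDAQ data and is far simpler than the action-functional and simulated-annealing analysis of the paper (Python).

    import numpy as np

    rng = np.random.default_rng(0)
    states = rng.integers(0, 3, size=200000)          # synthetic discretized "return bins"

    # Empirical transition counts, transition matrix, and stationary distribution.
    counts = np.zeros((3, 3))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)
    pi = counts.sum(axis=1) / counts.sum()

    # Detailed balance: pi_i * P_ij should equal pi_j * P_ji for all i, j.
    flow = pi[:, None] * P
    violation = np.abs(flow - flow.T).max()
    print(f"max detailed-balance violation: {violation:.2e}")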
World energy projection system: Model documentation
NASA Astrophysics Data System (ADS)
1992-06-01
The World Energy Projection System (WEPS) is an accounting framework that incorporates projections from independently documented models and assumptions about the future energy intensity of economic activity (ratios of total energy consumption divided by gross domestic product) and about the rate of incremental energy requirements met by hydropower, geothermal, coal, and natural gas to produce projections of world energy consumption published annually by the Energy Information Administration (EIA) in the International Energy Outlook (IEO). Two independently documented models presented in Figure 1, the Oil Market Simulation (OMS) model and the World Integrated Nuclear Evaluation System (WINES), provide projections of oil and nuclear power consumption published in the IEO. Output from a third independently documented model, the International Coal Trade Model (ICTM), is not published in the IEO but is used in WEPS as a supply check on projections of world coal consumption produced by WEPS and published in the IEO. A WEPS model of natural gas production documented in this report provides the same type of implicit supply check on the WEPS projections of world natural gas consumption published in the IEO. Two additional models are included in Figure 1, the OPEC Capacity model and the Non-OPEC Oil Production model. These WEPS models provide inputs to the OMS model and are documented in this report.
Discrete Event Simulation-Based Resource Modelling in Health Technology Assessment.
Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Dixon, Simon
2017-10-01
The objective of this article was to conduct a systematic review of published research on the use of discrete event simulation (DES) for resource modelling (RM) in health technology assessment (HTA). RM is broadly defined as incorporating and measuring the effects of constraints on physical resources (e.g. beds, doctors, nurses) in HTA models. Systematic literature searches were conducted in academic databases (JSTOR, SAGE, SPRINGER, SCOPUS, IEEE, Science Direct, PubMed, EMBASE) and grey literature (Google Scholar, NHS journal library), enhanced by manual searches (i.e. reference list checking, citation searching and hand-searching techniques). The search strategy yielded 4117 potentially relevant citations. Following the screening and manual searches, ten articles were included. Reviewing these articles provided insights into the applications of RM: firstly, different types of economic analyses, model settings, RM and cost-effectiveness analysis (CEA) outcomes were identified. Secondly, variation in the characteristics of the constraints, such as the types and nature of constraints and sources of data for the constraints, was identified. Thirdly, it was found that including the effects of constraints caused the CEA results to change in these articles. The review found that DES proved to be an effective technique for RM, but only a small number of studies have applied it in HTA. However, these studies showed the important consequences of modelling physical constraints and point to the need for a framework to be developed to guide future applications of this approach.
NASA Astrophysics Data System (ADS)
Gromek, Katherine Emily
A novel computational and inference framework for physics-of-failure (PoF) reliability modeling of complex dynamic systems has been established in this research. The PoF-based reliability models are used to perform a real-time simulation of system failure processes, so that system-level reliability modeling constitutes inferences from checking the status of component-level reliability at any given time. The "agent autonomy" concept is applied as a solution method for the system-level probabilistic PoF-based (i.e. PPoF-based) modeling. This concept originated from artificial intelligence (AI) as a leading intelligent computational inference approach in the modeling of multi-agent systems (MAS). The concept of agent autonomy in the context of reliability modeling was first proposed by M. Azarkhail [1], where a fundamentally new idea of system representation by autonomous intelligent agents for the purpose of reliability modeling was introduced. The contribution of the current work lies in the further development of the agent autonomy concept, particularly the refined agent classification within the scope of PoF-based system reliability modeling, new approaches to the learning and autonomy properties of the intelligent agents, and modeling of interacting failure mechanisms within the dynamic engineering system. The autonomous property of intelligent agents is defined as an agent's ability to self-activate, deactivate, or completely redefine its role in the analysis. This property of agents and the ability to model interacting failure mechanisms of the system elements make the agent autonomy approach fundamentally different from all existing methods of probabilistic PoF-based reliability modeling. 1. Azarkhail, M., "Agent Autonomy Approach to Physics-Based Reliability Modeling of Structures and Mechanical Systems", PhD thesis, University of Maryland, College Park, 2007.
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.; Jedlovec, Gary J.
2009-01-01
Increases in computational resources have allowed operational forecast centers to pursue experimental, high-resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. Aircraft, surface, and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of size distribution and density characteristics for snowfall simulated by the NASA Goddard six-class, single-moment bulk water microphysics scheme, currently available within the Weather Research and Forecasting (WRF) Model. Widespread snowfall developed across the region on January 22, 2007, forced by the passing of a midlatitude cyclone, and was observed by the dual-polarimetric C-band radar at King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred by the radar reflectivity at C- and W-band. Specified constants for distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present. Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.
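The inverse-exponential size distributions referred to above have the general single-moment form (written here generically with assumed notation; the Goddard scheme's specific intercepts and densities are not reproduced):

    N(D) = N_0 \exp(-\lambda D), \qquad \lambda = \left( \frac{\pi \rho_x N_0}{\rho_a q_x} \right)^{1/4}

where N_0 is the fixed distribution intercept, rho_x the assumed bulk particle density, rho_a the air density, and q_x the predicted mixing ratio; in many single-moment schemes the slope lambda is diagnosed from the mixing ratio in this way.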
Coupled Kardar-Parisi-Zhang Equations in One Dimension
NASA Astrophysics Data System (ADS)
Ferrari, Patrik L.; Sasamoto, Tomohiro; Spohn, Herbert
2013-11-01
Over the past years our understanding of the scaling properties of the solutions to the one-dimensional KPZ equation has advanced considerably, both theoretically and experimentally. In our contribution we export these insights to the case of coupled KPZ equations in one dimension. We establish equivalence with nonlinear fluctuating hydrodynamics for multi-component driven stochastic lattice gases. To check the predictions of the theory, we perform Monte Carlo simulations of the two-component AHR model. Its steady state is computed using the matrix product ansatz. Thereby all coefficients appearing in the coupled KPZ equations are deduced from the microscopic model. Time correlations in the steady state are simulated and we confirm not only the scaling exponent, but also the scaling function and the non-universal coefficients.
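For orientation, the scalar one-dimensional KPZ equation and the coupled generalization studied here can be written schematically as (notation assumed; the precise coefficients are those deduced from the AHR model in the paper):

    \partial_t h = \nu \, \partial_x^2 h + \tfrac{\lambda}{2} (\partial_x h)^2 + \sqrt{D}\, \xi(x,t)

    \partial_t h_\alpha = \nu_\alpha \, \partial_x^2 h_\alpha + \tfrac{1}{2} \sum_{\beta\gamma} G^{\alpha}_{\beta\gamma} (\partial_x h_\beta)(\partial_x h_\gamma) + \xi_\alpha(x,t)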
Chiral extrapolations of the ρ ( 770 ) meson in N f = 2 + 1 lattice QCD simulations
Hu, B.; Molina, R.; Döring, M.; ...
2017-08-24
Recent N_f = 2+1 lattice data for meson-meson scattering in p-wave and isospin I = 1 are analyzed using a unitarized model inspired by Chiral Perturbation Theory in the inverse-amplitude formulation for two and three flavors. We perform chiral extrapolations that postdict phase shifts extracted from experiment quite well. Additionally, the low-energy constants are compared to the ones from a recent analysis of N_f = 2 lattice QCD simulations to check for the consistency of the hadronic model used here. Some inconsistencies are detected in the fits to N_f = 2+1 data, in contrast to the previous analysis of N_f = 2 data.
Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models
NASA Technical Reports Server (NTRS)
Parke, F. I.
1981-01-01
Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of parameters of the fluid flow (pressure, temperature, and velocity vector) at many points in the fluid. Visualization of the spatial variation in the value of these parameters is important for comprehending and checking the generated data, identifying the regions of interest in the flow, and effectively communicating information about the flow to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. The use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied, and the results are presented.
Matin, Ivan; Hadzistevic, Miodrag; Vukelic, Djordje; Potran, Michal; Brajlih, Tomaz
2017-07-01
Nowadays, integrated CAD/CAE systems are the favored solutions for the design of simulation models for casting metal substructures of metal-ceramic crowns. Authors worldwide have used different approaches to solve these problems using an expert system. Despite substantial research progress, expert systems for simulation model design and manufacturing have insufficiently considered the specifics of casting in dentistry, especially the need for further CAD, RE, and CAE for the estimation of casting parameters and the control of the casting machine. The novel expert system performs the following: CAD modeling of the simulation model for casting, fast modeling of the gate design, CAD eligibility and castability checks of the model, estimation and running of the program code for the casting machine, as well as manufacturing time reduction of the metal substructure. The authors propose an integration method using a common data model approach, blackboard architecture, rule-based reasoning, and an iterative redesign method. Arithmetic mean roughness values were determined with a constant Gauss low-pass filter (cut-off length of 2.5 mm) according to ISO 4287 using a Mahr MARSURF PS1. The dimensional deviation between the designed model and the manufactured cast was determined using the coordinate measuring machine Zeiss Contura G2 and GOM Inspect software. The ES allows castings of roughness grade number N7 to be obtained. The dimensional deviation between the simulation model of the metal substructure and the manufactured cast is 0.018 mm. The arithmetic mean roughness values measured on the casting substructure are from 1.935 µm to 2.778 µm. The developed expert system with the integrated database is fully applicable to the observed hardware and software. The values of the arithmetic mean roughness and dimensional deviation indicate that the cast substructures have a surface quality that is more than sufficient for direct porcelain veneering. The manufacture of the substructure shows that the proposed ES allows improvement of the design process while reducing manufacturing time.
Sinusoidal synthesis based adaptive tracking for rotating machinery fault detection
NASA Astrophysics Data System (ADS)
Li, Gang; McDonald, Geoff L.; Zhao, Qing
2017-01-01
This paper presents a novel Sinusoidal Synthesis Based Adaptive Tracking (SSBAT) technique for vibration-based rotating machinery fault detection. The proposed SSBAT algorithm is an adaptive time series technique that makes use of both frequency and time domain information of vibration signals. Such information is incorporated in a time-varying dynamic model. Signal tracking is then realized by applying adaptive sinusoidal synthesis to the vibration signal. A modified Least-Squares (LS) method is adopted to estimate the model parameters. In addition to tracking, the proposed vibration synthesis model is mainly used as a linear time-varying predictor. The health condition of the rotating machine is monitored by checking the residual between the predicted and measured signal. The SSBAT method takes advantage of the sinusoidal nature of vibration signals and transfers the nonlinear problem into a linear adaptive problem in the time domain based on a state-space realization. It has a low computational burden and does not need a priori knowledge of the machine under the no-fault condition, which makes the algorithm ideal for on-line fault detection. The method is validated using both numerical simulation and practical application data. Meanwhile, the fault detection results are compared with those of the commonly adopted autoregressive (AR) and autoregressive Minimum Entropy Deconvolution (ARMED) methods to verify the feasibility and performance of the SSBAT method.
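A minimal sketch of the sinusoidal-synthesis idea: with candidate frequencies fixed, the amplitudes follow from ordinary linear least squares, and the residual between the synthesized and measured signal is what gets monitored. Frequencies, signal, and threshold are illustrative assumptions, not the modified LS estimator of the paper (Python).

    import numpy as np

    fs, T = 1000.0, 1.0
    t = np.arange(0, T, 1 / fs)
    freqs = [29.7, 59.4]                                  # assumed shaft-related frequencies (Hz)

    # Synthetic "healthy" vibration plus noise and a small injected fault impulse train.
    signal = 0.8 * np.sin(2 * np.pi * freqs[0] * t) + 0.3 * np.cos(2 * np.pi * freqs[1] * t)
    signal += 0.05 * np.random.default_rng(0).normal(size=t.size)
    signal[::100] += 0.6                                   # injected fault signature

    # Least-squares fit of sin/cos amplitudes at the candidate frequencies.
    design = np.column_stack([f(2 * np.pi * fr * t) for fr in freqs for f in (np.sin, np.cos)])
    coeffs, *_ = np.linalg.lstsq(design, signal, rcond=None)
    residual = signal - design @ coeffs

    print("residual RMS:", residual.std())
    print("samples exceeding 4-sigma of residual:",
          int((np.abs(residual) > 4 * residual.std()).sum()))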
Automatic Testcase Generation for Flight Software
NASA Technical Reports Server (NTRS)
Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.
2008-01-01
The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames' Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammars. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
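The blackbox approach described above can be pictured as bounded enumeration of every string derivable from an input grammar; the toy grammar below is invented for illustration and is unrelated to the actual SCL grammar (Python).

    from itertools import product

    # Toy command grammar (hypothetical; not the SCL grammar).
    grammar = {
        "<cmd>":    [["<verb>", " ", "<target>"]],
        "<verb>":   [["SET"], ["GET"]],
        "<target>": [["HEATER_", "<digit>"], ["VALVE_", "<digit>"]],
        "<digit>":  [["1"], ["2"]],
    }

    def expand(symbol, depth=6):
        """All strings derivable from `symbol` within the given depth bound."""
        if symbol not in grammar:
            return [symbol]                       # terminal
        if depth == 0:
            return []
        results = []
        for production in grammar[symbol]:
            parts = [expand(s, depth - 1) for s in production]
            results += ["".join(p) for p in product(*parts)]
        return results

    for testcase in expand("<cmd>"):
        print(testcase)                           # 8 exhaustive testcases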
Air traffic control in airline pilot simulator training and evaluation
DOT National Transportation Integrated Search
2001-01-01
Much airline pilot training and checking occurs entirely in the simulator, and the first time a pilot flies a particular airplane, it may carry passengers. Simulator qualification standards, however, focus on the simulation of the airplane without re...
Nonlinear solid finite element analysis of mitral valves with heterogeneous leaflet layers
NASA Astrophysics Data System (ADS)
Prot, V.; Skallerud, B.
2009-02-01
An incompressible transversely isotropic hyperelastic material for solid finite element analysis of a porcine mitral valve response is described. The material model implementation is checked in single element tests and compared with a membrane implementation in an out-of-plane loading test to study how the layered structures modify the stress response for a simple geometry. Three different collagen layer arrangements are used in finite element analysis of the mitral valve. When the leaflets are arranged in two layers with the collagen on the ventricular side, the stress in the fibre direction through the thickness in the central part of the anterior leaflet is homogenized and the peak stress is reduced. A simulation using membrane elements is also carried out for comparison with the solid finite element results. Compared to echocardiographic measurements, the finite element models bulge too much in the left atrium. This may be due to evidence of active muscle fibres in some parts of the anterior leaflet, whereas our constitutive modelling is based on passive material.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wannamaker, Philip E.
We have developed an algorithm for the inversion of magnetotelluric (MT) data to a 3D earth resistivity model based upon the finite element method. Hexahedral edge finite elements are implemented to accommodate discontinuities in the electric field across resistivity boundaries, and to accurately simulate topographic variations. All matrices are reduced and solved using direct solution modules, which avoids the ill-conditioning endemic to iterative solvers such as conjugate gradients; principally PARDISO is used for the finite element system and PLASMA for the parameter step estimate. Large model parameterizations can be handled by transforming the Gauss-Newton estimator to data-space form. Accuracy of the forward problem and Jacobians has been checked by comparison to integral equation results and by limiting asymptotes. Inverse accuracy and performance have been verified against the public Dublin Secret Test Model 2 and the well-known Mount St Helens 3D MT data set. We believe this algorithm is the most capable yet for forming 3D images of earth resistivity structure and their implications for geothermal fluids and pathways.
Automated Sequence Processor: Something Old, Something New
NASA Technical Reports Server (NTRS)
Streiffert, Barbara; Schrock, Mitchell; Fisher, Forest; Himes, Terry
2012-01-01
High productivity is required for operations teams to meet schedules, and risk must be minimized. Scripting is used to automate processes, and scripts perform essential operations functions. The Automated Sequence Processor (ASP) was a grass-roots task built to automate the command uplink process, and a system engineering task for ASP revitalization has been organized. ASP is a set of approximately 200 scripts written in Perl, C Shell, AWK and other scripting languages. ASP processes, checks, and packages non-interactive commands automatically. Non-interactive commands are guaranteed to be safe and have been checked by hardware or software simulators. ASP checks that commands are non-interactive, processes them through a command simulator, and then packages them if there are no errors. ASP must be active 24 hours a day, 7 days a week.
Wheeler, Derek S; Geis, Gary; Mack, Elizabeth H; LeMaster, Tom; Patterson, Mary D
2013-06-01
In situ simulation training is a team-based training technique conducted on actual patient care units using equipment and resources from that unit, and involving actual members of the healthcare team. We describe our experience with in situ simulation training in a major children's medical centre. In situ simulations were conducted using standardised scenarios approximately twice per month on inpatient hospital units on a rotating basis. Simulations were scheduled so that each unit participated in at least two in situ simulations per year. Simulations were conducted on a revolving schedule alternating on the day and night shifts and were unannounced. Scenarios were preselected to maximise the educational experience, and frequently involved clinical deterioration to cardiopulmonary arrest. We performed 64 of the scheduled 112 (57%) in situ simulations on all shifts and all units over 21 months. We identified 134 latent safety threats and knowledge gaps during these in situ simulations, which we categorised as medication, equipment, and/or resource/system threats. Identification of these errors resulted in modification of systems to reduce the risk of error. In situ simulations also provided a method to reinforce teamwork behaviours, such as the use of assertive statements, role clarity, performance of frequent updating, development of a shared mental model, performance of independent double checks of high-risk medicines, and overcoming authority gradients between team members. Participants stated that the training programme was effective and did not disrupt patient care. In situ simulations can identify latent safety threats, identify knowledge gaps, and reinforce teamwork behaviours when used as part of an organisation-wide safety programme.
A multilevel-skin neighbor list algorithm for molecular dynamics simulation
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Zhao, Mingcan; Hou, Chaofeng; Ge, Wei
2018-01-01
Searching for the interaction pairs and organizing the interaction processes are important steps in molecular dynamics (MD) algorithms and are critical to the overall efficiency of the simulation. Neighbor lists are widely used for these steps, where a thicker skin can reduce the frequency of list updating but is offset by more computation in distance checks for the particle pairs. In this paper, we propose a new neighbor-list-based algorithm with a precisely designed multilevel skin which can reduce unnecessary computation on inter-particle distances. The performance advantages over traditional methods are then analyzed against the main simulation parameters on Intel CPUs and MICs (many integrated cores), and are clearly demonstrated. The algorithm can be generalized for various discrete simulations using neighbor lists.
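For readers unfamiliar with the skin construction, below is a minimal single-skin Verlet-list sketch (not the multilevel scheme proposed in the paper), assuming a cubic periodic box and NumPy arrays of positions. It shows the trade-off the abstract refers to: a larger skin means fewer rebuilds, but more candidate pairs to distance-check at every step.

```python
import numpy as np

def build_neighbor_list(pos, box, r_cut, skin):
    """Brute-force Verlet list: keep every pair within r_cut + skin."""
    r_list = r_cut + skin
    pairs = []
    for i in range(len(pos) - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)                 # minimum-image convention
        mask = (d ** 2).sum(axis=1) < r_list ** 2
        pairs.extend((i, j) for j in np.nonzero(mask)[0] + i + 1)
    return pairs

def needs_rebuild(pos, pos_at_build, box, skin):
    """Rebuild once any particle has moved more than skin/2 since the last build."""
    d = pos - pos_at_build
    d -= box * np.round(d / box)
    return np.max((d ** 2).sum(axis=1)) > (0.5 * skin) ** 2
```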
Program Model Checking: A Practitioner's Guide
NASA Technical Reports Server (NTRS)
Pressburger, Thomas T.; Mansouri-Samani, Masoud; Mehlitz, Peter C.; Pasareanu, Corina S.; Markosian, Lawrence Z.; Penix, John J.; Brat, Guillaume P.; Visser, Willem C.
2008-01-01
Program model checking is a verification technology that uses state-space exploration to evaluate large numbers of potential program executions. Program model checking provides improved coverage over testing by systematically evaluating all possible test inputs and all possible interleavings of threads in a multithreaded system. Model-checking algorithms use several classes of optimizations to reduce the time and memory requirements for analysis, as well as heuristics for meaningful analysis of partial areas of the state space. Our goal in this guidebook is to assemble, distill, and demonstrate emerging best practices for applying program model checking. We offer it as a starting point and introduction for those who want to apply model checking to software verification and validation. The guidebook will not discuss any specific tool in great detail, but we provide references for specific tools.
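As a toy illustration of the state-space exploration described above, the sketch below exhaustively enumerates all interleavings of two threads incrementing a shared counter non-atomically and returns a counterexample trace for the lost-update error. It is a minimal explicit-state search, not JPF or any specific tool from the guidebook.

```python
from collections import deque

def check_safety(initial, successors, is_error):
    """Exhaustively explore all reachable states; return an error trace or None."""
    seen = {initial}
    queue = deque([(initial, [initial])])
    while queue:
        state, trace = queue.popleft()
        if is_error(state):
            return trace                      # counterexample found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, trace + [nxt]))
    return None                               # property holds in every reachable state

# Example: two threads each incrementing a shared counter non-atomically.
# State: (pc0, pc1, tmp0, tmp1, counter); the error is a lost update.
def successors(s):
    pc0, pc1, t0, t1, c = s
    out = []
    if pc0 == 0: out.append((1, pc1, c, t1, c))          # thread 0 reads
    if pc0 == 1: out.append((2, pc1, t0, t1, t0 + 1))    # thread 0 writes
    if pc1 == 0: out.append((pc0, 1, t0, c, c))          # thread 1 reads
    if pc1 == 1: out.append((pc0, 2, t0, t1, t1 + 1))    # thread 1 writes
    return out

trace = check_safety((0, 0, 0, 0, 0), successors,
                     lambda s: s[0] == 2 and s[1] == 2 and s[4] != 2)
print(trace)
```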
NASA Astrophysics Data System (ADS)
Tasdighi, A.; Arabi, M.
2014-12-01
Calibration of physically-based distributed hydrologic models has always been a challenging task and a subject of controversy in the literature. This study aims to investigate how different physiographic characteristics of watersheds call for adaptation of the methods used, in order to obtain more robust and internally justifiable simulations. The Haw Watershed (1300 sq. mi.) is located in the piedmont region of North Carolina and drains into B. Everett Jordan Lake, west of Raleigh. Major land covers in this watershed are forest (50%), urban/suburban (21%) and agriculture (25%), of which a large portion is pasture. Different hydrologic behaviors are observed in this watershed based on the land use composition and size of the sub-watersheds. Highly urbanized sub-watersheds show flashier hydrographs and near instantaneous hydrologic responses. This is also the case with smaller sub-watersheds with a relatively lower percentage of urban areas. The Soil and Water Assessment Tool (SWAT) has been widely used in the literature for hydrologic simulation on a daily basis using the Soil Conservation Service Curve Number method (SCS CN). However, it has not been used as frequently with the sub-daily routines. There are a number of studies in the literature that have used coarse time scale (daily) precipitation with methods like SCS CN to calibrate SWAT for watersheds containing different types of land uses and soils, reporting satisfactory results at the outlet of the watershed. For physically-based distributed models, however, the more important concern should be to check and analyze the internal processes leading to those results. In this study, the watershed is divided into several sub-watersheds to compare the performance of the SCS CN and Green & Ampt (GA) methods on different land uses at different spatial scales. The results suggest better performance of GA compared to SCS CN for smaller and highly urbanized sub-watersheds, although the GA advantage is not very significant for the latter. The better performance of GA in simulating the peak flows and flashy behavior of the hydrographs is also notable. GA did not show a significant improvement over SCS CN in simulating the excess rainfall for larger sub-watersheds.
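For reference, the SCS CN method discussed above reduces to a simple event relationship. The sketch below uses the conventional metric form with initial abstraction Ia = 0.2S; the curve numbers in the example call are illustrative rather than calibrated values for the Haw Watershed.

```python
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Event runoff depth (mm) from rainfall depth p_mm with curve number cn."""
    s = 25400.0 / cn - 254.0      # potential maximum retention (mm)
    ia = lam * s                  # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# e.g. a 50 mm storm on an urbanized area (CN ~ 85) vs pasture (CN ~ 70)
print(scs_cn_runoff(50, 85), scs_cn_runoff(50, 70))
```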
NASA Technical Reports Server (NTRS)
Molthan, Andrew L.; Petersen, Walter A.; Case, Jonathan L.; Dembek, Scott R.
2009-01-01
Increases in computational resources have allowed operational forecast centers to pursue experimental, high resolution simulations that resolve the microphysical characteristics of clouds and precipitation. These experiments are motivated by a desire to improve the representation of weather and climate, but will also benefit current and future satellite campaigns, which often use forecast model output to guide the retrieval process. The combination of reliable cloud microphysics and radar reflectivity may constrain radiative transfer models used in satellite simulators during future missions, including EarthCARE and the NASA Global Precipitation Measurement. Aircraft, surface and radar data from the Canadian CloudSat/CALIPSO Validation Project are used to check the validity of size distribution and density characteristics for snowfall simulated by the NASA Goddard six-class, single-moment bulk water microphysics scheme, currently available within the Weather Research and Forecasting (WRF) Model. Widespread snowfall developed across the region on January 22, 2007, forced by the passing of a mid-latitude cyclone, and was observed by the dual-polarimetric C-band radar at King City, Ontario, as well as the NASA 94 GHz CloudSat Cloud Profiling Radar. Combined, these data sets provide key metrics for validating model output: estimates of size distribution parameters fit to the inverse-exponential equations prescribed within the model, bulk density and crystal habit characteristics sampled by the aircraft, and representation of size characteristics as inferred by the radar reflectivity at C- and W-band. Specified constants for distribution intercept and density differ significantly from observations throughout much of the cloud depth. Alternate parameterizations are explored, using column-integrated values of vapor excess to avoid problems encountered with temperature-based parameterizations in an environment where inversions and isothermal layers are present. Simulation of CloudSat reflectivity is performed by adopting the discrete-dipole parameterizations and databases provided in the literature, and demonstrates an improved capability in simulating radar reflectivity at W-band versus Mie scattering assumptions.
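The inverse-exponential size distribution mentioned above is N(D) = N0 exp(-lambda D). In a single-moment scheme the slope can be diagnosed from the snow content by integrating particle mass over the distribution; the sketch below does this for spherical particles of fixed bulk density, with an intercept and density chosen purely for illustration (they are not the Goddard scheme's constants).

```python
import numpy as np

def slope_from_content(q_snow, rho_air, n0=2.0e7, rho_snow=100.0):
    """Slope lambda (1/m) of N(D) = n0 * exp(-lambda * D) for a single-moment scheme.

    Integrating particle mass (spheres of bulk density rho_snow) over the
    distribution gives rho_air * q_snow = pi * rho_snow * n0 / lambda**4.
    n0 (1/m^4) and rho_snow (kg/m^3) are illustrative values only.
    """
    return (np.pi * rho_snow * n0 / (rho_air * q_snow)) ** 0.25

lam = slope_from_content(q_snow=0.5e-3, rho_air=1.0)    # 0.5 g/kg of snow
diameters = np.linspace(0.0, 0.01, 200)                 # diameters up to 1 cm
concentration = 2.0e7 * np.exp(-lam * diameters)        # N(D), per m^3 per m of diameter
print(lam)
```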
Karkute, Suhas G; Easwaran, Murugesh; Gujjar, Ranjit Singh; Piramanayagam, Shanmughavel; Singh, Major
2015-10-01
WRKY genes are members of one of the largest families of plant transcription factors and play an important role in response to biotic and abiotic stresses, and in overall growth and development. Understanding the interaction of WRKY proteins with other proteins/ligands in plant cells is of utmost importance to develop plants with tolerance to biotic and abiotic stresses. The SlWRKY4 gene was cloned from a drought-tolerant wild species of tomato (Solanum habrochaites), and the secondary structure and 3D model of this protein were predicted using Schrödinger Suite-Prime. Predicted structures were also evaluated with Ramachandran plots, and the modeled structure was minimized using Macromodel. Finally, the minimized structure was simulated in a water environment to check the protein stability. The behavior of the modeled structure was well simulated and analyzed through the RMSD and RMSF of the protein. The present work provides the modeled 3D structure of SlWRKY4 that will help in understanding the mechanism of gene regulation by further in silico interaction studies.
Forkmann, Thomas; Kroehne, Ulf; Wirtz, Markus; Norra, Christine; Baumeister, Harald; Gauggel, Siegfried; Elhan, Atilla Halil; Tennant, Alan; Boecker, Maren
2013-11-01
This study conducted a simulation study for computer-adaptive testing based on the Aachen Depression Item Bank (ADIB), which was developed for the assessment of depression in persons with somatic diseases. Prior to computer-adaptive test simulation, the ADIB was newly calibrated. Recalibration was performed in a sample of 161 patients treated for a depressive syndrome, 103 patients from cardiology, and 103 patients from otorhinolaryngology (mean age 44.1, SD=14.0; 44.7% female) and was cross-validated in a sample of 117 patients undergoing rehabilitation for cardiac diseases (mean age 58.4, SD=10.5; 24.8% women). Unidimensionality of the itembank was checked and a Rasch analysis was performed that evaluated local dependency (LD), differential item functioning (DIF), item fit and reliability. CAT-simulation was conducted with the total sample and additional simulated data. Recalibration resulted in a strictly unidimensional item bank with 36 items, showing good Rasch model fit (item fit residuals<|2.5|) and no DIF or LD. CAT simulation revealed that 13 items on average were necessary to estimate depression in the range of -2 and +2 logits when terminating at SE≤0.32 and 4 items if using SE≤0.50. Receiver Operating Characteristics analysis showed that θ estimates based on the CAT algorithm have good criterion validity with regard to depression diagnoses (Area Under the Curve≥.78 for all cut-off criteria). The recalibration of the ADIB succeeded and the simulation studies conducted suggest that it has good screening performance in the samples investigated and that it may reasonably add to the improvement of depression assessment. © 2013.
65nm OPC and design optimization by using simple electrical transistor simulation
NASA Astrophysics Data System (ADS)
Trouiller, Yorick; Devoivre, Thierry; Belledent, Jerome; Foussadier, Franck; Borjon, Amandine; Patterson, Kyle; Lucas, Kevin; Couderc, Christophe; Sundermann, Frank; Urbani, Jean-Christophe; Baron, Stanislas; Rody, Yves; Chapon, Jean-Damien; Arnaud, Franck; Entradas, Jorge
2005-05-01
In the context of 65nm logic technology where gate CD control budget requirements are below 5nm, it is mandatory to properly quantify the impact of the 2D effects on the electrical behavior of the transistor [1,2]. This study uses the following sequence to estimate the impact on transistor performance: 1) A lithographic simulation is performed after OPC (Optical Proximity Correction) of active and poly using a calibrated model at best conditions. Some extrapolation of this model can also be used to assess marginalities due to process window (focus, dose, mask errors, and overlay). In our case study, we mainly checked the poly to active misalignment effects. 2) Electrical behavior of the transistor (Ion, Ioff, Vt) is calculated based on a derivative spice model using the simulated image of the gate as an input. In most of the cases Ion analysis, rather than Vt or leakage, gives sufficient information for patterning optimization. We have demonstrated the benefit of this approach with two different examples: -design rule trade-off : we estimated the impact with and without misalignment of critical rules like poly corner to active distance, active corner to poly distance or minimum space between small transistor and big transistor. -Library standard cell debugging: we applied this methodology to the most critical one hundred transistors of our standard cell libraries and calculate Ion behavior with and without misalignment between active and poly. We compared two scanner illumination modes and two OPC versions based on the behavior of the one hundred transistors. We were able to see the benefits of one illumination, and also the improvement in the OPC maturity.
Testing a Hypothesis for the Evolution of Sex
NASA Astrophysics Data System (ADS)
Örçal, Bora; Tüzel, Erkan; Sevim, Volkan; Jan, Naeem; Erzan, Ayşe.
An asexual set of primitive bacteria is simulated with a bit-string Penna model with a Fermi function for survival. A recent hypothesis by Jan, Stauffer, and Moseley on the evolution of sex from asexual cells as a strategy for trying to escape the effects of deleterious mutations is checked. This strategy is found to provide a successful scenario for the evolution of a stable macroscopic sexual population.
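A minimal sketch of the kind of simulation described, assuming an asexual bit-string Penna model in which survival is filtered through a Fermi function of the number of active deleterious mutations, plus a standard Verhulst factor to bound the population. The genome length, threshold, steepness and carrying capacity are illustrative, not the parameters of the paper.

```python
import math
import random

GENOME_BITS = 32             # one bit per year of life; a set bit is a deleterious mutation
BIRTH_AGE = 8                # minimum reproduction age
BETA, THRESHOLD = 2.0, 3.0   # Fermi-function steepness and mutation threshold
N_MAX = 5000                 # carrying capacity (Verhulst factor)

def survives(genome, age):
    """Survival probability is a Fermi function of mutations active up to this age."""
    active = bin(genome & ((1 << (age + 1)) - 1)).count("1")
    p = 1.0 / (1.0 + math.exp(BETA * (active - THRESHOLD)))
    return random.random() < p

def step(population):
    verhulst = 1.0 - len(population) / N_MAX   # extra death probability when crowded
    new_pop = []
    for genome, age in population:
        if random.random() > verhulst or not survives(genome, age):
            continue
        new_pop.append((genome, age + 1))
        if age >= BIRTH_AGE:                   # asexual birth: copy genome, add one mutation
            child = genome | (1 << random.randrange(GENOME_BITS))
            new_pop.append((child, 0))
    return new_pop

population = [(0, 0) for _ in range(1000)]
for _ in range(200):
    population = step(population)
print(len(population), "individuals after 200 steps")
```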
MOM: A meteorological data checking expert system in CLIPS
NASA Technical Reports Server (NTRS)
Odonnell, Richard
1990-01-01
Meteorologists have long faced the problem of verifying the data they use. Experience shows that there is a sizable number of errors in the data reported by meteorological observers. This is unacceptable for computer forecast models, which depend on accurate data for accurate results. Most errors that occur in meteorological data are obvious to the meteorologist, but time constraints prevent hand-checking. For this reason, it is necessary to have a 'front end' to the computer model to ensure the accuracy of input. Various approaches to automatic data quality control have been developed by several groups. MOM is a rule-based system implemented in CLIPS and utilizing 'consistency checks' and 'range checks'. The system is generic in the sense that it knows some meteorological principles, regardless of specific station characteristics. Specific constraints kept as CLIPS facts in a separate file provide for system flexibility. Preliminary results show that the expert system has detected some inconsistencies not noticed by a local expert.
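The range and consistency checks mentioned above can be pictured as simple rules over an observation record. The sketch below is in plain Python rather than CLIPS, and the thresholds are illustrative; in MOM the station-specific constraints are kept as CLIPS facts in a separate file.

```python
# Example range and consistency checks; thresholds are illustrative only.
RANGE_CHECKS = {
    "temperature_c": (-80.0, 60.0),
    "dewpoint_c": (-80.0, 60.0),
    "wind_speed_ms": (0.0, 120.0),
    "pressure_hpa": (850.0, 1090.0),
}

def check_observation(obs):
    problems = []
    for field, (lo, hi) in RANGE_CHECKS.items():
        if field in obs and not lo <= obs[field] <= hi:
            problems.append(f"{field}={obs[field]} outside [{lo}, {hi}]")
    # Consistency check: dew point cannot exceed air temperature.
    if "temperature_c" in obs and "dewpoint_c" in obs:
        if obs["dewpoint_c"] > obs["temperature_c"]:
            problems.append("dew point exceeds temperature")
    return problems

print(check_observation({"temperature_c": 12.0, "dewpoint_c": 14.5, "pressure_hpa": 1012.0}))
```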
Featured Image: Simulating Planetary Gaps
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2017-03-01
The authors' model of how the above disk would look as we observe it in a scattered-light image. The morphology of the gap can be used to estimate the mass of the planet that caused it. [Dong & Fung 2017] The above image from a computer simulation reveals the dust structure of a protoplanetary disk (with the star obscured in the center) as a newly formed planet orbits within it. A recent study by Ruobing Dong (Steward Observatory, University of Arizona) and Jeffrey Fung (University of California, Berkeley) examines how we can determine the mass of such a planet based on our observations of the gap that the planet opens in the disk as it orbits. The authors' models help us to better understand how our observations of gaps might change if the disk is inclined relative to our line of sight, and how we can still constrain the mass of the gap-opening planet and the viscosity of the disk from the scattered-light images we have recently begun to obtain of distant protoplanetary disks. For more information, check out the paper below! Citation: Ruobing Dong and Jeffrey Fung 2017 ApJ 835 146. doi:10.3847/1538-4357/835/2/146
Bayesian structural equation modeling: a more flexible representation of substantive theory.
Muthén, Bengt; Asparouhov, Tihomir
2012-09-01
This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed Bayesian approach is particularly beneficial in applications where parameters are added to a conventional model such that a nonidentified model is obtained if maximum-likelihood estimation is applied. This approach is useful for measurement aspects of latent variable modeling, such as with confirmatory factor analysis, and the measurement part of structural equation modeling. Two application areas are studied, cross-loadings and residual correlations in confirmatory factor analysis. An example using a full structural equation model is also presented, showing an efficient way to find model misspecification. The approach encompasses 3 elements: model testing using posterior predictive checking, model estimation, and model modification. Monte Carlo simulations and real data are analyzed using Mplus. The real-data analyses use data from Holzinger and Swineford's (1939) classic mental abilities study, Big Five personality factor data from a British survey, and science achievement data from the National Educational Longitudinal Study of 1988.
Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N
2015-04-28
Genetic regulatory networks are key to understanding biochemical systems. Each condition of a genetic regulatory network under different living environments can be modeled as a synchronous Boolean network. The attractors of these Boolean networks help biologists to identify determinant and stable factors. Existing methods identify attractors from a random initial state or from the entire state space simultaneously; they cannot identify fixed-length attractors directly, and their time complexity increases exponentially with the number and length of the attractors. This study uses bounded model checking to quickly locate fixed-length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing fixed-length attractors, which is better suited to large Boolean networks and networks with numerous attractors. Comparisons with the tool BooleNet and empirical experiments on biochemical systems demonstrate the feasibility and efficiency of our approach.
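To illustrate what a fixed-length attractor is, the following brute-force sketch finds cycles of exactly length k in a toy three-gene synchronous Boolean network. The network and its update rules are invented for illustration; the paper's contribution is to encode this search as a bounded model checking / SAT problem so that it scales to large networks, which this sketch does not attempt.

```python
from itertools import product

# Toy synchronous Boolean network (not from the paper): next state of each gene.
def step(state):
    a, b, c = state
    return (b, a and not c, not a)

def attractors_of_length(k, n_genes=3):
    """Return cycles of length exactly k, each reported once as a canonical tuple."""
    found = set()
    for start in product([False, True], repeat=n_genes):
        orbit = [start]
        s = start
        for _ in range(k):
            s = step(s)
            orbit.append(s)
        if s == start and len(set(orbit[:-1])) == k:   # closes after exactly k steps
            found.add(tuple(sorted(orbit[:-1])))
    return found

for k in (1, 2, 3):
    print(k, attractors_of_length(k))
```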
Detection of Local Temperature Change on HTS Cables via Time-Frequency Domain Reflectometry
NASA Astrophysics Data System (ADS)
Bang, Su Sik; Lee, Geon Seok; Kwon, Gu-Young; Lee, Yeong Ho; Ji, Gyeong Hwan; Sohn, Songho; Park, Kijun; Shin, Yong-June
2017-07-01
High temperature superconducting (HTS) cables are drawing attention as transmission and distribution cables in the future grid, and related research on HTS cables has been conducted actively. As HTS cables have reached the demonstration stage, failures of cooling systems that induce the quench phenomenon in HTS cables have become a significant concern. Several diagnostics for HTS cables have been developed, but they still have some limitations in the experimental setup. In this paper, a non-destructive diagnostic technique for the detection of a local temperature change point is proposed. Also, a simulation model of HTS cables with a local temperature change point is suggested to verify the proposed diagnosis. The performance of the diagnosis is checked by comparative analysis between the proposed simulation results and experimental results from a real-world HTS cable. It is expected that the suggested simulation model and diagnosis will contribute to the commercialization of HTS cables in the power grid.
Mohammed, Yassene; Verhey, Janko F
2005-01-01
Background Laser Interstitial ThermoTherapy (LITT) is a well established surgical method. The use of LITT is so far limited to homogeneous tissues, e.g. the liver. One of the reasons is the limited capability of existing treatment planning models to calculate accurately the damage zone. The treatment planning in inhomogeneous tissues, especially of regions near main vessels, poses still a challenge. In order to extend the application of LITT to a wider range of anatomical regions new simulation methods are needed. The model described with this article enables efficient simulation for predicting damaged tissue as a basis for a future laser-surgical planning system. Previously we described the dependency of the model on geometry. With the presented paper including two video files we focus on the methodological, physical and mathematical background of the model. Methods In contrast to previous simulation attempts, our model is based on finite element method (FEM). We propose the use of LITT, in sensitive areas such as the neck region to treat tumours in lymph node with dimensions of 0.5 cm – 2 cm in diameter near the carotid artery. Our model is based on calculations describing the light distribution using the diffusion approximation of the transport theory; the temperature rise using the bioheat equation, including the effect of microperfusion in tissue to determine the extent of thermal damage; and the dependency of thermal and optical properties on the temperature and the injury. Injury is estimated using a damage integral. To check our model we performed a first in vitro experiment on porcine muscle tissue. Results We performed the derivation of the geometry from 3D ultrasound data and show for this proposed geometry the energy distribution, the heat elevation, and the damage zone. Further on, we perform a comparison with the in-vitro experiment. The calculation shows an error of 5% in the x-axis parallel to the blood vessel. Conclusions The FEM technique proposed can overcome limitations of other methods and enables an efficient simulation for predicting the damage zone induced using LITT. Our calculations show clearly that major vessels would not be damaged. The area/volume of the damaged zone calculated from both simulation and in-vitro experiment fits well and the deviation is small. One of the main reasons for the deviation is the lack of accurate values of the tissue optical properties. In further experiments this needs to be validated. PMID:15631630
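The damage integral mentioned above is commonly taken to be the Arrhenius integral, Omega(t) = integral of A exp(-Ea / (R T(tau))) d tau, with Omega >= 1 usually read as irreversible damage. The sketch below accumulates it over a sampled temperature history; the frequency factor and activation energy are the classic Henriques skin-burn coefficients, used here only as placeholders rather than the tissue parameters of the paper.

```python
import math

R = 8.314                   # gas constant, J/(mol K)

def arrhenius_damage(temps_kelvin, dt, a=3.1e98, ea=6.28e5):
    """Accumulate the Arrhenius damage integral over a temperature history.

    temps_kelvin: temperature samples (K), one every dt seconds.
    a (1/s) and ea (J/mol) default to the classic Henriques skin-burn values,
    used here only as placeholders for tissue-specific coefficients.
    """
    omega = 0.0
    for t in temps_kelvin:
        omega += a * math.exp(-ea / (R * t)) * dt
    return omega            # omega >= 1 is commonly read as irreversible damage

history = [310.15 + 20.0] * 600      # tissue held 20 K above body temperature for 10 min
print(arrhenius_damage(history, dt=1.0))
```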
Summary of FY15 results of benchmark modeling activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arguello, J. Guadalupe
2015-08-01
Sandia is a contributing partner in the third phase of a U.S.-German "Joint Project" entitled "Comparison of current constitutive models and simulation procedures on the basis of model calculations of the thermo-mechanical behavior and healing of rock salt." The first goal of the project is to check the ability of numerical modeling tools to correctly describe the relevant deformation phenomena in rock salt under various influences. Achieving this goal will lead to increased confidence in the results of numerical simulations related to the secure storage of radioactive wastes in rock salt, thereby enhancing the acceptance of the results. These results may ultimately be used to make various assertions regarding both the stability analysis of an underground repository in salt, during the operating phase, and the long-term integrity of the geological barrier against the release of harmful substances into the biosphere, in the post-operating phase.
Code of Federal Regulations, 2012 CFR
2012-01-01
... used during line operational simulation for evaluation and line-oriented flight training only to...) When flight testing, flight checking, or line operational simulation is being conducted, the...
Code of Federal Regulations, 2013 CFR
2013-01-01
... used during line operational simulation for evaluation and line-oriented flight training only to...) When flight testing, flight checking, or line operational simulation is being conducted, the...
Code of Federal Regulations, 2010 CFR
2010-01-01
... used during line operational simulation for evaluation and line-oriented flight training only to...) When flight testing, flight checking, or line operational simulation is being conducted, the...
Code of Federal Regulations, 2011 CFR
2011-01-01
... used during line operational simulation for evaluation and line-oriented flight training only to...) When flight testing, flight checking, or line operational simulation is being conducted, the...
Code of Federal Regulations, 2014 CFR
2014-01-01
... used during line operational simulation for evaluation and line-oriented flight training only to...) When flight testing, flight checking, or line operational simulation is being conducted, the...
A class of all digital phase locked loops - Modeling and analysis
NASA Technical Reports Server (NTRS)
Reddy, C. P.; Gupta, S. C.
1973-01-01
An all digital phase locked loop which tracks the phase of the incoming signal once per carrier cycle is proposed. The different elements and their functions, and the phase lock operation are explained in detail. The general digital loop operation is governed by a nonlinear difference equation from which a suitable model is developed. The lock range for the general model is derived. The performance of the digital loop for phase step and frequency step inputs for different levels of quantization without loop filter are studied. The analytical results are checked by simulating the actual system on the digital computer.
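As a rough illustration of a once-per-carrier-cycle digital loop, the sketch below updates a phase estimate from a quantized phase error after a phase step, with no loop filter. The gain, number of quantization levels and step size are arbitrary; it is not the specific loop class analyzed in the paper, but it shows how the quantizer limits the residual phase error.

```python
import math

def simulate_phase_step(phase_step=0.6, gain=0.3, levels=16, cycles=40):
    """First-order digital PLL tracking a sudden phase step, one update per carrier cycle."""
    q = 2 * math.pi / levels           # quantizer step of the phase detector
    estimate = 0.0
    history = []
    for _ in range(cycles):
        error = phase_step - estimate                  # phase detector output
        error_q = q * round(error / q)                 # quantization inside the loop
        estimate += gain * error_q                     # loop update (no loop filter)
        history.append(estimate)
    return history

for k, est in enumerate(simulate_phase_step()[:10]):
    print(k, round(est, 4))
```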
Model Checking Degrees of Belief in a System of Agents
NASA Technical Reports Server (NTRS)
Raimondi, Franco; Primero, Giuseppe; Rungta, Neha
2014-01-01
Reasoning about degrees of belief has been investigated in the past by a number of authors and has a number of practical applications in real life. In this paper we present a unified framework to model and verify degrees of belief in a system of agents. In particular, we describe an extension of the temporal-epistemic logic CTLK and we introduce a semantics based on interpreted systems for this extension. In this way, degrees of beliefs do not need to be provided externally, but can be derived automatically from the possible executions of the system, thereby providing a computationally grounded formalism. We leverage the semantics to (a) construct a model checking algorithm, (b) investigate its complexity, (c) provide a Java implementation of the model checking algorithm, and (d) evaluate our approach using the standard benchmark of the dining cryptographers. Finally, we provide a detailed case study: using our framework and our implementation, we assess and verify the situational awareness of the pilot of Air France 447 flying in off-nominal conditions.
Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A
2014-08-01
The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model, exclusively based on sequencing of patient imaging data, to be implemented on a Monte Carlo treatment planning system (MC-TPS), allowing explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in times efficient enough for clinical practice. The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called the "biophysical" map, generated from enhanced image data of patients to achieve a set of segments that are actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map, which is sequenced to provide direct apertures that are later weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm through the patient CT assembles information about the structures encountered, the mass thickness crossed, as well as the PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine beamlets with different weights during the optimization process. Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: a head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose escalation; a partial breast irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV, treated using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a short enough time to allow routine clinical implementation. The quality assurance protocol followed to check the CARMEN system showed high agreement with the experimental measurements. A Monte Carlo treatment planning model exclusively based on maps derived from patient imaging data has been presented. The sequencing of these maps allows obtaining deliverable apertures, which are weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.
Post-OPC verification using a full-chip pattern-based simulation verification method
NASA Astrophysics Data System (ADS)
Hung, Chi-Yuan; Wang, Ching-Heng; Ma, Cliff; Zhang, Gary
2005-11-01
In this paper, we evaluated and investigated techniques for performing fast full-chip post-OPC verification using a commercial product platform. A number of databases from several technology nodes, i.e. 0.13um, 0.11um and 90nm, are used in the investigation. Although it has been proven that in most cases our OPC technology is robust in general, due to the variety of tape-outs with complicated design styles and technologies it is difficult to develop a "complete or bullet-proof" OPC algorithm that would cover every possible layout pattern. In the evaluation, among dozens of databases, errors were found in some OPC databases by model-based post-OPC checking; such errors could cost significantly in manufacturing - reticle, wafer process, and, more importantly, production delay. From such full-chip OPC database verification, we have learned that optimizing OPC models and recipes on a limited set of test chip designs may not provide sufficient coverage across the range of designs to be produced in the process, and fatal errors (such as pinch or bridge) or poor CD distribution and process-sensitive patterns may still occur. As a result, more than one reticle tape-out cycle is not uncommon to prove models and recipes that approach the center of process for a range of designs. We therefore describe a full-chip pattern-based simulation verification flow that serves both OPC model and recipe development as well as post-OPC verification after production release of the OPC. Lastly, we discuss the differentiation of the new pattern-based and conventional edge-based verification tools and summarize the advantages of our new tool and methodology: (1) accuracy: superior inspection algorithms, down to 1nm accuracy, with the new "pattern based" approach; (2) high-speed performance: pattern-centric algorithms to give the best full-chip inspection efficiency; (3) powerful analysis capability: flexible error distribution, grouping, interactive viewing and hierarchical pattern extraction to narrow down to unique patterns/cells.
Implementing Model-Check for Employee and Management Satisfaction
NASA Technical Reports Server (NTRS)
Jones, Corey; LaPha, Steven
2013-01-01
This presentation will discuss methods by which ModelCheck can be implemented to not only improve model quality, but also satisfy both employees and management through different sets of quality checks. This approach allows a standard set of modeling practices to be upheld throughout a company, with minimal interaction required by the end user. The presenter will demonstrate how to create multiple ModelCheck standards that prevent users from evading the system, and how the system can improve the quality of drawings and models.
H2/H∞ control for grid-feeding converter considering system uncertainty
NASA Astrophysics Data System (ADS)
Li, Zhongwen; Zang, Chuanzhi; Zeng, Peng; Yu, Haibin; Li, Shuhui; Fu, Xingang
2017-05-01
Three-phase grid-feeding converters (GFCs) are key components for integrating distributed generation and renewable power sources into the power utility. Conventionally, proportional integral and proportional resonant-based control strategies are applied to control the output power or current of a GFC. However, those control strategies have poor transient performance and are not robust against uncertainties and volatilities in the system. This paper proposes an H2/H∞-based control strategy, which can mitigate the above restrictions. The uncertainty and disturbance are included to formulate the GFC system state-space model, making it more accurate in reflecting practical system conditions. The paper uses a convex optimisation method to design the H2/H∞-based optimal controller. Instead of using a guess-and-check method, the paper uses particle swarm optimisation to search for an H2/H∞ optimal controller. Several case studies, implemented by both simulation and experiment, verify the superiority of the proposed control strategy over traditional PI control methods, especially under dynamic and variable system conditions.
Dziuda, Lukasz; Biernacki, Marcin P; Baran, Paulina M; Truszczyński, Olaf E
2014-05-01
In the study, we checked: 1) how the simulator test conditions affect the severity of simulator sickness symptoms; 2) how the severity of simulator sickness symptoms changes over time; and 3) whether the conditions of the simulator test affect the severity of these symptoms in different ways, depending on the time that has elapsed since the performance of the task in the simulator. We studied 12 men aged 24-33 years (M = 28.8, SD = 3.26) using a truck simulator. The SSQ questionnaire was used to assess the severity of the symptoms of simulator sickness. Each of the subjects performed three 30-minute tasks running along the same route in a driving simulator. Each of these tasks was carried out in a different simulator configuration: A) fixed base platform with poor visibility; B) fixed base platform with good visibility; and C) motion base platform with good visibility. The measurement of the severity of the simulator sickness symptoms took place in five consecutive intervals. The results of the analysis showed that the simulator test conditions affect in different ways the severity of the simulator sickness symptoms, depending on the time which has elapsed since performing the task on the simulator. The simulator sickness symptoms persisted at the highest level for the test conditions involving the motion base platform. Also, when performing the tasks on the motion base platform, the severity of the simulator sickness symptoms varied depending on the time that had elapsed since performing the task. Specifically, the addition of motion to the simulation increased the oculomotor and disorientation symptoms reported as well as the duration of the after-effects. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batcheller, Thomas Aquinas; Taylor, Dean Dalton
Idaho Nuclear Technology and Engineering Center 300,000-gallon vessel WM-189 was filled in late 2001 with concentrated sodium bearing waste (SBW). Three airlifted liquid samples and a steam jetted slurry sample were obtained for quantitative analysis and characterization of WM-189 liquid phase SBW and tank heel sludge. Estimates were provided for most of the reported data values, based on the greater of (a) analytical uncertainty, and (b) variation of analytical results between nominally similar samples. A consistency check on the data was performed by comparing the total mass of dissolved solids in the liquid, as measured gravimetrically from a dried sample, with the corresponding value obtained by summing the masses of cations and anions in the liquid, based on the reported analytical data. After reasonable adjustments to the nitrate and oxygen concentrations, satisfactory consistency between the two results was obtained. A similar consistency check was performed on the reported compositional data for sludge solids from the steam jetted sample. In addition to the compositional data, various other analyses were performed: particle size distribution was measured for the sludge solids, sludge settling tests were performed, and viscosity measurements were made. WM-189 characterization results were compared with those for WM-180 and other Tank Farm Facility tank characterization data. A 2-liter batch of WM-189 simulant was prepared and a clear, stable solution was obtained, based on a general procedure for mixing SBW simulant that was developed by Dr. Jerry Christian. This WM-189 SBW simulant is considered suitable for laboratory testing for process development.
Medium range order in aluminum-based metallic glasses
NASA Astrophysics Data System (ADS)
Yi, Feng
2011-12-01
Medium range order (MRO) is the structural order existing between short range order and long range order in amorphous materials. Fluctuation electron microscopy (FEM) is an effective method to quantify MRO. The FEM signal depends on several effects; in this thesis, I show how the probe coherence, sample thickness and energy filter affect the FEM signal. We have found that microalloying in Al-based glass has a dramatic effect on the primary crystallization temperature and nanocrystal density after annealing treatment. FEM alone cannot uncover the details of MRO in these alloys. Therefore, I resort to modeling to solve the relationship between the variance signal and the MRO structure. I improved Stratton and Voyles's analytical model. I also performed computer simulations. I explored the effects of thermal disorder and hydrostatic strain on the variance. The extracted size d and volume fraction phi in Al88Y7Fe5, Al88Y6Fe5Cu1 and Al87Y7Fe5Cu1 as-spun samples reveal the relationship between MRO in the as-quenched sample and the thermal behavior of these alloys. I also performed FEM experiments on relaxed Al88Y7Fe5 samples at various annealing times; the MRO structure in these samples does not change. FEM was also done on Al87Y7Fe5Cu1 to check MRO variation during the transient nucleation period. The extracted (d, phi), based on a combination of experimental data and simulation, shows how MRO changes during this period.
The DaveMLTranslator: An Interface for DAVE-ML Aerodynamic Models
NASA Technical Reports Server (NTRS)
Hill, Melissa A.; Jackson, E. Bruce
2007-01-01
It can take weeks or months to incorporate a new aerodynamic model into a vehicle simulation and validate the performance of the model. The Dynamic Aerospace Vehicle Exchange Markup Language (DAVE-ML) has been proposed as a means to reduce the time required to accomplish this task by defining a standard format for typical components of a flight dynamic model. The purpose of this paper is to describe an object-oriented C++ implementation of a class that interfaces a vehicle subsystem model specified in DAVE-ML and a vehicle simulation. Using the DaveMLTranslator class, aerodynamic or other subsystem models can be automatically imported and verified at run-time, significantly reducing the elapsed time between receipt of a DAVE-ML model and its integration into a simulation environment. The translator performs variable initializations, data table lookups, and mathematical calculations for the aerodynamic build-up, and executes any embedded static check-cases for verification. The implementation is efficient, enabling real-time execution. Simple interface code for the model inputs and outputs is the only requirement to integrate the DaveMLTranslator as a vehicle aerodynamic model. The translator makes use of existing table-lookup utilities from the Langley Standard Real-Time Simulation in C++ (LaSRS++). The design and operation of the translator class is described and comparisons with existing, conventional, C++ aerodynamic models of the same vehicle are given.
Aligning observed and modelled behaviour based on workflow decomposition
NASA Astrophysics Data System (ADS)
Wang, Lu; Du, YuYue; Liu, Wei
2017-09-01
When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement for appropriate process models, are increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion in the amount of event logs. Therefore, a new process mining technique is proposed in this paper based on a workflow decomposition method. Petri nets (PNs) are used to describe business processes, and conformance checking of event logs and process models is then investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.
Finding Feasible Abstract Counter-Examples
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Dwyer, Matthew B.; Visser, Willem; Clancy, Daniel (Technical Monitor)
2002-01-01
A strength of model checking is its ability to automate the detection of subtle system errors and produce traces that exhibit those errors. Given the high computational cost of model checking most researchers advocate the use of aggressive property-preserving abstractions. Unfortunately, the more aggressively a system is abstracted the more infeasible behavior it will have. Thus, while abstraction enables efficient model checking it also threatens the usefulness of model checking as a defect detection tool, since it may be difficult to determine whether a counter-example is feasible and hence worth developer time to analyze. We have explored several strategies for addressing this problem by extending an explicit-state model checker, Java PathFinder (JPF), to search for and analyze counter-examples in the presence of abstractions. We demonstrate that these techniques effectively preserve the defect detection ability of model checking in the presence of aggressive abstraction by applying them to check properties of several abstracted multi-threaded Java programs. These new capabilities are not specific to JPF and can be easily adapted to other model checking frameworks; we describe how this was done for the Bandera toolset.
Integration of crosswell seismic data for simulating porosity in a heterogeneous carbonate aquifer
NASA Astrophysics Data System (ADS)
Emery, Xavier; Parra, Jorge
2013-11-01
A challenge for the geostatistical simulation of subsurface properties in mining, petroleum and groundwater applications is the integration of well logs and seismic measurements, which can provide information on geological heterogeneities at a wide range of scales. This paper presents a case study conducted at the Port Mayaca aquifer, located in western Martin County, Florida, in which it is of interest to simulate porosity, based on porosity logs at two wells and high-resolution crosswell seismic measurements of P-wave impedance. To this end, porosity and impedance are transformed into cross-correlated Gaussian random fields, using local transformations. The model parameters (transformation functions, mean values and correlation structure of the transformed fields) are inferred and checked against the data. Multiple realizations of porosity can then be constructed conditionally to the impedance information in the interwell region, which allow identifying one low-porosity structure and two to three flow units that connect the two wells, mapping heterogeneities within these units and visually assessing fluid paths in the aquifer. In particular, the results suggest that the paths in the lower flow units, formed by a network of heterogeneous conduits, are not as smooth as in the upper flow unit.
Code of Federal Regulations, 2014 CFR
2014-01-01
... processing regions)]. If you make the deposit in person to one of our employees, funds from the following... in different states or check processing regions)]. If you make the deposit in person to one of our...] Substitute Checks and Your Rights What Is a Substitute Check? To make check processing faster, federal law...
Strangeness S =-1 hyperon-nucleon interactions: Chiral effective field theory versus lattice QCD
NASA Astrophysics Data System (ADS)
Song, Jing; Li, Kai-Wen; Geng, Li-Sheng
2018-06-01
Hyperon-nucleon interactions serve as basic inputs to studies of hypernuclear physics and dense (neutron) stars. Unfortunately, a precise understanding of these important quantities has lagged far behind that of the nucleon-nucleon interaction due to lack of high-precision experimental data. Historically, hyperon-nucleon interactions are either formulated in quark models or meson exchange models. In recent years, lattice QCD simulations and chiral effective field theory approaches start to offer new insights from first principles. In the present work, we contrast the state-of-the-art lattice QCD simulations with the latest chiral hyperon-nucleon forces and show that the leading order relativistic chiral results can already describe the lattice QCD data reasonably well. Given the fact that the lattice QCD simulations are performed with pion masses ranging from the (almost) physical point to 700 MeV, such studies provide a useful check on both the chiral effective field theory approaches as well as lattice QCD simulations. Nevertheless more precise lattice QCD simulations are eagerly needed to refine our understanding of hyperon-nucleon interactions.
Model Checking Artificial Intelligence Based Planners: Even the Best Laid Plans Must Be Verified
NASA Technical Reports Server (NTRS)
Smith, Margaret H.; Holzmann, Gerard J.; Cucullu, Gordon C., III; Smith, Benjamin D.
2005-01-01
Automated planning systems (APS) are gaining acceptance for use on NASA missions, as evidenced by APS flown on missions such as Orbiter and Deep Space 1, both of which were commanded by onboard planning systems. The planning system takes high level goals and expands them onboard into a detailed sequence of actions that the spacecraft executes. The system must be verified to ensure that the automatically generated plans achieve the goals as expected and do not generate actions that would harm the spacecraft or mission. These systems are typically tested using empirical methods. Formal methods, such as model checking, offer exhaustive or measurable test coverage, which leads to much greater confidence in correctness. This paper describes a formal method based on the SPIN model checker. This method guarantees that possible plans meet certain desirable properties. We express the input model in Promela, the language of SPIN, and express the properties of desirable plans formally.
The use of vestibular models for design and evaluation of flight simulator motion
NASA Technical Reports Server (NTRS)
Bussolari, Steven R.; Young, Laurence R.; Lee, Alfred T.
1989-01-01
Quantitative models for the dynamics of the human vestibular system are applied to the design and evaluation of flight simulator platform motion. An optimal simulator motion control algorithm is generated to minimize the vector difference between perceived spatial orientation estimated in flight and in simulation. The motion controller has been implemented on the Vertical Motion Simulator at NASA Ames Research Center and evaluated experimentally through measurement of pilot performance and subjective rating during VTOL aircraft simulation. In general, pilot performance in a longitudinal tracking task (formation flight) did not appear to be sensitive to variations in platform motion condition as long as motion was present. However, pilot assessment of motion fidelity by means of a rating scale designed for this purpose, were sensitive to motion controller design. Platform motion generated with the optimal motion controller was found to be generally equivalent to that generated by conventional linear crossfeed washout. The vestibular models are used to evaluate the motion fidelity of transport category aircraft (Boeing 727) simulation in a pilot performance and simulator acceptability study at the Man-Vehicle Systems Research Facility at NASA Ames Research Center. Eighteen airline pilots, currently flying B-727, were given a series of flight scenarios in the simulator under various conditions of simulator motion. The scenarios were chosen to reflect the flight maneuvers that these pilots might expect to be given during a routine pilot proficiency check. Pilot performance and subjective rating of simulator fidelity was relatively insensitive to the motion condition, despite large differences in the amplitude of motion provided. This lack of sensitivity may be explained by means of the vestibular models, which predict little difference in the modeled motion sensations of the pilots when different motion conditions are imposed.
Ab initio simulation of diffractometer instrumental function for high-resolution X-ray diffraction
Mikhalychev, Alexander; Benediktovitch, Andrei; Ulyanenkova, Tatjana; Ulyanenkov, Alex
2015-01-01
Modeling of the X-ray diffractometer instrumental function for a given optics configuration is important both for planning experiments and for the analysis of measured data. A fast and universal method for instrumental function simulation, suitable for fully automated computer realization and describing both coplanar and noncoplanar measurement geometries for any combination of X-ray optical elements, is proposed. The method can be identified as semi-analytical backward ray tracing and is based on the calculation of a detected signal as an integral of X-ray intensities for all the rays reaching the detector. The high speed of calculation is provided by the expressions for analytical integration over the spatial coordinates that describe the detection point. Consideration of the three-dimensional propagation of rays without restriction to the diffraction plane provides the applicability of the method for noncoplanar geometry and the accuracy for characterization of the signal from a two-dimensional detector. The correctness of the simulation algorithm is checked in the following two ways: by verifying the consistency of the calculated data with the patterns expected for certain simple limiting cases and by comparing measured reciprocal-space maps with the corresponding maps simulated by the proposed method for the same diffractometer configurations. Both kinds of tests demonstrate the agreement of the simulated instrumental function shape with the measured data. PMID:26089760
NASA Astrophysics Data System (ADS)
Caporali, E.; Chiarello, V.; Galeati, G.
2014-12-01
Peak discharge estimates for a given return period are of primary importance in engineering practice for risk assessment and hydraulic structure design. Different statistical methods are chosen here for the assessment of the flood frequency curve: an indirect technique based on extreme rainfall event analysis, and the Peak Over Threshold (POT) model and the Annual Maxima approach as direct techniques using river discharge data. In the framework of the indirect method, a Monte Carlo simulation approach is adopted to determine a derived frequency distribution of peak runoff using a probabilistic formulation of the SCS-CN method as the stochastic rainfall-runoff model. A Monte Carlo simulation is used to generate a sample of different runoff events from different stochastic combinations of rainfall depth, storm duration, and initial loss inputs. The distribution of the rainfall storm events is assumed to follow the GP law, whose parameters are estimated through the GEV parameters of the annual maximum data. The evaluation of the initial abstraction ratio is investigated, since it is one of the most questionable assumptions in the SCS-CN model and plays a key role in river basins characterized by high-permeability soils, mainly governed by the infiltration excess mechanism. In order to take into account the uncertainty of the model parameters, this modified approach, which is able to revise and re-evaluate the original value of the initial abstraction ratio, is implemented. In the POT model the choice of the threshold has been an essential issue, mainly based on a compromise between bias and variance. The Generalized Extreme Value (GEV) distribution fitted to the annual maxima discharges is therefore compared with the Pareto distributed peaks to check the suitability of the frequency of occurrence representation. The methodology is applied to a large dam in the Serchio river basin, located in the Tuscany Region. The application has shown that the Monte Carlo simulation technique can be a useful tool to provide a more robust estimation of the results obtained by direct statistical methods.
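A schematic of the derived-distribution step described above: sample storm depth from a Generalized Pareto law together with an uncertain curve number, push each combination through the SCS-CN transformation, and read empirical quantiles of the simulated runoff. All distributions and parameters below are placeholders, storm duration and initial-loss sampling are omitted, and the quantiles are per-event rather than annual-maximum return levels.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_gp(n, shape, scale, loc=0.0):
    """Generalized Pareto samples via the inverse CDF (placeholder parameters)."""
    u = rng.uniform(size=n)
    return loc + scale / shape * ((1.0 - u) ** (-shape) - 1.0)

def scs_cn_runoff(p, cn, lam=0.2):
    """Vectorized SCS-CN event runoff (mm)."""
    s = 25400.0 / cn - 254.0
    ia = lam * s
    return np.where(p > ia, (p - ia) ** 2 / (p - ia + s), 0.0)

n = 10_000
storm_mm = sample_gp(n, shape=0.1, scale=30.0, loc=20.0)   # storm depth (mm), illustrative
cn = rng.normal(75.0, 5.0, size=n).clip(40.0, 98.0)        # uncertain curve number
runoff = scs_cn_runoff(storm_mm, cn)

for T in (10, 50, 100):                                    # nominal return periods (per event)
    print(T, np.quantile(runoff, 1.0 - 1.0 / T))
```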
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Ena; Ostriker, Jeremiah P.; Naab, Thorsten
2012-08-01
We study the growth of black holes (BHs) in galaxies using three-dimensional smoothed particle hydrodynamic simulations with new implementations of the momentum mechanical feedback, and restriction of accreted elements to those that are gravitationally bound to the BH. We also include the feedback from the X-ray radiation emitted by the BH, which heats the surrounding gas in the host galaxies, and adds radial momentum to the fluid. We perform simulations of isolated galaxies and merging galaxies and test various feedback models with the new treatment of the Bondi radius criterion. We find that overall the BH growth is similar to what has been obtained by earlier works using the Springel, Di Matteo, and Hernquist algorithms. However, the outflowing wind velocities and mechanical energy emitted by winds are considerably higher (v_w ≈ 1000-3000 km s^-1) compared to the standard thermal feedback model (v_w ≈ 50-100 km s^-1). While the thermal feedback model emits only 0.1% of BH released energy in winds, the momentum feedback model emits more than 30% of the total energy released by the BH in winds. In the momentum feedback model, the degree of fluctuation in both radiant and wind output is considerably larger than in standard treatments. We check that the new model of BH mass accretion agrees with analytic results for the standard Bondi problem.
NASA Astrophysics Data System (ADS)
Zucker, Shay; Giryes, Raja
2018-04-01
Transits of habitable planets around solar-like stars are expected to be shallow, and to have long periods, which means low information content. The current bottleneck in the detection of such transits is caused in large part by the presence of red (correlated) noise in the light curves obtained from the dedicated space telescopes. Based on the groundbreaking results deep learning achieves in many signal and image processing applications, we propose to use deep neural networks to solve this problem. We present a feasibility study, in which we applied a convolutional neural network on a simulated training set. The training set comprised light curves received from a hypothetical high-cadence space-based telescope. We simulated the red noise by using Gaussian Processes with a wide variety of hyper-parameters. We then tested the network on a completely different test set simulated in the same way. Our study proves that very difficult cases can indeed be detected. Furthermore, we show how detection trends can be studied and detection biases quantified. We have also checked the robustness of the neural-network performance against practical artifacts such as outliers and discontinuities, which are known to affect space-based high-cadence light curves. Future work will allow us to use the neural networks to characterize the transit model and identify individual transits. This new approach will certainly be an indispensable tool for the detection of habitable planets in the future planet-detection space missions such as PLATO.
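The training-set construction described above can be sketched in a few lines. The following is a toy illustration under assumed parameters (kernel, cadence, transit depth), not the authors' pipeline: correlated (red) noise is drawn from a Gaussian Process with a squared-exponential kernel, a shallow box-shaped transit is injected, and the resulting labelled curve would be one sample for training a convolutional network.

```python
import numpy as np

rng = np.random.default_rng(0)

def gp_red_noise(t, amp, length_scale):
    """Draw correlated (red) noise from a GP with a squared-exponential kernel."""
    dt = t[:, None] - t[None, :]
    cov = amp**2 * np.exp(-0.5 * (dt / length_scale) ** 2)
    cov += 1e-12 * np.eye(t.size)           # jitter for numerical stability
    return np.linalg.cholesky(cov) @ rng.standard_normal(t.size)

def box_transit(t, period, t0, duration, depth):
    """Very simplified box-shaped periodic transit signal."""
    phase = ((t - t0 + 0.5 * period) % period) - 0.5 * period
    return np.where(np.abs(phase) < 0.5 * duration, -depth, 0.0)

# One simulated light curve (toy scale: 10 days at ~15-minute cadence)
t = np.arange(0.0, 10.0, 0.01)
flux = 1.0 + gp_red_noise(t, amp=5e-4, length_scale=0.3)     # red noise
flux += rng.normal(0.0, 3e-4, t.size)                         # white photon noise
flux += box_transit(t, period=3.0, t0=1.0, duration=0.2, depth=1e-3)
label = 1                                                     # "contains a transit"
print(flux.shape, label)
```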
van der Heijden, Amy; Mulder, Bob C; Poortvliet, P Marijn; van Vliet, Arnold J H
2017-11-25
Performing a tick check after visiting nature is considered the most important preventive measure to avoid contracting Lyme disease. Checking the body for ticks after visiting nature is the only measure that can establish with certainty whether one has been bitten by a tick, and it provides the opportunity to remove the tick as soon as possible, thereby greatly reducing the chance of contracting Lyme disease. However, compliance with performing the tick check is low. In addition, most previous studies on determinants of preventive measures to avoid Lyme disease lack a clear definition and/or operationalization of the term "preventive measures". Those that do distinguish multiple behaviors, including the tick check, fail to describe the systematic steps that should be followed in order to perform the tick check effectively. Hence, the purpose of this study was to identify determinants of systematically performing the tick check, based on social cognitive theory. A cross-sectional self-administered survey questionnaire was filled out online by 508 respondents (M age = 51.7, SD = 16.0; 50.2% men; 86.4% daily or weekly nature visitors). Bivariate correlations and multivariate regression analyses were conducted to identify associations between socio-cognitive determinants (i.e. concepts related to humans' intrinsic and extrinsic motivation to perform a certain behavior) and the tick check, and between socio-cognitive determinants and the proximal goal to do the tick check. The full regression model explained 28% of the variance in doing the tick check. Results showed that performing the tick check was associated with proximal goal (β = .23, p < 0.01), self-efficacy (β = .22, p < 0.01), self-evaluative outcome expectations (β = .21, p < 0.01), descriptive norm (β = .16, p < 0.01), and experience (β = .13, p < 0.01). Our study is among the first to examine the determinants of systematic performance of the tick check, using an extended version of social cognitive theory to identify determinants. Based on the results, a number of practical recommendations can be made to promote the performance of the tick check.
User's guide for MAGIC-Meteorologic and hydrologic genscn (generate scenarios) input converter
Ortel, Terry W.; Martin, Angel
2010-01-01
Meteorologic and hydrologic data used in watershed modeling studies are collected by various agencies and organizations, and stored in various formats. Data may be in a raw, un-processed format with little or no quality control, or may be checked for validity before being made available. Flood-simulation systems require data in near real-time so that adequate flood warnings can be made. Additionally, forecasted data are needed to operate flood-control structures to potentially mitigate flood damages. Because real-time data are of a provisional nature, missing data may need to be estimated for use in flood-simulation systems. The Meteorologic and Hydrologic GenScn (Generate Scenarios) Input Converter (MAGIC) can be used to convert data from selected formats into the Hydrologic Simulation System-Fortran hourly-observations format for input to a Watershed Data Management database, for use in hydrologic modeling studies. MAGIC also can reformat the data to the Full Equations model time-series format, for use in hydraulic modeling studies. Examples of the application of MAGIC for use in the flood-simulation system for Salt Creek in northeastern Illinois are presented in this report.
Wolgin, M; Grabowski, S; Elhadad, S; Frank, W; Kielbassa, A M
2018-03-25
This study aimed to evaluate the educational outcome of a digitally based self-assessment concept (prepCheck; DentsplySirona, Wals, Austria) for pre-clinical undergraduates in the context of a regular phantom-laboratory course. A sample of 47 third-year dental students participated in the course. Students were randomly divided into a prepCheck-supervised (self-assessment) intervention group (IG; n = 24); conventionally supervised students constituted the control group (CG; n = 23). During the preparation of three-surface (MOD) class II amalgam cavities, each IG participant could analyse a superimposed 3D image of his/her preparation against the "master preparation" using the prepCheck software. In the CG, several course instructors performed the evaluations according to pre-defined assessment criteria. After completing the course, a mandatory (blinded) practical examination was taken by all course participants (both IG and CG students), and this assessment involved the preparation of a MOD amalgam cavity. Then, optical impressions by means of a CEREC-Omnicam were taken to digitize all examination preparations, followed by surveying and assessing the latter using prepCheck. The statistical analysis of the digitized samples (Mann-Whitney U test) revealed no significant differences between the cavity dimensions achieved in the IG and CG (P = .406). Additionally, the sum score of the degree of conformity with the "master preparation" (maximum permissible deviation of ±10%) was comparable in both groups (P = .259). The implemented interactive, digitally based self-assessment learning tool for undergraduates appears to be equivalent to the conventional form of supervision. Therefore, such digital learning tools could significantly help to address the ever-increasing student-to-faculty ratio. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Centralized, decentralized, and independent control of a flexible manipulator on a flexible base
NASA Technical Reports Server (NTRS)
Li, Feiyue; Bainum, Peter M.; Xu, Jianke
1991-01-01
The dynamics and control of a flexible manipulator arm with payload mass on a flexible base in space are considered. The controllers are provided by one torquer at the center of the base and one torquer at the connection joint of the robot and the base. The nonlinear dynamics of the system is modeled by applying the finite element method and the Lagrangian formulation. Three control strategies are considered and compared, i.e., centralized control, decentralized control, and independent control. All these control designs are based on linear quadratic regulator theory. A mathematical decomposition is used in the decentralization process so that the coupling between the subsystems is weak, while a physical decomposition is used in the independent control design process. For both the decentralized and the independent controls, the stability of the overall linear system is checked before numerical simulations are initiated. Two numerical examples show that the responses of the independent control system are close to those of the centralized control system, while the responses of the decentralized control system are not.
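All three controllers in this study are based on linear quadratic regulator theory. As a rough illustration of the generic LQR design step (not the paper's finite-element model), the sketch below computes the state-feedback gain for an arbitrary two-state linearized plant and checks closed-loop stability; the system matrices and weights are assumed placeholder values.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative linearized plant x' = A x + B u (placeholder 2-state model,
# not the flexible-manipulator FEM model from the paper)
A = np.array([[0.0, 1.0],
              [-2.0, -0.1]])
B = np.array([[0.0],
              [1.0]])

# LQR weights on state error and control effort (assumed values)
Q = np.diag([10.0, 1.0])
R = np.array([[0.1]])

# Solve the continuous-time algebraic Riccati equation and form the gain u = -K x
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Stability check: all closed-loop eigenvalues should have negative real parts
eigs = np.linalg.eigvals(A - B @ K)
print("LQR gain K =", K)
print("closed-loop eigenvalues:", eigs)
```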
Bańkowski, Robert; Wiadrowska, Bozena; Beresińska, Martyna; Ludwicki, Jan K; Noworyta-Głowacka, Justyna; Godyń, Artur; Doruchowski, Grzegorz; Hołownicki, Ryszard
2013-01-01
Faulty but still operating agricultural pesticide sprayers may pose an unacceptable health risk for operators. The computerized models designed to calculate exposure and risk for pesticide sprayers, used as an aid in the evaluation and further authorisation of plant protection products, may also be applied to assess the health risk for operators when faulty sprayers are used. To evaluate the impact of different exposure scenarios on the health risk for operators using faulty agricultural spraying equipment by means of computer modelling. The exposure modelling was performed for 15 pesticides (5 insecticides, 7 fungicides and 3 herbicides). The critical parameter, i.e. toxicological end-point, on which the risk assessment was based was the no observed adverse effect level (NOAEL). This enabled risk to be estimated under various exposure conditions, such as pesticide concentration in the plant protection product and type of the sprayed crop, as well as the number of treatments. Computer modelling was based on the UK POEM model, including determination of the acceptable operator exposure level (AOEL). Thus the degree of operator exposure could be defined during pesticide treatment whether or not personal protection equipment had been employed by individuals. Data used for computer modelling were obtained from simulated, pesticide-substitute treatments using variously damaged knapsack sprayers. These substitute preparations consisted of markers that allowed computer simulations to be made, analogous to real-life exposure situations, in a dose-dependent fashion. Exposures were estimated according to operator dosimetry exposure under 'field' conditions for low, medium and high target field crops. The exposure modelling in the high target field crops demonstrated exceedance of the AOEL in all simulated treatment cases (100%) using damaged sprayers, irrespective of the type of damage or whether individual protective measures had been adopted. For low and medium target field crops, exceedances occurred in 40-80% of cases. The computer modelling may be considered a practical tool for hazard assessment when faulty agricultural sprayers are used. It may also be applied for programming the quality checks and maintenance systems of this equipment.
Algebraic model checking for Boolean gene regulatory networks.
Tran, Quoc-Nam
2011-01-01
We present a computational method in which modular and Groebner bases (GB) computation in Boolean rings are used for solving problems in Boolean gene regulatory networks (BN). In contrast to other known algebraic approaches, the degree of intermediate polynomials during the calculation of Groebner bases using our method will never grow resulting in a significant improvement in running time and memory space consumption. We also show how calculation in temporal logic for model checking can be done by means of our direct and efficient Groebner basis computation in Boolean rings. We present our experimental results in finding attractors and control strategies of Boolean networks to illustrate our theoretical arguments. The results are promising. Our algebraic approach is more efficient than the state-of-the-art model checker NuSMV on BNs. More importantly, our approach finds all solutions for the BN problems.
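For context, the problem these algebraic methods solve can be stated very simply: find the attractors (steady states and cycles) of a Boolean network. The sketch below is a brute-force illustration of that problem on a tiny hypothetical three-gene network; it enumerates all states rather than using the Groebner-basis computation in Boolean rings that the paper actually proposes, which is what makes the approach scale.

```python
from itertools import product

# Toy 3-gene Boolean network (hypothetical update rules, for illustration only)
def step(state):
    a, b, c = state
    return (b and not c,   # a' = b AND NOT c
            a,             # b' = a
            a or b)        # c' = a OR b

def attractors(n_genes=3):
    """Brute-force attractor search: follow each start state until a cycle repeats."""
    found = set()
    for start in product([False, True], repeat=n_genes):
        seen, state = {}, start
        while state not in seen:
            seen[state] = len(seen)
            state = step(state)
        # states from the first repeated state onward form the attractor cycle
        cycle_start = seen[state]
        ordered = sorted(seen.items(), key=lambda kv: kv[1])
        cycle = [s for s, i in ordered[cycle_start:]]
        found.add(tuple(sorted(cycle)))
    return found

for att in attractors():
    print("attractor:", [tuple(int(x) for x in s) for s in att])
```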
Integrated Exoplanet Modeling with the GSFC Exoplanet Modeling & Analysis Center (EMAC)
NASA Astrophysics Data System (ADS)
Mandell, Avi M.; Hostetter, Carl; Pulkkinen, Antti; Domagal-Goldman, Shawn David
2018-01-01
Our ability to characterize the atmospheres of extrasolar planets will be revolutionized by JWST, WFIRST and future ground- and space-based telescopes. In preparation, the exoplanet community must develop an integrated suite of tools with which we can comprehensively predict and analyze observations of exoplanets, in order to characterize the planetary environments and ultimately search them for signs of habitability and life. The GSFC Exoplanet Modeling and Analysis Center (EMAC) will be a web-accessible high-performance computing platform with science support for modelers and software developers to host and integrate their scientific software tools, with the goal of leveraging the scientific contributions from the entire exoplanet community to improve our interpretations of future exoplanet discoveries. Our suite of models will include stellar models, models for star-planet interactions, atmospheric models, planet system science models, telescope models, instrument models, and finally models for retrieving signals from observational data. By integrating this suite of models, the community will be able to self-consistently calculate the emergent spectra from the planet, whether from emission, scattering, or in transmission, and use these simulations to model the performance of current and new telescopes and their instrumentation. The EMAC infrastructure will not only provide a repository for planetary and exoplanetary community models, modeling tools and inter-model comparisons, but it will include a "run-on-demand" portal with each software tool hosted on a separate virtual machine. The EMAC system will eventually include a means of running or "checking in" new model simulations that are in accordance with the community-derived standards. Additionally, the results of inter-model comparisons will be used to produce open source publications that quantify the model comparisons and provide an overview of community consensus on model uncertainties on the climates of various planetary targets.
Application of conditional moment tests to model checking for generalized linear models.
Pan, Wei
2002-06-01
Generalized linear models (GLMs) are increasingly being used in daily data analysis. However, model checking for GLMs with correlated discrete response data remains difficult. In this paper, through a case study on marginal logistic regression using a real data set, we illustrate the flexibility and effectiveness of using conditional moment tests (CMTs), along with other graphical methods, to do model checking for generalized estimating equation (GEE) analyses. Although CMTs provide an array of powerful diagnostic tests for model checking, they were originally proposed in the econometrics literature and, to our knowledge, have never been applied to GEE analyses. CMTs cover many existing tests, including the (generalized) score test for an omitted covariate, as special cases. In summary, we believe that CMTs provide a class of useful model checking tools.
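A conditional moment test checks whether moment conditions implied by the fitted model (e.g. that residuals are uncorrelated with an omitted covariate) hold in the data. The sketch below is a simplified, independent-data version of such a test for a logistic regression, using the outer-product-of-gradients auxiliary regression form; it is not the GEE-based procedure of the paper, and the data, model, and omitted covariate are all synthetic assumptions.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

rng = np.random.default_rng(1)
n = 2000

# Synthetic data: the fitted model omits z, so the CMT should flag misspecification
x = rng.normal(size=n)
z = rng.normal(size=n)
p_true = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x + 0.8 * z)))
y = rng.binomial(1, p_true)

# Fit a (deliberately misspecified) logistic model with x only
X = sm.add_constant(x)
fit = sm.Logit(y, X).fit(disp=0)
resid = y - fit.predict(X)

# CMT via the outer-product-of-gradients auxiliary regression:
# statistic = n * uncentered R^2 from regressing 1 on [score contributions, moment contributions]
scores = X * resid[:, None]          # score contributions of the fitted model
moment = (z * resid)[:, None]        # tested moment: E[z * (y - p)] = 0 under correct spec
G = np.hstack([scores, moment])
ones = np.ones(n)
beta, *_ = np.linalg.lstsq(G, ones, rcond=None)
r2_uncentered = 1.0 - np.sum((ones - G @ beta) ** 2) / np.sum(ones**2)
stat = n * r2_uncentered             # ~ chi-square(1) under the null, asymptotically

print(f"CMT statistic = {stat:.2f}, p-value = {chi2.sf(stat, df=1):.4f}")
```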
14 CFR 135.347 - Pilots: Initial, transition, upgrade, and differences flight training.
Code of Federal Regulations, 2011 CFR
2011-01-01
... the aircraft simulator or training device; and (2) A flight check in the aircraft or a check in the... differences flight training. 135.347 Section 135.347 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION... flight training. (a) Initial, transition, upgrade, and differences training for pilots must include...
Take the Reins on Model Quality with ModelCHECK and Gatekeeper
NASA Technical Reports Server (NTRS)
Jones, Corey
2012-01-01
Model quality and consistency has been an issue for us due to the diverse experience level and imaginative modeling techniques of our users. Fortunately, setting up ModelCHECK and Gatekeeper to enforce our best practices has helped greatly, but it wasn't easy. There were many challenges associated with setting up ModelCHECK and Gatekeeper including: limited documentation, restrictions within ModelCHECK, and resistance from end users. However, we consider ours a success story. In this presentation we will describe how we overcame these obstacles and present some of the details of how we configured them to work for us.
Vermeulen, Peter Johannes
2014-06-01
There is a general notion in the literature that, with increasing latitude, trees have deeper crowns as a result of a lower solar elevation angle. However, these predictions are based on models that did not include the effects of competition for light between individuals. Here, I argue that there should be selection for trees to increase the height of the crown base, as this decreases shading by neighbouring trees, leading to an evolutionarily stable strategy (ESS). Because the level of between-tree shading increases with decreasing solar angle, the predicted ESS will shift to higher crown base height. This argument is supported by a simulation model to check for the effects of crown shape and the change of light intensity that occurs with changing solar angle on model outcomes. So, the lower solar angle at higher latitudes would tend to select for shallower, and not deeper, crowns. This casts doubt on the common belief that a decreasing solar angle increases crown depth. More importantly, it shows that different assumptions about what should be optimized can lead to different predictions, not just for absolute trait values, but for the direction of selection itself. © 2014 The Author. New Phytologist © 2014 New Phytologist Trust.
Spatially coupled low-density parity-check error correction for holographic data storage
NASA Astrophysics Data System (ADS)
Ishii, Norihiko; Katano, Yutaro; Muroi, Tetsuhiko; Kinoshita, Nobuhiro
2017-09-01
The spatially coupled low-density parity-check (SC-LDPC) code was considered for holographic data storage. The superiority of SC-LDPC was studied by simulation. The simulations show that the performance of SC-LDPC depends on the lifting number, and when the lifting number is over 100, SC-LDPC shows better error correctability compared with irregular LDPC. SC-LDPC is applied to the 5:9 modulation code, which is one of the differential codes. The error-free point is near 2.8 dB, and an input error rate of over 10^-1 can be corrected in simulation. From these simulation results, this error correction code can be applied to actual holographic data storage test equipment. Results showed that an error rate of 8 × 10^-2 can be corrected; furthermore, the code works effectively and shows good error correctability.
Some attributes of a language for property-based testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neagoe, Vicentiu; Bishop, Matt
Property-based testing is a testing technique that evaluates executions of a program. The method checks that specifications, called properties, hold throughout the execution of the program. TASpec is a language used to specify these properties. This paper compares some attributes of the language with the specification patterns used for model-checking languages, and then presents some descriptions of properties that can be used to detect common security flaws in programs. This report describes the results of a one year research project at the University of California, Davis, which was funded by a University Collaboration LDRD entitled ''Property-based Testing for Cyber Security Assurance''.
A model of tungsten anode x-ray spectra.
Hernández, G; Fernández, F
2016-08-01
A semiempirical model for x-ray production in tungsten thick targets was evaluated using a new characterization of the electron fluence. The electron fluence is modeled taking into account both the energy and angular distributions, each of them adjusted to Monte Carlo simulated data. Distances were scaled by the CSDA range to reduce the energy dependence. Bremsstrahlung production was found by integrating the cross section with the fluence in a 1D penetration model. Characteristic radiation was added using a semiempirical law whose validity was checked. The results were compared with the experimental results of Bhat et al., with the SpekCalc numerical tool, and with mcnpx simulation results from the work of Hernandez and Boone. The model described shows better agreement with the experimental results than the SpekCalc predictions in the sense of the area between the spectra. A general improvement of the predictions of half-value layers is also found. The results are also in good agreement with the simulation results in the 50-640 keV energy range. A complete model for x-ray production in thick bremsstrahlung targets has been developed, improving the results of previous works and extending the energy range covered to the 50-640 keV interval.
Statistical inference in comparing DInSAR and GPS data in fault areas
NASA Astrophysics Data System (ADS)
Barzaghi, R.; Borghi, A.; Kunzle, A.
2012-04-01
DInSAR and GPS data are nowadays routinely used in geophysical investigation, e.g. for estimating the slip rate over the fault plane in seismogenic areas. This analysis is usually done by mapping the surface deformation rates as estimated by GPS and DInSAR onto the fault plane using suitable geophysical models (e.g. the Okada model). Usually, DInSAR vertical velocities and GPS horizontal velocities are used for getting an integrated slip estimate. However, it is sometimes critical to merge the two kinds of information, since they may reflect a common underlying geophysical signal plus different disturbing signals that are not related to the fault dynamics. In GPS and DInSAR data analysis, these artifacts are mainly connected to signal propagation in the atmosphere and to hydrological phenomena (e.g. variation in the water table). Thus, a coherence test between the two sources of information must be carried out in order to properly merge the GPS and DInSAR velocities in the inversion procedure. To this aim, statistical tests have been studied to check for the compatibility of the two deformation rate estimates coming from GPS and DInSAR data analysis. This has been done according to both standard and Bayesian testing methodologies. The effectiveness of the proposed inference methods has been checked with numerical simulations in the case of a normal fault. The fault structure is defined following the Pollino fault model and both GPS and DInSAR data are simulated according to real data acquired in this area.
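A minimal frequentist version of such a compatibility test can be written down directly, assuming co-located rate estimates with independent Gaussian errors; the Bayesian variant and the projection of line-of-sight velocities onto common components are omitted, and all numbers below are illustrative rather than Pollino-area values.

```python
import numpy as np
from scipy.stats import chi2, norm

# Hypothetical co-located deformation rates [mm/yr] and 1-sigma errors (illustrative)
v_gps    = np.array([1.2, -0.4, 2.1, 0.8])
s_gps    = np.array([0.5,  0.6, 0.5, 0.4])
v_dinsar = np.array([0.9, -1.1, 2.6, 0.7])
s_dinsar = np.array([0.7,  0.8, 0.6, 0.6])

# Point-wise compatibility: z-test on the difference of the two estimates
z = (v_gps - v_dinsar) / np.sqrt(s_gps**2 + s_dinsar**2)
p_point = 2.0 * norm.sf(np.abs(z))

# Global coherence: under H0 (same underlying signal) the sum of z^2 is chi-square(k)
stat = np.sum(z**2)
p_global = chi2.sf(stat, df=z.size)
print("point-wise p:", np.round(p_point, 3), " global p:", round(p_global, 3))
```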
Testing Instrument for Flight-Simulator Displays
NASA Technical Reports Server (NTRS)
Haines, Richard F.
1987-01-01
Displays for flight-training simulators rapidly aligned with aid of integrated optical instrument. Calibrations and tests such as aligning boresight of display with respect to user's eyes, checking and adjusting display horizon, checking image sharpness, measuring illuminance of displayed scenes, and measuring distance of optical focus of scene performed with single unit. New instrument combines all measurement devices in single, compact, integrated unit. Requires just one initial setup. Employs laser and produces narrow, collimated beam for greater measurement accuracy. Uses only one moving part, double right prism, to position laser beam.
NASA Astrophysics Data System (ADS)
Jiao, Cheng-Liang; Mineshige, Shin; Takeuchi, Shun; Ohsuga, Ken
2015-06-01
We apply our two-dimensional (2D), radially self-similar steady-state accretion flow model to the analysis of hydrodynamic simulation results of supercritical accretion flows. Self-similarity is checked and the input parameters for the model calculation, such as advective factor and heat capacity ratio, are obtained from time-averaged simulation data. Solutions of the model are then calculated and compared with the simulation results. We find that in the converged region of the simulation, excluding the part too close to the black hole, the radial distributions of azimuthal velocity v_φ, density ρ, and pressure p basically follow the self-similar assumptions, i.e., they are roughly proportional to r^(-0.5), r^(-n), and r^(-(n+1)), respectively, where n ≈ 0.85 for a mass injection rate of 1000 L_E/c^2, and n ≈ 0.74 for 3000 L_E/c^2. The distributions of v_r and v_θ agree less well with self-similarity, possibly due to convective motions in the r-θ plane. The distribution of velocity, density, and pressure in the θ direction obtained by the steady model agrees well with the simulation results within the calculation boundary of the steady model. Outward mass flux in the simulations is overall directed toward a polar angle of 0.8382 rad (∼48.0°) for 1000 L_E/c^2 and 0.7852 rad (∼43.4°) for 3000 L_E/c^2, and ∼94% of the mass inflow is driven away as outflow, while outward momentum and energy fluxes are focused around the polar axis. Parts of these fluxes lie in the region that is not calculated by the steady model, and special attention should be paid when the model is applied.
MO-FG-CAMPUS-TeP1-03: Pre-Treatment Surface Imaging Based Collision Detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiant, D; Maurer, J; Liu, H
2016-06-15
Purpose: Modern radiotherapy increasingly employs large immobilization devices, gantry attachments, and couch rotations for treatments, all of which raise the risk of collisions between the patient and the gantry / couch. Collision detection is often achieved by manually checking each couch position in the treatment room and sometimes results in extraneous imaging if collisions are detected after image based setup has begun. In the interest of improving efficiency and avoiding extra imaging, we explore the use of a surface imaging based collision detection model. Methods: Surfaces acquired from AlignRT (VisionRT, London, UK) were transferred in wavefront format to a custom Matlab (Mathworks, Natick, MA) software package (CCHECK). Computed tomography (CT) scans acquired at the same time were sent to CCHECK in DICOM format. In CCHECK, binary maps of the surfaces were created and overlaid on the CT images based on the fixed relationship of the AlignRT and CT coordinate systems. Isocenters were added through a graphical user interface (GUI). CCHECK then compares the inputted surfaces to a model of the linear accelerator (linac) to check for collisions at defined gantry and couch positions. Note, CCHECK may be used with or without a CT. Results: The nominal surface image field of view is 650 mm × 900 mm, with variance based on patient position and size. The accuracy of collision detection is primarily based on the linac model and the surface mapping process. The current linac model and mapping process yield detection accuracies on the order of 5 mm, assuming no change in patient posture between surface acquisition and treatment. Conclusions: CCHECK provides a non-ionizing method to check for collisions without the patient in the treatment room. Collision detection accuracy may be improved with more robust linac modeling. Additional gantry attachments (e.g. conical collimators) can be easily added to the model.
A system for EPID-based real-time treatment delivery verification during dynamic IMRT treatment.
Fuangrod, Todsaporn; Woodruff, Henry C; van Uytven, Eric; McCurdy, Boyd M C; Kuncic, Zdenka; O'Connor, Daryl J; Greer, Peter B
2013-09-01
To design and develop a real-time electronic portal imaging device (EPID)-based delivery verification system for dynamic intensity modulated radiation therapy (IMRT) which enables detection of gross treatment delivery errors before delivery of substantial radiation to the patient. The system utilizes a comprehensive physics-based model to generate a series of predicted transit EPID image frames as a reference dataset and compares these to measured EPID frames acquired during treatment. The two datasets are compared using MLC aperture comparison and cumulative signal checking techniques. The system operation in real-time was simulated offline using previously acquired images for 19 IMRT patient deliveries with both frame-by-frame comparison and cumulative frame comparison. Simulated error case studies were used to demonstrate the system sensitivity and performance. The accuracy of the synchronization method was shown to agree within two control points, which corresponds to approximately 1% of the total MU to be delivered for dynamic IMRT. The system achieved mean real-time gamma results for frame-by-frame analysis of 86.6% and 89.0% for 3%, 3 mm and 4%, 4 mm criteria, respectively, and 97.9% and 98.6% for cumulative gamma analysis. The system can detect a 10% MU error using 3%, 3 mm criteria within approximately 10 s. The EPID-based real-time delivery verification system successfully detected simulated gross errors introduced into patient plan deliveries in near real-time (within 0.1 s). A real-time radiation delivery verification system for dynamic IMRT has been demonstrated that is designed to prevent major mistreatments in modern radiation therapy.
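The gamma criteria quoted above (e.g. 3%/3 mm) combine a dose-difference tolerance with a distance-to-agreement tolerance. The sketch below is a simple brute-force reference implementation of a global 2D gamma pass rate on toy frames, not the clinical, frame-synchronized system described in the paper; the search radius, normalization, and edge handling (periodic wrap via np.roll) are simplifying assumptions.

```python
import numpy as np

def gamma_pass_rate(ref, meas, pixel_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Brute-force global gamma analysis between two 2D dose/EPID frames.

    dose_tol is a fraction of the reference maximum (global normalization).
    """
    dose_norm = dose_tol * ref.max()
    gamma = np.full(ref.shape, np.inf)
    w = int(np.ceil(2 * dist_tol_mm / pixel_mm))      # search window (2x DTA), an assumption
    for dy in range(-w, w + 1):
        for dx in range(-w, w + 1):
            dist2 = ((dy * pixel_mm) ** 2 + (dx * pixel_mm) ** 2) / dist_tol_mm**2
            shifted = np.roll(np.roll(meas, dy, axis=0), dx, axis=1)   # wraps at edges
            dose2 = ((shifted - ref) / dose_norm) ** 2
            gamma = np.minimum(gamma, np.sqrt(dist2 + dose2))
    return 100.0 * np.mean(gamma <= 1.0)

# Toy frames: the measured frame is the reference with 1% multiplicative noise
rng = np.random.default_rng(3)
ref = np.exp(-((np.indices((64, 64)) - 32) ** 2).sum(axis=0) / 300.0)
meas = ref * (1.0 + rng.normal(0.0, 0.01, ref.shape))
print(f"gamma pass rate (3%/3 mm): {gamma_pass_rate(ref, meas, pixel_mm=1.0):.1f}%")
```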
Evaluation of the implementation of an integrated program for musculoskeletal system care.
Larrañaga, Igor; Soto-Gordoa, Myriam; Arrospide, Arantzazu; Jauregi, María Luz; Millas, Jesús; San Vicente, Ricardo; Aguirrebeña, Jabier; Mar, Javier
The chronic nature of musculoskeletal diseases requires integrated care involving Primary Care and the specialities of Rheumatology, Traumatology and Rehabilitation. The aim of this study was to assess the implementation of an integrated organizational model for osteoporosis, low back pain, shoulder disease and knee disease using Deming's continuous improvement process and considering referrals and resource consumption. A simulation model was used in the planning stage to predict the evolution of musculoskeletal disease resource consumption and to carry out a Budget Impact Analysis from 2012 to 2020 in the Goierri-Alto Urola region. In the checking stage, the status of the process in 2014 was evaluated using statistical analysis to check the degree of achievement of the objectives for each speciality. The simulation models showed that the population with musculoskeletal disease in Goierri-Alto Urola will increase by 4.4% by 2020. Because of that, the expenses of a conventional healthcare system will have increased by 5.9%. However, if the intervention reaches its objectives the budget would decrease by 8.5%. The statistical analysis showed a decline in referrals to the Traumatology service and a reduction of successive consultations in all specialities. The implementation of the integrated organizational model for osteoporosis, low back pain, shoulder disease and knee disease is still at an early stage. However, the empowerment of Primary Care improved patient referrals and reduced costs. Copyright © 2016 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.
Solomon-Krakus, Shauna; Sabiston, Catherine M
2017-12-01
This study examined whether body checking was a correlate of weight- and body-related shame and guilt for men and women. Participants were 537 adults (386 women) between the ages of 17 and 74 (M age = 28.29, SD = 14.63). Preliminary analyses showed women reported significantly more body checking (p < .001), weight- and body-related shame (p < .001), and weight- and body-related guilt (p < .001) than men. In sex-stratified hierarchical linear regression models, body checking was significantly and positively associated with weight- and body-related shame (R² = .29 and .43, p < .001) and weight- and body-related guilt (R² = .34 and .45, p < .001) for men and women, respectively. Based on these findings, body checking is associated with negative weight- and body-related self-conscious emotions. Intervention and prevention efforts aimed at reducing negative weight- and body-related self-conscious emotions should consider focusing on body checking for adult men and women. Copyright © 2017 Elsevier Ltd. All rights reserved.
Capturing, using, and managing quality assurance knowledge for shuttle post-MECO flight design
NASA Technical Reports Server (NTRS)
Peters, H. L.; Fussell, L. R.; Goodwin, M. A.; Schultz, Roger D.
1991-01-01
Ascent initialization values used by the Shuttle's onboard computer for nominal and abort mission scenarios are verified by a six degrees of freedom computer simulation. The procedure that the Ascent Post Main Engine Cutoff (Post-MECO) group uses to perform quality assurance (QA) of the simulation is time consuming. Also, the QA data, checklists and associated rationale, though known by the group members, is not sufficiently documented, hindering transfer of knowledge and problem resolution. A new QA procedure which retains the current high level of integrity while reducing the time required to perform QA is needed to support the increasing Shuttle flight rate. Documenting the knowledge is also needed to increase its availability for training and problem resolution. To meet these needs, a knowledge capture process, embedded into the group activities, was initiated to verify the existing QA checks, define new ones, and document all rationale. The resulting checks were automated in a conventional software program to achieve the desired standardization, integrity, and time reduction. A prototype electronic knowledge base was developed with Macintosh's HyperCard to serve as a knowledge capture tool and data storage.
NASA Astrophysics Data System (ADS)
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
In order to meet the needs of the high-speed development of optical communication systems, a construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity check matrix of the code constructed by this method has no cycle of length 4, which ensures that the obtained code has a good distance property. Simulation results show that at a bit error rate (BER) of 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB respectively compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is respectively 0.2 dB and 0.4 dB higher compared with those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
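A QC-LDPC parity-check matrix is assembled from circulant permutation blocks whose sizes correspond to the lifting factor. The sketch below shows only that generic assembly step on a toy base matrix with arbitrary shift exponents; it does not reproduce the paper's finite-field design, which chooses the exponents specifically so that the Tanner graph has no length-4 cycles.

```python
import numpy as np

def circulant_permutation(size, shift):
    """size x size identity matrix with its columns cyclically shifted by `shift`."""
    return np.roll(np.eye(size, dtype=np.uint8), shift, axis=1)

def qc_ldpc_parity_matrix(exponents, lift):
    """Assemble a QC-LDPC parity-check matrix from a base matrix of shift exponents;
    -1 denotes an all-zero block. Exponents here are arbitrary, for illustration only."""
    rows = []
    for row in exponents:
        blocks = [np.zeros((lift, lift), dtype=np.uint8) if e < 0
                  else circulant_permutation(lift, e) for e in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# Toy example: 2 x 4 base matrix, lifting factor 5 -> 10 x 20 parity-check matrix
base = [[0, 1, 2, -1],
        [3, -1, 0, 4]]
H = qc_ldpc_parity_matrix(base, lift=5)
print(H.shape, "column weights:", H.sum(axis=0))
```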
The meaning of death: some simulations of a model of healthy and unhealthy consumption.
Forster, M
2001-07-01
Simulations of a model of healthy and unhealthy consumption are used to investigate the impact of various terminal conditions on life-span, pathways of health-related consumption and health. A model in which life-span and the 'death' stock of health are fixed is compared to versions in which (i) the 'death' stock of health is freely chosen; (ii) life-span is freely chosen; (iii) both the 'death' stock of health and life-span are freely chosen. The choice of terminal conditions has a striking impact on optimal plans. Results are discussed with reference to the existing demand for health literature and illustrate the application of iterative processes to determine optimal life-span, the role played by the marginal value of health capital in determining optimal plans, and the importance of checking the second-order conditions for the optimal choice of life-span.
Carter, Eleanor L; Duguid, Alasdair; Ercole, Ari; Matta, Basil; Burnstein, Rowan M; Veenith, Tonny
2014-03-01
Ventilation-associated pneumonia (VAP) is the commonest nosocomial infection in intensive care. Implementation of a VAP prevention care bundle is a proven method to reduce its incidence. The UK care bundle recommends maintenance of the tracheal tube cuff pressure at 20 to 30 cmH₂O with 4-hourly pressure checks and use of tracheal tubes with subglottic aspiration ports in patients admitted for more than 72 h. To evaluate the effects of tracheal tube type and cuff pressure monitoring technique on leakage of subglottic secretions past the tracheal tube cuff. Bench-top study. Laboratory. A model adult trachea with simulated subglottic secretions was intubated with a tracheal tube with the cuff inflated to 25 cmH₂O. Experiments were conducted using a Portex Profile Soft Seal tracheal tube with three cuff pressure monitoring strategies and using a Portex SACETT tracheal tube with intermittent cuff pressure checks. Rate of simulated secretion leakage past the tracheal tube cuff. Mean ± SD leakage of fluid past the Profile Soft Seal tracheal tube cuff was 2.25 ± 1.49 ml min⁻¹ with no monitoring of cuff pressure, 2.98 ± 1.63 ml min⁻¹ with intermittent cuff pressure monitoring and 3.83 ± 2.17 ml min⁻¹ with continuous cuff pressure monitoring (P <0.001). Using a SACETT tracheal tube with a subglottic aspiration port and aspirating the simulated secretions prior to intermittent cuff pressure checks reduced the leakage rate to 0.50 ± 0.48 ml min⁻¹ (P <0.001). Subglottic secretions leaked past the tracheal tube cuff with all tube types and cuff pressure monitoring strategies in this model. Significantly higher rates were observed with continuous cuff pressure monitoring and significantly lower rates were observed when using a tracheal tube with a subglottic aspiration port. Further evaluation of medical device performance is needed in order to design more effective VAP prevention strategies.
Three-dimensional computer model for the atmospheric general circulation experiment
NASA Technical Reports Server (NTRS)
Roberts, G. O.
1984-01-01
An efficient, flexible, three-dimensional, hydrodynamic, computer code has been developed for a spherical cap geometry. The code will be used to simulate NASA's Atmospheric General Circulation Experiment (AGCE). The AGCE is a spherical, baroclinic experiment which will model the large-scale dynamics of our atmosphere; it has been proposed to NASA for future Spacelab flights. In the AGCE a radial dielectric body force will simulate gravity, with hot fluid tending to move outwards. In order that this force be dominant, the AGCE must be operated in a low gravity environment such as Spacelab. The full potential of the AGCE will only be realized by working in conjunction with an accurate computer model. Proposed experimental parameter settings will be checked first using model runs. Then actual experimental results will be compared with the model predictions. This interaction between experiment and theory will be very valuable in determining the nature of the AGCE flows and hence their relationship to analytical theories and actual atmospheric dynamics.
Topographic Controls on Landslide and Debris-Flow Mobility
NASA Astrophysics Data System (ADS)
McCoy, S. W.; Pettitt, S.
2014-12-01
Regardless of whether a granular flow initiates from failure and liquefaction of a shallow landslide or from overland flow that entrains sediment to form a debris flow, the resulting flow poses hazards to downslope communities. Understanding controls on granular-flow mobility is critical for accurate hazard prediction. The topographic form of granular-flow paths can vary significantly across different steeplands and is one of the few flow-path properties that can be readily altered by engineered control structures such as closed-type check dams. We use grain-scale numerical modeling (discrete element method simulations) of free-surface, gravity-driven granular flows to investigate how different topographic profiles with the same mean slope and total relief can produce notable differences in flow mobility due to strong nonlinearities inherent to granular-flow dynamics. We describe how varying the profile shape from planar, to convex up, to concave up, as well how varying the number, size, and location of check dams along a flow path, changes flow velocity, thickness, discharge, energy dissipation, impact force and runout distance. Our preliminary results highlight an important path dependence for this nonlinear system, show that caution should be used when predicting flow dynamics from path-averaged properties, and provide some mechanics-based guidance for engineering control structures.
NASA Astrophysics Data System (ADS)
Bormann, H.; Diekkrüger, B.
2003-04-01
A conceptual model is presented to simulate the water fluxes of regional catchments in Benin (West Africa). The model is applied in the framework of the IMPETUS project (an integrated approach to the efficient management of scarce water resources in West Africa), which aims to assess the effects of environmental and anthropogenic changes on the regional hydrological processes and on the water availability in Benin. In order to assess the effects of decreasing precipitation and increasing human activities on the hydrological processes in the upper Ouémé valley, a scenario analysis is performed to predict possible changes. Therefore a regional hydrological model is proposed which reproduces the recent hydrological processes, and which is able to consider the changes of landscape properties. The study presented aims to check the validity of the conceptual and lumped model under the conditions of the subhumid tree savannah and therefore analyses the importance of possible sources of uncertainty. The main focus is on the uncertainties caused by input data, model parameters and model structure. As the model simulates the water fluxes at the catchment outlet of the Térou river (3133 km2) with sufficient quality, first results of a scenario analysis are presented. Changes of interest are the expected future decrease in amount and change in temporal structure of the precipitation (e.g. minus X percent precipitation during the whole season versus minus X percent precipitation at the end of the rainy season), the decrease in soil water storage capacity caused by erosion, and the increasing consumption of ground water for drinking water and agricultural purposes. Building on the results obtained, the perspectives of lumped and conceptual models are discussed, with special regard to the management options available with this kind of model. Advantages and disadvantages compared to alternative model approaches (process-based, physics-based) are discussed.
Extension of specification language for soundness and completeness of service workflow
NASA Astrophysics Data System (ADS)
Viriyasitavat, Wattana; Xu, Li Da; Bi, Zhuming; Sapsomboon, Assadaporn
2018-05-01
A Service Workflow is an aggregation of distributed services that fulfill specific functionalities. With the ever increasing number of available services, methodologies for selecting services against given requirements have become a main research subject in multiple disciplines. A few researchers have contributed formal specification languages and methods for model checking; however, existing methods have difficulty tackling the complexity of workflow compositions. In this paper, we propose to formalize the specification language to reduce the complexity of the workflow composition. To this end, we extend a specification language with formal logic, so that effective theorems can be derived for the verification of syntax, semantics, and inference rules in the workflow composition. The logic-based approach automates compliance checking effectively. The Service Workflow Specification (SWSpec) has been extended and formulated, and the soundness, completeness, and consistency of SWSpec applications have been verified; note that a logic-based SWSpec is mandatory for the development of model checking. The application of the proposed SWSpec is demonstrated by examples addressing the stated soundness, completeness, and consistency.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-11
...-Based Criminal History Records Checks for Individuals Seeking Unescorted Access to Non-Power Reactors... reactor (NPR) licensees to obtain fingerprint-based criminal history records checks before granting any...) identification and criminal history records checks of individuals permitted unescorted access to a utilization...
Automatic control of finite element models for temperature-controlled radiofrequency ablation.
Haemmerich, Dieter; Webster, John G
2005-07-14
The finite element method (FEM) has been used to simulate cardiac and hepatic radiofrequency (RF) ablation. The FEM allows modeling of complex geometries that cannot be solved by analytical methods or finite difference models. In both hepatic and cardiac RF ablation a common control mode is temperature-controlled mode. Commercial FEM packages do not support automated temperature control. Most researchers manually control the applied power by trial and error to keep the tip temperature of the electrodes constant. We implemented a PI controller in a control program written in C++. The program checks the tip temperature after each step and adjusts the applied voltage to keep the temperature constant. We created a closed-loop system consisting of an FEM model and the software controlling the applied voltage. The control parameters for the controller were optimized using a closed-loop system simulation. We present results of a temperature-controlled 3-D FEM model of a RITA model 30 electrode. The control software effectively controlled the applied voltage in the FEM model to bring the electrodes to, and keep them at, the target temperature of 100 degrees C. The closed-loop system simulation output closely correlated with the FEM model, and allowed us to optimize control parameters. The closed-loop control of the FEM model allowed us to implement temperature-controlled RF ablation with minimal user input.
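The control loop described here (check tip temperature, update applied voltage via a PI law) can be sketched compactly if the FEM solver is replaced by a toy first-order thermal model. The snippet below is such a minimal sketch in Python rather than the paper's C++ program; the plant constants, gains, and voltage clamp are all illustrative assumptions, not values used with the RITA model 30 electrode.

```python
# Toy stand-in for one FEM time step: first-order lag plus heating ~ voltage squared
def plant_step(temp, voltage, dt, tau=8.0, c=0.0022, ambient=37.0):
    return temp + dt * ((ambient - temp) / tau + c * voltage**2)

def run_pi_control(target=100.0, kp=2.0, ki=0.5, dt=0.1, t_end=120.0, v_max=100.0):
    temp, integral, voltage = 37.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        error = target - temp                 # check the tip temperature
        u = kp * error + ki * integral
        if 0.0 < u < v_max:                   # simple anti-windup: freeze integral when clamped
            integral += error * dt
        voltage = min(max(u, 0.0), v_max)     # control the applied voltage
        temp = plant_step(temp, voltage, dt)  # stands in for one FEM solver step
    return temp, voltage

temp, voltage = run_pi_control()
print(f"final tip temperature: {temp:.1f} C at {voltage:.1f} V")
```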
Ray, Jaideep; Lefantzi, Sophia; Arunajatesan, Srinivasan; ...
2017-09-07
In this paper, we demonstrate a statistical procedure for learning a high-order eddy viscosity model (EVM) from experimental data and using it to improve the predictive skill of a Reynolds-averaged Navier–Stokes (RANS) simulator. The method is tested in a three-dimensional (3D), transonic jet-in-crossflow (JIC) configuration. The process starts with a cubic eddy viscosity model (CEVM) developed for incompressible flows. It is fitted to limited experimental JIC data using shrinkage regression. The shrinkage process removes all the terms from the model, except an intercept, a linear term, and a quadratic one involving the square of the vorticity. The shrunk eddy viscosity model is implemented in an RANS simulator and calibrated, using vorticity measurements, to infer three parameters. The calibration is Bayesian and is solved using a Markov chain Monte Carlo (MCMC) method. A 3D probability density distribution for the inferred parameters is constructed, thus quantifying the uncertainty in the estimate. The phenomenal cost of using a 3D flow simulator inside an MCMC loop is mitigated by using surrogate models ("curve-fits"). A support vector machine classifier (SVMC) is used to impose our prior belief regarding parameter values, specifically to exclude nonphysical parameter combinations. The calibrated model is compared, in terms of its predictive skill, to simulations using uncalibrated linear and CEVMs. Finally, we find that the calibrated model, with one quadratic term, is more accurate than the uncalibrated simulator. The model is also checked at a flow condition at which the model was not calibrated.
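The Bayesian calibration step can be illustrated with a drastically reduced sketch: one parameter instead of three, a cheap analytic "surrogate" instead of the RANS curve-fits, and a flat prior on a plausible interval playing the role of the SVM classifier that excludes nonphysical combinations. None of the data or functions below come from the jet-in-crossflow study; they are assumptions chosen only to show the random-walk Metropolis mechanics.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "measurements" from a hypothetical true parameter
theta_true, sigma_obs = 1.3, 0.05
x_obs = np.linspace(0.0, 1.0, 20)
y_obs = np.exp(-theta_true * x_obs) + rng.normal(0.0, sigma_obs, x_obs.size)

def surrogate(theta, x):
    """Cheap stand-in for the flow-solver surrogate ("curve-fit")."""
    return np.exp(-theta * x)

def log_post(theta):
    """Gaussian likelihood plus a flat prior on a physically plausible interval."""
    if not 0.0 < theta < 5.0:            # prior support (role of the SVM classifier)
        return -np.inf
    resid = y_obs - surrogate(theta, x_obs)
    return -0.5 * np.sum((resid / sigma_obs) ** 2)

# Random-walk Metropolis sampling of the posterior
n_steps, step = 20_000, 0.1
chain = np.empty(n_steps)
theta, lp = 1.0, log_post(1.0)
for i in range(n_steps):
    prop = theta + step * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain[i] = theta

burn = chain[5000:]
print(f"posterior mean = {burn.mean():.3f}, 95% interval = {np.percentile(burn, [2.5, 97.5])}")
```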
Modelling terrestrial nitrous oxide emissions and implications for climate feedback.
Xu-Ri; Prentice, I Colin; Spahni, Renato; Niu, Hai Shan
2012-10-01
Ecosystem nitrous oxide (N2O) emissions respond to changes in climate and CO2 concentration as well as anthropogenic nitrogen (N) enhancements. Here, we aimed to quantify the responses of natural ecosystem N2O emissions to multiple environmental drivers using a process-based global vegetation model (DyN-LPJ). We checked that modelled annual N2O emissions from nonagricultural ecosystems could reproduce field measurements worldwide, and experimentally observed responses to step changes in environmental factors. We then simulated global N2O emissions throughout the 20th century and analysed the effects of environmental changes. The model reproduced well the global pattern of N2O emissions and the observed responses of N cycle components to changes in environmental factors. Simulated 20th century global decadal-average soil emissions were c. 8.2-9.5 Tg N yr^-1 (or 8.3-10.3 Tg N yr^-1 with N deposition). Warming and N deposition contributed 0.85±0.41 and 0.80±0.14 Tg N yr^-1, respectively, to an overall upward trend. Rising CO2 also contributed, in part, through a positive interaction with warming. The modelled temperature dependence of N2O emission (c. 1 Tg N yr^-1 K^-1) implies a positive climate feedback which, over the lifetime of N2O (114 yr), could become as important as the climate-carbon cycle feedback caused by soil CO2 release. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.
GIS-Based Noise Simulation Open Source Software: N-GNOIS
NASA Astrophysics Data System (ADS)
Vijay, Ritesh; Sharma, A.; Kumar, M.; Shende, V.; Chakrabarti, T.; Gupta, Rajesh
2015-12-01
Geographical information system (GIS)-based noise simulation software (N-GNOIS) has been developed to simulate the noise scenario due to point and mobile sources, considering the impact of geographical features and meteorological parameters. These have been addressed in the software through attenuation modules for the atmosphere, vegetation and barriers. N-GNOIS is a user friendly, platform-independent and Open Geospatial Consortium (OGC)-compliant software. It has been developed using open source technology (QGIS) and an open source language (Python). N-GNOIS has unique features like the cumulative impact of point and mobile sources, building structure and honking due to traffic. Honking is the most common phenomenon in developing countries and is frequently observed on any type of road. N-GNOIS also helps in designing physical barriers and vegetation cover to check the propagation of noise, and acts as a decision making tool for planning and management of the noise component in environmental impact assessment (EIA) studies.
Methods for Geometric Data Validation of 3d City Models
NASA Astrophysics Data System (ADS)
Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.
2015-12-01
Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is however a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence their quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point would be to have correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon level checks to validate the correctness of each polygon, i.e. closedness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges are detected, among additional criteria. Self-intersection might lead to different results, e.g. intersection points, lines or areas. Depending on the geometric constellation, they might represent gaps between bounding polygons of the solids, overlaps, or violations of the 2-manifoldness. Not least due to floating-point precision issues, tolerances must be considered in some algorithms, e.g. planarity and solid self-intersection. Effects of different tolerance values and their handling are discussed; recommendations for suitable values are given. The goal of the paper is to give a clear understanding of geometric validation in the context of 3D city models. This should also enable the data holder to get a better comprehension of the validation results and their consequences for the deployment fields of the validated data set.
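One of the polygon-level checks mentioned above, planarity with an explicit tolerance, can be sketched in a few lines. This is an illustrative implementation under an assumed tolerance value, not the CityDoctor code: the best-fit plane is obtained from the SVD of the centred vertex cloud, and the maximum vertex distance from that plane is compared against the tolerance.

```python
import numpy as np

def planarity_deviation(vertices):
    """Maximum distance of polygon vertices from their best-fit plane
    (same units as the input coordinates)."""
    pts = np.asarray(vertices, dtype=float)
    centred = pts - pts.mean(axis=0)
    # the right singular vector with the smallest singular value is the plane normal
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    normal = vt[-1]
    return float(np.max(np.abs(centred @ normal)))

def is_planar(vertices, tol=0.01):
    """Planarity check with an explicit tolerance (e.g. 1 cm for metric city models)."""
    return planarity_deviation(vertices) <= tol

# A slightly warped quadrilateral: one corner lifted by 5 mm
quad = [(0, 0, 0.0), (10, 0, 0.0), (10, 10, 0.005), (0, 10, 0.0)]
print(is_planar(quad, tol=0.01), round(planarity_deviation(quad), 4))
```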
Status of the Space Radiation Monte Carlo Simulation Based on FLUKA and ROOT
NASA Technical Reports Server (NTRS)
Andersen, Victor; Carminati, Federico; Empl, Anton; Ferrari, Alfredo; Pinsky, Lawrence; Sala, Paola; Wilson, Thomas L.
2002-01-01
The NASA-funded project to develop a Monte Carlo simulation program for simulating the space radiation environment based on the FLUKA and ROOT codes, reported on at the first IWSSRR in Arona, is well into its second year of development, and considerable progress has been made. The general tasks required to achieve the final goals include the addition of heavy-ion interactions into the FLUKA code and the provision of a ROOT-based interface to FLUKA. The most significant progress to date includes the incorporation of the DPMJET event generator code within FLUKA to handle heavy-ion interactions for incident projectile energies greater than 3 GeV/A. The ongoing effort intends to extend the treatment of these interactions down to 10 MeV, and at present two alternative approaches are being explored. The ROOT interface is being pursued in conjunction with the CERN LHC ALICE software team through an adaptation of their existing AliROOT software. As a check on the validity of the code, a simulation of the recent data taken by the ATIC experiment is underway.
A class of all digital phase locked loops - Modelling and analysis.
NASA Technical Reports Server (NTRS)
Reddy, C. P.; Gupta, S. C.
1972-01-01
An all-digital phase locked loop which tracks the phase of the incoming signal once per carrier cycle is proposed. The different elements and their functions, and the phase-lock operation, are explained in detail. The general digital loop operation is governed by a non-linear difference equation from which a suitable model is developed. The lock range for the general model is derived. The performance of the digital loop for phase-step and frequency-step inputs at different levels of quantization, without a loop filter, is studied. The analytical results are checked by simulating the actual system on a digital computer.
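A minimal sketch of the kind of loop described above is given below: a first-order digital loop that corrects its phase estimate once per carrier cycle using a quantized phase detector. The gain, quantization level and phase step are assumptions for illustration; this is not the specific loop analysed in the paper.

```python
import math

def simulate_dpll(phase_step=0.6, gain=0.25, quant_levels=16, n_cycles=60):
    """Track a phase step with a first-order digital loop and a quantized detector."""
    step = 2.0 * math.pi / quant_levels   # quantization step of the phase detector
    estimate = 0.0
    history = []
    for _ in range(n_cycles):
        error = phase_step - estimate
        # quantize the measured error, then apply one proportional correction per cycle
        quantized = step * round(error / step)
        estimate += gain * quantized
        history.append(estimate)
    return history

# the residual offset is bounded by half a quantization step, illustrating
# how the quantization level limits steady-state tracking accuracy
trace = simulate_dpll()
print("final phase estimate:", round(trace[-1], 3), "target:", 0.6)
```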
Modelisation of an unspecialized quadruped walking mammal.
Neveu, P; Villanova, J; Gasc, J P
2001-12-01
Kinematics and structural analyses were used as basic data to elaborate a dynamic quadruped model that may represent an unspecialized mammal. Hedgehogs were filmed on a treadmill with a cinefluorographic system providing trajectories of skeletal elements during locomotion. Body parameters such as limb segment mass and length, and segment centres of mass, were checked from cadavers. These biological parameters were compiled in order to build a virtual quadruped robot. The robot's locomotor behaviour was compared with that of the actual hedgehog to improve the model and to disclose the necessary changes. Apart from use in robotics, the resulting model may be useful to simulate the locomotion of extinct mammals.
Lessons Learned in the Livingstone 2 on Earth Observing One Flight Experiment
NASA Technical Reports Server (NTRS)
Hayden, Sandra C.; Sweet, Adam J.; Shulman, Seth
2005-01-01
The Livingstone 2 (L2) model-based diagnosis software is a reusable diagnostic tool for monitoring complex systems. In 2004, L2 was integrated with the JPL Autonomous Sciencecraft Experiment (ASE) and deployed on-board Goddard's Earth Observing One (EO-1) remote sensing satellite, to monitor and diagnose the EO-1 space science instruments and imaging sequence. This paper reports on lessons learned from this flight experiment. The goals for this experiment, including validation of minimum success criteria and of a series of diagnostic scenarios, have all been successfully met. Long-term operations in space are on-going, as a test of the maturity of the system, with L2 performance remaining flawless. L2 has demonstrated the ability to track the state of the system during nominal operations, detect simulated abnormalities in operations and isolate failures to their root cause fault. Specific advances demonstrated include diagnosis of ambiguity groups rather than a single fault candidate; hypothesis revision given new sensor evidence about the state of the system; and the capability to check for faults in a dynamic system without having to wait until the system is quiescent. The major benefits of this advanced health management technology are to increase mission duration and reliability through intelligent fault protection, and robust autonomous operations with reduced dependency on supervisory operations from Earth. The work-load for operators will be reduced by telemetry of processed state-of-health information rather than raw data. The long-term vision is that of making diagnosis available to the onboard planner or executive, allowing autonomy software to re-plan in order to work around known component failures. For a system that is expected to evolve substantially over its lifetime, as for the International Space Station, the model-based approach has definite advantages over rule-based expert systems and limit-checking fault protection systems, as these do not scale well. The model-based approach facilitates reuse of the L2 diagnostic software; only the model of the system to be diagnosed and the telemetry monitoring software have to be rebuilt for a new system or expanded for a growing system. The hierarchical L2 model supports modularity and expandability, and as such is a suitable solution for integrated system health management as envisioned for systems-of-systems.
NASA Astrophysics Data System (ADS)
Kagawa, T.; Petukhin, A.; Koketsu, K.; Miyake, H.; Murotani, S.; Tsurugi, M.
2010-12-01
A three-dimensional velocity structure model of southwest Japan is provided to simulate long-period ground motions due to hypothetical subduction earthquakes. The model is constructed from numerous physical explorations conducted in land and offshore areas and from observational studies of natural earthquakes. All available information is incorporated to represent the crustal structure and sedimentary structure. Figure 1 shows an example of a cross section with P-wave velocities. The model has been revised through a number of simulations of small to middle-sized earthquakes so as to have good agreement with observed arrival times, amplitudes, and also waveforms including surface waves. Figure 2 shows a comparison between observed (dashed line) and simulated (solid line) waveforms. Low-velocity layers have been added on top of the seismological basement to reproduce observed records; the thickness of the layer has been adjusted through iterative analysis. The final result is found to have good agreement with the results from other physical explorations, e.g. gravity anomaly. We are planning to make long-period (about 2 to 10 sec or longer) simulations of ground motion due to the hypothetical Nankai Earthquake with the 3-D velocity structure model. As a first step, we will simulate the observed ground motions of the latest event, which occurred in 1946, to check the source model and the newly developed velocity structure model. This project is partly supported by the Integrated Research Project for Long-Period Ground Motion Hazard Maps by the Ministry of Education, Culture, Sports, Science and Technology (MEXT). The ground motion data used in this study were provided by the National Research Institute for Earth Science and Disaster Prevention (NIED). Figure 1: an example of a cross section with P-wave velocities. Figure 2: observed (dashed line) and simulated (solid line) waveforms due to a small earthquake.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Model Availability Policy Disclosures, Clauses, and Notices; Model Substitute Check Policy Disclosure and Notices C Appendix C to Part 229 Banks and... OF FUNDS AND COLLECTION OF CHECKS (REGULATION CC) Pt. 229, App. C Appendix C to Part 229—Model...
Mathematical models for space shuttle ground systems
NASA Technical Reports Server (NTRS)
Tory, E. G.
1985-01-01
Math models are a series of algorithms composed of algebraic equations and Boolean logic. At Kennedy Space Center, math models for the Space Shuttle systems are run on the Honeywell 66/80 digital computers, Modcomp II/45 minicomputers and special-purpose hardware simulators (microcomputers). The Shuttle Ground Operations Simulator operating system provides the language formats, subroutines, queueing schemes, execution modes and support software to write, maintain and execute the models. The ground systems presented consist primarily of the Liquid Oxygen and Liquid Hydrogen Cryogenic Propellant Systems, as well as the External Tank Gaseous Oxygen Vent Hood/Arm and the Vehicle Assembly Building (VAB) High Bay Cells. The purpose of math modeling is to simulate the ground hardware systems and to provide an environment for testing in a benign mode. This capability allows the engineers to check out application software for loading and launching the vehicle, and to verify the Checkout, Control, & Monitor Subsystem within the Launch Processing System. It is also used to train operators and to predict system response and status in various configurations (normal operations, emergency and contingency operations), including untried configurations or those too dangerous to try under real conditions, i.e., failure modes.
Dynamic modelling and experimental study of cantilever beam with clearance
NASA Astrophysics Data System (ADS)
Li, B.; Jin, W.; Han, L.; He, Z.
2012-05-01
Clearances occur in almost all mechanical systems, a typical example being the clearance between the slide plate of a gun barrel and its guide. Therefore, studying the clearances of mechanisms is very important for improving the working performance and lifetime of mechanisms. In this paper, rigid dynamic modelling of a cantilever with clearance was carried out for the system under investigation. In the rigid dynamic model, the clearance is represented by an equivalent spring-dashpot model, and the impact between the beam and the boundary face is also taken into consideration. The dynamic simulation was carried out in the ADAMS software according to the model above; the software simulated the movement of the cantilever with clearance under external excitation. The research found that when the clearance is larger, the impact force becomes larger. In order to study how the stiffness of the cantilever's supporting part influences the natural frequency of the system, an Euler beam restrained at its end by a draught spring and a torsion spring was considered. Through numerical calculation, the relationship between natural frequency and stiffness was found, and the boundary condition corresponding to the limiting value of the stiffness is illustrated. An ADAMS experiment was carried out to check the theory and the simulation.
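A minimal sketch of the kind of equivalent spring-dashpot contact law mentioned above is given below. The stiffness and damping values are placeholders and the formulation is a generic Kelvin-Voigt-style contact model, not the specific parameters used in the paper.

```python
def contact_force(x, v, clearance, k=1.0e6, c=50.0):
    """Spring-dashpot contact force for a beam end inside a guide with clearance.

    x : displacement of the beam end from the guide centre [m]
    v : velocity of the beam end [m/s]
    clearance : gap the beam can cross before touching the boundary face [m]
    """
    penetration = abs(x) - clearance
    if penetration <= 0.0:
        return 0.0                      # free flight inside the gap, no contact
    sign = 1.0 if x > 0.0 else -1.0
    pen_rate = sign * v                 # rate of penetration into the boundary
    force = k * penetration + c * pen_rate
    force = max(force, 0.0)             # the boundary can only push, never pull
    return -sign * force                # restoring force opposes the penetration

# example: beam end 0.2 mm beyond a 5 mm clearance, approaching at 0.3 m/s
print(contact_force(x=0.0052, v=0.3, clearance=0.005))
```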
CMM Interim Check Design of Experiments (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montano, Joshua Daniel
2015-07-29
Coordinate Measuring Machines (CMM) are widely used in industry, throughout the Nuclear Weapons Complex and at Los Alamos National Laboratory (LANL) to verify part conformance to design definition. Calibration cycles for CMMs at LANL are predominantly one year in length and include a weekly interim check to reduce risk. The CMM interim check makes use of Renishaw's Machine Checking Gauge, an off-the-shelf product that simulates a large sphere within a CMM's measurement volume and allows for error estimation. As verification of the interim check process, a design of experiments investigation was proposed to test two key factors (location and inspector). The results from the two-factor factorial experiment showed that location influenced the results more than the inspector or the interaction.
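A minimal sketch of how such a two-factor factorial experiment could be analysed with an ANOVA is shown below; the measurement values, factor levels and column names are made up for illustration and do not reproduce the LANL data or analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# hypothetical interim-check errors (micrometres), 2 locations x 2 inspectors, 3 replicates
data = pd.DataFrame({
    "location":  ["front"] * 6 + ["rear"] * 6,
    "inspector": (["A"] * 3 + ["B"] * 3) * 2,
    "error_um":  [1.2, 1.3, 1.1, 1.2, 1.4, 1.3,
                  1.8, 1.9, 2.0, 1.7, 1.9, 1.8],
})

# two-factor factorial model with interaction term
model = smf.ols("error_um ~ C(location) * C(inspector)", data=data).fit()
print(anova_lm(model, typ=2))   # F-tests for location, inspector, and their interaction
```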
Desktop microsimulation: a tool to improve efficiency in the medical office practice.
Montgomery, James B; Linville, Beth A; Slonim, Anthony D
2013-01-01
Because the economic crisis in the United States continues to have an impact on healthcare organizations, industry leaders must optimize their decision making. Discrete-event computer simulation is a quality tool with a demonstrated track record of improving the precision of analysis for process redesign. However, the use of simulation to consolidate practices and design efficiencies into an unfinished medical office building was a unique task. A discrete-event computer simulation package was used to model the operations and forecast future results for four orthopedic surgery practices. The scenarios were created to allow an evaluation of the impact of process change on the output variables of exam room utilization, patient queue size, and staff utilization. The model helped with decisions regarding space allocation and efficient exam room use by demonstrating the impact of process changes in patient queues at check-in/out, x-ray, and cast room locations when compared to the status quo model. The analysis impacted decisions on facility layout, patient flow, and staff functions in this newly consolidated practice. Simulation was found to be a useful tool for process redesign and decision making even prior to building occupancy. © 2011 National Association for Healthcare Quality.
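A minimal discrete-event sketch of a check-in queue like the one discussed above, using the SimPy package; the arrival and service rates, desk capacity, and run length are assumptions chosen only to illustrate how a process change (e.g. adding a second check-in desk) can be evaluated.

```python
import random
import simpy

ARRIVAL_MEAN = 6.0     # minutes between patient arrivals (assumed)
CHECKIN_MEAN = 4.0     # minutes per check-in (assumed)

def patient(env, checkin_desk, waits):
    arrive = env.now
    with checkin_desk.request() as req:          # join the check-in queue
        yield req
        waits.append(env.now - arrive)           # record time spent waiting
        yield env.timeout(random.expovariate(1.0 / CHECKIN_MEAN))

def arrivals(env, checkin_desk, waits):
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(patient(env, checkin_desk, waits))

random.seed(42)
env = simpy.Environment()
desk = simpy.Resource(env, capacity=1)           # try capacity=2 to test a process change
waits = []
env.process(arrivals(env, desk, waits))
env.run(until=8 * 60)                            # one 8-hour clinic day
print(f"patients served: {len(waits)}, mean wait: {sum(waits) / len(waits):.1f} min")
```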
Comparing Simulated and Experimental Data from UCN τ
NASA Astrophysics Data System (ADS)
Howard, Dezrick; Holley, Adam
2017-09-01
The UCN τ experiment is designed to measure the average lifetime of a free neutron (τn) by trapping ultracold neutrons (UCN) in a magneto-gravitational trap and allowing them to β-decay, with the ultimate goal of minimizing the uncertainty to approximately 0.01% (0.1 s). Understanding the systematics of the experiment at the level necessary to reach this high precision may help to better understand the disparity between measurements from cold neutron beam and UCN bottle experiments (τn ≈ 888 s and τn ≈ 878 s, respectively). To assist in evaluating systematics that might conceivably contribute at this level, a neutron spin-tracking Monte Carlo simulation, which models a UCN population's behavior throughout a run, is currently under development. The simulation will utilize an empirical map of the magnetic field in the trap (see poster by K. Hoffman) by interpolating the field between measured points (see poster by J. Felkins) in order to model the depolarization mechanism with high fidelity. As a preliminary step, I have checked that the Monte Carlo model can reasonably reproduce the observed behavior of the experiment. In particular, I will present a comparison between simulated data and data acquired from the 2016-2017 UCN τ run cycle.
NASA Astrophysics Data System (ADS)
Gailler, A.; Loevenbruck, A.; Hebert, H.
2013-12-01
Numerical tsunami propagation and inundation models are well developed and have now reached an impressive level of accuracy, especially in locations such as harbors where the tsunami waves are most amplified. In the framework of tsunami warning under real-time operational conditions, the main obstacle to the routine use of such numerical simulations remains the slowness of the numerical computation, which is compounded when detailed grids are required for the precise modeling of the coastline response of an individual harbor. Thus only tsunami offshore propagation modeling tools using a single sparse bathymetric computation grid are presently included within the French Tsunami Warning Center (CENALT), providing rapid estimation of tsunami warning at the scale of the western Mediterranean and NE Atlantic basins. We present here preliminary work that performs quick estimates of the inundation at individual harbors from these high-sea tsunami forecasting simulations. The method involves an empirical correction based on theoretical amplification laws (either Green's or Synolakis' law). The main limitation is that its application to a given coastal area would require a large database of previous observations in order to define the empirical parameters of the correction equation. As no such data (i.e., historical tide gage records of significant tsunamis) are available for the western Mediterranean and NE Atlantic basins, we use a set of synthetic mareograms, calculated for both hypothetical and well-known historical tsunamigenic earthquakes in the area. This synthetic dataset is obtained through accurate numerical tsunami propagation and inundation modeling using several nested bathymetric grids of increasingly fine resolution close to the shores (down to a grid cell size of 3 m in some Mediterranean harbors). Nonlinear shallow water tsunami modeling performed on a single 2' coarse bathymetric grid is compared to the values given by the time-consuming nested-grid simulations (and observations when available), in order to check to what extent the simple approach based on the amplification laws can explain the data. The idea is to fit tsunami data with numerical modeling carried out without any refined coastal bathymetry/topography. To this end several parameters are discussed, namely the bathymetric depth to which model results must be extrapolated (using Green's law), or the mean bathymetric slope to consider near the studied coast (when using Synolakis' law).
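Green's law, one of the two amplification laws mentioned above, scales the wave amplitude with the fourth root of the depth ratio. A minimal sketch follows; the depths and the offshore amplitude are arbitrary illustrative values, not CENALT parameters.

```python
def greens_law(amplitude_offshore, depth_offshore, depth_target):
    """Extrapolate a tsunami amplitude from deep water to a shallower depth (Green's law).

    Under linear shoaling assumptions the amplitude scales as depth**(-1/4).
    """
    return amplitude_offshore * (depth_offshore / depth_target) ** 0.25

# hypothetical values: 0.15 m simulated on the coarse grid at 100 m depth,
# extrapolated to a 5 m reference depth near a harbour entrance
print(f"{greens_law(0.15, 100.0, 5.0):.2f} m")
```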
Featured Image: The Simulated Collapse of a Core
NASA Astrophysics Data System (ADS)
Kohler, Susanna
2016-11-01
This stunning snapshot (click for a closer look!) is from a simulation of a core-collapse supernova. Despite having been studied for many decades, the mechanism driving the explosions of core-collapse supernovae is still an area of active research. Extremely complex simulations such as this one represent best efforts to include as many realistic physical processes as is currently computationally feasible. In this study led by Luke Roberts (a NASA Einstein Postdoctoral Fellow at Caltech at the time), a core-collapse supernova is modeled long-term in fully 3D simulations that include the effects of general relativity, radiation hydrodynamics, and even neutrino physics. The authors use these simulations to examine the evolution of a supernova after its core bounce. To read more about the team's findings (and see more awesome images from their simulations), check out the paper below! Citation: Luke F. Roberts et al. 2016, ApJ, 831, 98. doi:10.3847/0004-637X/831/1/98
Development of a numerical methodology for flowforming process simulation of complex geometry tubes
NASA Astrophysics Data System (ADS)
Varela, Sonia; Santos, Maite; Arroyo, Amaia; Pérez, Iñaki; Puigjaner, Joan Francesc; Puigjaner, Blanca
2017-10-01
Nowadays, the incremental flowforming process is widely explored because the usage of complex tubular products is increasing due to the light-weighting trend and the use of expensive materials. The enhanced mechanical properties of finished parts, combined with the process efficiency in terms of raw material and energy consumption, are the key factors for its competitiveness and sustainability, which is consistent with EU industry policy. As a promising technology, additional steps for extending the existing flowforming limits in the production of tubular products are required. The objective of the present research is to further expand the current state of the art regarding limitations on tube thickness and diameter, exploring the feasibility of flowforming complex geometries such as tubes with wall thicknesses of up to 60 mm. In this study, the analysis of the backward flowforming process of a 7075 aluminum tubular preform is carried out as a demonstration case to define the optimum process parameters, machine requirements and tooling geometry. Numerical simulation studies on flowforming of thin-walled tubular components have been considered to increase the knowledge of the technology. The calculation of the rotational movement of the preform mesh, the high thickness-to-length ratio and the thermomechanical conditions significantly increase the computation time of the numerical simulation model. This means that efficient and reliable tools able to predict the forming loads and the quality of flowformed thick tubes are not available. This paper aims to overcome this situation by developing a simulation methodology based on an FEM simulation code and including new strategies. Material characterization has also been performed through tensile tests in order to be able to design the process. Finally, to check the reliability of the model, flowforming tests in an industrial environment have been carried out.
14 CFR 125.297 - Approval of flight simulators and flight training devices.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., testing, and checking required by this subpart. (b) Each flight simulator and flight training device that... 14 Aeronautics and Space 3 2013-01-01 2013-01-01 false Approval of flight simulators and flight... Flight Crewmember Requirements § 125.297 Approval of flight simulators and flight training devices. (a...
14 CFR 125.297 - Approval of flight simulators and flight training devices.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., testing, and checking required by this subpart. (b) Each flight simulator and flight training device that... 14 Aeronautics and Space 3 2011-01-01 2011-01-01 false Approval of flight simulators and flight... Flight Crewmember Requirements § 125.297 Approval of flight simulators and flight training devices. (a...
14 CFR 125.297 - Approval of flight simulators and flight training devices.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., testing, and checking required by this subpart. (b) Each flight simulator and flight training device that... 14 Aeronautics and Space 3 2014-01-01 2014-01-01 false Approval of flight simulators and flight... Flight Crewmember Requirements § 125.297 Approval of flight simulators and flight training devices. (a...
14 CFR 125.297 - Approval of flight simulators and flight training devices.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., testing, and checking required by this subpart. (b) Each flight simulator and flight training device that... 14 Aeronautics and Space 3 2010-01-01 2010-01-01 false Approval of flight simulators and flight... Flight Crewmember Requirements § 125.297 Approval of flight simulators and flight training devices. (a...
14 CFR 125.297 - Approval of flight simulators and flight training devices.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., testing, and checking required by this subpart. (b) Each flight simulator and flight training device that... 14 Aeronautics and Space 3 2012-01-01 2012-01-01 false Approval of flight simulators and flight... Flight Crewmember Requirements § 125.297 Approval of flight simulators and flight training devices. (a...
NASA Astrophysics Data System (ADS)
Hansen, Akio; Ament, Felix; Lammert, Andrea
2017-04-01
Large-eddy simulations have been performed for several decades, but due to computational limits most studies were restricted to small domains or idealised initial and boundary conditions. Within the High Definition Clouds and Precipitation for Advancing Climate Prediction (HD(CP)2) project, realistic, weather-forecast-like LES simulations were performed with the newly developed ICON LES model for several days. The domain covers central Europe with a horizontal resolution down to 156 m. The setup consists of more than 3 billion grid cells, whereby one 3D dump requires roughly 500 GB. A newly developed online evaluation toolbox was created to check instantaneously for realistic model simulations. The toolbox automatically combines model results with observations and generates several quicklooks for various variables. So far, temperature and humidity profiles, cloud cover, integrated water vapour, precipitation and many more are included. All kinds of observations, such as aircraft observations, soundings or precipitation radar networks, are used. For each dataset, a specific module is created, which allows for easy handling and enhancement of the toolbox. Most of the observations are automatically downloaded from the Standardized Atmospheric Measurement Database (SAMD). The evaluation tool should support scientists in monitoring computationally costly model simulations as well as give a first overview of the model's performance. The structure of the toolbox as well as the SAMD database are presented. Furthermore, the toolbox was applied to an ICON LES sensitivity study, for which example results are shown.
Computational technique for stepwise quantitative assessment of equation correctness
NASA Astrophysics Data System (ADS)
Othman, Nuru'l Izzah; Bakar, Zainab Abu
2017-04-01
Many of the computer-aided mathematics assessment systems that are available today possess the capability to implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the multiset framework, adapts certain techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions on solving linear algebraic equations in one variable. 350 working schemes comprising 1385 responses were collected using a marking engine prototype, which was developed based on the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
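A minimal sketch of the multiset idea described above: each equation step is tokenized and a Dice-style similarity between token multisets yields a quantitative score. The tokenizer, reference step and similarity choice are illustrative assumptions, not the published SCCS implementation.

```python
from collections import Counter
import re

def tokenize(equation):
    """Split an equation string into number, variable and operator tokens."""
    return re.findall(r"\d+\.?\d*|[A-Za-z]+|[=+\-*/()]", equation.replace(" ", ""))

def multiset_similarity(response, reference):
    """Dice-style similarity between the token multisets of two equations."""
    a, b = Counter(tokenize(response)), Counter(tokenize(reference))
    overlap = sum((a & b).values())
    return 2.0 * overlap / (sum(a.values()) + sum(b.values()))

# one step of a worked solution of 2x + 3 = 9, scored against a reference step
reference_step = "2x = 6"
print(multiset_similarity("2x = 6", reference_step))     # 1.0, structurally identical
print(multiset_similarity("x = 6 - 3", reference_step))  # partial credit
```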
NASA Astrophysics Data System (ADS)
Degaudenzi, Riccardo; Vanghi, Vieri
1994-02-01
An all-digital Trellis-Coded 8PSK (TC-8PSK) demodulator well suited for VLSI implementation, including maximum likelihood estimation decision-directed (MLE-DD) carrier phase and clock timing recovery, is introduced and analyzed. By simply removing the trellis decoder the demodulator can efficiently cope with uncoded 8PSK signals. The proposed MLE-DD synchronization algorithm requires one sample for the phase loop and two samples per symbol for the timing loop. The joint phase and timing discriminator characteristics are analytically derived and the numerical results checked by means of computer simulations. An approximate expression for the steady-state carrier phase and clock timing mean square error has been derived and successfully checked against simulation findings. The synchronizer's deviation from the Cramer-Rao bound is also discussed. The mean acquisition time for the digital synchronizer has also been computed and checked using the Monte Carlo simulation technique. Finally, TC-8PSK digital demodulator performance in terms of bit error rate and mean time to lose lock, including digital interpolators and synchronization loops, is presented.
Effect of noise on defect chaos in a reaction-diffusion model.
Wang, Hongli; Ouyang, Qi
2005-06-01
The influence of noise on defect chaos due to the breakup of spiral waves through Doppler and Eckhaus instabilities is investigated numerically with a modified FitzHugh-Nagumo model. By numerical simulations we show that the noise can drastically enhance the creation and annihilation rates of topological defects. The noise-free probability distribution function for defects in this model is found not to fit the previously reported squared-Poisson distribution. Under the influence of noise, the distributions are flattened, and can fit the squared-Poisson or the modified-Poisson distribution. The defect lifetime and the diffusive properties of defects under the influence of noise are also checked in this model.
Development of a thermodynamic model for a cold cycle 3He-4He dilution refrigerator
NASA Astrophysics Data System (ADS)
Mueller, B. W.; Miller, F. K.
2016-10-01
A thermodynamic model of a 3He-4He cold cycle dilution refrigerator with no actively-driven mechanical components is developed and investigated. The refrigerator employs a reversible superfluid magnetic pump, passive check valves, a phase separation chamber, and a series of recuperative heat exchangers to continuously circulate 3He-4He and maintain a 3He concentration gradient across the mixing chamber. The model predicts cooling power and mixing chamber temperature for a range of design and operating parameters, allowing an evaluation of feasibility for potential 3He-4He cold cycle dilution refrigerator prototype designs. Model simulations for a prototype refrigerator design are presented.
Shakir'yanova, Yu P; Leonov, S V; Pinchuk, P V; Sukhareva, M A
This article shares the experience gained with three-dimensional modeling for the purpose of situational expertise intended to reconstruct the circumstances of an occurrence and to check alternative investigative leads concerning the formation of potential injuries to a specific person. Simulation was performed with the use of a dimensionally scaled model of the place of occurrence as well as models of the human head and body fully consistent with the anthropometric characteristics of the victim. The results of this work made it possible to reject several potential scenarios for the formation of injuries to the victim and to identify the most probable version.
NASA Astrophysics Data System (ADS)
Lang, S. E.; Tao, W. K.; Iguchi, T.
2017-12-01
The Goddard Convective-Stratiform Heating (or CSH) algorithm has been used to estimate cloud heating over the global Tropics using TRMM rainfall data and a set of look-up-tables (LUTs) derived from a series of multi-week cloud-resolving model (CRM) simulations using the Goddard Cumulus Ensemble model (GCE). These simulations link satellite observables (i.e., surface rainfall and stratiform fraction) with cloud heating profiles, which are not directly observable. However, with the launch of GPM in 2014, the range over which such algorithms can be applied has been extended from the Tropics into higher latitudes, including cold season and synoptic weather systems. In response, the CSH algorithm and its LUTs have been revised both to improve the retrievals in the Tropics as well as expand retrievals to higher latitudes. For the Tropics, the GCE simulations used to build the LUTs were upgraded using larger 2D model domains (512 vs 256 km) and a new, improved Goddard 4-ice scheme as well as expanded with additional cases (4 land and 6 ocean in total). The new tropical LUTs are also re-built using additional metrics. Besides surface type, conditional rain intensity and stratiform fraction, the new LUTs incorporate echo top heights and low-level (0-2 km) vertical reflectivity gradients. CSH retrievals in the Tropics based on the new LUTs show significant differences from previous iterations using TRMM data or the old LUT metrics. For the Extra-tropics, 6 NU-WRF simulations of synoptic events (3 East Coast and 3 West Coast), including snow, were used to build new extra-tropical CSH LUTs. The LUT metrics for the extra-tropics are based on radar characteristics and freezing level height. The extra-tropical retrievals are evaluated with a self-consistency check approach using the model heating as `truth,' and freezing level height is used to transition CSH retrievals from the Tropics to Extra-tropics. Retrieved zonal average heating structures in the Extra-tropics are presented and show distinct differences from those in the Tropics.
NASA Technical Reports Server (NTRS)
Morelli, E. A.; Proffitt, M. S.
1999-01-01
The data for longitudinal non-dimensional aerodynamic coefficients in the High Speed Research Cycle 2B aerodynamic database were modeled using polynomial expressions identified with an orthogonal function modeling technique. The discrepancy between the tabular aerodynamic data and the polynomial models was tested and shown to be less than 15 percent for drag, lift, and pitching moment coefficients over the entire flight envelope. Most of this discrepancy was traced to smoothing local measurement noise and to the omission of mass case 5 data in the modeling process. A simulation check case showed that the polynomial models provided a compact and accurate representation of the nonlinear aerodynamic dependencies contained in the HSR Cycle 2B tabular aerodynamic database.
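The general idea of replacing a tabular database with a fitted polynomial and then checking the discrepancy can be sketched as follows; the basis, the stand-in table and the check are illustrative assumptions, not the HSR Cycle 2B data or the orthogonal-function technique used in the report.

```python
import numpy as np

# hypothetical tabular lift-coefficient data on an (alpha, Mach) grid
alpha = np.linspace(-4, 12, 9)            # angle of attack, deg
mach = np.linspace(0.3, 0.9, 7)
A, M = np.meshgrid(alpha, mach, indexing="ij")
CL_table = 0.1 * A + 0.02 * A * M + 0.05 * M**2 + 0.01   # stand-in for database values

# low-order polynomial model identified by linear least squares
X = np.column_stack([np.ones(A.size), A.ravel(), M.ravel(),
                     (A * M).ravel(), (M**2).ravel()])
coeffs, *_ = np.linalg.lstsq(X, CL_table.ravel(), rcond=None)
CL_model = (X @ coeffs).reshape(CL_table.shape)

# discrepancy between the tabular data and the polynomial representation
rel_err = np.abs(CL_model - CL_table) / np.maximum(np.abs(CL_table), 1e-6)
print(f"max relative discrepancy: {100 * rel_err.max():.2f}%")
# (essentially zero here because the stand-in table is itself polynomial in the basis)
```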
Rodríguez-Álvarez, María Xosé; Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Tahoces, Pablo G
2018-03-01
Prior to using a diagnostic test in a routine clinical setting, the rigorous evaluation of its diagnostic accuracy is essential. The receiver-operating characteristic curve is the measure of accuracy most widely used for continuous diagnostic tests. However, the possible impact of extra information about the patient (or even the environment) on diagnostic accuracy also needs to be assessed. In this paper, we focus on an estimator for the covariate-specific receiver-operating characteristic curve based on direct regression modelling and nonparametric smoothing techniques. This approach defines the class of generalised additive models for the receiver-operating characteristic curve. The main aim of the paper is to offer new inferential procedures for testing the effect of covariates on the conditional receiver-operating characteristic curve within the above-mentioned class. Specifically, two different bootstrap-based tests are suggested to check (a) the possible effect of continuous covariates on the receiver-operating characteristic curve and (b) the presence of factor-by-curve interaction terms. The validity of the proposed bootstrap-based procedures is supported by simulations. To facilitate the application of these new procedures in practice, an R-package, known as npROCRegression, is provided and briefly described. Finally, data derived from a computer-aided diagnostic system for the automatic detection of tumour masses in breast cancer is analysed.
Building and evaluating an ontology-based tool for reasoning about consent permission
Grando, Adela; Schwab, Richard
2013-01-01
Given the lack of mechanisms for specifying, sharing and checking compliance with consent permissions, we focus on building and testing novel approaches to address this gap. In our previous work, we introduced a "permission ontology" to capture informed consent permissions in research studies in a precise, machine-interpretable form. Here we explain how we built and evaluated a framework for specifying subjects' permissions and checking researchers' resource requests for compliance with those permissions. The framework is proposed as an extension of an existing policy engine based on the eXtensible Access Control Markup Language (XACML), incorporating ontology-based reasoning. The framework is evaluated in the context of the UCSD Moores Cancer Center biorepository, modeling permissions from an informed consent form and a HIPAA form. The resulting permission ontology and mechanisms to check subjects' permissions are implementation and institution independent, and therefore offer the potential to be reusable in other biorepositories and data warehouses. PMID:24551354
The influence of social anxiety on the body checking behaviors of female college students.
White, Emily K; Warren, Cortney S
2014-09-01
Social anxiety and eating pathology frequently co-occur. However, there is limited research examining the relationship between anxiety and body checking, aside from one study in which social physique anxiety partially mediated the relationship between body checking cognitions and body checking behavior (Haase, Mountford, & Waller, 2007). In an independent sample of 567 college women, we tested the fit of Haase and colleagues' foundational model but did not find evidence of mediation. Thus we tested the fit of an expanded path model that included eating pathology and clinical impairment. In the best-fitting path model (CFI=.991; RMSEA=.083) eating pathology and social physique anxiety positively predicted body checking, and body checking positively predicted clinical impairment. Therefore, women who endorse social physique anxiety may be more likely to engage in body checking behaviors and experience impaired psychosocial functioning. Published by Elsevier Ltd.
Simulations of inspiraling and merging double neutron stars using the Spectral Einstein Code
NASA Astrophysics Data System (ADS)
Haas, Roland; Ott, Christian D.; Szilagyi, Bela; Kaplan, Jeffrey D.; Lippuner, Jonas; Scheel, Mark A.; Barkett, Kevin; Muhlberger, Curran D.; Dietrich, Tim; Duez, Matthew D.; Foucart, Francois; Pfeiffer, Harald P.; Kidder, Lawrence E.; Teukolsky, Saul A.
2016-06-01
We present results on the inspiral, merger, and postmerger evolution of a neutron star-neutron star (NSNS) system. Our results are obtained using the hybrid pseudospectral-finite volume Spectral Einstein Code (SpEC). To test our numerical methods, we evolve an equal-mass system for ≈22 orbits before merger. This waveform is the longest obtained from fully general-relativistic simulations of NSNS systems to date. Such long (and accurate) numerical waveforms are required to further improve semianalytical models used in gravitational wave data analysis, for example the effective-one-body models. We discuss in detail the improvements to SpEC's ability to simulate NSNS mergers, in particular mesh-refined grids to better resolve the merger and postmerger phases. We provide a set of consistency checks and compare our results to NSNS merger simulations with the independent BAM code. We find agreement between them, which increases confidence in results obtained with either code. This work paves the way for future studies using long waveforms and more complex microphysical descriptions of neutron star matter in SpEC.
NASA Astrophysics Data System (ADS)
Tanikawa, Ataru; Sato, Yushi; Nomoto, Ken'ichi; Maeda, Keiichi; Nakasato, Naohito; Hachisu, Izumi
2017-04-01
We investigate nucleosynthesis in tidal disruption events (TDEs) of white dwarfs (WDs) by intermediate-mass black holes. We consider various types of WDs with different masses and compositions by means of three-dimensional (3D) smoothed particle hydrodynamics (SPH) simulations. We model these WDs with different numbers of SPH particles, N, from a few 10^4 to a few 10^7 in order to check mass resolution convergence, where SPH simulations with N > 10^7 (or a space resolution of several 10^6 cm) have unprecedentedly high resolution in this kind of simulation. We find that nuclear reactions become less active with increasing N and that these nuclear reactions are excited by spurious heating due to low resolution. Moreover, we find no shock wave generation. In order to investigate the reason for the absence of a shock wave, we additionally perform one-dimensional (1D) SPH and mesh-based simulations with a space resolution ranging from 10^4 to 10^7 cm, using a characteristic flow structure extracted from the 3D SPH simulations. We find shock waves in these 1D high-resolution simulations, one of which triggers a detonation wave. However, we must be careful of the fact that, if the shock wave emerged in an outer region, it could not trigger the detonation wave due to low density. Note that the 1D initial conditions lack the accuracy to precisely determine where a shock wave emerges. We need to perform 3D simulations with ≲10^6 cm space resolution in order to conclude that WD TDEs become optical transients powered by radioactive nuclei.
Anandakrishnan, Ramu; Aguilar, Boris; Onufriev, Alexey V
2012-07-01
The accuracy of atomistic biomolecular modeling and simulation studies depends on the accuracy of the input structures. Preparing these structures for an atomistic modeling task, such as a molecular dynamics (MD) simulation, can involve the use of a variety of different tools for: correcting errors, adding missing atoms, filling valences with hydrogens, predicting pK values for titratable amino acids, assigning predefined partial charges and radii to all atoms, and generating force field parameter/topology files for MD. Identifying, installing and effectively using the appropriate tools for each of these tasks can be difficult for novice users and time-consuming for experienced ones. H++ (http://biophysics.cs.vt.edu/) is a free, open-source web server that automates the above key steps in the preparation of biomolecular structures for molecular modeling and simulations. H++ also performs extensive error and consistency checking, providing error/warning messages together with suggested corrections. In addition to numerous minor improvements, the latest version of H++ includes several new capabilities and options: fixing erroneous (flipped) side chain conformations for HIS, GLN and ASN, including a ligand in the input structure, processing nucleic acid structures, and generating a solvent box with a specified number of common ions for explicit solvent MD.
2010-01-01
Background Quantitative models of biochemical and cellular systems are used to answer a variety of questions in the biological sciences. The number of published quantitative models is growing steadily thanks to increasing interest in the use of models as well as the development of improved software systems and the availability of better, cheaper computer hardware. To maximise the benefits of this growing body of models, the field needs centralised model repositories that will encourage, facilitate and promote model dissemination and reuse. Ideally, the models stored in these repositories should be extensively tested and encoded in community-supported and standardised formats. In addition, the models and their components should be cross-referenced with other resources in order to allow their unambiguous identification. Description BioModels Database http://www.ebi.ac.uk/biomodels/ is aimed at addressing exactly these needs. It is a freely-accessible online resource for storing, viewing, retrieving, and analysing published, peer-reviewed quantitative models of biochemical and cellular systems. The structure and behaviour of each simulation model distributed by BioModels Database are thoroughly checked; in addition, model elements are annotated with terms from controlled vocabularies as well as linked to relevant data resources. Models can be examined online or downloaded in various formats. Reaction network diagrams generated from the models are also available in several formats. BioModels Database also provides features such as online simulation and the extraction of components from large scale models into smaller submodels. Finally, the system provides a range of web services that external software systems can use to access up-to-date data from the database. Conclusions BioModels Database has become a recognised reference resource for systems biology. It is being used by the community in a variety of ways; for example, it is used to benchmark different simulation systems, and to study the clustering of models based upon their annotations. Model deposition to the database today is advised by several publishers of scientific journals. The models in BioModels Database are freely distributed and reusable; the underlying software infrastructure is also available from SourceForge https://sourceforge.net/projects/biomodels/ under the GNU General Public License. PMID:20587024
Haslinger, Robert; Pipa, Gordon; Brown, Emery
2010-10-01
One approach for understanding the encoding of information by spike trains is to fit statistical models and then test their goodness of fit. The time-rescaling theorem provides a goodness-of-fit test consistent with the point process nature of spike trains. The interspike intervals (ISIs) are rescaled (as a function of the model's spike probability) to be independent and exponentially distributed if the model is accurate. A Kolmogorov-Smirnov (KS) test between the rescaled ISIs and the exponential distribution is then used to check goodness of fit. This rescaling relies on assumptions of continuously defined time and instantaneous events. However, spikes have finite width, and statistical models of spike trains almost always discretize time into bins. Here we demonstrate that finite temporal resolution of discrete time models prevents their rescaled ISIs from being exponentially distributed. Poor goodness of fit may be erroneously indicated even if the model is exactly correct. We present two adaptations of the time-rescaling theorem to discrete time models. In the first we propose that instead of assuming the rescaled times to be exponential, the reference distribution be estimated through direct simulation by the fitted model. In the second, we prove a discrete time version of the time-rescaling theorem that analytically corrects for the effects of finite resolution. This allows us to define a rescaled time that is exponentially distributed, even at arbitrary temporal discretizations. We demonstrate the efficacy of both techniques by fitting generalized linear models to both simulated spike trains and spike trains recorded experimentally in monkey V1 cortex. Both techniques give nearly identical results, reducing the false-positive rate of the KS test and greatly increasing the reliability of model evaluation based on the time-rescaling theorem.
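A minimal sketch of the first adaptation described above: instead of assuming exponentially distributed rescaled times, the reference distribution is estimated by direct simulation from the fitted model, and a two-sample KS test is used. The binned Bernoulli model and the rate values are toy assumptions, not the models fit in the paper.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

def rescaled_isis(spikes, p):
    """Rescale interspike intervals by the cumulative spike probability of a binned model."""
    q = -np.log(1.0 - p)                  # per-bin integrated intensity
    cum = np.cumsum(q)
    spike_bins = np.flatnonzero(spikes)
    return np.diff(cum[spike_bins])       # rescaled ISIs

# a discrete-time (binned) inhomogeneous Bernoulli model, used here both as the
# "fitted" model and as the generator of the toy "recorded" spike train
n_bins = 50_000
p = 0.02 * (1.0 + 0.5 * np.sin(2 * np.pi * np.arange(n_bins) / 1000.0))

observed = rng.random(n_bins) < p         # stand-in for the recorded spike train
simulated = rng.random(n_bins) < p        # reference train simulated from the model

# compare rescaled ISIs of the data against the model-simulated reference distribution
stat, pval = ks_2samp(rescaled_isis(observed, p), rescaled_isis(simulated, p))
print(f"KS statistic = {stat:.3f}, p-value = {pval:.3f}")
```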
Vertical eddy diffusivity as a control parameter in the tropical Pacific
NASA Astrophysics Data System (ADS)
Martinez Avellaneda, N.; Cornuelle, B.
2011-12-01
Ocean models suffer from errors in the treatment of turbulent sub-grid-scale motions responsible for mixing and energy dissipation. Unrealistic small-scale physics in models can have large-scale consequences, such as biases in the upper ocean temperature, a symptom of poorly simulated upwelling, currents and air-sea interactions. This is of special importance in the tropical Pacific Ocean (TP), which is home to energetic air-sea interactions that affect global climate. It has been shown in a number of studies that the simulated ENSO variability is highly dependent on the state of the ocean (e.g., background mixing). Moreover, the magnitude of the vertical numerical diffusion is of primary importance in properly reproducing the Pacific equatorial thermocline. This work is part of a NASA-funded project to estimate the space- and time-varying ocean mixing coefficients in an eddy-permitting (1/3°) model of the TP to obtain an improved estimate of its time-varying circulation and its underlying dynamics. While an estimation procedure for the TP (26°S-30°N) is underway using the MIT general circulation model, complementary adjoint-based sensitivity studies have been carried out for the starting ocean state from Forget (2010). This analysis aids the interpretation of the estimated mixing coefficients and possible error compensation. The focus of the sensitivity tests is the Equatorial Undercurrent and the sub-thermocline jets (i.e., the Tsuchiya Jets), which are thought to depend strongly on vertical diffusivity and should provide checks on the estimated mixing parameters. In order to build intuition for the vertical diffusivity adjoint results in the TP, adjoint and forward perturbed simulations were carried out for an idealized sharp thermocline in a rectangular domain.
Askar, Medhat; Sobecks, Ronald; Morishima, Yasuo; Kawase, Takakazu; Nowacki, Amy; Makishima, Hideki; Maciejewski, Jaroslaw
2011-09-01
HLA polymorphism remains a major hurdle for hematopoietic stem cell transplantation (HSCT). In 2004, Elsner et al. proposed the HistoCheck Web-based tool to estimate the allogeneic potential between HLA-mismatched stem cell donor/recipient pairs expressed as a sequence similarity matching (SSM). SSM is based on the structure of HLA molecules and the functional similarity of amino acids. According to this algorithm, a high SSM score represents high dissimilarity between MHC molecules, resulting in a potentially more deleterious impact on stem cell transplant outcomes. We investigated the potential of SSM to predict high-risk HLA allele mismatch combinations responsible for severe acute graft-versus-host disease (aGVHD grades III and IV) published by Kawase et al., by comparing SSM in low- and high-risk combinations. SSM was calculated for allele mismatch combinations using the HistoCheck tool available on the Web (www.histocheck.org). We compared ranges and means of SSM among high-risk (15 combinations observed in 722 donor/recipient pairs) versus low-risk allele combinations (94 combinations in 3490 pairs). Simulation scenarios were created where the recipient's HLA allele was involved in multiple allele mismatch combinations with at least 1 high-risk and 1 low-risk mismatch combination. SSM values were then compared. The mean SSM for high- versus low-risk combinations were 2.39 and 2.90 at A, 1.06 and 2.53 at B, 16.60 and 14.99 at C, 4.02 and 3.81 at DRB1, and 7.47 and 6.94 at DPB1 loci, respectively. In simulation scenarios, no predictable SSM association with high- or low-risk combinations could be distinguished. No DQB1 combinations met the statistical criteria for our study. In conclusion, our analysis demonstrates that mean SSM scores were not significantly different, and SSM distributions were overlapping among high- and low-risk allele combinations within loci HLA-A, B, C, DRB1, and DPB1. This analysis does not support selecting donors for HSCT recipients based on low HistoCheck SSM scores. Copyright © 2011 American Society for Blood and Marrow Transplantation. Published by Elsevier Inc. All rights reserved.
A New Frequency-Domain Method for Bunch Length Measurement
NASA Astrophysics Data System (ADS)
Ferianis, M.; Pros, M.
1997-05-01
A new method for bunch length measurements has been developed at Elettra. It is based on a spectral observation of the synchrotron radiation light pulses. The single pulse spectrum is shaped by means of an optical process which gives the method an increased sensitivity compared to the usual spectral observations. Some simulations have been carried out to check the method in non-ideal conditions. The results of the first measurements are also presented.
NASA Astrophysics Data System (ADS)
Larabi, Mohamed Aziz; Mutschler, Dimitri; Mojtabi, Abdelkader
2016-06-01
Our present work focuses on the coupling between thermal diffusion and convection in order to improve the thermogravitational separation of mixture components. The separation phenomenon was studied in a porous medium contained in vertical columns. We performed analytical and numerical simulations to corroborate the experimental measurements of the thermal diffusion coefficients of the ternary mixture n-dodecane, isobutylbenzene, and tetralin obtained in microgravity on the International Space Station. Our approach corroborates the existing data published in the literature. The authors show that it is possible to quantify and to optimize the species separation for ternary mixtures. The authors checked, for ternary mixtures, the validity of the "forgotten effect" hypothesis established for binary mixtures by Furry, Jones, and Onsager. Two complete and different analytical resolution methods were used in order to describe the separation in terms of the Lewis numbers, the separation ratios, the cross-diffusion coefficients, and the Rayleigh number. The analytical model is based on the parallel flow approximation. In order to validate this model, a numerical simulation was performed using the finite element method. From our new approach to vertical separation columns, new relations for the mass fraction gradients and the optimal Rayleigh number for each component of the ternary mixture were obtained.
Code of Federal Regulations, 2012 CFR
2012-01-01
... in different states or check processing regions)]. If you make the deposit in person to one of our... processing regions)]. If you make the deposit in person to one of our employees, funds from the following... Your Rights What Is a Substitute Check? To make check processing faster, federal law permits banks to...
Code of Federal Regulations, 2011 CFR
2011-01-01
... in different states or check processing regions)]. If you make the deposit in person to one of our... processing regions)]. If you make the deposit in person to one of our employees, funds from the following... Your Rights What Is a Substitute Check? To make check processing faster, federal law permits banks to...
Code of Federal Regulations, 2013 CFR
2013-01-01
... in different states or check processing regions)]. If you make the deposit in person to one of our... processing regions)]. If you make the deposit in person to one of our employees, funds from the following... Your Rights What Is a Substitute Check? To make check processing faster, federal law permits banks to...
Coupled simulation of the propulsion system and vehicle using the ESPSS satellite library
NASA Astrophysics Data System (ADS)
Koppel, C. R.; Di Matteo, F.; Moral, J.; Steelant, J.
2018-06-01
The paper documents the implementation and validation of the coupled simulation of the propulsion system and vehicle performed during the 4th development phase of the ESPSS (European Space Propulsion System Simulation) library running on the existing platform EcosimPro®. This covers a significant update of the spacecraft propulsion system modeling: the fluid flow, tank and combustion chamber components are updated to allow coupling to the vehicle's motion, the Archimedes pressure arising from accelerations and rotations imposed by the vehicle or by any perturbation forces is taken into account, and several new features are added to the Satellite library along with new components enabling full attitude control of a platform. A new, powerful compact equation is presented for elegantly solving the Archimedes pressure arising from combined acceleration and rotation in the most general (non-collinear) case. Finally, a propulsion system is modeled to check the correct implementation of the new components, especially those dealing with the effects of the mission on the propulsion subsystem.
Statistical analysis and modeling of intermittent transport events in the tokamak scrape-off layer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Johan, E-mail: anderson.johan@gmail.com; Halpern, Federico D.; Ricci, Paolo
The turbulence observed in the scrape-off layer of a tokamak is often characterized by intermittent events of a bursty nature, a feature which raises concerns about the prediction of heat loads on the physical boundaries of the device. It appears thus necessary to delve into the statistical properties of turbulent physical fields such as density, electrostatic potential, and temperature, focusing on the mathematical expression of the tails of the probability distribution functions. The method followed here is to generate statistical information from time traces of the plasma density stemming from Braginskii-type fluid simulations and check this against a first-principles theoretical model. The analysis of the numerical simulations indicates that the probability distribution function of the intermittent process contains strong exponential tails, as predicted by the analytical theory.
Efficient Algorithms for Handling Nondeterministic Automata
NASA Astrophysics Data System (ADS)
Vojnar, Tomáš
Finite (word, tree, or omega) automata play an important role in different areas of computer science, including, for instance, formal verification. Often, deterministic automata are used for which traditional algorithms for important operations such as minimisation and inclusion checking are available. However, the use of deterministic automata implies a need to determinise nondeterministic automata that often arise during various computations even when the computations start with deterministic automata. Unfortunately, determinisation is a very expensive step since deterministic automata may be exponentially bigger than the original nondeterministic automata. That is why it appears advantageous to avoid determinisation and work directly with nondeterministic automata. This, however, brings a need to be able to implement operations traditionally done on deterministic automata on nondeterministic automata instead. In particular, this is the case of inclusion checking and minimisation (or rather reduction of the size of automata). In the talk, we review several recently proposed techniques for inclusion checking on nondeterministic finite word and tree automata as well as Büchi automata. These techniques are based on using the so called antichains, possibly combined with a use of suitable simulation relations (and, in the case of Büchi automata, the so called Ramsey-based or rank-based approaches). Further, we discuss techniques for reducing the size of nondeterministic word and tree automata using quotienting based on the recently proposed notion of mediated equivalences. The talk is based on several common works with Parosh Aziz Abdulla, Ahmed Bouajjani, Yu-Fang Chen, Peter Habermehl, Lisa Kaati, Richard Mayr, Tayssir Touili, Lorenzo Clemente, Lukáš Holík, and Chih-Duo Hong.
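As an illustration of the antichain idea for inclusion checking mentioned above, the sketch below decides L(A) ⊆ L(B) for finite word NFAs by exploring pairs of an A-state and a set of B-states while keeping only ⊆-minimal B-sets. The automaton encoding and the toy example are assumptions for illustration, not the cited tools.

```python
from collections import deque

def nfa_included(A, B):
    """Antichain-based check of L(A) ⊆ L(B) for NFAs, without determinising B.

    Each automaton is a dict: {"init": set, "final": set, "delta": {(state, symbol): set}}.
    A macro-state is a pair (p, S): p is an A-state, S the set of B-states reachable
    over the same word. (p, S) subsumes (p, S') whenever S ⊆ S', so only ⊆-minimal
    B-sets need to be kept (the antichain).
    """
    def post(states, symbol, delta):
        out = set()
        for q in states:
            out |= delta.get((q, symbol), set())
        return out

    symbols = {s for (_, s) in list(A["delta"]) + list(B["delta"])}
    antichain = []                       # ⊆-minimal pairs seen so far
    work = deque()

    def add(pair):
        p, S = pair
        for (q, T) in antichain:
            if q == p and T <= S:
                return                   # subsumed by an existing smaller pair
        antichain[:] = [(q, T) for (q, T) in antichain if not (q == p and S <= T)]
        antichain.append(pair)
        work.append(pair)

    for p0 in A["init"]:
        add((p0, frozenset(B["init"])))

    while work:
        p, S = work.popleft()
        if p in A["final"] and not (S & B["final"]):
            return False                 # a word accepted by A but by no run of B
        for a in symbols:
            for p2 in A["delta"].get((p, a), set()):
                add((p2, frozenset(post(S, a, B["delta"]))))
    return True

# toy check: A accepts only "ab"; B accepts any word over {a, b} ending in "b"
A = {"init": {0}, "final": {2}, "delta": {(0, "a"): {1}, (1, "b"): {2}}}
B = {"init": {0}, "final": {1}, "delta": {(0, "a"): {0}, (0, "b"): {0, 1}}}
print(nfa_included(A, B))   # True
```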
Performance optimization of internet firewalls
NASA Astrophysics Data System (ADS)
Chiueh, Tzi-cker; Ballman, Allen
1997-01-01
Internet firewalls control the data traffic in and out of an enterprise network by checking network packets against a set of rules that embodies an organization's security policy. Because rule checking is computationally more expensive than routing-table look-up, it could become a potential bottleneck for scaling up the performance of IP routers, which typically implement firewall functions in software. In this paper, we analyze the performance problems associated with firewalls, particularly packet filters, propose a connection cache to amortize the costly security check over the packets in a connection, and report the preliminary performance results of a trace-driven simulation, which show that the average packet check time can be reduced by a factor of 2.5 at the least.
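A minimal sketch of the caching idea: the full rule scan is paid once per connection (keyed by the 5-tuple), and subsequent packets of the same connection reuse the cached verdict. The rule set, field names and addresses are made-up examples, not the scheme evaluated in the paper.

```python
import ipaddress

RULES = [
    # (action, src network, dst network, protocol, dst port); first match wins
    ("allow", "10.0.0.0/8", "0.0.0.0/0", "tcp", 80),
    ("allow", "10.0.0.0/8", "0.0.0.0/0", "tcp", 443),
    ("deny",  "0.0.0.0/0",  "0.0.0.0/0", None,  None),
]

def check_rules(pkt):
    """Linear scan over the rule list -- the expensive per-packet operation."""
    for action, src, dst, proto, dport in RULES:
        if (ipaddress.ip_address(pkt["src"]) in ipaddress.ip_network(src)
                and ipaddress.ip_address(pkt["dst"]) in ipaddress.ip_network(dst)
                and (proto is None or proto == pkt["proto"])
                and (dport is None or dport == pkt["dport"])):
            return action
    return "deny"

connection_cache = {}   # 5-tuple -> cached verdict

def filter_packet(pkt):
    """Amortise rule checking over a connection: only its first packet pays the full cost."""
    key = (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["proto"])
    if key not in connection_cache:
        connection_cache[key] = check_rules(pkt)
    return connection_cache[key]

pkt = {"src": "10.1.2.3", "sport": 51000, "dst": "93.184.216.34", "dport": 443, "proto": "tcp"}
print(filter_packet(pkt), filter_packet(pkt))   # second call is a cache hit
```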